A Closed Loop Perception Subsystem for small Unmanned Aerial Systems

Veera Venkata Ram Murali Krishna Rao Muvva, Kruttidipta Samal, Justin M. Bradley, Marilyn Wolf
Conference Papers: AIAA Scitech Forum, 2023.

Abstract

The perception subsystem in modern autonomous systems uses convolutional neural networks (CNNs) for their high accuracy. The structure of such networks is static, and there is an inverse relationship between their accuracy and their resource consumption, such as inference speed and energy use. This poses a challenge when designing the perception subsystem for agile autonomous systems, such as small unmanned aerial systems (UAS), that have limited resources while operating in dynamic real-world scenarios. Existing approaches design the perception subsystem for the worst-case scenario, which can lead to inefficient resource utilization that hampers the mission capabilities of such systems. At the same time, it is difficult to predict the worst-case condition at design time, especially for systems that operate in stochastic and dynamic real-world scenarios. We have therefore developed a closed-loop perception subsystem that can dynamically change the structure of its CNN so that accuracy and latency adapt to the requirements of a given scenario. The proposed system was tested on a UAS-UAS tracking scenario. The chasing UAS with the proposed closed-loop perception subsystem tracked the target UAS more accurately than the baseline approach with static CNNs, while consuming less energy. Furthermore, the adaptive latency of the proposed method lets the chasing UAS track the target UAS at higher velocities than baseline methods.
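The abstract does not spell out the switching logic, but the idea of trading accuracy against latency in closed loop can be sketched roughly as follows. This is an illustrative stand-in, not the paper's implementation: the variant table, thresholds, and inputs are all hypothetical.

```python
# Illustrative sketch (not the paper's method): a closed-loop policy that
# switches between CNN variants with different accuracy/latency profiles
# based on the current tracking situation. All numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class CnnVariant:
    name: str
    latency_ms: float   # inference time per frame
    accuracy: float     # detection accuracy (0..1)

# Hypothetical profiles, fastest/least accurate to slowest/most accurate.
VARIANTS = [
    CnnVariant("tiny",   latency_ms=8.0,  accuracy=0.72),
    CnnVariant("medium", latency_ms=20.0, accuracy=0.85),
    CnnVariant("large",  latency_ms=45.0, accuracy=0.93),
]

def select_variant(tracking_error_px: float, target_speed_mps: float) -> CnnVariant:
    """Pick a CNN variant: fast targets need low latency, large tracking
    errors need high accuracy. Thresholds are illustrative only."""
    if target_speed_mps > 5.0:      # fast target: latency dominates
        return VARIANTS[0]
    if tracking_error_px > 50.0:    # losing the target: accuracy dominates
        return VARIANTS[-1]
    return VARIANTS[1]              # nominal operation
```

Run once per control cycle, such a policy lets the same airframe spend energy on accuracy only when the scenario demands it.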

Autonomous UAV Landing on a Moving UAV using Machine Learning

Veera Venkata Ram Murali Krishna Rao Muvva, Guoming Li, Marilyn Wolf
Conference Papers: AIAA Scitech Forum, 2022.

Abstract

Autonomous landing of a UAV (agent UAV) on another UAV (base UAV) is critical to extending UAV manipulation but has not been studied well. The objective of this research was to examine the possibility of the autonomous landing of a UAV on another UAV. The whole landing process was conducted in a simulated environment (AirSim). The Single Shot MultiBox Detector (SSD) network was trained with a customized dataset and embedded into the agent UAV to recognize the base UAV, and landing was manipulated with a proportional-integral-derivative (PID) controller with the aid of the Robot Operating System (ROS). The base UAV was operated at velocities of 1, 2, 3, and 5 km/h along straight-line, square, and ellipse trajectories. The results showed that the SSD network detected and localized the base UAV with over 99% accuracy. The landing was successfully achieved at all four velocities. These results demonstrate the great potential of autonomous landing on moving platforms for future aerial robot applications.
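The detector-plus-PID pipeline described above can be sketched minimally: the PID loop steers the agent UAV toward the center of the base UAV's detected bounding box. This is a hedged illustration, not the paper's code; the gains, frame width, and time step are made-up values.

```python
# Minimal sketch (hypothetical gains, not the paper's controller): a PID
# loop that converts the pixel offset of a detected bounding box into a
# lateral velocity command for the agent UAV.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def lateral_command(bbox_center_x, pid, frame_width=640, dt=0.05):
    """Horizontal velocity command from the detection's pixel offset."""
    error = bbox_center_x - frame_width / 2   # pixels off image center
    return pid.step(error, dt)
```

In the full system one such loop would run per axis, fed by the SSD detections and published as ROS velocity commands.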

Measuring the Perceived Three-Dimensional Location of Virtual Objects in Optical See-Through Augmented Reality

Farzana Alam Khan, Veera Venkata Ram Murali Krishna Rao Muvva, Dennis Wu, Mohammed Safayet Arefin, Nate Phillips, J. Edward Swan II
Conference Papers: IEEE International Symposium on Mixed and Augmented Reality, 2021

Abstract

For optical see-through augmented reality (AR), a new method for measuring the perceived three-dimensional location of virtual objects is presented, where participants verbally report a virtual object location relative to both a vertical and horizontal grid. The method is tested with a small (1.95 × 1.95 × 1.95 cm) virtual object at distances of 50 to 80 cm, viewed through a Microsoft HoloLens 1st generation AR display. Two experiments examine two different virtual object designs, whether turning in a circle between reported object locations disrupts HoloLens tracking, and whether accuracy errors, including a rightward bias and underestimated depth, might be due to systematic errors that are restricted to a particular display. Turning in a circle did not disrupt HoloLens tracking, and testing with a second display did not suggest systematic errors restricted to a particular display. Instead, the experiments are consistent with the hypothesis that, when looking downwards at a horizontal plane, HoloLens 1st generation displays exhibit a systematic rightward perceptual bias. Precision analysis suggests that the method could measure the perceived location of a virtual object within an accuracy of less than 1 mm.

On the Effectiveness of Quantization and Pruning on the Performance of FPGAs-based NN Temperature Estimation

Veera Venkata Ram Murali Krishna Rao Muvva, Martin Rapp, Joerg Henkel, Hussam Amrouch, Marilyn Wolf
Workshop Papers: ACM/IEEE 3rd Workshop on Machine Learning for CAD (MLCAD), 2021 (pp. 1-7). IEEE.

Abstract

A well-functioning on-chip thermal management system requires knowledge of the current temperature and of potential temperature changes in the near future. This information is important for proactive thermal management on the chip. However, the limited number of on-chip sensors makes this task difficult. Hence, we propose a neural-network-based approach to predict the temperature map of the chip. We implemented two different neural networks: a feedforward network and a recurrent neural network. Our proposed method requires only performance-counter measurements to predict the temperature map of the chip at runtime. Both models show promising results for estimating the on-chip temperature map, with the recurrent neural network outperforming the feedforward network. Furthermore, both networks have been quantized and pruned, and the feedforward network has been compiled into FPGA logic. The network could therefore be embedded in the chip, whether it be an ASIC or an FPGA.
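The two compression steps named above, pruning and quantization, can be illustrated on a single weight matrix. This is a generic textbook sketch under simple assumptions (magnitude-based pruning, symmetric linear int8 quantization), not the specific scheme used in the paper.

```python
# Illustrative sketch of the two compression steps: magnitude-based
# pruning followed by symmetric int8 quantization of one weight matrix.
# The example values and sparsity level are hypothetical.
import numpy as np

def prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights until `sparsity` of them are zero."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def quantize_int8(weights: np.ndarray):
    """Symmetric linear quantization to int8; returns (codes, scale)."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

w = np.array([[0.9, -0.05], [0.02, -1.27]])
w_pruned = prune(w, sparsity=0.5)           # the two smallest weights become zero
q, scale = quantize_int8(w_pruned)          # int8 codes plus one float scale
w_restored = q.astype(np.float32) * scale   # dequantized approximation
```

Both steps shrink the memory and arithmetic footprint of the network, which is what makes mapping it into FPGA logic practical.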

A Method for Measuring the Perceived Location of Virtual Content in Optical See Through Augmented Reality

Farzana Alam Khan, Veera Venkata Ram Murali Krishna Rao Muvva, Dennis Wu, Mohammed Safayet Arefin, Nate Phillips, J. Edward Swan
Workshop Papers: IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 2021 Mar 27 (pp. 657-658). IEEE.

Abstract

For optical see-through augmented reality (AR), a new method for measuring the perceived three-dimensional location of a small virtual object is presented, where participants verbally report the virtual object's location relative to both a horizontal and vertical grid. The method is tested with a Microsoft HoloLens AR display, and it examines two different virtual object designs, whether turning in a circle between reported object locations disrupts HoloLens tracking, and whether accuracy errors found with a HoloLens display might be due to systematic errors that are restricted to that particular display. Turning in a circle did not disrupt HoloLens tracking, and a second HoloLens did not suggest systematic errors restricted to a specific display. The proposed method could measure the perceived location of a virtual object to a precision of ~1 mm.

Assuring Learning-Enabled Components in Small Unmanned Aircraft Systems

Veera V. R. M. K. R. Muvva, Justin M Bradley, Marilyn Wolf, Taylor Johnson
Conference Papers: AIAA Scitech Forum, p. 0994, 2021.

Abstract

Aviation has a remarkable safety record ensured by strict processes, rules, certifications, and regulations, in which formal methods have played a role in large companies developing commercial aerospace vehicles and related cyber-physical systems (CPS). This has not been the case for small Unmanned Aircraft Systems (UAS), which are still largely unregulated, uncertified, and not fully integrated into the national airspace. However, emerging UAS missions interact closely with the environment and utilize learning-enabled components (LECs), such as neural networks (NNs), for many tasks. Applying formal methods in this context will enable improved safety and ease the immersion of UAS into the national airspace.

We develop UAS that interact closely with the environment, interact with human users, and require precise plans, navigation, and controllers. They also generally leverage LECs for perception and data collection. However, the impact of ML-based LECs on UAS performance is still an area of research. We have developed an advanced simulator incorporating ML-based perception in highly dynamic situations requiring advanced control strategies to study the impacts of ML-based perception on holistic UAS performance.

In other work, we have developed a WebGME-based software framework called the Assurance-based Learning-enabled CPS (ALC) toolchain for designing CPS that incorporate LECs, including the Neural Network Verification (NNV) formal verification tool. In this paper, we present two key developments: 1) a quantification of the impact of ML-based perception on holistic (physical and cyber) UAS performance, and 2) a discussion of challenges in applying these methods in this environment to guarantee UAS performance under various neural network (NN) strategies, executed at various computational rates, and with vehicles moving at various speeds. We demonstrate that vehicle dynamics, the rate of perception execution, the design of the controller, and the design of the NN all contributed to total vehicle performance.

Automatic Identification of Broiler Mortality using Image Processing Technology

Veera V. R. M. K. R. Muvva, Yang Zhao, Pratik Parajuli, Song Zhang, Tom Tabler, Joseph Purswell
Conference Papers: 10th International Livestock Environment Symposium (ILES X) (p. 1). ASABE, 2018.

Abstract

Identifying dead birds is a time- and labor-consuming task in commercial broiler production. Automatic mortality identification not only helps to save time and labor, but also offers a critical component for autonomous mortality removal systems. The objectives of this study were to 1) investigate the accuracy of automatically identifying dead broilers at two stocking densities through processing thermal and visible images, and 2) delineate the dynamic body surface temperature drops after euthanasia. The tests were conducted on a weekly basis over two 9-week production cycles in a commercial broiler house. A 0.8 m × 0.6 m floor area was fenced using chicken wire to accommodate the experimental broilers. A dual-function camera was installed above the fenced area and simultaneously took thermal and visible videos of the broilers for 20 min at each stocking density. An algorithm was developed to extract pixels of live broilers in thermal images and pixels of all (live and dead) broilers in visible images. The algorithm further detected pixels of dead birds by subtracting the two processed thermal and visible images taken at the same time, and reported the coordinates of the dead broilers. The results show that the accuracy of mortality identification was 90.7% for the regular stocking density and 95.0% for the low stocking density for 5-week-old or younger broilers. The accuracy decreased for older broilers due to smaller body-background temperature gradients and more body interactions among birds. Body surface temperatures also dropped more slowly for older broilers than for younger ones: approximately 1.7 hours were required for a 1-week-old broiler to reach 1°C above the background level, versus over 6 hours for 4-week and 7-week-old broilers.
In conclusion, the system and algorithm developed in this study successfully identified broiler mortalities at promising accuracies for younger birds (less than 5 weeks old), while requiring improvement for older ones.
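The core detection step, subtracting a live-bird mask (warm pixels in the thermal image) from an all-bird mask (bird pixels in the visible image), can be sketched as below. The real algorithm involves more elaborate segmentation; the simple thresholds here are hypothetical.

```python
# Conceptual sketch of the mask-subtraction step described above (the
# study's segmentation is more involved; thresholds are hypothetical):
# live birds are warm blobs in the thermal image, all birds are blobs in
# the visible image, and dead birds are the difference of the two masks.
import numpy as np

def dead_bird_mask(thermal, visible, warm_thresh, bird_thresh):
    """Boolean mask of pixels that look like a bird but are not warm."""
    live = thermal > warm_thresh     # warm pixels -> live birds
    birds = visible > bird_thresh    # bird-like pixels -> all birds
    return birds & ~live

def dead_bird_coords(mask):
    """(row, col) coordinates of dead-bird pixels, for the removal system."""
    return list(zip(*np.nonzero(mask)))
```

The reported accuracy drop for older birds follows directly from this formulation: when the body-background temperature gradient shrinks, the `live` mask becomes unreliable.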

Towards Training an Agent in Augmented Reality World with Reinforcement Learning

Veera Venkata Ram Murali Krishna Rao Muvva, Naresh Adhikari, Amrita D. Ghimire
Conference Papers: 17th International Conference on Control, Automation and Systems (ICCAS), 2017 Oct 18 (pp. 1884-1888). IEEE.

Abstract

Reinforcement learning (RL) helps an agent learn an optimal policy within a specific environment while maximizing its performance. To learn an optimal policy, a robot must go through intensive training, which is not cost-effective in the real world. A cost-effective alternative is to train the agent in a virtual environment so that it learns an optimal policy that can be used in both virtual and real environments to reach the goal state. In this paper, a new method is proposed to train a physical robot to evade a mix of physical and virtual obstacles and reach a desired goal state, using an optimal policy obtained by training the robot in an augmented reality (AR) world with one of the active reinforcement learning techniques, known as Q-learning.
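The Q-learning technique the paper applies can be shown in its textbook form on a toy problem. The 1-D gridworld, rewards, and hyperparameters below are stand-ins for illustration, not the paper's AR environment.

```python
# Textbook Q-learning update on a toy 1-D gridworld, as a sketch of the
# technique named above (states, rewards, and constants are stand-ins).
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                       # move left / right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1    # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward

random.seed(0)
for _ in range(500):                     # training episodes
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < EPSILON \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        best_next = max(Q[(nxt, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

greedy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
```

After training, the greedy policy heads straight for the goal; in the paper's setting, the states and obstacles come from the AR world instead of a grid.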

A Collaborative Filtering Recommender System with Randomized Learning Rate and Regularized Parameter

V. V. R. Murali Krishna Rao Muvva
Conference Papers: IEEE International Conference on Current Trends in Advanced Computing (ICCTAC), 2016 Mar 10 (pp. 1-5). IEEE.

Abstract

Recommender systems using the collaborative filtering approach with machine learning algorithms give well-optimized results. However, selecting an appropriate learning rate and regularization parameter is not an easy task: the RMSE changes from one set of these values to another, and the set that optimizes the RMSE must be found. In this paper, we propose a method to resolve this problem. Our proposed system selects an appropriate learning rate and regularization parameter for the given data.
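The general idea, searching over randomly drawn (learning rate, regularization) pairs and keeping the pair with the lowest RMSE, can be sketched for a small matrix-factorization collaborative filter. This is a hedged illustration under made-up data and search ranges, not the paper's exact procedure.

```python
# Hedged sketch: random search over (learning rate, regularization) for a
# matrix-factorization collaborative filter trained by SGD, keeping the
# pair with the lowest RMSE. The toy ratings and ranges are made up.
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[5, 3, 0], [4, 0, 1], [1, 1, 5]], dtype=float)  # 0 = unrated
observed = [(i, j) for i in range(3) for j in range(3) if R[i, j] > 0]

def train_rmse(lr, reg, k=2, epochs=200):
    """Factorize R ~ P @ Q.T by SGD; return RMSE on observed entries
    (a held-out split would be used in practice)."""
    local = np.random.default_rng(1)
    P, Q = local.normal(0, 0.1, (3, k)), local.normal(0, 0.1, (3, k))
    for _ in range(epochs):
        for i, j in observed:
            err = R[i, j] - P[i] @ Q[j]
            P[i] += lr * (err * Q[j] - reg * P[i])
            Q[j] += lr * (err * P[i] - reg * Q[j])
    errs = [R[i, j] - P[i] @ Q[j] for i, j in observed]
    return float(np.sqrt(np.mean(np.square(errs))))

# Randomized search over the two hyperparameters.
best = min((train_rmse(lr, reg), lr, reg)
           for lr, reg in zip(rng.uniform(0.005, 0.03, 10),
                              rng.uniform(0.001, 0.05, 10)))
```

The `best` tuple holds the lowest RMSE found and the (learning rate, regularization) pair that achieved it.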

A Novel Method to Achieve Optimization in Facial Expression Recognition Using HMM

D. Ngarjuna Devi, M. V. V. R. Murali Krishna Rao
Conference Papers: 2015 International Conference on Signal Processing and Communication Engineering Systems (SPACES), 2015 Jan 2 (pp. 48-52). IEEE.

Abstract

Human-Computer Interaction is an emerging field of Computer Science in which Computer Vision, and especially facial expression recognition, plays an essential role. Among the many approaches to this problem, the HMM is a considerable one. This paper aims to optimize both the number of states used and the runtime time complexity of the HMM. It also aims to enable parallel processing so that more than one image can be processed simultaneously.