Identifying dead birds is a time- and labor-consuming task in commercial broiler production. Automatic mortality identification not only saves time and labor but also provides a critical component for autonomous mortality removal systems. The objectives of this study were to 1) investigate the accuracy of automatically identifying dead broilers at two stocking densities by processing thermal and visible images, and 2) delineate the dynamic body surface temperature drop after euthanasia. Tests were conducted weekly over two 9-week production cycles in a commercial broiler house. A 0.8 m × 0.6 m floor area was fenced with chicken wire to accommodate the experimental broilers. A dual-function camera installed above the fenced area simultaneously recorded thermal and visible videos of the broilers for 20 min at each stocking density. An algorithm was developed to extract the pixels of live broilers in thermal images and the pixels of all (live and dead) broilers in visible images. The algorithm then detected dead-bird pixels by subtracting the two processed images taken at the same time and reported the coordinates of the dead broilers. The results show that the accuracy of mortality identification was 90.7% at the regular stocking density and 95.0% at the low stocking density for broilers 5 weeks old or younger. Accuracy decreased for older broilers because of smaller body-background temperature gradients and more body contact among birds. Body surface temperature dropped more slowly for older broilers than for younger ones: it took approximately 1.7 h for a 1-week-old broiler to reach 1°C above the background level, but over 6 h for 4-week-old and 7-week-old broilers.
In conclusion, the system and algorithm developed in this study identified broiler mortalities with promising accuracy for younger birds (less than 5 weeks old) but require improvement for older ones.
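The thermal/visible subtraction step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the temperature threshold, and the use of a centroid as the reported coordinate are all assumptions.

```python
import numpy as np

def find_dead_bird_pixels(thermal, visible_mask, temp_threshold):
    """Hypothetical sketch of the subtraction step: live birds appear as
    warm pixels in the thermal image, while the visible image segments all
    birds (live and dead).  Birds present in the visible mask but not warm
    in the thermal image are taken as mortalities.

    thermal        : 2-D array of surface temperatures (°C)
    visible_mask   : boolean 2-D array, True where any bird was segmented
    temp_threshold : temperature above which a pixel counts as a live bird
    """
    live_mask = thermal > temp_threshold      # warm pixels = live birds
    dead_mask = visible_mask & ~live_mask     # bird present but not warm
    ys, xs = np.nonzero(dead_mask)
    if len(xs) == 0:
        return dead_mask, None
    # report the centroid of the dead-bird pixels as a coordinate
    return dead_mask, (float(xs.mean()), float(ys.mean()))
```

For example, a warm 2×2 region (a live bird) plus one cool segmented pixel yields a single dead-bird detection at that pixel's coordinates.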
Reinforcement learning (RL) trains an agent to accomplish a specific task in an environment while maximizing its performance. To learn an optimal policy, a robot must undergo intensive training, which is not cost-effective in the real world. A cost-effective alternative is to train the agent in a virtual environment so that the learned optimal policy can be used in both virtual and real environments to reach the goal state. In this paper, a new method is proposed to train a physical robot to avoid a mix of physical and virtual obstacles and reach a desired goal state, using an optimal policy obtained by training the robot in an augmented reality (AR) world with Q-learning, an active reinforcement learning technique.
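Tabular Q-learning of this kind can be sketched on a toy grid world with one obstacle. The grid size, reward values, and hyperparameters below are illustrative assumptions, not the paper's setup.

```python
import random

def train_q_grid(episodes=3000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Minimal Q-learning sketch: 3x3 grid, start (0,0), goal (2,2),
    obstacle (1,1).  All names and reward values are illustrative."""
    random.seed(seed)
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
    goal, obstacle = (2, 2), (1, 1)
    Q = {((r, c), a): 0.0 for r in range(3) for c in range(3) for a in range(4)}

    def step(s, a):
        dr, dc = actions[a]
        ns = (min(max(s[0] + dr, 0), 2), min(max(s[1] + dc, 0), 2))
        if ns == goal:
            return ns, 10.0, True      # reward for reaching the goal
        if ns == obstacle:
            return ns, -5.0, False     # penalty for hitting the obstacle
        return ns, -1.0, False         # small step cost

    for _ in range(episodes):
        s, done = (0, 0), False
        while not done:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(4)
            else:
                a = max(range(4), key=lambda x: Q[(s, x)])
            ns, r, done = step(s, a)
            # Q-learning update: bootstrap on the best next-state value
            best = max(Q[(ns, x)] for x in range(4))
            Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])
            s = ns
    return Q, step

def greedy_path(Q, step, start=(0, 0), max_steps=8):
    """Follow the learned greedy policy from the start state."""
    s, path = start, [start]
    for _ in range(max_steps):
        a = max(range(4), key=lambda x: Q[(s, x)])
        s, _, done = step(s, a)
        path.append(s)
        if done:
            break
    return path
```

After training, the greedy policy routes around the penalized cell on its way to the goal, which is the behavior the paper's obstacle-avoidance setting relies on.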
Recommender systems based on collaborative filtering with machine learning algorithms give better optimized results. However, selecting an appropriate learning rate and regularization parameter is not an easy task: the RMSE changes from one set of these values to another, and the set of parameters that minimizes the RMSE must be found. In this paper we propose a method to resolve this problem. Our system selects an appropriate learning rate and regularization parameter for the given data.
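The selection problem described above can be illustrated with a grid search over (learning rate, regularization) pairs for a tiny SGD matrix-factorization model, scored by validation RMSE. This is a generic sketch under assumed names and hyperparameters; the abstract does not specify the paper's actual selection method or model.

```python
import numpy as np

def mf_rmse(R_train, R_val, lr, reg, k=2, epochs=50, seed=0):
    """Validation RMSE of a tiny SGD matrix factorization for one
    (learning rate, regularization) pair.  Zero entries are unobserved."""
    rng = np.random.default_rng(seed)
    n_u, n_i = R_train.shape
    P = 0.1 * rng.standard_normal((n_u, k))   # user factors
    Qm = 0.1 * rng.standard_normal((n_i, k))  # item factors
    obs = [(u, i) for u in range(n_u) for i in range(n_i) if R_train[u, i] > 0]
    for _ in range(epochs):
        for u, i in obs:
            e = R_train[u, i] - P[u] @ Qm[i]
            P[u] += lr * (e * Qm[i] - reg * P[u])
            Qm[i] += lr * (e * P[u] - reg * Qm[i])
    mask = R_val > 0
    pred = P @ Qm.T
    return float(np.sqrt(np.mean((R_val[mask] - pred[mask]) ** 2)))

def select_hyperparams(R_train, R_val, lrs, regs):
    """Grid search: return the (lr, reg) pair with lowest validation RMSE."""
    return min(((lr, reg) for lr in lrs for reg in regs),
               key=lambda p: mf_rmse(R_train, R_val, *p))
```

A grid search is the simplest baseline for this selection; the paper presumably improves on picking the values by hand.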
Human-Computer Interaction is an emerging field of Computer Science in which Computer Vision, and especially facial expression recognition, plays an essential role. Among the many approaches to this problem, the HMM is a notable one. This paper aims to optimize both the number of states used and the runtime complexity of the HMM. It also enables parallel processing so that more than one image can be processed simultaneously.
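The core HMM computation in this kind of recognizer is the forward algorithm: each expression class gets its own HMM, and a sequence is assigned to the class whose model gives the highest likelihood. The abstract gives no model details, so the following is a generic scaled forward-algorithm sketch for a discrete HMM, with pi, A, and B as assumed parameter names.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of an observation sequence under a discrete HMM,
    computed with the scaled forward algorithm (generic sketch).

    obs : sequence of symbol indices (e.g. quantized facial-feature codes)
    pi  : (N,) initial state probabilities
    A   : (N, N) state transition probabilities
    B   : (N, M) emission probabilities
    """
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    logp = np.log(c)
    alpha = alpha / c                      # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]      # propagate and emit
        c = alpha.sum()
        logp += np.log(c)
        alpha = alpha / c
    return logp
```

Because each expression class is scored independently, the per-class likelihood evaluations are embarrassingly parallel, which is one natural place for the parallel processing the paper targets.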