Manish Kumar, Ph.D.
Department of Mechanical, Industrial & Manufacturing Engineering
University of Toledo, Ohio
Manish Kumar received his Bachelor of Technology degree in Mechanical Engineering from the Indian Institute of Technology, Kharagpur, India in 1998, and his M.S. and Ph.D. degrees in Mechanical Engineering from Duke University, NC, USA in 2002 and 2004, respectively. After finishing his Ph.D., he worked as a postdoctoral research associate in the Department of Mechanical Engineering and Materials Science at Duke University from 2004 to 2005. In 2005, he received the Research Associateship Award from the National Research Council (NRC). This award allowed him to work as a postdoctoral Research Associate with the Army Research Office, NC, USA from 2005 to 2007. As part of his NRC Associateship program, he was a visiting scholar at the General Robotics, Automation, Sensing, and Perception (GRASP) Laboratory at the University of Pennsylvania, PA, USA. Subsequently, he worked as an Assistant Professor in the School of Dynamic Systems at the University of Cincinnati, OH, USA, where he directed the Cooperative Distributed Systems (CDS) Laboratory and co-directed the Center for Robotics Research. He is currently an Associate Professor in the Department of Mechanical, Industrial, and Manufacturing Engineering at the University of Toledo, OH, USA. His current research interests include complex systems, decision-making and control in large-scale systems, development of novel techniques to fuse data from multiple sources, robotics, swarm systems, and multiple-robot coordination and control. He is a member of the American Society of Mechanical Engineers.
Vision Based Tracking of Human Gait using Kinematic Models
Human gait tracking holds tremendous potential in a wide range of applications, including surveillance systems in public places and monitoring of the elderly in their homes. However, it remains an extremely challenging problem from the perspectives of sensing and data processing. Tracking human gait typically involves obtaining temporal positional information for selected points on the human body; which points are chosen depends on the kind of motion being analyzed. The positions of the selected points can be obtained using vision cameras or by wearing visual, radio-frequency, or inertial-sensor-based trackables. While the latter requires wearing special devices, the former remains the most passive and non-intrusive way to gather data. The motivation for using vision sensors also arises from the widespread use of this technology in monitoring and surveillance. The primary challenge, however, is to analyze the data received from a vision-based sensor system, to interpret the data meaningfully, and to carry out real-time estimation of the configuration that best describes the motion of the body being tracked. This challenge originates primarily from the substantial uncertainty inherent in computer-vision-based tracking, caused by factors such as occlusion, improper analysis of images, or environmental ambiguities. In this talk, we explore the idea of exploiting the fact that the human body can be modeled as a kinematic system of linkages with fixed lengths. The kinematic model is used to obtain the motion dynamics, which are then used to develop a filtering scheme that helps reduce some of the uncertainties mentioned above. The temporal tracking data of human gait can be further used to identify motion patterns such as walking, falling, or sit-to-stand actions.
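To give a flavor of the fixed-length linkage idea, the sketch below models a single leg as a planar two-link chain (thigh and shank) and fuses noisy vision-derived knee and ankle positions with the kinematic constraint via an extended Kalman filter. This is an illustrative sketch, not the method presented in the talk: the link lengths, the constant-angular-velocity motion model, the noise parameters, and all function names are assumptions chosen for clarity.

```python
import numpy as np

# Illustrative link lengths (thigh, shank) in metres and camera frame period.
L1, L2 = 0.45, 0.43
dt = 1.0 / 30.0

def forward_kinematics(theta):
    """Knee and ankle positions of a planar two-link chain rooted at the hip.

    Angles are measured from the downward vertical; returns [knee_x, knee_y,
    ankle_x, ankle_y]. The fixed lengths L1, L2 encode the rigid-body constraint."""
    t1, t2 = theta
    knee = np.array([L1 * np.sin(t1), -L1 * np.cos(t1)])
    ankle = knee + np.array([L2 * np.sin(t1 + t2), -L2 * np.cos(t1 + t2)])
    return np.concatenate([knee, ankle])

def jacobian(theta):
    """Analytic Jacobian of forward_kinematics with respect to the joint angles."""
    t1, t2 = theta
    s1, c1 = np.sin(t1), np.cos(t1)
    s12, c12 = np.sin(t1 + t2), np.cos(t1 + t2)
    return np.array([
        [L1 * c1,            0.0],
        [L1 * s1,            0.0],
        [L1 * c1 + L2 * c12, L2 * c12],
        [L1 * s1 + L2 * s12, L2 * s12],
    ])

def ekf_step(x, P, z, Q, R):
    """One predict/update cycle of an extended Kalman filter.

    State x = [theta1, theta2, omega1, omega2] (joint angles and rates);
    z = noisy knee and ankle positions from the vision system."""
    # Predict: constant-angular-velocity motion model.
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: linearise the fixed-length kinematic measurement model at x.
    H = np.zeros((4, 4))
    H[:, :2] = jacobian(x[:2])
    y = z - forward_kinematics(x[:2])      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

Because the measurement model is built from the fixed-length forward kinematics, the filter can only move its estimate along configurations the linkage can actually reach, which is precisely how the kinematic constraint suppresses uncertainty from occlusion or noisy image analysis.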