Hansajeewa, K.H.; Ekanayake, R.M.T.C.B.; Wijesooriya, P.N. (2013)
http://www.erepo.lib.uwu.ac.lk/bitstream/handle/123456789/8539/06-SCT-Kinect%20Human%20Follower%20.pdf?sequence=1&isAllowed=y

In the near future, people will need their own personal robots for day-to-day work. With this trend in education, entertainment and security, especially in defence, the market for robots will be enormous (Saccoccio & Taleb, 2013). The range camera in Microsoft's Kinect, originally intended for the Xbox 360 gaming console, offers a powerful alternative to many standard sensors used in robotics for gathering spatial information about a robot's surroundings (Schwab, 2011). The Kinect is the first commercially available product to provide depth data of this resolution and accuracy within the reach of many robotics projects. This project aims to design and construct a mobile autonomous robot that can navigate independently, taking gesture commands through a Kinect sensor, in a very cost-effective way suited to home applications (Viager, 2011).

Methodology
The project is divided into two parts: hardware implementation and software implementation. The platform provides the robot's mobility and serves as its base (Schwab, 2011); all other components are mounted on the base. The steering consists of two links: one link carries a medium caster wheel, and the other carries two wheels driven by two 12 V DC motors. The circuit design consists of a microcontroller circuit with a MAX232 for serial communication, and the motor-control circuit was constructed using an H-bridge. The system software can be broken down into several components that roughly correspond to the hardware components. OpenNI and NITE from PrimeSense were downloaded and installed, and Visual Studio 2008 was installed for further development with the Kinect sensor.
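The PC-to-microcontroller link and H-bridge drive described above can be sketched as a simple byte protocol. This is a minimal illustration only: the command codes, the `decode` helper, and the H-bridge pin pairings are assumptions for the sketch, not the project's actual firmware (the abstract does not specify them), and the sketch is in Python rather than the project's Visual Studio 2008 environment.

```python
# Hypothetical single-byte drive commands sent from the PC over the
# MAX232/RS-232 link to the microcontroller. The codes below are
# illustrative assumptions, not the project's documented protocol.
FORWARD, REVERSE, LEFT, RIGHT, STOP = 0x01, 0x02, 0x03, 0x04, 0x00

# Each motor's H-bridge takes an (IN1, IN2) input pair:
# (1, 0) = forward, (0, 1) = reverse, (0, 0) = stop/coast.
# Turning is done differentially with the two drive wheels.
H_BRIDGE = {
    FORWARD: ((1, 0), (1, 0)),  # both motors forward
    REVERSE: ((0, 1), (0, 1)),  # both motors reverse
    LEFT:    ((0, 1), (1, 0)),  # left motor back, right motor forward
    RIGHT:   ((1, 0), (0, 1)),  # left motor forward, right motor back
    STOP:    ((0, 0), (0, 0)),  # both motors stopped
}

def decode(cmd: int):
    """Map a received command byte to (left, right) H-bridge pin states.

    Any unrecognised byte falls back to STOP, a safe default for a
    mobile robot.
    """
    return H_BRIDGE.get(cmd, H_BRIDGE[STOP])
```

On the PC side the chosen byte would be written to the serial port (for example with pySerial's `Serial.write`), with the MAX232 shifting the microcontroller's UART levels to RS-232.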
Results and Discussion
The overall goal of this project was to identify gesture commands and to evaluate the Kinect's ability to provide sufficient sensory information for basic navigation. A first program was tested to gauge the capability of the device: an "invisible pen" was created using the hand, and the procedure of drawing lines with this invisible pen completed successfully. The system was then tested on depth imaging. The Kinect was able to produce a depth image in which closer objects appear darker and objects further away appear lighter. Figure 1 depicts the Kinect depth data represented as an image. Also of importance, there are noticeable black regions in this image corresponding to areas the Kinect cannot resolve. To capture gesture commands, PrimeSense NITE must be linked with OpenNI; Visual Studio 2008 was unable to link NITE with OpenNI, so the Kinect could not perform gesture commands. The motors operated successfully over long periods and delivered good torque to the wheels for carrying heavier loads, and motor control of the robot was successfully actuated. The circuit prepared for serial communication also operated successfully, communicating between the PC and the microcontroller.

Language: en. Subjects: Science and Technology; Technology; Computer Science; Robotics; Human Robotics. Title: Kinect Human Follower. Series: Research Symposium 2013. Type: Other.
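The depth-image behaviour reported in the results (closer objects darker, further objects lighter, unmeasurable regions black) can be illustrated with a short sketch. The mapping below is an assumption for illustration: the 4 m cap approximates the Kinect's practical sensing range, and a reading of 0 is treated as "no measurement", as in the sensor's raw output.

```python
def depth_to_gray(depth_mm: int, max_mm: int = 4000) -> int:
    """Map a raw depth reading in millimetres to an 8-bit grey level.

    A reading of 0 means the Kinect could not resolve that pixel, which
    produces the black regions visible in Figure 1. Otherwise nearer
    points map to darker values and farther points to lighter ones.
    The 4 m maximum range is an illustrative assumption.
    """
    if depth_mm <= 0:
        return 0                        # unresolved pixel -> black
    d = min(depth_mm, max_mm)           # clamp to the assumed max range
    return round(255 * d / max_mm)      # near = dark, far = light
```

Applying this function per pixel to a raw depth frame yields a greyscale image like the one described in the results.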