2. Related Work

2.1. Pose Recognition Using Depth Cameras

Depth cameras are systems that can build a 3D depth map of a scene by projecting light onto that scene. The principle is similar to that of Laser Interferometry Detection and Ranging (LIDAR) scanners, the difference being that the latter are only capable of performing a 2D scan of the scene, while depth cameras scan the whole scene at once. Depth cameras are an attractive tool in many fields that require intensive analysis of the 3D environment. Two surveys thoroughly describe the field. The first one [8] dates from 2009 and surveys the technologies and applications before the release of the Kinect sensor. This sensor revolutionized the field by making a high-resolution and high-precision technology available at consumer prices. A more recent survey (2013), more focused on algorithms for body-motion analysis, is presented in [9]. Nevertheless, the idea of employing depth cameras for body analysis is not recent. For example, references [10,11] propose their use to locate body parts. Since then, many other works have researched gesture recognition with depth cameras [12–16]. Some of these works rely on kinematic models to track human gestures once the body is detected [17–19]. Most of these works rely on capturing only one or a few parts of the body. However, recent kinematic approaches, such as the one in [18], make tracking of the whole body possible without a significant increase in CPU consumption. Schwarz et al. [20] propose a method to estimate the full body by transforming the foreground depth image into a point cloud. Then, they determine the centroid of this point cloud and find the main landmarks by calculating the geodesic distance along the 3D body mesh. Shotton et al. [21] are the authors of the human pose estimation technology used in the Xbox.
They proposed a skeleton model, where the joints are fitted to previously labeled body parts using mean shift. Our approach focuses on teaching concepts interactively to a social robot. Therefore, rather than extracting the body and tracking it directly, it uses the available technologies and algorithms as data sources, which are used to enable the grounding of high-level concepts, such as the name of a certain pose. Concretely, our vision system relies on the OpenNI (NI stands for Natural Interaction) [22] libraries for body extraction and tracking. OpenNI's skeleton tracking is similar to the approaches described above.

2.2. Machine Learning in Human–Robot Interactions

Fong et al. present a survey [23] on the interactions between humans and social robots in which the authors stress that the main goal of learning in social robotics is to improve the interaction experience. At the time of the survey (2003), most of the learning applications were employed in robot–robot interaction. Some works addressed the issue of learning in human–robot interaction, mainly focusing on imitating human behaviors, such as motor primitives. According to the authors, learning in social robots is used for transferring skills, tasks and information to the robot. However, the authors do not mention the use of learning for transferring concepts, such as poses, that allow the robot to know the user better.
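To illustrate the geodesic-landmark idea attributed above to Schwarz et al. [20], the following is a minimal sketch, not their implementation: foreground points are connected into a neighborhood graph, and body extremities are taken as the points with the largest geodesic (graph-shortest-path) distance from the centroid. All function names, parameters and data here are illustrative assumptions.

```python
# Hypothetical sketch of geodesic landmark detection on a foreground
# point cloud (in the spirit of Schwarz et al. [20]); names and data
# are illustrative, not the authors' code.
import heapq
import math

def geodesic_landmarks(points, neighbor_radius=1.5, n_landmarks=1):
    """points: list of (x, y, z) tuples from a foreground point cloud."""
    n = len(points)
    # Centroid of the point cloud.
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    # Start the geodesic search at the point closest to the centroid.
    start = min(range(n), key=lambda i: math.dist(points[i], (cx, cy, cz)))
    # Connectivity graph: edges between points closer than neighbor_radius.
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(points[i], points[j])
            if d <= neighbor_radius:
                adj[i].append((j, d))
                adj[j].append((i, d))
    # Dijkstra: geodesic distance along the body surface graph.
    geo = [math.inf] * n
    geo[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > geo[u]:
            continue
        for v, w in adj[u]:
            if d + w < geo[v]:
                geo[v] = d + w
                heapq.heappush(heap, (d + w, v))
    # Extremities = points geodesically farthest from the centroid.
    order = sorted(range(n), key=lambda i: -geo[i])
    return [points[i] for i in order[:n_landmarks]]

# Toy "body": a straight chain of points; the landmark is an endpoint.
chain = [(float(i), 0.0, 0.0) for i in range(10)]
print(geodesic_landmarks(chain))  # -> [(9.0, 0.0, 0.0)]
```

On a real depth frame the same idea would run on the segmented foreground converted to 3D points; geodesic rather than Euclidean distance is what keeps a hand resting on the hip distinguishable from the torso.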