Kinect v2 Avateering

News

Posted by Extraordinary Robot (joined Jun 27, 2006; Chicago, IL)
Peter Daukintis, Friend of the Gallery, has posted another great example of using the Kinect v2, this time using its joint-orientation capabilities to start an avateering journey...

Here are some of Peter's other posts we've highlighted recently:

Avateering with Kinect V2 – Joint Orientations


For my own learning, I wanted to understand the process of using the Kinect V2 to drive the real-time movement of a character made in 3D modelling software. This post covers the first part of that learning: taking the joint-orientation data provided by the Kinect SDK and using it to position and rotate ‘bones’, which I will represent by rendering cubes, since this is a very simple way to visualise the data. (I won’t cover smoothing the data or modelling/rigging in this post.) The result should be something similar to the Kinect Evolution Block Man demo, which can be found via the Kinect SDK browser.



To follow along you will need a working Kinect V2 sensor with its USB adapter, a fairly high-specced machine running Windows 8.0/8.1 with USB 3.0 and a DirectX 11-compatible GPU, and the Kinect V2 SDK installed. Here are some instructions for setting up your environment.

To back up a little, there are two main ways to represent body data from the Kinect. The first is to use the absolute positions provided by the SDK, which are values in 3D camera space measured in metres; the other is to use the joint-orientation data to rotate a hierarchy of bones. The latter is the one we will look at here. There is an advantage in using joint orientations: as long as your model has the same overall skeleton structure as the Kinect data, it doesn’t matter so much what the relative sizes of the bones are, which frees up the modelling constraints. The SDK has already done the job of calculating the rotations from the absolute joint positions for us, so let’s explore how we can apply those orientations in code.

Code


I am going to start with the DirectX and XAML C++ template in Visual Studio, which provides a basic DirectX 11 environment with XAML integration, basic shaders and a cube model described in code ...

Body Data


Let’s start by getting the body data into our program from the sensor. As always, we start by getting a KinectSensor object, which I will initialise in the Sample3DSceneRenderer class constructor; then we open a BodyFrameReader on the BodyFrameSource, for which there is a handy property on the KinectSensor object. ...

Kinect Joint Hierarchy


The first subject to consider is how the Kinect joint hierarchy is constructed as it is not made explicit in the SDK. Each joint is identified by one of the following enum values:...

Bones


To draw each separate bone I modified the cube model supplied with the default project template, changing its coordinates so that one end sits at the origin and the other is 4 units along the y-axis; so when rendered ...

...

...this shows the end result:



Project Information URL: http://peted.azurewebsites.net/avateering-with-kinect-v2-joint-orientations/

Project Source URL: https://github.com/peted70/kinectv2-avateer-jointorientations

Contact Information:


Follow @CH9
Follow @Coding4Fun
Follow @KinectWindows
Follow @gduncan411


