Friend of the Gallery and Kinect MVP Vangos Pterneas is back with a great and detailed post on developing with the HD Face API.
Some of the other times we've highlighted Vangos Pterneas's work:
- Just face it... with these two Kinect for Windows v2 SDK Face Examples
- Kinetisense and Kinect app development the right way
- Looking at the Kinect for Windows v2
- Color, depth and infrared streams in the Kinect for Windows v2 world (here's how)
- Kinect gestures, an implementation walk through
- Put down the measuring tape. Using the Kinect to find your height
- Body Tracking with the Kinect for Windows v2
- Kinect for Windows v2 will make you green (with envy at its background removal features)
In my previous article, I demonstrated how you can access the 2D positions of the eyes, nose, and mouth using Microsoft’s Kinect Face API. The Face API provides some basic, yet impressive, functionality: we can detect the X and Y coordinates of a handful of facial points and identify a few facial expressions with just a few lines of C# code. This is pretty cool for simple applications, like Augmented Reality games, but what if your app needs more advanced functionality?
Recently, we decided to extend our Kinetisense project with advanced facial capabilities. More specifically, we needed access to more facial points, including the lips, jaw, and cheeks. Moreover, we needed the X, Y, and Z position of each point in 3D space. The Kinect Face API could not help us, since it was too limited for our scope of work.
Thankfully, Microsoft has implemented a second Face API within the latest Kinect for Windows SDK v2. This API is called HD Face, and it is designed to blow your mind!
At the time of writing, HD Face is the most advanced face-tracking library out there. Not only does it detect the human face, but it also lets you access over 1,000 facial points in 3D space. In real time. Within a few milliseconds. Not convinced? I developed a basic program that displays all of these points. Creepy, huh?!
In this article, I am going to show you how to access all these points and display them on a canvas. I’ll also show you how to use Kinect HD Face efficiently and get the most out of it.
Prerequisites
- Kinect for XBOX One sensor with an adapter (or Kinect for Windows v2 sensor)
- Kinect for Windows SDK v2
- Windows 8.1 or higher
- Visual Studio 2013 or higher
- A dedicated USB 3 port
Tutorial
Although Kinect HD Face is truly powerful, you’ll notice that it is poorly documented, too. The sparse documentation makes it hard to understand what is going on inside the API. This is largely because HD Face is meant to provide advanced, low-level functionality: it gives us access to raw facial data, and we, the developers, are responsible for properly interpreting that data and using it in our applications. Let me guide you through the whole process.
Step 1: Create a new project
Let’s start by creating a new project. Launch Visual Studio and select File -> New Project. Select C# as your programming language and choose either the WPF or the Windows Store app template. Give your project a name and start coding.
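To make the starting point concrete, here is a minimal sketch of the kind of initialization the tutorial builds toward, assuming the standard Microsoft.Kinect and Microsoft.Kinect.Face references from the SDK. It is an illustration of the usual HD Face setup, not the author’s exact code (see the source link below for that):

```csharp
// Minimal HD Face setup sketch (illustrative, not the author's exact code).
// Assumes references to Microsoft.Kinect and Microsoft.Kinect.Face.
using Microsoft.Kinect;
using Microsoft.Kinect.Face;

public class FaceTracker
{
    private KinectSensor _sensor;
    private BodyFrameReader _bodyReader;
    private HighDefinitionFaceFrameSource _faceSource;
    private HighDefinitionFaceFrameReader _faceReader;
    private FaceModel _faceModel;
    private FaceAlignment _faceAlignment;

    public void Start()
    {
        _sensor = KinectSensor.GetDefault();

        // HD Face only tracks a face after it is given a body TrackingId,
        // so we also listen for bodies.
        _bodyReader = _sensor.BodyFrameSource.OpenReader();
        _bodyReader.FrameArrived += BodyReader_FrameArrived;

        _faceSource = new HighDefinitionFaceFrameSource(_sensor);
        _faceReader = _faceSource.OpenReader();
        _faceReader.FrameArrived += FaceReader_FrameArrived;

        _faceModel = new FaceModel();
        _faceAlignment = new FaceAlignment();

        _sensor.Open();
    }

    private void BodyReader_FrameArrived(object sender, BodyFrameArrivedEventArgs e)
    {
        using (var frame = e.FrameReference.AcquireFrame())
        {
            if (frame == null) return;

            var bodies = new Body[frame.BodyCount];
            frame.GetAndRefreshBodyData(bodies);

            // Attach the face source to the first tracked body.
            foreach (var body in bodies)
            {
                if (body.IsTracked)
                {
                    _faceSource.TrackingId = body.TrackingId;
                    break;
                }
            }
        }
    }

    private void FaceReader_FrameArrived(object sender, HighDefinitionFaceFrameArrivedEventArgs e)
    {
        using (var frame = e.FrameReference.AcquireFrame())
        {
            if (frame == null || !frame.IsFaceTracked) return;

            // Update the alignment with the latest face position and orientation.
            frame.GetAndRefreshFaceAlignmentResult(_faceAlignment);

            // Over 1,000 vertices, each a CameraSpacePoint with X, Y, Z in meters.
            var vertices = _faceModel.CalculateVerticesForAlignment(_faceAlignment);
        }
    }
}
```

The detail worth noticing is the two readers: HD Face produces no data until the face source has been handed a TrackingId from body tracking, which is why the body reader is part of even the most basic setup.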
...
But wait!
OK, we drew the points on screen. So what? Is there a way to actually understand what each point is? How can we identify where the eyes are? How can we detect the jaw? The API has no built-in mechanism to get a human-friendly representation of the face data. We need to handle over 1,000 points in 3D space manually!
Don’t worry, though. Each one of the vertices has a specific index number. Knowing the index number, you can easily deduce which facial feature it corresponds to. For example, vertex numbers 1086, 820, 824, 840, 847, 850, 807, 782, and 755 belong to the left eyebrow.
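As an illustration of how those indices are used in practice, here is a hypothetical helper (the index list comes from the paragraph above; the helper itself is not part of the SDK) that pulls the left-eyebrow vertices out of the list returned by FaceModel.CalculateVerticesForAlignment:

```csharp
// Illustrative helper, not part of the SDK: picks the left-eyebrow vertices
// out of the full HD Face vertex list by the indices quoted in the article.
using System.Collections.Generic;
using Microsoft.Kinect;

public static class FaceFeatures
{
    // Left-eyebrow vertex indices, as listed in the article.
    private static readonly int[] LeftEyebrowIndices =
        { 1086, 820, 824, 840, 847, 850, 807, 782, 755 };

    public static IEnumerable<CameraSpacePoint> LeftEyebrow(
        IReadOnlyList<CameraSpacePoint> vertices)
    {
        foreach (int index in LeftEyebrowIndices)
        {
            yield return vertices[index]; // X, Y, Z in camera space (meters)
        }
    }
}
```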
Similarly, you can find accurate semantics for every point. Just play with the API, experiment with its capabilities and build your own next-gen facial applications!
If you wish, you can use the Color, Depth, or Infrared bitmap generator to display the camera view behind the face. Keep in mind that rendering the bitmap and the face simultaneously may cause performance issues in your application. So, handle with care and do not overuse your resources.
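If you do overlay the face points on the color feed, remember that the HD Face vertices live in 3D camera space, so each one must be mapped to color-pixel coordinates before drawing. A minimal sketch using the SDK’s CoordinateMapper (the helper name is mine):

```csharp
// Minimal sketch: project an HD Face vertex (camera space, meters) onto
// the color frame (pixels) so it lines up with the color bitmap.
using System.Windows;   // WPF Point
using Microsoft.Kinect;

public static class VertexProjection
{
    public static Point ToColorSpace(KinectSensor sensor, CameraSpacePoint vertex)
    {
        ColorSpacePoint colorPoint =
            sensor.CoordinateMapper.MapCameraPointToColorSpace(vertex);

        // Unmappable vertices come back as infinity; callers should skip those.
        return new Point(colorPoint.X, colorPoint.Y);
    }
}
```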
Project Information URL: http://pterneas.com/2015/06/06/kinect-hd-face/
Project Source URL: https://github.com/Vangos/kinect-2-face-hd
Contact Information:
- Blog: http://pterneas.com
Continue reading...