Windows 7 Jellybean, the Kinect Drivable Lounge Chair

You saw it at Mix. In typical fashion, our mission was to build two Jellybean robots in three weeks for the Mix keynote (no pressure, right?), and now it's time to introduce Project Jellybean on Coding4Fun. So, here it is: the Kinect drivable lounge chair! The lounge chair has omni-directional wheels, eight batteries, two motor controllers, and a frame made of extruded aluminum.
Jellybean is a proof of concept of the crazy things that are possible with the Kinect for Windows SDK, and the project leverages the Coding4Fun Kinect Toolkit to handle some of the more complex operations.
Before we get into the code, let me point out: THIS WILL WORK WITHOUT THE ROBOT. There is an application setting called IsMotorEnabled; with it set to false, you can play with the user interface and see how we did all our Kinect-enabled goodness.
The screenshot at the bottom is of me testing this puppy at my desk without any of the motors or relays connected.
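For example, here's a minimal sketch of how that kind of kill switch can work, assuming the flag lives in app.config as a standard appSettings entry. Only the IsMotorEnabled name comes from the project; the MotorSafety helper and the usage below it are hypothetical:

using System.Configuration; // requires a reference to System.Configuration

public static class MotorSafety
{
    // Reads IsMotorEnabled from app.config; anything missing or unparsable counts as "off".
    public static bool IsMotorEnabled
    {
        get
        {
            bool enabled;
            return bool.TryParse(ConfigurationManager.AppSettings["IsMotorEnabled"], out enabled)
                   && enabled;
        }
    }
}

// Hypothetical usage inside the drive loop:
// if (MotorSafety.IsMotorEnabled)
//     _jellybean.Drive(throttle, vectorMultiplier);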

[h=3]Overview[/h]There are five total projects in the C# solution, and Jellybean is broken down into four big parts:
  • Hardware
  • Robot Software
  • Kinect Software
  • User Interface

[h=3]Hardware[/h]A lot of the hardware is pretty straightforward and can be gleaned from the part list and the wiring diagram. Larry Larsen has a video of me building out the robot and explaining some of the hardware, both during the construction and at the actual event.

WARNING
The motors are extremely powerful—everything is very heavy and there is a lot of power in the batteries. Be careful. The wheels easily catch on shoelaces and headphone cords, etc.
With other projects, such as the t-shirt cannon from last Mix, charging meant disconnecting a rather large number of wires, which risked short-circuiting the entire project. Jellybean, however, is wired to make charging a lot easier: the solution below lets me charge the robot just by flipping four heavy-duty switches to the off position. The wiring diagram is also included in the source code as a Visio file called "WiringDiagram.vsd" in the "Files" directory:


[h=4]Wiring Up the Chair and Relay[/h]I decided to pick a chair that was already electric and just tap into the existing switches, so I mimicked the chair's "stock" wiring. You'll have to alter this design depending on how your chair is set up.

[h=4]Wiring, Wire Management, and Easy Access[/h]Another lesson I learned from the cannon project was to make sure the wiring is nice and easy to get to, so the project doesn't have to be half disassembled whenever I want to reach an individual connection.
To ensure a solid connection, every wire was crimped and soldered with ring connectors. I didn't want any chance of a wire coming loose. As you can see, the left and right wiring harnesses are pretty much exact clones of each other.

[h=4]Wheels[/h]These are AndyMark 10" steel omni-directional wheels. A heads-up—you can mount them backwards, and if you do, the chair won't be able to rotate in place. Accordingly, your co-workers will mock you…trust me. What you want is for the wheels to form an O pattern, not an X. Here is a picture of improperly mounted wheels.

[h=3]Jellybean Object[/h]The Jellybean object is what talks to the robotic platform, so we can test the platform without the Kinect. The object only knows about two serial ports, which are connected to the motor controllers, and our trusty Phidget relay controller, which controls the footrest.
The three methods called during operation are listed below; a rough sketch of the class follows the list:
  • CalculateSpeed
  • Drive
  • ToggleFootrest
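To make that concrete, here's a minimal sketch of how such a class could be shaped. SerialPort is the standard System.IO.Ports type, but the COM port names, the command format, and everything except the three method names are hypothetical, and the real Phidget relay call is stubbed out with a comment:

using System.IO.Ports;

public class Jellybean
{
    // One serial connection per motor controller (port names are placeholders).
    private readonly SerialPort _frontController = new SerialPort("COM3", 9600);
    private readonly SerialPort _rearController = new SerialPort("COM4", 9600);
    private bool _isFootrestUp;

    public void Open()
    {
        _frontController.Open();
        _rearController.Open();
    }

    // Combines throttle and vector multiplier into a per-motor speed.
    public static double CalculateSpeed(double throttle, double vectorMultiplier, bool isFrontMotor)
    {
        return vectorMultiplier + (isFrontMotor ? throttle : -throttle);
    }

    // Sends the computed speeds to both motor controllers. The real packet
    // format depends entirely on the controllers' protocol.
    public void Drive(double throttle, double vectorMultiplier)
    {
        _frontController.WriteLine(CalculateSpeed(throttle, vectorMultiplier, true).ToString());
        _rearController.WriteLine(CalculateSpeed(throttle, vectorMultiplier, false).ToString());
    }

    // Flips the footrest relay; the actual call goes through the Phidget API.
    public void ToggleFootrest()
    {
        _isFootrestUp = !_isFootrestUp;
        // e.g., set the corresponding digital output on the Phidget InterfaceKit here.
    }
}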
[h=4]How to Drive Sideways[/h]Since driving an omni-directional armchair isn't exactly something someone does every day, I looked at how I'd drive it with an Xbox controller. The Y-axis is the throttle, and the X-axis is what I call the vector multiplier. The formula for this is surprisingly straightforward:


private static double ThrottlesThroughVectorMultiplier(double throttle, double vectorMultiplier, bool isFrontMotor)
{
    return vectorMultiplier + ((isFrontMotor) ? throttle : -throttle);
}

Since we're dealing with our hands, I also included a "dead" zone where the driver's hands can move but the motors won't react:


private double AdjustValueForDeadzone(double value)
{
    // positive values
    if (value > 0)
    {
        // value under threshold
        if (value < AllowedMovementArea)
            return 0;

        // Re-adjust the value back onto the 0-to-1 range.
        // Example: with a deadzone of .2, only values between .2 and 1 move
        // the motors, which makes for jerky movement. Subtracting the
        // deadzone shifts .2 down to 0, and multiplying by
        // _negatedAllowedMovementArea (1 / (1 - AllowedMovementArea),
        // or 1.25 in this example) stretches the result back to 0 to 1.
        // Values between 0 and .2 already returned 0 in the if statement above.
        value = (value - AllowedMovementArea) * _negatedAllowedMovementArea;
    }
    else // negative values
    {
        // value under threshold
        if (value > -AllowedMovementArea)
            return 0;

        value = (value + AllowedMovementArea) * _negatedAllowedMovementArea;
    }

    return value;
}
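Putting the two helpers together, here's a hypothetical worked example; the input numbers are made up purely for illustration:

// Assume AllowedMovementArea = .2, so _negatedAllowedMovementArea = 1.25.
var throttle = AdjustValueForDeadzone(0.6);   // (.6 - .2) * 1.25 = .5
var vector   = AdjustValueForDeadzone(0.36);  // (.36 - .2) * 1.25 = .2

var frontSpeed = ThrottlesThroughVectorMultiplier(throttle, vector, true);  // .2 + .5 = .7
var rearSpeed  = ThrottlesThroughVectorMultiplier(throttle, vector, false); // .2 - .5 = -.3

// Opposite contributions on the front and rear motors are what produce
// the sideways vector on omni wheels instead of a simple turn.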
[h=3]Kinect for Windows SDK[/h]Aww snap, we're finally here! Using the Coding4Fun.Kinect.WPF API with the Kinect for Windows SDK, I cut down on the amount of heavy lifting I had to do.
I have two core classes here, and one is just a simple wrapper around the SDK:
From sensor.cs

public void Open()
{
    if (_isInit)
        Close();

    RuntimeOptions flags = 0;

    if (TrackSkeleton)
    {
        flags |= RuntimeOptions.UseDepthAndPlayerIndex;
        flags |= RuntimeOptions.UseSkeletalTracking;
    }
    else if (UseDepthCameraStream)
    {
        flags |= RuntimeOptions.UseDepth;
    }

    if (UseColorCameraStream)
    {
        flags |= RuntimeOptions.UseColor;
    }

    _runtime.Initialize(flags);

    // now open streams
    if (TrackSkeleton || UseDepthCameraStream)
    {
        var imageType = (TrackSkeleton) ? ImageType.DepthAndPlayerIndex : ImageType.Depth;
        _runtime.DepthStream.Open(ImageStreamType.Depth, 2, DepthResolution, imageType);
    }

    if (UseColorCameraStream)
    {
        _runtime.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color);
    }

    _runtime.VideoFrameReady += RuntimeColorFrameReady;
    _runtime.DepthFrameReady += RuntimeDepthFrameReady;
    _runtime.SkeletonFrameReady += RuntimeSkeletonFrameReady;

    _isInit = true;
}
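For reference, turning the wrapper on might look something like this. The property and method names come from the code above, but the class name Sensor and the object-initializer pattern are assumptions:

var sensor = new Sensor
{
    TrackSkeleton = true,        // also enables the depth + player index stream
    UseColorCameraStream = true  // 640x480 color feed for the on-screen view
};

sensor.Open();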
The second class is all about processing the data, NuiDepth.cs. Since Coding4Fun.Kinect.WPF handles the heavy lifting, the code is pretty straightforward! It's all housed in the DepthFrameReady event:
From NuiDepth.cs

void _sensor_DepthFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    var imageWidth = e.ImageFrame.Image.Width;
    var imageHeight = e.ImageFrame.Image.Height;
    var imageHeightWithMargin = imageHeight - 50;

    var depthArray = e.ImageFrame.ToDepthArray();
    var rightHandOffset = imageWidth / 2;

    // Each hand is tracked in its own half of the depth image.
    var leftHand = depthArray.GetMidpoint(imageWidth, imageHeight, 0, 0, rightHandOffset, imageHeightWithMargin, MinDistance);
    var rightHand = depthArray.GetMidpoint(imageWidth, imageHeight, rightHandOffset, 0, imageWidth, imageHeightWithMargin, MinDistance);

    leftHand.X *= _bitmapScale;
    leftHand.Y *= _bitmapScale;
    rightHand.X *= _bitmapScale;
    rightHand.Y *= _bitmapScale;

    var args = new FrameReadyEventArgs
    {
        DepthBitmap = depthArray.ToBitmapSource(imageWidth, imageHeight, MinDistance, Color.FromArgb(255, 255, 0, 0)),
        ImageBitmap = _colorImage,
        LeftHand = leftHand,
        RightHand = rightHand
    };

    FrameReady(this, args);
}
[h=3]User Interface and NUI[/h]Our user interface was designed by the fine folks over at 352 Media and implemented by Dan Fernandez and myself. From this interface, we can turn the motors on, honk a horn, and raise and lower the chair. We also have a visual readout of how fast we're going and of how the program sees our hands.
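To tie the pieces together, here's a hypothetical sketch of how the UI layer could consume the FrameReady event and turn hand positions into drive commands. The mapping, the _viewportWidth/_viewportHeight fields, and the wiring to the Jellybean object are all illustrative, not the project's actual code:

_nuiDepth.FrameReady += (sender, args) =>
{
    // Illustrative mapping: left-hand height becomes throttle, right-hand
    // horizontal position becomes the vector multiplier, both normalized
    // to -1..1 before the deadzone adjustment.
    var throttle = AdjustValueForDeadzone(1 - (2 * args.LeftHand.Y / _viewportHeight));
    var vector = AdjustValueForDeadzone((2 * args.RightHand.X / _viewportWidth) - 1);

    if (MotorSafety.IsMotorEnabled) // the helper sketched earlier
        _jellybean.Drive(throttle, vector);
};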

[h=4]Why didn't we use skeleton tracking?[/h]Well, we wanted to, and as you can see above, we actually have it turned on. The issue is getting a skeleton lock at such a short range, given how close we had to mount the Kinect. Accordingly, we decided to go with pure depth data.
We leveraged the GetMidpoint and ToBitmapSource (with minimum distance) extensions from the Coding4Fun Kinect Toolkit to do the coloring and give us the hand positions on the screen.
[h=3]Conclusion[/h]Now you know how we pulled off Project Jellybean! If you want to try this out, the download link for the source code is at the top of the article. And if you build one and ask nicely, Scott Guthrie may ride it.


[h=3]About The Author[/h]Clint Rutkas runs Coding4Fun and has built a few crazy projects in the past. Clint is part of the Channel 9 team at Microsoft and can be reached at [email protected] or on Twitter at @clintrutkas. If you ever have a question, please reach out.