Kinecting to the Kinect SDK

Sunday, November 4, 2012

I just found out that the Kinect SDK 1.6 is up!
Now you are going to say I am out of it, since it was released almost a month ago, but I haven't had much time to work with my Kinect because of other projects that have nothing to do with it. It doesn't sound like any of the new features are a matter of life or death, so I don't think it matters much (except developing for Windows 8, which I hardly find fascinating).
Anyways, I just wanted to celebrate the occasion, so here I am blogging about it. Whenever a new version of any software I use comes out, it is significant enough to mention, so you will be seeing posts like this a lot more in the future.
I am also saying sorry for not getting on or blogging in the last 3 months (has it really been that long?). I will try to do better, like putting up new tutorials at least once a month and updating some of my other blogs too.
That's all I wanted to accomplish in this post (pathetic, right?), so I will see you another day!
Friday, August 31, 2012
New Update for Blog
Hey guys, I just wanted to let you know that this blog will now be used for any programming-related posts (besides my secret ones). I will probably cover:
Flash
.Net
UnityScript
Unity3d
JavaScript
Html
CSS
Racket/Scheme
And any other language I end up learning :) Just wanted to keep you guys posted
Monday, August 13, 2012
Player Tracking with Kinect
Recently I saw this question on StackOverflow, and it reminded me of skeleton IDs, player indexes, and so on, so I thought I would show you guys how to work with them. It is all pretty simple and easy to keep track of.
So first we want to cover IDs since they are the easy ones.
void nui_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    using (SkeletonFrame sf = e.OpenSkeletonFrame())
    {
        if (sf == null) return;
        Skeleton[] skeletons = new Skeleton[sf.SkeletonArrayLength];
        sf.CopySkeletonDataTo(skeletons);
        foreach (Skeleton s in skeletons)
        {
            if (s.TrackingState == SkeletonTrackingState.Tracked)
            {
                int ID1 = s.TrackingId;  // unique ID for this tracked person
            }
        }
    }
}
Really simple, right? Player indexes are also easy, just a bit more involved.
void nui_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    using (SkeletonFrame sf = e.OpenSkeletonFrame())
    {
        if (sf == null) return;
        Skeleton[] skeletons = new Skeleton[sf.SkeletonArrayLength];
        sf.CopySkeletonDataTo(skeletons);
        //check which skeletons in the array are tracked and
        //use those array indexes as the player index
        Skeleton player1 = skeletons[playerIndex1];
        Skeleton player2 = skeletons[playerIndex2];
    }
}
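The snippet above assumes you already know playerIndex1 and playerIndex2. Here is a minimal sketch of finding them, meant to go right after the CopySkeletonDataTo call (the variable names are mine, not from the SDK):
// record which slots in the skeleton array are currently tracked
int playerIndex1 = -1, playerIndex2 = -1;
for (int i = 0; i < skeletons.Length; i++)
{
    if (skeletons[i].TrackingState != SkeletonTrackingState.Tracked) continue;
    if (playerIndex1 < 0) playerIndex1 = i;       // first tracked skeleton
    else if (playerIndex2 < 0) playerIndex2 = i;  // second tracked skeleton
}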
Still extremely simple. Detecting whether a depth pixel belongs to a person takes a bit more code, but it is not hard either.
void DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
    using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    {
        if (depthFrame == null) return;
        short[] rawDepthData = new short[depthFrame.PixelDataLength];
        depthFrame.CopyPixelDataTo(rawDepthData);
        // the low bits of each depth value hold the player index (0 = no player);
        // depthIndex is whichever pixel you want to test
        int player = rawDepthData[depthIndex] & DepthImageFrame.PlayerIndexBitmask;
        if (player > 0)
        {
            //do something
        }
    }
}
See? This is all pretty easy and very useful. Hope this helps anyone wanting to know about it.
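One practical use of all this: once you grab a player's TrackingId, you can follow that same person from frame to frame even if other people wander in and out. A rough sketch (trackedId is an assumed field, and you would pass in the array you copied the frame data into):
// assumed field: the ID of the player we locked onto (-1 = nobody yet)
int trackedId = -1;

void FollowPlayer(Skeleton[] skeletons)
{
    foreach (Skeleton s in skeletons)
    {
        if (s.TrackingState != SkeletonTrackingState.Tracked) continue;
        if (trackedId < 0) trackedId = s.TrackingId;  // lock onto the first person we see
        if (s.TrackingId == trackedId)
        {
            // this is "our" player; use s.Joints, s.Position, etc. here
        }
    }
}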
Saturday, August 11, 2012
Saving an Image from a Canvas C#
Hey all, like I promised in my last post, I will show you how to save a canvas as an image. This is really cool and useful (I think) and pretty simple.
public void ExportToPng(Uri path, Canvas surface)
{
    if (path == null) return;

    // measure and arrange the canvas so it renders at its full size
    Size size = new Size(surface.Width, surface.Height);
    surface.Measure(size);
    surface.Arrange(new Rect(size));

    // render the canvas into an in-memory bitmap
    RenderTargetBitmap renderBitmap = new RenderTargetBitmap(
        (int)size.Width,        // width
        (int)size.Height,       // height
        96d,                    // dpi x
        96d,                    // dpi y
        PixelFormats.Pbgra32);  // pixel format
    renderBitmap.Render(surface);

    // encode and write the file; PNG matches the method name
    // (swap in JpegBitmapEncoder if you want a .jpg instead)
    using (FileStream outstream = new FileStream(path.LocalPath, FileMode.Create))
    {
        PngBitmapEncoder encoder = new PngBitmapEncoder();
        encoder.Frames.Add(BitmapFrame.Create(renderBitmap));
        encoder.Save(outstream);
    }
}
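For example, a hypothetical call (assuming your window has a Canvas named MainCanvas, like the one in my joint tracking post):
// save whatever is currently drawn on MainCanvas to the desktop
string file = System.IO.Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.Desktop), "canvas.png");
ExportToPng(new Uri(file), MainCanvas);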
That's all there is to it! I think it is really useful and I would like to use it more often. Hope this helps anyone who wanted to know how!
Saving Images with C#
I am making this very basic tutorial because it was hard for me to change from the old beta's way. It used to just be:
// fill the WriteableBitmap with the latest color frame bytes
colorImage.WritePixels(new Int32Rect(0, 0, colorImage.PixelWidth, colorImage.PixelHeight),
    pixels, colorImage.PixelWidth * 4, 0);
// the old beta way: a single Save call and you were done
colorImage.Save(path, ImageFormat.Jpeg);
But now we have to use this long block of intense code to save one image: convert it to a BitmapFrame, then save it using an encoder. Here is the new and complicated way of doing it.
BitmapEncoder encoder = new JpegBitmapEncoder();
// wrap the bitmap in a frame the encoder can consume
encoder.Frames.Add(BitmapFrame.Create(colorImage));
try
{
    // write the encoded JPEG to disk
    using (FileStream fs = new FileStream(path, FileMode.Create))
    {
        encoder.Save(fs);
    }
}
catch (IOException)
{
    MessageBox.Show("Save Failed");
}
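For context, this assumes colorImage is a WriteableBitmap that already holds the latest color frame and pixels is its raw byte buffer. A minimal sketch of that setup (the field names and the 640x480 format are my assumptions):
// assumed fields: the bitmap the encoder reads from and a buffer for the raw color bytes
WriteableBitmap colorImage = new WriteableBitmap(640, 480, 96.0, 96.0, PixelFormats.Bgr32, null);
byte[] pixels = new byte[640 * 480 * 4];

void sensor_ColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    using (ColorImageFrame frame = e.OpenColorImageFrame())
    {
        if (frame == null) return;
        frame.CopyPixelDataTo(pixels);
        // then push the bytes into colorImage with WritePixels, as shown above
    }
}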
Alright, I exaggerated, the encoder code is not that complicated, but it is a lot less understandable. I am fine with it, though, since it is more efficient and you can use this method to save anything, like a canvas, which I will cover in my next post. See you then!
Friday, August 10, 2012
Kinect Basics: Joint Tracking
This post is Kinect basics. I don't mean the stuff you could read elsewhere, I mean the stuff you only read here. So assuming you know something about Kinect, this will make sense to you. But if you are pretending to know something about Kinect when you actually know nothing, then I banish you from this blog.
Moving on, we are going to make an app (in WPF; if you use WinForms I banish you again, and don't even get me started on Silverlight) that takes a joint (your hand, for example) and maps an ellipse to it. This is very simple and I use it a lot. In the XAML, just make 2 images, 4 ellipses, and a canvas, like so:
<Window x:Class="SkeletalTracking.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="600" Width="800" Loaded="Window_Loaded"
        Closing="Window_Closing" WindowState="Maximized">
    <Canvas Name="MainCanvas">
        <Image Canvas.Left="0" Canvas.Top="0" Width="640" Height="480" Name="imageviewer" />
        <Ellipse Canvas.Left="0" Canvas.Top="0" Height="50" Name="leftEllipse" Width="50" Fill="#FF4D298D" Opacity="1" Stroke="White" />
        <Ellipse Canvas.Left="100" Canvas.Top="0" Fill="#FF2CACE3" Height="50" Name="rightEllipse" Width="50" Opacity="1" Stroke="White" />
        <Image Canvas.Left="66" Canvas.Top="90" Height="87" Name="headImage" Stretch="Fill" Width="84" Source="whateverimagefileyouwant.jpg" />
        <Ellipse Canvas.Left="283" Canvas.Top="233" Height="23" Name="leftknee" Stroke="Black" Width="29" />
        <Ellipse Canvas.Left="232" Canvas.Top="233" Height="23" Name="rightknee" Stroke="Black" Width="30" />
    </Canvas>
</Window>
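One thing I assume throughout the rest of this post is that the color stream is already showing in the "imageviewer" Image. Here is a rough sketch of one way to do that, assuming a KinectSensor (for example kinectSensorChooser1.Kinect, which the code below also uses) with its color stream enabled at RgbResolution640x480Fps30 and this handler hooked to its ColorFrameReady event:
void sensor_ColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
    {
        if (colorFrame == null) return;
        byte[] colorPixels = new byte[colorFrame.PixelDataLength];
        colorFrame.CopyPixelDataTo(colorPixels);
        // build a bitmap from the raw BGRX bytes and show it in the Image control
        imageviewer.Source = BitmapSource.Create(
            colorFrame.Width, colorFrame.Height,
            96, 96, PixelFormats.Bgr32, null,
            colorPixels, colorFrame.Width * 4);
    }
}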
Keep in mind that this is my first post with code, so I don't have the whole Blogger setup figured out; I will find a nicer way to format code, but this works for now. I am assuming that if you know something about the Kinect SDK you already know how to display a color image (roughly as sketched above), so do that now with the "imageviewer" Image. Now for the mapping, you will need these few methods:
void GetCameraPoint(Skeleton first, AllFramesReadyEventArgs e)
{
using (DepthImageFrame depth = e.OpenDepthImageFrame())
{
if (depth == null ||
kinectSensorChooser1.Kinect == null)
{
return;
}
//Map a joint location to a point on the depth map
//head
DepthImagePoint headDepthPoint =
depth.MapFromSkeletonPoint(first.Joints[JointType.Head].Position);
//left hand
DepthImagePoint leftDepthPoint =
depth.MapFromSkeletonPoint(first.Joints[JointType.HandLeft].Position);
//right hand
DepthImagePoint rightDepthPoint =
depth.MapFromSkeletonPoint(first.Joints[JointType.HandRight].Position);
//Map a depth point to a point on the color image
//head
ColorImagePoint headColorPoint =
depth.MapToColorImagePoint(headDepthPoint.X, headDepthPoint.Y,
ColorImageFormat.RgbResolution640x480Fps30);
//left hand
ColorImagePoint leftColorPoint =
depth.MapToColorImagePoint(leftDepthPoint.X, leftDepthPoint.Y,
ColorImageFormat.RgbResolution640x480Fps30);
//right hand
ColorImagePoint rightColorPoint =
depth.MapToColorImagePoint(rightDepthPoint.X, rightDepthPoint.Y,
ColorImageFormat.RgbResolution640x480Fps30);
//Set location
CameraPosition(headImage, headColorPoint);
CameraPosition(leftEllipse, leftColorPoint);
CameraPosition(rightEllipse, rightColorPoint);
}
}
Skeleton GetFirstSkeleton(AllFramesReadyEventArgs e)
{
using (SkeletonFrame skeletonFrameData = e.OpenSkeletonFrame())
{
if (skeletonFrameData == null)
{
return null;
}
skeletonFrameData.CopySkeletonDataTo(allSkeletons);
//get the first tracked skeleton
Skeleton first = (from s in allSkeletons
where s.TrackingState == SkeletonTrackingState.Tracked
select s).FirstOrDefault();
return first;
}
}
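GetFirstSkeleton copies the frame data into an allSkeletons field that I never show being declared; a likely declaration (the SDK returns at most six skeletons per frame, so six slots is enough):
// field used by GetFirstSkeleton above
Skeleton[] allSkeletons = new Skeleton[6];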
Now all you need is the AllFramesReady event handler to tie it all together.
void sensor_AllFramesReady(object sender, AllFramesReadyEventArgs e)
{
if (closing)
{
return;
}
//Get a skeleton
Skeleton first = GetFirstSkeleton(e);
if (first == null)
{
return;
}
//set scaled position
ScalePosition(headImage, first.Joints[JointType.Head]);
ScalePosition(leftEllipse, first.Joints[JointType.HandLeft]);
ScalePosition(rightEllipse, first.Joints[JointType.HandRight]);
ScalePosition(leftknee, first.Joints[JointType.KneeLeft]);
ScalePosition(rightknee, first.Joints[JointType.KneeRight]);
GetCameraPoint(first, e);
}
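The handlers also call two helpers I have not shown, CameraPosition and ScalePosition: CameraPosition centers an element on a point in color-image coordinates, and ScalePosition spreads a joint's skeleton-space position across the canvas. Here is a rough sketch of what they might look like (the scaling constants are my own guesses, so tweak them for your setup):
// center a UI element on a point in color-image (pixel) coordinates
private void CameraPosition(FrameworkElement element, ColorImagePoint point)
{
    Canvas.SetLeft(element, point.X - element.Width / 2);
    Canvas.SetTop(element, point.Y - element.Height / 2);
}

// crude linear mapping from skeleton space (meters, roughly -1..1 here) onto the 640x480 canvas
private void ScalePosition(FrameworkElement element, Joint joint)
{
    double x = (joint.Position.X + 1.0) / 2.0 * 640;
    double y = (1.0 - joint.Position.Y) / 2.0 * 480;
    Canvas.SetLeft(element, x - element.Width / 2);
    Canvas.SetTop(element, y - element.Height / 2);
}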
And that is all there is to it! Hope this helps. A large portion of this code was taken from http://channel9.msdn.com/Series/KinectQuickstart/Skeletal-Tracking-Fundamentals
First post!
Hey guys. This is my first post and I just wanted to talk about what this site is about!
I will mostly be covering:
1) Kinect
2) The Kinect SDK
3) Things you can do with the SDK
4) Anything else I feel like talking about.
This post falls into category 4. I would label each post with its category number, but then I would be accused of being organized... which I am not. I am running out of things to say, so I will consider this a post well done! I will post again in the next day or so about something semi-serious. See you then!