Last year the Coding4Fun/Channel 9 guys asked me to work on a few things for Link Removed. One of these items was a way to output a webcam stream to Windows Phone 7 for use with Clint's Link Removed you may have read about. I figured the easiest way to accomplish this was by using a network/IP camera capable of sending a Motion-JPEG stream, which can be easily decoded and displayed on any platform that can display a JPEG image. Thus, this library was born.
It has gone through quite a few changes and I have expanded it to easily display MJPEG streams on a variety of platforms. The developer just references the assembly appropriate to their platform, adds a few lines of code, and away it goes.
Usage
For those that are just interested in the usage, it's as simple as this:
- Reference one of the following assemblies appropriate for your project:
- MjpegProcessorSL.dll - Silverlight (Out of Browser Only!)
- MjpegProcessorWP7.dll - Windows Phone 7 (XNA or Silverlight, performance maxes out around 320x240 @ 15fps, so set your camera settings accordingly)
- MjpegProcessorXna4.dll - XNA 4.0 (Windows)
- MjpegProcessor.dll - WinForms and WPF
- Create a new MjpegDecoder object.
- Hook up the FrameReady event.
- In the event handler, take the Bitmap/BitmapImage and assign it to your image display control.
- In the case of XNA, use the GetMjpegFrame method in the Update method, which will return a Texture2D you can use in your Draw method.
- Call the ParseStream method with the Uri of the MJPEG "endpoint".
public partial class MainWindow : Window
{
    MjpegDecoder _mjpeg;

    public MainWindow()
    {
        InitializeComponent();

        _mjpeg = new MjpegDecoder();
        _mjpeg.FrameReady += mjpeg_FrameReady;
    }

    private void Start_Click(object sender, RoutedEventArgs e)
    {
        _mjpeg.ParseStream(new Uri("http://192.168.2.200/img/video.mjpeg"));
    }

    private void mjpeg_FrameReady(object sender, FrameReadyEventArgs e)
    {
        image.Source = e.BitmapImage;
    }
}

If that doesn't fit your needs, you can also access the Bitmap/BitmapImage properties directly from the MjpegDecoder object, or the CurrentFrame property, which will contain the raw JPEG data prior to being decoded.
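For comparison, here is a minimal WinForms sketch. This is only an illustration and not taken from the library's samples: it assumes a form with a PictureBox named pictureBox1, a reference to MjpegProcessor.dll, a namespace matching the assembly name, and a placeholder camera URL.

using System;
using System.Windows.Forms;
using MjpegProcessor;

public partial class MainForm : Form
{
    MjpegDecoder _mjpeg;

    public MainForm()
    {
        InitializeComponent();

        _mjpeg = new MjpegDecoder();
        _mjpeg.FrameReady += mjpeg_FrameReady;
        _mjpeg.ParseStream(new Uri("http://192.168.2.200/img/video.mjpeg"));
    }

    private void mjpeg_FrameReady(object sender, FrameReadyEventArgs e)
    {
        // in the WinForms path the event can fire on a background thread,
        // so marshal the update onto the UI thread before touching the control
        if (InvokeRequired)
        {
            BeginInvoke((Action)(() => pictureBox1.Image = e.Bitmap));
            return;
        }

        pictureBox1.Image = e.Bitmap;
    }
}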
A Word About Network/IP Cameras
I have tested this against several different cameras. Each device has its own quirks, but all of them seem to work with this library, with one exception: some cameras respond differently when an Internet Explorer user agent header is sent with the HTTP request. Because Internet Explorer does not properly support MJPEG streams, these cameras send a single JPEG image instead of an MJPEG stream. Unfortunately, this breaks the Silverlight version of the processor, since the user agent header cannot be changed from the Internet Explorer default there. When this happens, only a single frame is sent and the decoding fails. The only fix I have found is to use a different camera that doesn't behave this way.
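If you want to check how your particular camera reacts outside of Silverlight, a quick stand-alone sketch like this will show the Content-Type it returns for a given user agent. This is plain .NET and not part of the library; the URL and user agent string are placeholders.

using System;
using System.Net;

class CameraCheck
{
    static void Main()
    {
        // replace with your camera's MJPEG URL
        var request = (HttpWebRequest)WebRequest.Create("http://192.168.2.200/img/video.mjpeg");

        // comment this out to compare against the default user agent
        request.UserAgent = "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)";

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // a proper MJPEG stream reports multipart/x-mixed-replace with a boundary;
            // a camera that has fallen back to a single frame reports image/jpeg instead
            Console.WriteLine(response.ContentType);
        }
    }
}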
What is MJPEG?
Pretty simply, it's a video format where each frame of video is sent as a separate, compressed JPEG image. A standard HTTP request is made to a specific URL, and a multipart response is sent. Parsing this multipart stream into separate images as they are sent results in a series of JPEG images. The viewer displays those JPEG images as quickly as they are sent and that creates the video. It's not a well documented format, nor is it perfectly standardized, but it does work. For more information, see the MJPEG article on Wikipedia.
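For illustration, a camera's MJPEG response typically looks something like this. The boundary name, headers, and sizes vary from camera to camera; this is only a sketch, not output from any specific device.

HTTP/1.1 200 OK
Content-Type: multipart/x-mixed-replace; boundary=myboundary

--myboundary
Content-Type: image/jpeg
Content-Length: 23145

...binary JPEG data for frame 1...
--myboundary
Content-Type: image/jpeg
Content-Length: 23089

...binary JPEG data for frame 2...
--myboundary
...

Each part between boundary markers is a complete JPEG image, which is exactly what the decoder extracts.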
How Do I Find the MJPEG URL of My Camera?
Excellent question. Not an excellent answer. The user manual may mention the URL, and a quick internet search with the model number should get you a result. You can also try this company's lookup tool.
How Does It Work?
Glad you asked. If you take a look at the project, you'll notice there isn't much code. A single file is used, with a variety of compiler directives to compile certain portions based on the platform assembly being generated; MjpegDecoder.cs/.vb contains the entire implementation.
First, an asynchronous request is made to the provided MJPEG URL inside the ParseStream method. If we are in a Silverlight environment, the AllowReadStreamBuffering property must be set to false so the response returns immediately instead of being buffered. Additionally, we need to register the http:// prefix to use the client http stack vs. the browser stack. Finally, the request is made using the BeginGetResponse method, specifying the OnGetResponse method as the callback. This will be called as soon as data is sent from the camera in response to our request.
public void ParseStream(Uri uri)
{
#if SILVERLIGHT
    HttpWebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);
#endif
    HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(uri);

#if SILVERLIGHT
    // start the stream immediately
    request.AllowReadStreamBuffering = false;
#endif

    // asynchronously get a response
    request.BeginGetResponse(OnGetResponse, request);
}

OnGetResponse grabs the response headers and uses the Content-Type header to determine the boundary marker that will be sent between JPEG frames.
private void OnGetResponse(IAsyncResult asyncResult)
{
    HttpWebResponse resp;
    byte[] buff;
    byte[] imageBuffer = new byte[1024 * 1024];
    Stream s;

    // get the response
    HttpWebRequest req = (HttpWebRequest)asyncResult.AsyncState;
    resp = (HttpWebResponse)req.EndGetResponse(asyncResult);

    // find our magic boundary value
    string contentType = resp.Headers["Content-Type"];
    if(string.IsNullOrEmpty(contentType) || !contentType.Contains("="))
        throw new Exception("Invalid content-type header. The camera is likely not returning a proper MJPEG stream.");

    string boundary = contentType.Split('=')[1];
    byte[] boundaryBytes = Encoding.UTF8.GetBytes(boundary.StartsWith("--") ? boundary : "--" + boundary);
...
It then streams the response data, looks for a JPEG header marker, then reads until it finds the boundary marker, copies the data into a buffer, decodes it, passes it on to whoever wants it via an event, and then starts over.
...
    s = resp.GetResponseStream();
    BinaryReader br = new BinaryReader(s);

    _streamActive = true;

    buff = br.ReadBytes(ChunkSize);

    while (_streamActive)
    {
        int size;

        // find the JPEG header
        int imageStart = buff.Find(JpegHeader);

        if(imageStart != -1)
        {
            // copy the start of the JPEG image to the imageBuffer
            size = buff.Length - imageStart;
            Array.Copy(buff, imageStart, imageBuffer, 0, size);

            while(true)
            {
                buff = br.ReadBytes(ChunkSize);

                // find the boundary text
                int imageEnd = buff.Find(boundaryBytes);
                if(imageEnd != -1)
                {
                    // copy the remainder of the JPEG to the imageBuffer
                    Array.Copy(buff, 0, imageBuffer, size, imageEnd);
                    size += imageEnd;

                    // create a single JPEG frame
                    CurrentFrame = new byte[size];
                    Array.Copy(imageBuffer, 0, CurrentFrame, 0, size);
#if !XNA
                    ProcessFrame(CurrentFrame);
#endif
                    // copy the leftover data to the start
                    Array.Copy(buff, imageEnd, buff, 0, buff.Length - imageEnd);

                    // fill the remainder of the buffer with new data and start over
                    byte[] temp = br.ReadBytes(imageEnd);
                    Array.Copy(temp, 0, buff, buff.Length - imageEnd, temp.Length);
                    break;
                }

                // copy all of the data to the imageBuffer
                Array.Copy(buff, 0, imageBuffer, size, buff.Length);
                size += buff.Length;
            }
        }
    }

    resp.Close();
}

The ProcessFrame method seen above takes the raw byte buffer, which contains an undecoded JPEG image, and decodes it based on the environment. However, it isn't called in the XNA case, as we'll see in a moment:
private void ProcessFrame(byte[] frameBuffer)
{
#if SILVERLIGHT
    // need to get this back on the UI thread
    Deployment.Current.Dispatcher.BeginInvoke((Action)(() =>
    {
        // resets the BitmapImage to the new frame
        BitmapImage.SetSource(new MemoryStream(frameBuffer, 0, frameBuffer.Length));

        // tell whoever's listening that we have a frame to draw
        if(FrameReady != null)
            FrameReady(this, new FrameReadyEventArgs { FrameBuffer = CurrentFrame, BitmapImage = BitmapImage });
    }));
#endif

#if !SILVERLIGHT && !XNA
    // I assume if there's an Application.Current then we're in WPF, not WinForms
    if(Application.Current != null)
    {
        // get it on the UI thread
        Application.Current.Dispatcher.BeginInvoke((Action)(() =>
        {
            // create a new BitmapImage from the JPEG bytes
            BitmapImage = new BitmapImage();
            BitmapImage.BeginInit();
            BitmapImage.StreamSource = new MemoryStream(frameBuffer);
            BitmapImage.EndInit();

            // tell whoever's listening that we have a frame to draw
            if(FrameReady != null)
                FrameReady(this, new FrameReadyEventArgs { FrameBuffer = CurrentFrame, Bitmap = Bitmap, BitmapImage = BitmapImage });
        }));
    }
    else
    {
        // create a simple GDI+ happy Bitmap
        Bitmap = new Bitmap(new MemoryStream(frameBuffer));

        // tell whoever's listening that we have a frame to draw
        if(FrameReady != null)
            FrameReady(this, new FrameReadyEventArgs { FrameBuffer = CurrentFrame, Bitmap = Bitmap, BitmapImage = BitmapImage });
    }
#endif
}
In the case of Silverlight, the BitmapImage object has a SetSource method which takes a stream to be decoded and turned into the image. In WPF, BitmapImage works differently: BeginInit is called, then the StreamSource property is set to the stream of bytes, and finally EndInit is called. In WinForms, the library returns a Bitmap object, which can be initialized with the stream right in the constructor.
In the code above, I look at the Application.Current property to determine whether the library is being used from a WPF project: if that property is not null, WPF is assumed; otherwise, the WinForms path is taken.
When compiled as an XNA library, we have no use for a BitmapImage or a Bitmap…we need a Texture2D object. The GetMjpegFrame method seen below is what is called by an XNA application during the Update method to pull the current frame:
public Texture2D GetMjpegFrame(GraphicsDevice graphicsDevice)
{
    // create a Texture2D from the current byte buffer
    if(CurrentFrame != null)
        return Texture2D.FromStream(graphicsDevice, new MemoryStream(CurrentFrame, 0, CurrentFrame.Length));

    return null;
}
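For reference, here is a rough sketch of how a game might consume this in Update and Draw. It is not taken from the library's samples: the _mjpeg field is an MjpegDecoder with ParseStream already called, and _frameTexture and _spriteBatch are assumed to be set up elsewhere (e.g. in the constructor and LoadContent).

protected override void Update(GameTime gameTime)
{
    // grab the most recently decoded frame, if one has arrived yet
    Texture2D frame = _mjpeg.GetMjpegFrame(GraphicsDevice);
    if (frame != null)
        _frameTexture = frame;

    base.Update(gameTime);
}

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.Black);

    if (_frameTexture != null)
    {
        // draw the current camera frame at the top-left corner
        _spriteBatch.Begin();
        _spriteBatch.Draw(_frameTexture, Vector2.Zero, Color.White);
        _spriteBatch.End();
    }

    base.Draw(gameTime);
}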
Conclusion
And that's about it for the library. Give it a try and please contact me with feedback or if you run into any issues. Enjoy!
Link Removed