The Microsoft Kinect sensor is a real-time 3D depth measurement system available at a fraction of the cost of similar commercial systems. It features a 640×480 visible-light camera and a 320×240 depth image produced by an infrared camera paired with an infrared projector. The entire system operates at 30 frames per second.
The drivers for the Kinect were hacked within three hours of the system's release. This low-cost, ubiquitous depth measurement system is set to drastically alter the sensing capabilities of computer vision systems.
Kinect Calibration
Initially we were going to tackle Kinect calibration ourselves, specifically calibrating the visible camera, infrared camera, and infrared projector. However, there are readily available calibration solutions for the Kinect that use approaches very similar to our own. An organization called OpenNI was recently formed by Willow Garage (OpenCV), PrimeSense (manufacturer of the Kinect camera), and Side-Kick (a game studio). This organization will be releasing the original hardware drivers for the Kinect along with a calibration and skeletal tracking solution. ROS also has a functioning calibration procedure with a lot of community momentum.
We therefore chose not to reinvent the wheel and use the ROS code to calibrate the Kinect. There is an excellent description of the process on the ROS website. They essentially make some simplifying assumptions about the intrinsics and radial distortion of the two cameras, then use a Zhang calibration process, extremely similar to our own, to calibrate the visible and infrared cameras. The Kinect provides a disparity map in the image space of the infrared camera, which is calibrated using a minimization technique described in their documentation.
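To make the Zhang step concrete, the sketch below runs a checkerboard calibration with OpenCV in Python. The board dimensions, square size, and file pattern are placeholders rather than the actual ROS tooling, which wraps the same underlying routine for both the visible and infrared images.

```python
# Minimal Zhang-style checkerboard calibration with OpenCV.
# Board size, square size, and image paths are assumptions for
# illustration; the ROS Kinect calibration applies the same idea
# to both the visible and infrared images.
import glob
import cv2
import numpy as np

BOARD = (8, 6)     # inner corners per row/column (assumption)
SQUARE = 0.025     # checkerboard square size in meters (assumption)

# 3D corner positions of the board in its own plane (z = 0)
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points = [], []
for path in glob.glob("checkerboard_*.png"):   # placeholder file pattern
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        # Refine corner locations to sub-pixel accuracy
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

assert obj_points, "no checkerboard views found"

# Recovers the intrinsic matrix K and the distortion coefficients
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("K =\n", K)
```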
Kinect-Projector Viewer
To demonstrate the calibration results and offer a glimpse of potential projection-based interfaces, we created a simple Kinect-projector viewer. The application projects color onto physical objects based upon their distance from the range sensor. It reads in the calibration parameters for the Kinect and the projector; the disparity map from the Kinect is then transformed into a 3D mesh via the Kinect parameters and a vertex shader. Because the displacement happens on the GPU, the mesh can be updated in real time without slowing the rendering process by uploading new geometry every frame.
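For illustration, here is the disparity-to-3D step on the CPU with NumPy; in the viewer the same math lives in the vertex shader. The depth model uses approximate raw-disparity-to-depth coefficients commonly cited for the original Kinect, and fx, fy, cx, cy are placeholder stand-ins for the calibrated infrared intrinsics.

```python
# Sketch of the disparity-to-3D back-projection performed per vertex.
# The depth coefficients are the commonly cited empirical fit for the
# original Kinect, and the intrinsics are placeholder values, not the
# calibrated parameters from the procedure above.
import numpy as np

fx, fy = 580.0, 580.0   # focal lengths in pixels (placeholders)
cx, cy = 320.0, 240.0   # principal point (placeholders)

def disparity_to_points(disparity):
    """Back-project a raw Kinect disparity map to 3D points in meters."""
    # Approximate raw-disparity-to-depth model (hedged, empirically fitted)
    z = 1.0 / (disparity * -0.0030711016 + 3.3309495161)
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Standard pinhole back-projection through the IR camera intrinsics
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.dstack([x, y, z])   # (h, w, 3) vertex grid for the mesh
```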
The development team has a fairly complex array of gear running to define a plane and a polygonal boundary area within which the video projector and Kinect work together. Once a set of coordinates recorded by the Kinect is mapped onto the corresponding projector coordinates, the system is calibrated and multi-touch ready.
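For a planar surface, that mapping can be expressed as a homography. The sketch below estimates one from four correspondences with OpenCV; the point values are invented for illustration.

```python
# Sketch of the final Kinect-to-projector mapping for a planar surface.
# The four correspondences are made-up example values; in practice they
# come from the recorded calibration coordinates.
import cv2
import numpy as np

# Corners of the boundary polygon as seen by the Kinect ...
kinect_pts = np.float32([[52, 40], [270, 38], [276, 205], [48, 210]])
# ... and the same corners in projector pixel coordinates
proj_pts = np.float32([[0, 0], [1024, 0], [1024, 768], [0, 768]])

H, _ = cv2.findHomography(kinect_pts, proj_pts)

def kinect_to_projector(pt):
    """Map one Kinect pixel onto the projector's image plane."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

print(kinect_to_projector((160, 120)))   # e.g. a touch point on the plane
```

With H in hand, any touch point detected in the Kinect image can be warped directly into projector pixels.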