Using the Scripts...
The basic premise of using these scripts is that you first take rapid (30FPS) snapshots from the Kinect, saving the depth and color information for each frame as numerically ordered images. Windows automation then steps through each pair of snapshots: it modifies the Minecraft save file to show the statue, opens Minecraft to take a screenshot, closes Minecraft, and restores the world to its previous state. This occurs for each snapshot taken from the Kinect. The resulting Minecraft screenshots are numerically ordered so that they can be opened in a freeware program like VirtualDub and converted into an animation!
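The per-frame cycle can be sketched as a small driver loop; `edit_world`, `render_frame`, and `restore_world` below are hypothetical stand-ins for the real steps (modifying the save file, launching Minecraft for a screenshot, and reverting the save):

```python
def animate(frame_count, edit_world, render_frame, restore_world):
    """Run the edit/render/restore cycle once per Kinect snapshot."""
    for i in range(frame_count):
        edit_world(i)       # push frame i's statue into the save file
        render_frame(i)     # open Minecraft, take a screenshot, close it
        restore_world(i)    # revert the save to its pre-edit state

# The callables can be stubbed out to check the ordering of the cycle:
calls = []
animate(2,
        lambda i: calls.append(("edit", i)),
        lambda i: calls.append(("render", i)),
        lambda i: calls.append(("restore", i)))
# calls == [("edit", 0), ("render", 0), ("restore", 0),
#           ("edit", 1), ("render", 1), ("restore", 1)]
```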
- Step 1: Take snapshots from the Kinect using getSnapshot.py.
- Install the Kinect Windows drivers from CodeLaboratories
- Make sure to put the CLNUIDevice.dll in the same directory as the scripts!
- Update variables in the script:
- Update saveDir to be a valid directory.
- Update colorDir and depthDir to be valid subdirectories of saveDir.
- Make sure they all exist!
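The one property Step 1's output must guarantee is that file names sort in capture order. A minimal sketch of a naming scheme that does this, zero-padded so lexicographic order matches numeric order (the exact names getSnapshot.py produces may differ):

```python
import os

def frame_paths(save_dir, color_dir, depth_dir, index):
    """Build the numerically ordered file names for one snapshot pair."""
    name = f"{index:05d}.png"   # zero-pad so 00002 sorts before 00010
    return (os.path.join(save_dir, color_dir, name),
            os.path.join(save_dir, depth_dir, name))

color, depth = frame_paths("saves", "color", "depth", 7)
# color == os.path.join("saves", "color", "00007.png")
```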
- Step 2: Update the parameters of ProcessImages.py.
- Update depthImageName and colorImageName to reflect one of the images in the sequence you plan to animate. It will be used to create the 3D bounding box of the statue cut-out, so use one where the actors take up the most space.
- Update pathToSaves to be the path to the Minecraft world you wish to write the statues into. Make sure to make a copy of this world in case something goes horribly wrong!
- Update blocksPerMeter to make the statues bigger or smaller. 80 or so is a good starting point.
- Update floorBlock to set the floor of the statue. 0 sets the statue's feet at bedrock; 63 would be sea level. Remember the world's hard height limit is 128.
- Update centerChunkX and centerChunkZ to be the 'middle' chunk where the statues are displayed. If you are unsure, set them to 0,0, see where the statues are placed, and adjust accordingly.
- Update minmaxX, minmaxZ, and floor to contain the actors in the image. This may take a couple of tries, so I'd suggest using restoreImages.py (update the path there) so you can quickly revert the changes.
- You can check the mapImage.png to see what the cutout will look like projected into 2D.
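Conceptually, the cutout is just a box filter over the point cloud: anything outside minmaxX or minmaxZ, or below floor, is discarded. A pure-Python sketch using the parameter names from this step (the real ProcessImages.py logic may differ):

```python
def cut_out(points, minmax_x, minmax_z, floor):
    """Keep only the 3D points (x, y, z) inside the statue's bounding box."""
    return [(x, y, z) for (x, y, z) in points
            if minmax_x[0] <= x <= minmax_x[1]
            and minmax_z[0] <= z <= minmax_z[1]
            and y >= floor]

points = [(0.5, 1.0, 2.0), (5.0, 1.0, 2.0)]   # second point is outside the box
kept = cut_out(points, (-1.0, 1.0), (0.0, 3.0), 0.0)
# kept == [(0.5, 1.0, 2.0)]
```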
- Step 3: Use Windows automation (driverAnimation.py) to create the many snapshots.
- Apply one of the edited worlds from Step 2 and position the player so that he is looking out at the statue. The player will not move during the process, so make sure the entire stage is visible from his vantage point.
- Restore the world's save file so that it is in its pre-edited state.
- Update worldNum and pathToSaves to be the save directory of the Minecraft world.
- Update pathToMinecraft to point to the Minecraft executable.
- Update frameDir / colorDir / depthDir to reflect the existing directories that contain the snapshots.
- Set blocksPerMeter to the same value used in Step 2.
- Run the script! Be sure not to touch the mouse while Minecraft is open, or you will change the player's point of view for subsequent images!
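Backing up and restoring the save between frames is plain directory copying. A sketch of what the restore step presumably amounts to, demonstrated in a throwaway temp directory (the real scripts point at your .minecraft saves folder):

```python
import os
import shutil
import tempfile
from pathlib import Path

def backup_world(path_to_saves, world):
    """Copy the untouched world so every frame starts from the same state."""
    src = os.path.join(path_to_saves, world)
    shutil.copytree(src, src + "_backup", dirs_exist_ok=True)

def restore_world(path_to_saves, world):
    """Throw away the edited world and bring back the pristine copy."""
    dst = os.path.join(path_to_saves, world)
    shutil.rmtree(dst)
    shutil.copytree(dst + "_backup", dst)

# Demonstration in a throwaway directory:
saves = tempfile.mkdtemp()
level = Path(saves, "World1", "level.dat")
level.parent.mkdir()
level.write_text("pristine")
backup_world(saves, "World1")
level.write_text("statue pushed in")   # stands in for the save-file edit
restore_world(saves, "World1")
restored = level.read_text()
# restored == "pristine"
```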
- Step 4: Open the first image of the resulting snapshots in VirtualDub and save as a 30FPS animation.
- Save it and voilà! You're done!
How it works
The Kinect captures two types of data. One is a color image from a normal camera, and the other is a raw depth image.
A grid of 76,800 IR beams is sent out from the Kinect. An IR camera takes a picture of the room sprayed with beams of light and processes it into a raw depth image. Using some calibration techniques, the raw depth image can be converted into a 3D point cloud.
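The conversion from depth pixel to 3D point follows the standard pinhole-camera model. The intrinsics below (focal lengths fx/fy, image center cx/cy) are illustrative placeholder values, not the Kinect's actual calibration:

```python
def depth_to_point(u, v, depth_m, fx=580.0, fy=580.0, cx=320.0, cy=240.0):
    """Back-project pixel (u, v) with depth in meters to a 3D point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# A pixel at the image center lies on the optical axis:
point = depth_to_point(320, 240, 2.0)
# point == (0.0, 0.0, 2.0)
```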
Once the 3D point cloud has been created, Python scripts cut the actors out of the image. Once the background is gone, a Minecraft save file is opened and the data is pushed into it.
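Pushing a point into the save file reduces to quantizing meters into integer block coordinates. A sketch using the parameter names from Step 2 (centerChunkX/centerChunkZ are measured in 16-block chunks; the actual save-file editing is omitted):

```python
def point_to_block(x, y, z, blocks_per_meter,
                   center_chunk_x, center_chunk_z, floor_block):
    """Map a 3D point in meters to integer Minecraft block coordinates."""
    bx = int(round(x * blocks_per_meter)) + center_chunk_x * 16
    by = int(round(y * blocks_per_meter)) + floor_block
    bz = int(round(z * blocks_per_meter)) + center_chunk_z * 16
    return (bx, min(by, 127), bz)   # the world's hard height limit is 128

block = point_to_block(0.5, 0.25, 1.0, blocks_per_meter=80,
                       center_chunk_x=0, center_chunk_z=0, floor_block=63)
# block == (40, 83, 80)
```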
Phase 1 aimed to get this point cloud represented in Minecraft.
Phase 2 animated this data at 30FPS.
Phase 3 is currently under development.
Phase 4 is outlined.