Some of the best hacks come from mashing together two pieces of commercial hardware that weren't designed with each other in mind. That's exactly what Gene Kogan did when he made the Kinect-Projector Toolkit for openFrameworks and Processing.
This toolkit allows you to mix the human-body tracking features of a commercial depth sensor with the imagery from a commercial projector.
In the video above, participants are interacting with virtual objects falling from the ceiling. These objects collide with each other using 'virtual' physics, but they can also collide with you as a participant in physical space.
Of course, it only seems that way; in reality it's all virtual. The toolkit takes the skeletal data from the Kinect and accurately maps it to any point in a virtual space. You then project the virtual space back over the physical space to complete the illusion.
At the heart of this hack is a calibration process, similar to the one found in RGBDToolkit. This process gives your computer the intrinsics and relative positioning of both the Kinect lens and the projector lens.
So long as the computer has this information, it can accurately map skeletal joints to virtual objects such that the interactions mirror what's going on in the real world.
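The mapping described above can be sketched as a standard pinhole projection. To be clear, this is an illustrative sketch, not the toolkit's actual code: the `project_point` helper and the matrix values are made-up placeholders standing in for what a real calibration would produce.

```python
# Hypothetical sketch (not the toolkit's API): calibration yields a 3x4
# projection matrix folding together the projector's intrinsics and its
# pose relative to the Kinect. With that matrix, any 3D skeletal joint
# can be mapped to a projector pixel.

def project_point(P, point_3d):
    """Map a 3D point (Kinect coordinates, metres) to projector pixels."""
    x, y, z = point_3d
    # Homogeneous multiply: [u*w, v*w, w] = P @ [x, y, z, 1]
    u = P[0][0]*x + P[0][1]*y + P[0][2]*z + P[0][3]
    v = P[1][0]*x + P[1][1]*y + P[1][2]*z + P[1][3]
    w = P[2][0]*x + P[2][1]*y + P[2][2]*z + P[2][3]
    return (u / w, v / w)  # perspective divide

# Toy calibration matrix: 1000-pixel focal length, principal point at
# (640, 360), projector co-located with the Kinect (identity pose).
P = [
    [1000, 0,    640, 0],
    [0,    1000, 360, 0],
    [0,    0,    1,   0],
]

# A hand joint 0.5 m right of centre, 2 m from the sensor:
print(project_point(P, (0.5, 0.0, 2.0)))  # (890.0, 360.0)
```

In practice the matrix comes out of the calibration step, which solves for these numbers by matching known points seen by both the Kinect and the projector; with it in hand, drawing a virtual object "on" someone's hand is just a matter of projecting that joint's 3D position.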
If that sounds complicated, don't worry about it. That's what the openFrameworks and Processing addons are for.