PoCoMo (see http://fluid.media.mit.edu/people/roy/media/PoCoMo-cam-ready-optimized.pdf) consists of two mobile projector-camera systems with computer vision algorithms that support the behaviours of characters projected onto the environment. The characters are guided by hand movements and can respond to one another, simulating life-like agents. Users hold micro projector-camera devices to project animated characters on a wall, and the characters recognize and interact with each other. By extracting visual features from the environment, the algorithm can operate within the limited resources of a mobile device.

The system supports games involving social scenarios of relating and exchanging between two co-located users. Each character is built from component parts with separate articulation, and plays different animation sequences based on its proximity to other characters. The projected characters can respond to one another's presence and orientation, acknowledge each other, and trigger gestures of friendship such as shaking hands. Characters can also leave presents for one another. Each character has an identity represented by the color of its markers.

The detection algorithm scans the image, applying a threshold and extracting the contours of the figures. It is implemented in C++ and compiled to a native library, while the UI of the application is implemented in Java using the Android API. In future work, the authors (Shilkrot, Hunter, and Maes from the MIT Media Lab) plan to integrate markers with the projected content and migrate the application to devices with wider fields of view.
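The paper does not publish its detection code, but the threshold-and-scan step it describes might look roughly like the following minimal C++ sketch. Everything here (the `detect_marker` name, the grayscale input, and reducing contour extraction to a bounding box of above-threshold pixels) is my simplification for illustration, not the authors' implementation:

```cpp
#include <cstdint>
#include <vector>
#include <algorithm>

// Hypothetical simplified marker detector: threshold a grayscale frame and
// return the bounding box of the bright region, standing in for the paper's
// contour extraction of a projected character's marker.
struct Box {
    int x0, y0, x1, y1;  // inclusive pixel bounds of the detected region
    bool valid;          // false if no pixel exceeded the threshold
};

Box detect_marker(const std::vector<uint8_t>& img, int w, int h,
                  uint8_t thresh) {
    Box b{w, h, -1, -1, false};
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            if (img[y * w + x] >= thresh) {  // pixel belongs to the marker
                b.x0 = std::min(b.x0, x);
                b.y0 = std::min(b.y0, y);
                b.x1 = std::max(b.x1, x);
                b.y1 = std::max(b.y1, y);
                b.valid = true;
            }
        }
    }
    return b;
}
```

In the real system the extracted contours would additionally carry color information, since each character's identity is encoded in the color of its markers, and the routine would be compiled to a native library called from the Java/Android UI.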
Saturday, March 31, 2012