With Move Mirror, we’re showing how computer vision techniques like pose estimation can be made available to anyone with a computer and a webcam. We also wanted to make machine learning more accessible to coders and makers by bringing pose estimation into the browser—hopefully inspiring them to experiment with this technology.
To build this experiment, we used PoseNet, a model that can detect human figures in images and videos by identifying where key body joints are. Move Mirror takes the input from your camera feed and matches it against a database of more than 80,000 images to find the best match. It’s powered by TensorFlow.js—a library that runs machine learning models on-device, in your browser—which means the pose estimation happens directly in the browser, and your images are not being stored or sent to a server. For a deep dive into how we built this experiment, check out this Medium post.
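To give a rough sense of how matching a live pose against a database can work, here is a minimal sketch: flatten each pose’s keypoints into a vector, normalize it, and pick the database entry with the highest cosine similarity. The keypoint data, entry IDs, and the brute-force search here are illustrative assumptions, not the actual Move Mirror implementation (which is described in the Medium post):

```javascript
// Flatten keypoint {x, y} pairs into one vector and L2-normalize it,
// so matching is invariant to the pose's overall scale.
function toNormalizedVector(keypoints) {
  const v = keypoints.flatMap(k => [k.x, k.y]);
  const norm = Math.hypot(...v);
  return v.map(x => x / norm);
}

// Cosine similarity of two normalized vectors is just their dot product.
function cosineSimilarity(a, b) {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

// Linear scan over the database for the most similar pose.
// (A real system with 80,000+ images would use a faster index.)
function bestMatch(queryKeypoints, database) {
  const q = toNormalizedVector(queryKeypoints);
  let best = null;
  let bestScore = -Infinity;
  for (const entry of database) {
    const score = cosineSimilarity(q, toNormalizedVector(entry.keypoints));
    if (score > bestScore) {
      bestScore = score;
      best = entry;
    }
  }
  return best;
}

// Tiny illustrative database of two poses (three keypoints each).
const database = [
  { id: 'arms-up',   keypoints: [{ x: 0.5, y: 0.2 }, { x: 0.3, y: 0.1 }, { x: 0.7, y: 0.1 }] },
  { id: 'arms-down', keypoints: [{ x: 0.5, y: 0.2 }, { x: 0.3, y: 0.6 }, { x: 0.7, y: 0.6 }] },
];

// A query pose close to 'arms-up' should match it.
const query = [{ x: 0.52, y: 0.22 }, { x: 0.31, y: 0.12 }, { x: 0.69, y: 0.11 }];
console.log(bestMatch(query, database).id); // 'arms-up'
```

In practice the keypoints would come from PoseNet’s output for each camera frame, and the matching would run entirely in the browser alongside the model.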
We hope you’ll play around with Move Mirror and share your experience by making a GIF. Try it out now at g.co/movemirror.