After presenting a mockup of our idea last class, a few colleagues expressed concerns about standing behind the screen and not actually seeing themselves. Elena and I considered having users stand in front of the screen instead, but that wouldn't really make sense for an x-ray experience. After a moment of panic, Julie thankfully suggested adding a vertical sliding scanner effect before users appear on the screen as skeletons. Elena and I thought it was a genius idea, but not realistically doable with our fabrication skills.
After two meetings of brainstorming and a long weekend of passing ideas back and forth, we decided to add a gate that lights up when the user steps inside it; then, when they take one step out of the gate, they appear on the screen as a skeleton.
As for our progress so far, we got the Kinect to work with p5.js and it's surprisingly accurate, but we're still figuring out how to make the skeleton appear.
Our plan B is to work with ml5 in case the Kinect gets tricky; so far we've gotten the webcam to detect the face, torso, and hips.
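One way we might turn those detected body points into a drawable skeleton is to connect pairs of joints with lines. This is just a sketch of that idea, assuming ml5's poseNet-style keypoints (objects with a `part` name, a `score`, and a `position`); the helper function and the bone list are our own placeholders, not anything from ml5 itself.

```javascript
// Hypothetical helper: turn ml5 poseNet-style keypoints into "bone"
// segments we could draw as the skeleton overlay in a p5.js sketch.
// Keypoint names follow poseNet's convention (nose, leftShoulder, ...).

// Pairs of joints that form the torso bones we care about so far.
const BONES = [
  ["leftShoulder", "rightShoulder"],
  ["leftShoulder", "leftHip"],
  ["rightShoulder", "rightHip"],
  ["leftHip", "rightHip"],
];

// Given a pose's keypoints, return only the segments whose endpoints
// were both detected confidently enough to draw.
function skeletonSegments(keypoints, minScore = 0.5) {
  const byName = {};
  for (const kp of keypoints) byName[kp.part] = kp;
  const segments = [];
  for (const [a, b] of BONES) {
    const ka = byName[a];
    const kb = byName[b];
    if (ka && kb && ka.score >= minScore && kb.score >= minScore) {
      segments.push({ from: ka.position, to: kb.position });
    }
  }
  return segments;
}

// Inside p5's draw() loop we could then do something like:
//   for (const s of skeletonSegments(pose.keypoints)) {
//     line(s.from.x, s.from.y, s.to.x, s.to.y);
//   }
```

The confidence filter matters because poseNet still reports a guess for joints it can't really see, and drawing those would make the skeleton jitter around.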
This weekend, we'll finish the code using ml5 if the Kinect doesn't work with our idea, and start on the physical computing (p. comp.) part of the project, which is introducing the buttons.
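For the button side, one minimal sketch of the software half, assuming the Arduino sends a label per press over serial (e.g. "hungry" or "full" followed by a newline, read in p5 through a serial callback): the message format, constant names, and state handling here are all our own assumptions, not a finished design.

```javascript
// Hypothetical button handling: the Arduino is assumed to send one
// line per press over serial ("hungry\n" or "full\n"). This function
// would run in the serial data callback and return the sketch's new
// overlay state.

const OVERLAY = { NONE: "none", HUNGRY: "hungry", FULL: "full" };

function handleButtonMessage(state, message) {
  const label = message.trim().toLowerCase();
  if (label === "hungry") return OVERLAY.HUNGRY; // thought bubble near skull
  if (label === "full") return OVERLAY.FULL;     // food image inside skeleton
  return state; // ignore noise/unknown messages, keep the current overlay
}
```

Keeping this as a tiny pure function means we can swap the serial library (or even fake presses from the keyboard while testing) without touching the drawing code.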
We tweaked the idea of the buttons a little bit as well. Elena suggested it would be cooler to add speech bubbles of the user's thoughts, and I thought this was an amazing idea. So now we'll have two buttons, labeled "hungry" and "full": when the user presses "hungry", a speech bubble appears next to their skull showing them thinking of food, and when "full" is pressed, a picture of food appears inside their skeleton.
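To make those overlays stick to the body, we could anchor them to detected keypoints: the thought bubble offset from the head, and the food image centered between the hips (roughly the stomach). This is a sketch under those assumptions; the image names, keypoint names, and pixel offsets are placeholders we'd tune later.

```javascript
// Hypothetical overlay placement, assuming poseNet-style keypoints.
// "hungry" anchors a thought bubble beside the head (nose keypoint);
// "full" anchors a food image between the hips. Offsets are guesses.

function overlayPlacement(state, points) {
  // points: { nose: {x, y}, leftHip: {x, y}, rightHip: {x, y} }
  if (state === "hungry" && points.nose) {
    // bubble floats up and to the right of the skull
    return { image: "thought-bubble.png", x: points.nose.x + 80, y: points.nose.y - 80 };
  }
  if (state === "full" && points.leftHip && points.rightHip) {
    // food sits centered between the hips, a bit above them
    const midX = (points.leftHip.x + points.rightHip.x) / 2;
    const midY = (points.leftHip.y + points.rightHip.y) / 2;
    return { image: "food.png", x: midX, y: midY - 60 };
  }
  return null; // no button pressed yet, or the joints weren't detected
}
```

In the p5 draw loop, a non-null result would just become an `image()` call at the returned coordinates, so the bubble and food follow the person as they move.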