Dina Khalil
Interactive X-Ray!
Updated: Jun 19, 2021
The What:
A collaboration with my best friend and fellow ITPer, Elena Glazkova.
The user's motion is projected as a skeleton, and they get to choose whether they're hungry or full with the click of a button (well, two buttons).
The How:
Tools: Kinect 2, Kinectron, a Windows Laptop, Projector, Arduino, LED Strips.
Skills: Coding, Patience, Physical Computing, Soldering.
Time: December 2019
In Detail:
We used a Kinect 2 (a motion-sensing input device) and Kinectron (an open-source tool made of two components: an Electron application that broadcasts Kinect data over a peer connection, and a client-side API that brings real-time motion capture data into the browser), along with the p5.js library and an Arduino, to build this project.
The final version of the code was polished and thoroughly commented by Lisa Jamhoury, based on our project, and published in the official Kinectron repository.
GitHub repository with our final code
Browser version of the code in p5.js
* Please note that the code runs more efficiently if you download all the files and run the sketch locally on your computer using a local host.
Development Process:
So, in order to achieve what we want, we receive data from the Kinect through the Kinectron server. The Kinect detects 25 main joints of the human body in front of it, and Kinectron normalizes the data so that the body fits into the p5.js canvas. We then draw the bones of the skeleton, assigning ‘x’ and ‘y’ positions to the corresponding joints. We also have a serial communication part in our code: the physical part of the project includes two buttons controlled by an Arduino Nano, a “hungry” button and a “full” button.
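A rough sketch of that pipeline in p5.js is below. The Kinectron calls and the depthX/depthY joint properties follow the Kinectron client examples, and the IP address is a placeholder; the exact final code lives in the repository linked above.

```javascript
// A sketch of the data flow, not our exact code (see the GitHub repo above).
// Assumes the Kinectron client library is loaded alongside p5.js.
let kinectron;

function setup() {
  createCanvas(windowWidth, windowHeight);
  background(0);

  // Connect to the Kinectron app running on the Windows laptop;
  // replace the placeholder with the IP address the app displays.
  kinectron = new Kinectron("192.168.0.xxx");
  kinectron.makeConnection();

  // Ask for tracked-body data; bodyTracked runs on every frame of skeleton data.
  kinectron.startTrackedBodies(bodyTracked);
}

function bodyTracked(body) {
  background(0);

  // The Kinect 2 reports 25 joints per body. Kinectron normalizes them,
  // so we scale each joint into canvas coordinates before drawing.
  for (let i = 0; i < body.joints.length; i++) {
    const joint = body.joints[i];
    const x = joint.depthX * width;
    const y = joint.depthY * height;
    ellipse(x, y, 10, 10);
    // In the real project a bone image is drawn between joint pairs here
    // instead of an ellipse (see placeBone()/rotateBone() in the repo).
  }
}
```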

The yellow button indicates that the user is hungry, and the green button means they're full.
If the user chooses the hungry option, non-edible food pops up in their head (a thought bubble), and if they choose full, food appears inside of them.
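Roughly, the browser sketch just listens for the button state over serial and switches what gets drawn. Here is a hedged sketch using the p5.serialport library; the port name, the single-character message format, and the variable names are illustrative rather than our exact protocol.

```javascript
// Illustrative sketch of the button logic, assuming the Arduino Nano sends a
// single character per press: 'h' for hungry, 'f' for full.
let serial;        // p5.serialport object
let state = null;  // "hungry" or "full"

function setup() {
  createCanvas(windowWidth, windowHeight);
  serial = new p5.SerialPort();
  serial.open("/dev/tty.usbserial-XXXX"); // hypothetical port name
  serial.on("data", gotData);
}

function gotData() {
  const incoming = serial.readLine().trim();
  if (incoming === "h") state = "hungry";
  if (incoming === "f") state = "full";
}

function draw() {
  // In the real project this happens inside the body-tracking callback;
  // here we only show how the state switches what gets rendered.
  if (state === "hungry") {
    // draw a thought bubble of non-edible food above the head joint
  } else if (state === "full") {
    // draw food images inside the skeleton's torso
  }
}
```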
Closer Look at the Food:
What we Learned:
🦴 It’s crucial to figure out the right scaling of the skeleton from the start, since it influences both the bones’ placement and their rotation.
🦴 It is more efficient to work with recorded Kinectron data while adjusting the skeleton, and to run the sketch locally at all stages.
🦴 We used cameraX and cameraY coordinates, the p5.js createVector() and translate() functions, and an offset JavaScript method to place the bones correctly. We got tons of help writing the placeBone() and rotateBone() functions from Elena's ICM (see the simplified sketch after this list).
🦴 The amount of transparency in the bone images being drawn MATTERS for the placement’s and the rotation’s accuracy. The less transparency, the better.
🦴 The Kinect’s position, the lighting of the environment, obstacles and foreign items such as chairs and glass doors, and lack of space all MATTER.
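For illustration, here is the kind of translate/rotate math a placeBone()-style function does. This is a simplified sketch, not Lisa's polished version, and the image offsets are made-up numbers; the real implementation is in the Kinectron repository.

```javascript
// Simplified idea behind placing a bone image between two joints: translate to
// the first joint, rotate toward the second, then draw the image along that
// direction. Names and the image offsets are illustrative.
function placeBone(img, x1, y1, x2, y2) {
  const bone = createVector(x2 - x1, y2 - y1);
  const angle = bone.heading(); // rotation of the bone (radians)
  const len = bone.mag();       // distance between the two joints

  push();
  translate(x1, y1); // move the origin to the first joint
  rotate(angle);     // point the x-axis at the second joint
  // Stretch the bone image to span the joint-to-joint distance. Extra
  // transparent pixels around the bone throw this placement off, which is
  // why low-transparency images behaved better for us.
  image(img, 0, -len * 0.15, len, len * 0.3);
  pop();
}
```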
Building the Installation
We also built a physical part for our experience — a “scanner” in front of the X-Ray that “scans” users (of course, it’s just an installation to make the experience feel more complete).
All the parts of the installation needed to hold together in a very stable manner and at the same time stay very flexible, since there is electricity involved and we'd have to re-arrange and re-assemble all the parts and components many times. The initial idea of the scanner looked like this:
So: two hula hoops, a lot of LEDs, some kind of ‘poles’, and a lot to figure out, most importantly how to get it all together without looking chaotic. I was lucky to get my fabrication professor Ben Light's recommendation just in time. Our salvation was PVC pipes.


Special thanks to this project for bringing Elena and me together (forever 🤞🏼🤞🏼).
Special thanks to Lisa Jamhoury!!