C3 Pilot Program

Wow, I have not posted in WAY too long.

I have done so much, and not documented it on here at all — but that will change! I’ll be putting stuff up more and more as I get it looking good.

To start, here’s a video of the DIY activity that I developed as part of the Collect, Construct, Change (C3) curriculum at the New York Hall of Science!


I’ll post up way more about this soon! Including… KIT INFORMATION!


For my final openFrameworks project in Zach Lieberman’s setPixel course at Parsons MFA DT, I worked with Clay Ewing and Joon Moon on a public-space computer vision projection project.  We undertook the challenge of “seeing” the infamous movable cube at Astor Place in NoHo, NYC.  We devised a system that used a hacked PS3 Eye camera and cleverly diffused infrared LEDs to “see” the cube at night.  The idea was to track it with openCV, and then project onto each surface of the cube as it was rotated in real time.  The following videos (and presentation) document the work we’ve accomplished so far.

[pdf-ppt-viewer href="http://jmsaavedra.com/weblog/wp-content/uploads/2009/12/ofxAlamoPrese_web.pdf" width="500" height="400"]

Here is a demo vid of the app we wrote in OF for the cube.  You can see the mouse “drawing” (in green) the specific areas to be CV’d.  This way we can ignore the extraneous sources of IR light (headlights, streetlights, etc) and only track the points we want to track.
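The filtering idea boils down to a point-in-polygon test: a tracked blob only counts if its centroid lands inside one of the drawn regions. Here’s a minimal standalone sketch of that logic in plain C++ (no openFrameworks — the `Point`, region, and blob names are stand-ins for the real ofPolyline and openCV blob types we used):

```cpp
#include <vector>
#include <cstddef>

struct Point { float x, y; };

// Classic ray-casting point-in-polygon test (same idea as ofPolyline::inside).
bool inside(const std::vector<Point>& poly, const Point& p) {
    bool in = false;
    for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        if (((poly[i].y > p.y) != (poly[j].y > p.y)) &&
            (p.x < (poly[j].x - poly[i].x) * (p.y - poly[i].y) /
                       (poly[j].y - poly[i].y) + poly[i].x))
            in = !in;
    }
    return in;
}

// Keep only blob centroids that fall inside any user-drawn region,
// discarding stray IR sources (headlights, streetlights, etc).
std::vector<Point> filterBlobs(const std::vector<std::vector<Point>>& regions,
                               const std::vector<Point>& centroids) {
    std::vector<Point> kept;
    for (const auto& c : centroids) {
        for (const auto& r : regions) {
            if (inside(r, c)) { kept.push_back(c); break; }
        }
    }
    return kept;
}
```

So if you draw a 10×10 square region, a blob at (5, 5) survives the filter and one at (20, 5) gets thrown out — which is exactly what lets us ignore everything but the cube.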


Here is documentation from our second attempt at tracking the cube with the system. It was largely a failure due to generator issues and the IR lights not being bright enough.  Zach has recommended we use IR flood lights and reflective tape on the balls for the next attempt.  Special thanks to Jen Cotton for documenting the night and then editing this sweet vid.


Here is a demonstration of how nicely styrofoam balls diffuse LED light, even with extremely directional superbright LEDs.  Thanks to GRL for the LED Throwie precedent, which we modified slightly with this diffusion and IR light.


Thanks to Zach Lieberman for the amazing course (yet again), Jen Cotton, Zach Gage, and Graffiti Research Lab.  We will return to this project in the spring when it is warmer — so look out.

SOBEaR v01

I have finished my first prototype of SOBEaR, the robot bartender. SOBEaR is a robot friend for anyone who does not know their own limits, or has problems controlling themselves.

SOBEaR has an alcohol sensor mounted under his chin: the user presses a button inside his right foot, breathes into SOBEaR’s face, and then watches their alcohol consumption level displayed on the color LED column in SOBEaR’s chest.
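Driving the LED column is just a clamped linear map from the raw sensor reading to a number of lit LEDs. Here’s a hedged sketch in plain C++ — the sensor range and the 10-LED column height are my illustrative guesses, not the actual values from SOBEaR’s firmware:

```cpp
// Hypothetical calibration values -- a real build would measure these
// from the actual alcohol sensor.
const int SENSOR_MIN = 100;   // baseline analog reading, no alcohol
const int SENSOR_MAX = 800;   // fully saturated reading
const int NUM_LEDS   = 10;    // LEDs in the chest column

// How many LEDs to light (0..NUM_LEDS) for a given analog reading,
// clamped at both ends so noise can't over/underflow the column.
int ledsToLight(int reading) {
    if (reading <= SENSOR_MIN) return 0;
    if (reading >= SENSOR_MAX) return NUM_LEDS;
    return (reading - SENSOR_MIN) * NUM_LEDS / (SENSOR_MAX - SENSOR_MIN);
}
```

A mid-range reading of 450 would light 5 of the 10 LEDs; anything at or below the baseline lights none.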

Following the sobriety test, SOBEaR immediately pours a drink, a ratio of alcohol to mixer (OJ, cranberry, tonic, cola, etc.) appropriate for the user’s current state.
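The pour decision follows directly from that reading: the drunker you are, the weaker your drink. A sketch of the logic in plain C++ — the level thresholds and servo hold times here are hypothetical, not SOBEaR’s real values:

```cpp
// Servo hold times (ms) for tipping each bottle -- illustrative only.
struct Pour {
    int alcoholMs;
    int mixerMs;
};

// Choose an alcohol/mixer ratio from the sobriety level (0..10,
// the same scale shown on the chest LED column).
Pour chooseDrink(int level) {
    if (level <= 3) return {2000, 3000};  // sober: a normal drink
    if (level <= 6) return {1000, 4000};  // tipsy: go easy on them
    return {0, 5000};                      // drunk: mixer only, friend
}
```

So a user who lights up 8 of 10 LEDs gets a glass of straight mixer, which is the whole point of having a robot friend who knows your limits.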

As you can see in the video, I still need to play with the angles for each pour. Can’t have the bear pouring the bottle straight down into your glass. Wouldn’t be very classy to just spill liquor or mixer all over the place. So I’ll be fixing that before presenting this project, as well as adding a coaster for the user to place their glass under. It will have an LED indicator light as well…