about 2 weeks ago, i got the chance to see one of my teachers, zachary lieberman, perform at NYU. the piece was called Messa di Voce, which in italian means “placing the voice”. it is essentially a real-time interactive visualizer for the human voice: IR light and cameras track the performers’ locations on stage, and two projectors, side by side, display the visualizations of their respective voices. here’s some video that i took —
thanks to stephanie for letting me borrow her camera!
today, zach lieberman surprised our class (audio visual systems + machines) with a special appearance by his pal, daito manabe. daito has reached quasi-YouTube fame with a couple of videos. here’s one.
here is a video i took of daito shocking my friend nick’s arm and face at school.
and an image of me and daito!!
but this is just one project he is involved in. while i have an affinity for electrodes on the human body, i’m not so interested in controlling or manipulating the body itself. i’m more interested in taking data from the body (via EKG, EEG, GSR, EMG) and applying it to audio and video – but i’ll post more about that later (my ars collab will be about this; maybe thesis material also?).
the project of daito’s that I am MORE interested in currently is the one that resembles my work of the last 8 weeks. this one:
each cube has a microphone and a PIC chip. the PIC listens for specific frequencies of tones, and also for patterns of notes being played. when it hears a specific note it might blink a specific color; when it hears a specific pattern of notes, it might glow a specific color for a length of time. daito has handed these out at shows before, and then during the show played a pattern of notes. the cubes then all respond from around the room, to great effect. in essence, it is serial communication through sound. brilliant. I’ll post a vid of daito showing me the insides of the cubes, as well as zach lieberman trying on daito’s electrode stimulus system, real soon.
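i don’t know what daito’s firmware actually does, but the frequency-listening half of the idea can be sketched with the Goertzel algorithm, which measures the power of a single frequency bin without a full FFT (cheap enough for a PIC-class chip). everything here is made up for illustration: the note table, the colors, and the threshold are guesses, not his values.

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Power of one target frequency in a block of samples (Goertzel)."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)   # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

# hypothetical note table: tone frequency (Hz) -> LED color
NOTE_COLORS = {440.0: "red", 880.0: "green", 1320.0: "blue"}

def detect_note(samples, sample_rate, threshold=1000.0):
    """Return the color for the loudest recognized tone, or None if
    nothing rises above the (arbitrary) noise threshold."""
    best = max(NOTE_COLORS, key=lambda f: goertzel_power(samples, sample_rate, f))
    if goertzel_power(samples, sample_rate, best) > threshold:
        return NOTE_COLORS[best]
    return None
```

pattern detection would then just be a small state machine over the stream of detected notes, which is exactly why “serial communication through sound” is a fair description.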
completed the first assignment for audio/visual systems and machines. we were given several images, and then created audio that we thought fit each image. here are mine —
concept: physically and functionally modular multi-input controllers with visual feedback and dynamic sensor data output. Manipulating video, sound, or anything digital is possible with LightBox, and when using more than one simultaneously, group interaction and collaboration are possible, as the controllers themselves are wirelessly networked. project post-mortem paper here.
Here is a not-so-revealing video of one mothercube and one daughtercube (all i had the time and money to build) being run through a theremin-emulator max patch. I have also written a sampler patch and a video controller in jitter; I will document these soon enough (making sure you can see my hands) and then post the patches.
***I apologize that you cannot see my hands, however they are controlling the pitch of the sound based on how close they are to the cubes — very similar to how a theremin works***
Essentially: frosted plexiglass cubes with IR range finders and a single button (a capacitance [touch] sensor). The sensors are powered and read by an arduino, and then processed in Max/MSP/Jitter; I used the Pduino firmware for that connection. What I was also hoping to achieve was an array of LEDs sensitive to the IR (you’ll see in this video that only the touch changes the color). I worked on using a TLC5940 LED driver run by a PIC16F88 microcontroller, but was never completely successful. In fact it was quite frustrating, to the point that i gave up on it for the moment.
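the theremin-style distance-to-pitch mapping is simple to sketch outside of Max. a Sharp-style IR range finder reads higher on the arduino’s 10-bit ADC as a hand gets closer, so mapping the reading onto a note range gives the effect in the video. all constants here (the usable ADC window, the MIDI note range) are illustrative guesses, not values from my actual patch:

```python
def adc_to_pitch(adc_value, lo=100, hi=600, low_midi=48, high_midi=84):
    """Map a 10-bit ADC reading from the IR range finder to a MIDI note:
    the closer the hand (higher reading), the higher the pitch."""
    adc_value = max(lo, min(hi, adc_value))   # clamp to the usable window
    t = (adc_value - lo) / (hi - lo)          # normalize to 0.0 .. 1.0
    return low_midi + t * (high_midi - low_midi)

def midi_to_hz(note):
    """Standard MIDI-to-frequency conversion (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)
```

in the real patch this mapping lives in Max/MSP, with the arduino just shipping raw sensor values over serial via the firmware.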
Here are some images of the process:
Left to do now: finish the wireless link between the two cubes. Then more patching.