Max MSP Jitter
Referring to the previous blog entry, the idea was to test Modul8 to see if it could be used for our project.
We had a very useful lecture on Modul8 on Monday, which covered MIDI mapping, something I'd originally had concerns about. Everything looked promising for going ahead with this application for our project, with one proviso: a possible negative regarding just how many videos Modul8 could play back at any one time.
I created a new project and started adding small 10-second MP4 movie files to the Media Set until I had filled one set completely, that is, 16 video files in all. I then assigned one video file to each of the layers in groups A and B. I disabled the A-B grouping and transition controls so that all 10 layers/videos would display simultaneously in the preview window.
However, when I selected a new group and added video files to the next layer group, I realised that only one group could be live at a time. This meant it would be impossible to have the planned 50 videos available to run live at any moment.
If the idea had been to have just 10 videos running at any time, the project could have worked using Modul8. As this was not the case, I decided we would have to look at alternatives such as Max MSP and develop our own MIDI-controlled VJing application.
Max MSP Jitter – First Look
I’ve looked at Max MSP Jitter before, for a first-year project, and although I decided at the time not to use it, I did put together a few patches to familiarise myself with how it functions.
A recent lecture from Liam re-introduced Max MSP Jitter and its uses for non-programmers, and although it will take some work, I do believe it will be possible to produce a working program.
I downloaded the Max MSP demo version from http://cycling74.com/downloads/, which is free to use for 30 days, and at the same time downloaded some of the patches Liam demoed: Computer Vision for Jitter, or cv.jit. This can be downloaded from http://jmpelletier.com/cvjit/ and includes a number of video tracking and video manipulation patches. I installed these as directed and soon had an example patch up and running. The patch I chose tracked movement using either the Mac's iSight webcam or a video I pre-loaded. The running patch produces coordinates relating to movement, seemingly identifying face and hand movements particularly well, both on the live view and in the video sequences I'd used.
Although these patches were interesting and offered possibilities for pre-loading the video files for our Human Orchestra concept, I still needed to link this to a MIDI keyboard.
I then searched through the Max objects for MIDI- and audio-related objects and came across ‘kslider’, which displays an on-screen keyboard with note and key-velocity information. Using this as a base, I looked through the list of MIDI objects and, with Lee's assistance, tested a number of them using a MIDI keyboard until we'd identified ‘notein’ as the best object to use. This picked up the key presses from the MIDI keyboard, reproduced them visually on the on-screen keyboard, and played a note through the AU MIDI interface.
However, this solution wasn't perfect: the key mapping was slightly out of sequence, and the tone sounded both on key press and again on key release (quite possibly because many keyboards signal a release as a note-on with velocity 0 rather than a true note-off, and ‘notein’ reports both events).
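To pin down the behaviour we want from the patch, here is a hypothetical sketch in Python (not part of the Max patch itself; the event format is an assumption) of the filtering that ‘notein’'s output needs so a tone fires only once per key press:

```python
# Hypothetical sketch: filter raw MIDI key events so a tone triggers
# only once per key press. Many keyboards signal key release as a
# note-on with velocity 0 rather than a true note-off, so a naive
# patch plays the tone on both press and release.

def filter_note_ons(events):
    """Keep only genuine key presses from (type, note, velocity) tuples."""
    pressed = set()
    triggers = []
    for kind, note, velocity in events:
        if kind == "note_on" and velocity > 0:
            if note not in pressed:        # ignore repeats while held
                pressed.add(note)
                triggers.append((note, velocity))
        else:                              # note_off, or note-on with vel 0
            pressed.discard(note)
    return triggers

# Middle C (60) pressed then released (release arrives as a vel-0 note-on),
# then E (64) pressed and released with a true note-off.
events = [("note_on", 60, 90), ("note_on", 60, 0),
          ("note_on", 64, 75), ("note_off", 64, 0)]
print(filter_note_ons(events))  # → [(60, 90), (64, 75)]
```

In Max terms this amounts to gating ‘notein’'s velocity outlet so only values above zero trigger playback.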
I deemed this a good start and have booked out a MIDI keyboard to continue working on this over the next few days at home, hopefully finding a way to link the video and keyboard patches together into a working system.
Performance Video – Microsoft Kinect
We may, however, be modifying our concept to include Microsoft's Kinect tracking system. Following a quick introduction to the Kinect from Liam today, we've decided we'd like to drop the MIDI keyboard as the main input device and look at using the Kinect instead. This would make ours a true performance-based project, with videos reacting to a person's or persons' movements.
The Performance Video
I’ve also had some ideas about the Performance Video itself which I need to put before the group and seek a consensus on what is the best way forward for this project.
What I now visualise is a looping video of the entire choir moving slightly to and fro, with the individual videos layered on top, each becoming active when an input is received from either the MIDI device or the Kinect. Think of tiered rows of the choir: one member moves independently and sings a tone. When a chord is played, or more than one movement is detected through the Kinect, more members of the choir move and sing a chord.
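To make the chord idea concrete, here is a hypothetical sketch (the choir size, note range, and layer numbering are all assumptions, not decisions the group has made) of how simultaneous inputs could map to active choir-member layers:

```python
# Hypothetical sketch: map currently held notes (or detected movements)
# to choir-member overlay layers, on top of one looping background layer.

CHOIR_SIZE = 16  # assumed number of individual choir-member videos

def active_layers(held_notes, base_note=48):
    """Return the overlay layer index for each held note.

    base_note is the lowest key of the assumed keyboard range; each
    semitone above it wakes the next choir member, wrapping around
    so any key always maps to someone in the choir.
    """
    return sorted({(note - base_note) % CHOIR_SIZE for note in held_notes})

# A single key press wakes one singer; a C-major triad wakes three.
print(active_layers([48]))          # → [0]
print(active_layers([48, 52, 55]))  # → [0, 4, 7]
```

A Kinect version would feed detected movement regions in where the held notes go, but the layer-activation logic stays the same.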
Project Blog Entry Links
- Performance Video – Conclusion
- Performance Video – Wiimote MAC
- Performance Video – VJ’ing using Quartz Composer
- Performance Video – Kinnect on MAC and PC
- Performance Video – MAX MSP Jitter
- Performance Video – Modul8
- Performance Video – The Human Orchestra
Word Count 904