Ian F. Hunt

Cinematographer and Filmmaker

Performance Video – MAX MSP Jitter

Testing Modul8

Modul8 A - B Group Layers

Referring to the previous blog entry, the idea was to test Modul8 to see if it could be used for our project.

We had a very useful lecture on Modul8 on Monday, which included MIDI mapping, something I had some concerns about originally. Everything looked promising for going ahead and using this application for our project, although there was one proviso: a possible negative regarding just how many videos Modul8 could play back at any one time.

I created a new project and started to add small 10-second MP4 movie files to the Media Set until I had filled one set completely, that is 16 video files in all. I then assigned one video file to each of the layers in groups A and B. I disabled the A-B grouping and transition controls so that all 10 layers/videos would display simultaneously in the preview window.

However, when I selected a new group and added video files to the next layer group, I realised that only one group could be live at a time. This meant it would seem impossible to have the planned 50 videos available to run live at any time.

Modul8 Media Set

If the idea had been to have just the 10 videos running at any time, the project could have worked using Modul8. As this was not the case, I decided we would have to look at alternatives such as MAX MSP and develop our own MIDI-controlled VJing application.

MAX MSP Jitter – First Look

I’ve looked at MAX MSP Jitter before, for a first-year project, and although I decided at the time not to use it, I did put together a few patches just to familiarise myself with how it functions.

A recent lecture from Liam re-introduced MAX MSP Jitter and its uses for non-programmers, and although it will take some work, I do believe it will be possible to produce a working program.

MAX MSP - cv.jit.faces

I downloaded the MAX MSP demo version from http://cycling74.com/downloads/, which is free to use for 30 days, and at the same time downloaded some of the patches Liam demoed: Computer Vision for Jitter, or cv.jit. This can be downloaded from http://jmpelletier.com/cvjit/ and includes a number of video tracking and video manipulation patches. I installed these as directed and soon had an example patch up and running. The patch I chose tracked movement using either the Mac’s iSight webcam or a video I pre-loaded. The running patch produces coordinates relating to movement, identifying face and hand movements particularly well on both the live view and the video sequences I used.
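As an aside, for readers without Max installed, the behaviour of a face-tracking patch like cv.jit.faces can be approximated in Python with OpenCV. This is only a rough analogue of the cv.jit patch, not a translation of it, and it assumes the opencv-python package:

```python
# A minimal sketch of webcam face tracking, analogous to cv.jit.faces.
# Python/OpenCV rather than Max; shown only to illustrate the idea of
# producing coordinates from detected faces/movement.
import cv2

# Haar cascade shipped with OpenCV for frontal-face detection
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # 0 = built-in webcam (e.g. the iSight)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each detection is (x, y, w, h) -- the kind of coordinates the patch reports
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        print("face at", x, y, w, h)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```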

Although these patches were interesting and offered possibilities for pre-loading the video files for our Human Orchestra concept, I still needed to link this to a MIDI keyboard.

I then searched through the MAX objects looking for MIDI- and audio-related objects and came across ‘kslider’, which displays an on-screen keyboard with note and key-velocity information. Using this as a base, I looked through the list of MIDI objects and, with Lee’s assistance, tested a number of them with a MIDI keyboard until we’d identified ‘notein’ as the best object to use. This picked up the key presses from the MIDI keyboard, reproduced them visually on the on-screen keyboard, and played a note through the AU MIDI interface.

However, this solution wasn’t perfect: the key mapping was slightly out of sequence, and the tone was played both on key press and again as the key was released.
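The double trigger makes sense once you remember that a MIDI keyboard sends a note-on when a key goes down and a note-off (often encoded as a note-on with velocity 0) when it comes back up; in Max the usual fix is to filter out the releases (the stripnote object does this, if I recall correctly). Here is a minimal sketch of the same filtering in Python with the mido library, with a hypothetical port name:

```python
# A minimal sketch, in Python/mido rather than Max, of filtering MIDI
# key releases so a note only triggers once per press.
import mido

# "USB MIDI Keyboard" is a hypothetical port name -- list real ones
# with mido.get_input_names()
with mido.open_input("USB MIDI Keyboard") as port:
    for msg in port:
        # Only genuine key presses: note_on with non-zero velocity
        if msg.type == "note_on" and msg.velocity > 0:
            print("trigger tone/video for note", msg.note)
        # A note_off (or note_on with velocity 0) is the key release;
        # ignoring it prevents the second, unwanted trigger
```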

MAX MSP kslider

I deemed this a good start and have booked out a MIDI keyboard to continue working on this over the next few days at home, hopefully finding a method of linking the video and keyboard patches together to produce a working system.

Performance Video – Microsoft Kinect

Microsoft Kinect

We may, however, be modifying our concept to include Microsoft’s Kinect tracking system. Following a quick introduction to the Kinect by Liam today, we’ve decided we’d like to abandon the MIDI keyboard as the main input device for our concept and look at using the Kinect instead. This would make our concept a true performance-based project, with videos reacting to a person’s or persons’ movements.

The Performance Video

I’ve also had some ideas about the Performance Video itself, which I need to put before the group to seek a consensus on the best way forward for this project.

What I now visualise is a looping video of the entire choir swaying slightly to and fro, with the videos that become active layered on top whenever an input from either the MIDI device or the Kinect is received. Think of tiered rows of the choir: one member moves independently and sings a tone; when a chord is played, or more than one movement is detected through the Kinect, more members of the choir move and sing a chord.
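To make the idea concrete, here is a minimal sketch, purely as a thought experiment, of how notes (or Kinect-detected movers) could map to individual choir videos, a chord simply being several active at once. The note numbers and file names are my own placeholders, not project assets:

```python
# Sketch of the note -> choir-member-video mapping idea.
active_notes = set()

# Each MIDI note (or detected mover) maps to one singer's clip (placeholders)
note_to_clip = {
    60: "singer_soprano_C.mp4",  # middle C
    64: "singer_alto_E.mp4",
    67: "singer_tenor_G.mp4",
}

def note_on(note):
    active_notes.add(note)
    refresh_overlays()

def note_off(note):
    active_notes.discard(note)
    refresh_overlays()

def refresh_overlays():
    # Every active note overlays its clip on the looping full-choir video;
    # three active notes = a chord = three singers moving and singing at once.
    clips = [note_to_clip[n] for n in active_notes if n in note_to_clip]
    print("overlay on choir loop:", clips)
```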

Project Blog Entry Links

  1. Performance Video – Conclusion
  2. Performance Video – Wiimote MAC
  3. Performance Video – VJ’ing using Quartz Composer
  4. Performance Video – Kinect on MAC and PC
  5. Performance Video – MAX MSP Jitter
  6. Performance Video – Modul8
  7. Performance Video – The Human Orchestra

 

Word Count 904

Head Tracking using a Laptop/Netbook Webcam

Webcam Tracking

Thursday 28th October 2010


Interactivity and its possible application in the Self Portrait project – Webcam Tracking

I thought it would be a good idea to search the internet for possible solutions for tracking head movements using something most people already have: a webcam.

From forums where other people were seeking a similar solution, I identified a number of free and shareware programs that process the image from a webcam.

The programs I found were:

1. Fake Webcam

2. WebCam Monitor 5.2

3. Willing Webcam

All three applications work in very similar ways; the main differences are in how they respond to the image from the webcam and in what actions they take, or can be set by the user to take.

Willing WebCam - Screenshot

The most suitable for my requirements would be Willing WebCam, which has extensive settings, including the ability to play a sound or load a program when the head, or indeed any movement, is detected.

However, on further investigation and after a great deal of experimentation, I came to the conclusion that the standard webcam is just not accurate enough and can be unsettled simply by changing light levels, in turn creating too many false tracking movements.
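To illustrate the problem, here is a minimal frame-differencing motion detector, sketched in Python with OpenCV (my own illustration, not one of the programs above; the trigger threshold is arbitrary). Because it thresholds the difference between consecutive frames, a room-wide lighting change alters almost every pixel and scores as “motion” just like a real head movement:

```python
# Minimal frame-differencing motion detection, showing the failure mode:
# a lighting change alters most pixels, so it registers as movement.
import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)                 # per-pixel change
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    changed = cv2.countNonZero(mask) / mask.size   # fraction of frame changed
    if changed > 0.02:  # arbitrary trigger level (an assumption)
        # Fires for a moving head -- but also when a cloud passes the window
        print(f"motion detected ({changed:.1%} of frame changed)")
    prev = gray
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```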

The only conclusion I could come to was that a specialist tracking camera would be required for accurate and reliable responses to movements. Ideally the camera would also have to work outside the visible spectrum, possibly in infra-red, in order to distinguish the subject from the background. Alternatively, the camera would have to be directed at a fixed, well-lit background, for example a white screen.

Returning to the original idea, and because of time constraints, I have decided to go with producing a film: a portrait of myself and the things and activities which interest me, combining still photographs with video content. It will follow a timeline, a chronology of my life from my earliest days, for which I have some photographs, to the current date.

Netbook Details

I used a netbook for the project: the Acer Aspire One 533 with the Atom N455 processor, running Windows 7 Starter. It comes with 1GB of memory, expandable to 2GB. Most importantly, this setup works with all the applications I tested, and Processing recognises its built-in webcam without my having to source additional drivers.

Processing

Webcam Tracking - Processing & JMyron Screenshot

Having recently been introduced to Processing during a lecture, I searched online for more examples of programs using the webcam to control video or track movement. I sourced a number of programs, including one group called “JMyron”; one of these tracked head/hand movement and converted it into mouse inputs. I could see several possibilities for controlling video or adding effects to a video.

There were restrictions on how this could be utilised. The most limiting was that the tracking of movement only worked at its best against a white background, and its response in open areas was unpredictable; in fact, it seemed to work best in the dark. However, as I had now decided on the design I was going to produce for my Self Portrait, I decided to save this for a future project and further research.
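For the record, a rough Python analogue of the movement-to-mouse idea (not the JMyron sketch itself; pyautogui and the movement threshold are my own assumptions) would find the centroid of the changed pixels and map it to screen coordinates:

```python
# Sketch: convert tracked motion into mouse input by mapping the
# centroid of changed pixels to the screen. Python analogue only.
import cv2
import numpy as np
import pyautogui

screen_w, screen_h = pyautogui.size()
cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = cv2.threshold(cv2.absdiff(gray, prev), 25, 255,
                         cv2.THRESH_BINARY)[1]
    ys, xs = np.nonzero(mask)          # coordinates of changed pixels
    if len(xs) > 500:                  # enough movement to trust (a guess)
        cx = xs.mean() / gray.shape[1]
        cy = ys.mean() / gray.shape[0]
        # Mirror x so moving your hand left moves the cursor left
        pyautogui.moveTo((1 - cx) * screen_w, cy * screen_h)
    prev = gray
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```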

Sorry about the quality of the video; I used a handheld mobile phone to record the screen output.
[youtube.com/watch?v=jmoXL95KImc]

Hand tracking using JMyron and Processing

Project Update

Microsoft Kinect

Since I originally wrote this blog entry, Microsoft’s Kinect has been released, along with software to write your own programs, which can be found here: http://kinectforwindows.org/
I’ve also looked at interfacing the Kinect to a MacBook Pro, details of which can be found in my Performance Video post.

Kinect Specification

Sensor

Colour and depth-sensing lenses
Voice microphone array (made up of 4 microphones)
Tilt motor for sensor adjustment

Field of View

Horizontal field of view: 57 degrees
Vertical field of view: 43 degrees
Physical tilt range: ± 27 degrees
Depth sensor range: 1.2m – 3.5m

Data Streams

320×240 16-bit depth @ 30 frames/sec
640×480 32-bit colour @ 30 frames/sec
16-bit audio @ 16 kHz

Skeletal Tracking System

Tracks up to 6 people, including 2 active players
Tracks 20 joints per active player
Ability to map active players to LIVE Avatars
