Clickformedia

Ian Hunt Digital Media Designer

Head Tracking using a Laptop/Netbook. Webcam Tracking

Thursday 28th October 2010


Interactivity and its possible application in the Self Portrait project – Webcam Tracking

I thought it would be a good idea to search the internet for possible solutions for tracking head movements using something most people already have: a webcam.

From forums where other people were seeking a similar solution, I identified a number of free or shareware programs that process the image from a webcam.

The programs I found were:

1. Fake Webcam

2. WebCam Monitor 5.2

3. Willing Webcam

All three applications work in very similar ways; the main differences are in how they respond to the image from the webcam and what action they take, or can be set by the user to take.

Willing WebCam - Screenshot

The most suitable for my requirements would be Willing Webcam, which has extensive settings, including the ability to play a sound or load a program when the head, or indeed any movement, is detected.

However, on further investigation and after a great deal of experimentation, I came to the conclusion that a standard webcam is just not accurate enough: it can be unsettled simply by changing light levels, which in turn creates too many false tracking movements.
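To illustrate why changing light causes false triggers, here is a rough sketch in Processing of the kind of simple frame differencing these webcam tools rely on. It is an illustration only, written against the current Processing video library, and the threshold values are assumptions rather than anything taken from the programs above.

import processing.video.*;

// Minimal frame-differencing motion detector (illustrative only).
// A global change in light level alters almost every pixel at once,
// which this test cannot tell apart from genuine movement.
Capture cam;
int[] previousFrame;
float pixelThreshold = 40;    // how different a pixel must be to count as "changed" (assumed value)
int motionThreshold = 2000;   // how many changed pixels count as "movement" (assumed value)

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  cam.loadPixels();
  image(cam, 0, 0);

  if (previousFrame == null || previousFrame.length != cam.pixels.length) {
    previousFrame = new int[cam.pixels.length];
  }

  int changedPixels = 0;
  for (int i = 0; i < cam.pixels.length; i++) {
    float diff = abs(brightness(cam.pixels[i]) - brightness(previousFrame[i]));
    if (diff > pixelThreshold) changedPixels++;
    previousFrame[i] = cam.pixels[i];
  }

  // A lamp switching on or a cloud passing changes most pixels at once,
  // so "movement" is reported even though nothing has actually moved.
  if (changedPixels > motionThreshold) {
    println("Movement detected: " + changedPixels + " pixels changed");
  }
}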

The only conclusion I could come to was that a specialist tracking camera would be required for accuracy and reliable responses to movements. The camera would also ideally have to work outside of the visible spectrum, possibly infra-red, in order to distinguish the subject from the background. Alternatively, the camera would have to be directed at a fixed, well-lit background, for example a white screen.

Returning to the original idea development, and because of time constraints, I have decided to go with the original idea of producing a film: a portrait of myself and of the things and activities which interest me, combining still photographs with video content. This will follow a timeline, a chronology of my life from my earliest days, for which I have some photographs, through to the current date.

Netbook Details

I used a netbook for the project: an Acer Aspire One 533 with the Atom N455 processor, running Windows 7 Starter. It comes with 1GB of memory, although this can be expanded to 2GB. Most importantly, this setup works with all the applications I tested, and Processing recognises its built-in webcam without having to source additional drivers.

Processing

Webcam Tracking - Processing & JMyron Screenshot

Having recently been introduced to Processing during a lecture, I searched online for more examples of programs using the webcam to control video or track movement. I sourced a number of programs, one group built around a library called “JMyron”, and one of these tracked head/hand movement and converted it into mouse input. I could see several possibilities for controlling video or adding effects to a video.

There were restrictions on how this could be utilised. The most limiting was that the tracking of movement only worked at its best against a white background, and its response in open areas was unpredictable; in fact it seemed to work best in the dark. However, as I had now decided on the design I was going to produce for my Self Portrait, I decided to save this for a future project and further research.

Sorry about the quality of the video; I used a handheld mobile phone to record the screen output.
[youtube.com/watch?v=jmoXL95KImc]

Hand tracking using JMyron and Processing
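The essence of that hand-tracking sketch is roughly as follows. This is not the original example but my own reconstruction from memory of the JMyron documentation, so the library calls (trackColor, globCenters and so on) and the tolerance values should be treated as assumptions to check against the JMyron examples. It tracks dark regions against a light background and nudges the system mouse pointer towards the largest one using the standard Java Robot class.

import JMyron.*;
import java.awt.Robot;
import java.awt.AWTException;
import java.awt.Dimension;
import java.awt.Toolkit;

// Rough reconstruction of a JMyron tracking-to-mouse sketch (not the original
// example); library calls and tolerance values are from memory and may need
// adjusting against the JMyron documentation.
JMyron m;
Robot robot;

void setup() {
  size(320, 240);
  m = new JMyron();
  m.start(width, height);       // open the webcam at the sketch size
  m.trackColor(0, 0, 0, 300);   // treat dark pixels as the target; works best against a white background
  m.minDensity(100);            // ignore tiny clusters of matching pixels
  m.findGlobs(1);               // group matching pixels into "globs"
  try {
    robot = new Robot();        // standard Java class that can move the system mouse pointer
  } catch (AWTException e) {
    e.printStackTrace();
  }
}

void draw() {
  m.update();                   // grab and analyse the latest camera frame
  loadPixels();
  arrayCopy(m.image(), pixels); // show the camera image in the sketch window
  updatePixels();

  int[][] centres = m.globCenters();
  if (centres.length > 0 && robot != null) {
    // Map the first glob centre from camera coordinates to screen coordinates
    // and move the mouse pointer there.
    Dimension screen = Toolkit.getDefaultToolkit().getScreenSize();
    int x = (int) map(centres[0][0], 0, width, 0, screen.width);
    int y = (int) map(centres[0][1], 0, height, 0, screen.height);
    robot.mouseMove(x, y);
  }
}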

Project Update

Microsoft Kinect

Since I originally wrote this blog entry, Microsoft’s Kinect has been released, and with it the software to write your own programs, which can be found here: http://kinectforwindows.org/
I’ve also looked at interfacing the Kinect to a MacBook Pro, details of which can be found here: Performance Video

Kinect Specification

Sensor

Colour and depth-sensing lenses
Voice microphone array (made up of 4 microphones)
Tilt motor for sensor adjustment

Field of View

Horizontal field of view: 57 degrees
Vertical field of view: 43 degrees
Physical tilt range: ± 27 degrees
Depth sensor range: 1.2m – 3.5m

Data Streams

320×240 16-bit depth @ 30 frames/sec
640×480 32-bit colour @ 30 frames/sec
16-bit audio @ 16 kHz
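As a rough sense of scale for those streams, the uncompressed data rates implied by the figures above work out as below. This is just arithmetic on the quoted numbers, not a statement about what the Kinect actually sends over its connection.

// Uncompressed data rates implied by the stream figures above (arithmetic only).
int depthBytesPerSecond  = 320 * 240 * 2 * 30;   // 16-bit depth = 2 bytes per pixel
int colourBytesPerSecond = 640 * 480 * 4 * 30;   // 32-bit colour = 4 bytes per pixel
int audioBytesPerSecond  = 16000 * 2;            // 16-bit samples at 16 kHz
println("Depth:  " + depthBytesPerSecond  + " bytes/sec (about 4.6 MB/s)");
println("Colour: " + colourBytesPerSecond + " bytes/sec (about 36.9 MB/s)");
println("Audio:  " + audioBytesPerSecond  + " bytes/sec (about 32 kB/s)");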

Skeletal Tracking System

Tracks up to 6 people, including 2 active players
Tracks 20 joints per active player
Ability to map active players to LIVE Avatars

Processing and Idea Development

Processing

Processing, or computer programming for the purposes of the course, essentially means Java programming.

I used examples from the website http://processing.org, from which the Processing application can be downloaded. This is an open-source application with several examples of Java programs, ranging from basic drawing to video processing, and it is available for both the Windows and Mac environments.
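To give a flavour of the basic drawing examples, here is a minimal sketch of my own in the same style (not one taken from the site): setup() runs once to create the window, and draw() then runs continuously, in this case drawing a circle that follows the mouse.

// A minimal Processing sketch in the style of the basic drawing examples:
// setup() runs once, then draw() runs repeatedly.
void setup() {
  size(400, 300);          // create the sketch window
  background(255);         // start with a white canvas
}

void draw() {
  noStroke();
  fill(0, 102, 153, 50);   // semi-transparent blue
  ellipse(mouseX, mouseY, 40, 40); // draw a circle wherever the mouse is
}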

Two alternative methods of developing programs, which use modules to put processes together, were also reviewed: MaxMSP and VVVV. Both use similar methods to design processes that manipulate video, in many ways similar to programming ladder networks as used in machine control systems design.

Idea Development

Following on from the morning’s Processing lecture, time was set aside to consider ideas for the personal portrait assignment.

I had the idea (although not a new one) of using one of the applications introduced this morning to motion-track a subject moving in front of a projection screen. As they passed from left to right, an image of myself would be displayed, and as they moved, either another image or the profile of the image would change, seemingly following their movement. Using different images would allow me to start, say, with a picture of myself at a young age and then substitute images of older versions of me as they moved. Another idea was to pixelate in the first image as they passed in front of the screen and pixelate out the final image as they moved past it. An alternative is to take a series of pictures of myself at different positions, corresponding to where I think the person will be standing, so that my face seems to turn towards them and, in effect, follow them. Another alternative is to take photographs at different angles so that they see a different profile depending upon where they are standing, effectively giving a 3D effect. Finally, the image could be a Maya-produced 3D model of myself (although this may be beyond my current knowledge of Maya).
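The core of the first idea, mapping a tracked horizontal position to a sequence of age-ordered photographs, might be sketched in Processing along the following lines. Here mouseX stands in for the tracked position of the passer-by, and the filenames (portrait0.jpg and so on) are hypothetical placeholders rather than real project files.

// Sketch of the idea: as the tracked position moves from left to right,
// swap between age-ordered portraits. mouseX stands in for the tracked
// position; the filenames are placeholders.
int numPortraits = 5;
PImage[] portraits = new PImage[numPortraits];

void setup() {
  size(800, 600);
  for (int i = 0; i < numPortraits; i++) {
    // portrait0.jpg would be the earliest photograph, portrait4.jpg the most recent
    portraits[i] = loadImage("portrait" + i + ".jpg");
  }
}

void draw() {
  // Map the horizontal position (0..width) to an index into the portrait array.
  int index = int(map(mouseX, 0, width, 0, numPortraits));
  index = constrain(index, 0, numPortraits - 1);
  image(portraits[index], 0, 0, width, height);
}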

