Ian F. Hunt

Cinematographer and Filmmaker


Performance Video – Wiimote Mac

Wiimote Mac

A recently acquired Wii games console offered the opportunity to investigate interfacing the Wii Remote (Wiimote) with my MacBook Pro and, in turn, with Quartz Composer.

The Wiimote is a Bluetooth device, so the obvious thing to do was to connect it directly to the Mac using the built-in Bluetooth stack. Although this connection works, the Wiimote remains unresponsive, as there is no native application on the Mac that supports it.

Researching on the Internet, I came across a number of apps that do support interfacing the Wiimote to the Mac:

  • OSCulator
  • DarwiinRemote
  • Wiinstrument
Fig 1.0 Connecting the Wiimote to OSCulator

OSCulator

Of these, OSCulator provides the most control and customisation, allowing you to assign MIDI notes and controls to individual keys on the Wiimote and, most importantly, to choose the port address used to connect to the Mac.

OSCulator is a free download from http://www.osculator.net/. This is a demo version: although fully functional, it shows reminder screens with a built-in delay, during which time functionality is lost.

Wiimote Mac

The first task is to connect the Wiimote to OSCulator using Bluetooth. Press buttons 1 and 2 on the Wiimote simultaneously so that all four blue lights flash, then click Start Discovery in OSCulator. The app should identify and link to the Wiimote; note the green tick and the device’s MAC address in the window.

Fig 2.0 Settings for the Wiimote and OSCulator

The next image, Fig 2.0, shows the Wiimote buttons and XY mapping options: each of the buttons can be assigned to a MIDI note, for example, and the XY values can control pitch, volume and so on.

Note the box at the top left of the window, which shows the port number; matching this setting to the input port of your Quartz Composer composition links the Wiimote to the input of the patch.

As a reminder, OSC stands for Open Sound Control.

Open Sound Control (OSC) is a protocol for communication among computers, sound synthesizers, and other multimedia devices that is optimized for modern networking technology. Bringing the benefits of modern networking technology to the world of electronic musical instruments, OSC’s advantages include interoperability, accuracy, flexibility, and enhanced organization and documentation.
Introduction to OSC http://opensoundcontrol.org/introduction-osc Site Accessed 02/01/2012
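
To make the protocol concrete, here is a minimal Python sketch (using the third-party python-osc package, nothing to do with OSCulator itself) that prints whatever OSC messages arrive on a port. The port number and the /wii-style addresses are assumptions; match them to the routing your own OSCulator window shows.

    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    def print_message(address, *args):
        # A Wiimote button press typically arrives as an address plus a value
        print(address, args)

    dispatcher = Dispatcher()
    dispatcher.set_default_handler(print_message)  # catch every incoming address

    # 9000 is an assumed port; use the port shown in the OSCulator window
    server = BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher)
    server.serve_forever()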

Wiinstrument

Fig 3.0 Wiinstrument Configuration Screen

This free application supports the Wiimote and the Nunchuck accelerometers and can be downloaded from http://screenfashion.org/releases/the_wiinstrument/. Although not as configurable as OSCulator, it does provide a useful MIDI connection. It has two modes: Keyboard and Drumsticks.

The first thing to do is to connect the Wiimote in the usual way by pressing buttons 1 and 2 simultaneously. When it is successfully connected, you’ll notice the graphical displays for the accelerometers respond to the movements of the Wiimote.

Fig 4.0 Wiinstrument Drumsticks Screen

There are few options, but you can change the keyboard scale and select which mode to use with the Home key.

In Keyboard mode the Wiimote keys are assigned to MIDI notes, which should make it compatible with all MIDI applications, although I’ve only tested it with GarageBand, where it worked well.

In Drumsticks mode you use the movement of the Wiimote and Nunchuck to simulate hitting a virtual drum; again, I’ve only tested this with GarageBand, but it works well.

DarwiinRemote

Fig 5.0 DarwiinRemote Configuration Screen

DarwiinRemote also connects to the Wiimote, but simulates mouse movements or Apple Remote functions rather than MIDI. You can download this free program from the usual download sites or from the author’s site: http://blog.hiroaki.jp/2006/12/000433.html

Fig 6.0 DarwiinRemote Nunchuck Configuration Screen

This is more of a general-purpose input interface, but it will find many uses wherever the mouse is the input device: controlling a PowerPoint presentation, for example, or driving video and video effects in a Processing application purely through the movement of the Wiimote.

Personally, I found DarwiinRemote perfect for controlling Front Row (Apple’s media application), using the Wiimote keys to move through the menus and play music and video content.

Project Blog Entry Links

  1. Performance Video – Conclusion
  2. Performance Video – Wiimote Mac
  3. Performance Video – VJ’ing using Quartz Composer
  4. Performance Video – Kinect on Mac and PC
  5. Performance Video – MAX MSP Jitter
  6. Performance Video – Modul8
  7. Performance Video – The Human Orchestra

 

Word Count 724


Performance Video – VJing using Quartz Composer

VJing using Quartz Composer

The project took a new turn when I came across Synapse for Kinect.

Synapse is an app for Mac and Windows that allows you to easily use your Kinect to control Ableton Live, Quartz Composer, Max/MSP/Jitter, and any other application that can receive OSC events. It sends joint positions and hit events via OSC, and also sends the depth image into Quartz Composer. In a way, this allows you to use your whole body as an instrument.
http://synapsekinect.tumblr.com/
Synapse for Kinect. Site Accessed Dec 10th 2011

After downloading the source files I quickly had the depth image running on the Mac, and from this point I decided to develop the entire project using Quartz Composer.
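
As a rough illustration of what Synapse’s OSC output looks like on the wire, here is a hedged Python sketch (python-osc again); the port number and the joint address names are assumptions recalled from Synapse’s documentation and should be verified there before relying on them.

    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    def on_joint(address, x, y, z):
        # Synapse reportedly sends three floats (x, y, z) per joint message
        print(f"{address}: x={x:.1f} y={y:.1f} z={z:.1f}")

    dispatcher = Dispatcher()
    dispatcher.map("/righthand_pos_body", on_joint)  # assumed address name
    dispatcher.map("/lefthand_pos_body", on_joint)   # assumed address name

    # 12345 is the port Synapse is said to send on -- an assumption to verify
    BlockingOSCUDPServer(("127.0.0.1", 12345), dispatcher).serve_forever()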

Quartz Composer is included with Xcode and is a visual programming tool, similar in many ways to Max MSP: you drag patches into a composition and then connect them, building an application without writing code.

With my new-found interest in Quartz Composer I began researching methods of creating the video wall from the video clips we recorded. Having already created a number of Quartz Composer compositions from tutorials, I had a good idea that the patches I would need would include ‘Billboard’, ‘Keyboard’ and ‘Midi’.

Fig 1.0 Quartz Composer Video and Billboard Patches

To start the new composition I added the video loops to a folder and then dragged each of them into the composition in turn. Each video file creates its own patch (Fig 1.0), and I connected the image output of each of these to the image input of a Billboard patch. I repeated this for the eight video loops that make up my video wall.

You then need to set the X and Y coordinates for the video loops using the Patch Inspector on the Billboard patches. I arranged these around the centre of the screen, leaving the central space free to display the image from the Kinect device, in effect the conductor of the video loops.
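
The layout arithmetic is simple: space the eight billboards evenly on a circle around the origin and the centre stays free. A small Python sketch of the calculation (the radius, and the assumption that the Billboard X/Y units are centred on the screen, are mine, not from the composition):

    import math

    RADIUS = 0.6  # assumed radius in Quartz Composer's centred coordinate units
    for i in range(8):
        angle = 2 * math.pi * i / 8          # eight equal steps around the circle
        x = RADIUS * math.cos(angle)
        y = RADIUS * math.sin(angle)
        print(f"billboard {i + 1}: X={x:+.3f}  Y={y:+.3f}")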

Input Patches for Keyboard and Midi

Fig 2.0 Quartz Composer Input Patches

To test that the video wall functioned as expected, I dragged a ‘Keyboard’ patch onto the composition and linked its outputs to the enable inputs of each of the Billboard patches. The keyboard keys are configurable, so I changed them from the arrow-key defaults to the number keys 1 to 8 (see Fig 2.0).

Now when I ran the composition and pressed the number keys, each of the video loops played in turn, corresponding to the key set to enable it.

The next step was to test that MIDI could also control the video loops, so I dragged a MIDI patch onto the composition and set it for middle C and the 5th octave. The outputs are labelled as for a keyboard, so it was a simple matter to link the key outputs to the enable inputs of the Billboard patches.

However, when I attached a MIDI keyboard to my laptop’s USB port, nothing appeared to work. Going into Audio MIDI Setup, I made sure it was configured for the M-Audio Oxygen keyboard I was using, and I also ran the test program, which confirmed the keyboard was talking to the laptop correctly. Unfortunately there was still no response from the Quartz Composer composition, so I had to go into the MIDI patch and use the Patch Inspector to reconfigure the settings. In the end the correct settings were: MIDI Channel 1, Middle C as 5 and MIDI Source as All, with the keyboard itself set to Octave 1. With these changes it was now possible to control the video loops using the keys on the piano keyboard.
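
When MIDI input appears dead like this, it helps to confirm that key presses are actually reaching the operating system before re-configuring the patch. Any MIDI monitor will do; as a sketch, the Python mido library (my choice of tool, not part of Quartz Composer) can list the ports the OS can see and echo incoming messages:

    # Requires: pip install mido python-rtmidi
    import mido

    print("MIDI inputs:", mido.get_input_names())  # is the Oxygen listed at all?

    with mido.open_input() as port:  # opens the default input port
        for msg in port:
            print(msg)  # e.g. note_on channel=0 note=60 velocity=64 time=0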

Fig 3.0 Quartz Composer the complete Video Wall Patches

To complete the design I added an Image patch to act as a background for the video wall. It was at this point that I noticed the layer order needed to be changed so that the background did not overlay the video loops. The completed patch looks complicated but is quite easy to understand (see Fig 3.0).

Finally, the completed Quartz Composer composition with its video display can be seen in Fig 4.0, along with a short video showing the MIDI keyboard controlling the video loops. Please note these are not the final videos for the project, just temporary clips to prove the operation.

[youtube.com/watch?v=05ra1f9BsjU]
Fig 4.0 Quartz Composer Video Wall Composition


Word Count 697


Performance Video – Kinect on Mac & PC

Kinect on Mac

Interfacing the Kinect for Interactivity

Microsoft Kinect

The Performance Video Project has evolved over the weeks but the common requirement for our concept is to have some form of interactivity. This will allow the audience to have some control and to be able to interact with the Video.

Each member of the group is investigating a different input device for a MAX MSP patcher, still to be designed, which will control the video. I have taken on the task of researching the newest home-use technology, the Microsoft Kinect, which I plan to interface to my dual-boot MacBook Pro.

MAX MSP for the Kinect on Mac

jit.freenect.grab MAX MSP Patcher

First, though, I started by researching MAX MSP patchers written specifically for the Kinect, theorising that it would make sense to work backwards: this way I would be able to identify the software required to get the patcher working.

I now had a list of applications and libraries to download and install before I could connect the Kinect:

  • Xcode
  • Macports
  • Libfreenect

Xcode is a free download from the Apple App Store, but the current version, 4.0, only runs on Lion, so I had to install the version from the disks that came with my MacBook – you’ll find Xcode as an optional install on the OS X disk.

MacPorts download and installation instructions can be found at http://guide.macports.org/ – it’s very involved, but if you work through the instructions methodically the installation should be fairly straightforward.

I found that the libfreenect software was not required in the end, which is just as well, as installing it involves more than just downloading and running an installer – it needs compiling!

There’s also an alternative called Homebrew, which can be found at http://mxcl.github.com/homebrew/ – another packaging system for installing software on OS X. I didn’t use it, as I’d already gone down the MacPorts route, but it is said to be an easier way of installing what you need.

After I’d completed all this work I got slightly sidetracked after coming across a link to Microsoft’s official Kinect drivers for installation on a PC. These can be found at http://kinectforwindows.org/, where you can download the 32- or 64-bit versions; make sure you read the system requirements.

I had to install the following additional applications:

  • Visual Studio Express
  • DirectX 9
  • .NET Framework 4.0

With all the software installed, I connected the Kinect for the first time and started one of the demo programs that came with the SDK download. Unfortunately the system reported that I was missing some .dll files, which I installed manually. After some investigation it turned out that the third-party firewall I was using was blocking these files from running; temporarily disabling the firewall stopped the errors and the Kinect worked perfectly. Each of the demo programs functioned as expected, so I now had a working, fully interfaced Kinect – but only for the PC. Time to get the Mac installation working.

Rebooting the Mac into OS X, I reconnected the Kinect and ran the MAX MSP patchers I had installed earlier. Surprisingly, it worked first time, even though I had yet to install the libfreenect libraries – which is why I mentioned earlier that you may not need them.

Continuing my research, I found several more examples of Mac Kinect installs. One in particular was of interest, involving Quartz Composer, which is either already installed or comes on the OS X system disk. Quartz Composer works in a similar way to MAX MSP, allowing the design of applications without writing any code.

Keen to try this alternative I set about loading the required libraries and programs to get the Kinect to work with Quartz Composer.

What Next?

The next stage is to design a MAX MSP patcher that will link the input from the Kinect to call and run video loops depending on the position of the person or persons in front of the Kinect.


Word Count 728

 


Performance Video – MAX MSP Jitter

MAX MSP Jitter

Testing Modul8

Modul8 A - B Group Layers

Referring to the previous blog entry, the idea was to test Modul8 to see if it could be used for our project.

We had a very useful lecture on Monday covering Modul8, including MIDI mapping, which I had some concerns about originally, but all looked promising for going ahead and using this application for our project. There was one proviso, a possible negative: just how many videos could Modul8 play back at any one time?

I created a new project and started to add small 10-second MP4 movie files to the Media Set until I had filled one set completely, that is, 16 video files in all. I then assigned one video file to each of the layers in groups A and B. I disabled the A-B grouping and transition controls so that all 10 layers/videos would display simultaneously in the preview window.

However, when selecting a new group and adding video files to the next layer group, I realised that only one group could be live at a time. This meant it would be impossible to have the planned 50 videos available to run live at any time.

Modul8 Media Set

If the idea had been to have just 10 videos running at any time, the project could have worked using Modul8. As this was not the case, I decided we would have to look at alternatives such as MAX MSP and develop our own MIDI-controlled VJing application.

MAX MSP Jitter – First Look

I’ve looked at MAX MSP Jitter before, for a 1st year project, and although I decided at the time not to use it, I did put together a few patches just to familiarise myself with how it functions.

A recent lecture from Liam re-introduced MAX MSP Jitter and its uses for non-programmers, and although it will take some work I believe it will be possible to produce a working program.

MAX MSP - cv.jit.faces

I downloaded the MAX MSP demo version from http://cycling74.com/downloads/, which is free to use for 30 days, and at the same time downloaded some of the patches Liam demoed: Computer Vision for Jitter, or cv.jit. This can be downloaded from http://jmpelletier.com/cvjit/ and includes a number of video tracking and video manipulation patches. I installed these as directed and soon had an example patch up and running. The patch I chose tracked movement using either the Mac’s iSight webcam or a pre-loaded video. The running patch produces coordinates relating to movement, identifying face and hand movements particularly well, both in the live view and in the video sequences I used.
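
As an analogy for what the cv.jit patch is doing (this is Python and OpenCV, not cv.jit itself), the sketch below tracks faces in the webcam feed and prints their coordinates, roughly the kind of data the Max patch reports:

    import cv2

    # Haar-cascade face detector shipped with OpenCV (pip install opencv-python)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)  # 0 = the built-in iSight/webcam

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
            print(f"face centred at ({x + w // 2}, {y + h // 2})")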

Although these patches were interesting and offered possibilities for pre-loading the video files for our Human Orchestra concept, I still needed to link this to a MIDI keyboard.

I then searched through the Max objects for MIDI- and audio-related objects and came across ‘kslider’, which displays an on-screen keyboard with note and key velocity information. Using this as a base, I looked through the list of MIDI objects and, with Lee’s assistance, tested a number of them with a MIDI keyboard until we’d identified ‘notein’ as the best object to use. This picked up the key presses from the MIDI keyboard, reproduced them visually on the on-screen keyboard and played a note through the AU MIDI interface.

However, this solution wasn’t perfect: the key mapping was slightly out of sequence, and the tone was played both on key press and again as the key was released.
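
The double trigger is a classic MIDI gotcha: most keyboards send a note-off (or a note-on with velocity 0) when the key is released, and anything that fires on every message sounds twice per key. In Python/mido terms (a sketch of the logic, not the Max patch), the fix is to filter on message type and velocity:

    import mido

    with mido.open_input() as port:
        for msg in port:
            if msg.type == "note_on" and msg.velocity > 0:
                print("press:", msg.note)   # trigger the note/video here only
            else:
                # note_off, or note_on with velocity 0, marks the release --
                # ignoring it stops the tone sounding a second time
                pass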

MAX MSP kslider

I deemed this a good start and have booked out a MIDI keyboard to continue working on this over the next few days at home, hopefully finding a method of linking the video and keyboard patches together to produce a working system.

Performance Video – Microsoft Kinect

Microsoft Kinect

We may, however, be modifying our concept to include the use of Microsoft’s Kinect tracking system. Following a quick introduction to the Kinect by Liam today, we’ve decided to abandon the MIDI keyboard as the main input device for our concept and to look at using the Kinect instead. This would make our concept a true performance-based project, with videos reacting to a person’s or persons’ movements.

The Performance Video

I’ve also had some ideas about the Performance Video itself, which I need to put before the group to seek a consensus on the best way forward for this project.

What I now visualise is a looping video of the entire choir moving slightly to and fro, with the videos that become active layered on top, triggered when an input is received from either the MIDI device or the Kinect. Think of tiered rows of the choir: one member moves independently and sings a tone; when a chord is played, or more than one movement is detected through the Kinect, more members of the choir move and sing a chord.


Word Count 904


Performance Video – Modul8

Modul8

The Plan – using Modul8

The more I think about the Performance Video project, the more I think we should be concentrating on the performance aspect rather than on the technology used to produce our concept. With this in mind I began looking more closely at the technology already available to us, and I came to the conclusion that by basing the entire project around VJing software (Modul8) that can interface with a MIDI musical instrument, the technical part of the project is, apart from some programming, essentially complete.

This approach is based upon the Ophelia summer project I worked on with Samantha Else and Michael Moore; see the Ophelia blog entries for details and the video below.

[youtube.com/watch?v=tP9P8mHqmlk]


This would allow us to concentrate on producing the videos, the sound quality and possibly identifying a suitable location to project our videos, as we all feel this project would work well projected externally onto a building.

How will it work?

I’ve sketched out my idea below; essentially all we need is a MacBook Pro with Modul8 installed, a MIDI keyboard and a VGA projector – simple!

Fig 1.0 Performance Video - The Plan Modul8

The idea is that Modul8, which will of course be able to interface to a projector and handle all the video output, will hopefully also be able to interpret the input signals from the MIDI keyboard, which will in turn be mapped to the individual videos. This is all theory at this stage, as I understand from my research that Modul8 does an initial scan of MIDI inputs at start-up and that there is no subsequent MIDI mapping functionality.

My previous experience of using Modul8 also makes me believe that this simple plan will be anything but simple. I fully expect there to be issues with mapping the keyboard notes to the corresponding videos. I’ve had problems with video files working one day but not the next due to codec choices, and I suspect that the overall concept may be limited by the performance of the technology: will the MacBook and Modul8 be able to handle multiple videos running simultaneously, especially when a chord is played?

The only solution is to carry out some testing as soon as possible to see if this works; if not, another plan will be needed, most probably using Max/MSP and Jitter.

 

Research Links

Modul8 is a revolutionary MacOS X application designed for real time video mixing and compositing. It has been designed for VJs and live performers.
Modul8. Site Accessed 20/11/2011. http://www.modul8.ch/

 


Word Count 501

 


Performance Video – The Human Orchestra

Performance Video

Performance Video – The Brief

The musical instrument

For the interface design unit your task is to create a ‘digital’ musical instrument and to use it as part of a performance, or to get someone to perform with it. Your instrument can be constructed in numerous ways: it can be software, it can be a physical construction, it could be generative or use environmental factors. The performance should have an audience, and the place can be a street, the pub, a hilltop or a nightclub.

Performance Video – Idea Development

The group’s idea (from Lee’s Personal Portrait project) for the Performance Video project is to produce a sequence of videos, one for each member of a choir, each singing a single note to camera. From these video performances we hope to have enough videos/notes to reproduce up to 3 to 4 octaves.

MIDI Keyboard

To control these videos we will design a custom MIDI interface to control the sequencing of the videos, so that any MIDI-compatible instrument will effectively be able to play the videos, producing a musical score.

Performance Video - Video Wall

To visualise this concept, think of a video wall with each individual video featuring a single member of the choir. As you play, for example, a MIDI keyboard, a video runs for each key depressed and you hear that note being sung. Pressing more than one key at a time to make a chord plays the corresponding videos, and you hear the chord being sung.
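
The control logic behind this is just a mapping plus a set. As a sketch (Python, with hypothetical file names; the real project would do this inside Max or Quartz Composer): each MIDI note number points at one choir member’s video, the set of currently held notes is the set of videos that should be playing, and a chord is simply several entries active at once.

    note_to_video = {60: "choir_C4.mov", 64: "choir_E4.mov", 67: "choir_G4.mov"}
    active = set()

    def note_on(note):
        if note in note_to_video:
            active.add(note)
            print("play", note_to_video[note])

    def note_off(note):
        active.discard(note)

    note_on(60); note_on(64); note_on(67)            # a C major chord...
    print(sorted(note_to_video[n] for n in active))  # ...three videos running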

We are already considering a number of input devices, one of which is an oversized, floor-positioned piano keyboard, an example of which featured in the film ‘Big’.

[youtube.com/watch?v=AByIokt3X0E]

Another example of the Giant Piano

[youtube.com/watch?v=1HjG6TYMFfg]

My Responsibilities

For this project my primary responsibility will be to shoot and edit the video footage, almost certainly using my Canon EOS 60D. Given the technical nature of this project, I fully expect to have some involvement in the design and development of the MIDI interface, which may involve the use of Max/MSP and Jitter or similar technology; this will become apparent after researching appropriate website references and published material.

It may be that the control technology already exists, in which case we can concentrate on the performance aspect of the project.

As usual I will offer assistance/input to other members of the team as required.

Research Links


What is Max?

Make connections. Make things happen.

Max gives you the parts to create unique sounds, stunning visuals, and engaging interactive media. These parts are called ‘objects’ – visual boxes that contain tiny programs to do something specific. Each object does something different. Some make noises, some make video effects, others just do simple calculations or make decisions.

In Max you add objects to a visual canvas and connect them together with patchcords. You can use as many as you like. By combining objects, you create interactive and unique software without ever writing any code (you can do that too if you really want to). Just connect.
Website Accessed 18/11/2011. http://cycling74.com/whatismax/

MuSET

MuSET is a research group dedicated to the exploration of computer applications to music and sound. The research group was established in 2004 and is located in the School of Music at the University of British Columbia. Website Accessed 18/11/2011 http://debussy.music.ubc.ca/muset/index.html


Word Count 592

 


Future Cinema – Project Conclusion

Future Cinema

Mad Doctor Storyline

Please use headphones for best effect

Blackboot the Pirate Storyline

Please use headphones for best effect

For best results use headphones while watching the two videos above. I’ve added visuals, which should be viewed full screen, but for the full binaural audio effect I recommend listening to the audio alone, with eyes shut.

Future Cinema – The Making Of – A Video Documentary

Future Cinema – The Synopsis

Our group’s idea was to incorporate a binaural audio recording element into a 5.1 surround sound film soundtrack, effectively creating a 7.1 surround sound soundtrack. The 5.1 surround sound would be delivered using a surround sound speaker system, and the binaural soundtrack played back simultaneously through headphones or through speakers built into a cinema seat’s headrest.

Future Cinema – How did we do?

Fig 1.0 M-Audio 410

See the previous entries for more details on the group’s research, development and final testing. In summary, after days of testing various microphone positions, locations and recording techniques, we developed a solution that allowed us to record both the surround sound and the binaural recordings simultaneously.

With the sound recorded onto six tracks stored on SD cards, we edited them together using Logic Pro, an audio editing application. Each audio recording was assigned to a separate channel: track 1 was mapped to front left, track 2 to front right, track 3 to rear left and track 4 to rear right. Finally, track 5 was mapped to the headphones’ left channel and track 6 to the headphones’ right channel.

It should be noted that we intentionally made no provision for a centre speaker channel and, similarly, none for the LFE (Low Frequency Effects) channel, the ‘.1’ in a 5.1 surround sound system. This was due primarily to a lack of resources: it would have been relatively simple to set up a channel for each, but there was no speaker system available for testing, and we also felt it was unnecessary for the purposes of the design at this stage.

Using an M-Audio 410, four of the tracks were mapped to the speakers in the edit suite, and the two tracks of binaural audio were mapped to the headphone output socket.

M-Audio 410 specifications

  • 2 x 8 24-bit/96kHz analog I/O; 192kHz stereo out
  • 2 mic/line ins w/ preamps and phantom power
  • 8 line outs to mixer or direct surround output
  • S/PDIF digital I/O w/ PCM, AC-3, and DTS support
  • 1 x 1 MIDI I/O

As well as assigning each track to a channel, we adjusted individual track volumes to balance the sound levels from each channel, then added the effects: the ambient noise of the hospital and the creaking of the ship.

Fig 2.0 Not Mickey Mouse, but testing headphone positioning to optimise the 5.1 with the binaural effect

With the headphones held slightly away from the ear (or turned slightly sideways, see Fig 2.0), it was possible to hear both the 5.1 surround sound from the speakers and the binaural audio from the headphones.

Using headphones may not be the final solution for an installation in a cinema, but it was the optimal setup for demonstrating the concept to a selected audience in order to obtain feedback (see the video below for the audience testing stages).

For the audience, the effect of hearing both soundtracks made for a much more immersive experience: not only was it possible to hear the surround sound, but there was the added effect of sound originating from a point very close to your ear via the headphones. Ideal for horror films: the protagonist whispering into your ear, the sound of a bullet passing close by, or a whispered instruction that only you can hear.

Audience Testing – Screen Tests

Fig 3.0 Audio Levels Testing

With the audio tracks locked down, we began audience testing, inviting fellow students and staff to experience the project while we recorded their responses in real time on video. We then followed each test with a short question-and-answer session on camera, to gauge each subject’s response and to find out whether our idea would indeed add value to the cinema audience’s experience.

From the video and the screenshots below, you can see that the subjects gave an overwhelmingly positive response to the experience. All felt that it put them at the centre of the action and made for a more personal and more immersive experience than they would normally expect from watching a film at the cinema.

Fig 4.0 Audience Testing – Video Setup

Surprisingly, most felt the experience was the better for the lack of visuals, the imagination more than making up for this.

Future Cinema – Sound X.1?

Cinema sound is a technological area that still has much to offer. For example, Dolby™ have developed the Dolby Pro Logic IIz system, which adds a height element to the sound by adding extra channels (5.1 to 7.1, and 7.1 to 9.1) and positioning speakers above the existing front left and front right speakers. These extra channels add to the depth and spatial qualities of the sound, allowing film-makers the opportunity to add a feeling of height to their films; an example would be the distant approach and passing of an aeroplane, gradually gaining in volume as it approaches, then passing over your head and behind you rather than to the left or right.

At the moment, given existing cinema sound set-ups, film-makers show aeroplanes, and in fact any form of transport, passing from front to back (or vice versa) by filming them passing to the left or right rather than directly overhead, most probably due to the limitations in faithfully reproducing the sound of a passing aircraft in the cinema (though this may not be the only reason).

What does this mean? As the number of channels continues to grow, so will the number of speakers, and their coverage will grow with them, until eventually complete coverage is achieved and the audience is totally immersed in a hemisphere of sound.

Fig 5.0 Immersive Headset

Final thoughts

Personally, I feel the group worked hard to prove that a 5.1 surround sound soundtrack combined with a binaural soundtrack would both enhance and add a new dimension to the cinematic experience. With the right film, with changes to the narrative to include the binaural sound element, and with minor modifications to cinema seating (speakers built into the headrests), it would be possible to provide a much more immersive experience for the cinema audience.

The concept could also be applied to the gaming environment using a headset with both visual and audio capabilities; a headset such as the one shown in Fig 5.0 would be perfect for such an application.

One of the many hurdles we had to overcome was that what we thought we knew about sound recording did not match the results. We thought that widely spacing the microphones would give the best channel separation; in fact, we produced the best recordings with the microphones just a few centimetres apart and facing in completely the opposite direction to what we had originally planned.

Time was, as usual, in short supply, and we worked late into the night to get the recordings done in the studio space. The originally assigned roles in the group blurred as we each took on extra tasks when short-handed, grabbing a camera to record the processes and work carried out for the ‘Making Of’ documentary.

In summary, I personally think the group has produced an effective design that can be demonstrated to an audience, based on our original conceptualisation of 5.1 surround sound combined with binaural sound recording and its possible inclusion in a future cinema design.

  • Future Cinema – Screenshots (gallery): The Mad Doctor in action; Mad Doctor & Lawrence; Microphone Positioning; Alexsandra in the Sound Booth; Audio Recording; Working on his Lines; Sound Test Subjects 1–5 (including Ben, Jason Watkins and Chris Pegg)

Future Cinema – Links to Related Blog Entries

  1. Future Cinema – Project Conclusion
  2. Future Cinema – Sound Effects
  3. Future Cinema – Digital Cinema
  4. Future Cinema – 5.1 Surround Sound
  5. Future Cinema – Binaural Sound – Digital Sound Recording
  6. Future Cinema – 360 Degree Camera Mount
  7. Future Cinema – Learning Agreement (Updated)
  8. Future Cinema – Audio / Film Script 1st Draft
  9. Future Cinema – Binaural Sound Recording
  10. Future Cinema – The Film Pitch
  11. Future Cinema – does it have one?

 

 


Future Cinema – Sound Effects

Sound Effects

Following a successful conclusion to the recording tests earlier in the week, we again set up in the editing suite to review the sound recordings and shortlist takes to edit into the final sequence.

This took more time than we’d originally planned, but by the end of the day we had a complete set of recordings for both the 5.1 surround sound sequence and the binaural soundtrack. Then, for the first time, we played the 5.1 surround soundtrack back through the speaker system using Logic Pro, with the binaural track output from a PC through a set of headphones. Both recordings were started in sync, and for the first time we listened to the combined soundtrack.

Although not all the sound effects had been added yet, we were generally happy with the result so far and decided to demonstrate it to Chris Pegg. Chris commented positively: he was happy with the progress the group had made and with the overall project, which was a new experience for him. Chris also suggested that we test the project further by asking individuals from outside DMP to experience the soundtrack and recording their responses and feedback, in order to confirm our concept and/or to make improvements.

We felt this was a good idea and immediately asked a 1st year and a 2nd year from another course to listen to the soundtrack in the edit suite. We recorded their responses on video for inclusion in the ‘Making Of’ video record, which we are producing in conjunction with the project as a complete record of the work, the testing and each individual’s role in its development.

Sound Effects

Audacity Noise Removal Filter

I have personally sourced some additional sound effects to go with the Mad Doctor and Pirate storylines: for the Mad Doctor version, the ambient noise from inside a hospital ward and the bleeping of medical equipment; for the Pirate version, the creaking of the ship, the wind and the sea. All of these will help to establish the setting for the audience, assisting their imagination in placing themselves in a hospital ward or in the cabin of a pirate ship in a heavy sea.

To test the suitability of these sound effects and additional soundtrack clips, I first edited together the binaural sound recordings, taking the best takes from each scene and cutting and pasting them together in Audacity. After I had completed this, I discovered noise where there should have been silence. Using the noise removal effect in Audacity, I first captured the ‘Noise Profile’ by selecting a 2-second stretch of the soundtrack free from effects and voices; the noise removal tool then compares the audio against this profile and, in effect, filters the noise out.
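
Audacity’s noise removal works spectrally, which is beyond a short sketch, but the same two-step idea, measure a noise profile from a “silent” stretch and then suppress anything no louder than it, can be shown as a crude time-domain gate in Python (numpy and the soundfile package; the file name, timings and threshold are hypothetical, and this is not Audacity’s algorithm):

    import numpy as np
    import soundfile as sf

    audio, rate = sf.read("binaural_edit.wav")        # hypothetical file
    profile = audio[: 2 * rate]                       # 2 s of "silence" as the profile
    threshold = 2.0 * np.sqrt(np.mean(profile ** 2))  # gate at 2x the noise RMS

    frame = 1024
    for start in range(0, len(audio) - frame, frame):
        chunk = audio[start:start + frame]
        if np.sqrt(np.mean(chunk ** 2)) < threshold:
            audio[start:start + frame] = 0.0          # silence quiet, noisy frames

    sf.write("binaural_gated.wav", audio, rate)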

The resultant soundtrack was very clean, free of any noise. I then added a 10-second sequence of ambient sound taken from a hospital waiting room and mixed onto this the sound of an EKG medical unit, using its bleeping to link the sound clips together. The final touch was to add some clips of a fellow student screaming, and the majority of the editing was done. All that remained was to balance the audio levels on the individual tracks and export the whole sequence of six tracks to a stereo WAV file; during the export process the six tracks were mixed down to a 2-track stereo file.

I repeated this process for the Pirate storyline version, using the sound of a ship rocking in the sea with the wind in its sails. Again I exported the six tracks as a 2-track stereo WAV file.
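
The mixdown step itself is easy to illustrate outside Audacity. A sketch of a six-track-to-stereo fold-down in Python (numpy and soundfile; the file names, the equal-gain panning and the simple averaging are all assumptions, since the real balance was set by ear):

    import numpy as np
    import soundfile as sf

    left_names  = ["front_L.wav", "rear_L.wav", "binaural_L.wav"]  # hypothetical
    right_names = ["front_R.wav", "rear_R.wav", "binaural_R.wav"]  # hypothetical

    def load_and_sum(names):
        tracks, rate = [], None
        for name in names:
            data, rate = sf.read(name)      # mono tracks at a common rate assumed
            tracks.append(data)
        n = min(len(t) for t in tracks)     # trim everything to the shortest take
        return sum(t[:n] for t in tracks) / len(tracks), rate

    left, rate = load_and_sum(left_names)
    right, _ = load_and_sum(right_names)
    m = min(len(left), len(right))
    sf.write("mixdown_stereo.wav", np.stack([left[:m], right[:m]], axis=1), rate)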

I’ll post these soundtracks in the final blog entry for this unit, following the critique.

Sound: ‘Scary Laugh’ by Ian Hunt (please note: not used in the final soundtrack)

What Next?

We plan to finalise the soundtracks, setting and balancing the levels of the 5.1 surround sound channels with the additional sound effects added.

Alternative Headphone Tests

I have sourced some headphones that do not filter out external ambient sound. With these, I expect the listener to be able to hear the surround sound played through the edit suite speakers clearly, while at the same time getting the full effect of the binaural recordings through the headphones. The plan is to test the setup with students from other courses and record their reactions on video, to see if the concept works as well as we originally visualised.

This will be followed by the completion and editing of the ‘Making Of’ video.

 


 


Future Cinema – Digital Cinema

7.1 Surround Sound

Digital Cinema

Cinema – Digital Cinema conversion

The ongoing conversion of cinemas to digital technology opens up new possibilities for sound. The majority of digital cinemas offer 5.1 surround sound, that is, six channels of audio, and many now offer 7.1 surround sound, eight channels of audio. The additional speakers are either positioned high above the existing front speakers (front left and front right) or mid-way between the front and rear speakers.

Dolby 7.1 Surround Sound. Site Accessed 23/11/2011
http://www.dolby.com/us/en/consumer/setup/connection-guide/home-theater-speaker-guide/index.html

Digital Cinema – Future Cinema will be able to deliver a growing number of audio channels as the technology advances. For this project, we think some of these additional audio channels could be used to deliver a more immersive audio experience directly to individuals in the audience.

X Rocker Gaming Chair

As an example of this we can look at the games industry and a relatively low-cost piece of technology: the gaming chair.

These come complete with their own built-in 2.1 sound systems, including a subwoofer, and the ability to deliver an additional sensory experience, using rumble and vibration to create a sensation of movement directly related to sound level and frequency.

X-Rocker Gaming Chairs site Accessed 23/10/2011
http://www.xrocker.eu/x-rocker-vision-pedestal-54-p.asp

A cinema fitted out with these types of chairs could deliver a whole new experience to its audience.

For the purposes of the project, the chair’s speakers provide the means to deliver the additional soundtracks we conceptualised: a personal message or sound clip delivered to an audience member in very close proximity – the whisper in an ear, the gunshot fired inches from your ears. Sound delivered this close to the ear would be felt as well as heard, as the pressure of loud sound waves originating so close to the ear pushes against the eardrum, like the sensation of wind blowing in the ear.

A side benefit of using these chairs would be to provide additional sensations via the built-in subwoofer, creating rumble and vibration so that the sound is felt as well as heard. This could be a possible update to Sensurround, introduced for a small number of films in the 1970s, for example ‘Earthquake’, released in 1974.

The failing of Sensurround was that it relied on additional bass speakers playing low-frequency sound at high decibel levels, which could be heard in adjacent theatres, ruining that audience’s experience as the sound of another film came through the walls of the cinema.

Using these chairs would mean that the high decibel levels could be avoided while still achieving the vibration effect.

Our binaural soundtrack could be sent to the speakers in these chairs, negating the need for headphones. Although some of the effect would be lost by using speakers, the close proximity of the speakers to the ears, combined with some form of sound processing to simulate the 3D stereo effect, could exploit the brain’s ability to process the “interaural level differences (ILD) and the interaural time differences (ITD) that characterize two-eared human hearing”. Ambiophonics. Site Accessed 23/10/2011 – http://en.wikipedia.org/wiki/Ambiophonics

The human ear, or rather the human brain, can determine not only where a sound originates anywhere in a 360-degree perspective, but can in fact localise and process sound in a full sphere. The visual medium still has some way to go before it can deliver a cinematic experience based on a human’s spherical perception.
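
To put a rough number on the ITD mentioned in the quote, Woodworth’s classic spherical-head approximation, ITD = (a / c)(θ + sin θ), gives the arrival-time difference between the ears for a source at angle θ. A quick Python check (the head radius and angles are illustrative assumptions):

    import math

    a = 0.0875   # assumed head radius in metres (a common textbook value)
    c = 343.0    # speed of sound in air, m/s
    for degrees in (0, 30, 60, 90):
        theta = math.radians(degrees)
        itd = (a / c) * (theta + math.sin(theta))   # Woodworth's formula
        print(f"source at {degrees:2d} deg -> ITD ~ {itd * 1e6:4.0f} microseconds")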


 


Future Cinema – 5.1 Surround Sound

5.1 Surround Sound

A day spent testing, proving and disproving ideas for recording and playing 5.1 surround sound files.

Idea Development – 5.1 Surround Sound

Sennheiser ME66 Microphones - Surround Sound Recording Setup

I proposed the idea that if the dummy head, complete with the binaural microphones, was positioned in a simulated cinema environment, and a DVD with a 5.1 surround sound soundtrack was played through a surround sound speaker system and recorded using the binaural microphone setup, then the surround sound effect would be duplicated in the recording.

5.1 Surround Sound Headphones

Following a number of recording tests, this did not seem to be the case, although I suspect the problem of sourcing a true 5.1 surround sound system contributed to the failure of the test. We did successfully combine the binaural effect with the recording of the film’s soundtrack by overlaying our own voices into the recording, but the 5.1 surround sound effect did not truly come across when played back through headphones.

However, I have a theory that with the right technology the original idea may well prove correct; for example, it is possible to source 5.1 surround sound headphones in which the speakers are separated by 12 degrees within the headset.

[youtube.com/watch?v=5QETQNFYnp4]

5.1 Surround Sound

Definition: 5.1 channel sound, also known as Surround Sound, is a standard sound format found on most DVDs and some CDs. The five channels are left and right main speakers (stereo), a center channel for movie dialog and on-screen action, two rear speakers to surround the listener and a .1 channel (pronounced ‘point-one channel’) for bass. The .1 channel is intended for a subwoofer, used for special effects in movies and very deep bass in music. The designation ‘.1’ means that it is not a full range channel and is designed to reproduce only a narrow range of bass tones.

5.1 Channel Sound Definition. Site Accessed 20/11/2011 http://stereos.about.com/od/glossary/g/FivePointOne.htm

Zoom R24 Digital Recorder

Zoom R24 Digital Audio Recorder

For the new recordings we decided to use a Zoom R24 digital recorder rather than the Fostex recorders we had used in the previous sessions. The main advantage of the Zoom R24 was that it could record more than two channels of audio simultaneously, eight tracks in all, which saved us from having to synchronise two recording devices as we had previously. The audio tracks were also saved onto an SD card, making it much easier to import the data into Logic Pro.

Initially the Zoom was powered from the mains supply, with phantom power feeding the microphones, but we were getting a mains hum on all the audio recordings. To cure this we had to power both the recorder and the microphones from batteries. I suspected at the time that, as we were in a studio using a dimmable lighting system, either the power from the lighting racks was affecting the mains supply, or the neutral was carrying high-frequency noise, or it was not pegged at earth potential and was floating several volts above zero. As we had no means of checking this, and it was outside the brief, switching to battery power was the only option.

Logic Pro

In the afternoon session we used a new software application, Logic Pro, and attempted to create a 5.1 surround sound soundtrack using the sound files recorded previously in the studio (see the earlier entry, Future Cinema – Binaural Sound – Digital Sound Recording). Each track was assigned to a channel, 1 to 4, representing front left, front right, rear left and rear right.

Logic Pro Application Screenshot

The microphone rig used to record the original tracks was centrally located, and the resulting soundtrack appeared to have good separation, but not necessarily 5.1 surround sound. Personally, I felt this might be because the recordings were of only one, centrally located sound source, so on playback the sound should have been heard equally from all four speakers, which was the case. If, instead of just one sound source, we had recorded multiple sound sources across the studio space, we would have been able to identify the different sources and their locations on playback. For example, four members of the group could each stand beside a microphone and announce their location: left front, right front, left back and right back. A simple idea, but not thought of at the time.

Audacity 1.3.13-beta!

Audacity 1.3.13-beta screenshot

According to my research, the beta version of this popular sound editing application now supports the AC3 file format, which allows you to save sound files with up to six channels. For our purposes this would allow us to create a 5.1 surround sound file, with the option of a subwoofer channel.

Unfortunately, although it is now possible to create these files, we still have to find a 5.1 surround system to play them back on.

The AC3 file type is primarily associated with ‘AC3 Audio File Format’ by Dolby Laboratories.

AC3 is a 6-channel, audio file format by Dolby Laboratories that usually accompanies DVD viewing. It operates 5 channels for normal range speakers (20 to 20,000 Hz) and the 6th channel reserved for low-frequency (20 to 120Hz) sub-woofer operation.

Human’s audible range of frequency is typically between 20Hz to 20kHz (that’s 20,000Hz) and this range is called sonic. Anything below the range is referred to as infrasonic whilst anything above is ultrasonic.

FileExt – Website Accessed 21/10/2011. http://filext.com/file-extension/AC3

Next Steps

The next step is to revisit the studio space and finalise the full soundtrack, that is, the 5.1 surround sound and the binaural sound recorded simultaneously, then go back to Logic Pro, assign a track to each channel, and experiment to see how the two sound sources can be integrated for playback in a cinematic environment.


 
