This blog is intended as a notebook for my research on the synergies between sound, visuals and movement: audiovisual interactive systems, augmented realities, real-time graphic representations of sound and movement, hypersensory immersive media and synaesthetic states.
David Rokeby was part of a group of pioneers (like Erkki Kurenniemi, featured in the last post) who developed interactive systems that allowed the human body to be used as an instrument: moving the body produced direct sound feedback, with different parts of the body controlling different parameters of the sound.
His early explorations of computer-based interactive systems defined metaphors and interactions that are the basis of many of today’s interactive systems.
// Image from the 8 x 8 pixel camera tracking Rokeby’s movements
”Reflexions was my first interactive sound installation. I constructed some very bulky 8 x 8 pixel video cameras (the large black box over the monitor in the image), connected them to a wire-wrapped card in the Apple ][ which digitized the images, and wrote a program in 6502 assembly code for the Apple ][ which controlled a Korg MS-20 analog synthesizer to make sounds in response to the movements seen by the cameras. Movement also controlled the volume of two tape loops of water sounds. The synthesizer and water sounds were mixed individually to 4 speakers in a tetrahedron (one on the ceiling and three in a triangle on the floor). The sounds would move around you in addition to responding to your movement.” (Rokeby)
Very Nervous System was an evolution of Rokeby’s previous interactive sound installations (Reflexions and Body Language).
Human gestures and movements are captured through a video camera and translated in real time into improvised music that reflects and reacts to the qualities of the movements.
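The core technique behind camera-based systems like Very Nervous System is frame differencing: comparing successive video frames and measuring how much each region of the image has changed. A minimal sketch of the idea (an illustration of the principle, not Rokeby’s actual code; the grid size and threshold are arbitrary assumptions):

```python
def motion_cells(prev, curr, grid=2, threshold=16):
    """Count changed pixels per grid cell between two grayscale
    frames (lists of rows of 0-255 ints). The more a region of the
    image changes between frames, the more movement happened there;
    each cell's count can then drive a sound parameter such as
    volume or note density."""
    h, w = len(curr), len(curr[0])
    cells = [[0] * grid for _ in range(grid)]
    for y in range(h):
        for x in range(w):
            if abs(curr[y][x] - prev[y][x]) > threshold:
                cells[y * grid // h][x * grid // w] += 1
    return cells

# A single bright pixel appearing in the top-left corner of a 4x4
# frame registers as movement in the top-left cell only:
prev = [[0] * 4 for _ in range(4)]
curr = [[0] * 4 for _ in range(4)]
curr[0][0] = 255
print(motion_cells(prev, curr))  # [[1, 0], [0, 0]]
```

Mapping each cell to a different sound parameter is what lets different parts of the body control different aspects of the music.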
//Interactive system scheme (left). Audience interacting with V.N.S. (right)
“I use video cameras, image processors, computers, synthesisers and a sound system to create a space in which the movements of one’s body create sound and/or music. It has been primarily presented as an installation in galleries but has also been installed in public outdoor spaces, and has been used in a number of performances.” (Rokeby)
// Rokeby interacting with Very Nervous System in 1991
“The installation is a complex but quick feedback loop. The feedback is not simply ‘negative’ or ‘positive’, inhibitory or reinforcing; the loop is subject to constant transformation as the elements, human and computer, change in response to each other. The two interpenetrate, until the notion of control is lost and the relationship becomes encounter and involvement. The diffuse, parallel nature of the interaction and the intensity of the interactive feedback loop can produce a state that is almost shamanistic. The self expands (and loses itself) to fill the installation environment, and by implication the world. After 15 minutes in the installation people often feel an after image of the experience, feeling directly involved in the random actions of the street.” (Rokeby).
// One of the 3 cameras used to track the interactive space (left); Rokeby performing with Very Nervous System on Dam Street, 1993 (right)
“International Feel” was an interactive sound installation made for the Strategic Arts Initiative 2.0 exhibition (2011), an update of “Body Language”, made for the same exhibition in 1986.
“International Feel” is a telematic version of Very Nervous System: two systems are installed in two different physical locations, and the visitors to both meet in cyberspace, interacting together to create a collaborative soundscape.
// A visitor interacting with the installation in Toronto (left), and David Rokeby in Rotterdam (right).
”For “International Feel” I created identical 2.8 x 2.8 meter spaces in Toronto at Inter/Access and in Rotterdam at V2. The Kinect sensor in each space captured the depth image of whoever was in this space, and translated it into a “bubble-body”, a set of spheres that built up an approximate representation of that body in space.
This body data was transmitted over the internet to the other location, allowing each installation to place both virtual bubble-bodies into an imaginary shared space. The spaces were outfitted with directional sound, and this was used to give a sense of the location of the other person in your shared space. If there was no contact between the bodies, there was the sound of breathing, coming from the exact direction of your invisible partner. By moving toward the sound of breathing, you could attempt to touch the other virtual body. On contact, other sounds emerged, with the sounds changing to indicate how much of your body was in contact with the remote body, and the directionality of the sound intensified to give precise cues as to the direction to move to maximize contact. The sense of physical engagement was very powerful. One found oneself almost bouncing off the remote person’s body on contact.” (Rokeby)
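Since Rokeby describes each participant as a set of spheres, the amount of contact between the two remote bodies can be measured as sphere overlap. A hypothetical sketch of such a contact measure (my own illustration of the idea, not Rokeby’s implementation):

```python
import math

def contact_fraction(body_a, body_b):
    """Fraction of spheres in body_a that overlap at least one sphere
    in body_b. Each body is a list of (x, y, z, radius) tuples; two
    spheres touch when the distance between their centres is at most
    the sum of their radii. The result (0.0-1.0) could scale the
    intensity of the contact sounds."""
    touching = 0
    for ax, ay, az, ar in body_a:
        if any(math.dist((ax, ay, az), (bx, by, bz)) <= ar + br
               for bx, by, bz, br in body_b):
            touching += 1
    return touching / len(body_a)

# One of the two local spheres is close enough to the remote body:
local = [(0.0, 0.0, 0.0, 0.5), (5.0, 0.0, 0.0, 0.5)]
remote = [(0.8, 0.0, 0.0, 0.5)]
print(contact_fraction(local, remote))  # 0.5
```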
Second part of the Movement Sonification post, where movement is used as input to generate sound. This time the focus is on the DIMI-O, a video synthesiser that converts the movements recorded by a video camera into real-time sounds and music, made by Erkki Kurenniemi, a Finnish artist, musician, inventor, and pioneer in electronic arts and media culture.
// Kurenniemi explaining how it works. Stills from the DIMI-O BALLET showcase; watch it below
In the early 1970s he created an electronic instrument, a video organ, called DIMI-O: an electronic organ with a 32-step sequencer memory unit. A video screen visualised the notes as a score in an optical interface, from which the player could play and manipulate the note sequences held in memory. Through control buttons he could loop the sequence, stretch it, reverse it, duplicate it, speed it up or down, and more. The instrument could be played either from a keyboard or via a video camera: the picture captured by the camera was converted into black and white and then used as a signal to control the notes in the memory unit.
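The optical interface can be thought of as a piano roll read straight from the camera image: each column of the thresholded black-and-white picture is a sequencer step, and each row a pitch. A rough sketch of that reading (an illustration of the principle, not the actual DIMI-O hardware logic):

```python
def frame_to_steps(frame, threshold=128):
    """Read a grayscale frame (list of rows of 0-255 ints) as a score:
    each column is one sequencer step, each row one pitch, and a pixel
    at or above the threshold switches that note on at that step.
    Returns, per column, the list of active row indices (row 0 = top)."""
    h, w = len(frame), len(frame[0])
    return [[y for y in range(h) if frame[y][x] >= threshold]
            for x in range(w)]

# A diagonal bright line becomes a two-step, two-note sequence:
frame = [[0, 255],
         [255, 0]]
print(frame_to_steps(frame))  # [[1], [0]]
```

With a dancer in front of the camera, the bright silhouette sweeping across the frame continuously rewrites this score, which is what produces the shifting sequences heard in the DIMI-O Ballet.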
// Dancer triggering sounds. Stills from the DIMI-O BALLET showcase; watch it below
// DIMI-O functional diagram
Kurenniemi imagined and listed some uses for DIMI-O:
- as a studio instrument, to read graphic music and record sounds through the video input;
- as a live instrument to perform with orchestras;
- as an instrument for dance/ballet performances, where the body movements would manipulate the music;
- in experimental films, where the pictures would be transformed into music.
DIMI BALLET (1971) was a performance made to demonstrate the use of the instrument with video input. This video documentation is part of the DVD “The Dawn of DIMI” (by Mika Taanila), which also contains the documentary film about Erkki Kurenniemi, “The Future Is Not What It Used to Be” (by Mika Taanila), and some extra tracks such as the DIMI-Ballet, six experimental films and some “Basic II” computer animations.
// The “DIMI-Ballet” video. First Kurenniemi explains how the instrument works; then, at 2:30, the performer starts to dance, generating sound sequences.
In 2002 the Avanto Festival in Helsinki paid homage to Kurenniemi and, among other activities, reconstructed the performance DEAL (1971), dancing live with a DIMI-O video synthesiser.
// DEAL intermedia performance, Helsinki 2002 (Ilona Jäntti, Topi Tateishi, Mikko Ojanen)
Audiovisual environments have long been explored to create augmented realities and temporary autonomous spaces. Their magical qualities attracted humans long before the new media technologies arrived: research in media archaeology finds immersive media events, tools and mechanisms since ancient times that were the predecessors of today’s new media technologies. I became interested in these topics some time ago, when I came across the work of Siegfried Zielinski (check his book “Deep Time of the Media”). I talked about him some months ago in this post.
This post collects a series of audiovisual systems that share a similar interaction paradigm: a stretched fabric sensitive to depth. When the audience touches, pushes or pulls the membrane, a camera detects the depth variations and responsive visuals are projected onto the fabric.
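In depth-camera terms, the membrane interaction reduces to comparing the live depth image against the fabric’s resting depth. A minimal sketch, assuming the depth frame arrives as a grid of distances in metres (the noise threshold is an arbitrary assumption):

```python
def membrane_deformation(rest, live, noise=0.01):
    """Subtract the membrane's rest depth from the live depth frame
    (both lists of rows of distances in metres, as seen by a depth
    camera facing the fabric). Positive values mean the fabric was
    pushed away from the camera, negative values pulled toward it;
    differences below the noise threshold are zeroed out."""
    return [[(d - r) if abs(d - r) > noise else 0.0
             for r, d in zip(r_row, d_row)]
            for r_row, d_row in zip(rest, live)]

# A push at one point shows up as a positive deformation there,
# while the untouched fabric stays at zero:
rest = [[1.0, 1.0, 1.0]]
live = [[1.0, 1.2, 1.0]]
print(membrane_deformation(rest, live))
```

The resulting deformation map is what the visuals respond to, each touched region driving the projection at that spot.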
Second part of the previous post about sound generated from shape features: “size”, “colour”, “lines” and other visual and formal elements are analysed and used as data to trigger and synthesise sound. This post collects cases where sound is generated by colour.
Third part of the post related to Motion Sculptures: digital or physical sculptures that result from the dynamics of motion. Dragged through time, forms, shapes and lights are deformed, de-materialised and extended into new forms.
In parts one and two I collected cases where the shapes and forms were built from movement data (position and velocity defining the parameters that control the shapes). In this post the shapes are defined by the sum of different moments captured over time, resulting not in real, final shapes but in optical illusions caused by continuous motion.
This post collects case studies where movement is used as input for the generation of sound. Every kind of movement, from gestures to movement through space or string vibrations, can be algorithmically mapped and sonified for sound creation.
The modern history of visual music/colour music starts in the eighteenth century, inspired by the ideas of Pythagoras (the Music of the Spheres) and by recent advances in physics and optics, especially the work of Isaac Newton, who in his famous publication “Opticks” (1704) defined a correlation between colour and a musical scale. In his research, Newton used a triangular prism to divide “white” light into seven different colours; he later assigned to each of those colours a corresponding musical note.
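The correspondence usually cited from “Opticks” maps the seven spectral colours onto the seven notes of a D-to-D scale (Dorian mode). Sources render the exact assignment slightly differently, so treat this table as the commonly quoted version rather than a definitive one:

```python
# Newton's colour/note correspondence as it is commonly quoted:
# the spectrum divided in proportion to a D-to-D (Dorian) scale.
NEWTON_SCALE = {
    "red":    "D",
    "orange": "E",
    "yellow": "F",
    "green":  "G",
    "blue":   "A",
    "indigo": "B",
    "violet": "C",
}

print(NEWTON_SCALE["red"])  # D
```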
Third part of the “EARLY COMPUTER GRAPHICS” post, with two more computer art pioneers: Herbert W. Franke and Manfred Mohr.
The emergence of the computer with graphic capabilities brought new paradigms, aesthetics and creative processes to art. As Manfred Mohr says, it “became a physical and intellectual extension in the process of creating”. The computer allowed complex processes to be executed quickly and cheaply, giving artists much more freedom for experimentation. Algorithmic and computer-generated work led to artworks never seen before, with endless possibilities, new forms and new creative processes. It also offered creators the “attraction of serendipity: the possibility of an unpredictable but satisfying outcome” (Mike King, 1995).