Outside the labyrinth…looking in at someone waving

SoundLabyrinth demonstration and workshop, Mark Pedersen and Roger Alsop
9–10 June

In a utilitarian studio within UTS’s Bon Marche building is a 6-metre geodesic dome, covered in translucent scrim and with 24 speakers ranged around its three-dimensional interior. Through the scrim, the studio’s ceiling lights and exposed piping are visible; behind the dome, two desks are cluttered with laptops and flanked by piles of gear in sturdy roadcases.

Inside, the sound of wind follows me as I move. Then the pings of bell-birds, running water, cathedral bells, all overlapping one another in waves. And a woman’s voice, in French: I catch fragments that translate as “the world and its abundance”, “open me”, “your wisdom and your light”. I feel I’m in a futuristic spiritual incubator, softly proselytised from a spherical dimension. Before I’m completely transported, I notice a set of wavering synthetic sounds, wobbling in microtones toward or away from melody, not always quite succeeding in rising or falling. But today’s experience of SoundLabyrinth is focused on what’s outside the dome: those cluttered desks and what happens behind them.

SoundLabyrinth is created by Melbourne-based artists Mark Pedersen and Roger Alsop. They explain the technical basics to us: essentially, a Mac camera is used to detect light changes, and therefore movement, within the dome’s inner space, which is divided into 16 segments. Gesture drives the installation. Some gestures are unintentional – walking in, for example. But the system recognises movement and responds with sound; the user realises this is happening, which then elicits intentional gestures in response.
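The review doesn’t detail the artists’ actual software, but the core idea – reading mean luminance change per segment as movement – can be sketched. The 4×4 grid matches the 16 segments mentioned above; the threshold and frame format are assumptions for illustration, not Pedersen and Alsop’s implementation.

```python
import numpy as np

def segment_motion(prev_frame, frame, grid=(4, 4), threshold=10.0):
    """Detect motion per segment by mean absolute luminance change.

    Frames are 2-D greyscale arrays; the space is divided into
    grid[0] x grid[1] segments (16 by default, as in the dome).
    Returns a boolean array marking segments whose average change
    exceeds the (assumed) threshold.
    """
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    h, w = diff.shape
    gh, gw = grid
    # Average the change within each segment of the grid.
    segments = diff[: h - h % gh, : w - w % gw].reshape(
        gh, h // gh, gw, w // gw
    ).mean(axis=(1, 3))
    return segments > threshold

# A still scene, then a bright patch appearing in one corner segment.
prev = np.zeros((64, 64))
cur = prev.copy()
cur[:16, :16] = 255  # 'movement' in the top-left segment
print(segment_motion(prev, cur))
```

In a set-up like this, intentional and unintentional gestures look the same to the camera – both are just luminance change – which is consistent with the feedback loop described above: the system responds to any movement, and the visitor’s awareness of that response is what makes later gestures deliberate.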

Alsop and Pedersen are using separate computer systems to deliver sound to 24 speakers placed throughout the dome; essentially, we’re seeing two independent projects in a shared space. Pedersen’s software system uses ambisonics to place virtual sound objects within the aural space; object ‘placement’ can be driven by his own choices, or he can use the movement of a visitor’s arm, say, as the driver. He achieves this by defining planar location and elevation – on screen, we can see a circle with floor position marked, and a semicircle for setting the height. A decoder then does the maths to determine the balance across all 24 speakers. A complex Venn diagram maps out ‘what plays where’ across the floor space, with each overlapping circle representing a sound.
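Pedersen’s decoder itself isn’t shown, but the general shape of the maths – encode a direction (azimuth and elevation, matching the circle and semicircle on screen), then derive a gain per speaker – can be sketched with first-order ambisonics and a naive ‘sampling’ decode. The eight-speaker horizontal ring, the ambisonic order and the decode method are all simplifying assumptions; the installation uses 24 speakers in three dimensions.

```python
import numpy as np

def encode_fo(azimuth, elevation):
    """First-order ambisonic (B-format) encoding gains for a source
    direction, in radians. W is the omnidirectional component;
    X, Y, Z point along the room's axes."""
    w = 1.0 / np.sqrt(2.0)
    x = np.cos(azimuth) * np.cos(elevation)
    y = np.sin(azimuth) * np.cos(elevation)
    z = np.sin(elevation)
    return np.array([w, x, y, z])

def decode_gains(speaker_dirs, source_azimuth, source_elevation):
    """Naive 'sampling' decode: project the encoded source onto each
    speaker's own direction. speaker_dirs is a list of
    (azimuth, elevation) pairs; returns one gain per speaker."""
    b = encode_fo(source_azimuth, source_elevation)
    return np.array([np.dot(encode_fo(az, el), b)
                     for az, el in speaker_dirs])

# Eight speakers in a horizontal ring (a stand-in for the dome's 24).
ring = [(np.deg2rad(a), 0.0) for a in range(0, 360, 45)]
g = decode_gains(ring, source_azimuth=0.0, source_elevation=0.0)
print(np.argmax(g))  # prints 0: the speaker facing the source
```

The point of the sketch is the division of labour the artists describe: the interface deals only in a direction for the virtual sound object, and the decoder does the maths to turn that direction into a balance across every speaker at once.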

Alsop’s set-up represents parameters less graphically. His screen is a maze of lozenge-shaped boxes, grids and connecting lines, representing sensor sectors, range, area, frequencies, amplitude and more. Alsop’s project uses just eight of the 24 speakers; their outputs are either set up through a compositional sequence or driven manually. In other words, while Pedersen’s ambisonics first maps a spatial sound location and processes the audio accordingly to achieve the result, Alsop’s programming begins by allocating sound to individual speakers, ‘creating’ the spatial location of the sound as a consequence.
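The contrast with the ambisonic approach can be made concrete: here there is no intermediate spatial model at all, just gains assigned directly to speakers, either by hand or by stepping through a sequence. The event format and numbers below are invented for illustration; Alsop’s actual patch is not described at this level of detail.

```python
import numpy as np

N_SPEAKERS = 8  # Alsop's set-up uses eight of the dome's 24

# A compositional sequence as (time_s, speaker_index, gain) events.
# This format is an assumption made for the sketch.
sequence = [(0.0, 0, 1.0), (2.0, 3, 0.8), (4.0, 0, 0.0), (4.0, 7, 1.0)]

def gains_at(t, events, n=N_SPEAKERS):
    """Apply every event up to time t, in order, to a gain vector.
    The speakers *are* the spatialisation: where a sound sits in the
    room is simply a consequence of which speakers carry it."""
    g = np.zeros(n)
    for when, idx, gain in sorted(events):
        if when <= t:
            g[idx] = gain
    return g

print(gains_at(3.0, sequence))  # speakers 0 and 3 are active
```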

Watching visitors move around the space, waving their arms to see what happens, I wonder what makes us want to ‘interact’ with a work like SoundLabyrinth. Having watched delighted responses to Experimenta’s playful, interactive Speak to Me exhibition at the Powerhouse on the weekend, I’m especially curious. Once upon a time, humans lived in a responsive, reciprocal interaction with nature. In cities, are we hungry for the environment’s response to (or awareness of) us?

Experimenting with all these parameters has an anthropological aspect, says Alsop. For example, in his set-up, some people might stand where they can hear beat frequencies across similar tones, while others avoid them. By setting up minimal differences between speakers – around 8 Hz, he suggests – a variety of tonal colours and phases can be achieved as people move in the space. The person almost becomes a synthesiser, he says; their body in effect creates the tones.
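The beating Alsop describes is straightforward trigonometry: two tones a small distance apart sum to a single tone whose loudness swells and fades at the difference frequency. Only the roughly 8 Hz offset comes from Alsop; the base frequency and sample rate below are chosen for the sketch.

```python
import numpy as np

SR = 8000            # sample rate, chosen for the sketch
F0, DF = 220.0, 8.0  # base tone, plus the ~8 Hz offset Alsop suggests

t = np.arange(SR) / SR                      # one second of samples
mix = np.sin(2 * np.pi * F0 * t) + np.sin(2 * np.pi * (F0 + DF) * t)

# Trig identity: the sum equals a tone at the average frequency
# multiplied by a slow cosine envelope -- the audible 'beat'.
carrier = np.sin(2 * np.pi * (F0 + DF / 2) * t)
envelope = 2 * np.cos(np.pi * DF * t)
assert np.allclose(mix, carrier * envelope)

# The envelope changes sign DF times per second: 8 beats.
beats = int(np.sum(np.diff(np.sign(envelope)) != 0))
print(beats)  # prints 8
```

In the dome, the ‘mix’ happens in the air rather than in code: where a listener stands determines how the tones from different speakers combine at their ears, which is why moving through the space changes the colour of the sound.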

Both artists are fascinated by gesture, and by the capacity of computers to read gestures and respond. Alsop says that just about everything we do – even thought – is a gesture: “humans are a gesture recognition system”. Pedersen speaks of gesture in the context of encountering the other through immersive spaces – where what you experience can be “both foreign and tactile”.

What gestures mean to people and how much this varies can’t be measured by a computer, Alsop says, but over time it can be ‘taught’ incrementally. A system such as his own for SoundLabyrinth is like a piano – it offers a finite number of choices or notes, but infinite combinations are possible. Human interpretation, on the other hand, is like a cello: multitimbral, infinite.

