An Interview with Sound Designer Mark Grey
When American composer John Adams was commissioned by the New York Philharmonic to create a work commemorating the first anniversary of the September 11, 2001, attacks on the World Trade Center and Pentagon, he knew from the outset that he wanted a setting with music, choir, recorded voice and streetscape sounds that felt "otherworldly." His goal was to convey the presence of many souls and their collected energy in order to create what he termed a "memory space," where the listener could reflect on grieving and loss.
The resulting composition for orchestra and choir, "On the Transmigration of Souls", received its debut performances at Lincoln Center in New York under the baton of famed conductor Lorin Maazel. Since then, the piece has won the Pulitzer Prize for Music and been performed in many of the world's most prestigious concert halls, such as the Royal Albert Hall in London. In 2004, it was presented as part of the Sydney Festival program in the Concert Hall of the Sydney Opera House.
In conceiving how the composition's disparate elements would be woven together into a cohesive aural vision, Adams drew on the technical and artistic assistance of San Francisco Bay Area sound designer and composer Mark Grey, a San Jose State University graduate with degrees in both composition and electro-acoustics. Meyer Sound spoke with Grey in Sydney about the artistic collaboration and technical process that unfolded to present this extraordinary work.
At Sydney Opera House, Transmigration was presented with a 5.1 surround sound system, a new and unusual format for orchestral performance. Grey opened the interview with the reasons for that decision: "We were attempting to take a work for conventional orchestra and chorus and modify it in a way that has never been done before on a major stage. One of the most profound spaces for people to be with themselves and their thoughts is a cathedral, and I figured the best way to go about creating this kind of space for the music was to use surround speakers. John (Adams) already had the idea of using pre-recorded sounds with the orchestra, so what better way to approach it than with a surround environment?"
How do you begin to frame the work? Is it a collaboration?
It's collaborative, though John has the musical concept and then invites me to work on the project. He is very knowledgeable about electronic music: he started writing with technology when he was younger, before he began working with the San Francisco Symphony and moved into the large orchestral format. He knows what filters and envelope generators are. He understands synthesis technology, synthesisers, samplers and those architectural tools very well, so it's great: we can talk in both a musical language, because I have a compositional background, and a technical language. When it gets to a certain level with the technology, though, he doesn't want to go there, and that's where I come in.
You are probably the first person to come into this hall and focus two line array systems at the stage. Can you take us through the sound system design for this piece and how it was conceived?
The piece was always conceived to be performed with an LCR system at the stage, with mid-auditorium speakers and rear surrounds in a total of seven discrete zones. What changes is the physical nature of the spaces and what equipment may be available to me, but I always specify Meyer Sound products for this piece. It's a little daunting always walking into a space and having to explain why it should be set up this way, but I really enjoy the challenge of doing something different.
Here, the system comprises main left and right arrays, each consisting of 10 M2D (compact curvilinear array loudspeakers) with an M3D-Sub directional subwoofer flown at the top of the array. The centre channel is eight M1D (ultra-compact curvilinear array loudspeakers); the mid-auditorium loudspeakers are two UPA-1P (compact wide coverage loudspeakers) per side, running in stereo; and, for the rears, we have stereo arrays of eight M1Ds each. The front fill is eight UPM-1P (ultra-compact wide coverage loudspeakers), also in a stereo configuration.
Can you compare using what is basically a 5.1 line array set up in a concert hall to using a conventional system?
In the performance here, having the line array is so great, as it has a tight, focused sound that can punch through the murkiness all large halls have, and lets me reshape the room by tuning the system to avoid exciting the nodal points.
The M2D has a clarity that is exponential, and the similarity of sound between all the different Meyer line array elements helps me integrate the intensity coming from the stage and create an image of clarity that can be pushed quite hard, while still feeling acoustic.
The transparency of the amplified sound seems central to the success of this design.
Absolutely. Given the profile of this piece, I can afford to specify exactly what I need. When I travel with Kronos Quartet, we don't always get that luxury, but, to help with that, we travel with four UPM-1Ps so I can always maintain that integrity and transparency from the stage. Here, it's been fantastic, as the local crew have a good knowledge of all the products in their inventory, and the system has been set up and zoned in such a logical way that it just makes my job much easier. I have noticed that, since Meyer now manufacture their own drivers in-house, the consistency between the products is just astounding. The great thing about that consistency is that, even in a venue like the Concertgebouw in Amsterdam, we can use the tiny MM-4 (miniature wide range loudspeakers) for surrounds, due to the physical constraints of the flat wall spaces, and still achieve the same result.
How did you learn and develop the techniques you are using?
Jonathan Deans was sound designer for all of John Adams' early work, but in 1995, John had a new opera and Jonathan was too busy (to work on the project), so he had Francois Bergeron and myself work on it, and I learned so much from (Deans and Bergeron) about how to localize sound using loudspeaker placement and equalization to create the illusion it is unamplified.
Broadway musicals are, to my taste, always over-amplified, and the vocal quality is usually brittle. What I attempt to do with opera is use the sound system to clarify the transient qualities of the soloist and chorus voices, and orchestra instruments, if spot mics are used in the pit. As the singers are so good to begin with, you just let the room do the majority of the work and clean up diction with lavs, or sometimes PCC area mics. With Jonathan and Francois, they both put a high priority on achieving natural vocal sounds and I learned much about the importance of maintaining the vocal integrity. I apply this approach on the opera stage as well as the orchestra stage by using a localized sound source like front fills. UPMs and M1Ds are fantastic for this as they are capable of reproducing all kinds of information from light and airy sounds to darker tones.
When I first heard that the piece would be mixed in surround, I envisaged you creating an ambience with a fixed orchestral balance that was pretty much left static for the duration, but in fact your mix is very dynamic and you are using multiple effects and panning.
It's to thrust the energy of the orchestra from the stage, then use reverberation to soften the image so you can then throw it farther, because the image from the reverb is coming in from a lot of different directions. Basically, I have the direct sound at the stage area, and 10 percent (effects/dry sound mix) in the LCR arrays, then the left and right mid-surround speakers are bled with, say, a 60 percent (effects/dry sound mix). Sometimes instruments are in the mid-auditorium speakers to give them a sense of being in a different space, but you get the clarity of the pitch.
It depends on the shape that I am trying to create: in certain venues, like the Concertgebouw, I don't need to (do this), but here we have these little alcoves for placing the mid-auditorium speakers and they actually work as a resonator box for the violins. I can bring the sound from a different side and you actually hear these lines almost like sound clouds. The strings are playing long pitches while the rest of the orchestra is playing much more complex passages, so you have these multiple layers going by with long sustain notes. By the time we get to the rear speakers it's 80 percent reverb and I am sending no direct microphone signal to the rear speakers at all.
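The zone-by-zone balance Grey describes — mostly direct sound at the stage, progressively more reverberation toward the rear, and no direct microphone signal in the rears at all — can be sketched as a simple mixing function. This is an illustrative sketch only, not Grey's actual console configuration; the function and zone names are invented for the example, and the percentages are the ones quoted in the interview.

```python
# Effects/dry mix per loudspeaker zone, from the interview:
# ~10% in the LCR arrays, ~60% in the mid-auditorium surrounds,
# 80% (reverb only, no direct signal) in the rears.
ZONE_WET_MIX = {
    "lcr": 0.10,
    "mid_surround": 0.60,
    "rear": 0.80,
}

def zone_output(dry: float, wet: float, zone: str) -> float:
    """Blend a dry (direct mic) sample with a wet (reverb) sample
    for a given zone. The rears receive no direct signal at all."""
    mix = ZONE_WET_MIX[zone]
    if zone == "rear":
        return mix * wet  # reverb only, as Grey describes
    return (1.0 - mix) * dry + mix * wet

# The same source reads as direct-heavy at the stage and
# progressively more diffuse toward the back of the hall.
stage = zone_output(1.0, 0.5, "lcr")
mid = zone_output(1.0, 0.5, "mid_surround")
rear = zone_output(1.0, 0.5, "rear")
```

The point of the structure is the one Grey makes: the listener localises the image at the stage because that is where the dry, transient-rich signal lives, while the surrounds carry mostly reverberant energy.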
So you are not really concerned with imaging the system to a given point using delays for this piece, as would be the standard approach?
With the thrust that comes from the stage, I can get the LCR system to push the image and I can open up two microphones that are down stage and send them to a reverb processor, which then gets the kind of general mix the conductor is hearing, and push that reverb out up the side of the house. I only delay the mid or rear speakers if there is a difficulty with the feel of the reverb in the room; if I can hear the reverb in the back then maybe I still need the SPL to push it out but I just need to delay it back a bit more. With the sound effects, voices and replay we are using, and with seven discrete zones, it doesn't really matter if it is 50 milliseconds late or something.
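The delay question Grey waves off can be put in rough numbers. The back-of-envelope sketch below (distances are invented for illustration; only the ~343 m/s speed of sound is a physical constant) computes the delay a surround speaker would need so its output arrives no earlier than the acoustic sound from the stage — and shows why a 50 ms error on diffuse, ambient material is forgiving.

```python
SPEED_OF_SOUND_M_S = 343.0  # at roughly room temperature

def alignment_delay_ms(stage_to_listener_m: float,
                       speaker_to_listener_m: float) -> float:
    """Delay (ms) to add to a surround speaker so its sound arrives
    no earlier than the acoustic sound from the stage."""
    dt = (stage_to_listener_m - speaker_to_listener_m) / SPEED_OF_SOUND_M_S
    return max(0.0, dt * 1000.0)

# A hypothetical rear seat ~30 m from the stage but only ~4 m from
# a rear array: the path difference alone is on the order of 75 ms,
# so reverb arriving 50 ms "late" barely shifts the perceived image.
delay = alignment_delay_ms(30.0, 4.0)
```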
What is the effects processing set up for the show?
I am using two processors: a Lexicon PCM 91, and Max/MSP from Cycling '74, a software package designed by computer scientists who work in music and acoustics, available for both Mac and PC now, and used with any standard FireWire audio interface. The program will allow you to do anything you could mentally conceive in the audio world so far. You can apply filters and delays, and from these create reverbs, flanging and chorus effects. These audio processes are then the basis of sound synthesis and manipulation, either processing the real-time computer audio input (multi-channel with very low latency), or first creating soundfiles and then processing the stored audio data. Control of all synthesis parameters can be done by other audio sources or any MIDI device.
How is the software package integrated into the sound design?
Max/MSP is driving playback levels of the pre-recorded spoken word and cityscape sounds for the entire piece, as well as reverb processing on selected orchestra microphones. Eight outputs of the FireWire audio interface feed console input channels: two reverb outputs and six playback outputs. Through the FOH console's matrix, I feed the respective playback zones to loudspeaker groupings, and decide where I locate the custom Max/MSP tuned reverb we created. This custom reverb processing in Max/MSP is basically very long freeze-frame tails: selected orchestra and chorus microphones are fed into the computer, pushed through a multi-band vocoder, then through long reverb tails, all done in Max/MSP. I can then harmonically tune and change all of the bands of the vocoder, in real time, as the orchestra performs. It's like a sequence of block chords moving along, tuning the vocoder in real time. The result is something we call "Tuned Space." Harmonically tuning the reverb to the music creates an image of the concert hall walls disappearing. The playback zones are a one-to-one "zone to speaker" relationship, though to fill in the gaps I sometimes "cross blend" zones, or Max/MSP does. The PCM 91 is used as reverb processing only.
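The core idea of "Tuned Space" — broadband input decaying as a harmonically tuned tail — can be caricatured outside Max/MSP. The sketch below is NOT Grey's patch; it stands in for the vocoder-plus-long-reverb chain with a bank of feedback comb resonators, one per chord pitch, each set to decay about 60 dB over a chosen tail length. The sample rate, chord, and function names are all assumptions made for the demo.

```python
SR = 8000  # deliberately low sample rate to keep the demo fast

def tuned_tail(signal, chord_hz, decay_s=8.0):
    """Sum of feedback comb resonators, one per chord pitch, so a
    broadband input rings out as a tuned chord. Feedback gain is set
    for roughly -60 dB of decay after `decay_s` seconds."""
    out = [0.0] * len(signal)
    for f in chord_hz:
        period = max(1, round(SR / f))  # delay-line length in samples
        # per-loop gain g such that g^(SR*decay_s/period) = 10^-3
        g = 10 ** (-3.0 * period / (SR * decay_s))
        buf = [0.0] * period
        for n, x in enumerate(signal):
            y = x + g * buf[n % period]  # comb: y[n] = x[n] + g*y[n-period]
            buf[n % period] = y
            out[n] += y / len(chord_hz)
    return out

# Excite the "space" with a single click; the tail rings at the chord.
click = [1.0] + [0.0] * (SR - 1)
tail = tuned_tail(click, chord_hz=[220.0, 277.2, 329.6])  # A-major-ish
```

As in Grey's description, retuning the resonator frequencies over time would step the ambience through a sequence of block chords — the real patch does this with vocoder bands and much longer freeze-frame tails.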
During the show I was hearing trails of sound moving up and down the hall. What was happening there?
What you are hearing is "Tuned Space" and how Max/MSP is controlling the reverb to speaker relationship. It's (about) balancing: Sometimes I'll use the celli mics to capture something that is happening mid-stage in the woodwinds, and feed that to the computer, then, at times, there are these very soft first violin notes and those I send directly to the computer. You then really hear this tail in the high strings and, as an audience member, you know something is happening, but you're not quite sure what. With the rest of the orchestra the same thing is happening but on a much more subtle level. The ear is amazing: you only have to give it the idea once and it knows.
I noticed that the PCM 91 had about 29 seconds of reverb decay dialed up on it. How do you control that?
The closer the reverb gets to the stage from the main sound system, (the more) I will get feedback problems, so I spread that reverb to the mid-auditorium and rear speakers, and I selectively feed instruments to it, depending on the musical passage. It's how I create those long trails.
Looking to the future do you see this style of treatment and surround sound in the live environment becoming more accepted?
I think it will, and it has been happening slowly. There are systems like the LARES system, but a lot of the places that have them are not using them much. Not that it's a bad conceptual design; it just seemed like the wrong time for it. But the concept of creating an ambience for an audience, tailored to a piece of music and the particular (performance) space, is a good one. The impact of transient information, especially with an orchestra, is such that you really don't have to give much. The problem is more that, as soon as you put microphones on a stage with an orchestra in a concert hall, you open up all kinds of political and artistic issues.
My observation, though, is that the expectations of the average listener are changing, and the classical and opera worlds will need to keep pace with that to maintain a fresh audience.
Yes, the 20th century created recorded music, and the onset of digital technology means that people have access to great sound in their homes, and their expectations are changing daily. People can listen to their favorite passages of music over and over, where, before, people would experience a show as an event, which would finish, and then they might not have a chance to hear that music again for a couple of years. This is totally changing the approach to music and music-making for modern composers. John Adams, for example, tries to apply technology in his compositions because of that, and he is trying to push forward the concept of modernizing the concert hall stage a little bit.
When you are constantly touring and coming across new venues how do you communicate with the staff at those venues and get a sense of what lies ahead?
Typically, I will e-mail out a suggested design for the hall, going on what information I have. Here (at Sydney Opera House), it was easy, as they have a great virtual tour of the spaces on their Web site, so I was able to have a strong sense of how the design would best work. I then wait for the reply e-mail and, from that, I usually get a feeling for whether I am going to need more help or not. When specifying Meyer for these performances, I know I will get great support from everyone, right through to John and Helen Meyer themselves, who seem to have a great understanding of the artistic vision and of seeing it all come to fruition. Here at the Sydney Opera House it has been smooth from day one; the crew have a really strong work ethic and good knowledge of the hall and the speakers they are using; it's a really tight ship. I've been really happy here.