March 12: More material and superior capture

Lately, I have been generating more material and experimenting with different approaches to constructing the form of my drafts. All three of the samples here are the products of slightly different approaches.

This post also marks the first collection of sounds produced with my new interface, the Steinberg UR44: certainly a huge improvement over my outmoded Tapco Link.USB. Aside from the superior preamps, it offers more inputs and four line outs, which will be important once I reach the stage of mixing for quadraphonic diffusion.

Here is an excerpt from an improvisation using multiple high-feedback harmonizer/delay effects, and my attempt at approaching the often-assigned task of composing the essence of cracked-out monkeys. The approach was to record a few takes of the dry signal along with two distinct effected channels, pick the best take, and essentially crop a mixdown of the tracks around the inevitable golden section.
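
For what it’s worth, the golden-section point is just the clip length scaled by 1/φ ≈ 0.618. A trivial sketch of the arithmetic (the 180-second mixdown length is only an illustrative value, not the actual duration of my take):

```python
# Placing a crop's climax around the golden section of a mixdown.
PHI = (1 + 5 ** 0.5) / 2            # golden ratio, ~1.618

clip_seconds = 180.0                # hypothetical mixdown length
golden_point = clip_seconds / PHI   # ~111.2 s in from the start

print(f"aim the climax near {golden_point:.1f} s of {clip_seconds:.0f} s")
```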


O&GBR sample

Here I plotted out alternating sections in Ableton’s arrangement view, clearly allocating portions of time to one flavor of processing (controlled by automation) and one performance technique on the guitar. The sections and their durations were governed by an impromptu piece of prose that I typed up, believing that if I applied my ability to form ideas in English to my construction of musical form (something I struggle with), I could make more endearing music. I’m pretty sure this was a misapplication at this stage, because it was a lot of work for a relatively small amount of music. Perhaps the approach can help me finish this work, though.

Here is a kind of hybrid of the above approaches, where I overdubbed effected string scrapes on a separate track over quasi-serial improvisations through a beat repeater.

February 19th Drafts

Last week’s assignment called for building bulkier textures with the effect chains created earlier, and making somewhat musical arrangements with multiple chains. Arrangements 1 and 3 are fairly representative of what I would like to do with the beginning and ending, respectively. Arrangement 2, however, requires a lot more work in the way of mixing and automation of processes.

Arrangement 1

Arrangement 3

12 Patch Challenge

In last week’s meeting, I at last showed Professor Snyder the recording featured in my last post. The problem, aside from the performance and poor quality of the recording, is that it is far too one-dimensional and doesn’t include any interactivity. To get me thinking about other timbres, he assigned me the task of doing what I had already done with convolution reverb (that is, improvise with and explore its processes), but twelve times over with contrasting patches.

Each patch was designed away from my instrument as a separate project file in Ableton, challenging me to imagine a brand new timbre every time. After a series of patches was composed, I would then enter the university MIDI lab and process my guitar using one of the AudioBox 22VSL interfaces and a single ribbon microphone that I built from an Austin Microphones kit. Since this mic requires a lot of gain for a usable signal, there is a lot of self-noise from the AudioBox present in some of the recordings; sometimes this noise is incorporated into the improvisation. Overall I am pleased with the variety of sounds I was able to get, only once or twice straying too far from the initial “idea” I had in my head.

Patch 1 – Two tracks in one: a simple effect rack containing three instances of MaxSpectralDelay, delaying at three different rates: 2.5″, 5″, and 10″. This is accompanied by another track looping a low tone into a long-tail convolution reverb.
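
For anyone curious what three delays in parallel amount to, here is a minimal numpy sketch, substituting plain delay lines for the spectral device and noise for the guitar; the 44.1 kHz rate and 50% wet mix are assumptions:

```python
import numpy as np

SR = 44100                     # assumed sample rate in Hz
RATES = (2.5, 5.0, 10.0)       # the patch's three delay times, in seconds

def delay(x, seconds, out_len, mix=0.5):
    """Plain (non-spectral) delay line: blend the dry signal with a shifted copy."""
    out = np.zeros(out_len)
    d = int(seconds * SR)
    out[:len(x)] += (1 - mix) * x    # dry signal
    out[d:d + len(x)] += mix * x     # delayed copy
    return out

x = np.random.randn(SR)                    # one second of noise as a stand-in input
n = len(x) + int(max(RATES) * SR)          # common output length
y = sum(delay(x, t, n) for t in RATES) / len(RATES)  # parallel chains, summed
```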

Patch 2 – A failed attempt at emulating the string textures from the “Nocturne” of Britten’s “Serenade for Tenor, Horn, and Strings;” uses an LFO to cycle through the lanes of an effect rack containing various types of harmonizers.
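
The lane-cycling idea can be sketched in a few lines, substituting naive resampling “harmonizers” for the real devices (the LFO rate and shift ratios here are arbitrary):

```python
import numpy as np

SR = 44100  # assumed sample rate in Hz

def chain_select(x, chains, lfo_hz=0.25):
    """An LFO sweeps an index across the rack's lanes; each output sample is
    drawn from whichever chain is currently selected."""
    t = np.arange(len(x)) / SR
    idx = (((np.sin(2 * np.pi * lfo_hz * t) + 1) / 2) * len(chains)).astype(int)
    idx = idx.clip(0, len(chains) - 1)
    processed = np.stack([fn(x) for fn in chains])   # run every lane in full
    return processed[idx, np.arange(len(x))]

def shift(ratio):
    """Toy 'harmonizer': naive pitch shift by resampling, artifacts and all."""
    return lambda x: np.interp(np.arange(len(x)) * ratio, np.arange(len(x)), x)

y = chain_select(np.random.randn(SR), [shift(0.5), shift(1.5), shift(2.0)])
```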

Patch 3 – An effect rack that uses a reverse gate (it passes the signal only when it falls below a certain threshold) alongside a dense reverb track that is only readily noticeable when the gate is “shut” at high volume.
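
The reverse gate itself is simple enough to sketch (the window size and threshold are arbitrary):

```python
import numpy as np

def reverse_gate(x, threshold=0.1, window=512):
    """Inverted noise gate: pass audio only while its short-term RMS is
    BELOW the threshold (the opposite of a normal gate)."""
    out = np.zeros_like(x)
    for start in range(0, len(x), window):
        seg = x[start:start + window]
        if np.sqrt(np.mean(seg ** 2)) < threshold:   # quiet enough to pass?
            out[start:start + window] = seg
    return out

quiet_only = reverse_gate(np.random.randn(44100) * 0.05)  # mostly passes through
```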

Patch 4 – Nearly straight-up granular synthesis, with an LFO added to modulate the grain rate (the interval between grains), and a second LFO modulating THAT oscillator’s rate in turn.
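
A naive sketch of that double-LFO arrangement (all rates, depths, and grain sizes here are invented for illustration):

```python
import numpy as np

SR = 44100  # assumed sample rate in Hz

def granular(x, grain_len=2048):
    """Windowed grains pasted at intervals set by a fast LFO whose own rate
    is swept by a second, slower LFO."""
    out = np.zeros(len(x))
    pos = 0
    while pos + grain_len < len(x):
        out[pos:pos + grain_len] += x[pos:pos + grain_len] * np.hanning(grain_len)
        t = pos / SR
        rate = 2.0 + 1.5 * np.sin(2 * np.pi * 0.1 * t)   # slow LFO sweeps 0.5-3.5 Hz
        depth = np.sin(2 * np.pi * rate * t)             # fast LFO (naive phase)
        pos += int(SR * (0.05 + 0.04 * depth))           # 10-90 ms between grains
    return out

y = granular(np.random.randn(2 * SR))   # two seconds of noise as a stand-in
```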

Patch 5 – Bit-reduced noise, with an LFO rapidly and randomly modulating its output level.
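
Bit reduction plus a random sample-and-hold level modulation is only a few lines (the bit depth and 10 ms hold time are guesses):

```python
import numpy as np

def bitcrush(x, bits=4):
    """Quantize amplitude down to the given bit depth."""
    levels = 2 ** bits
    return np.round(x * levels) / levels

rng = np.random.default_rng()
noise = rng.uniform(-1, 1, 44100)               # one second of noise at 44.1 kHz
gain = np.repeat(rng.uniform(0, 1, 100), 441)   # new random level every 10 ms
y = bitcrush(noise) * gain                      # crushed noise, randomly modulated
```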

Patch 6 – A simple instance of MaxSpectralDelay producing higher-register frequencies more slowly than low- and middle-register ones (which are themselves permitted higher feedback), accompanied by an LFO on panning.
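
The register-dependent delay can be approximated with an FFT band split (the 1 kHz split point and both delay times are invented, and the device’s per-band feedback is omitted):

```python
import numpy as np

SR = 44100  # assumed sample rate in Hz

def band_delay(x, split_hz=1000, low_delay=0.2, high_delay=0.8):
    """Split the signal at split_hz with an FFT mask, then hold the high band
    back longer than the low band."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / SR)
    low = np.fft.irfft(np.where(freqs < split_hz, spectrum, 0), len(x))
    high = np.fft.irfft(np.where(freqs >= split_hz, spectrum, 0), len(x))
    d_lo, d_hi = int(low_delay * SR), int(high_delay * SR)
    out = np.zeros(len(x) + d_hi)
    out[d_lo:d_lo + len(x)] += low
    out[d_hi:d_hi + len(x)] += high
    return out

y = band_delay(np.random.randn(SR))   # noise stands in for the guitar
```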

Patch 7 – A cloud of MaxSpectralHarm instances grouped into an audio effect rack, with some chorus applied.

Patch 8 – Ableton’s stock Saturator effect run into a vocoder set to pitch-tracking and using a square-wave oscillator. Scelsi-worship.
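
A crude single-band illustration of the amplitude side of vocoding (the real patch tracks the guitar’s pitch; here the square carrier is fixed at an arbitrary 110 Hz):

```python
import numpy as np

SR = 44100  # assumed sample rate in Hz

def crude_vocoder(x, carrier_hz=110.0, window=512):
    """One-band 'vocoder': impose the input's amplitude envelope on a square
    carrier. A real vocoder does this per frequency band."""
    env = np.sqrt(np.convolve(x ** 2, np.ones(window) / window, mode="same"))
    t = np.arange(len(x)) / SR
    carrier = np.sign(np.sin(2 * np.pi * carrier_hz * t))   # square wave
    return env * carrier

y = crude_vocoder(np.random.randn(SR))   # noise stands in for the saturated guitar
```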

Patch 9 – A sort of Instant-Reich texture that couples two beat repeat effects (one transposing down an octave via MaxSpectralHarm), blended with a little high-feedback delay.
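
A toy version of the stutter-plus-octave layering (the slice length and repeat count are arbitrary, and the octave drop is crude resampling rather than MaxSpectralHarm):

```python
import numpy as np

SR = 44100  # assumed sample rate in Hz

def beat_repeat(x, slice_sec=0.125, repeats=2):
    """Naive beat repeat: chop the input into slices and stutter each one."""
    n = int(slice_sec * SR)
    slices = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
    return np.concatenate([np.tile(s, repeats) for s in slices])

def octave_down(x):
    """Crude octave drop via half-speed resampling (also doubles the duration)."""
    return np.interp(np.arange(len(x) * 2) / 2, np.arange(len(x)), x)

x = np.random.randn(SR)
layered = beat_repeat(x)[:SR] + octave_down(beat_repeat(x))[:SR]  # two stuttering layers
```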

Patch 10 – Ableton’s Erosion effect plugged into a ring modulator and ping pong delay.
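
Both halves of that chain are classic textbook effects; a bare-bones sketch with noise standing in for Erosion’s output (the carrier frequency, delay time, and decay are arbitrary):

```python
import numpy as np

SR = 44100  # assumed sample rate in Hz

def ring_mod(x, carrier_hz=220.0):
    """Multiply the signal by a sine carrier, yielding sum/difference tones."""
    t = np.arange(len(x)) / SR
    return x * np.sin(2 * np.pi * carrier_hz * t)

def ping_pong(x, seconds=0.3, repeats=4, decay=0.6):
    """Echoes that alternate between the left and right channels."""
    d = int(seconds * SR)
    out = np.zeros((2, len(x) + repeats * d))
    out[:, :len(x)] += x                      # dry signal in both channels
    for i in range(1, repeats + 1):
        out[i % 2, i * d:i * d + len(x)] += (decay ** i) * x  # alternate sides
    return out

y = ping_pong(ring_mod(np.random.randn(SR)))
```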

Patch 11 – No effects. All processing is performed by Ableton’s latency-compensating track delay and unusual routing between redundant tracks, producing strange harmonics.
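
Those “strange harmonics” are comb filtering: summing a track with a slightly delayed duplicate of itself notches out evenly spaced frequencies. A minimal sketch (the 5 ms offset is a stand-in for whatever the track delay was actually set to):

```python
import numpy as np

SR = 44100  # assumed sample rate in Hz

def comb(x, delay_ms=5.0, gain=0.9):
    """Mix a signal with a short-delayed copy of itself (a comb filter)."""
    d = int(SR * delay_ms / 1000)
    y = np.copy(x)
    y[d:] += gain * x[:-d]
    return y

# Notches fall at odd multiples of 1/(2 * delay): 100 Hz, 300 Hz, ... for 5 ms.
y = comb(np.random.randn(SR))
```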

Patch 12 – An effect rack filled with enough instances of Corpus (Ableton’s resonator emulator) to process the signal through all of its surface types simultaneously (membrane, pipe, string, tube, etc.). The recorded material is a selection from the first of Nuccio D’Angelo’s “Due Canzoni Lidie” (1987). An unusual “aura” in the high register seems to be produced by the noise of the AudioBox, but it is not altogether unpleasant (if utterly beyond my control).
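
Corpus’s physical models are well beyond a blog sketch, but the “many resonators in parallel” layout can be suggested with feedback combs tuned to arbitrary frequencies:

```python
import numpy as np

SR = 44100  # assumed sample rate in Hz

def resonator(x, freq_hz, decay=0.995):
    """Feedback comb tuned to freq_hz: a crude stand-in for one Corpus mode."""
    d = max(1, int(SR / freq_hz))
    y = np.copy(x)
    for n in range(d, len(y)):
        y[n] += decay * y[n - d]
    return y

x = np.random.randn(SR // 4)   # a quarter second of noise as the exciter
# Several differently tuned resonators running in parallel, then summed:
y = sum(resonator(x, f) for f in (110.0, 220.0, 330.0, 550.0)) / 4
```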

Tempest for guitar and electronics: a very rough draft

My main motivation for playing and composing for the classical guitar has always been its peculiar combination of portability and polyphony. I romanticize the family of self-accompanying instruments (piano/other string-klaviers, harp, lute/guitar, organs, mallet percussion…) as being part of their own tradition of music that is distinct – though not alienated – from chamber and orchestral genres, and the composers that I most respect are typically those that are able to accommodate this family’s propensity for introspective, “solitary” atmospheres. The guitar’s relatively quiet dynamics are a slight inhibition, but the electroacoustic medium is an equalizer that makes it a serious rival for its larger siblings. Though I expected that I would attempt to write a very sentimental work for this project, I’ve found that I am more attracted to the cacophonous possibilities offered by the electronics; the attendant character of this draft has more in common with how I write for very large forces.

Attached is a recording made in Ableton of an improvised series of gestures that will eventually be properly arranged and scored. The core timbre driving this section of the piece is inspired by the opening measures of “Alta Paz” by Argentinian composer/guitarist Quique Sinesi, and consists of a very light and rapid brushing of the strings with the flesh of the fingers. The resulting flurry of sound has a subdued attack that brings orchestral strings to mind, and the variety of dynamics available makes it a fun and musical technique to improvise on. So far there is only one channel processing the guitar’s signal, where I have an array of Max For Live devices that harmonize the fleshy strumming a P5 above as well as a M7 and a M9 below (with the lower reproductions feeding back into themselves). This signal is then bused into a return track containing the M4L Convolution Reverb Pro unit, set to produce a nearly 56-second-long tail.
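
For reference, assuming the devices shift in equal-tempered semitones, the frequency ratios behind those three harmonizer intervals work out as follows:

```python
# P5 up = +7 semitones, M7 down = -11, M9 down = -14
for name, semitones in [("P5 up", 7), ("M7 down", -11), ("M9 down", -14)]:
    print(f"{name:8s} ratio = {2 ** (semitones / 12):.4f}")
# P5 up    ratio = 1.4983
# M7 down  ratio = 0.5297
# M9 down  ratio = 0.4454
```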

There is an awful lot of low end here, but future drafts will use more channels that specifically process chord hits, string pull-offs, etc., to fill out more of the spectrum. The end product will likely consist entirely of live-performed guitar, with the Softstep used to redirect the signal wherever it needs to go to establish a complete picture.

Gearing up

The second half of this year-long study will have me shifting focus from my yet-to-be-discussed art song to my piece for solo processed classical guitar, and I am well rewarded for putting this project off until the Spring, because until now I did not have sufficient computing power at home to accomplish anything.

Prior work on electroacoustic projects at home was done on severely dated hardware; almost all of the original editing for The Woman in Black was done on a desktop I had built in 2009 around a dual-core CPU that was budget-priced even then, using whatever bits of RAM I had lying around from a previous build (anyone who has attempted to do their projects on the older iMacs in the UMW MIDI lab will know my pain). My mobile situation was even worse, and the only reason the production went as smoothly as it did was that Symantec Corp. had better things to do than hound my father every week to return his previous work laptop: a barely aged Lenovo Thinkpad with an Intel i7 and ample memory.

Fast-forward a few months, and the holiday season had me upgrading to an HP Envy that appeared to be built as a Black Friday special for Best Buy. The cost was kept low at the expense of video performance (it has simple integrated graphics), but an i5 and 8 GB of RAM are absolutely enough to tide me over until I can justify purchasing a new MacBook Pro. Going by both my cursory assessment of the new hardware (the entirely synthesized demo tune provided with Ableton 9 pegs CPU usage at around 50%, while my previous laptop registered a dubious ~200%…) and the experiments I’ve performed with my own writing, I can see this platform handling everything I throw at it for the next two years.

Working on this guitar project also means that I can finally put to use the devices graciously purchased by the University through its Undergraduate Research program: a Keith McMillen Instruments Softstep foot controller and a 3rd-generation Apple iPad. The Softstep is desirable as the ultimate solution for launching audio clips and triggering processes mid-score while my hands are busy with the guitar, with the iPad controlling “set and forget” parameters during sound check. On paper it’s perfect, but all of my experience getting the Softstep to communicate with its host computer has led to frustration, so it has not provided the degree of liberated improvisation during the writing process that I had hoped for. I’ve reached the point where I refuse to work with the device until I have all of the electronics and performance score realized; only then will I begin to integrate it into the project.

The next major purchase in the pipeline before I am truly outfitted for this business is a new audio interface. The Tapco Link.usb purchased around the time of my desktop’s construction is similarly dated, and has a signal-to-noise ratio that does no favors for my gain-starved DIY-built ribbon microphones. I intend to go big and acquire a premium interface along the lines of a Focusrite Forte or RME Babyface, which will keep my stereo conversion upgrade-proof for some time.

Sound design for “The Woman In Black”

Early on in the semester, while planning out the course of this year-long individual study, I was asked if I would be interested in producing the crucial sound design component of a local production by The Rude Mechanicals of Stephen Mallatratt’s stage adaptation of “The Woman in Black.” I accepted, knowing that it was to be a huge undertaking – I was expected to write music for certain scenes in the play in addition to collecting sounds for the 53 cues called for in the script – and if not for the help of my two assistants (Lindsay Bulls and Kyrsten Beretsky), the 20+ hours a week I spent would certainly not have been enough to bring everything together in time.

While working on this production diverted attention away from the two major compositions that I set out to complete through this individual study, the experience has proven invaluable by informing me of the reality of life as a working musician in a way that studio-oriented projects cannot. As de facto lead Sound Designer and Music Director, I had to learn how best to delegate time and duties to my two assistants so that I could consistently show up to the actors’ rehearsals with new material; this involved scheduling meetings and sound walks, and giving each person on the team basic composition assignments and sounds to find. I had limited leadership experience prior to this production, and while I had plenty of failures, I feel that I rose to the occasion and learned how to keep everyone on track toward completing our contribution to the show.

On the composition side of things, producing tunes for the play has reinforced the virtue of being able to work fast. The music was secondary to the sound design and mostly provided ambiance for the quieter portions of the narrative; however, producing and recording this music was some of the most fun I had throughout the entire process. I found that, thanks to an immense workload and impending deadline, I had no trouble creating a passable emulation of a theremin/ondes martenot and quickly producing a simple melody that could subtly repeat for as long as the action on stage required. Luckily, a draft for Lindsay’s synthesizer project in MUTH 370 – Electronic Music had a distinct two-voice melody that she and Kyrsten could re-record on their flutes. Kyrsten’s own synth project didn’t quite suit the atmosphere of “The Woman in Black,” but she provided her own simple melody that formed the basis of another easy tune: I experimented by drenching a recording of her playing it in reverb and giving it the Otto Luening treatment of low-speed playback and delay, which she then improvised over. Not wanting to cut myself out of the recordings, I also did a few improvisatory takes with Lindsay, with her soloing on flute and me on classical guitar arpeggiating a progression of F#m b9 no 7 to G3x5.

Organizing all of the recorded music and collected sounds was itself a unique challenge that accounted for a majority of my work. The nature of the play is that, in a bit of metatheater, the person or group of people in charge of audio and lighting in the production are collectively referred to as Mr. Bunce in the dialogue and constitute an additional character in the story, silent and only realized through the adjusting of light and sounding of cues. Doing my part required a utility that facilitates – with an ease similar to any acoustic instrument – the quick playback of multiple sounds while allowing flexibility in timing, amplitude, and direction of diffusion (which channel the sound is sent to). The natural choice for me was the DAW Ableton Live.

Screenshot - Ableton performance set

Shown here is the session view of the set I built in Ableton after weeks spent editing our cues. The material is organized and played back as follows:

  • Four sound banks (Groups A-D), each consisting of two stereo tracks (one for the front two speakers of a quadraphonic arrangement, the other for the rear two), as shown in this detail shot of the software mixer:

Screenshot - Mixer

This arrangement was the best way that I could take advantage of Ableton’s simultaneous launching/individual control of clips; more complex scenes in the play utilize more of the sound banks, while simple ones need no more than one. Typically banks C and D are reserved for launching sustained sounds (music, background noises such as running water).

  • The set is navigated with a MIDI controller – in my case an Akai Professional MPK Mini, which was perfect given its compact size and varied types of control surfaces. Two of the eight drum pads are used to advance the “scene” (horizontal rows of sound clips) selection in Ableton up or down, while another pad triggers the selected scene. Another drum pad is assigned to halt all sound files in case of an emergency (missed cue, etc.), while two other pads trigger door noises to accompany the only on-stage action that we needed to follow (a rough sketch of this mapping logic follows the list).
  • Adjusting software faders is also accomplished with the MPK Mini, using its eight assignable knobs. Four of the knobs control the output of the individual sound banks, and another knob controls the crossfader, enabling the fading in and out of sounds as called for.
  • Getting the sound out of my computer was accomplished with a MOTU 828 interface, with four of its outputs feeding a physical mixer that sent to the quadraphonic array. I had one of my assistants at the board, both to mind it in case of some emergency and to perform creative diffusion of some of the more abstract cues.
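
For the curious, the dispatch logic amounts to a small lookup table from pads and knobs to actions. A hypothetical Python sketch (the note and CC numbers here are invented, not the MPK Mini’s actual factory mapping):

```python
# Hypothetical sketch of the pad/knob dispatch described above.
state = {"scene": 0, "faders": {"A": 1.0, "B": 1.0, "C": 1.0, "D": 1.0}}

PADS = {36: "scene_down", 37: "scene_up", 38: "fire_scene",
        39: "stop_all", 40: "door_open", 41: "door_close"}
KNOBS = {1: "A", 2: "B", 3: "C", 4: "D"}   # bank output levels
CROSSFADER_CC = 5

def handle(status, data1, data2):
    """Dispatch one raw MIDI message (status byte plus two data bytes)."""
    if status == 0x90 and data2 > 0:        # note-on: a drum pad was hit
        action = PADS.get(data1)
        if action == "scene_up":
            state["scene"] += 1
        elif action == "scene_down":
            state["scene"] = max(0, state["scene"] - 1)
        # "fire_scene", "stop_all", and the door pads would message Ableton here
    elif status == 0xB0:                    # control change: a knob moved
        if data1 in KNOBS:
            state["faders"][KNOBS[data1]] = data2 / 127.0
        elif data1 == CROSSFADER_CC:
            state["crossfader"] = data2 / 127.0

handle(0x90, 37, 100)   # pad 37 advances the scene selection
handle(0xB0, 1, 64)     # knob 1 sets bank A's fader to ~50%
```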

While I was overwhelmed at times with this production, I’m thankful that I endured. Every minute of it was its own learning opportunity, thanks not only to suddenly finding myself in a leadership role, but also to the interdisciplinary nature of the work: working with the highly talented actors Marcus Salley and Levi Shrader and director Fred Franklin put a lot of pressure on my team to produce great work at a professional level appropriate for those we were supporting. This experience has given me an appreciation for music set to drama, and I hope to involve myself in similar pursuits in the future.

Hedniskhjärtad as of November 13 – Reorchestration, identity crisis, etc.

The past week has consisted of reorchestrating the introductory material up to around measure 9, and throwing out the material that followed from there in the previous draft. Working on the opening measures was time well spent; moving the high Eb from the trumpets to the first guitar not only allows for more consistent intonation, but it also gives more coherence to the concept of the piece. I’m more pleased with the way that the piece unfolds around rehearsal mark A, though I obviously run out of steam a bit with that texture at measure 26/rehearsal mark B, which is neither approached nor left in a useful manner.

Rehearsal C has me toying with a more literal application of the idea of “classical music, with riffs,” and while I don’t exactly hate it, I may have to adjust the imitation of the riff line that is performed by the first guitar so as to make it more interesting. I feel pretty confident in how it builds into the cacophony that begins at rehearsal mark E, and I think the concept is better executed here in general.

A number of formatting problems have persisted in this draft of the full score, mostly because the material I have generated hasn’t reached a final enough form to justify the time that would be spent polishing everything up (though this isn’t a good excuse for inconsistent accidentals…). Oscar Bettison’s advice about tossing out the first 50% of your ideas comes to mind now, because I don’t feel that I truly arrive at the desired personality of the piece until just before rehearsal E.

Hedniskhjärtad draft 2 – November 13 C score

Hedniskhjärtad – C score and recording from October 28

Initial readings of Hedniskhjärtad on the 28th of October were rough, thanks in part to my inexperience in writing for brass instruments and the difficulty on the performers’ part in realizing the textures while sight-reading. I was too ambitious in my notation, expecting everyone to be familiar with the symbol for a “short fermata” (as interpreted by the flute in the 6/8 section), as well as expecting the pianist to recognize the notation for a chromatic cluster in the space of a third (which, even if you know what it is, doesn’t look different enough from everything else to be easily sight-read).

This reading has shown me just how important the guitar instruments are for maintaining not just the groove of the piece, but the concept as well. I will probably have to toy with the orchestration many times throughout the writing process to strike the perfect balance between the extremes of “obvious heavy metal influence” and “just another clangorous chamber work for a mixed ensemble.” I will have to know exactly when to use idiomatic writing for the guitarists, and when to have the wind instruments contribute their own variety of dirt.

Hedniskhjärtad full C score draft 1

Background on my piece for eight players, Hedniskhjärtad

Hedniskhjärtad is my answer to the self-imposed need to write for the near-entirety of the membership of my pet project, the unofficial UMW Composers’ Ensemble. It is scored as follows:

Flute
Bb Clarinet
2x Bb Trumpets
Piano
2x Electric Guitars
5-String Bass Guitar

The main aesthetic and conceptual inspiration behind the piece is its title, which translates from the Swedish as “heathen-hearted.” It is consciously named after the 1998 debut EP of the solo progressive/black metal act Vintersorg (stage name of Andreas Hedlund); while I’ve appreciated the music of Hedlund and associated artists since my early teens, none of the material is to be lifted from any of his or related projects. Rather, writing Hedniskhjärtad is my first official foray into writing a chamber art music piece that draws on essential aesthetic elements of the cruder “extreme” sub-genres of heavy metal, which have been a crucial part of my musical background and artistic growth. Doing so had always been a goal of mine as a composer, but I found myself without the means to accomplish it until now, when I have not only a brash ensemble of wind players accompanied by style-conscious guitarists, but also an all-directing title. I would never suggest that one’s title ought to precede their music, but in this instance I feel it keeps me grounded in exactly what it is that I am trying to accomplish, and I am confident that I am finally saying something that I’ve needed to say for a very long time.