Audio Diaries – Tough Talk

Some time mid last year my friend Sam got in touch asking for help with some audio files for the first episode of the upcoming second season of Tough Talk, a web series he’d started devoted to men’s mental health here in New Zealand. Since I believe in the value of his work, I agreed to help him out, and had him send over the files.

The raw audio I received was in a pretty poor state, and far from fit for purpose. The voices were extremely quiet, overwhelmed by background noise of all varieties, and I wasn’t sure I could salvage them. Still, they were all Sam had to work with and couldn’t be re-recorded, and they gave me an excuse to develop my audio restoration skills and get some practice in post production. By some miracle I managed to get the tracks to a fairly usable state, Sam was pleased and relieved, and we made arrangements for me to work on the remainder of the series.


Sam & Kristina setting up on set

Over the course of the series, the files sent my way varied considerably in quality. Most of the interviews had been recorded by the time I was first contacted, but some still remained, and with some general advice and pointers, Sam and his partner Kristina produced higher quality results. By the end of the season’s filming they’d learned a lot about how best to work with sound in different and challenging environments, and I’d figured out a thing or two about how to get usable results out of audio captured under those less-than-ideal circumstances.

The first major lesson I learned in restoration is that blunt-force processing won’t get you anywhere. The biggest problem I had was dealing with background noise of all varieties, ranging from the white noise of lapping waves to the brutal humming of industrial electronics and machinery, with a variety of sources in between. Heavy settings on de-noising processors would result in lifeless recordings, and many of the subtleties of the recorded voices would quickly be lost, taking their intelligibility with them.

I solved this by tweaking the de-noising processors to have very gentle effects, so much so that to the untrained ear you probably wouldn’t notice anything had happened. I’d run the audio through these subtle settings, re-tweak them, and run the processing algorithms again. Depending on the severity of the background noise, I’d do this anywhere between two and four times, always aiming to stop shy of any loss in the desired part of the signal, the voices themselves. The software I used (iZotope’s RX) has a useful feature wherein it intelligently scans a selected part of a file and develops a spectral profile of the noise, which helps it identify what it’s tasked with filtering out. This helped, but I found the most critical part was carefully setting the threshold of the background noise and avoiding overkill. The end results weren’t always perfect, but they were a notable improvement, and made EQ and compression settings easier later on.
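As a rough illustration of the multi-pass idea, here’s a minimal sketch in Python. It is not how RX works internally (RX operates spectrally on a noise profile); it only demonstrates why several gentle passes against a measured noise-floor threshold can reduce the quiet material while leaving the much louder voice untouched. The function names, the 3 dB-per-pass figure, and the 2× threshold factor are all illustrative assumptions.

```python
import math

def noise_floor_rms(samples):
    """Estimate the noise floor from a region known to contain only noise."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def gentle_denoise_pass(samples, threshold, reduction_db=3.0, window=256):
    """One light pass: attenuate only the blocks whose level sits below the threshold."""
    gain = 10 ** (-reduction_db / 20)  # 3 dB of reduction -> gain of ~0.708
    out = []
    for i in range(0, len(samples), window):
        block = samples[i:i + window]
        rms = math.sqrt(sum(s * s for s in block) / len(block))
        out.extend(s * gain if rms < threshold else s for s in block)
    return out

def denoise(samples, noise_region, passes=3):
    """Several gentle passes rather than one heavy one."""
    threshold = noise_floor_rms(noise_region) * 2.0
    for _ in range(passes):
        samples = gentle_denoise_pass(samples, threshold)
    return samples
```

Three passes at 3 dB each total roughly 9 dB of noise reduction, but because each pass re-measures the blocks, anything that sits above the threshold (the voice) is never touched.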

Removing static humming was more straightforward: another specialised algorithm detected the harmonic series of the unwanted part of the signal and allowed it to be filtered out via familiar threshold settings, much like a standard noise gate. Generally, these processes were all that was necessary to get the files ready for mixing. It has always been my first port of call to remove the parts of an audio file that aren’t needed before proceeding with any standard processing. Even before these algorithms were applied, the first thing I’d always do was filter out any unnecessary low-end frequency content with the cleanest high-pass filters in my arsenal.
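The textbook version of harmonic hum removal is a series of narrow notch filters placed on the fundamental and each of its harmonics. The sketch below uses the standard Audio EQ Cookbook (RBJ) notch biquad; it is a generic illustration, not the RX algorithm, and the 50 Hz fundamental, Q value, and harmonic count are assumptions.

```python
import math

def notch_coeffs(freq, fs, q=10.0):
    """RBJ-cookbook notch biquad coefficients, normalised so a0 = 1."""
    w0 = 2 * math.pi * freq / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = [1 / a0, -2 * math.cos(w0) / a0, 1 / a0]
    a = [-2 * math.cos(w0) / a0, (1 - alpha) / a0]
    return b, a

def biquad(samples, b, a):
    """Direct-form I biquad filter."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

def remove_hum(samples, fs, fundamental=50.0, harmonics=4):
    """Notch out the hum fundamental and its harmonic series."""
    for k in range(1, harmonics + 1):
        b, a = notch_coeffs(fundamental * k, fs)
        samples = biquad(samples, b, a)
    return samples
```

Because each notch is narrow (set by Q), content even a few tens of hertz away from the harmonic series passes through essentially untouched, which is why this is far gentler on a voice than broadband noise reduction.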

Once all this was done, I’d typically go ahead with standard mixing techniques to provide Sam with a stereo file to match up with his footage. I tried to avoid over-compression wherever possible, so as to avoid raising the noise floor again, and relied instead on FabFilter’s Pro-Q and Pro-DS to further sculpt the audio before end-stage panning and levelling. Simple processes, but when dealing with sketchy source material, I found it’s wisest to avoid the temptation to throw everything at it. You’re only going to take things so far, so don’t stress yourself over achieving impossible results.
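A quick back-of-the-envelope illustration of why over-compression raises the noise floor: once makeup gain restores the original peak level, everything below the threshold, including the residual noise you just worked to reduce, comes back up by the full makeup amount. The threshold and ratio figures here are hypothetical, and this is only the static gain curve, ignoring attack and release.

```python
def gain_computer_db(level_db, threshold_db, ratio):
    """Static compressor curve: levels above the threshold are reduced by the ratio."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

def noise_floor_after(noise_db, peak_db=0.0, threshold_db=-20.0, ratio=4.0):
    """Resulting noise floor once makeup gain restores the original peak level."""
    makeup = peak_db - gain_computer_db(peak_db, threshold_db, ratio)
    return gain_computer_db(noise_db, threshold_db, ratio) + makeup
```

With these assumed settings, a 4:1 ratio lifts a −60 dB noise floor to −45 dB after makeup, whereas a gentle 1.5:1 lifts it to only about −53 dB — the arithmetic behind preferring light compression on noisy material.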

Some time in the future I may write up more of a tutorial on these processes, but for now, these are the findings I took from the effort, and you can hear the results for yourself here. Season 3 is said to exist somewhere on the horizon, so until then, enjoy.

Posted in Uncategorized | 1 Comment

Checking In

Checking in, fashionably overdue.

I haven’t forgotten about things here; however, since my last post I’ve been largely distracted. Over the latter half of 2018 I became preoccupied with contract administration work and some health matters, and so my audio work fell by the wayside, save for one running project, which I’ll concoct a post on shortly.

I have a few things on the go at present. Among them, a long-overdue overhaul of my DJ set in Live is under way, and with that I’m hoping to get back on stages with greater regularity. I’m also planning on releasing a number of DJ mixes, something I’ve historically been less than enthusiastic about doing.

I also have my Max for Live projects slowly progressing, with an emphasis on the ‘slowly’. I’ve had to step back from these a bit whilst my mind has been occupied with other projects and trying to stay well. I have, however, added a page to the site to cover these projects, with the first major project already featured.

That’s all for now. More to come with time.

Posted in Uncategorized | Leave a comment

The Metaphorical Equine

I haven’t been particularly active as of late. An unfortunate side effect of the lifestyle I’ve led over the preceding years has been that general stability has been a rarity. Factor in habits dying hard and changes in personal circumstance, and it becomes even harder to maintain the focus, time, and energy to indulge in one’s technical pursuits.

Thankfully, things are starting to settle down, and with that, the mental bandwidth to get on with things becomes more readily accessible. So what’s new?

Desk Space

Terribly lit, but what studio isn’t?

Late last year I settled into a new house where, for the first time ever, I have a small space to myself. I’ve set up a desk and have begun to accumulate useful things, including a 24″ LCD screen, keyboard, and mouse for the sake of posture, and a number of loudspeakers.

Two pairs are visible in the photograph above. The upper pair are my primaries, and, along with the little audio interface, have been lent to me by a nearby friend. The lower pair are somewhat of a mystery. They need disassembling in order to establish their impedance and power rating; however, they’ve clearly been built by someone who knew what they were doing, so I suspect they’re going to sound quite pleasant.

There’s more to gather, but for now all this is a massive upgrade, and slowly but surely it’s helping keep me productive. So what have I been working on?

The first thing I’ve gotten back into has been Ableton Live itself. I’m still not feeling very compositional, so I’ve been stretching my technical mind and piecing together a collection of simple yet effective audio and instrument racks in order to streamline production work. I’ve also been putting a lot of time into Max programming, an invaluable asset to any Live user.

Without getting into too much detail, here are a few of the devices I’ve been working on:

  • A flexible percussion synth
  • A versatile dub-style stereo delay unit
  • Spatialisation tools
  • OSC ~> MIDI devices (a continuation of the Vive/Live project)

All going to plan, I’ll have at least one of those devices complete and ready for release within the next two months. I’m thinking of using them as fundraisers for an upgrade to Live 10 and Max 8, at which point I’ll start working on developing ambisonic tools, seeing as that’s an area of interest to many these days, and one ripe for exploration.

Hopefully, I’ll have a detailed update on those devices some time soon. Until then, here’s an amusing image taken at a party I played a DJ set at a little over a week ago.

~ OM


Another questionably lit office.

Posted in Ableton Live, Max, Projects | Leave a comment

Vive/Live – Controlling Ableton Live with HTC Vive Controllers

For some time, Ryan at Spiral Technica and I have been contemplating and conceptualising ways of using the HTC Vive VR system in tandem with Ableton Live. Whilst visiting Ryan and his partner in Dunedin, we decided to try and get the two talking.

Here is the end result, as demonstrated by Ryan:


The resulting ‘instrument’ is probably best described as a two-voice theremin. Vertical motion controls pitch, left/right motion controls the voice’s panning, and front/back motion increases or decreases the filter resonance on the synth patch to add a little emphasis. Note On messages are triggered by the Vive controller’s trigger button, and Note Off messages by the grip pads.

Each controller controls a single voice, and each controller’s positional data is sent via OSC from Unity to Ableton, where it is received by Ethno Tekh’s ‘Tekh Map‘ set of Max for Live devices.
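As a sketch of what the mapping stage might look like, here is an illustrative reconstruction in Python. This is my own example, not the actual Unity or Tekh Map code; the metre bounds, note range, and output scalings are all assumptions.

```python
def clamp01(v):
    """Clamp a value into the 0..1 range."""
    return max(0.0, min(1.0, v))

def position_to_params(x, y, z, bounds=((-1.0, 1.0), (0.0, 2.0), (-1.0, 1.0))):
    """Map one controller's position to (note, pan, resonance).

    x (left/right) -> pan, y (height) -> pitch, z (front/back) -> resonance.
    The bounds are hypothetical metre ranges, not the real tracking volume.
    """
    (x0, x1), (y0, y1), (z0, z1) = bounds
    nx = clamp01((x - x0) / (x1 - x0))
    ny = clamp01((y - y0) / (y1 - y0))
    nz = clamp01((z - z0) / (z1 - z0))
    note = 36 + round(ny * 48)   # C2..C6, quantised to semitones
    pan = nx * 2 - 1             # -1 (hard left) .. +1 (hard right)
    resonance = nz * 127.0       # MIDI CC-style 0..127 range
    return note, pan, resonance
```

In practice each incoming OSC position message would be run through a mapping like this before the resulting values are handed to Live’s parameters.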

More technical detail may be found on the GitHub repository that Ryan and I have set up for the project, here.

Simple, but it’s a proof-of-concept that opens the door to more complex iterations of technical and creative possibilities.


One major thing that we’ve established the need for is some form of relative range control. As such, Ryan is looking into enabling the performer to set their position, so that all parameter values are adjusted relative to that point.
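A minimal sketch of that idea, assuming a simple per-performer origin calibration (the class name and the reach radius are hypothetical):

```python
class RelativeRange:
    """Calibrate a reference position, then report motion relative to it."""

    def __init__(self):
        self.origin = (0.0, 0.0, 0.0)

    def set_origin(self, x, y, z):
        """Capture the performer's chosen resting position."""
        self.origin = (x, y, z)

    def relative(self, x, y, z):
        """Raw offset from the calibrated origin."""
        ox, oy, oz = self.origin
        return (x - ox, y - oy, z - oz)

    def normalised(self, x, y, z, reach=1.0):
        """Offsets scaled by the performer's reach and clamped to -1..1."""
        return tuple(max(-1.0, min(1.0, d / reach))
                     for d in self.relative(x, y, z))
```

The performer would trigger `set_origin` once in position, after which all parameter values are derived from `normalised` offsets rather than absolute room coordinates.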

Another major point will be to develop Max for Live patches specific to the Vive controllers’ parameter set. This includes positional (XYZ) and rotational (pitch, yaw, and roll) data, as well as button on/off messages, trackpad data, and so on.

Development in these two areas should be enough to let us build a range of expressive electronic instruments.


Having performed DJ/VJ sets together before, Ryan and I wanted to explore ways of increasing the interactivity between the performance applications we use to create a more dynamic experience for both ourselves and our audience. The next step was to try and bridge the gap between our technical interests and another creative realm, the circus and flow arts. This is more or less where the project stems from.

In doing this, we’ve effectively prototyped a means of interfacing circus (or dance, etc) performers with our computer systems. Vive/Live is our means of interfacing the Vive with Ableton Live.

There are numerous ways in which we envision Vive/Live being used. The primary application we’ve envisioned for ourselves is as a gestural control system in a live setting; however, it has numerous applications in a studio/production setting too. For example, I foresee it being very useful for recording expressive parameter automation and acting as a modulation source.

Where to?

First port of call is to dive into Max, and start learning how to work with OSC in that environment. After that, development of the first purpose-designed Max for Live device shouldn’t be too tricky.

After that? Well, start experimenting. I have a growing list of concepts, thoughts, and ideas that I wish to test out. Hopefully I’ll be able to either re-visit the South or otherwise find another Vive system locally to work with. Then, it’s play time!

~ OM

Posted in Ableton Live, Projects | 2 Comments

The Astro-Binaural Clock: A Hypothetical Auditory Orrery

One of my longstanding curiosities has been the middle ground to be found between science and spirituality. For many years I have had a strong interest in the field of astrology, and have subsequently found it to be a dominant element in my personal understanding of the spiritual world.

When I was first introduced to the concept of ambisonics and elaborate surround sound environments, one of the first thoughts that came to me was the possibility of adapting the geocentric model of the zodiac to a given speaker format. I found myself pondering what ways there might be to translate astrological transits and aspects into a musically pleasing form. Essentially, if one were to view astrological activity as a sort of energetic weather system, what would the resulting weather patterns sound like?

At first glance, the simplest way to realise this concept would be to take the movement of the planetary bodies on a horizontal axis and assign them specific or generative tones. By doing this you should be able to amass a collection of tracks representing the planetary bodies, each possessing a distinct tonal signature with which to determine its placement around the sound field. The next step would be considering how, and where, to place them.
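As a sketch of that placement step, here is a simple nearest-pair, equal-power panning law over a circular ring of speakers. The octophonic layout is just an example, and this is only one of several possible panning schemes, not a full ambisonic decoder.

```python
import math

def ring_gains(azimuth_deg, num_speakers=8):
    """Equal-power gains across a circular speaker ring for one source azimuth.

    Nearest-pair panning: only the two speakers flanking the source
    receive signal, cross-faded with a constant-power sine/cosine law.
    """
    spacing = 360.0 / num_speakers
    pos = (azimuth_deg % 360.0) / spacing
    lower = int(pos) % num_speakers
    upper = (lower + 1) % num_speakers
    frac = pos - int(pos)
    gains = [0.0] * num_speakers
    gains[lower] = math.cos(frac * math.pi / 2)
    gains[upper] = math.sin(frac * math.pi / 2)
    return gains
```

Each planet’s tone track would then be multiplied by the gains for its current ecliptic longitude, so the body’s motion around the zodiac becomes motion around the ring.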

Given that astronomical movement is autonomous and not of our influence, it stands to reason that to accurately place the various tonal signatures, one would need to do so in accordance with the laws of physics and, well, reality. This is the part that stumped me for a long time, and led to me mentally archiving the concept. Re-assimilating myself into an academic environment recently has been the catalyst for picking this project up again and contemplating a means of pulling it off.

Recent contemplation had led me to believe that what was necessary was a sort of digital ephemeris; however, compiling one would almost certainly involve an amount of data entry and processing which, quite frankly, exceeded my interest rather dramatically. The concept held water though, and in pondering how to simplify the data, it occurred to me that what I needed was a calculator for computing the positions of the various planetary bodies. In essence, a basic formula which worked for all possible objects and required as little data input as possible. As luck would have it, Google searches presented me with exactly that.
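The core of such a calculator is Kepler's equation, which relates a body's mean anomaly (a simple linear function of time) to its actual angular position along its elliptical orbit. A minimal sketch of that core step, leaving out the orbital-element bookkeeping a complete calculator needs:

```python
import math

def eccentric_anomaly(mean_anomaly, eccentricity, tol=1e-10):
    """Solve Kepler's equation E - e*sin(E) = M by Newton's method."""
    E = mean_anomaly
    for _ in range(50):
        delta = (E - eccentricity * math.sin(E) - mean_anomaly) / \
                (1 - eccentricity * math.cos(E))
        E -= delta
        if abs(delta) < tol:
            break
    return E

def true_anomaly(E, e):
    """The body's actual angle along its orbit, as seen from the focus."""
    return 2 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                          math.sqrt(1 - e) * math.cos(E / 2))
```

From the true anomaly plus each planet’s tabulated orbital elements one can derive an ecliptic longitude, which is exactly the “as little data input as possible” format the concept calls for: a handful of constants per body instead of a table of daily positions.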

Whilst I am yet to sit down and work out exactly how to translate that into the auditory domain, at a glance it seems logical to devise a Reaktor or Max/Max for Live device which handles the necessary calculations and outputs the spatial data to a fairly standard ambisonic/surround sound/binaural plugin. Whilst the calculations aren’t exactly simple, upon examination they appear to be within my grasp, which is saying something given that I don’t consider myself to be particularly mathematically minded.

And so, in its simplest form, I have the necessary ingredients with which to produce a fairly rudimentary proof-of-concept, and I’m quite happy with that. Were I to take the idea further, however, I would run into some fairly interesting technical design challenges. Accordingly, I shall elaborate.

One often-overlooked detail of astrological study is the fact that the movement of planetary bodies exists not only on the horizontal axis, but on the vertical one as well. This is referred to as declination.

One of the drawbacks of conventional sound reproduction is that it exists primarily on the horizontal plane, and so spatially positioning a sound above or below a listener isn’t something easily achieved. It is possible to give the impression of vertical spatialisation via clever filtering and time-based effects, but it isn’t wholly accurate.

By employing ambisonic processing it is entirely possible to send audio signals to speakers placed above and below the optimal horizontal listening space, but a distinct issue here is that by creating further rings of speakers above the listener, you start running into technical requirements which far exceed even those of, say, a simple circular octophonic speaker array like the one I’m starting to work with at the New Zealand School of Music.

Similarly, if one were to design a speaker array which covers not only the space above the listener but the space below as well, you’d find yourself running into what could best be described as an overwhelmingly expensive engineering project that edges precariously on being nothing short of a total clusterfuck to develop. To be totally honest, it’s not really worth the effort for the sake of pure curiosity. Consider this point: ground level is totally a thing.

This is one of the unfortunate pitfalls of surround sound and ambisonics: they are inherently expensive and very demanding on technical resources. It’s on these grounds that I believe surround sound and ambisonic arrays have never really taken off in the consumer market. There is an overwhelming amount of precision engineering involved in doing it correctly, and so it’s no surprise that these formats exist primarily in the academic domain.

Simply put, surround sound formats are extremely challenging. This is not to say that they don’t have their purpose, but realistically speaking, they aren’t commercially viable in the traditional sense of easy listening. This considered, it’s worth going back to basics, and therein lies the key to undertaking a project like this on, at the very least, an energetically sensible level.

Here is where binaural processing walks in.

Binaural audio is a form of sound recording and reproduction which exploits the human hearing system in a way that enables one to simulate 3D audio spatialisation in a fairly realistic manner. It isn’t perfect, and there’s a great deal I have to learn about it, but it seems to be a considerably easier means of pulling off the more complex iterations of this overall concept with the technology I have immediately available to me: in other words, a rather nice pair of headphones and my trusty laptop’s expanse of digital resources.

Part of the beauty of binaural audio is that it requires only two channels, so it’s basically a stereo signal. The only caveat is that it requires the use of headphones rather than loudspeakers, which is in some respects also a drawback. It’s something to work with, however, and accordingly, I’m totally cool with that.
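As a sketch of the simplest possible binaural placement, here is a two-channel renderer using only interaural time and level differences. A real binaural engine convolves the signal with HRTFs; the head radius, Woodworth’s ITD approximation, and the equal-power level law below are rough textbook assumptions, and front/back and elevation cues are absent entirely.

```python
import math

def binaural_pan(samples, azimuth_deg, fs=44100, head_radius=0.0875, c=343.0):
    """Crude binaural placement from ITD and ILD alone (no HRTF filtering).

    azimuth_deg: -90 (hard left) .. +90 (hard right), 0 = straight ahead.
    Returns (left, right) sample lists; the far ear is delayed and quieter.
    """
    az = math.radians(max(-90.0, min(90.0, azimuth_deg)))
    # Woodworth's frequency-independent interaural time difference model
    itd = (head_radius / c) * (az + math.sin(az))
    lag = abs(int(round(itd * fs)))
    # Equal-power level difference: theta 0 = hard left, pi/2 = hard right
    theta = (az + math.pi / 2) / 2
    left = [math.cos(theta) * s for s in samples]
    right = [math.sin(theta) * s for s in samples]
    pad = [0.0] * lag
    if az > 0:   # source on the right: left ear hears it later
        left, right = pad + left, right + pad
    else:        # source on the left (or centre): right ear hears it later
        left, right = left + pad, pad + right
    return left, right
```

Even this crude version conveys left/right position convincingly on headphones, which is the “two channels is all you need” appeal described above.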

That considered, I have readily accessible options for both the proof-of-concept and a slightly more advanced iteration, and after a few years of pondering this I’m feeling quite happy with that. Truth be told, I’m not entirely in a position to give this project the time I would like right now, but it at least gives me solid ground upon which to get going when the time does arise. Depending on my next semester’s assignment briefs I may be able to pull it off through those means, but I’m not expecting that to be the case.

This for now remains conceptual, and curious. When the time comes I’ll further detail the project and its eventual trials and errors, and I look forward to that. I believe that time will come before too long.

Science and spirituality indeed. I wonder how this will actually work out.

Posted in Projects | 1 Comment