i wanted to record an album that sounded unfamiliar to me. so i chose some constraints that would prevent me from getting too connected.
this set was sequenced from solo, single-take hardware jams recorded to two-track with minimal editing and post-processing, mostly between 18-24 sept 2016. everything was mashed out live into sequencers with nothing prepared apart from a few rhythms.
so there’s a bunch of mistakes and clipping and leakage and analog grunge printed in. you can maybe sense the terror in the second half of some of these tracks as i desperately hang on.
Ok. Finally had a chance to pop the lid off this thing and get some probes on it.
1: The keyboard circuit works.
2: The oscillators appear to not work.
3: This could be because something seems to have flaked out on the power supply while I was trying to get at the oscillator boards. Some of the voltages are wrong now.
I’ve decided to pull the whole thing apart, clean it up and plug-ify each module – it’s way too hard to get at anything, and there’s cruft everywhere.
I’ve been a bit too busy to open the ETI up and keep working on it, but I’ve already started on some plans for the front-end.
The keyboard controller in this thing is primitive digital: shaky, slow (~20ms delay), complicated and probably totally broken. I’m planning on replacing it with a little Arduino rig – I found one in stock at gorillabuilderz.
It runs a 32-bit ARM chip, which is ridiculous overkill for what I want – mainly went with this one because it has lots of digital inputs (enough to just wire up every key on the keyboard) and a couple of 12-bit analog outputs (will use one for CV, maybe the other for an extra LFO?). The nice thing about this is it means I can do other fun stuff later, like build in a chip-tune style arpeggiator, or maybe a MIDI input. It also means I can remove the most complicated board on there – the keyboard controller takes up a lot of space and it’s covered in scary unavailable chips.
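The arpeggiator idea is really just a clock cycling through whatever keys are currently held. A rough sketch of the logic in Python (the real thing would live in the Arduino loop – everything here is illustrative, not working firmware):

```python
def arp_step(held_notes, step):
    """Return the note to play on this clock tick, cycling upward
    through whatever keys are held (classic chip-tune up-arp).
    Returns None if nothing is held."""
    if not held_notes:
        return None
    notes = sorted(held_notes)
    return notes[step % len(notes)]

# hold a C major triad and clock it four times
held = {60, 64, 67}
print([arp_step(held, s) for s in range(4)])  # -> [60, 64, 67, 60]
```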
The actual keyboard mechanism is really basic: each key activates a switch. At the moment, it’s hooked up to a resistor network: 12 lines for note, 4 lines for octave, which means it can get into a really weird state if you hit multiple keys at once.
After a bit of reading, sorting this out looks dead simple: I need to rewire it so that when a key is down, it pulls a line to ground. That’ll let me rip out a huge amount of horrible wiring, and I doubt the Arduino will be any more delicate than the original digital circuit.
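With every key pulling its own line to ground, the scan loop becomes trivial. Here's the general idea sketched in Python (standing in for the eventual Arduino code; low-note priority is my assumption about how chords should resolve, not something the original circuit did):

```python
def scan_keys(pin_states):
    """pin_states: one boolean per key, True = line pulled to ground
    (key down). Index 0 is the lowest key. Low-note priority: if you
    mash several keys, the lowest one wins, so the CV never lands in
    a weird in-between state. Returns (note_index, gate)."""
    for i, down in enumerate(pin_states):
        if down:
            return i, True
    return None, False

pins = [False] * 49          # 49 keys is illustrative
pins[12] = pins[16] = True   # two keys held at once
print(scan_keys(pins))       # -> (12, True)
```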
The other problem is that the voltages I need to generate are a bit weird. Trigger is -7V low, +7V high (!?), and CV is 0-5V. The outputs I have will probably max out at about 3.3V. So there’s probably going to be a little bit of buffer and amp circuitry in there as well.
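The arithmetic for the CV side is at least straightforward: to stretch a 3.3V full-scale DAC to 0-5V you need a non-inverting gain of 5/3.3 ≈ 1.52 (gain = 1 + Rf/Rg). Quick sanity-check numbers – resistor values here are illustrative, not a tested design:

```python
VREF = 3.3    # DAC full-scale output
CV_MAX = 5.0  # target top of CV range
BITS = 12     # DAC resolution

# gain the buffer stage needs
gain = CV_MAX / VREF                 # ~1.515
# non-inverting amp: gain = 1 + Rf/Rg, so Rf/Rg = gain - 1
# e.g. Rg = 10k wants Rf ~= 5.15k (nearest E24 value, 5.1k, gets close)
rf_over_rg = gain - 1                # ~0.515

def dac_code(cv_volts):
    """DAC code that produces cv_volts *after* the gain stage,
    assuming the amp really delivers CV_MAX at DAC full scale."""
    return round(cv_volts / CV_MAX * (2**BITS - 1))

print(round(gain, 3), dac_code(1.0))  # -> 1.515 819
```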
This particular one isn’t in working order at the moment – but there’s a ton of resources around (including high-quality schematics), particularly at www.eti4600synthesiser.org.uk, so I’m hopeful I can get this one nursed back to health.
It doesn’t really make any proper noise yet. I’ve verified that the power supply works, and that the reverb unit and the last stage of the amp work (I can get signal out of it when I thump the reverb tank), but I can’t narrow down the fault any further without an oscilloscope… which I’m in the process of tracking down. I have a few hunches as to what it might be, though.
This one is also kinda weird in that it looks like a bunch of stuff has been added by a previous owner. Also, it needs a scrub. I’m going to try and get some real sound out of it first, and then probably restore it back to stock.
a couple of us got together to do a totally improvised live electronic set at a gallery opening a few weeks back – three keyboards, two iphones and some effects. it came out pretty good, i edited it down to an hour or so and uploaded it to soundcloud here: http://soundcloud.com/remaincalm/live-at-culture-jam-15-04-2010
some awesome blog posts on DSP theory (er… and sound design in mayan temples) appearing on valhalladsp, thought i’d give it a bump: http://valhalladsp.wordpress.com/. this guy did the excellent eos reverb and valhalla freq echo plugins, and there’s some great info in there.
also, autechre were awesome live, new version of reaper is good, and i’ve been slack finishing off the new go/no-go EP because i’ve been very very busy. oh well.
i’ve put together another little JS audio processor for reaper. this one is called ‘paranoia’, and it’s a harsh digital mangler (with bonus bugs!) and a bunch of soft clippers and filters bolted onto it. it started off being modeled on a little hardware digital fx unit going around at the moment but the signal chain ended up evolving into something quite a bit different.
here’s an audio example (updated link, works now! updated 2012: oops, broken again)… crunchiness everywhere.
doesn’t work great on all sources, but does the trick on percussion tracks. the multimode filter in there is pretty nasty as well.
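for the curious: the guts of this kind of mangler is basically sample-rate reduction, bit reduction and a soft clipper in series. this isn’t the actual JS code, just the rough shape of that chain sketched in python:

```python
import math

def mangle(samples, bits=6, hold=4, drive=3.0):
    """Crude digital mangler: sample-and-hold (rate reduction),
    bit reduction, then a tanh soft clipper. Input/output in -1..1."""
    out, held = [], 0.0
    levels = 2 ** bits
    for i, x in enumerate(samples):
        if i % hold == 0:                         # sample-rate reduction
            held = x
        crushed = round(held * levels) / levels   # bit reduction
        out.append(math.tanh(crushed * drive))    # soft clip
    return out

y = mangle([math.sin(i / 5) for i in range(100)])
print(all(-1.0 < v < 1.0 for v in y))  # True: clipper keeps it bounded
```

the real plugin has a multimode filter and a few extra clippers bolted on after this, but the crunch mostly comes from the first two stages.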
now that ‘autofocus’ is up on iTunes, i figure this is probably a good time to compare the recording process for that with what we’re doing now.
first of all – we recorded an absolute stack of stuff for ‘autofocus’, giant chunks of which ended up getting cut. there were multiple, wildly different versions of some songs, and we left off (or outright abandoned half-way) at least four tracks. that was kinda the only way it could have been done, though. the album was recorded the way it was written – a bit random and cut-up and berserk.
‘lost in berlin’ (check the iTunes link) is a good example of that. it started as a riff on a microkorg, and was emailed to steve with some blippy drums. steve repitched and mutilated the riff into an actual song in ableton live, but playing it live like that didn’t work. we shelved the song. i think we had another shot at a vocal somewhere in there. months later, we tried playing it rock-style and it sorta worked. we tracked a new rhythm part, and i dug up the original keyboard parts, added a new recording of steve’s vocals, and mixed the whole thing in reaper.
months. that took months. nearly a year. the song goes for about three minutes.
the same process played out for the other eight songs. i tried to get all “brian eno” on taz at one point and recorded him playing bass for a song he’d never heard before, just kinda shouting instructions at him as we went. that ended up as the main groove in ‘catching up with you‘, which is fun, but was a nightmare to work out a live arrangement for. the version of ‘things’ that ended up on there is completely different to how we play it live, but there are lots of elements in there that were pulled from an early demo of it, just processed until it sounded like it had been stuffed full of tiny robots. every song has some synthetic element to it, and hopefully that comes across as a cool sound. it’s a bit hard to judge everything objectively when you’ve heard everything thousands of times.
the new recording – which is almost certainly just going to be an EP – is coming together a bit differently.
we’ve added some new personnel (steve from great apes + others), and we’ve been rehearsing the hell out of the band. new songs are being written more in the rehearsal room than over the internet, and it’s coming out in the arrangements – we’re concentrating a lot more on just getting the groove right. which is probably why we bashed through all five basic tracks in a day, and some of that stuff was first or second takes. the extra percussion is actually adding some space, probably because steve and i are taking more breaks. it’s coming together alright.
we’ve now got all of the drums, all of the bass, and half of the keys and guitar comped and tracked. steve c’s recording his parts at his place, and steve a and i will be tracking percussion and extra keys at some point this month. probably at his place. hopefully his neighbours don’t totally freak out. there’s likely to be a lot of cowbell action. all of the keys and electronic percussion are being tracked live – none of the songs for this session were recorded to a click-track, so i can’t sync up drum machines to anything.
also, i put down this completely stupid solo and it’s awesome.
it took about a billion years, but we finally managed to get the go/no-go “autofocus” release shipped off to the duplicators (and flying over the internet in the general direction of itunes, which feels a bit weird).
we got a few clips done as well, thanks to stu willis for making the “catching up with you” clip possible.
anyway, we did it all DIY, and it worked out okay, so we’re gonna have a shot at a follow up EP the same way. but hopefully a bit quicker this time. i’ve had the urge to write about the recording process for a while, and this seems like a good place to start. also, i’m waiting for a plumber to show up and can’t really do anything else in the meantime.
it took a few years, but i’ve finally got myself a mobile recording rig that works pretty good and fits in the back of a daihatsu charade. the main problem is getting enough equipment to get a nice drum recording – once you get past two channels, everything gets a bit spendier and more complicated. and you end up with mic cables everywhere, and i’m really bad at winding them up neatly.
anyway. last sunday, we dragged our gear down to zen rehearsals to track drums and bass. we record there mainly because they run free bbqs on weekends, and they have a well maintained cappuccino machine. also, because some of their rooms are really well sound-proofed and have excellent room treatment, which is important – if the room sounds boxy (or if sydney’s loudest metal band are bleeding through the air conditioning system), the recording will sound bad.
cue: five hours of recording, five tracks of drums and bass done, despite 50% of the personnel suffering from crippling hangovers.
check out all the mess!
once i get everything home, i usually mix up a nice sturdy martini and transfer the tracks over from the portable recorder to the PC. the ancient vs1680 recorder i use is cheap, solid and sounds good, but getting the raw audio out of it onto the computer requires screwdrivers and lots of time and weird software and usually i have to shout at it a bit as well. but eventually i end up with the session loaded up in reaper, ready for edits.
there’s usually a bunch of takes of each song. sometimes one is just perfect the whole way through, but usually each one will have bits of inspiration hit at different times. this is where that martini comes in handy. i usually just map out the song on paper, then mark each take with ticks or crosses depending on how awesome or hideous each part is:
once i’ve mapped it out, i hack the best bits together, get a rough mix up, and start thinking about another martini. and overdubs.
with a quickie mix and some keys, it’s sounding a little like this:
in other news… where the hell is that plumber?
next up: percussion. in a week or two. with any luck.
i gave up on the whole pedal dsp idea. it’s just going to be too much work and i don’t have the cash for a devkit at the moment anyway.
but… i’ve been working on some more JS effects, and these ones might actually work.
i was also doing a little bit of dev work on the DS stuff but the development environment is a total pain in the ass (in that the core platform keeps getting updates that break compatibility with all the other 3rd party tools). i still have my old dev environment in a virtual machine somewhere so i can get back to that one day but really, it’s just so much damn effort. and bliptracker already does everything i need. so that’s all a bit ugh. i really need to put out that updated beta of dsmcu one day though, i just need to package it up…
i’ve been mucking about with the XNA XB360 dev environment. it looks very interesting. i have two separate ideas – one for a 2d puzzle/action game that doesn’t have gems or tiles and one for an experimental art game.
also i want to get a JLM preamp kit and get some soldering happening.
update 3: some more info on the tonecore ddk, scrounged from the freescale website: (giant pdf thing) – short version, there’s 4Mb+ of onboard RAM (8 banks of 512kb plus whatever’s onboard the DSP), and it’s rated at about 100 MIPS, 24-bit. doesn’t look like there’s any FPU onboard, bah.
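100 MIPS sounds like a lot until you divide by the sample rate – at 48kHz that’s only ~2000 instructions per sample, and with no FPU every filter multiply eats into that budget. quick back-of-envelope:

```python
mips = 100e6        # quoted DSP throughput (instructions/second)
fs = 48_000         # assumed sample rate
budget = mips / fs  # instructions available per sample
print(int(budget))  # -> 2083
```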
work is continuing on dsmcu. here’s what is in the new version so far:
now requires dldi patching.
the entire application is fully skinnable using csv text files and png graphics. *DONE*
skinning extends to the midi commands (midi data sent or monitored for feedback) supported – this should allow the device to be used as a generic midi controller. there will be a number of basic control types available.
support for multiple layouts, selectable from L/R triggers. *DONE* (memory limits on how many backgrounds can be loaded, though)
layouts supported will include: standard mcu control, simplified/expanded mcu control (e.g. big transport), generic MIDI kaoss-style touch pad or multi-kaoss (any number of pad zones on the same page), MIDI keyboard interface, MIDI drum pad interface, generic vsti synth control (faders for cutoff, resonance, etc), and combinations of those elements… (MIDI and mcu controls possibly won’t be usable at the same time due to host limitations)
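to give a flavour of what a csv skin definition might end up looking like – this is completely made up, the real file layout isn’t nailed down yet:

```
# control, x, y, w, h, midi_message, graphic
fader,   10, 20, 16, 80, cc:1:7,     fader.png
button,  40, 20, 24, 24, note:1:36,  pad.png
xypad,   80, 20, 96, 96, cc:1:12,    kaoss.png
```

one line per control, position in screen pixels, and the midi column says what gets sent (and monitored for feedback) when you touch it.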
no estimate on when it’ll be ready – i’m going to be moving house soon so that might slow me down a bit. but what’s there is working really well so far.
if anyone has gotten the beta working under leopard, can they let me know? cheers.