Challenging the paradigm of taking sampled drum sounds for granted and using them as if that were one of the most conventional practices constituting the framework for creating “electronic music”.
An Attempt at a Short Historical Excursus
Since when has it been like that?
Sampling one-hit percussion, or, alternatively, recreating it with synthesis, was the primary direction in early electronic music that made it into the consumer sector. The nature of battery instruments implies repetition, and those sounds are usually short, which made them good candidates for the first low-memory digital samplers. Since those instruments usually do not have a strongly defined pitch, pitch change is not used as a form of expression when playing them. That eliminates the problem of handling an instrument’s changing pitch during synthesis, or of having to store a large number of pitches in a sampler’s memory. Early rhythm machines were mostly built around borderline neglect of the timbral structure in favour of the persistence of the generated rhythmic pattern, i.e. a sequence of transients.
Description
Overall, the sound of this small library can be described as something really lo-fi, but still moderately punchy and usable. To put those samples together, I’ve been digging through a whole lot of source material. This tutorial about recreating a snare drum sound with FM synthesis, and this SOS article about cymbal synthesis, were the most important bits of information that kickstarted the whole thing. The rest of this kit’s pieces were done intuitively. Some of them are very basic, like subtractive synthesis; others came out of more adventurous approaches, including resonant filter banks and layering of found sounds.
Using this lib, I put together a simple gloomy rock tune, which serves demonstration purposes. But before we proceed, I feel like I need to put in a disclaimer: this collection of sounds is not designed to be a high-quality, commercial-grade product, but rather an exercise showcasing the exciting possibilities that open up the moment you get off the beaten “pre-cooked drum samples” path.
Download Zip
GoToDrums Soundfont
Takeaways
Sometimes, rhythmic patterns and note lengths really are more important than the timbres themselves. This takes us back to the origins of rhythm machines. This aspect of how humans perceive rhythms is what made the concept of “programmed drums” viable in the first place.
So, while trying to nail down the timbres, I should really have been paying more attention to dynamics, for this whole sample patch to be more expressive. Well, there were reasons for that beyond my ignorance. Initially, using a free version of TX Software Sampler, I found a number of expression-related settings to be non-intuitive, while others had UX and stability issues. No offence to the authors; it might just as well be a problem with my DAW or my OS.
Probably, my (kill or) cure for that would be trying to do the same workout on a Yamaha hardware sampler, so I’d realise how much of a privileged “mouse-smudger” I am, using point-and-click software.
Cymbals naturally have a very wide dynamic range. And practically none of the widespread approximations (FM, samples, resonant filters) covers the desired palette of hues on its own. So, ideally, when trying to approximate a cymbal sound, you would have to change the model/approach when reaching a certain velocity boundary. Like, one FM patch for the lower velocities, another one for the moderate ones, and then, probably, some combination of pre-recorded sound and resonant filtering. And since the change of timbre could be dramatically obvious, you’ll probably need to resort to crossfading between them as well.
The latter sounds like a whole lot of manual work on any sampler, so, if I ever go down that path, I’ll have to think about automating the process of laying down the usual 32/64/128 velocity levels, each from a separate sampled sound.
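If I ever do automate it, the crossfade logic itself is simple. Here is a hedged sketch in Python (the function name, the boundary, and the overlap width are all hypothetical choices of mine, not anything a sampler dictates): equal-power gains for blending a “soft” and a “loud” model across the MIDI velocity range.

```python
import math

def layer_weights(velocity, boundary=80, overlap=16):
    """Equal-power crossfade gains between a 'soft' and a 'loud' model.

    Below the overlap zone only the soft model sounds, above it only
    the loud one; inside the zone both are mixed with equal-power
    gains, so perceived loudness does not dip mid-fade."""
    lo = boundary - overlap // 2
    hi = boundary + overlap // 2
    if velocity <= lo:
        t = 0.0
    elif velocity >= hi:
        t = 1.0
    else:
        t = (velocity - lo) / (hi - lo)
    return math.cos(t * math.pi / 2), math.sin(t * math.pi / 2)

# precompute a (soft_gain, loud_gain) pair for every MIDI velocity
table = [layer_weights(v) for v in range(128)]
```

The equal-power curve (cos/sin rather than linear ramps) is the usual trick for masking an obvious timbre switch at the zone boundary.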
What I currently miss the most with my minimalist no-budget music production setup would definitely be some organic, pounding and swinging battery. I do not really have much interest in using loops or pre-processed sample libs (tHat CUTt trhU Ze MiX!), because, as Chris Randall once said, you have no time to screw around with somebody else’s music if you really want to make your own. Truly, with the standards I impose on myself, simply downloading a copy of MT PowerDrums would be a statement of artistic and engineering impotence.
So far, I’ve got some applicable results with “personalised” percussive sounds by exploring the following approaches:
Basic subtractive and FM synthesis. Unless you are a sound-design and electrical-engineering expert, this yields your usual “blip-blop” type of electro-drum sounds.
Found objects. This is always fun. Just put your smartphone on record every time you take out the trash.
Physical modeling. Good for membrane percussion; it quickly gets complicated for cymbals because of the complex “clustered” nature of their sound.
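To give a taste of the last route, here is a minimal Karplus-Strong sketch in Python. Strictly speaking it models a plucked string, not a membrane, but with heavy damping it yields usable tom-like thumps; every name and constant here is my own illustration, not taken from any of the tools mentioned in this post.

```python
import random

def karplus_strong(freq, duration, sample_rate=44100, damping=0.996):
    """Karplus-Strong: a noise-seeded delay line, low-pass averaged on
    every pass. The simplest physical model there is; heavy damping
    turns the 'pluck' into a percussive thump."""
    random.seed(0)  # deterministic output, for illustration only
    period = int(sample_rate / freq)
    delay = [random.uniform(-1.0, 1.0) for _ in range(period)]  # the 'strike'
    out = []
    for i in range(int(duration * sample_rate)):
        s = delay[i % period]
        nxt = delay[(i + 1) % period]
        delay[i % period] = damping * 0.5 * (s + nxt)  # average + decay
        out.append(s)
    return out

tom = karplus_strong(110.0, 0.5)  # a tom-ish hit around 110 Hz
```

Write `tom` out to a WAV file and you get the idea: the noise burst is the stick hit, and the averaging filter is the “body” ringing down.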
So here is a small demo I built around a manipulated drum loop which I generated. Over the course of this post I’ll try to explain how you can create one yourself.
Do you like strange audio software like I do? Most of the really odd and retro stuff, like a total emulation of the PPG Wave Computer Suite, or this micro electro drum machine, runs only under Win32 and is, therefore, out of my direct reach as a typical Mac user.
This also applies to a lot of software written for academic research purposes. Usually those programs were made to perform the specific calculations that prove the thesis stated in someone’s master’s paper. Written by people who do not specialize in software, in the pre-Jupyter, pre-cloud era, the choice of platform and tooling was often dictated by whatever the authors had been taught. Yes, there were times when they were teaching engineers and mathematicians to program in Delphi and Prolog. Don’t ask me why. Fun fact: I know for sure that there are still people who support and maintain in-house Delphi projects.
So, what are my options? Well, current versions of Wine do not support macOS Catalina, and Winebottler has a very strange usage paradigm which seems totally alien to me. Like packing an exe into a Mac app? Automated installations? On what planet am I? And, of course, there is no way I’m trying to build Wine myself, as it now requires pulling in all the 32-bit versions of development libraries and such. Something is surely gonna break, and I am not smart enough to deal with that confidently. So, why not try Docker instead, since I don’t care that much about latency for the use cases in question.
So, I’ve assembled a Docker image, loosely based on the ones provided by Jess Frazelle for PulseAudio support and Scott Hardy for X session support.
The Dockerfile is available in this repo, which also includes the pulseaudio config files.
To get X forwarding from the container, which would act as a thin client of sorts, we will be using XQuartz for macOS. In the XQuartz preferences, in the Security tab, make sure “Allow connections from network clients” is enabled. Restart XQuartz.
xhost +localhost
I did not win against the Mono(?)/framework installation problems, though. Every time I run a fresh image, it asks me to download a few things. Not critical for me right now.
When running a Linux distro inside Docker, ALSA will not work properly out of the box, as it will not be able to locate a hardware audio interface (no dummy hw bindings allowed at the ALSA level; if you want that, welcome to kernel recompilation!). On most UNIX and Unix-like environments, you can just bind your /dev/snd from the host to the Docker container, but not on your shiny polished macOS. One possible solution here is to use the pulseaudio client-server mechanism over its native transport.
Pulseaudio was originally designed to be a “distribution layer” between ALSA and applications, so that they could share audio channels and buffers properly. It worked quite well for general-purpose multimedia apps, but fell a bit short when it came to... actually working with audio (i.e. DAWs, audio editors, samplers, etc.). Therefore, people designed Jack, which serves a similar purpose, but with a much richer feature set and a smaller latency footprint. I believe Jack is currently the “unofficial standard” for audio and music production on Linux.
As you can see from the file, I also had to install the KXStudio stuff. Set up the repos according to the instructions on their site.
I decided to go with Carla as a VST host. For it, I also installed the Wine bridges, both the x32 and x64 ones.
There was some typical open-source fun along the way, of course. For instance, I had to go into Carla’s settings and enable the corresponding experimental feature. After that I tried to scan for new plugins, but that did not work (it worked when I ran it with root privileges, for some reason). Finally, I got it working by just dragging and dropping a DLL file onto Carla’s rack panel.
Curious fact: I’ve also tried to run the Pedalboard2 VST host through Wine. It starts, it loads the Win32 plugin DLL, however the audio it produces consists exclusively of clicks and pops. In theory, WineASIO should mitigate those problems; however, it is built exclusively around Jack.
And this is where we face the fundamental problem with this kind of setup: pulseaudio and Jack do not mesh well together at all. You cannot output from Jack to a pulseaudio sink. Jack forwarding requires a Jack server on the host machine. And installing and running Jack on macOS is the most unnatural thing ever (by my standards, of course).
So, I opted for pulseaudio and its native sink. Thankfully, there is a brew formula for pulseaudio. So, just run the sink with something along these lines (check the module docs for your version):
pulseaudio --load=module-native-protocol-tcp --exit-idle-time=-1
So, conceptually, it works. As you can see, I’m sharing this “pulseaudio-cookie” file between guest and host by simply mounting the config directory into the container’s filesystem. And now, the caveats.
It seems like PA can only output to a physical audio device. I tried editing the config in multiple ways, and it explicitly stated that in the error message: [github link]. Not a total deal breaker, of course, if you only need monitoring. You can monitor, and then simply record to WAV inside the container.
No freaking MIDI support. MIDI sequencing and events are all handled within ALSA, and therefore require a device to be present. Well, you know what? I can use the OSC protocol for that, passed through a configured Docker network port.
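To make the OSC idea concrete, here is a minimal sketch that hand-encodes an OSC message using only Python’s standard library and fires it at a forwarded UDP port. The /drum/hit address is made up, and 57110 is merely scsynth’s default port; adjust both to whatever is actually listening inside the container.

```python
import socket
import struct

def osc_pad(data: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, *args: int) -> bytes:
    """Encode a minimal OSC message carrying int32 arguments only."""
    msg = osc_pad(address.encode("ascii"))
    msg += osc_pad(("," + "i" * len(args)).encode("ascii"))  # type tag string
    for value in args:
        msg += struct.pack(">i", value)  # int32, big-endian
    return msg

packet = osc_message("/drum/hit", 38, 100)  # e.g. note 38, velocity 100
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 57110))  # forwarded Docker port
sock.close()
```

Since it is plain UDP, publishing the port with `-p` on `docker run` is all the “MIDI cabling” this setup needs.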
There is a handful of directions we can head in from here. And it feels like they all go to Hell. So, I took a random one, which would be trying to compile pulseaudio myself and making an override that will output to a virtual device.
Then disable the CodeMirror bindings in the following two files. In the last file, I substituted the cm-default colours with the ones you’ll find in CodeMirror’s theme CSS files (the theme which also goes by the name of the famous Romanian count). I could, of course, change CodeMirror’s theme the proper way, as its documentation suggests; however, then I would need to change the HTML help file generation for SC, which I’m too lazy and disorganized for.
Sample reinforcement (or even complete replacement) has become the staple of modern drum production. Surprisingly enough, this practice is not bound strictly to “electronic” or “sample-based” music. Even for “rock” records, some studios skip the drumkit recording entirely and just program all the drums instead. The more “extreme” the drum parts get, the more likely the production will embrace some kind of sample reinforcement, the justification being the high requirements for timing and hit consistency dictated by the genre.
And it did not happen overnight. The moment the first digital drum machine, specifically the Linn LM-1 Drum Computer, hit the market in 1980 at a stunning launch price of $4,995, there were already producers who did not want all the fuss of miking up a real drum kit. So they just asked the drummer of the band to program their parts into the Linn. Later, as the MIDI standard became more and more established among studio equipment manufacturers, many of them began offering a new kind of “no humans needed” approach to music recording.
So, let’s take a look at this bunch of notes from a song I’m trying to learn. For almost a year now. Yeah, that’s embarrassing as heck, I know.
It is a textbook example of what could be called “power chord music theory”: you can rather easily introduce accidentals, and even short temporary modulations, just by playing the notes of a scale as all power chords.
Let’s take a closer look at how it happens.
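As a tiny illustration (my own, not tied to the actual song): harmonize every degree of a major scale with a root-plus-fifth power chord and watch the seventh degree. A strictly diatonic fifth over B in C major would be the diminished fifth (B and F), but the power chord gives you B and F#, an accidental for free.

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of a major scale

def power_chord(root_pc):
    """A power chord is just the root plus a perfect fifth (7 semitones)."""
    return NOTES[root_pc % 12], NOTES[(root_pc + 7) % 12]

# power chords on every degree of C major
chords = [power_chord(step) for step in MAJOR_STEPS]
```

The chord on the seventh degree comes out as ("B", "F#") even though F# is not in C major; that painless out-of-scale note is exactly what the trick trades on.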
We all know that sound; it had been iconised and canonised way before I opened my first can of beer. There have been numerous replicas of that little machine, and later Roland itself returned to the market with a new, upgraded version of that magic box.
Lately, someone asked me how they could recreate that kind of sound in software.
In my opinion, there are three main components that account for most of this sound’s character: a saw oscillator, a highly resonant low-pass filter, and short sequenced lines with portamento and accents.
So, what would be the steps for dialing in these components? Well, it kinda depends on your time and budget. I believe FL Studio had an emulation of these, but it got deprecated in later versions. Roland Cloud offers software emulations as well. There is even a browser app that does the trick: http://errozero.co.uk/acid-machine/ . However, if you want to get your hands dirty, the general guideline would be the following:
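A rough sketch of those three ingredients in plain Python (my own toy rendition, not any product’s algorithm, with every constant picked by guesswork): a naive saw oscillator fed into a Chamberlin state-variable low-pass filter, with an exponential per-step glide standing in for the portamento.

```python
import math

SR = 44100

def saw(phase):
    """Naive (aliasing) sawtooth in [-1, 1) from a phase in cycles."""
    return 2.0 * (phase - math.floor(phase + 0.5))

def acid_line(step_freqs, step_dur=0.125, cutoff=800.0, resonance=0.85):
    """Saw oscillator into a Chamberlin state-variable low-pass filter.
    One frequency per 16th-note step; an exponential glide between
    steps stands in for portamento."""
    f1 = 2.0 * math.sin(math.pi * cutoff / SR)  # SVF frequency coefficient
    q = 1.0 - resonance                         # damping: smaller = more resonant
    low = band = 0.0
    phase = 0.0
    freq = step_freqs[0]
    out = []
    for target in step_freqs:
        for _ in range(int(step_dur * SR)):
            freq += (target - freq) * 0.002     # portamento-ish glide
            phase += freq / SR
            x = saw(phase)
            low += f1 * band                    # state-variable filter step
            high = x - low - q * band
            band += f1 * high
            out.append(low)                     # take the low-pass output
    return out

line = acid_line([110.0, 110.0, 220.0, 164.8])  # a 4-step pattern
```

Accents would be one more multiplier per step (boosting both level and cutoff), which I left out to keep the sketch short.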
Last year I (almost accidentally) discovered SuperCollider. It is the kind of advanced multipurpose audio software platform a computer music nerd like me could only dream about. Its server part is a real-time audio-generating powerhouse, which responds to OSC protocol commands and can be programmed using the sclang language.
I installed it and was totally hooked the moment I executed its hello-world code. Why? Well, it generated a gabber beat on the fly, for f0kks sake!
And when I realized that it also supports physical modeling, I was literally blown away. Also, remember, these are all free and open-source tools. They might be not so glossy, they do not have that “WOW EFFECT ON” switch, and they do not come with sweetly marketed advertising promises of magically making your productions sound “commercial”, “radio-friendly”, or “like your fav. band/producer”. However, if you put effort into learning them, they can really take your creative process to the next level.
So, there are a number of SuperCollider unit generators (UGens) that are based on physical modeling.
You’ll have to build those yourself and place them into the SC extensions directory. Here are brief instructions on that:
Also, if you are loading synthdefs that use extensions inside
any containerized environment (SuperColliderAU), the extension plugins should be reachable from there.
In this article we will concentrate on making a somewhat realistic snare drum sound, using as a foundation a physical model that simulates the audible vibrations of a simple membrane. On their own, those DWGMembrane SC UGens sound somewhat like tuned percussive, timpani-ish rototoms.
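Before reaching for the real model, the “membrane plus snare wires” decomposition can be illustrated in a few lines. This is emphatically not DWGMembrane, just a toy Python stand-in of my own, in which a damped sine plays the drum head and a fast-decaying noise burst plays the wires.

```python
import math
import random

SR = 44100

def snare_sketch(dur=0.3, fundamental=180.0, snare_amount=0.6):
    """Toy snare: a damped 'membrane' mode mixed with a noise burst
    standing in for the snare wires rattling against the bottom head."""
    random.seed(1)  # deterministic, for illustration
    out = []
    noise_lp = 0.0
    for i in range(int(dur * SR)):
        t = i / SR
        body = math.sin(2 * math.pi * fundamental * t) * math.exp(-25.0 * t)
        white = random.uniform(-1.0, 1.0)
        noise_lp += 0.3 * (white - noise_lp)               # one-pole smoother
        rattle = (white - noise_lp) * math.exp(-40.0 * t)  # keep the hiss, decay fast
        out.append((1.0 - snare_amount) * body + snare_amount * rattle)
    return out

hit = snare_sketch()
```

The physical model replaces the single `body` sine with a whole family of membrane modes, which is exactly why it sounds so much more alive.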
So far, we’ve been working on a set of rather isolated code components that produce audio using structurally different approaches: physical modeling (through a SuperColliderAU “containerized” instance), PCM sample playback (via RomplerGun), and subtractive synthesis (via SynthKick).
However, if you remember the initial set of requirements we listed for MembrainSC, I mentioned flexible mixing-and-matching options, which could make this VST an interesting hybrid drum machine rather than one more 909 clone.
So, in practical terms: how do I mix the sampled and synthesized components of a kick drum and output them to a specific output channel of the plugin? The same goes for a blend of modeled and sampled snares, which should have its own channel in the plugin’s output bus.
To get a bit of a grip on that, let’s explore JUCE’s mixing and routing possibilities.
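The routing problem itself, though, fits in a few lines of plain Python. This is a conceptual sketch only (nothing here is JUCE API, and all the names are made up): summing named mono sources into the channels of a multichannel output bus.

```python
def mix_to_bus(sources, routing, num_channels=4, block_size=4):
    """Sum mono sources into a multichannel block.
    'routing' maps source name -> (output channel, gain); conceptually
    this is what the plugin has to do when writing its output bus."""
    out = [[0.0] * block_size for _ in range(num_channels)]
    for name, samples in sources.items():
        channel, gain = routing[name]
        for i, s in enumerate(samples[:block_size]):
            out[channel][i] += gain * s
    return out

# kick = synth + sample layers on channel 0, snare blend on channel 1
sources = {
    "kick_synth":   [0.5, 0.2, 0.0, -0.1],
    "kick_sample":  [0.4, 0.1, 0.0,  0.0],
    "snare_model":  [0.0, 0.3, 0.2,  0.1],
    "snare_sample": [0.0, 0.2, 0.1,  0.0],
}
routing = {
    "kick_synth":  (0, 0.7), "kick_sample":  (0, 0.5),
    "snare_model": (1, 0.6), "snare_sample": (1, 0.8),
}
bus = mix_to_bus(sources, routing)
```

In the actual plugin, the per-source gain pair becomes a parameter, and the summing loop becomes buffer adds inside the audio callback.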