Unrelated to anything, but I just wanted to share three random encounters with some seriously weird stuff.
Xcode, by default, pauses execution every time it stumbles upon a failing assertion while your code is running. Annoying as hell. The fix for that is somewhat brutal.
Open the “Breakpoint Navigator” by clicking on the icon shaped like a breakpoint tag (a right-pointing arrow) in the left pane of the Xcode window.
In the bottom left of the “Breakpoint Navigator”, click on the ‘+’ sign, then select “Exception Breakpoint…”
A new window will open. Choose “All” for “Exception”.
Check “Automatically continue after evaluating actions”. That’s it!
Ever tried to reset some macOS application’s settings? Like, its configuration files are nowhere to be found.
It’s all because of Objective-C NSUserDefaults, which are supposed to be stored in the ~/Library/Preferences/ folder.
However, only executing the following made the settings reset for good:
defaults delete com.codeplex.pcsxr
com.codeplex.pcsxr is just an example, but all the other application namespaces are similar.
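If you don’t know the exact identifier, you can usually fish it out of the list of registered defaults domains (the grep pattern here is just an example for my case):
# list every defaults domain and look for the app you are after
defaults domains | tr ',' '\n' | grep -i pcsx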
In SLADE you have to add scripts to the WAD archive manually, otherwise they won’t be executed. To make this work with UDMF, you have to add the scripts lump before the ENDMAP marker. Also, you need to recompile it manually if it changes.
Just a quick note on how to import wavetables from WaveEdit to Pigments
Pigments is a brilliant example of what is now marketed under the “power synth” buzzword; I first encountered it in Future Music magazine. Much like Massive,
it combines a pack of different synthesis techniques (engines), all streamlined into a single interface for you to mix and match. Both of the aforementioned products
also sport an easy drag-and-drop modulation system, which is a great way to get started with sound design.
Pigments is a great synth, but it does not have a built-in wavetable editor. So, how about another
weird audio-software combo? I’m not sure if it is a good idea, but it is definitely a fun one.
I think that WaveEdit was originally designed to be used with AudioMoth, or some
other audio hardware, but it is also a great tool for wavetable editing with a very appealing UI.
More importantly, it has an integrated library of community-curated wavetables, which you can use as a starting point for your own designs,
or to approximate some of the classic digital synth sounds, like the ones from Waldorf Microwave or PPG Wave.
There is a tiny compatibility issue, though, probably related to wavetable resolution. If I save a wavetable in WaveEdit and then try to import it into Pigments, the
frames do not line up. The wavetable is still usable, but it is not exactly what I wanted. I think the problem is that WaveEdit uses 64 frames per wavetable, while Pigments expects 512.
How do we make them align properly? Simple! Just make it 8x slower (512 / 64 = 8), i.e. apply a speed multiplier of 0.125, and you are good to go.
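If you want to sanity-check what WaveEdit actually exported, soxi can tell you the raw sample count; the 256-samples-per-frame figure and the filename below are just my assumptions about the export:
# expect 64 frames x 256 samples = 16384 samples for a WaveEdit export
soxi -s my_wavetable.wav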
After upgrading to macOS Ventura I had a bit of a problem with the SSH agent. This problem, though, could still
be related to the interface between the monitor and the chair, as we developers like to say.
I’m using multiple SSH keys, mainly to distinguish between my personal and work GitHub accounts. Git has a nice option for that, btw:
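For instance, core.sshCommand lets you pin a dedicated key to a repository (the key path below is just an example):
# run inside the repository that should use the personal key
git config core.sshCommand "ssh -i ~/.ssh/id_ed25519_personal -o IdentitiesOnly=yes"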
After the upgrade,
I migrated the SSH keys from the old machine. Manually. I mean, like scp-manually. Don’t ask me why I did not use the standard migration tools.
Believe me, there are things behind that slick silver facade that you don’t want to know about. To give you an idea of the
horror, I will just say that my experience with the migration tools for this platform made me really miss CloneZilla.
Anyway, back to the topic. After the migration, when I tried to push something to my personal GitHub repo, I got an authentication error.
So I had to add the key to the agent. I did it like this:
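That is, with the macOS-specific Keychain flag (the key path is, again, just an example):
ssh-add -K ~/.ssh/id_ed25519_personal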
The -K option is specific to macOS: it adds the specified identity to the agent and stores its passphrase in the Keychain, the secure password management system on macOS.
Lately, I also stumbled upon this:
WARNING: The -K and -A flags are deprecated and have been replaced
by the --apple-use-keychain and --apple-load-keychain
flags, respectively. To suppress this warning, set the
environment variable APPLE_SSH_ADD_BEHAVIOR as described in
the ssh-add(1) manual page.
So, probably, it is better to use the new options:
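Which, going by that warning, boils down to (same example key path):
ssh-add --apple-use-keychain ~/.ssh/id_ed25519_personal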
This will be a short one. I just wanted to share a quick tip on how to use WaveEdit with virtual audio outputs,
in case someone is bashing their head against the wall trying to figure out what is wrong with it.
Earlier I reported that Audacity had some problems with recording virtual audio outputs. I tried to record
WaveEdit’s output with it, using BlackHole 16ch, and it was not working; I was getting a blank recording. That was a bit strange, because
I was able to record WaveEdit’s output with other applications, like SoX and ffplay.
BlackHole 16ch also worked flawlessly for all the other applications and use cases I had. So I was a bit puzzled,
until I checked the multi-channel wave rendered by sox. It was channels 3 and 4 that had the actual audio data, not 1 and 2, as one might expect.
Why WaveEdit (or is it sox?) acts like this is beyond my comprehension.
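If you hit the same thing, sox’s remix effect gets you a normal stereo file out of the capture (filenames are placeholders):
# keep only channels 3 and 4 of the multichannel capture as a stereo file
sox multichannel_capture.wav stereo_take.wav remix 3 4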
So, during another lunch break at my not-creative-at-all day job, we were discussing whether purely digital recordings could have existed in the ’80s.
I always hijack the conversation to talk about music, and this time was no exception, which, of course, tells you a lot about my social skills and level of adaptation.
I was quite sure that it was possible, but I was not sure about the exact dates and the technology used.
Over the course of the discussion, one of my colleagues even challenged me to tighten up the definition of “digital recording” itself.
This was the turning point, because I realized that we were talking about the existence of systems or solutions that included both digital synthesis and sequencing tools, as well as digital recording of the result.
All that extravaganza back when DSP was still in its infancy.
As we all know, the ’80s were flourishing with digital synthesizers, but digital recording was not that widespread, so
you would still track your beloved Yamaha DX7 or Roland D-50 to a tape recorder or a reel-to-reel machine.
During the conversation, we agreed that this does not count as a digital recording, because the recording medium is analog.
So I made some vague hypotheses about what kind of setup that might have been and what year it hit the market.
I decided later that I would do some quick fact-checking to update my friends with the correct answer. This, however, turned into
a full-blown mystery investigation, which I am going to share with you.
My obsession with using found sounds in my recordings often gives me a sudden itch to bounce a few seconds of audio output from a random application on my computer.
Say, a web application or a video player.
And then I really feel how cumbersome and bloated it is to start a general-purpose DAW for that and crawl through the
awkward ritual of gently rubbing its multichannel mixer and transport controls. It is like
summoning a 12-foot-tall, four-horned purple demon to open a can of beans for you.
There are of course lightweight open source wave editors, like Audacity.
Sadly, previous versions of it had some problems with recording “virtual” audio outputs like the BlackHole 16ch
I’m currently using.
It seems to be working correctly from version 3.2.5 onwards, though. Until that version was published I had plenty of free time to develop my own crappy solution
for yet another purposely invented problem. But this is how you mine proper tech-blog material, isn’t it? So, let’s go!
The Ingredients
SoX - the Swiss Army knife of DSP via the terminal, as its developers call it. And, oh boy, indeed it is.
ffplay - part of the FFmpeg suite; a very simple, portable media player built on the FFmpeg libraries.
The Magic
UNIX-like terminal command piping using the | operator. A pipe operator placed between two commands simply tells the shell
that the first command’s output becomes the second command’s input. This, combined with another UNIX paradigm called I/O redirection,
is what really puts that “shell magic” within reach of your fingertips (unless half of them have forever grown into
that horrible bio-prosthetic named after a small rodent species). So the commands for bouncing live output to a file are the following:
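(Both are a sketch of my setup: they use sox’s coreaudio driver, and the device name and output filename are just what my machine happens to use.)
# 1) monitoring: pipe the virtual device into ffplay
sox -t coreaudio "BlackHole 16ch" -t wav - | ffplay -nodisp -
# 2) recording: capture the same device straight to a file on disk
sox -t coreaudio "BlackHole 16ch" bounce.wav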
As you can see, I provided two very similar commands here. Indeed, I use the same source for both of them, which is my
virtual input device.
The first command just redirects the output of my virtual device to ffplay; this solves the record-monitoring problem.
The second one actually records the audio to disk.
The monitoring pipeline sometimes fails while the recording one stays stable, so I would recommend running them in separate terminal
sessions.
But wait, there is more!
Although sox does not look all that “user-friendly”, it can be much smarter than trivial live-bounce methods. Especially when it comes to trimming silence.
Let’s consider the following command:
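(Again, the device name and filename are placeholders; the silence parameters are the ones dissected below.)
# record from the virtual device and trim silence at the start and the end
sox -t coreaudio "BlackHole 16ch" bounce.wav silence 1 0.1 1% 1 0.5 1%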
This command will record audio from my virtual device and trim silence from the beginning and the end of the recording. Technically, it applies the “silence” effect as part of sox’s internal FX chain.
Let’s break down the parameters a bit:
1 0.1 1% — the “start” triple: leading silence is trimmed until at least 0.1 seconds of audio above the 1% volume threshold is detected.
1 0.5 1% — the “stop” triple: output ends once 0.5 seconds of audio below the 1% volume threshold is detected, which effectively trims trailing silence.
When bouncing live audio, you would probably want some safeguard to prevent overwriting existing files. sox does not have a built-in solution for that, but you can easily write a small shell script to do that for you. Here is a small example:
#!/bin/bash
wildcard="$1"
counter=1

while true; do
    filename="${wildcard/\*/$counter}"
    if [ ! -e "$filename" ]; then
        echo "$filename"
        break
    fi
    ((counter++))
done
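You would use it roughly like this (the script name, the filename pattern and the device are placeholders, so treat it as a sketch):
# save the script as e.g. next_free_name.sh, then let it pick the output file
sox -t coreaudio "BlackHole 16ch" "$(./next_free_name.sh 'bounce_*.wav')"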
In the previous article we briefly touched upon the evolution of audio playback and processing under Unix-like operating systems. That was quite a ride, if you remember. Maintaining the traditions firmly set in this blog, let’s ask ourselves: how can we make my experience with *NIX audio even more pathological? Any suggestions? Right! Let’s run it in Docker under macOS.
OK. Let’s first politely (for the most part) dismiss the obvious suggestions from sane-minded people. This will surely be mixed with some old man’s grumbling about the peculiarity of his ill habits.
Q: Why not just install Wine on your host system?
A: Being the mentally challenged paranoid that I am, I have to state that I’m simply afraid. As of now, macOS has completely dropped 32-bit support, and the instructions for building and setting up Wine on such a system read like a recipe for opening a portal to Hell. I’m not sure if I’m ready for that yet.
Q: Why not use a proper hypervisor instead of crippled Docker, which was not designed for these kinds of shenanigans anyway?
A: Two points here. First, I simply do not want to install another piece of heavyweight software on my laptop (which is, technically, not even mine). Second, I already have this Docker thing lying around, and my day job makes me interact with it a lot. So why not utilise it for something useful? (Depends on how one defines “useful”, of course.)
Q: What’s the use of some old Win32 application? It does not fit the modern audio production standards at all.
A: That’s just how I roll, sir/ma’am.
On normal Unix-like systems, the host’s sound (and MIDI) hardware can simply be shared with the container via a plain Docker device mount, like:
# x11docker provides this setup with option --alsa.
docker run --rm --device /dev/snd ALSAIMAGE speaker-test
However, on our beloved slick $2K+ tinfoil cans, we are going into…
You all know DEXED, an awesome Yamaha DX7 emulator and librarian. It seems I stumbled upon an undocumented feature. When you use it to load a DX7 cart dump, it lets you select any file on your system. So, why not pick a file that has nothing to do with DX, or even with music whatsoever? This way it fills all the registers of the virtual Yamaha with rubbish. And it sounds wild, folks, absolutely untamed.
Listening back to this, I was kind of imagining those big heads like B. Eno or T. Reznor who have somehow found a way to do the same with a real, physical DX7 keyboard. Was it really a dedicated programmer box, or were they just sending digital noise in the form of SysEx messages?
Probably the former, no matter what my corrupted imagination tells me.
Have you folks seen this amazing JS demo resembling an old typewriter? I personally find it hilarious and enjoyable. The ink refill process, the brokenness fader, the click sound: this makes it almost an ideal soft machine.
There is one problem though, which, sadly, I cannot really attribute to the precision of the simulation. Sometimes it will just hang up and stop reacting to any keystrokes, and there is no way to save the text you’ve already typed or, continuing the analogy, to feed the page back into the roll after you fix the jam.
My intuitive approach to this show-stopper was to just re-run the bits of JavaScript that set the virtual typewriter up. I found a very useful code bit somewhere, which worked exactly as I expected.
var arr = document.head.getElementsByTagName('script')
for (var n = 0; n < arr.length; n++)
    eval(arr[n].innerHTML) // re-run every inline script found in <head>
Thanks to this, I was able to finish a page of my upcoming horror-story fanzine. Hell yeah, you cannot really achieve that in any other way than by using a good old typewriter!
In this article we are going to quickly reiterate the reasons why audio production on Linux can be challenging (if not frustrating at times). I think the main reason, besides, of course, my own incompetence, is that the audio facilities are handled in too much of a Unix way. But what is the Unix way, anyway? In short: we have one isolated piece of software doing one single thing, but doing it well. If any additional processing needs to be done with the output, we pipe (|) it into another isolated piece of software. Several steps of that form a pipeline.
Sounds great, right? Well, unfortunately, the seeming tidiness of this approach can quickly go down the tubes (pun intended).
Once you need something specific, or something simply done your way, you’ll have to explore the whole pipeline from end to end. That’s alright if the nodes are simple utilities like cat or grep, but imagine exploring each of them with massive tectonic layers of legacy software archaeology piled on top.
So, here is how this usually happens. It all starts with ALSA, a kernel subsystem which talks directly to your audio hardware. It does not handle software mixing and routing efficiently, though, so applications cannot share your device inputs/outputs properly, and we are not even talking about routing audio between them yet.
To satisfy those needs, PulseAudio was introduced: a software wrapper around ALSA that re-routes all audio streams through itself and distributes them between the existing hardware inputs and outputs (sinks). Applications compiled to use PulseAudio as their audio driver cannot use ALSA directly, all according to the highest standards of incomprehensible madman’s logic we maintain in the world of programming.
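To see that routing layer in action, pactl is enough; the sink-input index and sink number below are made-up examples:
# list the sinks PulseAudio currently knows about
pactl list short sinks
# move an already-playing stream (sink-input #42) to another sink (#1)
pactl move-sink-input 42 1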
PulseAudio, however, had some issues with latency, which made it hardly usable for “professional” audio recording, which usually requires near-real-time monitoring of what is being recorded, for example.
To solve a problem we had purposely introduced ourselves, an alternative to PulseAudio called JACK was written. Does this remind you of “The Futurological Congress”? For me it totally does.
For a long time, JACK was the standard for (semi)professional audio work on Linux. Apparently, it does not play nicely with PulseAudio. Some applications (e.g. Wine) do not support JACK at all. Of course, Wine has WineASIO, which can be routed to JACK, but that only works for applications that use ASIO for audio, i.e. DAWs.
Both JACK and PulseAudio have their own solutions for audio over the network. PulseAudio has native client-server support and also a special audio sink for streaming audio over the RTP protocol (is that the same thing as zeroconf or not?). Judging by the configuration strings, PulseAudio’s client-server support can use TCP as a transport, and something called “native”, which I presume would be standard Unix sockets.
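For illustration, this is roughly what those knobs look like on the PulseAudio side; the module arguments are examples, not a recommendation:
# expose the native protocol over TCP to clients on the local network
pactl load-module module-native-protocol-tcp auth-ip-acl=192.168.1.0/24
# or multicast a sink's monitor as an RTP stream (replace MYSINK with a real sink name)
pactl load-module module-rtp-send source=MYSINK.monitor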
JACK provides several ways of doing remote audio networking: its native netone add-on, which uses the CELT codec and a master/follower pattern, and the newer NetJack2, which has network discovery.
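As far as I understand (I have not run either of these myself, so treat the driver names as an assumption to verify against your jackd build), the follower side is started roughly like this:
# classic NetJack1 ("netone") backend
jackd -d netone
# NetJack2 backend with auto-discovery of the master
jackd -d net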
Lately, yet another ultra-low-latency, professional-grade Unix sound system was introduced. It’s called PipeWire, and it is sort of a chameleon protocol which can act as a PulseAudio backend for PulseAudio clients, a JACK backend for JACK clients, and so on. AFAIK, PipeWire does not have audio-over-network support.
NOTE: Some amoebas are not shown in the figures, simply because they are considered extinct. Before ALSA, earlier versions of Linux shipped with the Open Sound System (OSS), which was later made proprietary and no longer counts as free software. However, as we know, anything once introduced at the kernel level is destined to be supported for eternity. That’s why ALSA still has an OSS emulation mode.