This article summarises my thoughts on why we often end up shipping terrible code despite formally following all the good practices.
We will think about this in the next sprint
Agile software development presents itself as a set of perfectly fitted values; unfortunately, very few of them really help to improve the quality of the product from an engineering point of view.
This bulk of healthy practices around customer collaboration and continuous delivery does not directly address any of the code quality issues that come along. Sometimes it feels like the methodology is pushing you to avoid the topic entirely, substituting it with a “pact with the Devil” grade approach. Instead of making bad code better, it makes bad code shippable.
What’s even more pitiful, the agile requirements can blur the lines between “good code” and “shippable code” so much that development teams can ship code for years without even realizing how bad it is. Until it all suddenly crumbles.
Sure, the professional software engineering community reacted to that with the famous and noble Software Craftsmanship Manifesto, but IMHO it is still too abstract to have a real impact on how software development (read: writing code) is approached in the world of Agile.
Peer-reviewed garbage
Relevant information is hard to come by. Soon it’ll be everywhere, unnoticed.
– John Cage
Despite being a very important and useful practice, code review cannot solve the problem of “shippable code” by itself.
In practice, code review can efficiently check the following:
Some obvious errors, which were overlooked by the author and caught by a fresh pair of eyes
Compliance with the team’s conventions, readability and code quality standards
Correctness of selected data structures
Correctness of class structure and links between the elements introduced within the PR and nearby
What the code review process often fails to achieve is to evaluate how this piece of code, which might be cohesive and make total sense by itself, fits into the bigger picture: the project, the domain, the role, etc.
Thus, it becomes very hard to prevent the situation where on paper (and in mind) you ship a perfectly fine piece of code, when in fact you’re just throwing another bucket of garbage onto a pile.
A practical example: a PR adds some new functionality to a feature and introduces a new service/utility class. But what if that domain already has 50 services? Can a peer reviewer confidently keep track of that?
Micro-services are not the answer
There is nothing we really need to do that is not dangerous.
– John Cage
The solution I propose is the thing I call “architecture ownership”. Or “ownership of architecture”; it should be analogous to other agile jargonisms like “product ownership”, “feature ownership”, etc.
Being an owner of a certain part of the project architecture implies having the answers to the following questions:
What are the business objects? What operations are defined on them?
What invariants do those business objects have?
What is the pipeline for presenting those objects to the customer?
What is the biggest current code quality challenge, what pollutes code the most?
There is an important caveat to be made here. You cannot have architecture ownership at the beginning of a project/feature, because in the agile world you are, basically, not supposed to make architectural decisions at that point. Sounds crazy, huh?
It might be a good approach to establish ownership of certain domain/feature structures and flows once they have matured a bit. Frequently that comes along with heavy refactoring and code re-arrangements, but once you have those set, it will be much easier to detect undesirable distortions of the architecture when reviewing PRs.
Certain architectural decisions, like invariant checking, strict typing, member visibility, and final classes and methods, can enforce some definitive patterns, making sub-par OOP approaches if not impossible, then at least clearly visible even in isolated PRs.
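To make this less abstract, here is a hedged sketch of the idea; the class name and the invariant are made up for illustration, not taken from any real project. A final class with a private field and a guarded constructor leaves very little room for creative misuse to slip past a reviewer, even in an isolated PR:

#include <stdexcept>
#include <string>

// "final" means nobody quietly subclasses this "just for this sprint";
// the private field means the only way in is through the checked constructor.
class PercussionVoice final
{
public:
    explicit PercussionVoice (int midiNote) : note (midiNote)
    {
        // the invariant is checked once, at the boundary
        if (note < 0 || note > 127)
            throw std::invalid_argument ("MIDI note out of range: " + std::to_string (note));
    }

    int midiNote() const { return note; }

private:
    int note;
};

Anyone who needs to bypass the check now has to either change this class or introduce a parallel one, and both moves are loud enough to be caught in review.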
But what do I do?
The way to lose our principles is to examine them.
– John Cage
Practical steps for introducing the concept of architecture ownership within an agile team would be the following:
Identify a part of the product that needs architecture ownership. Most often it is the one that is hardest to maintain and constantly giving you headaches.
Measure the test coverage of this part
Try to determine the main reason this part of the project became unmaintainable. What kind of bad patterns accumulated there?
Based on the previous observation, come up with a new, cleaner architecture that enforces certain patterns
Gradually move this feature/functionality to the new architecture, using the previously written tests as a backup (see the sketch after this list)
Agree on a way of code reviewing the upcoming related PRs that checks them for possible pollution and misuse of the current elements of the architecture.
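To illustrate the “tests as a backup” step from the list above, here is a hedged sketch of a parity check between a legacy routine and its replacement in the new architecture. The function names, the discount logic and the sample inputs are invented for illustration; the only point is that the old behaviour gets pinned down before anything is moved:

#include <cassert>

// hypothetical legacy implementation whose behaviour we want to preserve
int legacyDiscountPercent (int orderTotalCents)
{
    return orderTotalCents >= 100000 ? 10 : 0;
}

// hypothetical replacement living in the new architecture
int newDiscountPercent (int orderTotalCents)
{
    return orderTotalCents >= 100000 ? 10 : 0;
}

int main()
{
    // inputs the legacy code is known to handle, including the boundary
    const int samples[] = { 0, 999, 99999, 100000, 250000 };

    for (int total : samples)
        assert (legacyDiscountPercent (total) == newDiscountPercent (total));

    return 0;
}

Run it after every chunk of functionality is moved; as long as it stays green, the refactoring has not silently changed behaviour.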
In order for MembrainSC to stand out alongside its other features, we want it to be able to play one-shot percussion PCM samples. This might seem unimportant and obvious, but combining synthesis / modeling with samples is extremely valuable. Again, I want to emphasize that it moves the complexity of layering different elements of percussion hits away from your DAW routing.
So, playing wave samples. JUCE has built-in support for that, and the API is surprisingly agile and straightforward.
All you need to do is define a new class, make it inherit from Synthesiser, define a file reader for your wav, and define the key range (yes, JUCE even does the sample pitch-mapping for you!).
void RomplerGun::setup()
{
    // add voices to our sampler
    for (int i = 0; i < MAX_VOICES; i++) {
        addVoice(new SamplerVoice());
    }

    // set up our AudioFormatManager class as detailed in the API docs
    // we can now use WAV and AIFF files!
    audioFormatManager.registerBasicFormats();

    // now that we have our manager, let's read a simple file so we can pass it to our SamplerSound object
    File* file = new File(File::getSpecialLocation(File::currentApplicationFile).getFullPathName()
                          + "/Contents/Resources/kick_trimmed.wav");
    AudioFormatReader* reader = audioFormatManager.createReaderFor(*file);

    // lock our sound to middle C
    BigInteger allNotes;
    allNotes.setRange(60, 1, true);

    // finally, add our sound
    addSound(new SamplerSound("default", *reader, allNotes, 60, 0, 10, 10.0));

    int numFiles = scanROM(File::getSpecialLocation(File::userDocumentsDirectory).getFullPathName() + "/MembrainSC");
    std::cout << "Scanned files: " << numFiles << "\n";

    sampleLoaded = true;
    sampleIndex = 0;
}
Now that we managed to build the thing, let’s see how we can actually use it. Of course, you can follow the official instructions and sorta “bake” SuperCollider synthdefs into the plugin. There is (was) even a script among the SuperCollider quarks for that.
But this approach kinda fell flat for me. Guess why? Boooring GUI? Yes, sure, but also no multichannel output, and MIDI support is rudimentary as well.
The normal approach here would be to just port the project to a modern framework like JUCE, because there is not that much DSP-specific code in SCAU itself; it kinda just starts the SC server, exposes a UDP port and routes the server’s output to the plugin output. Unfortunately, this is still too much for my “a pointer to a pointer? WTF?” C++ skills.
So, my decision was to start my DSP C++ journey with something extremely noob-friendly and actively maintained, like the JUCE framework. Among the million other cool DSP-related features, it provides a suite for hosting other plugins. So, you basically can have a plugin audio-processing chain inside a parent “container plugin”. Well, why not try it with our recently built SuperColliderAU then? Let’s see how it goes:
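As a taste of it, here is a rough, hedged sketch of what the hosting side can look like in JUCE. This is not the plugin’s actual code: the helper name and the component path are assumptions for illustration, and you would normally drive the returned instance from your parent processor’s processBlock():

#include <juce_audio_processors/juce_audio_processors.h>

// hypothetical helper: load an inner plugin (e.g. a SuperColliderAU build) into our host
std::unique_ptr<juce::AudioPluginInstance> loadInnerPlugin (double sampleRate, int blockSize)
{
    juce::AudioPluginFormatManager formatManager;
    formatManager.addDefaultFormats();   // register the plugin formats JUCE was built with

    // assumed install location of the component; adjust to your system
    const juce::String path ("/Library/Audio/Plug-Ins/Components/SuperColliderAU.component");

    juce::KnownPluginList knownPlugins;
    juce::OwnedArray<juce::PluginDescription> found;

    // ask every registered format to describe the plugins inside that bundle
    for (int i = 0; i < formatManager.getNumFormats(); ++i)
        knownPlugins.scanAndAddFile (path, true, found, *formatManager.getFormat (i));

    if (found.isEmpty())
        return nullptr;

    juce::String error;
    auto instance = formatManager.createPluginInstance (*found[0], sampleRate, blockSize, error);

    if (instance == nullptr)
        DBG ("Could not load inner plugin: " + error);

    return instance;
}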
Hey! You! Wanna turn your SuperCollider program into a real AU virtual instrument to use in your DAW? Great, then come with me into the depths of C++ sloppiness and clumsiness.
So, once upon a time there was a project that allowed running a SuperCollider synth server inside an AU plugin. It even ended up in the official SC repositories. But years were passing and the dependencies around it slowly mutated, becoming more and more hostile. At some point it stopped producing a valid AU, and later it even stopped building at all. The original author still expresses some interest in the project (the last commit was about half a year ago), but yet again I’m too anti-social to bother people with my crazy ideas, so I went there myself.
Down below are the steps that fix the problems with building it.
At the moment of writing this post, I am on macOS Catalina 10.15.3 (19D76). I do not know how important that actually is, but nevertheless.
It is better to build SuperCollider from its standalone repo. Just get a fresh revision and follow the build instructions, then copy it to /SuperColliderAU/supercollider/build. Revision (22e45f78fb8db8769aea34ce4ce09082917dd40d) worked for me.
Go to the superCollider submodule and check out the develop branch. Get some fresh revision; (30205c18a6cb5d74fff775d28469051899a2d0bb) worked for me.
On some hosts (like the one bundled with JUCE as a code example), SuperColliderAU does not get fully initialised on scan, so the scan crashes when it tries to destroy the instance afterwards. If that is your case, edit the SCProcess.cpp file and comment out the mPort->stopAsioThread(); line.
By the way, what’s awesome about running a JUCE VST host through Xcode is that you can use the debugger on the problematic plugin code, if you have the sources and it was compiled with debug symbols.
SuperCollider now uses its bundled C++ Boost, and the SuperColliderAU make files are not aware of that. As a workaround:
deadvikki@darkhorse$ brew install boost
Edit the SuperColliderAU CMakeLists.txt file so it also searches your local include paths.
It will compile, but the link step will fail; that’s not fatal.
Re-run make in verbose mode:
make VERBOSE=1
Manually execute the last (failed) link command, just edit it a bit:
Remove -lboost_system and add -L /usr/local/lib -framework AppKit
Run make again so it can finish copying the resource files to the build dir.
By the way, it screwed up my icu4c version, which node relied on. So:
brew info icu4c
brew switch icu4c 64.2
Bottom line:
It builds. Currently it does not pass AUVAL checks; thankfully, garbage DAWs like my old trustworthy Tracktion7 do not bother with validation. People have been reporting that it also works in Reaper.
Problem: it seems that we are not freeing the UDP port properly, so within the same host it starts on a different port every time. But that might just be the nature of UDP ports.
A drum machine with that “sci-fi underground” vibe…
So, I can already imagine you all asking me, why design your own drum machine in 2020, for fokks sake?
Well, in short, I have my reasons. Those are mainly education, inspiration and megalomania, of course. The latter is particularly important, because instead of doing one more classic 80’s drum machine replica (which I am not capable of, anyway), I came up with an idea for a plugin that reproduces the same “engineering around limitations” approach to doing drum sounds: a combination of synthesis, physical modelling and PCM samples.
Also, unlike those old low-computational-power units, we can give it much richer humanisation capabilities, especially for cymbals.
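To give one hedged example of what I mean by humanisation (the names and ranges below are made up, not actual MembrainSC code): every hit can get a slightly different velocity and decay, so repeated cymbal strikes do not sound machine-gunned.

#include <random>

// toy parameters for a single percussion hit
struct HitParams
{
    float velocity;
    float decaySeconds;
};

// hypothetical helper: jitter the base parameters by a few percent per hit
HitParams humaniseHit (float baseVelocity, float baseDecay, std::mt19937& rng)
{
    std::uniform_real_distribution<float> jitter (-0.05f, 0.05f);   // +/- 5% variation

    return { baseVelocity * (1.0f + jitter (rng)),
             baseDecay    * (1.0f + jitter (rng)) };
}

The point being that in software the variation can be computed per hit instead of being baked into a handful of fixed samples.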
Like most of us, I’ve got a handful of bash scripts to automate my day-to-day tasks related to work and personal activities. They all live somewhere along the lines of ~/myscripts, and this entry is added to my PATH shell env.
The downside of this is, of course, that your scripts directory becomes a bit of a mess, congested with all that obsolete and half-baked stuff, which pollutes your autocomplete as well.
To fight this, I came up with a simple convention for all the utility scripts I place in that magic folder. There should be two comment blocks after the bash header in every script: first a single line of space-separated tags, then a short description of what the script does.
An example would be:
#!/bin/bash
# blog util mp3
# Converts DAW rendered demo WAVs to MP3 and places them
# into the dedicated blog folder
Sticking to that convention, I was able to come up with a script that generates a summary of my utilities. It also filters by tag and flags the undocumented ones.
Lots of poor Bash gibberish
#!/bin/bash
# meta
# Lists my DIY Bash scripts

nodescr=""
tags=""
# my scripts have no extension, so anything with a dot is skipped
files=($(ls ~/myscripts/. | grep -v '\.'))

for f in ${files[*]}; do
    filename=$f
    descr=""
    skip="no"
    linecount=0

    # walk the leading comment block: line 2 holds the tags, the rest is the description
    while IFS= read -r line; do
        if [[ "$line" =~ ^#.*$ ]]; then
            let "linecount+=1"
            if ((linecount == 2)); then
                # filter by tag if one was passed as $1
                if ! [ -z "$1" ] || [ "$1" == "tags" ]; then
                    if [[ $line != *"$1"* ]]; then
                        skip="yes"
                    fi
                fi
                tags="$tags ${line:1}"
            fi
            if ((linecount > 2)); then
                descr+="${line:1}"
            fi
        else
            break
        fi
    done < ~/myscripts/"$filename"   # read by full path so this works from any directory

    if [ -z "$descr" ]; then
        nodescr="$nodescr $filename"
    else
        dd="$(printf '%-20s' $filename):"
        if [[ "$1" != "tags" ]]; then
            if [[ "$skip" == "no" ]]; then
                echo "$dd#$descr" | awk -f table.awk
                printf '\n'
            fi
        fi
    fi
done

echo $tags | xargs -n1 | sort -u | xargs
echo Missing description: $nodescr
Output looks really nice, though:
br34kp0int@lobsterblood~/» ./scriptz blog
blogmp3 : Converts the files from Repulsive
Records tech software demos to MP3
and copies them to blog for
educational purposes
blogthumbs : Generates thumbnails for blog
galleries
day2day : Generates a blog entry with current
date and some random title