
Abstraction: Introduction

by jonskeet
from Jon Skeet's coding blog

Finally, several posts in, I'm actually going to start talking about abstraction using DigiMixer as the core example. When I started writing DigiMixer (almost exactly two years ago) I didn't expect to take so long to get to this point. Even now, I'm not expecting this post to cover everything about "abstraction", or even all the aspects of abstraction I want to cover with DigiMixer. I'm hoping this post will be a good starting point for anyone who isn't really comfortable with the term "abstraction", explaining it in a relatable way with DigiMixer as a genuine example (as opposed to the somewhat anaemic examples which tend to be used, which often give an impression of simplicity which doesn't match the real world).

For this post in particular, you might want to fetch the source code - clone https://github.com/jskeet/DemoCode.git and open DigiMixer/DigiMixer.sln.

Project layout

At a high level, the DigiMixer solution contains four different kinds of projects:

  • A core abstraction of a digital mixer - that's the main topic of these blog posts
  • Several implementations of that abstraction, for different physical mixers
  • Business logic built on top of the abstraction to make it easier to build apps
  • Actual applications (there's one public DigiMixer WPF app, but I have other applications in private repositories: one that's very similar to the DigiMixer WPF app, one that's effectively embedded within another app, and a console application designed to run on a Raspberry Pi with an X-Touch Mini plugged in)

The core abstraction consists of a few interfaces (IMixerApi, IMixerReceiver, IFaderScale), a few structs (MeterLevel, FaderLevel, ChannelId) and a couple of classes (MixerInfo, MixerChannelConfiguration). Apologies for the naming not being great - particularly IMixerApi. (Maybe I should have a whole section on naming, but I'm not sure that I'd be able to say much beyond "naming is hard".)

The core project contains existing implementations of IMixerReceiver and IFaderScale, so almost all the work in making a new digital mixer work with DigiMixer is in implementing IMixerApi.

Two sides of abstractions: implementation and consumption

Already, just in that list of kinds of project, there's an aspect of abstraction which took me a long time to appreciate in design terms: there's an asymmetry between designing for implementation and designing for consumption.

When writing code which doesn't need to fit into any particular interface, I try to anticipate what people using the class want it to look like. What makes it convenient to work with? What operations are always going to be called one after another, and could be simplified into just a single method call? What expectations/requirements are there likely to be in terms of threading, immutability, asynchrony? What expectations does my code have of the calling code, and what promises does it make in return?

It's much easier to answer these questions when the primary user of the code is "more of your own code". It's even easier if it's internal code, so you don't even need to get the answers "right" first time - you can change the shape of the code later. But even when you're not writing the calling code, it's still relatively simple. You get to define the contract, and then implement it. If the ideal contract turns out to be too hard to implement, you can sacrifice some usability for implementation simplicity. At the time when you publish the class (whatever that means in your particular situation) you know how feasible it is to implement the contract, because you've already done it.

Designing interfaces is much harder, because you're effectively designing the contract for both the interface implementations and the code calling that implementation. You may not know (or at least not know yet) how hard it is to implement the interface for every implementation that will exist, and you may not know how code will want to call the interface. Even if you have a crystal ball and can anticipate all the requirements, they may well be contradictory, in multiple ways. Different implementations may find different design choices harder or easier; different uses of the interface may likewise favour different approaches - and even if neither of those is the case, the "simplest to use" design may well not be the "simplest to implement" design.

Sometimes this can be addressed using abstract classes: the concrete methods in the abstract class can perform common logic which uses protected abstract methods. The implementer's view is "these are the abstract methods I need to override", while the consumer's view is "these are the concrete methods I can call". (Of course, you can make some of the abstract methods public for cases when the ideal consumer and implementer design coincide.)
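As a toy illustration of that split (this is hypothetical code, not anything from DigiMixer), a consumer only ever calls the concrete Connect method, while an implementer only ever supplies the protected OpenTransport step:

```csharp
using System;

// Hypothetical sketch: common logic in a concrete method, with the
// mixer-specific step pushed into a protected abstract method.
public abstract class MixerConnectionBase
{
    // Consumer-facing: the shared handshake logic lives here.
    public string Connect(string host) =>
        $"handshake -> {OpenTransport(host)} -> ready";

    // Implementer-facing: each mixer supplies its own transport step.
    protected abstract string OpenTransport(string host);
}

public sealed class FakeTcpConnection : MixerConnectionBase
{
    protected override string OpenTransport(string host) => $"tcp:{host}";
}

public static class TemplateDemo
{
    public static void Main() =>
        Console.WriteLine(new FakeTcpConnection().Connect("192.168.1.50"));
}
```

The consumer never sees OpenTransport at all; the implementer never has to repeat the handshake logic.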

Layering in DigiMixer

The abstract class approach isn't the one I took with DigiMixer. Instead, I effectively separated the code into a "core" project which implementers refer to, with relatively low-level concepts, and a higher level project which builds on top of that and is more consumer-friendly. So while mixer implementations implement DigiMixer.Core.IMixerApi, consumers will use the DigiMixer.Mixer class, constructed using a factory method:

public static async Task<Mixer> Create(ILogger logger, Func<IMixerApi> apiFactory, ConnectionTiming? timing = null)

The Mixer class handles reconnections, retaining the status of audio channels etc. As it happens, applications will often use the even-higher-level abstraction provided by DigiMixer.AppCore.DigiMixerViewModel. It's not unusual to have multiple levels of abstraction like this, although it's worth bearing in mind that it's a balancing act - the more layers that are involved, the harder it can be to understand and debug through the code. When the role of each layer is really clear (so it's obvious where each particular bit of logic should live) then the separation can be hugely beneficial. Of course, in real life it's often not obvious where logic lives. The separation of layers in DigiMixer has taken a while to stabilise - along with everything else in the project. I'm not going to argue that it's ideal, but it seems to be "good enough" at the moment.
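It's worth noticing that Create takes a Func<IMixerApi> rather than an IMixerApi instance. A toy sketch (hypothetical types, not DigiMixer code) of why a factory helps when reconnections are involved: each call produces a genuinely fresh object, so nothing from a dead connection can leak through.

```csharp
using System;

// Hypothetical connection type: each instance gets a unique identity.
public interface IConnectionLike { Guid Id { get; } }

public sealed class FreshConnection : IConnectionLike
{
    public Guid Id { get; } = Guid.NewGuid();
}

public static class FactoryDemo
{
    public static void Main()
    {
        Func<IConnectionLike> factory = () => new FreshConnection();
        var first = factory();
        var second = factory(); // a reconnect would call the factory again
        Console.WriteLine(first.Id != second.Id); // fresh instance each time
    }
}
```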

While I've personally found it useful to put different layers in different projects, everything would still work if I had far fewer projects. (Currently I have about three projects per mixer as well, leading to a pretty large solution.) One benefit of separating by project is that I can easily see that my mixer implementations aren't breaking the intended layer boundaries: they only depend on DigiMixer.Core, not DigiMixer. I have a similar split in most of the mixer implementation code as well, with a "core" project containing low-level primitives and networking, then a higher-level project which has more understanding of the specific audio concepts. (Sometimes that boundary is really fuzzy - I've spent quite a lot of time moving things back and forth.)

What's in IMixerApi and IMixerReceiver?

With that background in place, let's take a look at IMixerApi and the related interface, IMixerReceiver. My intention isn't to go into the detail of any of the code at the moment - it's just to get a sense of what's included and what isn't. Here are the declarations of IMixerApi and IMixerReceiver, without any comments. (There are comments in the real code, of course.)

public interface IMixerApi : IDisposable
{
    void RegisterReceiver(IMixerReceiver receiver);
    Task Connect(CancellationToken cancellationToken);
    Task<MixerChannelConfiguration> DetectConfiguration(CancellationToken cancellationToken);
    Task RequestAllData(IReadOnlyList<ChannelId> channelIds);
    Task SetFaderLevel(ChannelId inputId, ChannelId outputId, FaderLevel level);
    Task SetFaderLevel(ChannelId outputId, FaderLevel level);
    Task SetMuted(ChannelId channelId, bool muted);
    Task SendKeepAlive();
    Task<bool> CheckConnection(CancellationToken cancellationToken);
    TimeSpan KeepAliveInterval { get; }
    IFaderScale FaderScale { get; }
}

public interface IMixerReceiver
{
    void ReceiveFaderLevel(ChannelId inputId, ChannelId outputId, FaderLevel level);
    void ReceiveFaderLevel(ChannelId outputId, FaderLevel level);
    void ReceiveMeterLevels((ChannelId channelId, MeterLevel level)[] levels);
    void ReceiveChannelName(ChannelId channelId, string? name);
    void ReceiveMuteStatus(ChannelId channelId, bool muted);
    void ReceiveMixerInfo(MixerInfo info);
}

First, let's consider what's not in here: there's nothing to say how to connect to the mixer - no hostname, no port, no TCP/UDP decision etc. That's all specific to the mixer - some mixers need multiple ports, some only need one etc. The expectation is that all of that information is provided on construction, leaving the Connect method to actually establish the connection.

Next, notice that some aspects of IMixerApi are only of interest to the next level of abstraction up: Connect, SendKeepAlive, CheckConnection, and KeepAliveInterval. The Mixer class uses those to maintain the mixer connection, creating new instances of the IMixerApi to reconnect if necessary. (Any given instance of an IMixerApi is only connected once. This makes it easier to avoid worrying about stale data from a previous connection etc.) The Mixer is able to report to the application it's part of whether it is currently connected or not, but the application doesn't need to perform any keepalive etc.
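A simplified sketch of the kind of keepalive loop that middle layer might run (this is illustrative code under my own assumptions, not Mixer's actual implementation): ping at the interval the implementation requests, and create a fresh instance via the factory when the connection check fails.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Cut-down stand-in for the keepalive-related members of IMixerApi.
public interface IKeepAliveApi
{
    TimeSpan KeepAliveInterval { get; }
    Task SendKeepAlive();
    Task<bool> CheckConnection(CancellationToken cancellationToken);
}

public static class KeepAliveLoop
{
    public static async Task Run(Func<IKeepAliveApi> apiFactory, CancellationToken token)
    {
        var api = apiFactory();
        while (!token.IsCancellationRequested)
        {
            await Task.Delay(api.KeepAliveInterval, token);
            await api.SendKeepAlive();
            if (!await api.CheckConnection(token))
            {
                api = apiFactory(); // reconnect with a brand-new instance
            }
        }
    }
}

// Fake implementation that stops the loop after three pings.
public sealed class FakeApi : IKeepAliveApi
{
    private readonly CancellationTokenSource cts;
    public int Pings { get; private set; }
    public FakeApi(CancellationTokenSource cts) => this.cts = cts;
    public TimeSpan KeepAliveInterval => TimeSpan.FromMilliseconds(1);
    public Task SendKeepAlive()
    {
        if (++Pings >= 3) cts.Cancel();
        return Task.CompletedTask;
    }
    public Task<bool> CheckConnection(CancellationToken t) => Task.FromResult(true);
}

public static class KeepAliveDemo
{
    public static async Task Main()
    {
        var cts = new CancellationTokenSource();
        var fake = new FakeApi(cts);
        await KeepAliveLoop.Run(() => fake, cts.Token);
        Console.WriteLine(fake.Pings); // 3
    }
}
```

Because this lives in one place, every application gets connection maintenance for free, and no IMixerApi implementation has to think about retry policy.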

The remaining methods and properties are all of more interest to the application, because they're about audio data. They're never called directly by layers above Mixer, because the Mixer class maintains things like audio channel state itself - but they're fundamentally more closely related to the domain of the application. In particular, the mixer's channel representations proxy calls to SetMuted and SetFaderLevel to the IMixerApi almost directly (except for handling things like stereo channels).

I should explain the purpose of IMixerReceiver: it's effectively acting as a big event handler. I could have put lots of events on IMixerApi, e.g. MuteStatusChanged, FaderLevelChanged etc... but anything wanting to receive data for some of those aspects usually wants to listen to all of them, so it made sense to me to put them all in one interface. Mixer implements this interface in a private nested class, and registers an instance of that class with each instance of the IMixerApi that it creates.
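The private-nested-class trick is worth seeing in miniature. A toy sketch (hypothetical types, not Mixer's actual code): the outer class hands out a receiver whose methods funnel notifications back into the outer class's state, without the receiver methods appearing on the outer class's own public surface.

```csharp
using System;

// Cut-down stand-in for IMixerReceiver.
public interface IReceiver
{
    void ReceiveMuteStatus(int channel, bool muted);
}

public sealed class MiniMixer
{
    private readonly MixerReceiver receiver;
    public bool? LastMute { get; private set; }

    public MiniMixer() => receiver = new MixerReceiver(this);

    // This is what gets registered with each new API instance.
    public IReceiver Receiver => receiver;

    // Private nested class: consumers of MiniMixer never see these
    // methods on MiniMixer itself, but incoming data still updates it.
    private sealed class MixerReceiver : IReceiver
    {
        private readonly MiniMixer mixer;
        internal MixerReceiver(MiniMixer mixer) => this.mixer = mixer;
        public void ReceiveMuteStatus(int channel, bool muted) =>
            mixer.LastMute = muted;
    }
}

public static class ReceiverDemo
{
    public static void Main()
    {
        var mixer = new MiniMixer();
        mixer.Receiver.ReceiveMuteStatus(1, true);
        Console.WriteLine(mixer.LastMute); // True
    }
}
```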

The DetectConfiguration and RequestAllData methods are effectively part of setting the initial state of a Mixer, so that applications can use the audio channel abstractions it exposes right from the start. The MixerChannelConfiguration is just a list of channel IDs for inputs, another one for outputs, and a list of "stereo pairs" (where a pair of inputs or a pair of outputs are tied together to act in stereo, typically controlled together in terms of fader levels and muting).
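That description can be sketched as a data type. This is a hypothetical shape with hypothetical names (ChannelConfig, IsPaired) and integer channel IDs for brevity - the real type uses ChannelId and differs in detail:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of the configuration described above: input IDs,
// output IDs, and stereo pairs tying two channels together.
public sealed record ChannelConfig(
    IReadOnlyList<int> Inputs,
    IReadOnlyList<int> Outputs,
    IReadOnlyList<(int Left, int Right)> StereoPairs)
{
    // Is this channel half of a stereo pair?
    public bool IsPaired(int channelId) =>
        StereoPairs.Any(p => p.Left == channelId || p.Right == channelId);
}

public static class ConfigDemo
{
    public static void Main()
    {
        var config = new ChannelConfig(
            Inputs: new[] { 1, 2, 3, 4 },
            Outputs: new[] { 101, 102 },
            StereoPairs: new[] { (1, 2) }); // inputs 1+2 act as one stereo channel
        Console.WriteLine(config.IsPaired(2)); // True
        Console.WriteLine(config.IsPaired(3)); // False
    }
}
```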

The only other interesting member is FaderScale: that's used to allow the application to interpret FaderLevel values - something I'll talk about in a whole other blog post.

So what's the abstraction?

If you were waiting for some inspiring artifact of elegant design, I'm afraid I have to disappoint you. There will be a lot more posts about some of the detailed aspects of the design (and in particular compromises that I've had to make), but you've seen the basics of the abstraction now. What I've found interesting in designing DigiMixer is thinking about three aspects:

Firstly, there's a lot of information about digital mixers that's not in the abstraction. We have no clue which input channels come from physical XLR sockets, which might be over Dante, etc. There's no representation at all of any FX plugins that the mixer might expose. In a different abstraction - one that attempted to represent the mixers with greater fidelity - all of that would have to be there. That would add a great deal of complexity. The most critical decision about an abstraction is what you leave out. What do all your implementations have in common that the consumers of the abstraction will need to access in some form or other?

Next, in this specific case, there are various lifecycle-related methods in the abstraction. This could have been delegated to each implementation, but the steps involved in the lifecycle are common enough that it made more sense to put them in the single Mixer implementation, rather than either in each IMixerApi implementation or in each of the applications.

So what is in the abstraction, as far as applications are concerned? There's a small amount of information about the mixer (in MixerInfo - things like the name, model, firmware version) and the rest is all about input and output channels. Each channel has information about:

  • Its name
  • Its fader level (and for input channels, this is "one fader level per output channel"). This can be controlled by the application.
  • Whether it's muted or not. This can be controlled by the application.
  • Meter information (i.e. current input and output levels)
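A hypothetical consumer-side view of an input channel, pulling those four bullet points together (my own illustrative names and types, not DigiMixer's actual model - note the per-output dictionary of fader levels):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: the information an application sees per input channel.
public sealed class InputChannelView
{
    public string Name { get; set; } = "";
    public bool Muted { get; set; }                  // controllable by the app
    public Dictionary<string, double> FaderLevels { get; } = new();
                                                     // one level per output channel
    public double MeterLevel { get; set; }           // read-only in practice
}

public static class ChannelDemo
{
    public static void Main()
    {
        var vocal = new InputChannelView { Name = "Vocal 1" };
        vocal.FaderLevels["Main"] = 0.75;  // fader for the main output
        vocal.FaderLevels["Aux 1"] = 0.50; // independent fader for a monitor mix
        Console.WriteLine(vocal.FaderLevels.Count); // 2
    }
}
```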

Interestingly, although a lot of the details have changed over the last two years, that core functionality hasn't. This emphasizes the difference between "the abstraction" and "the precise interface definitions used". If you'd asked me two years ago what mixer functionality I wanted to be in the abstraction, I think I'd have given the points above. That's almost certainly due to having worked on a non-abstracted version (targeting only the Behringer X-Air series) for nearly two years before DigiMixer started. Where that approach is feasible, I think it has a lot going for it: do something concrete before trying to generalise. (As an aside, I tend to find that's true with automation as well - I don't tend to automate a task until I've done it so often that it requires no brainpower/judgement at all. At that point, it should be easy to codify the steps... whereas if I'm still saying "Well, sometimes I do X, and sometimes I do Y" then I don't feel ready to automate unless I can pin down the criteria for choosing the X or Y path really clearly.)

What's next?

To some extent, this post has been the "happy path" of abstractions. I've tried to give a little bit of insight into the tensions between designing for consumers of the abstraction and designing for implementers, but there have been no particularly painful choices yet.

I expect most of the remaining posts to be about trickier aspects that I've really struggled with. In almost all cases, I suspect that when you read the post you may disagree with some of my choices - and that's fine. (I may not even disagree with your disagreement.) A lot of the decisions we make have a number of trade-offs, both in terms of the purely technical nature, and non-technical constraints (such as how much time I've got available to refine a design from "good enough" to "close to ideal"). I'm going to try to be blunt and honest about these, including talking about the constraints where I can still remember them. My hope is that in doing so, you'll be relieved to see that the constraints you have to work under aren't so different from everyone else's. These will still be largely technical posts, mind you.

I'll be digging into bits of the design that I happen to find interesting, but if there are any aspects that you'd particularly like to see explained further, please leave a comment to that effect and I'll see what I can do.
