RFC: What do you need for better control surface integration?

Hey Guys,

As you may have seen I’ve been experimenting with a simple macro keypad as a controller for Cantabile (see here). I’ve got it kind of working, however…

A more interesting outcome is I’ve had an idea for custom control surface integrations. I’ve wanted something like this for a long time and while bindings can cover some of this, for more complex and dynamic configurations you can’t beat scripting.

So, I’ve started writing a NodeJS library that makes this easier. The basic idea is you define (in code) a set of “layers”. Each layer has mappings from keys, MIDI events, and Cantabile binding sources to MIDI targets, Cantabile bindings, etc…

You can switch between layers to change the function of controller buttons, and you can run code when layers switch (to update LED feedback indicators, for example).
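To make the layers idea concrete, here’s a minimal sketch of how it might look in code. Everything here is hypothetical - `defineLayer`, `switchLayer` and `dispatch` are illustrative names, not the actual library API:

```javascript
// Minimal sketch of the "layers" concept. All names are illustrative,
// not the real library API.

const layers = {};
let activeLayer = null;

// Define a layer: a map from input events to handler functions,
// plus an optional hook that runs when the layer becomes active.
function defineLayer(name, mappings, onEnter) {
  layers[name] = { mappings, onEnter };
}

// Switch the active layer, running its entry hook
// (e.g. to update LED feedback on the controller).
function switchLayer(name) {
  activeLayer = layers[name];
  if (activeLayer && activeLayer.onEnter) activeLayer.onEnter();
}

// Dispatch an incoming event (key press, MIDI CC, etc.)
// to whatever the active layer maps it to.
function dispatch(event) {
  const handler = activeLayer && activeLayer.mappings[event];
  if (handler) handler();
}

// Example: two layers re-purposing the same physical button.
defineLayer('transport', { btn1: () => console.log('play/stop') });
defineLayer('mixer', { btn1: () => console.log('mute channel 1') });

switchLayer('transport');
dispatch('btn1'); // logs "play/stop"
switchLayer('mixer');
dispatch('btn1'); // logs "mute channel 1"
```

The same physical button does different things depending on the active layer, and the `onEnter` hook is where LED updates and similar feedback would go.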

It’s also open to further extension - eg: you could add support for StreamDeck, custom keyboard controllers, advanced MIDI integrations (eg: controlling the LEDs/LCD displays on MIDI controllers) and anything else that’s controllable from JavaScript. You could even build a web interface over it if you wanted to.

Right now these scripts need to be launched manually, but if it works out I could add support to Cantabile for automatically starting/stopping them in the background.

But I’m curious:

  • Is it worth it and would it be useful?
  • In what ways could the integration between Cantabile and your control surfaces and keyboards be improved? (eg: better display of parameter values, other text feedback, LED status indicators, switching of knob/slider functions, easier mapping between controllers and settings etc…)
  • What devices are you using that could be better integrated with Cantabile?
  • Is scripting too technical? (I’m trying to make it as simple as possible).

What do you think?

I use an Arturia KeyLab 61.
It works extremely well sending data out, but
I can’t get it to illuminate the LEDs under its buttons to indicate status in Cantabile. That would be so useful.

Interesting idea - I picked up an https://electra.one/ Mini just to play around with. That has a concept of “pages” that has similarities to this. Integration with Cantabile has been straightforward, and Lua scripting very easy to pick up.

When you say “scripting” I’m personally more interested in resurrecting the discussion as a general feature. Once that’s done it could be used to drive this layers concept?

As a few of us have raised, it would be great to have some kind of built-in mixer (the mixer rack is great, but not integrated). Could this lead to a concept of default layers that are represented visually in Cantabile, on hardware controllers, and virtually (on tablets)?

I think it’s a good idea. Control of plugin parameters and a Cantabile remote for the laptop sound useful.

I use an Arturia KeyLab 49 MkII for most things, including as a control surface for Cubase, courtesy of a superb script written by m.c (on the Steinberg forums) known as ‘Arturia_KeyLab Essential mk3.midiremote’. Would love something as comprehensive as the MIDI Remote for Cantabile.

As I’ve mentioned a few times, if we could have basic (e.g. transport) MCU compatibility built into Cantabile it would open up many control options already out there.

I do not think it’s something I would use myself, as I have my setup exactly as I want it in terms of controls and feedback, but it sounds like something that would give Cantabile another USP for those who want it.

I’d welcome any mechanism that helps translating from (different) physical devices to logical, abstracted Cantabile inputs.

Currently, I use a whole range of bundled bindings in my background rack to abstract faders and buttons on various controllers and keyboards to map against standardized virtual input ports in my Cantabile setup.

Currently, this comes at the cost of one audio processing cycle, since anything mapped in my background rack needs to be “piped back” into my system by way of loopback ports. So I’m not using this for time-critical inputs (keys, wheels, pedals), only for controls like zone volumes etc where one latency cycle doesn’t hurt that much.

My current hope is to be able to replace this with the hopefully-to-come input background rack which sits “upstream” of the actual song processing, within the audio cycle. But having a broader layer with scripting capability, combining MIDI, keyboard, OSC, etc input and also respective feedback to the input devices would probably be quite a bit more powerful and remove some of the acrobatics I’m currently going through using only bindings.

The key requirement for making this scripting layer replace the need for an “input rack” would be to not incur any additional input or output latency, so it would need to run within the audio processing cycle. Not sure if that is doable, or safe for live operations, especially with the risks inherent in user-generated scripts…

If the scripting layer is kept outside the audio cycle, it wouldn’t eliminate the need for an “input rack”, but it could be extremely useful for all kinds of more “administrative”, less time-critical automation - state switching, volume control, etc.

Given that the nerd-percentage of the Cantabile community is pretty high, I’d assume that there’d be a number of us just jumping at the chance to work their scripting magic; others could either benefit from the output of the nerd-herd (script repository, anyone?) or safely ignore the feature altogether…

Just my 0.02 EUR…

Hi @Torsten,

This scripting interface (at least initially) is definitely for non-time-critical stuff since it will run externally to Cantabile and connect via the network interface. You wouldn’t want to put notes/aftertouch/pitchbend etc through this.

But… I’m also thinking of a way to programmatically create transient bindings. These would work like any other binding, but instead of appearing in the UI (except maybe for diagnostic purposes), they’re created by the script and run in-proc in Cantabile (and on the audio thread, if supported by the binding type). That way the script can create the mapping but not be involved once it’s set up.

To really get the audio cycle latency down, I want to push some more of the bindings down into the audio engine. eg: MIDI ↔ plugin parameters/gain/mix and maybe transport triggers.

Finally, regarding “standardizing input”, I’m currently designing (not yet implementing) two things that might help with this: a mixer panel, where objects can be placed into certain slots, and parameter sets, for putting plugin parameters etc. into similar slots. The idea here is to provide a way to declare “these are the things I want to control”, and control surface scripts then do their best to make them controllable.

Am I on the right track?

Brad

The biggest problem with this is Cantabile isn’t “track based” whereas MCU is. What does MCU fader #1 actually map to? That’s why I’m looking to add a mixer panel with objects mapped to mixer slots - it provides the flattening and surfacing of the things you want to control.

That said, MCU is mostly just a predefined set of MIDI controllers - these can already be mapped in Cantabile.

What’s missing is the “it just works”.

Or… have I missed your point?

I believe the ability to control the LEDs has been reverse engineered and you just need to send the correct MIDI messages to the device. Something a control surface script should be able to handle.
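As an illustration of what such a script might do: many controllers switch button LEDs on receipt of an ordinary MIDI message. The actual status/controller/value bytes are entirely device-specific (the KeyLab’s are reverse engineered, so check community findings for the real numbers) - the values below are placeholders:

```javascript
// Build a 3-byte MIDI message of the kind many controllers use for
// LED feedback: a Control Change (0xB0 + channel) where the controller
// number identifies the button and the value selects the LED state.
// NOTE: controller numbers and values here are placeholders - the real
// ones depend on the specific device's (reverse-engineered) protocol.
function ledMessage(channel, controller, on) {
  return [0xB0 | (channel & 0x0F), controller & 0x7F, on ? 0x7F : 0x00];
}

console.log(ledMessage(0, 22, true)); // [ 176, 22, 127 ]
```

From Node, a message like this could be sent to the device’s MIDI port with a library such as the `midi` npm package (`output.sendMessage([...])`).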

I’ve not heard of this before - that’s a pretty cool looking device.


I use the Presonus Atom SQ (35 velocity-sensitive buttons, 8 endless encoders, midi-programmable digital display, midi-programmable button colors). Works fantastic with Cantabile (see my documentation of its midi protocol here), especially once Brad buffed MIDI encoder filters.

I’ve only hit two snags with it:

  1. I can’t assign a button to turn external clock sync on/off since Cantabile doesn’t support a binding target for that. (Maybe if I used an external app like AutoHotkey to send the necessary UI event to Cantabile?)
  2. I can’t assign controller curves for the endless encoders since Cantabile doesn’t support controller curves for Sign Bit encoders. (I worked around this by writing a special plugin for it.)

Simply transport controls - or any other ‘global’ MCU functions. No need for the track-by-track controls to be implemented.

But…

There’s no reason why designated MCU controls (e.g. track faders) couldn’t be exposed as binding sources. Then a user could allocate any fader to any element they choose.

And if we do ever get a mixer I’d hope it would be MCU-ready too.


A mixer with aux sends to an FX rack would be interesting.
I use my KeyLab 61 as a mixer by mapping its sliders’ CC output to control various racks’ output levels.
During the process of setting this up I had to make measured choices about how I mapped each slider’s targets.
All my instruments are in racks and (here’s the root of my caveat) most of my synths are duplicated/triplicated to allow for fast switching and/or sound stacking.
An inbuilt mixer could give every module a software slider (most controller keyboards only have nine physical sliders), but per-instrument software sliders would be a nightmare for me on stage, as it’s too much to manage on the fly.

So, if you design a mixer for Cantabile, can you allow for flexibility when mapping a slider to a VST, so that we could map it to a rack and/or individual instruments?

Not sure I fully understand where you are heading with this. Maybe to align thinking, here’s my current approach:

I have defined some “standard controls” for my setup:

  • Keys, Guitar Master Volume, Monitor Volume, Backing Tracks volume
  • Volume levels within a song (Main instruments, solo instruments,)
  • Layer and effects levels (Main / Solo reverb, Main / Solo delay, String layer volume, String layer EQ)

Also some standard buttons

  • Next / previous / first state
  • LivePrompter control (next, previous song, play/pause/reset)
  • Panic Button
  • Octave selection
  • Backing Track playback control

All these are assigned to CC messages, and my racks (both racks within the song and the background rack) then react to these CCs coming in on a virtual MIDI port called “BG rack”. Racks within a song mainly react to the “level” type messages; the background rack processes a lot of the “button” type controls, e.g. for “next state”.

My mapping of any external control surface then essentially means:

  • create a mapping rack for this surface within the background rack
  • the mapping rack takes MIDI input from the device’s MIDI port and converts it via bindings into CC messages to the loopback port of “BG rack”, thus feeding MIDI to the “BG rack” virtual port

If I had an abstract Mixer / “Control Surface” panel in Cantabile, I would set it up with faders and buttons that correspond to my set of standard CCs as illustrated above. If I could now use these faders and buttons directly in bindings in my racks, this would make life a bit easier - simply assign rack output volume to the “Keys Volume” binding source from the Mixer panel; bind Media Player 1 playback to the “Playback Start” button from the Mixer panel, etc., instead of using cryptic CC messages I always have to look up in my master table :wink:

Control surface scripts could then map MIDI / keyboard / OSC /… input to (and from, for e.g. LED feedback) these standard controls on the Mixer panel. I’d have individual mapping scripts for every external control device.

So it would be very useful to have this kind of standardized “Control Surface” panel, having faders, knobs and buttons.

Just thinking out loud…

Hi @Torsten,

Firstly, just to set expectations, this is all forward planning. I’m in the middle of a non-trivial reworking of some core engine internals which needs to be finished and stabilized before I move onto this.

I think we’re thinking in a similar vein, except I’m also trying to make it so that the mapping from hardware is handled more automatically by a script.

Let me lay out what I’m thinking - I think it does line up with what you’re describing.

I’ve struggled to come up with a system for this in the past because I was trying to do everything through the one mechanism. Things became a lot clearer for me when I decided to split this into two distinct areas - mixers and parameters.

Mixer Panel

My thoughts here are to introduce a typical mixer panel GUI - a horizontal stack of N vertical slots, each with a fader, knobs and buttons. Each would be specific to the kind of object it’s connected to (plugin, rack, etc…)

To add something to the mixer panel, you right click on an object, choose “Add to Mixer” and then choose a mixer slot number.

This both selects and flattens the mixer settings you want direct control over and there’s no bindings involved yet.

On top of this, there would be bindings so you can map controllers to Mixer Slot N Fader, Mixer Slot N Knob 1, Mixer Slot N Button 1 etc… and you could create a whole stack of bindings to control it.

However, an integration script could automatically map all these mixer slots to a known piece of hardware - this could be a high-end MIDI control surface, a Stream Deck+ or a simple macro keypad with a couple of knobs. It’s up to the script to decide how these are mapped, provide paging across the entire set of mixer slots etc…

With an appropriate integration script installed, all you need to do is choose which things go in which mixer slots, and you can control them from your control surface. No bindings - it just works.

Parameters

The approach with parameters is similar but a bit more complex, and also not completely clear in my mind yet. The complexity here is because parameter editing on controllers is far less standardized than mixer panels.

For example:

  • I have a Novation SL MkII and the parameter editing area is eight strips, each with LCD, 2x lightable buttons, 1x rotary encoder with led ring, 1x pot and 1x velocity sensitive drum pad.
  • A StreamDeck+ can be thought of as 4 strips each with 2x display buttons, 1x display area and 1 rotary encoder.

What I want to do is provide similar automatic mapping by the integration script as what I’ve described for mixers.

Best idea so far is to allow songs/racks to define “parameter strips”. A parameter strip would be a pre-canned arrangement of controls - perhaps 2x text, 3x knobs, 4x buttons.

Each of these parameter controls could be used as binding source/targets, or I’ll probably also provide some more direct 2-way mapping to plugin parameters and other common settings to avoid cluttering the bindings panel.

By grouping parameters into strips it gives a bit more organization than individual parameters. eg: you can group parameters for a single plugin into one strip, perhaps group the parameters for a single oscillator etc… This makes them easier to position and move around than individual parameters.

From here, it’s a similar approach to the mixer slots - the user can assign a parameter strip a slot index and it appears in the parameter panel at that position. You can create bindings to/from these slots - or, better, use an integration script that automatically maps them to hardware.

What I haven’t worked out is how to better handle controls that don’t fit a parameter-type grid. For these I’m wondering about named (rather than indexed) parameter slots - not sure.

Again, the goal is to select what you want controllable from hardware and have the integration script make it just work - it maps what it can to the hardware, provides paging over the full set of parameter strips etc…
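To make the strip idea a little more concrete, here’s one possible shape an integration script might work with. Everything below is purely hypothetical - none of these names or fields are an actual Cantabile API, they just illustrate “declare strips with fixed slots, let the script page them onto hardware”:

```javascript
// Hypothetical data shape for a "parameter strip": a pre-canned
// arrangement of controls (texts, knobs, buttons) with a fixed slot
// index. None of this is a real Cantabile API.
const filterStrip = {
  name: 'Filter',
  slot: 2,                  // fixed slot index, so it stays put across songs
  texts: ['Cutoff', 'Res'],
  knobs: [
    { label: 'Cutoff', target: 'plugin:Synth1/param:cutoff' },
    { label: 'Res',    target: 'plugin:Synth1/param:resonance' },
  ],
  buttons: [
    { label: 'Bypass', target: 'plugin:Synth1/bypass' },
  ],
};

// An integration script could then sort the declared strips by slot
// and bank them onto however many physical strips the hardware has,
// paging over the rest.
function assignToHardware(strips, hardwareStripCount) {
  const pages = [];
  const sorted = [...strips].sort((a, b) => a.slot - b.slot);
  for (let i = 0; i < sorted.length; i += hardwareStripCount) {
    pages.push(sorted.slice(i, i + hardwareStripCount));
  }
  return pages; // one page per bank of hardware strips
}
```

With four physical strips on, say, a Stream Deck+, `assignToHardware([filterStrip], 4)` would yield a single page; more strips than hardware slots would spill onto further pages.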

Slot Indices

For both mixer panels and parameter strips I said you assign them a slot index to control where they appear.

Originally I was going to just have them stack left to right in the order of the song/racks, but that means things can shift around from song to song. By setting a particular slot number, that set of controls will always be in the same place.

This however makes it possible to have conflicts (two objects assigned to the same mixer slot). I think the best approach here is to simply error out and leave it to the user to resolve.

I could also have a slot index option of “Auto” where it just uses the next available slot, but I don’t think I like this idea.


Anyway, as mentioned I haven’t started coding any of this and I too am just thinking out loud… and very open to feedback and suggestions.

I don’t think I’m following here. Perhaps my understanding of MCU is wrong. Can’t you already just MIDI-learn bind these to whatever you want? Aren’t MCU faders just MIDI controllers - and MIDI controllers can be used as binding sources?

Depends what you mean by MCU-ready. If you mean available as a bindable source/target, then yes. If you mean it automatically works with an MCU device, then also yes - if there’s an integration script for it (which I’m sure I or someone else could put together quite easily).

This is more of a routing capability than a mixer capability.

Currently I’m thinking the mixer panel would support mixer slots (ie: faders) for: plugins, media players, racks, global levels, environment ports and audio routes. ie: any of these objects you could right click, choose Add to Mixer and they would appear in the slot you choose.

For sends - I think you just need a mixer slot for the audio route from sender to the fx rack and/or perhaps a/b send balance.

Maybe, on top of this I could add a “custom” mixer slot that could be bound to anything else.

Note that by “plugins” I mean the top-level Cantabile provided plugin settings (gain, bypass, solo etc…) - not plugin parameters.

I missed this post - very nice work!

Currently the only way to control this is via states. You could have one state where it’s on and one where it’s off. But you’re right - a binding to toggle this would be handy. Logged it.

Did we discuss this already? It sounds familiar, but I can’t remember the details. I think the old binding system pretended to support this, but it didn’t work well, so I removed it with Bindings4.


Interesting—I didn’t realize transport source was state-controllable. I currently have 16 song states, so I guess I’d need 32 with that approach. A little scary but not impossible. Thanks for the tip!

Yes, back in 2022. You have a good memory! I implemented the idea suggested there in my CurveVST plugin (free download if anyone wants it), and it does work.
