Background: the old binding system made some guarantees about the invocation order of some bindings, but not all. There were also some bugs in the old implementation where bindings could be invoked in the wrong order - especially after re-ordering or disabling/enabling bindings.
This time I wanted to fix it properly.
The old binding framework uses standard .NET events for dispatching binding source actions. The problem is that .NET events don’t provide a way to control the order in which event listeners are invoked. This meant there was a lot of extra code that tried to sort things out after the fact so bindings fired in the correct order - and it couldn’t deal with every situation.
After trying several different approaches to fixing this, in the end I decided to just ditch .NET events and replace them with a custom event framework that supports a sort order. This of course meant updating every location where an event is declared, sent or received (all ~2,500 of them).
It took a couple of days, but it seems to be working nicely now. A nice side effect is that there’s no longer any need to sort things at event dispatch time, so it should also be more efficient.
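To illustrate the idea (this is just a minimal sketch, not Cantabile’s actual implementation - all names here are made up), an ordered event in C# might look something like this, with handlers kept sorted at subscription time rather than at dispatch time:

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of an "ordered event": each subscriber supplies a sort
// order when it attaches, and handlers are kept sorted at subscription
// time so nothing needs to be sorted when the event fires.
public class OrderedEvent<TArgs>
{
    readonly List<(int Order, Action<TArgs> Handler)> _handlers = new();

    public void Subscribe(int order, Action<TArgs> handler)
    {
        // Insert in sorted position so Fire() can just walk the list.
        int index = _handlers.FindIndex(x => x.Order > order);
        if (index < 0)
            _handlers.Add((order, handler));
        else
            _handlers.Insert(index, (order, handler));
    }

    public void Unsubscribe(Action<TArgs> handler)
    {
        _handlers.RemoveAll(x => x.Handler == handler);
    }

    public void Fire(TArgs args)
    {
        // Snapshot so handlers can subscribe/unsubscribe while firing.
        foreach (var (_, handler) in _handlers.ToArray())
            handler(args);
    }
}
```

A binding source would subscribe with its configured sort order, and firing the event just walks the already-sorted list.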
A small change here. With the old binding system, a couple of binding sources could be configured to be scheduled with “other bindings invoked by the same trigger”.
This is now more universally available and has been renamed to “other bindings invoked by the same event”.
Getting technical now, but for those curious, the “same event” means the same root event when events are nested. So, for example, if you configure a bunch of bindings with this setting, then all of those bindings invoked from a root “song load” event will be scheduled together - even bindings that are triggered by an event fired secondarily to the song load.
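As a rough illustration of how that root-event grouping could work (hypothetical names, not the real code), the dispatcher can track event nesting depth and only flush the deferred bindings when the outermost event finishes:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: track the root event of a nested dispatch so that
// bindings scheduled "with the same event" are grouped and run together,
// even when they're triggered by secondary (nested) events.
public class EventDispatcher
{
    int _depth;                                // nesting depth of event dispatch
    readonly List<Action> _deferred = new();   // bindings scheduled with the root event

    public void Dispatch(Action fireEvent)
    {
        _depth++;
        try
        {
            fireEvent();
        }
        finally
        {
            _depth--;
            if (_depth == 0)
            {
                // Back at the root event: run everything scheduled during
                // this dispatch, including from nested events.
                var pending = _deferred.ToArray();
                _deferred.Clear();
                foreach (var action in pending)
                    action();
            }
        }
    }

    // Called by bindings configured as "scheduled with other bindings
    // invoked by the same event".
    public void ScheduleWithRootEvent(Action bindingInvocation)
    {
        if (_depth > 0)
            _deferred.Add(bindingInvocation);
        else
            bindingInvocation();    // not inside an event: run immediately
    }
}
```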
Some background: in the current bindings implementation most MIDI to MIDI bindings are executed on the audio thread. All other bindings are processed on the UI thread. This works well for the current capabilities, but I want to lay the groundwork for more advanced automation like “binding animations” (ie: progressively updating a binding over time), which should also run on the audio thread.
While designing the new bindings I’ve simplified things by having everything run on the UI thread. Now it’s time to push some of that logic down to the audio thread.
A major piece of this work was to get all the binding mappers running in native code (ie: C++, not C#/.NET). Remember that a binding consists of three main parts - a source binding point, a target binding point and a “mapper” which handles mapping values between the different kinds of binding points. While only some binding points will run on the audio thread, all mappers need to be able to work there.
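For a concrete (if simplified) picture of that three-part structure, here’s a hypothetical C# sketch - the real interfaces will differ:

```csharp
using System;

// Hypothetical sketch of the three parts of a binding. The real classes
// in Cantabile will differ; this just shows the shape.
public interface IBindingPoint
{
    bool SupportsAudioThread { get; }     // can this point run on the audio thread?
    event Action<double> ValueChanged;    // source side: a new value arrived
    void SetValue(double value);          // target side: apply a value
}

public interface IValueMapper
{
    // Maps a source value to a target value (eg: scale a MIDI CC 0..127
    // onto some target range). Mappers must be usable from the audio
    // thread, which is why their logic was ported to native code.
    double Map(double sourceValue);
}

public class Binding
{
    public IBindingPoint Source { get; set; }
    public IBindingPoint Target { get; set; }
    public IValueMapper Mapper { get; set; }

    public void Connect()
    {
        // Whenever the source changes, map the value and apply it to the target.
        Source.ValueChanged += v => Target.SetValue(Mapper.Map(v));
    }
}
```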
I’d already extracted all the mapper code from the old bindings into separate C# mapper objects, but now I’ve ported the logic of all those mappers to native C++ objects and written unit tests:
The next step is to figure out if a binding can run on the audio thread (not all can) and, if so, wire it up to run there.
If you’re wondering about the longer time to run the SwitchToCommand tests in the above screenshot, it’s because that mapper has some time-based functionality (auto-repeat invocation of the target command), so the test needs to pause and check that it fires. That’s the only mapper that has this, and I’ve not figured out yet how that’s going to work on the audio engine side, so that’s another job for today.
Done! The new binding system can now push a binding down to run on the audio thread if both the source and target binding points support it. At the moment, only the MIDI source and MIDI target binding points support this, but the framework is now in place to more easily move whole classes of other bindings to the audio thread too (but that’s for later).
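Continuing the hypothetical sketch from earlier, the decision about where a binding runs could be as simple as checking both ends:

```csharp
// Hypothetical: a binding is pushed down to the audio thread only when
// both its source and target binding points support it (currently just
// the MIDI source and MIDI target points); otherwise it stays on the UI thread.
public enum BindingThread { UI, Audio }

public static class BindingScheduler
{
    public static BindingThread Resolve(Binding binding)
    {
        return binding.Source.SupportsAudioThread && binding.Target.SupportsAudioThread
            ? BindingThread.Audio
            : BindingThread.UI;
    }
}
```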
In case you’re wondering, bindings that run on the audio thread have much more precise timing - in fact, it’s sample accurate. eg: if you delay a binding by n milliseconds it will be delayed by exactly that amount. Bindings on the UI thread aren’t that precise, have a little latency and can also be subject to other UI stalls.
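For example, assuming the usual milliseconds-to-samples conversion, an audio-thread delay reduces to an exact sample offset:

```csharp
using System;

static class SampleTiming
{
    // Convert a binding delay in milliseconds to an exact sample offset.
    // eg: at 48,000 Hz, a 10 ms delay is exactly 480 samples, so the
    // binding fires on precisely that sample regardless of UI activity.
    public static long DelayInSamples(double delayMs, int sampleRate)
        => (long)Math.Round(delayMs * sampleRate / 1000.0);
}
```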
The only MIDI binding point not supported on the audio thread right now is a MIDI to user SysEx binding - the code that processes the SysEx scripts is written in C# and can’t be called from the audio thread without risking audio processing stalls, so those bindings still run on the UI thread. This is the same as the current binding system.
That’s the last major technical piece of work on the new bindings. There’s still a fair bit to go, but I’m pleased to have this one done.
Another task checked off. For this one I’ve made some small improvements too. In the old bindings system, you could set a binding’s routing mode to one of:

- Continue
- Suppress, or
- Block and then Suppress.
That last option let you make sure a song or state load had completed before processing any more incoming MIDI bindings. The idea being that if you’re sending a song/state load followed by some MIDI commands to configure the song, you don’t have to wait an arbitrary period of time before sending the events after the load - you can just send them all at once and the subsequent events will be queued and processed after the load finishes.
Since there are only a couple of cases where this actually makes a difference, I’ve changed things: if you create a binding to a non-delayed song/state load action and set the routing mode to suppress, it will automatically block subsequent events until the load finishes… and I’ve removed that third routing mode option.
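A rough sketch of how that simplified routing could work (hypothetical, not the actual implementation):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: only two routing modes remain, and a suppressing
// binding whose target is a non-delayed song/state load automatically
// blocks later incoming events until the load finishes.
public enum RoutingMode { Continue, Suppress }

public class BindingRouter
{
    bool _loadInProgress;
    readonly Queue<Action> _pending = new();

    // Called for each incoming event that matched a binding.
    public void Process(RoutingMode mode, bool targetIsImmediateLoad,
                        Action invokeBinding, Action routeOnward)
    {
        if (_loadInProgress)
        {
            // A load is still running: queue the whole operation for later.
            _pending.Enqueue(() => Process(mode, targetIsImmediateLoad,
                                           invokeBinding, routeOnward));
            return;
        }

        invokeBinding();

        if (mode == RoutingMode.Suppress)
        {
            // The old "Block and then Suppress" behaviour is now implied
            // when the target is a non-delayed song/state load.
            if (targetIsImmediateLoad)
                _loadInProgress = true;
            return;                  // suppressed: don't route the event further
        }

        routeOnward();               // Continue: pass the event along
    }

    // Called when the song/state load completes.
    public void OnLoadFinished()
    {
        _loadInProgress = false;
        while (!_loadInProgress && _pending.Count > 0)
            _pending.Dequeue()();
    }
}
```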
This one deals with the settings that control whether a song is marked modified when changes are the result of a binding invocation. Basically, the “mark modified” logic checks if a binding is currently being dispatched and, when appropriate, ignores the change notification if so.
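In spirit it’s something like this (hypothetical sketch - the real logic has more “when appropriate” nuance than shown here):

```csharp
using System;

// Hypothetical: ignore "document modified" notifications while a binding
// is being dispatched, so binding-driven changes don't mark the song dirty.
public static class BindingDispatcher
{
    [ThreadStatic] static int _depth;

    public static bool IsDispatching => _depth > 0;

    public static void Dispatch(Action invokeBinding)
    {
        _depth++;
        try { invokeBinding(); }
        finally { _depth--; }
    }
}

public class Song
{
    public bool IsModified { get; private set; }

    public void OnChanged()
    {
        if (BindingDispatcher.IsDispatching)
            return;                  // change came from a binding: don't mark modified
        IsModified = true;
    }
}
```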
In the same area of code, I’ve also updated the way bindings are logged when Options → Diagnostics → Log Bindings is enabled. It now gives a cleaner, more precise description of the source and target binding points, the source and target values, and an indicator for when a binding was invoked due to a change but the target wasn’t invoked for some reason.
The old bindings used to have three modes (disabled, half and full). The new binding object now just has a simple enabled/disabled toggle, and by default bindings run in half mode (ie: the reverse binding is suppressed while the forward binding is being invoked and vice versa).
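Roughly speaking, half mode is a re-entrancy guard between the two directions of a binding; a hypothetical sketch:

```csharp
// Hypothetical sketch of "half" mode: while the forward direction of a
// binding is being applied, the reverse direction is suppressed (and
// vice versa) so the two directions don't feed back into each other.
public class TwoWayBinding
{
    bool _applying;

    public void OnSourceChanged(double value)
    {
        if (_applying) return;       // other direction currently applying: suppress
        _applying = true;
        try { ApplyToTarget(value); }
        finally { _applying = false; }
    }

    public void OnTargetChanged(double value)
    {
        if (_applying) return;       // other direction currently applying: suppress
        _applying = true;
        try { ApplyToSource(value); }
        finally { _applying = false; }
    }

    void ApplyToTarget(double value) { /* map and send to target */ }
    void ApplyToSource(double value) { /* map and send back to source */ }
}
```

Presumably full mode simply skips that suppression, which is why (as below) it only really makes sense for MIDI bindings.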
The only place where full mode makes sense is for MIDI bindings, so this option is now available on the source MIDI binding point, and made more explicit:
With that, I think the bindings themselves are complete and functionally include everything the old binding system could do. What remains is the various things around the bindings, like upgrading old bindings, updating the network APIs, verification etc…
Just finished implementing the replacement network API for talking to bindings. This is a new API that is not backwards compatible with the old API, but is cleaner, more self-documenting and simpler.
Since I’m reluctant to remove the old network API, I’ll also need to build a backwards compatible API that maps to the new binding system - that’ll let old client applications (including the current WebUI and the Stream Deck plugin) continue to work without change. However…
To get that working I need a way to map/convert old bindings to the new system and so I’ll leave that until I’ve tackled the “Upgrade Old Bindings” task since that’ll need something similar.
How that’s going to work I have no idea yet and it’s a bit daunting - but something to think about over the next couple of days (I’m taking a few days off to prepare Christine and all her parts to be shipped off to be put back together).
Can you share some documentation on the new API? Probably best to also prepare LivePrompter’s CantabileConnect capabilities for the “new world”. I guess with your plan to keep the old network API, things will still work, but maybe I can do things more easily with the new API…
Yep, I’ll definitely be updating the documentation for this. The main difference is that instead of all the binding properties being lumped into one JSON object, they’re separated into bindable object properties (typically the song/rack indices for “song by name/index” type bindings) and binding point properties (eg: MIDI binding points have props for event, channel, controller etc…).
There’s also a new API that lets you retrieve a list of the property names and types for a particular binding point on a particular bindable object.
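I don’t know the final shape yet, but conceptually the split might look something like this (every name below is made up, purely to show the idea of separating the two groups of properties):

```csharp
using System;
using System.Text.Json;

// Hypothetical illustration only: bindable-object properties and binding
// point properties as separate objects rather than one flat blob. None of
// these property names are confirmed.
var binding = new
{
    source = new
    {
        bindingPoint = "midiInput",
        objectProperties = new { },                       // nothing bindable-object specific here
        pointProperties = new { eventKind = "controller", channel = 1, controller = 11 },
    },
    target = new
    {
        bindingPoint = "loadSongByIndex",
        objectProperties = new { setList = "current" },   // song/rack selection lives here
        pointProperties = new { index = 3 },
    },
};

Console.WriteLine(JsonSerializer.Serialize(binding,
    new JsonSerializerOptions { WriteIndented = true }));
```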
Wish list: could you add a method for opening a song by name? And also the possibility to select a song by name from the ‘Songs’ folder, not just the current set list?
My song ‘notes’ are HTML pages and I’ve managed to host them in the Cantabile web server. I would like a way to select a song from my web page and issue the command to have Cantabile either move to that song in the current set list or, if not found, open the song from the Songs folder.
Thank you - David
I’ve been dreading this task ever since I started on this new bindings framework, and while the final code is only about 1,500 lines, it took a couple of weeks because I wanted to do everything I could to make sure it’s correct:
- generating lists of bindings and mapping types in the old and new systems and checking everything maps over (and implementing a couple of binding points that I’d neglected)
- trying various approaches for converting the bindings (mapping tables didn’t work, straight code was cleaner - see the sketch after this list)
- mapping binding point and mapper properties from the old to the new binding objects
- additional code to also upgrade all the binding states
- creating songs with every possible source and target binding point and mapping type
- testing that everything upgrades correctly - which it now seems to.
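To give a feel for the “straight code” approach mentioned in the list above (entirely hypothetical - every type and property name here is invented, and the real converter covers every old binding point and mapper type):

```csharp
using System.Collections.Generic;

// Hypothetical sketch of the "straight code" conversion approach: one
// explicit case per old binding kind, producing the equivalent new
// binding. This only shows the pattern, not the real objects.
public enum OldSourceKind { MidiController, MidiNote /* , ... */ }

public class OldBinding
{
    public OldSourceKind SourceKind;
    public bool Enabled;
    public int MidiChannel;
    public int ControllerNumber;
}

public class NewBinding
{
    public bool Enabled;
    public string SourcePoint;
    public Dictionary<string, object> SourceProperties = new();
}

public static class BindingUpgrader
{
    public static NewBinding Upgrade(OldBinding old)
    {
        var nb = new NewBinding { Enabled = old.Enabled };

        switch (old.SourceKind)
        {
            case OldSourceKind.MidiController:
                nb.SourcePoint = "midiInput.controller";
                nb.SourceProperties["channel"] = old.MidiChannel;
                nb.SourceProperties["controller"] = old.ControllerNumber;
                break;

            case OldSourceKind.MidiNote:
                nb.SourcePoint = "midiInput.note";
                nb.SourceProperties["channel"] = old.MidiChannel;
                break;

            // ... one case per old source and target binding point,
            // plus mapper property and binding state conversion ...
        }

        return nb;
    }
}
```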
After all that, it seems there’s just one thing that’s supported in the old system and not in the new: the “Control Curve” state behaviour. This has been removed in the new system since curves are now properties on the mapper object and not individually controllable via states. When upgrading bindings that use this, the upgraded binding will be generated correctly, but the option to explicitly control (or not) the curve via a state behaviour has been dropped. There’s also a very weird edge case to do with the control curve state behaviour and exported states - but I’d be shocked if anyone actually uses that.
The other requirement for all this was that it could be re-used to map a backwards compatible network API to the new binding system so existing network clients will continue to work. That’s the next job.