What's your time load?

Because an efficient ASIO driver allows more data through than a bad driver. More notes = more demand on the buffer. That’s why raising the buffer reduces demand … the processor has more time to get it together before producing output.

Yes, when the time-load exceeds 100% I get clicks, and when I play more notes the time-load goes up, but that increase isn’t from ASIO driver overhead. The Cantabile Profiler consistently shows that it comes from increased VST plugin overhead.

For example, when I’m sustaining only one note, I have an overall time-load of about 30% and my plugin reports about a 20% contribution to the time-load (so 30 - 20 = 10% comes from other things like ASIO driver latency). Playing a chord of 8 notes increases the overall time-load to about 80% and the plugin’s time-load jumps to about 70% (so 80 - 70 = 10%, again from other things like ASIO driver latency). So the ASIO driver overhead really doesn’t change much; it’s the plugin load that’s causing the time-load increases (and the resulting clicks when it goes over 100%).

That’s not to say that ASIO driver latency isn’t important. It determines what slice of time is left for everything else to process. But I don’t think that relatively fixed overhead is related to the number of notes played (except for complete silence, which has special optimizations).

Effects plugins are similar: they’re usually processed serially, they don’t depend on the number of notes (except for complete silence), and therefore they usually add a fixed, additive amount of overhead to the time-load. So I understand why they often contribute a lot of load even when little is being played.

I don’t think this is true. Whether you play 1 note or 50 notes, the buffer seen by the ASIO driver is still 256 integers (or whatever your buffer size is), and the ASIO driver has no idea how many notes were played to create it. It takes roughly the same amount of time for it to process those 256 integers whether they came from 1 note or 50 notes.

The “demand” you’re talking about is the demand upon plugins to generate those 256 ints from incoming MIDI data, sample data, LFO functions, etc. Or at least that’s my understanding.
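
As a minimal sketch of that idea (Python, with an invented SineVoice class standing in for a synth voice; nothing here is Cantabile’s or any driver’s actual code): no matter how many voices get summed, the driver is handed the same fixed-size buffer.

```python
import numpy as np

BUFFER_SIZE = 256        # samples per cycle, as set in the ASIO control panel
SAMPLE_RATE = 48_000     # Hz, assumed for the example

class SineVoice:
    """Toy stand-in for one synth voice (hypothetical, illustration only)."""
    def __init__(self, freq):
        self.freq = freq
        self.phase = 0

    def render(self, n):
        t = (np.arange(n) + self.phase) / SAMPLE_RATE
        self.phase += n
        return np.sin(2 * np.pi * self.freq * t).astype(np.float32)

def render_cycle(active_voices):
    """One audio cycle: 1 note or 50, more work happens inside this loop,
    but the driver only ever receives the finished BUFFER_SIZE samples."""
    out = np.zeros(BUFFER_SIZE, dtype=np.float32)
    for voice in active_voices:
        out += voice.render(BUFFER_SIZE)
    return out

one_note = render_cycle([SineVoice(440.0)])
big_chord = render_cycle([SineVoice(110.0 * k) for k in range(1, 9)])
assert one_note.shape == big_chord.shape == (BUFFER_SIZE,)
```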


Sorry, I think we must’ve been talking past each other.
There’s no question that any given plugin increases overhead when it’s used. :sunglasses:

In Brad’s explanation of what Time Load points to, he writes, “Cantabile’s time load display is a measure of how long it takes to process one audio cycle. It’s calculated as a fraction of the length of the audio buffer.”
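
To put rough numbers on that definition (the sample rate and processing time below are made-up example values):

```python
sample_rate = 48_000      # Hz (assumed)
buffer_size = 256         # samples

buffer_length_ms = buffer_size / sample_rate * 1000    # ~5.33 ms per audio cycle
processing_time_ms = 4.0  # hypothetical time to compute one cycle's audio

time_load = processing_time_ms / buffer_length_ms      # 0.75, shown as 75%
print(f"cycle length {buffer_length_ms:.2f} ms -> time load {time_load:.0%}")
```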

This buffer is the ASIO buffer, and raising it accommodates excessive demand, but the payoff is higher latency.
We can demonstrate that two different audio interfaces receiving the identical load can have quite different capacities for outputting glitch-free audio. The quality of the ASIO driver is a major factor. I’m suggesting that translates into more notes before dropouts occur. Same computer, different interface.
The only reason for glitches when the CPU is not particularly taxed is that the ASIO buffer cannot cope with the demand, and I have always found that the number of notes being fed into a VSTi is a cause of increased demand.
Perhaps other users have some observations on this. :blush:

It’s a bit more complicated than that: it’s a problem of the calculations for the next buffer cycle not being ready in time when the previous buffer is already done. The efficiency of the ASIO driver determines how much CPU time is spent taking that buffer and transferring it to and through the audio hardware. With an inefficient audio driver, the CPU takes more time to actually transfer the buffer, so less time remains for the actual calculation of synth voices, effects, etc.

The problem that arises during calculating the next buffer is that some calculations need to be made in sequence - first, all synth voices need to be calculated, then the data is ready to be processed by the next plugin in line, etc. So some processes may actually be idling/waiting for content while previous processes are getting their stuff done. And at the end of the sequence sits the final “audio out” step that hands the buffer over to the audio interface driver. Think of a bucket brigade - each plugin needs to process the bucket before the next in the chain can do its job.

So when a plugin in the chain experiences a burst of activity (many notes), this will hold up processing for the rest of the chain - there’s nothing the audio driver can do about that, efficient or not. It’s just that an efficient audio driver leaves more time in the buffer cycle for the actual audio processing. But the audio driver - as @Hamlen states - does nothing but shovel finished audio buffers to the actual audio hardware - it doesn’t see what happens before.

So the number of notes played may hold up the queue en route to the final ASIO buffer; and if the processed result isn’t ready in time for the audio driver requesting the next buffer, then glitches occur.
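
A minimal sketch of that bucket brigade (not Cantabile’s actual scheduling; the chain and stage functions are invented for illustration): the stages run strictly in sequence, and if the whole chain overruns the cycle length, the driver has nothing ready to hand to the hardware.

```python
import time

SAMPLE_RATE = 48_000
BUFFER_SIZE = 256
CYCLE_LENGTH_S = BUFFER_SIZE / SAMPLE_RATE     # ~5.33 ms to get everything done

def process_cycle(chain, buffer):
    """Serial 'bucket brigade': each stage must finish before the next starts.
    Returns the finished buffer and whether the chain missed its deadline."""
    start = time.perf_counter()
    for stage in chain:                 # synth voices first, then FX1, FX2, ...
        buffer = stage(buffer)          # each stage waits on the previous one
    elapsed = time.perf_counter() - start
    return buffer, elapsed > CYCLE_LENGTH_S    # True -> late buffer -> audible glitch

# Dummy stages standing in for a synth and two effects.
chain = [lambda buf: buf, lambda buf: buf, lambda buf: buf]
_, missed_deadline = process_cycle(chain, [0.0] * BUFFER_SIZE)
```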

To some extent, the structure of your song and the programming of the individual plugins have a significant influence on time load - if you have parallel, independent tasks that can be processed by separate CPU cores without needing input from each other, this will “level out” the load across cores. But in sequences of processing, one process will always need to wait for the result of a previous process. Some VST instruments are capable of “farming out” part of their processing across multiple cores, others will only process in a single thread on one core. More notes - longer processing time for that plugin before it can get to summing and final processing and then hand over the result to the next process in line.
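
A rough sketch of that difference (the thread pool and the sleep-based stage timings are invented; real plugins do actual DSP work): independent tasks can overlap on separate cores, while a serial chain takes the sum of its stages.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def stage(ms):
    """Stand-in for one plugin's work; sleeping keeps the example simple."""
    time.sleep(ms / 1000)
    return ms

# Serial chain: total time is roughly the sum of the stages.
start = time.perf_counter()
for ms in (2.0, 2.0, 2.0):
    stage(ms)
serial_ms = (time.perf_counter() - start) * 1000

# Independent parallel tasks: total time is roughly the slowest stage.
start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    list(pool.map(stage, (2.0, 2.0, 2.0)))
parallel_ms = (time.perf_counter() - start) * 1000

print(f"serial ~{serial_ms:.1f} ms vs parallel ~{parallel_ms:.1f} ms")
```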

Note: this processing may not all be about CPU load. Sample-based plugins spend quite a bit of time waiting for sample data from RAM or from disk. In that case, CPU load may actually be pretty low, while “time load” (i.e. the time until the process is done) may peak. This has nothing to do with the ASIO buffer not being able to cope with the demand; it’s the processes before the ASIO driver not being ready with their results in time when the ASIO driver wants its next chunk, regardless of whether that’s because the CPU is overwhelmed by the number of calculations or the I/O subsystem can’t deliver all the sample data from RAM or HDD in time.
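
A tiny sketch of that distinction (the 4 ms wait is simulated with a sleep, standing in for a disk or RAM fetch): the stage eats most of a ~5.3 ms cycle’s wall-clock budget while doing almost no CPU work.

```python
import time

def disk_bound_stage(buffer, wait_s=0.004):
    """Toy stage that mostly waits on I/O (simulated here with sleep):
    'time load' (wall-clock time per cycle) rises even though CPU load stays low."""
    time.sleep(wait_s)     # waiting for sample data, not computing
    return buffer

start = time.perf_counter()
disk_bound_stage([0.0] * 256)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"stage used {elapsed_ms:.1f} ms of a ~5.3 ms cycle while barely touching the CPU")
```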

The reason why higher buffer sizes help is that

  1. this provides a bit more flexibility to the processes to get their collaborative act together
  2. the relative load of the actual transfer of buffer data to the audio hardware as a percentage of each processing cycle decreases

With very small buffer sizes, the system gets so busy shoveling data to the audio interface that it hardly has any time left to actually process the data. Wild exaggeration, but you’ll get the point…
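
As a back-of-the-envelope illustration of point 2 (the 0.5 ms per-cycle driver cost is a made-up figure): the same fixed transfer overhead shrinks as a share of the cycle as the buffer grows.

```python
sample_rate = 48_000
driver_overhead_ms = 0.5   # hypothetical fixed per-cycle transfer/driver cost

for buffer_size in (64, 128, 256, 512, 1024):
    cycle_ms = buffer_size / sample_rate * 1000
    share = driver_overhead_ms / cycle_ms
    print(f"{buffer_size:>5} samples: cycle {cycle_ms:5.2f} ms, "
          f"driver overhead ~{share:.0%} of the cycle")
```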

Cheers,

Torsten


Thanks for the comprehensive inside look there, @Torsten
For sure, I never discounted the strain other elements place on overall performance. My focus, although not as eloquently put as yours, was more in this ballpark:

So, forgive me, but does it seem fair to look at the Time Load as, at least in part, a significant consequence of driver quality?

Thanks, @Torsten. That’s a clearer description and matches my understanding too.

In answer to @Ade’s question: when the time load is high irrespective of how many notes are played, then yes, driver quality (and effects plugins) are the usual prime suspects. But when the time load greatly increases with more notes played, the driver and effects are probably not the cause.

One way to think about it is that the time-load formula goes roughly like this (assuming the common case of a few VST instruments played through a common effects chain):

TimeLoad = n * max(VST1,VST2,…) + FX1 + FX2 + … + ASIODriverLatency

where n is the number of notes played. Increasing n therefore tends to amplify the impact of the VST instruments without much affecting the effects or drivers.
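
As a sketch of that back-of-the-envelope model (all the costs below are placeholder fractions of the buffer length, not measurements):

```python
def estimated_time_load(n, vst_costs_per_note, fx_costs, driver_cost):
    """Rough version of the formula above: the note count amplifies the most
    expensive instrument, while effects and driver costs stay fixed."""
    return n * max(vst_costs_per_note) + sum(fx_costs) + driver_cost

vst = [0.07, 0.05]    # two instruments; the heavier one dominates per note
fx = [0.05, 0.03]     # serial effects: fixed, additive
driver = 0.02         # ASIO transfer overhead

print(f"{estimated_time_load(1, vst, fx, driver):.0%}")   # ~17%: one sustained note
print(f"{estimated_time_load(8, vst, fx, driver):.0%}")   # ~66%: an 8-note chord
```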

But the question of why some VSTs incur so much time-load is one I’ve never been able to answer satisfactorily. What @Torsten says about RAM/disk latency is certainly the common wisdom, but when I actually test that common wisdom by changing RAM/disk speeds and recording the resulting time-load difference, the math usually doesn’t add up. It’s as if many VSTs are off taking a nap (no CPU computations, no disk loads, no RAM accesses) when they slow down. I suspect that contention for some other system resource (like the .NET example I mentioned) is to blame in those cases.


I have to admit, I never really spent much time in the profiler. I see what you guys are referring to, although it’s no surprise to us that some plugins are hungrier than others.
And the original core question was about the time-load demand when, seemingly, nothing is going on.
OK, I’d suggest a slight reframing of the issue.
We are all about headroom. When something is chewing up headroom to the point you have concerns about the ability of your rig to deliver, you want to identify the culprit(s) and take remedial steps.
We see that some plugins make a base demand.
We see that some plugins make demands based on what’s requested of them. (Incoming audio or MIDI.)
We see that the driver quality impacts the ability of a system to deliver the demand.

Solutions, quite obviously, can be found in choice of plugins and choice of audio interface, notwithstanding the need to have a reasonably capable and optimized computer.
There are some plugins that our experience tells us are best used very carefully, if at all, in a live situation.
There are some hardware producers, such as RME, who have historically provided hardware and associated drivers which perform above the norm.
Where the player (overwhelmingly, Cantabile users are actually playing rather than programming) is not willing to sacrifice what a certain plugin provides, and since CPU may not be the critical factor in determining average time load, the choice of audio hardware can be the difference between glitches and no glitches, if low latency is the goal. Isn’t it always?

If you’re looking at very low ASIO buffer settings (128 samples and lower), driver quality definitely becomes highly relevant. It’s not only the time taken by the driver to move data to the hardware that matters, but also the predictability of that process and how much it interrupts all other processing.

With higher buffer size settings, driver efficiency becomes less relevant as part of the overall picture.

That’s why I always recommend a proven and reliable dedicated audio interface when shooting for tight playability / low buffer sizes. When using on-board audio or less efficient audio interfaces, higher buffer sizes can compensate, so these can still be very usable.


Don’t forget that with the same ASIO buffer, processor speed is crucial. An i5 @ 4 GHz may be better than an i9 @ 2.6 GHz.
A billion-core CPU, as already explained, is not very useful for real-time audio. It is more reasonable to have a fast processor, fast RAM and a fast SSD.

This is why a desktop wins over a laptop, by the way.


Yes, but also consider Adaptive Boost (“Turbo”) when buying. An 8-core i9 @ 2.6 GHz (base) that boosts to 6 GHz when some cores are underused usually beats the 4-core i5 @ 4 GHz that uses all its cores. (This too is a desktop win, since it relies on thermal headroom.)

Another potentially surprising quirk: Thunderbolt audio interfaces can achieve lower ASIO driver latencies than USB interfaces (because Microsoft’s Thunderbolt driver stack is more latency-optimized than its USB stack on Windows 10/11). This contradicts internet resources that claim TB isn’t worthwhile because its extra bandwidth goes unused. It’s not about bandwidth; it’s about latency.
