Again, my warped sense of humor.
This might be a little OT, but I will go into real nostalgia mode on RISC and wax lyrical about the Inmos Transputer, which I was using in the late 80s for very demanding image processing (and the fastest Mandelbrot set generator that I knew of!).
The problem: silicon was going to run out of steam at 100 MHz, as they did not know how to shrink the fab process below a certain size. So the future was parallel processing. That prediction of 100 MHz is laughable now, and they broke through that barrier in the 90s, but think about how processor speeds have largely plateaued again and we get performance from multi-cores, hyperthreading…
Back to the late 80s. The Transputer was an all-British design. It ran at 25 MHz (when the best-spec PC was a 16 MHz 80386 CISC machine), it had the first ever onboard maths coprocessor (the 80387 was an expensive option on your PC), it had 4K of onboard RAM (nothing comparable on the 80386 PC), and it had four 20 Mbps onboard serial comms links for connecting to other transputers (no equivalent on the 80386 PC).
The transputer also had a very clever RISC instruction set that made very good use of the 32 bit data bus. Inmos realised that, per the old 80/20 rule, 80% of the time a processor is executing just 20% of the instruction set, and most operands (the data associated with the instruction opcode) are usually small (e.g. increment by 1 in a loop). So, whilst the transputer was a 32 bit processor, they designed an 8 bit instruction set. The top four bits encoded the 15 most commonly used instructions (add, subtract…) as 0x0 to 0xE, with 0xF introducing an extended multi-byte instruction, and the bottom four bits encoded operands from 0x0 to 0xE (0 to 14), with 0xF acting as a data extension for larger values spread across multiple bytes. What this meant was that 80% of the time the transputer was fetching four instructions at a time from a 32 bit wide memory, which made it blindingly fast in terms of millions of instructions per second (MIPS) compared to CISC machines. It also had built-in array manipulation instructions and integrated DRAM refresh, and it only needed a 5 MHz external system clock (simplifying circuit design).
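For anyone who has not seen it before, here is a minimal sketch in C of decoding that byte-wide format as described above (the values and byte ordering are purely illustrative, this is not Inmos code): every byte carries a function code in its top nibble and a small operand in its bottom nibble, so one 32 bit memory fetch hands you four instructions.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t fetch = 0x41234F71;           /* one 32 bit fetch = four instruction bytes (made-up values) */

    for (int i = 0; i < 4; i++) {          /* byte order within the word is illustrative only */
        uint8_t insn    = (fetch >> (i * 8)) & 0xFF;
        uint8_t func    = insn >> 4;       /* top nibble: direct function, 0xF = extended instruction  */
        uint8_t operand = insn & 0x0F;     /* bottom nibble: small operand 0..14, 0xF = data extension */
        printf("byte %d: func=0x%X operand=0x%X\n", i, func, operand);
    }
    return 0;
}
```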
This was all clever stuff.
In the late 80s, when my state-of-the-art PC controlling my transputer array was that 80386 running at 16 MHz, I had 16 transputers from a pre-qualified batch of 30 MHz chips (I was really pushing the boundaries of speed/performance and got that batch by special arrangement from Inmos), which gave me nearly 500 MHz of aggregate clock on my desktop; with a few other ancillary transputers in my design I was over 500 MHz. Awesome performance for those days.
Sadly Inmos could never crack the American market (not invented there, so they could not break in), they over-promised and under-delivered on the next generation of transputers, which were awesome on paper but never got into serious production, and they then ran out of cash. Rather suspiciously, a very transputer-like DSP from Texas Instruments hoovered up the American market at the same time, sealing Inmos’ fate.
But I still look back in fondness on those days when I was a real engineer at the cutting edge of the day, and my job was pretty much an electronics hobby! And RISC was definitely the way ahead then. It seems to be coming back now.
No RISC, no fun…
Again…
I am so sorry to seem like an Apple fanboy.
Here is a first example of a common chip and OS approach: a Moog app that runs on all devices (desktop, tablet, phone).
And quite sexy…
@Derek, do we know each other? I was a visiting scientist at the EPCC in Edinburgh in 1989-1991 and I remember coworkers doing Mandelbrot sets on the Meiko CS transputer system there. I did my PhD on those systems - the community was not that big, so … have we met?
@Tom_Tollenaere I don’t think so, as I never made it to Scotland in that era, not work wise anyway.
Fractal geometry was all the rage in those days as computers became more widely available, so it was quite common to set those algorithms up for some fun. I also did Hénon mapping (chaotic attractors), but that was as far as I got into it myself.
I programmed up a Mandelbrot generator as a test program to figure out how to partition a job across multiple processors and get the results back (you had to do all that yourself in those days). Basically you gave each point to a different processor and integrated the results, keeping track of which points had been dished out and which were outstanding, which processors were in use and which were free. That was hard enough to work out without trying to do anything complex, and the Mandelbrot algorithm is pretty simple, so it was a good test vehicle whilst working out how to think in parallel processing terms.
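For flavour, here is a minimal sketch of the same farming idea in modern C with pthreads (nothing like the original occam/transputer code, and all the names are illustrative): each worker grabs the next outstanding row, computes it, and goes back for more until nothing is left. The bookkeeping is the interesting bit; the Mandelbrot iteration itself is trivial.

```c
#include <pthread.h>
#include <stdio.h>

#define WIDTH    64
#define HEIGHT   32
#define MAX_ITER 100
#define WORKERS  4

static int image[HEIGHT][WIDTH];
static int next_row = 0;                     /* which rows are still outstanding */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static int mandel(double cr, double ci)
{
    double zr = 0.0, zi = 0.0;
    for (int n = 0; n < MAX_ITER; n++) {
        double zr2 = zr * zr - zi * zi + cr;
        zi = 2.0 * zr * zi + ci;
        zr = zr2;
        if (zr * zr + zi * zi > 4.0)
            return n;                        /* escaped: outside the set */
    }
    return MAX_ITER;                         /* treated as inside the set */
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);           /* the farmer's bookkeeping: grab the next job */
        int row = next_row++;
        pthread_mutex_unlock(&lock);
        if (row >= HEIGHT)
            return NULL;                     /* no outstanding work left */

        for (int col = 0; col < WIDTH; col++) {
            double cr = -2.0 + 3.0 * col / WIDTH;
            double ci = -1.2 + 2.4 * row / HEIGHT;
            image[row][col] = mandel(cr, ci);
        }
    }
}

int main(void)
{
    pthread_t t[WORKERS];
    for (int i = 0; i < WORKERS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < WORKERS; i++)
        pthread_join(t[i], NULL);

    for (int r = 0; r < HEIGHT; r++) {       /* crude ASCII rendering of the result */
        for (int c = 0; c < WIDTH; c++)
            putchar(image[r][c] == MAX_ITER ? '#' : ' ');
        putchar('\n');
    }
    return 0;
}
```

(Compile with something like `cc mandel.c -lpthread` if you want to try it.)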
I can’t remember the timing differences now, but even a single transputer on its own whooped the arse off a PC doing this computation, and 16 working on it flew. On a PC you would see each point building up slowly, but on the transputer you would see whole lines being built up really fast.
As well as the parallelism, the higher clock speed, the RISC approach, the instruction set elegance (mentioned above) and the onboard RAM all made a heck of a difference in performance.
@derek I remember all of that. I did AI on the transputer farm, massive (for the ear) simulations of neural networks. I think the largest farm I was ever allowed to run was 256 transputers at once. No way you could do that with Intel. Programming was rough though, I worked in raw C (not occam!) on the Meiko - try to debug memory leaks in 256 processors … auch. Anyway, fond memories!
Derek’s reminiscing about the Transputer reminds me of something I read: that early on, Pixar pretended to be a hardware development firm so they had an excuse to commandeer the processing power for their animation experiments. When their military sponsors asked why any effort went into the animations, they would reply that it was their test harness for the hardware.
I only got up to 16, so you were way ahead of me there. I was using my farm for image processing and running an “area correlation auto tracking” algorithm that had never been run in real-time before, so my transputer crate also had a frame grabber, frame-store, overlays, the Inmos graphics processor and front end image processing chips (A110s) that did all sorts of tricks in real time like edge detection or gray scale histogramming, before you grabbed the video.
I did all of the hardware design and the software in C/C++ (I ditched Occam as soon as I could, to the horror of others in my user community). For performance-critical code sections I was also dropping down to assembler (inline within the C compiler), where you kept variables on the 3-register stack, overlapped the integer and floating point units - if you had a ten-cycle floating point multiply going on you could still use the integer unit to do other computations until you needed the floating point result - and used all sorts of other clever optimisation tricks you would not bother with today.
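Out of curiosity, here is the overlap trick captured in plain C rather than transputer assembler. It is only a rough sketch of the dependency structure (start the long floating point multiply, keep the integer unit busy with independent work, and only touch the result when you must); in C the compiler and CPU do the actual scheduling, and every name here is made up for illustration.

```c
#include <stdio.h>

int main(void)
{
    double x = 3.14159, y = 2.71828;
    int data[8] = {3, -1, 4, 1, -5, 9, 2, -6};

    double prod = x * y;          /* long-latency FP multiply starts here      */

    int positives = 0;            /* independent integer work can overlap it   */
    for (int i = 0; i < 8; i++)
        if (data[i] > 0)
            positives++;

    /* first point where the FP result is actually consumed */
    printf("prod=%f positives=%d scaled=%f\n", prod, positives, prod * positives);
    return 0;
}
```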
The same job introduced FPGAs and PLDs to my place of work for the first time, along with surface mount components and four-layer PCB construction techniques, none of which had been used here before. It was really cutting edge at the time in many ways.
Memory leaks are always trouble, but the biggest issues to solve on parallel arrays were usually livelock or deadlock on the parallel comms.
Fond memories as you say, I really did treat it as a paid hobby in those days, and it does seem (for now) you are never far from somebody who knows Transputers.
Sorry for taking this a bit OT, but the RISC comment made me reminisce
Back OT, it will be interesting to see where Apple take this new processor approach. They have jumped horses before, and I think over time we will see more convergence of iOS and OS X, which you are starting to see now in apps that run on both. And during the transition, just like in the transition from PowerPC to Intel, having Intel and M executables in the universal binary packages, plus a dynamic binary translator called Rosetta to translate from Intel to M1 on the fly, allows for legacy migration.
I am indeed typing this on my Mac, which I love as a general work horse that is more pleasant to use (in my opinion of course - no intention to start a processor war!), but my music computers are still PC (and working fine) as I have VSTs that are PC only that I would not want to be without so long as I can keep running them.
I’m not a Mac man at all, my last Mac was 30 years ago, but more and more examples are showing that the M1 chip’s processing capabilities are promising, particularly for audio and music tasks.
32 Diva instances x 6 voices sounds very good for this entry-level thing.
Also the low latency performance seems to be very good.
I think I’ll be watching to see how this evolves
Hmmmmm, this got me thinking. When was the last time I needed 32 instances of Diva?
Let me think… maybe when doing a stress test?
More seriously, have you never reached the limit of the realtime processing of your PC with Cantabile, or have you gotten too close to it and got a nice crack/pop? At least in my case (an 8750H laptop), it is not that difficult sometimes.
Anyway I highly doubt that there will be a version of Cantabile for M1x Macs anytime soon, so no rush.
I have arrived at a firm conclusion, from my own experience and from some reports by guys here: with a Windows laptop, performance for musicians is not assured, and driver instability in Windows can be catastrophic. With an Apple notebook everything is extremely safe.
You have to pay much more money and go into a very closed environment (without Cantabile!).
But if you need many tracks with safe performance on a portable system, there is no contest.
With M1 processors the gap is getting even wider, and we have not yet seen real native M1 code at work.
With Rosetta it is already incredibly good.
It would be good for a prog odyssey!
For sure!!
I really don’t understand the stability problems you are having. I have been using Windows PCs for music since 2001 (when you needed a specially tuned version of Windows 98 SE) for my DAW, and have been gigging with a PC since 2008. First it was only for running NI B4 at the end of my MIDI chain, then for backing tracks and lightshow (and B4 II) since 2010, and it has been the heart of my current rig since 2017 with all of my VSTs. It has all been pretty solid, so long as you ensure it is correctly configured (Brad’s guide is all I follow) and watch your processor load (which is why I like a hybrid of VSTs and real instruments, as I like to layer sounds a lot). I have had the odd problem now and again, but nothing that would make me question the approach.
I would also say that life on a Mac is not 100% bomb proof either. I have had the odd crash on Mac as well. I do feel that my Mac is more stable, but there is not much in it these days.
I guess you need to make a choice to stay with Windows and Cantabile if you can resolve the problems or move to Mac, as you are clearly not happy with current matters.
Also don’t forget that Rosetta is a dynamic binary translator that will translate from Intel code to M1 code at run time. Because of that you are going to get a performance hit on Intel code running on M1, and not all legacy Intel applications will run under Rosetta (this was also true for the PPC to Intel version of Rosetta).
Furio
I also don’t understand your continuous rant. Most everyone here uses Cantabile on Windows. As Brad has on the main page: “Serious Live Performance Software - Cantabile is a powerful and flexible VST host designed for live performing stage musicians who want to perform better.”
As many have stated, no problems with Windows. I am also sorry you’ve had so many problems with Windows laptops, and dozens of keyboards. Your statement about “catastrophic Windows drivers” just doesn’t hold water against all the many success stories on this forum.
I run many tracks when using a DAW on a laptop, but just how many tracks are you using in a “live” performance? If you need a huge number of tracks, when does your performance cross the line into karaoke? No offense to those using backing tracks, but how many do you need to stall Windows? I can run quite a few with no problems.
I have turned on several churches in my area to Cantabile and Ableton for backing tracks, and live performance…all using Windows laptops…no problems.
I have been gigging with older, cheaper Windows laptops for years with no problems (except my own errors). I don’t know what your needs are…Live Performance…or running Pro Tools. In my 3-piece group, I am running EDrums, Bass amp sims, 3 keyboard controllers, 3 vocals with changing fx, Guitar amp sims, lighting, and a mixer through my older Windows laptop. No Problems.
I get it…you are a fan of the M1, good for you. I will never own one. Just because you seem to have disastrous results with electronics, doesn’t mean we all do. There are many “better” things I could purchase, but I will not go broke doing so. I don’t use my best equipment on a normal gig…why destroy the good stuff, unless it is warranted?
If I were in a touring Prog Band, I would certainly “upgrade” a few things. That being said, we have several resident Touring Prog Band members using Cantabile and Windows, no problem. Again, your statement about Windows instability, and such, just doesn’t prove true in real world use. My 2 cents.
Regards
Corky
Simple explanation. Three notebooks in 12 years with Windows and Reaper, and Cantabile for the last 2 years. Always problems.
Of course during live playing I cannot use more than 2 or 3 synths at a time, so glitching is not so worrying. During rehearsals with my band in the last few weeks nothing bad happened.
But currently a simple cover with 12 tracks in Reaper is a disaster. On a top-brand gaming machine worth 2000 euros.
After applying every trick from Brad, and from other gurus online.
Obviously I have seen many Windows setups in recording studios, but on big PCs, not on notebooks.
I have never seen with my own eyes a Windows notebook managing a serious audio project in a DAW. By serious I mean 20 tracks with some plugins on each track.
I was hoping to pay 2000 bucks and leave glitching behind as a distant memory.
I tested a Dell, a Sony Vaio, an Asus VivoBook, and now the infamous MSI.
The last three were all i7s, with RAM and SSD.
The last self-recording I did was on a refurb HP laptop: i5, 16 GB RAM, SSD. 18 tracks, several wav files I recorded, many VSTs, several FX VSTs on each track. I also had a wav audio file for reference… the 19th track. I used Reaper as my DAW. Never any glitches, it ran smoothly every time. My live recordings had overdub wav files, which added another 7 wav files. Still no problems. Rendering the files to a stereo file was quick and seamless.
My current recording has several huge libraries, again with no problem. I’ve never bought a gaming computer, just basic off-the-shelf types. The SSDs and larger RAM have made a huge difference, and the i5 is very reliable. I also use it for gigging, and run Keyscape, Diva, B3-X, and Amplitube in the same song with only a 60% hit on the CPU. So I’ve not experienced the Windows catastrophe you espoused, and now you know of a successful DAW project on a Windows laptop. My son records on his Windows laptop, with many tracks and overdubs. Most of his stuff is 15 minutes long, but he has no problems.
I wrote that a Windows notebook CAN have a catastrophic driver situation. It can happen. Not always.
And that if you buy a new one you should be careful, testing before paying.
Many guys here decided to go with more “fixed” PC systems. So I am not the only one taking this skeptical approach.
I think you were lucky.
I also wrote that I have never seen with my own eyes satisfying audio performance on a Windows notebook.
I confirm this last sentence.
A vendor here just offered me a special audio notebook at 2500 euro.
I am not buying another Windows notebook online if I cannot see with my own eyes LatencyMon running perfectly.