Slightly OT, but that takes me back to the days when hand code optimisation was still king.
In my programming heyday in the late 80s/early 90s I was working with INMOS Transputers. When the state of the art PC was a 16MHz 80386 (math co-processor an expensive option) I had 30MHz prequalified chips direct from INMOS and not yet on the market (20MHz was the market max), and 16 of them in a parallel array to give me 0.5GHz on my desktop - stonkingly fast in those days. Parallelism was the future, as silicon was supposedly running out of steam at 100MHz (a laughable prediction looking back on it now, but look at our current multi-core, hyper-threaded processor architectures - they are effectively the transputer concept, all in one chip).
Anyway, those were the days when you wrote everything in C/C++ (I had dropped the horrible Transputer-specific OCCAM language as soon as a C compiler became available), but when you had the need for speed, you dropped down to hand-crafted assembler. In the case of Transputers, the fun part was interleaving in assembly code the instructions for the integer unit and the onboard floating point unit (the first ever processor to have one integrated), as you could issue a floating point instruction and the integer unit could carry on until the floating point result was needed. So for one floating point instruction that took, say, 10 clock cycles, you could overlap four or five integer operations. The compilers of the day did not understand any of that, and I doubled the performance compared to compiled code (I needed every ounce of it for what I was doing in real time).
Of course, whilst I have fond memories of those times, I am glad those days are behind me, but coding efficiency is still something I have huge respect for!
Would have to be elegant Motorola rather than kludgy old Intel. I used to do 6809 and 68000 - and Intel 8085, 8086, 8051, etc.
OT alert: But there was real elegance in the Transputer RISC architecture. On the old 80/20 rule, INMOS encoded the 20% of the instruction set you used 80% of the time into the high nibble of a byte (0-14, with 15 indicating an extended instruction) and operands up to 14 in the low nibble. So 80% of the time the 32-bit Transputer was fetching four instructions (opcodes and data) in a single word. It was blindingly fast for its day!
Seriously, and back on topic, C would be fine by me. I still program in Java, so closer to C++ but with all the bits of C++ that nobody understood (or that got abused) thrown out. I can still get back to C when needed.
Good old ANSI C is also elegant IMO, as are von Neumann, big endian, etc. in the 6800, 6809, 68HC11 and so on. And now I read that Brad is a Z80 guy. What a disappointment - I won't be extending Cantabile any further and I'm going to buy a ten year GP license right now…
Sorry for this old-fashioned developer's OT drift.
Oh yes, 100% Z80. There was a time when I knew the hex codes for loads of the instructions by heart, as I was hand-assembling from an op-code lookup table…
aaaah, getting sentimental here; before I got this book, all my knowledge of machine code was gained by trying to decipher code fragments in electronics magazines (remember the KIM-1 and its brethren?):
I had to travel 50 km to the nearest reasonably large city, got to an office equipment store (these were the only guys selling computers at the time), and they found the book in a dusty corner of the manager's second office. They didn't even know the price, so I got it cheap - and I still have it somewhere.
First thing I built was (of course) an assembler in BASIC on the Commodore 4032; next was a logical disassembler, able to follow jumps and branches to decode some crazy software protection schemes that prevented me from loading software from disk instead of that nasty datassette…
For years, I had the complete zero page of the Commodore 64 OS imprinted in my memory, and I still remember that $A9 was load accu immediate, and $85 was store accu in zero page… Counting conditional relative branches by hand was something I could do in my sleep…
When I switched to a 68000-powered platform, things became far less "hands-on"; we had decent compilers/assemblers then, so even though I still wrote machine code for the 68000 (and later for the VAX), it never became as "intimate" as the 6502 experience…
Yup, before Apple2 I had the SYM-1. So I was a 6502 freak teenager during those years. Then I switched to the Motorola 68-everything.
And now that the talk has gone off-topic, @Brad, please move our stories to a separate thread. Because I can see that there are many of us hopeless tech romantics out there. And there will be many tales to tell.
Whilst the main thread is about scripting, I love OT discussions like this as well (and probably my fault).
I first learnt programming around 1983 as a second year electronics apprentice, programming a Linrose (I think) microprocessor development kit which was a Z80 based box of tricks (with a bit of IO and a breadboard area) and was gathering dust in the corner of the workshop as nobody understood it. It only had a hex keypad and a seven segment display system for the UI, and you had to hand write your assembler program, hand assemble it using an opcode table and punch it in yourself.
I still say that if you came to programming having had to do it that way, rather than starting at a HLL level, it gave you a much better appreciation of what a microprocessor does. And sometimes in later years I could only solve HLL bugs by dropping into assembler. I once had an absolute doozy of an intermittent crash with MSDOS INT18/11 calls in a real time C program that took me weeks to catch, and I only did that by looking at assembler and stack push/pulls. (Ask me and I will tell you…)
The Fortran boffins in our data centre in the late 80s could not understand why I would drop to assembler rather than program in Fortran, until I showed them a 30-fold speedup in some instances (and usually at least five times better) in the image processing algorithms we were running (Sobel edge detection, anybody?). The PDP/11 Fortran compiler was shockingly inefficient! Even simple things like always storing a variable to memory after a calculation and then reading it back for the next statement, when it didn't need to because the value was already in a processor register!
Oh, I forgot to say I knew PDP/11 assembler before I knew 68000, but given that the 68000 was a bit of a PDP/11 rip-off, it made that move quite easy!
Back in the day I knew Z80 inside out, and I agree you do get a much better understanding of how things work. Fast forward to the 2000s and I wrote an FPGA implementation of a Z80-based Microbee and later a TRS80 Model 1, and that took things to a whole new level - there's not much about those machines I don't understand. Writing emulators is another fun way to get deeper insights.
If comp sci courses required students to write an emulator of a simple machine, I'm sure they'd turn out much better developers.
I had the Rodnay Zaks book too, and absorbed every word of it. It was my bible for a couple of years. Good times!
Couldn't agree more about writing emulators etc. Although I've never done that, I have written an assembler/linker, which also teaches a lot about those low-level mechanics. These days most developers don't have a clue what's going on in their scripting language interpreter, let alone their OS or CPU.
Someone once told me something I've found to be a good guide: you should always have a thorough understanding of one level deeper than where you're mostly working.
So if you're writing:
assembler => hardware (eg: clock timing and memory cycles)
C => assembler
C++ => how to be a know it all snob
SQL => indices
C# => MSIL
JavaScript => how hashed objects and closures actually work
TypeScript => JavaScript
Of course, the further down you go the better your understanding, but JavaScript programmers don't really need to understand cross-domain clocking issues and memory read/write cycles.
PS: I used to be a fan of C++ and it's what Cantabile's audio engine is written in, but I've come to really despise it of late - especially stl and what's become known as "Modern C++". It's just waaaaay too complicated for my liking. For my C++ projects I have a really simple template library and do away with stl entirely.
I agree - it seems C++ didn't know when to stop, and its flexibility became its own worst enemy, allowing overly-complex, unreadable, clever template trickery. When one compiler error is two pages long and you can't even work out how to read it, you know something isn't right about the language.