Romancing the Stone Age Tech

Good old ANSI C is also elegant IMO, as are von Neumann, big-endian, etc. in the 6800, 6809, 68HC11 and so on. And now I read that Brad is a Z80 guy. What a disappointment; I won't be extending Cantabile any further, and I'm going to buy a ten-year GP license right now... :rofl: :rofl: :rofl:
Sorry for this old-fashioned developer's OT drift.

Edit: you can tell I'm joking, can't you?

2 Likes

Oh yes, 100% Z80. There was a time when I knew the hex codes for loads of the instructions by heart, as I was hand-assembling from an opcode lookup table...

C9

3 Likes

Neil, you too!

Too many Z-boys in here. :rofl: And, ironically, the creator of Zilog was born and lived just outside my town.

1 Like

68k guy here!
:raising_hand_man:
Gabriel

2 Likes

aaaah, getting sentimental here; before I got this book, all my knowledge of machine code was gained by trying to decipher code fragments in electronics magazines (remember the KIM-1 and its brethren?):

I had to travel 50 km to the nearest reasonably large city, went to an office equipment store (these were the only guys selling computers at the time), and they found the book in a dusty corner of the manager's second office. They didn't even know the price, so I got it cheap - and I still have it somewhere.

First thing I built was (of course) an assembler in BASIC on the Commodore 4032; next was a logical disassembler, able to follow jumps and branches to decode some crazy software protection schemes that prevented me from loading software from disk instead of that nasty Datassette...

For years, I had the complete zero page of the Commodore 64 OS imprinted in my memory, and I still remember that $A9 was load accumulator immediate, and $85 was store accumulator to zero page... Counting conditional relative branches by hand was something I could do in my sleep...
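For anyone who never had to do this by hand, here is a small sketch (my illustration, not the poster's actual workflow) of the arithmetic involved: $A9 is indeed 6502 LDA immediate, $85 is STA zero page, and a conditional branch encodes a signed byte counted from the address just past the 2-byte branch instruction.

```python
def branch_offset(branch_addr, target_addr):
    """Offset byte for a 6502 relative branch (e.g. BNE, opcode $D0)."""
    # The PC has already moved past the 2-byte branch when the offset is applied.
    delta = target_addr - (branch_addr + 2)
    if not -128 <= delta <= 127:
        raise ValueError("target out of range for a relative branch")
    return delta & 0xFF  # two's-complement byte

# Hand-assemble a little countdown loop at $C000:
# C000  A9 05   LDA #$05
# C002  85 20   STA $20
# C004  C6 20   DEC $20
# C006  D0 FC   BNE $C004   ; offset = C004 - (C006+2) = -4 = $FC
program = bytes([0xA9, 0x05, 0x85, 0x20, 0xC6, 0x20,
                 0xD0, branch_offset(0xC006, 0xC004)])
print(program.hex(" "))  # a9 05 85 20 c6 20 d0 fc
```

Counting that backwards branch ($FC = -4) mentally, over and over, is exactly the skill being described.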

When I switched to a 68000-powered platform, things became far less "hands-on"; we had decent compilers/assemblers then, so even though I still wrote machine code for the 68000 (and later for the VAX), it never became as "intimate" as the 6502 experience...

5 Likes

Yup, before the Apple II I had the SYM-1. So I was a 6502 freak teenager during those years. Then I switched to the Motorola 68-everything.

And now that the talk has gone off-topic, @Brad, please move our stories to a separate thread. Because I can see that there are many of us hopeless tech romantics out there. And there will be many tales to tell. :wink:

2 Likes

Whilst the main thread is about scripting, I love OT discussions like this as well (and it's probably my fault :wink: ).

I first learnt programming around 1983 as a second-year electronics apprentice, programming a Linrose (I think) microprocessor development kit, which was a Z80-based box of tricks (with a bit of IO and a breadboard area) that was gathering dust in the corner of the workshop as nobody understood it. It only had a hex keypad and a seven-segment display system for the UI, and you had to hand-write your assembler program, hand-assemble it using an opcode table and punch it in yourself.
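That opcode-table workflow can be pictured as a lookup-and-write-down loop. Here is a hypothetical miniature of it (the table keys and the helper are my invention, but the byte values are real Z80 opcodes): look each mnemonic up in a table, write down the hex, append operands little-endian, and punch it all in.

```python
# A few real Z80 opcodes (single-byte; some take an operand after them).
OPCODES = {
    "NOP":    0x00,
    "LD A,n": 0x3E,  # followed by one immediate byte
    "JP nn":  0xC3,  # followed by a 16-bit address, little-endian
    "HALT":   0x76,
    "RET":    0xC9,
}

def hand_assemble(lines):
    """Return the hex bytes for each (mnemonic, *operands) source line."""
    out = []
    for mnemonic, *operands in lines:
        out.append(OPCODES[mnemonic])
        for op in operands:
            if op > 0xFF:                 # 16-bit operand: low byte first
                out += [op & 0xFF, op >> 8]
            else:
                out.append(op)
    return out

code = hand_assemble([
    ("LD A,n", 0x42),   # LD A,42h
    ("HALT",),
    ("JP nn", 0x8000),  # JP 8000h
])
print(" ".join(f"{b:02X}" for b in code))  # 3E 42 76 C3 00 80
```

Doing this on paper for a whole program, then keying the bytes in on a hex keypad, is the exercise being described.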

I still say that if you came to programming having had to do it that way, rather than starting at the HLL level, it gave you a much better appreciation of what a microprocessor does. And sometimes in later years I could only solve HLL bugs by dropping into assembler. I once had an absolute doozy of an intermittent crash with MS-DOS INT18/11 calls in a real-time C program that took me weeks to catch, and I only did that by looking at assembler and stack push/pulls. (Ask me and I will tell you...)

The Fortran boffins in our data centre in the late 80s could not understand why I would drop to assembler and not program in Fortran, until I showed them a 30-fold speedup in some instances (and usually at least five times) on the image processing algorithms we were running (Sobel edge detection, anybody?). The PDP/11 Fortran compiler was shockingly inefficient! Even simple things like always storing a variable to memory after a calculation and then reading it back for the next statement, when it didn't need to because the value was already in a processor register!

Oh, I forgot to say I knew PDP/11 assembler before I knew 68000, but given that the 68000 was a bit of a PDP/11 rip-off, it made that move quite easy!

I had this one:

:metal:

2 Likes

Back in the day I knew the Z80 inside out, and I agree you do get a much better understanding of how things work. Fast forward to the 2000s and I wrote an FPGA implementation of a Z80-based Microbee and later a TRS80 Model 1, and that took things to a whole new level - there's not much about those machines I don't understand. Writing emulators is another fun way to get deeper insights.

If comp sci courses required students to write an emulator of a simple machine, I'm sure they'd turn out much better developers.
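As a taste of that exercise, here is a minimal sketch of the kind of thing a student might start with: an emulator for an invented three-instruction machine (not any real CPU). Even a toy fetch-decode-execute loop teaches program counters, instruction encodings, memory, and halting.

```python
def run(memory):
    """Emulate a made-up machine:
    0x01 n = load accumulator with n
    0x02 n = add n to accumulator (8-bit wraparound)
    0xFF   = halt and return the accumulator
    """
    acc, pc = 0, 0
    while True:
        opcode = memory[pc]          # fetch
        if opcode == 0x01:           # decode/execute: LDA immediate
            acc = memory[pc + 1]
            pc += 2
        elif opcode == 0x02:         # ADD immediate
            acc = (acc + memory[pc + 1]) & 0xFF
            pc += 2
        elif opcode == 0xFF:         # HLT
            return acc
        else:
            raise ValueError(f"illegal opcode {opcode:#04x} at {pc:#06x}")

print(run([0x01, 0x30, 0x02, 0x12, 0xFF]))  # 0x30 + 0x12 = 0x42 -> prints 66
```

Growing this into a full 6502 or Z80 emulator mostly means filling in more opcodes and flags; the loop's shape stays the same.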

4 Likes

Couldn't agree more

I had the Rodnay Zaks book too, and absorbed every word of it. It was my bible for a couple of years. Good times!

Couldn't agree more about writing emulators etc. Although I've never done that, I have written an assembler/linker, which also teaches a lot about those low-level mechanics. These days most developers don't have a clue what's going on in their scripting language interpreter, let alone their OS or CPU.

Neil

2 Likes

Someone once told me something I've since found to be a good guide: you should always have a thorough understanding of one level deeper than where you're mostly working.

So if you're writing:

  • assembler => hardware (eg: clock timing and memory cycles)
  • C => assembler
  • C++ => how to be a know it all snob
  • SQL => indices
  • C# => MSIL
  • JavaScript => how hashed objects and closures actually work
  • TypeScript => JavaScript

Of course the further down you go the better your understanding, but JavaScript programmers don't really need to understand cross-domain clocking issues and memory read/write cycles.

PS: I used to be a fan of C++ and it's what Cantabile's audio engine is written in, but I've come to really despise it of late - especially the STL and what's become known as "Modern C++". It's just waaaaay too complicated for my liking. For my C++ projects I have a really simple template library and do away with the STL entirely.

2 Likes

I agree. It seems C++ didn't know when to stop, and its flexibility became its own worst enemy, allowing overly complex, unreadably clever template trickery. When one compiler error is two pages long and you can't even work out how to read it, you know something isn't right about the language.

3 Likes

@brad

Someone once told me something I've since found to be a good guide: you should always have a thorough understanding of one level deeper than where you're mostly working.

That is a good rule of thumb, and certainly where I was coming from. Whilst I program in HLLs (Java these days), I have a very good understanding of machine architecture from coming up the other way, and it has helped me tackle really obscure problems by dropping down and seeing what the machine is doing.

It's a bit like me thinking I am (hopefully) a better system engineer by first being apprentice, electronics craftsman, research engineer, project support, project manager, and then (after reverse op :wink: ) back to system engineer. I know system engineers who have come straight in at that level after doing a qualification in it; some of them are OK, but some are hopeless. All of that background and those cross-domain skills give me a lot of insight.

Whilst nobody did ask :wink: I will quickly reminisce on my C application problem, which I only solved because I could "go one level deeper". It was fun to think about again, and it's the sort of challenge you will never forget!

It was a program that received data on a serial port from an external sensor every 50ms, decoded it, processed it, and displayed it as part of an operator-in-the-loop control system. The serial port handler was operating on an interrupt-driven basis, which introduced an asynchronous nature to the code.

This was all set up and programmed in Microsoft Programmer's Workbench (remember that?), with CodeView as the debugger.

It was all working nicely, other than a few times a day a random crash would happen, reported as a stack overflow error. That "bugged" me, as I thought the stack size was plenty: there were not that many nested function calls (and no recursion to blow the stack). So I increased the stack size and the random crash was still there. Over a week I kept looking at it, increasing the stack size to the maximum (a 64K segment), and the crash was still there. If I changed to polling the serial port (not ideal) the crash did not occur.

Of course we know that software has no random failure mechanism like hardware does. It is in fact 100% deterministic once the input conditions that trigger a bug are satisfied. This crash was obviously something to do with the interrupt-driven nature leading to some condition that caused a stack overflow.

But why? Well, I dropped down to debugging in assembler view to try and catch it, and one day when I was stepping through the interrupt code at the assembler level I noticed that the segmented stack frame address had changed completely from what I had seen in the previous run. I continued stepping and the program crashed.

I could then of course instrument the code and dump some diagnostics, and every time the program crashed, the stack pointer was not within the 640K of user memory where the C compiler normally set up the EXE's stack. It was in memory normally reserved for MS-DOS!

So, what was happening? To cut a long story short, as this took a while to discover(!), the crash never happened when the program was in my own C code. It happened when my code called an MS-DOS IO function, such as updating the display, which was achieved via INT functions. What was causing the crash is that MS-DOS, when it is in an INT function, caches the program's stack pointer and sets up its own, restoring the program's stack pointer when it completes. The MS-DOS stack area was something ridiculously small, like 5 bytes!! So if data arrived on the serial port and triggered my interrupt handler while the code was in an MS-DOS INT call, such as updating the user display, then my interrupt function's local variables (about half a dozen, from memory) created on the stack were enough to overflow the MS-DOS stack!
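The mechanism can be shown with a toy model (my illustration, with made-up sizes, not real MS-DOS internals): the OS call swaps in its own tiny stack, so an interrupt arriving in that window pushes the handler's frame onto the small stack instead of the program's large one.

```python
APP_STACK_SIZE = 4096   # plenty for the application's own call depth
DOS_STACK_SIZE = 16     # the ridiculously small OS-call stack in the story

def simulate(interrupt_during_dos_call, handler_frame_bytes=24):
    """Return 'ok' or 'stack overflow' for one interrupt arrival."""
    limit = APP_STACK_SIZE
    if interrupt_during_dos_call:
        limit = DOS_STACK_SIZE   # MS-DOS has swapped in its own tiny stack
    depth = handler_frame_bytes  # handler's locals + saved registers pushed
    return "stack overflow" if depth > limit else "ok"

print(simulate(False))  # interrupt lands in app code: ok
print(simulate(True))   # interrupt lands inside a DOS INT call: stack overflow
```

The same handler frame is harmless or fatal depending purely on *which* stack happens to be active, which is why the crash looked random.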

Man, that was an "I do not believe it" moment. Why would MS-DOS do that? Of course I never found the answer, but the solution was, again at low level with some inline assembler in the C code, to set up my interrupt handler to mimic what MS-DOS was doing: allocate its own stack on entry and restore whatever stack was in place (application or MS-DOS) on exit. I was then back in control of the stack size, could ensure it was large enough, and the failure mode was removed! :slight_smile:

That is one I do not think I ever would have caught without a knowledge of processor architecture, assembler, how a stack works, etc.

All good fun!

@Neil_Durant

I haven't looked at C++ for 20-odd years, so I do not know how it has evolved, but I certainly liked Java's approach of taking C++ as the starting point and then simplifying the language, throwing out things like multiple inheritance, operator overloading, etc., as "clever features" that could cause a lot of confusion once you have been away from the code for a while, or are new to it with poor documentation. And the Java fathers argued that you could write perfectly good and more maintainable code without them. I certainly have not missed them! :slight_smile:

Evolving languages is a double-edged sword. A decade ago I was programming in LabVIEW - a really nice paradigm for an ex-hardware engineer - and I was programming FPGAs for time-critical systems in LabVIEW FPGA (this was for a contract that brought me down under for a few years, @brad ). It was really nice to think purely in data flows, via virtual wires into functions (like wiring up components that did things). Very neat. As I understand it, people are now turning away from LabVIEW as National Instruments have over-complicated it and turned it into a framework architecture that is too complex, and they have (I am told) lost the original paradigm that made LabVIEW popular in the first place.

PS @Brad good call to give this its own thread! :slight_smile:

2 Likes

@Derek I've had my share of bugs like that too. They can be so frustrating in the moment, but once solved it's pretty satisfying, especially when you can look back in hindsight and it all adds up.

2 Likes

My processor history in short.

6502 on a SYM-1 board and its KTM-80 video terminal. Designed for this system: an 8KB expansion with battery-backed RAM, and a three-DAC08 board with CV/Gate for my first DIY synth (Dave Rossum himself sent me some SSM chips and schematics for free; maybe 1978, I was just a kid). First attempt at writing a sequencer.

6502 on the Apple II. Designed my first EPROM programmer and many boards. The four-track sequencer finally worked well. Added a Korg MS-20 (I always hated the Hz/Volt protocol, so I later bought the MS-50).

6800 and 6809 with Exorset development system, my first serious work. Last use of assembler.

68000 on the Mac 512. Developed, by removing the bottom of the Mac, a VME bus interface for experiments at CERN. My first work in a team. This time I only contributed to the hardware design.

680X0 and the big 68332 for various industrial work. First use of C.

6805 and 68HC11 for small industrial and consumer devices.

More modern microcontrollers: Atmel ATMega, Microchip PIC, ARM. First use of C++.

I must have forgotten something. Anyway (sorry Brad and Neil), no Z80s were used. :wink:

4 Likes

Ok, this is my story (trying to keep it short).
My father started his career taking care of the first mainframe computer at Florence University. He then went on to become a professor of applied mathematics. I remember playing with the confetti produced by the punched paper tape machine when I was a kid, in the basement of the Math Department (and the paper, like everything else in the department, had a strong smell of cigarette smoke, since everyone smoked indoors in those days... I always liked that smell, and I still like it when I find it in old books coming from university libraries).

I learned Fortran 77 from books I had at home and wrote some code just for fun while at high school, on punched cards (which had replaced the punched tape), executing it on the university mainframe. Basically it produced the calendar for the Subbuteo championship I played with my friends... I never apologized to the mainframe for that, though it was used for much "higher" tasks than mine! :innocent:

After that, I did some coding on the Commodore 64 (some animated graphics with "sprites", as they were called, and some music stuff using the on-board SID).

Fast forward to 1985 at university. I remember writing Fortran code to fit the energy spectra produced by radiation detectors during a lab course in third year. The code was written on an IBM personal computer which my father had bought for work (and stored on a 5 1/4 inch floppy disk).

My first encounter with acquisition systems happened during experiments for my master thesis (at CEA Saclay) and for my PhD (at GSI, Darmstadt). It was a VME based acquisition system using a Mac as the controlling PC, maybe the same @cpaolo mentioned (I know for sure it had been developed at CERN).

The "physicists' approach" to coding in that era was very pragmatic: you learn things when you need them, by doing them and by looking at what others are doing. No systematic learning, I am afraid.

In the meantime I started programming on the Atari, both for MIDI librarians in their own languages (GenEdit, X-OR) and in C for a side project which brought me some money: the recording and analysis of lung sounds as a diagnostic tool. The latter was based on a Falcon030, and the on-board DSP, programmed in assembler, was used for the most intensive calculations (with the Atari CPU used just for the GUI).

At the end of the '90s, after getting a permanent position at the university, I was a bit fed up with nuclear physics and I started to work more on the electronics and acquisition systems. The system we had at the time was based on three VME crates filled with ADCs and TDCs (time-to-digital converters), each of them hosting a 68030 CPU and connected to a central PC via Ethernet. There was no OS on the CPUs, just a program that we loaded at bootstrap via tftp. The code included drivers for the Ethernet interfaces, the acquisition of the converters, etc., all handled in interrupts. All the interrupt routines were written in assembler for speed. At the same time we were developing on Linux, and at some point we installed Linux on the VME CPUs as well (in the meantime we had switched to the RISC PowerPC).

About 20 years ago I got involved in designing digitizers and DSP algorithms for detector signals. There is someone here on the list who did much more than me for that project, at least while he was with us (eh, @luigi? :wink:). At first, processing happened on Analog Devices processors (the ADSP-218x series) programmed in assembler. Then we moved most of the processing to FPGAs (Altera and Xilinx), using VHDL as the main language. In the last few years, I did much more teaching than coding, I am afraid.

That's (almost) all. It was not as short as I hoped. :smile:

Gabriel

P.S. I have omitted, for instance, some music-related stuff, like reprogramming the EPROM of a Cheetah MS6 (a 6-voice expander similar to the Matrix-1000) to add new features, in collaboration with a smart guy called Christofer Maad (if I remember his name correctly), who had modified the code for the on-board CPU (a 6502, if I remember correctly).

5 Likes

Hey Gabriele, yes, it was fun while it lasted...!
You forgot to mention some C++ (my fault :grinning: ) and Java.

2 Likes

I might as well share my 6502 story. I worked in the manufacturing engineering group of Moog Music back in the late 70s. I wanted to improve our automatic testing through microprocessors, but had a limited budget to attempt it. I ended up using Rockwell AIM-65 computers: full keyboard, 20-character 16-segment LED display, and a 20-character thermal printer. Storage was external audio cassette. One of these went in each test fixture, and it was also my development environment. All done in 6502 assembler. In order to add comments to the code I had to tape the 20-character printer output to paper and add the comments by hand.

The design department used the Z80 in the first (and only) microprocessor-based products - the Memorymoog and The Source.

In the early 80s I was working on a DCO synth to compete with the Polysix and Juno-60. Sadly it never saw the light of day, as Moog folded until revived much later by Bob. A prototype made it to the NAMM show - the same show that introduced the DX7.

Neither the 6502 nor the Z80 was up to doing the DCO thing. I ended up using a rather unique 16-bit processor which also didn't catch on long-term. I wish I could remember the part number. It was not Intel, Moto or Rockwell.

Edit: it was a TMS99000. An incredibly fast microprocessor for the day. But with my limited knowledge of compilers for Fortran, C, whatever, its architecture may not have been a good fit. In assembler it was crazy good for the early 80s.

6 Likes

Interesting story. Thanks for sharing

1 Like