Romancing the Stone Age Tech

@brad

Someone once told me something I’ve found to be a good guide: you should always have a thorough understanding of one level deeper than where you’re mostly working.

That is a good rule of thumb, and certainly where I was coming from. Whilst I program in HLLs (Java these days), I have a very good understanding of machine architecture from coming up the other way, and it has helped me tackle really obscure problems by dropping down and seeing what the machine is doing.

It’s a bit like me thinking I am (hopefully) a better System Engineer by first being an apprentice, electronics craftsman, research engineer, project support, project manager and then (after reverse op :wink: ) back to system engineer. I know system engineers who have come straight in at that level after doing a qualification in it, and some of them are OK, but some are hopeless. All of that background and those cross-domain skills give me a lot of insight.

Whilst nobody asked :wink: I will quickly reminisce on my C application problem, which I only solved because I could “go one level deeper”. It was fun to think about again, and it’s the sort of challenge you will never forget!

It was a program that received data on a serial port from an external sensor every 50ms, decoded it, processed it and displayed it as part of an operator-in-the-loop control system. The serial port handler was interrupt driven, which introduced an asynchronous nature to the code.

This was all set up and programmed in Microsoft Programmer’s Workbench (remember that?), with CodeView as the debugger.

It was all working nicely, other than a few times a day a random crash would happen, reported as a stack overflow error. That “bugged” me, as I thought the stack size was plenty: there were not that many nested function calls (and no recursion to blow the stack). So I increased the stack size and the random crash was still there. Over a week I kept looking at it, increasing the stack size to the maximum (a 64K segment), and the crash was still there. If I changed to polling the serial port (not ideal) the crash did not occur.

Of course we know that software has no random failure mechanism like hardware does. It is in fact 100% deterministic once the input conditions for a bug are satisfied. This crash obviously had something to do with the interrupt-driven nature leading to some condition that caused a stack overflow.

But why? Well, I dropped down to debugging in assembler view to try and catch it, and one day when I was stepping through the interrupt code at the assembler level I noticed that the segmented stack frame address had changed completely from what I had seen in the previous run. I continued stepping and the program crashed.

I could then of course instrument the code and dump some diagnostics, and every time the program crashed, the stack pointer was not within the 640K of user memory we had at the time, where the C compiler normally set up the EXE’s stack. It was in memory normally reserved for MS-DOS!

So, what was happening? To cut a long story short, as this took a while to discover(!), the crash never happened when the program was in my own C code. It happened when my code called an MS-DOS I/O function, such as updating the display, which was achieved via INT functions. The cause of the crash was that MS-DOS, while inside an INT function, caches the program’s stack pointer and sets up its own, restoring the program’s stack pointer when it completes. The MS-DOS stack area was something ridiculously small, like 5 bytes!! So if data arrived on the serial port and triggered my interrupt handler whilst the code was in an MS-DOS INT call, such as updating the user display, then my interrupt function’s local variables (about half a dozen, from memory) created on the stack were enough to overflow the MS-DOS stack!

Man, that was an “I do not believe it” moment. Why would MS-DOS do that? I never found the answer to that, of course, but the solution was, again at low level and with some inline assembler in the C code, to set up my interrupt handler to mimic what MS-DOS was doing: allocate its own stack on entry and restore whatever stack was in use (application or MS-DOS) on exit. I was then back in control of the stack size, could ensure it was large enough, and the failure mode was removed! :slight_smile:
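
For anyone curious what that stack-switching trick looks like, here is a minimal sketch of the idea, assuming a 16-bit real-mode DOS compiler with an interrupt keyword and _asm blocks (e.g. Microsoft C 6.x or Borland C; the exact keyword spellings vary by compiler). This is not my original code, and the names are purely illustrative:

```c
#include <dos.h>

static unsigned char isr_stack[2048];      /* our own, generously sized stack */
static unsigned short new_ss, new_sp;      /* top of isr_stack, computed once at init */
static unsigned short saved_ss, saved_sp;  /* whichever stack we interrupted (app or DOS) */

void init_isr_stack(void)
{
    /* FP_SEG/FP_OFF (dos.h) split a far pointer into segment and offset */
    void far *top = (void far *)&isr_stack[sizeof(isr_stack)];
    new_ss = FP_SEG(top);
    new_sp = FP_OFF(top) - 2;
}

static void handle_serial_byte(void)
{
    /* ...read the UART, buffer the byte, send EOI to the PIC...
       Local variables declared here live on isr_stack, not on the
       tiny internal MS-DOS stack we may have interrupted. */
}

/* Deliberately no locals in the ISR itself: the compiler-generated prologue
   still runs on the interrupted stack, so switch stacks before any real work. */
void interrupt far serial_isr(void)
{
    _asm {
        mov  saved_ss, ss        /* remember the interrupted stack */
        mov  saved_sp, sp
        mov  ss, new_ss          /* switch to our private stack */
        mov  sp, new_sp
    }

    handle_serial_byte();

    _asm {
        mov  ss, saved_ss        /* restore the original stack before the */
        mov  sp, saved_sp        /* epilogue pops the registers it saved  */
    }
}
```

(A sketch only: it is not re-entrant, since there is a single save area, which is fine for one serial interrupt that cannot interrupt itself.)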

That is one I do not think I ever would have caught without a knowledge of processor architecture, assembler, how a stack works, etc.

All good fun!

@Neil_Durant

I haven’t looked at C++ for 20-odd years, so I do not know how it has evolved, but I certainly liked Java’s approach of taking C++ as the starting point and then simplifying the language, throwing out things like multiple inheritance, operator overloading, etc., as “clever features” that could cause a lot of confusion once you have been away from the code for a while, or are new to it with poor documentation. And the Java fathers argued that you could write perfectly good and more maintainable code without them. I certainly have not missed them! :slight_smile:

Evolving languages is a double-edged sword. A decade ago I was programming in LabVIEW - a really nice paradigm for an ex-hardware engineer - and I was programming FPGAs for time-critical systems in LabVIEW FPGA (this was for a contract that brought me down under for a few years @brad ). It was really nice to think purely in data flows, via virtual wires into functions (like wiring up components that did things). Very neat. As I understand it, people are now turning away from LabVIEW as National Instruments have over-complicated it and turned it into a framework architecture that is too complex, and they have (I am told) lost the original paradigm that made LabVIEW popular in the first place.

PS @Brad good call to give this its own thread! :slight_smile:


@Derek I’ve had my share of bugs like that too. They can be so frustrating in the moment but once solved, it’s pretty satisfying, especially when you can look back in hindsight and it all adds up.


My processor history in short.

6502 on a SYM-1 board and its KTM-80 video terminal. Designed for this system: an 8KB expansion with battery-backed RAM, and a board with three DAC08s providing CV/Gate for my first DIY synth (Dave Rossum himself sent me some SSM chips and schematics for free). Maybe 1978; I was just a kid. First attempt at writing a sequencer.

6502 on the Apple II. Designed my first EPROM programmer and many boards. The four-track sequencer finally worked well. Added a Korg MS-20 (I always hated the Hz/Volt protocol, so I later bought the MS-50).

6800 and 6809 with an Exorset development system, my first serious work. Last use of assembler.

68000 on the Mac 512. Developed, by removing the bottom of the Mac, a VME bus interface for experiments at CERN. My first work in a team; this time I only contributed to the hardware design.

680X0 and the big 68332 for various industrial work. First use of C.

6805 and 68HC11 for small industrial and consumer devices.

More modern microcontrollers: Atmel ATMega, Microchip PIC, ARM. First use of C++.

I must have forgotten something. Anyway (sorry Brad and Neil), no Z80s were used. :wink:


Ok, this is my story (trying to keep it short).
My father started his career taking care of the first mainframe computer at Florence University. He then went on to become a professor of applied mathematics. I remember playing with the confetti produced by the punched paper tape machine when I was a kid, in the basement of the Math Department (and the paper, like everything else in the department, had a strong smell of cigarettes, since everyone was smoking indoors in those days… I always liked that smell; I still like it when I find it in old books coming from university libraries).

I learned Fortran 77 from books I had at home and wrote some code just for fun while in high school, on punched cards (which had replaced the punched tape), executing it on the university mainframe. Basically it produced the calendar for the Subbuteo championship I played with my friends… I never apologized to the mainframe for that, though it was used for much “higher” tasks than mine! :innocent:

After that, I did some coding on the Commodore 64 (some animated graphics with “sprites”, as they were called, and some music stuff using the on-board SID).

Fast forward to 1985 at university. I remember writing Fortran code to fit the energy spectra produced by radiation detectors during a third-year lab course. The code was written in Fortran on an IBM personal computer which my father had bought for work (and stored on a 5 1/4-inch floppy disk).

My first encounter with acquisition systems happened during the experiments for my master’s thesis (at CEA Saclay) and for my PhD (at GSI, Darmstadt). It was a VME-based acquisition system using a Mac as the controlling PC, maybe the same one @cpaolo mentioned (I know for sure it had been developed at CERN).

The “physicists’ approach” to coding in that era was very pragmatic: you learn things when you need them, by doing them and by looking at what others are doing. No systematic learning, I am afraid.

In the meantime I started programming on the Atari, both for MIDI librarians in their own languages (GenEdit, X-OR) and in C for a side project which brought me some money: the recording and analysis of lung sounds as a diagnostic tool. The latter was based on a Falcon030, and the on-board DSP, programmed in assembler, was used for the most intensive calculations (the Atari CPU handled just the GUI).

At the end of the 90s, after getting a permanent position at the university, I was a bit fed up with nuclear physics and I started to work more on the electronics and acquisition systems. The system we had at the time was based on three VME crates filled with ADCs and TDCs (time-to-digital converters), each of them hosting a 68030 CPU and connected to a central PC via Ethernet. There was no OS on the CPUs, just a program that we loaded at bootstrap via tftp. The code included drivers for the Ethernet interfaces, the acquisition of the converters etc., all handled via interrupts. All the interrupt routines were written in assembler for speed. At the same time we were developing on Linux, and at some point we installed Linux also on the VME CPUs (in the meantime we had switched to the RISC PowerPC).

About 20 years ago I got involved in designing digitizers and DSP algorithms for detector signals. There is someone here on the list who did much more than me for that project, at least while he was with us (eh, @luigi? :wink:). At first, processing happened on Analog Devices processors (the ADSP-218x series) programmed in assembler. Then we moved most of the processing to FPGAs (Altera and Xilinx), using VHDL as the main language. In the last few years I have done much more teaching than coding, I am afraid.

That’s (almost) all. It was not as short as I hoped. :smile:

Gabriel

P.S. I have omitted, for instance, some music-related stuff, like reprogramming the EPROM of a Cheetah MS6 (a 6-voice expander similar to the Matrix-1000) to add new features, in collaboration with a smart guy called Christofer Maad (if I remember his name correctly) who had modified the code for the on-board CPU (a 6502, if I remember correctly).


Hey Gabriele, yes it was fun while it lasted…!
You forgot to mention some C++ (my fault :grinning: ) and Java.


I might as well share my 6502 story. I worked in the manufacturing engineering group of Moog Music back in the late 70s. I wanted to improve our automatic testing through microprocessors, but had a limited budget to attempt it. I ended up using Rockwell AIM-65 computers. Full keyboard, a 20-character 16-segment LED display, and a 20-character thermal printer. Storage was external audio cassette. One of these went into each test fixture, and it was also my development environment. All done in 6502 assembler. In order to add comments to the code I had to tape the 20-character printer output to paper and add the comments by hand.

The design department used the Z80 in the first (and only) microprocessor-based products - the Memorymoog and The Source.

In the early 80s I was working on a DCO synth to compete with the Polysix and Juno-60. Sadly it never saw the light of day, as Moog folded until it was revived much later by Bob. A prototype made it to the NAMM show - the same show that introduced the DX7.

The 6502 or Z80 was not up to doing the DCO thing. I ended up using a rather unique 16-bit processor which also didn’t catch on long-term. I wish I could remember the part number. It was not Intel, Motorola or Rockwell.

Edit: it was a TMS99000. An incredibly fast microprocessor for the day. But with my limited knowledge of compilers for Fortran, C, whatever, its architecture may not have been a good fit. In assembler it was crazy good for the early 80s.


Interesting story. Thanks for sharing.
