Canoe Slalom scoring - History

In about 1982 I started developing a system to record scores in canoe slaloms using a BBC Micro with a single floppy drive. Naturally it was written in BBC BASIC.

Once this had been used for a couple of events I wanted a way to show the competitors how they were doing. I used a second BBC micro and a serial link. The second computer showed a rolling scoreboard for one of the classes. As this computer didn't have a floppy drive, its software was fed to it using a combination of *SPOOL and *FX commands at the two ends to transfer the program down the serial link. The master computer then fed updates to it in the form of messages on the serial link.

The code to receive the serial messages was written in 6502 assembler for speed.

The display part of the system hasn't changed much since. Everything else has.

The next development came when the A3000 was released. The code transferred without any change, and the display processor remained on a BBC.

A few years later I bought an A5000 with a hard disc. This became the master machine, and the A3000 the display machine. Initially the display software ran under 6502EM, but I soon made it a lot faster by removing the 6502 assembler so that it could run natively: BASIC on the A3000 was faster than assembler on the BBC.

At this stage the master was still using a Mode 7 single-tasking display, with all the functions, including those for the display processor, controlled from one screen. The software ran by CHAINing programs in the limited amount of memory available. We used the Watford DFS, so it was possible, by limiting the number of open files to one, to reduce PAGE and reclaim the buffers which would otherwise have been used for the additional four open files Watford DFS allowed.

Time to convert to WIMP programs. An early stage of this was going through all the code and removing line number references. This made editing much easier as I could now cut and paste sections of code around.

There was still a potential bottleneck, however, because there was only one station to do all the data input and control. Changing this was a significant job, and the first approach led to a dead end. I used ShareFS to access the data files from more than one computer. Files were always opened for writing, which locked them; another computer trying to access an open file would wait, retrying at intervals. Each operation did open, read/modify, close, so the wait wasn't usually very long. It worked, just, but it was clunky.

The last major development was to create a database server process. All the other programs sent it messages, and only the database computer touched the files. This required proper networking, using TCP/IP. I started with a library Druck kindly supplied to me, along with invaluable early advice. One two-week school holiday was spent getting the basic client-server setup working. The trickiest part was getting the connections established between clients and the server. The most error-prone bit was dealing with ring buffers: it was apparent from the start that messages would have to be longer than 256 bytes, so because of the 256-character limit on BASIC strings I had to use memory buffers to collect messages and process them a bit at a time.

Each input and output channel is allocated a buffer; these are currently 4K bytes long. The network code is passed the address at which to start storing data and the amount of space available. As the end of the buffer is approached the space gets small, so only part of a message may be received. If a message is incomplete it is left in the buffer, and a new read is requested starting from the start of the buffer. Processing then wraps from the end of the buffer to the start, and there are several edge cases which need to be handled correctly - e.g. the terminator being the last character at the buffer end, or the first character at the buffer start. Recently I uncovered another. Messages are made up of "phrases" separated by a special character, and there was a problem in cases where the first character after a message terminator was one of these separators, left over from much earlier messages. These cases only occurred after considerable traffic had been exchanged, so the bug had been latent for many years.
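As an illustration, here is a minimal BBC BASIC sketch of the extraction step. Everything in it is invented for the example: CHR$(0) as the message terminator, "|" as the phrase separator, the message text, and the class codes. It also simplifies things by copying any unfinished tail back to the start of the buffer, whereas the real code wraps round the buffer, which is where the edge cases above come from.

bufsize% = 4096
DIM buf% bufsize%
used% = 0 : REM number of valid bytes currently in the buffer

REM Simulate two network reads, the second of which completes a
REM message that was split across the two reads.
PROCreceive("score|K1M|23|104.37" + CHR$(0) + "score|C1W|7|")
PROCextract
PROCreceive("112.90" + CHR$(0))
PROCextract
END

DEF PROCreceive(data$)
REM Append newly arrived bytes after any incomplete message left
REM over from the previous read.
LOCAL i%
FOR i% = 1 TO LEN(data$)
  buf%?used% = ASC(MID$(data$, i%, 1))
  used% += 1
NEXT
ENDPROC

DEF PROCextract
REM Scan for terminators, hand on each complete message, then move
REM any unfinished tail back to the start of the buffer.
LOCAL start%, i%, msg$
IF used% = 0 THEN ENDPROC
start% = 0
FOR i% = 0 TO used% - 1
  IF buf%?i% = 0 THEN
    msg$ = ""
    WHILE start% < i%
      msg$ += CHR$(buf%?start%)
      start% += 1
    ENDWHILE
    start% = i% + 1 : REM step past the terminator
    PRINT "Complete message: "; msg$
  ENDIF
NEXT
IF start% > 0 AND start% < used% THEN
  FOR i% = start% TO used% - 1
    buf%?(i% - start%) = buf%?i%
  NEXT
ENDIF
IF start% > 0 THEN used% -= start%
ENDPROC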

The system now consists of a database server program and various client programs. One of these is used for entering scores, and there can be multiple copies of it running at once, usually working on different classes. Another produces printed results sheets. One special one controls the rolling scoreboards; these still use most of the code developed on the BBC, but with messages now received over TCP/IP. This display manager process connects to the database server, and multiple scoreboard programs connect to it. The manager decides what is to be displayed on each scoreboard, requests the data from the database server, reformats it and passes it on. It then monitors results changes and sends updates: the server sends out a "scorechanged" message every time a score is updated, and the manager decides which, if any, of the scoreboards is affected and updates them.
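To make that flow concrete, here is a small hedged sketch of the manager's dispatch step in BBC BASIC. The message layout ("scorechanged|class|bib"), the "|" separator and the class codes are all invented for illustration; the real manager also re-requests and reformats the data before passing it on.

REM scoreboard$() records which class each connected scoreboard shows.
boards% = 2
DIM scoreboard$(boards%)
scoreboard$(1) = "K1M"
scoreboard$(2) = "C1W"

PROCscorechanged("scorechanged|K1M|23")
END

DEF PROCscorechanged(msg$)
REM Refresh only the scoreboards showing the affected class.
LOCAL class$, n%
class$ = FNphrase(msg$, 2)
FOR n% = 1 TO boards%
  IF scoreboard$(n%) = class$ THEN PROCupdateboard(n%, class$)
NEXT
ENDPROC

DEF PROCupdateboard(n%, class$)
REM Stub: the real code fetches the class results from the database
REM server, reformats them and sends them to scoreboard n%.
PRINT "Refresh scoreboard "; n%; " ("; class$; ")"
ENDPROC

DEF FNphrase(m$, n%)
REM Return the n-th "|"-separated phrase of a message.
LOCAL p%
WHILE n% > 1
  p% = INSTR(m$, "|")
  IF p% = 0 THEN = ""
  m$ = MID$(m$, p% + 1)
  n% -= 1
ENDWHILE
p% = INSTR(m$, "|")
IF p% > 0 THEN m$ = LEFT$(m$, p% - 1)
= m$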

Because all communication is over TCP/IP, there is no distinction between a local and a remote program. This means that it is possible to run all the programs at once on a single computer, and this is how development is usually done. All the programs have code to check whether there is already a copy running locally, and if so they pop up that copy instead of starting a second one. Some programs also check across the network for duplicates. There can only be one display manager, and if one is started and finds another copy, it kills the other copy and takes over. This allows easy transfer of control from a busy station to a quiet one. The scoreboard clients reconnect and announce what they are displaying, so the manager doesn't need to re-send all the data, just future updates.

I had problems at a competition two years ago. I'd left a bit of debugging code active, and it hit performance so badly that various programs had to be restarted regularly. I therefore developed a system of "ping" timers to detect when a communications link was down and to restart it automatically. This now works, but it definitely wasn't trivial.
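The watchdog logic itself is fairly simple; the hard part was wiring it into the real connection handling. A minimal sketch, with invented intervals and stubs standing in for the real network code, might look like this:

pinginterval% = 200 : REM send a ping after 2s of silence (centiseconds)
pingtimeout%  = 100 : REM allow 1s for the reply before restarting

lastrx%   = TIME    : REM when anything was last received
pingsent% = FALSE
pingtime% = 0

REM Demonstration loop: nothing ever "arrives" here, so after 2s a
REM ping goes out and 1s later the link gets restarted.
start% = TIME
REPEAT
  PROCcheckping
UNTIL TIME - start% > 400
END

DEF PROCcheckping
REM Called on every pass round the poll loop.
IF pingsent% THEN
  IF TIME - pingtime% > pingtimeout% THEN
    PROCrestartlink : REM no reply in time, assume the link is dead
    pingsent% = FALSE
    lastrx% = TIME
  ENDIF
ELSE
  IF TIME - lastrx% > pinginterval% THEN
    PROCsend("ping")
    pingsent% = TRUE
    pingtime% = TIME
  ENDIF
ENDIF
ENDPROC

DEF PROCgotdata
REM Called whenever any message (including a ping reply) arrives.
lastrx% = TIME
pingsent% = FALSE
ENDPROC

DEF PROCsend(m$)
PRINT "TX: "; m$ : REM stub for the real TCP/IP send
ENDPROC

DEF PROCrestartlink
PRINT "Link restarted" : REM stub: close the socket and reconnect
ENDPROC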

The performance limitation now was the turnaround of network messages. Each process was using Wimp_PollIdle with a 1 centisecond idle time to check for incoming network messages. This imposed a minimum time on each message exchange, and when, for example, results were being prepared there could be several thousand messages. Following a discussion on the ROOL stand at a show I picked up on SocketWatch, which can set a pollword when a socket receives data. Using this (the Wimp returns poll reason 13 when the pollword becomes non-zero) speeded things up a lot, and allowed the Wimp polling to be less aggressive.
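The resulting poll loop looks roughly like the sketch below. It assumes SocketWatch has already been asked elsewhere (registration not shown) to set the pollword when the socket has data waiting; the task name, the one-second idle period and the stub PROCs are invented for the example.

DIM block% 256, pollword% 4
!pollword% = 0 : REM SocketWatch sets this non-zero when data arrives

SYS "Wimp_Initialise", 310, &4B534154, "Slalom sketch", 0 TO , task%

mask% = 1 << 22 : REM bit 22: return reason 13 when !pollword% <> 0

quit% = FALSE
REPEAT
  REM Null events now come back at most once a second rather than
  REM every centisecond; socket data wakes the task immediately.
  SYS "OS_ReadMonotonicTime" TO now%
  SYS "Wimp_PollIdle", mask%, block%, now% + 100, pollword% TO reason%
  CASE reason% OF
    WHEN 0  : PROCtimers : REM idle - run the ping/timeout checks etc.
    WHEN 13 : !pollword% = 0 : PROCreadsocket : REM data waiting
    WHEN 17, 18 : IF block%!16 = 0 THEN quit% = TRUE : REM Message_Quit
  ENDCASE
UNTIL quit%
SYS "Wimp_CloseDown", task%, &4B534154
END

DEF PROCtimers
ENDPROC

DEF PROCreadsocket
REM Stub: pull everything waiting on the socket into the receive buffer.
ENDPROC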

That is the story up to last summer.