Instem Computer Systems, Walton Industrial Park, Stone, Staffordshire, United Kingdom

The first project of my career was a replacement for an old boiler and turbine monitoring system at the Littlebrook D power station in Dartford, Kent, on the south bank of the River Thames just west of the Dartford tunnel in the South East of the United Kingdom.

By coincidence, Dartford, Kent, is where my parents used to live before moving to Devon. During my childhood we often visited and stayed at my paternal grandparents' house in Dartford. Unfortunately both had died by the time of the Littlebrook project. I did a few stints of site work while we were installing and commissioning the system, and driving through the edge of Dartford each day from our motel to the power station felt a little strange. My first job had brought me back to where my parents had lived when they had their first jobs.

My first visit to Littlebrook was a day trip. At the start of the project, the project manager took two of the team down for each monthly project status meeting. This way the whole team got to visit the site at least once, met the customer's engineers, and got a tour of the power station. Not only did this practice increase each member's understanding of the project and have a positive effect on morale; for me at least, it also meant I checked my work more thoroughly after seeing the massive pieces of machinery it was connected to.

Littlebrook D is an oil-fired power station with three 660 MW (megawatt) generator sets. The original computer system for the site consisted of a Honeywell 716 with 48K RAM for each of the main turbine sets; the same amount of RAM as my 1982 ZX Spectrum. There was also a standby computer that could take over if one of the main units failed, and a station services machine used to test spare or upgraded equipment and software. The control centre had a number of high-resolution screens, but they were only monochrome.

Littlebrook D Power Station

The replacement system was based on Digital (previously DEC) MicroVAX machines running the VMS operating system. An Instem I-Range Host Interface Module (HIM) plugged into the Q-Bus backplane of the MicroVAX, connecting it to a number of Instem's I-Range real-time bulk i/o data acquisition systems over an HDLC link unofficially known as the I-Way. The I-Range systems scanned analogue inputs measuring things like boiler pressure, turbine speed, and temperatures, performing various linear and non-linear transformations on the raw data to convert it to values in SI units. They also scanned various switch positions, sending back notifications of changes.
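
As a rough illustration of the kind of linear transformation involved (the function, the 12-bit range and the example figures here are my own, not taken from the actual I-Range configuration), converting a raw count into an engineering value looks something like this in C:

/* Illustrative only: scale a raw 12-bit reading (0..4095) into an
   engineering value between lo and hi, e.g. a pressure in bar. */
double scale_linear(unsigned raw, double lo, double hi)
{
    return lo + ((double)raw / 4095.0) * (hi - lo);
}

/* A reading of 2048 over a 0..160 bar range comes out at roughly 80 bar. */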

Each of the three turbine sets had approximately 1800 analogue inputs, 2000 digital (2-state) inputs and 150 digital (2-state) outputs connected via the I-Range subsystems to its MicroVAX 3400. Each subsystem that contained digital outputs also held an extra module called a watchdog card that had to be triggered at a regular rate by the Vax. If the watchdog was not triggered, it shut off power to the digital outputs, causing the connected equipment to 'fail safe'.
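
The watchdog idea is simple enough to sketch. This is not the project's code; the register address and the timing below are invented for illustration:

#include <unistd.h>

/* Invented address: on the real system the watchdog card exposed a
   register in the i/o subsystem's address space. */
#define WATCHDOG_REG ((volatile unsigned short *)0x3f000000)

/* Trigger the watchdog at a regular rate.  If this loop ever stalls
   (crash, hang, scheduler starvation) the card stops being triggered
   and cuts power to the digital outputs, so the connected plant
   equipment fails safe. */
void watchdog_task(void)
{
    for (;;) {
        *WATCHDOG_REG = 1;   /* trigger the watchdog card */
        sleep(1);            /* well inside the card's timeout */
    }
}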

The Vax also drove several of what were, at the time, state-of-the-art high-resolution screens that presented data to operators in the power station's control room.

Littlebrook D Control Room before the Project

My main task on the project was the low-level design and development of the code that interacted with the Host Interface Module and transferred data from it into the memory-based 'real-time' database that provided the data storage for the control room screens, data-warehousing, and archiving functions. My work included scanning memory-mapped i/o at different rates to pick up analogue values and check them against alert conditions and limits. It also involved listening for interrupts and picking up state-change event notifications when they occurred.
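
I no longer have any of that code, but the general shape of a memory-mapped scan is easy to sketch. In this illustration the base address, the structure and the function name are all invented:

#include <stddef.h>

/* Invented layout: a bank of analogue input registers exposed through
   the Host Interface Module.  'volatile' stops the compiler caching
   reads of what is really hardware, not memory. */
#define HIM_ANALOGUES ((volatile unsigned short *)0x20000000)
#define NUM_ANALOGUES 1800

/* Copy the current raw readings into a snapshot table that the
   real-time database processes then work from. */
void scan_analogues(unsigned short *snapshot)
{
    for (size_t i = 0; i < NUM_ANALOGUES; i++) {
        snapshot[i] = HIM_ANALOGUES[i];
    }
}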

Event-Driven versus Polling

Littlebrook was my first introduction to the differences, trade-offs, and interactions between software that polled for values and software that was event-driven. Processors in the i/o subsystems scanned their inputs and created event notifications if values had changed. For digital inputs monitoring the positions of switches this was straightforward. For analogue inputs such as temperature or pressure, change events were triggered only when the value changed by a significant amount. Other processes polled devices for these change events at specific intervals, adding any new ones to a message queue on the Vax for subsequent processing. Make the amount of change required to trigger an event for an analogue input too small and the Vax would be flooded with change events. Make it too large and the values displayed in the control room and archived for analysis would be imprecise. To avoid this tuning problem, at Littlebrook processes on the Vax polled subsets of analogue inputs for their values at different rates to retrieve precise values, and events were used only to communicate changes in digital inputs.
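
The 'significant amount' is just a deadband. A minimal sketch of the idea, with my own names and no claim that this is how the i/o subsystem actually implemented it:

/* Report a change event only when the value has moved by more than
   'deadband' since the last value reported.  Too small a deadband
   floods the queue with events; too large and the reported value
   lags behind reality. */
int significant_change(double current, double *last_reported,
                       double deadband)
{
    double delta = current - *last_reported;
    if (delta < 0.0)
        delta = -delta;
    if (delta > deadband) {
        *last_reported = current;
        return 1;   /* raise a change event */
    }
    return 0;       /* suppress it */
}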

In addition to events coming from the i/o subsystem to the Vax, the scans of the analogue inputs running on the Vax checked the values against a number of limits, including high, very high, low and very low absolute values, and rate-of-change limits. When limits were exceeded, alarm events were created and added to a table in the real-time database. That table was polled at specific intervals by the processes responsible for updating the control room display screens.
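
A hedged sketch of that limit checking, with invented names and thresholds, gives a flavour of what the scan did with each value:

enum alarm_state {
    ALARM_NONE, ALARM_VERY_LOW, ALARM_LOW,
    ALARM_HIGH, ALARM_VERY_HIGH, ALARM_RATE
};

struct limits {
    double very_low, low, high, very_high;
    double max_change_per_scan;
};

/* Classify a scanned value against its absolute and rate-of-change
   limits.  On the real system an exceeded limit became an alarm event
   written into the table polled by the display processes. */
enum alarm_state check_limits(double value, double previous,
                              const struct limits *l)
{
    double change = value - previous;
    if (change < 0.0)
        change = -change;

    if (change > l->max_change_per_scan) return ALARM_RATE;
    if (value >= l->very_high)           return ALARM_VERY_HIGH;
    if (value >= l->high)                return ALARM_HIGH;
    if (value <= l->very_low)            return ALARM_VERY_LOW;
    if (value <= l->low)                 return ALARM_LOW;
    return ALARM_NONE;
}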

The trade-off between sample rates, amounts and rates of change, and the accuracy and timeliness of values determined most of the decisions about polling or being event-driven at different points in the system, but I developed a strong preference for the elegance of event-driven processing and pipeline-style architectures for processing events that I maintain to this day.

Even when modeling in colour, I deliberately look at whether propagating a change in a value in an mi-detail up into some sort of cached total in a moment-interval, or a related role or role-player, would be better than writing a method that iterates over the mi-details to calculate the same result. Similar trade-offs apply between the frequency of calling the calculate method and the rate of change in the mi-details that would trigger updates.
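
The same choice can be shown without any object model at all. The struct and names below are purely illustrative; the point is the two styles, propagate-on-change versus recalculate-on-demand:

/* A moment-interval with a cached total over its mi-details
   (names and sizes invented for illustration). */
struct order {
    double cached_total;        /* kept up to date as details change */
    double line_amount[100];    /* the mi-details */
    int    line_count;
};

/* Event-driven style: propagate each change up into the cached total. */
void detail_changed(struct order *o, int line, double new_amount)
{
    o->cached_total += new_amount - o->line_amount[line];
    o->line_amount[line] = new_amount;
}

/* Polling style: recompute on demand by iterating over the details. */
double calculate_total(const struct order *o)
{
    double total = 0.0;
    for (int i = 0; i < o->line_count; i++)
        total += o->line_amount[i];
    return total;
}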

This was also the first work I did in C. I had left university having been taught mostly in Pascal, with introductions to Smalltalk, Prolog, Cobol and Lisp; Java was still half a decade away. I rapidly grew to love C, especially for the low-level bit twiddling needed in those days to manipulate memory-mapped i/o and communication message packets. I also liked the way I could encapsulate data behind a set of functions (by declaring the data as static). In comparison, the other language used at Instem, Fortran, was frustrating even with the modern extensions that Digital had added to the language.
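
That static trick still reads well today. A minimal sketch of the pattern, not the project's code:

/* database.c -- illustrative module.  Because the table is declared
   static at file scope, it can only be reached through the functions
   below; other compilation units cannot touch it directly. */

#define TABLE_SIZE 1800

static double values[TABLE_SIZE];   /* hidden behind this file */

void db_store(int point, double value)
{
    if (point >= 0 && point < TABLE_SIZE)
        values[point] = value;
}

double db_fetch(int point)
{
    return (point >= 0 && point < TABLE_SIZE) ? values[point] : 0.0;
}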

Colour-coding with Eight Colours

The new high-resolution screens were capable of displaying each pixel in one of eight different colours. Compared with the millions of colours available in displays today, this was pathetic. Nevertheless, it was a big step forward from the monochrome displays of the previous system. Careful use of a few colours enormously enhances the amount of information a display can communicate easily. In addition, there was no temptation to over-complicate colour-coding schemes with too many colours. As in modeling in colour, too many colours distract from rather than emphasize the information being displayed.

For Littlebrook, the new displays used black backgrounds with static text written in yellow. Values and other dynamic text items were shown in white so that they stood out against the yellow labels. Red backgrounds were used for values in a very high alarm state, yellow for high alarm states, light blue (cyan) for low alarm states and dark blue for very low alarm states. A purple (magenta) background was used to indicate an input that was suspected of being faulty.
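
Expressed as code, the scheme boils down to a single mapping. The enums and function below are mine, written from memory of the scheme rather than from the original sources:

enum colour { BLACK, RED, YELLOW, WHITE, CYAN, DARK_BLUE, MAGENTA };
enum value_state { NORMAL, HIGH, VERY_HIGH, LOW, VERY_LOW, SUSPECT };

/* Background colour for a displayed value under the scheme above. */
enum colour background_for(enum value_state s)
{
    switch (s) {
    case VERY_HIGH: return RED;
    case HIGH:      return YELLOW;
    case LOW:       return CYAN;
    case VERY_LOW:  return DARK_BLUE;
    case SUSPECT:   return MAGENTA;   /* suspected faulty input */
    default:        return BLACK;     /* normal: plain background */
    }
}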

For real-time graphing (trending), different values could be assigned different colours. The major problem we could not overcome was the visibility of dark blue line graphs against the black background. This effectively reduced the number of values we could plot on a trending screen to six without resorting to different line styles such as dashed and dotted. Later projects that introduced displays with sixteen colours improved this situation enormously because the screens could use a grey background that provided much better contrast for all of the other colours. Eight colours ended up being three or four colours short of ideal for the displays at Littlebrook, but still a vast improvement over no colour.

Meaningful variable names

One of my last coding tasks on the project, and my least favourite, was translating the old control programs used by the power station from Coral 66 into VAX Fortran and again integrating them with the real-time database. Many of the old Coral programs had variables named after characters from Winnie the Pooh. From struggling badly to understand these old programs, I learnt the hard way the benefits of using meaningful names for variables and functions. It was hard to determine that a loop something like:

for (winnie = nohoney; winnie < honeypot; winnie += honey) { ... }
meant loop through the pressure inputs and ...
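
With meaningful names the intent would have been obvious at a glance; something along these lines (my names, guessed from the context):

for (input = first_pressure_input; input < last_pressure_input; input += pressure_input_step) { ... }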

ECS

The new system added a data-warehouse-style capability to the power station. The Engineering Computer System (ECS), as it was called, read operational data from each of the turbine Vax computers into a relational database. The power station engineers used this to perform various analysis functions. The ECS also stored the configuration details for all the i/o points monitored by the three turbine computers. The computer consisted of a MicroVAX 3400 with 800 Mb of disk and a 2.3 Gb tape drive used for archiving. Today my mobile phone has far more storage capacity, but for the time this was a significant amount of disk space.

Team, Process and Tooling

The Littlebrook project team was one of the best balanced teams I have ever worked in. The team was split into three sub-teams of two or three developers with each sub-team combining a fresh graduate with one or two experienced developers. The project manager was also an experienced software developer and that made a big difference too.

The process was a classic waterfall approach based around a large, monolithic project specification document that read like gobbledygook to me at the time. Thankfully, the more experienced members of the team were able to translate it into something that resembled comprehensible requirements. There were the usual project management headaches over schedule, budget and requirement changes, clarifications, defects, and misunderstandings, but, as a developer, I enjoyed the project enormously. It is a testament to the fact that process is always far less important than building a great team of good people.

The software development team comprised Chris B., Malcolm, Kevin, Rob, Andy W., Tony S., Bob, Chris and myself. The hardware development team was Chris P. and Darren. Simon was the project manager. I continue to be grateful for everything Rob, Chris B. and Simon taught me during the project.

Tooling was primitive compared to today's integrated development environments. Only towards the end of the project did we get any sort of syntax-aware editor. Unit testing was defined as checking your code worked properly before copying it into the production area of the machine. Formal system test and customer acceptance test happened at the end of the project. There was no centralized source-control system, but the VAX VMS operating system had built-in versioning of files that proved sufficient for the task, and indeed had some advantages over a centralized system. It is interesting to see Apple recently adding file versioning to OS X, and the rise of distributed source-control systems like Git and Mercurial that reproduce some of the flexibility the VAX VMS file versioning provided.

More about Littlebrook Power Station

According to an old pamphlet I picked up at the site, the Littlebrook power station cost £600 million to build. The last of the three generator sets was commissioned in 1983. The three main oil-fired boilers each consume 140 tonnes of oil per hour at full load. To handle this fuel consumption, Littlebrook has storage for 550,000 tonnes of oil. There were also three 35 MW gas turbines used to respond quickly to changes in demand thanks to their rapid startup time. This power station is the fourth to be built on the site since it was first used for electricity generation in 1935; the A, B and C stations have been closed but some of the old buildings still remain. Instem was awarded a contract to replace the obsolete monitoring and control computer system.
