The story from July 1988


It's well-known that most development projects succeed or fail because of decisions that are made at the start of the project. Our electron beam tester project was no exception. We opted for an ambitious design – whose development was going to be difficult and time-consuming – and planned for an unreasonably short development time. Either choice could have been fatal on its own, and in fact the most implausible aspect of the story is that we were able to keep the development running until we had a working machine.


The important part of the process of working out exactly what kind of machine we were going to build isn't documented in the weekly reports. On the 8th July 1988 I wrote that I'd put in roughly one day that week thinking about the Lintech machine and expected to do more in the following week.


The following week's report was also copied – for the first time – to Dave Hall, the electron beam tester's project manager, who had been hired from Lintech when they shut down.


In that week's report I talk about approaching Tektronix – who did, and still do, make very good sampling oscilloscopes – in the hope of buying in the hardware to tell our stroboscopic electron microscope when to turn on its electron beam.


By the following week I'm talking about writing detailed specifications for printed circuit boards, and generating estimates for the design time required for each board.


What I hadn't written down, but remember very clearly, was a long meeting with Dave Hall and Graham Plows when we discussed what the machine had to be able to do if Graham was going to have a good chance of persuading potential customers to buy it, and Graham asked me for my thoughts on what the currently available technology could do, which was flattering.


Since Graham would have been kept informed of what Ralph Knowles, Mark Saunders and Tim Frost had been thinking at Lintech when they worked out the system design of their Schlumberger-beater, it was also distinctly disingenuous.


Graham presented himself as someone who had just spent two years failing to sell a single one of his old machines, and who wanted to be able to offer a machine that was clearly better than Schlumberger's in as many areas as possible. The engineer's prejudice about salesmen is that they will only admit that it's possible to sell a machine if it is faster, more accurate, easier to use, cheaper, smaller and more reliable than any competitive device, and Graham was playing that kind of salesman to perfection.


He seemed to have been persuaded by Ralph Knowles (who had moved from Cambridge Instruments to Lintech in 1985 with Tim Frost and Dave Hills) that any new machine was going to have to distinguish itself from the Schlumberger electron beam tester by offering to sample the surface of integrated circuits much more frequently than once per trigger pulse, making it dramatically faster. But he was insistent that the necessarily digital scheme that controlled the timing of the sampling pulses had to offer very fine granularity – he thought that we had to have a scheme that would let us place each sampling instant after the trigger edge to a resolution of better than 10psec.


At the time I'd thought that the figure of 10psec had been plucked out of thin air, but I now suspect that Mark Saunders had told him that he couldn't sensibly ask for better than 20psec, and he was fishing for a second opinion.


Asking for 10psec was unreasonable. The narrowest sampling pulse that our hardware could generate was 500psec wide, and while the competition was claiming to have equipment under development that could offer slightly narrower pulses, 100psec was probably as good as it was going to get for some years to come.


While it was an impractical demand, it wasn't impracticable, and I'd gone back to my office and dug out my copy of the GigaBit Logic 1988 GaAs IC Data Book and sketched out a way of doing what Graham wanted. This was probably a mistake. If I'd lied by omission, and claimed that 10psec granularity was impracticable, it's unlikely that anybody would have disagreed – nobody else was much interested in very high speed logic. Of course, I'd have been hugely embarrassed if anybody else had pointed out that I'd missed the possibility.


The memo I wrote wouldn't have been particularly long – the rule of thumb was that management never read more than one A4 page of text with much attention. You could exploit this by adding a second page for the more unpalatable bits and have a good chance that they wouldn't be noticed, but this was frowned on as a failure to reduce the issue to its essentials.


The first page did mention that the GaAs logic was rather new (the company had only been founded in 1981) and single-sourced, which made it a risky choice. The argument that we should use emitter-coupled logic static random-access memory – the fastest form of memory commercially available at the time – to store the lists of sampling times and sampled data was persuasive enough for the first page. The proposition that we should build up a digital signal processing unit out of emitter-coupled logic integrated circuits to merge repeated observations at about 50MHz, which was quite fast (and faster than proved practical – we dropped back to 25MHz within a few months), was more problematic. I did include a warning that this was going to make the electronics complicated and expensive, and that it could take quite a while to get it all working, but any sensible engineer wrote that into any proposal that broke new ground, and made sure that it was on the second page.
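Merging repeated observations amounts to binning each raw sample by its programmed sampling instant and averaging everything that lands in the same bin. A minimal sketch of the idea in Python – the names (`merge_observations`, `bin_indices`) are mine, not from the original design, and of course the real hardware did this in dedicated ECL logic at tens of MHz rather than in software:

```python
def merge_observations(bin_indices, samples, n_bins):
    """Average repeated samples that share a sampling-time bin.

    bin_indices: which sampling instant each raw sample belongs to.
    samples: the raw digitised values, one per entry in bin_indices.
    Returns the averaged waveform and the per-bin sample counts.
    """
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for b, s in zip(bin_indices, samples):
        sums[b] += s
        counts[b] += 1
    waveform = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    return waveform, counts

# Three raw samples: two at sampling instant 0, one at instant 1.
waveform, counts = merge_observations([0, 0, 1], [1.0, 3.0, 5.0], 3)
# waveform → [2.0, 5.0, 0.0], counts → [2, 1, 0]
```

Averaging many noisy observations of the same instant is what bought the signal-to-noise ratio back after sampling so briefly, which is why the merging unit had to keep up with the sampling rate.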


I thought that I'd laid enough emphasis on the risks of using GaAs logic that Graham Plows wouldn't be able to persist with his demand for 10psec timing resolution when it became clear how difficult and risky it was going to be to give him what he wanted.


I was wrong, and I was distinctly surprised when the scheme was accepted, if fascinated by the prospect of actually getting it to work. What I had expected was an extended series of haggling sessions where we'd have traded off eventual performance against design and debugging time.


Since I – as an engineer – had no idea how much a better performance could be worth, in terms of extra sales or a higher selling price, I hadn't had much choice but to start haggling at the best performance that I imagined to be practicable, leaving marketing and production to push for a lower performance machine requiring less development effort, that they could start selling sooner.


Mike Penberth – then Cambridge Instruments' Chief Engineer, which was a non-executive position – had attended the meeting which considered my scheme and collectively gave Graham Plows permission to start spending money on it. Directly after the meeting, Mike told me that Graham had justified my over-kill approach on the basis that my scheme was going to be difficult for the competition to copy or duplicate. None of the people at the meeting with enough political clout to make Graham see sense seemed to have appreciated the other side of that argument – what was going to be time-consuming, expensive and difficult for our competitors to develop was also going to be just as hard for us. Mike did appreciate that it was going to be difficult, and was by no means sure that I hadn't bitten off more than I could chew, but clearly hadn't been able to make it clear to the rest of the committee quite how difficult it was likely to be.


From an incidental conversation with Ralph Knowles at Cambridge Instruments a year or so later – he was there representing his new employer, Link Systems – I eventually found out that Ralph had planned to limit the sampling rate of his hardware to 10MHz, much slower than the 25MHz I'd settled on by then. The excuse he gave was “after-glow in the scintillator in the Everhart-Thornley secondary electron detector”, and he suggested that I talk to Don Ranasinghe of the British Telecom Research Laboratories about it, which I did at the next convenient opportunity, as is recorded in my weekly report for the 2nd March 1990.


“Wednesday was the "Advances in Electron Beam Testing and Failure Analysis of VLSI Circuits" meeting in London; Steve Henning was the man with the typed badge, but Andrew Dean, Chris Warner and I were there too. I had a useful talk with Don Ranasinghe of BTRL about plastic scintillators - he has promised me a pre-publication copy of his paper on the subject. He claims that you can get much longer life out of a plastic scintillator if you aluminise it with a deposition unit that is only used to deposit high-purity aluminium, because if the metallisation is contaminated with higher atomic weight metals - zinc, copper and gold amongst others - they migrate into the scintillator material and poison it.”


The copy of Don Ranasinghe's paper proved decidedly useful and persuaded us to switch our scintillator from the conventional NE104 material (from Nuclear Enterprises in Edinburgh) to the new “Pilot” material from the American New England Nuclear Corporation.


None of this is going to make much sense if you don't know that the Everhart-Thornley secondary electron detection system is a convoluted but extremely practical way of detecting secondary electrons - generated when the primary, scanned, electron beam hits the specimen – and turning them into a signal that could be measured outside the vacuum chamber that surrounded the specimen and extended all the way up the electron microscope column through the scanning coils and the magnetic lenses to the electron source.


Thomas E Everhart completed his Ph.D at Cambridge University under Charles Oatley's supervision in 1958, and Richard F M Thornley had followed him. Between them they had perfected a scheme where the secondary electrons were accelerated into a scintillator inside the vacuum chamber, producing a brief flash of light which was detected by a photomultiplier on the other side of a transparent window in the wall of the vacuum chamber, and they published a joint paper describing the scheme in 1960. Both went on to rather more stellar careers than Graham managed, as did their successor, Alec Broers, who ended up as Vice-Chancellor of Cambridge University – he became a Professor of Electrical Engineering at Cambridge in 1984, and consulted at Cambridge Instruments from time to time when I was working there. His first task after he'd got his Ph.D and gone off to work for IBM had been to develop the lanthanum hexaboride electron source which was subsequently widely used in electron microscopy, and the one time I met him socially (as my wife's spouse at a Cambridge college dinner) I was reduced to telling him how much I appreciated that innovation, for want of something better to talk about.


Ralph Knowles' reservations about sampling the voltage on the surface of the specimen too frequently were based on the duration of the flash of light produced by the scintillator in the Everhart-Thornley detector – most of the light came out as prompt fluorescence within a nanosecond or two, but the incident secondary electrons could also occasionally excite a longer-lived phosphorescent state in the scintillator which could decay to produce an after-glow photon a few nanoseconds later. It didn't happen very often, and wouldn't have mattered much when it did.


It certainly didn't persuade us that 40nsec was an impractically short interval between samples.


What does strike me is that this essentially bogus argument about afterglow gave Ralph Knowles a convenient excuse for limiting the sampling rate to a speed which was compatible with cheap TTL-compatible static random access memory, and quite possibly with doing the signal processing in the Xilinx XC3000 programmable logic devices which had recently become available. Cambridge Instruments had then recently redesigned the signal processing electronics for the Quantimet Image Analysis system around earlier Xilinx parts, and while Ralph Knowles hadn't been at Cambridge Instruments while this was going on, he'd started off at Cambridge Instruments in 1975 in the Image Analysis group, and would have known enough people in that area to have been able to pick a few brains over a beer during the previous few years.


I wasn't that well-placed, and only knew enough to think that the Xilinx parts weren't significantly faster than regular TTL. This probably wasn't true – the Xilinx data-book for the period is reputed to have included an application note for a 100 MHz 8-digit frequency counter based on an XC3020, which beats anything that I could build with TTL at the time.


As it turned out, my ECL based signal processing was only sampling at 16MHz when we finally got it working – not due to any problem in the ECL-based signal processing but due to an easily (if not immediately) soluble problem elsewhere in the machine. Even at 16MHz the multi-sampling approach was still spectacularly faster than its predecessors.


Ralph's Xilinx-based processor – if that had been what he had in mind – would have been quite fast enough to give Graham Plows a machine he could have sold, and the re-programmable Xilinx logic would have been a lot easier – and quicker – to debug than the board-full of ECL logic that we spent more than a year getting to work.


To be fair to Graham Plows, his insistence on 10psec granularity didn't make the project much more complicated than it might otherwise have been. The Gigabit logic that it forced us to use was crankier than the 100k ECL which we would otherwise have used, and coping with its idiosyncrasies did introduce occasional problems that we had to solve during the debugging phase, but it wasn't anything like the main source of bugs to be fixed.


Relying on GigaBit Logic parts did make the project distinctly riskier.


My proposal involved setting up an 800MHz clock oscillator to give us clock edges 1.25nsec apart, which were our coarse time intervals. We were then to interpolate between occasional pairs of clock edges by setting up a voltage ramp, which would rise linearly by about a volt in 1.25nsec if you didn't stop it. If you stopped it after less than 1.25nsec and digitised the voltage at which it had stopped with an 8-bit analog-to-digital converter, you could sub-divide that 1.25nsec interval into 256 finer sub-divisions, each one only 5psec long, keeping Graham happy.


The downside of this approach was that keeping track of the 800MHz clock edges required the GigaBit Logic 10G061 4-stage synchronous counter. Nothing else that we could actually buy would go much faster than 200MHz when put together into a counter that was long enough to be useful. The same approach with a 200MHz clock and 5nsec coarse intervals would have left us with 20psec fine intervals, which Graham claimed to be too coarse to let him sell the machine.


In principle we could have digitised a slower ramp with a higher resolution analog-to-digital converter. There were 10-, 12- and even 16-bit analog-to-digital converters around at the time that could – in principle – have divided a longer coarse time interval into 1024 divisions (10 bits), 4096 divisions (12 bits) or 65,536 divisions (16 bits), but in practice none of the higher resolution analog-to-digital converters that we could buy were anything like fast enough to be used in a system that would do what we needed.
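The arithmetic behind all these granularity figures is simple enough to check: the fine step is one coarse clock period divided by the converter's code range. A quick sketch (the function name is mine, for illustration only):

```python
def fine_step_ps(clock_hz, adc_bits):
    """Fine timing step: one coarse clock period split into 2**adc_bits steps."""
    coarse_period_ps = 1e12 / clock_hz   # clock period in picoseconds
    return coarse_period_ps / (2 ** adc_bits)

# 800MHz clock, 8-bit ADC: 1250ps / 256 ≈ 4.9psec – the ~5psec figure.
# 200MHz clock, 8-bit ADC: 5000ps / 256 ≈ 19.5psec – the ~20psec figure.
# A 16-bit converter on a 200MHz clock would give ~0.08psec on paper,
# but no 16-bit converter of the day was anywhere near fast enough.
```

The table makes the trade-off explicit: the 800MHz clock was only needed because the 8-bit converter capped the interpolation at 256 steps per coarse interval.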


Hindsight suggests that the risk of going for GaAs logic was excessive. In 1991 GigaBit Logic merged with the other two major suppliers of GaAs parts – Gazelle Microcircuits and Triquint – and the combined organisation fired about half the people who had been working for them. They didn't stop selling us GaAs parts at that stage, but the merged company concentrated on the mobile communications market rather than on digital logic and would probably have cut us off within a few years. In 1989 and 1990 Gigabit Logic had already stopped making a few of the parts that we had designed in, but they hadn't discontinued enough parts to seriously inconvenience us. If the project hadn't been cancelled at the end of 1991, we'd probably have had to do a significant redesign within a year or two to replace the GaAs parts with Motorola's ECLinPS logic, which wasn't – initially – quite as quick as the GaAs parts, if rather easier to use, and didn't become available in volume until the early 1990s.


In 1997 and 1998, at Nijmegen University, I completed a detailed design of a system that used ECLinPS to do a very similar job, for an electron spin resonance spectrometer. We never got to build the hardware, but it should have worked, and it would have used a 500MHz oscillator, rather than 800MHz, which in the EBT would have meant 8psec granularity, rather than the 5psec theoretically possible with GaAs parts.


On the other hand, the decision to opt for GaAs logic, which was only available in surface mount packages, did force us to opt for surface mount construction from the start. Cambridge Instruments hadn't used any surface mount parts up to that time, and the electron beam tester project became Cambridge Instruments' pioneer in this area. The team developing the successor to the S.360 electron microscope (which eventually became the S.420 and the S.440 electron microscopes), which was getting together at that time, opted for surface mount construction early on, but were rather slower getting to working hardware than we were.


I spent the first couple of months of the project - after the meeting had made up its mind - filling in the details of my original sketch and putting together a project plan, estimating how long it would take to complete the detailed design of the hardware - circuit diagrams, printed circuit layouts, parts lists – and how long it would then take to get the hardware built and debugged.


That sort of project planning is vital – you need to have a pretty clear idea of where you are going and roughly how long it is going to take – but the plan you come up with is inevitably unrealistically optimistic, and you have to keep revising it as you progressively get a clearer idea of what you are doing, both in the design phase, where filling in progressively finer-grained detail can be painfully educational, and the debugging phase, where unexpected or imperfectly anticipated aspects of reality can have equally painful consequences.


I'd been designing complicated electronics for about fifteen years by then, and didn't have any illusions about what I was doing – I did know that I was unusually good at it, and that stuff that I had thought ought to work had always ended up working – but I knew equally well, from bitter experience as well as observation, that nobody ever got everything right, and it always took longer than you thought that it should.


Cambridge Instruments had had several projects run over time and over budget in the previous few years – the S.360 development was by no means unique – and should have had a more realistic idea of what I was giving them.


On the other hand, most of us at Cambridge Instruments at that time were perhaps a little over-optimistic about the debugging phase. The company had recently invested quite a lot of money in the Metheus computer-aided electronic design software package, with a graphical user interface running on purpose-built workstations, and had set up a well-thought-out procedure for the design process, starting with detailed specifications, which were to be reviewed before being submitted to the circuit design group and their work-stations, where the specifications were to be realised as circuit diagrams whose components – or at least the digital logic involved - could be simulated.


The completed designs were then to be reviewed again before being released to the printed circuit layout draftsmen, working on their own computer-aided-layout work-stations, and the printed circuit artwork they produced was to be reviewed yet again before being sent out to be turned into printed circuit boards.


The first two printed circuit boards to go through this procedure had required remarkably little debugging. The two electronic engineers who had done the designs – David Cairns and Jerry Prusiewicz - were amongst the best we had, they'd been given the time to do the job properly, and the designs, while not undemanding, hadn't been ground-breaking.


The electron-beam tester project didn't do as well. The circuits we were designing used quite a few parts we'd never used before – not just the Gigabit Logic GaAs parts - and the data sheets for at least one of the other new parts – the Advanced Micro Devices (AMD) TaxiChip serial communications chip set - changed significantly while we were designing them in.


Worse, Graham Plows and Dave Hall had a mad ambition to get a working prototype together for the following year's American semiconductor industry trade show – Semicon West in May 1989 – and the schedule that this imposed didn't leave much time for reviewing specifications or detailed designs. When the usual complications of getting the various detailed specifications right – and keeping them mutually compatible – started delaying the project, they didn't abandon their ambition anything like early enough.


This was definitely a mistake, and got Dave Hall moved off the project in September 1989, after it had become clear that the prototype electron beam tester wasn't going to be going to any trade show in 1989. His new job had the impressive title of Manufacturing Engineering Manager, but in fact involved the same kind of project management as he'd been doing on the electron beam tester, on a less demanding project – putting together a package of parts that would let Cambridge Instruments sell a bought-in “hot” field-emission electron gun with some of our electron microscopes. JEOL was then already offering hot and cold field-emission electron sources on their electron microscopes, and had been doing so for a few years. Field emission electron guns offered higher brightness – more electrons – than more conventional sources, which made them irresistible for some buyers, but the higher brightness was bought at the expense of poorer stability and more frequent maintenance than main-stream electron guns, so it was always a niche market.


A year or so later he moved on to become Manufacturing Director for an oil and gas company, so the “Manager” part of his title had done its work.


He was replaced by Richard Adams, who brought his team of engineers from Image Analysis with him, and put his senior engineer – Andrew Dean – in charge of the engineering side of the project, which meant he was – in theory – supervising me. I have a suspicion that I was supposed to see this as some kind of rebuke, but since Andrew and I weren't status conscious we collaborated happily and constructively for the rest of the project. I also got on fine with Richard, which wasn't difficult – he was not only an admirable character but also famously diplomatic.


Graham Plows' position was not strengthened by the debacle. About a year later, Cambridge Instruments restructured itself into four separate product divisions, each with its own team of engineers and administrators. The Technical Director position ceased to exist, and Graham became head of the separate electron beam tester division, with rather less prestige and influence than he had before.


He took to spending a lot of time in his office playing with simulation programs on his personal computer, and eventually resigned to set up a new business, “New Technology Sources”, about a year later, in September 1991. He took Mike Penberth with him, and – a few months later – Nick Campbell (another Cambridge Instruments physicist-turned-engineer, who was younger than Mike and had had the chance to spend longer in tertiary education and get himself a Ph.D, but was similarly good at getting things to work). As far as I know, their primary activity was selling and supporting Intusoft's simulation programs in England and Europe. There's a 2001 US patent on an “Electron beam lithography system having variable writing speed” which lists Graham S. Plows, Michael J. Penberth and Adam Woolfe, all of Cambridge, as its inventors, so it wasn't the only thing that they did, but it was the only activity that I got to hear about.


Anybody brave enough to plow through the weekly reports will find it helpful to know that the Cambridge Instruments – or Leica Cambridge – EBT2 (later the EBT2000) was essentially the Lintech EBT with a “Sampling Crate” added close to the (moving) electron beam column. The Sampling Crate contained the big and expensive cards – triple extended Eurocards – that carried the expensive and power hungry GaAs and ECL logic that did the timing (the two identical Delay Cards) and the data processing (the Waveform Processor – Digital). It also accommodated the Trigger card, which acted as the interface to the user's timing signals, the Waveform Processor – Analog, which provided the rapidly varying analog voltages that drove a variety of grids and screens in the Through The Lens Detector (TTLD) which was wrapped around the final lens of the electron beam column, and the Blanking Board, which carried the Blanking daughter board that drove the beam blanking plates rather higher up the column to turn the electron beam on and off (leaving it on for as little as 0.5nsec) at intervals that could be as little as 40nsec apart (and frequently were close to that – which was our unique selling point).


The Delay and Trigger Cards started off as the Timebase Board, and the Waveform Processor got split into the Waveform Processor – Digital and the Waveform Processor – Analog a little later, both well before any detailed design had got under way.


Getting the Sampling Crate together and working was the most time-consuming and expensive part of the project. The Lintech EBT had two other crates of double Eurocard boards: one controlled the basic electron microscope functions via a VME backplane – an industry standard – while the other provided a digital image store, linked together via the Lintech-designed Naff Bus backplane, which nobody liked much, but which worked well enough that it wasn't worth redesigning. These two crates contained about thirty-odd different cards between them. We had to add two more Interface cards – which plugged into the VME crate and the Image store crate, and allowed them to talk to the Sampling Crate via a bunch of galvanically isolated TaxiChip links (which were new and, at 125MHz, very quick serial links for the time). Getting the digital image store crate to accept image data from the Sampling Crate took an appreciable amount of effort, which shows up frequently in the weekly reports, and the limitations and idiosyncrasies of the other Lintech-designed boards also show up from time to time. When Ralph Knowles, Tim Frost and Dave Hills had moved into Lintech in 1985 they'd done quite a lot to raise the quality of the Lintech electronics, but Graham Plows was never enthusiastic about spending money where it didn't help his sales spiel.


The reports for 1988 can be read here, 1989 here, 1990 here, and 1991 here.