that nothing was more crowded than “empty” space.
The old Aristotelian doctrine that Nature abhorred a vacuum was perfectly true. Even when every atom of seemingly solid matter was removed from a given volume, what remained was a seething inferno of energies of an intensity and scale unimaginable to the human mind. By comparison, even the most condensed form of matter – the hundred-million-tons-to-the-cubic-centimetre of a neutron star – was an impalpable ghost, a barely perceptible perturbation in the inconceivably dense, yet foamlike structure of “superspace.”
That there was much more to space than naive intuition suggested was first revealed by the classic work of Lamb and Retherford in 1947. Studying the simplest of elements – the hydrogen atom – they discovered that something very odd happened when the solitary electron orbited the nucleus. Far from travelling in a smooth curve, it behaved as if being continually buffeted by incessant waves on a sub-submicroscopic scale. Hard though it was to grasp the concept, there were fluctuations in the vacuum itself.
Since the time of the Greeks, philosophers had been divided into two schools – those who believed that the operations of Nature flowed smoothly and those who argued that this was an illusion; everything really happened in discrete jumps or jerks too small to be perceptible in everyday life. The establishment of the atomic theory was a triumph for the second school of thought; and when Planck’s Quantum Theory demonstrated that even light and energy came in little packets, not continuous streams, the argument finally ended.
In the ultimate analysis, the world of Nature was granular – discontinuous. Even if, to the naked human eye, a waterfall and a shower of bricks appeared very different, they were really much the same. The tiny “bricks” of H₂O were too small to be visible to the unaided senses, but they could be easily discerned by the instruments of the physicists.
And now the analysis was taken one step further. What made the granularity of space so hard to envisage was not only its sub-submicroscopic scale – but its sheer violence.
No one could really imagine a millionth of a centimetre, but at least the number itself – a thousand thousand – was familiar in such human affairs as budgets and population statistics. To say that it would require a million viruses to span the distance of a centimetre did convey something to the mind.
But a million-millionth of a centimetre? That was comparable to the size of the electron, and already it was far beyond visualization. It could perhaps be grasped intellectually, but not emotionally.
And yet the scale of events in the structure of space was unbelievably smaller than this – so much so that, in comparison, an ant and an elephant were of virtually the same size. If one imagined it as a bubbling, foamlike mass (almost hopelessly misleading, yet a first approximation to the truth) then those bubbles were …
a thousandth of a millionth of a millionth of a millionth of a millionth of a millionth …
… of a centimetre across.
And now imagine them continually exploding with energies comparable to those of nuclear bombs – and then reabsorbing that energy, and spitting it out again, and so on forever and forever.
This, in a grossly simplified form, was the picture that some late twentieth-century physicists had developed of the fundamental structure of space. That its intrinsic energies might ever be tapped must, at the time, have seemed completely ridiculous.
So, a lifetime earlier, had been the idea of releasing the new-found forces of the atomic nucleus; yet that had happened in less than half a century. To harness the “quantum fluctuations” that embodied the energies of space itself was a task orders of magnitude more difficult – and the prize correspondingly greater.
Among other things, it would give mankind the freedom of the universe. A spaceship could accelerate