In this post, I’m going to talk about computers that use water waves to compute things. On the surface, this doesn’t sound like a practical approach to computing — for a start, electricity and water don’t mix well. However, it’s based on how your brain retains memories, and is an example of an approach called reservoir computing, which has been used to solve some pretty hard machine learning problems.
The idea behind reservoir computing is that, given a substrate with sufficiently complex dynamics, you can input a stream of information into it, and it will preserve useful memories of these inputs within its dynamical state. Then, by looking at only a small part of this state, you can read out interesting things about all those past memories. Within your brain, this “substrate with sufficiently complex dynamics” takes the form of densely entangled collections of neurons. You pour your thoughts and sensory inputs into them, and then they orbit around the neural connections in a haphazard fashion, building up a complex dynamical state as they go. Later you can read your thoughts and sensory inputs back as memories1.
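To make that a bit more concrete, here’s a minimal sketch of the most common software version of the idea, an Echo State Network (the conventional, simulated kind of reservoir mentioned in the footnotes). All of the sizes and scalings are illustrative choices of mine rather than anything from the work discussed below; the thing to notice is that the reservoir itself is fixed and random, and only the readout ever gets trained.

```python
# A minimal sketch of an Echo State Network: a simulated reservoir plus a
# trained linear readout. Sizes and scalings are illustrative, not from the papers.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_reservoir = 1, 200

# The fixed, random "substrate": recurrent weights scaled so the spectral
# radius sits below 1, keeping the dynamics stable but still rich.
W = rng.standard_normal((n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))

def run_reservoir(inputs):
    """Pour a sequence of inputs into the reservoir and collect its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        # Nonlinear mixing of the new input with the reservoir's past state.
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """The only trained part: a ridge-regression map from states to targets."""
    A = states.T @ states + ridge * np.eye(states.shape[1])
    return np.linalg.solve(A, states.T @ targets)
```

That last point is what makes it plausible to swap the simulated reservoir for something physical, like a bucket of water: the messy dynamics never need to be trained, only observed.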
But let’s not get too distracted by brains. We’re not zombies after all. Whilst the ideas behind reservoir computing initially came from neuroscience, it’s become apparent that many other, non-neural, reservoirs are possible. And what says “reservoir” better than water? Well, it turns out the dynamics of water are also sufficiently complex to encode a bunch of inputs and carry out some kind of computation, and in the earliest example of this, the reservoir actually took the form of a bucket.
This work was done way back in 2003 at Sussex University2. The “computer” comprised a bucket of water, some Lego motors, and a camera. It was fed a sequence of inputs, which were propagated into the water by agitating it with the tips of the motors. This caused waves to form, and these waves collided with existing waves, resulting in complex patterns on the surface of the water. Once all the inputs had been delivered to the reservoir, the resulting pattern was read off using the camera.
This setup was used to do speech classification. That is, recordings of people saying digits were turned into numerical sequences, which were fed into the reservoir, and the resulting pattern of waves was then used to identify which digit had been spoken. And it turned out to be surprisingly good at this, achieving almost perfect accuracy.
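Here’s a toy sketch of what that final readout step amounts to. The camera frames and labels below are random placeholders, nothing to do with the real experiment, but they show the shape of the computation: a plain linear classifier sitting on top of whatever pattern the water has produced.

```python
# Illustrative only: a linear readout trained on snapshots of the reservoir,
# standing in for the camera images of the water surface. The frames and
# labels here are random placeholders, not data from the experiment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_recordings, height, width = 100, 32, 32
frames = rng.random((n_recordings, height, width))   # stand-in camera snapshots
labels = rng.integers(0, 2, n_recordings)            # which word was spoken

# Flatten each snapshot into a feature vector and fit a plain linear classifier.
X = frames.reshape(n_recordings, -1)
readout = LogisticRegression(max_iter=1000).fit(X, labels)

# On random data this score is meaningless; with real wave patterns, this is
# the step that reads the spoken word back out of the water.
print(readout.score(X, labels))
```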
This idea was refined further in a recent paper by a team in Australia. Rather than a bucket of water, they used a drinking water fountain, with the basin (where the wave dynamics take place) shaped to produce solitons. A soliton is a special kind of wave that is solitary, stable and nonlinear. Solitary means that, rather than the kind of repeating wave pattern you get in the sea, there’s just a single moving peak. Stable means that it keeps its shape, even when it collides with other waves. Nonlinear means that solitons can engage in interesting and complex behaviour when they interact. These properties together result in Turing completeness — something I introduce in this post, but basically it means you can use them to carry out any computation that any other computer can. Always a good property for a computer to have.
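If you’d like to see what a soliton looks like on paper, the textbook example (chosen here for illustration; the paper’s own model may well differ) is the Korteweg–de Vries equation for shallow water, whose one-soliton solution is a single hump travelling at speed c without ever changing shape:

```latex
% The Korteweg--de Vries (KdV) equation and its one-soliton solution:
% a lone hump of height c/2 that travels at speed c and keeps its shape.
\[
  u_t + 6\,u\,u_x + u_{xxx} = 0,
  \qquad
  u(x,t) = \frac{c}{2}\,\operatorname{sech}^2\!\left(\frac{\sqrt{c}}{2}\,\bigl(x - c\,t - x_0\bigr)\right).
\]
```

Taller solitons travel faster, which is part of what makes their collisions interesting: a fast one can overtake a slow one and both emerge intact.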
And solitons are of particular interest to me, since I work at Heriot-Watt University, right next to the stretch of the Union Canal where John Scott Russell first observed a soliton travelling along the canal — a discovery that would eventually have some pretty significant implications for theoretical physics.
Anyway, the Australian team demonstrated that their water-based reservoir computer was good at sunspot prediction, a challenging problem in chaotic time series prediction that involves working out future levels of sunspot activity from the sun’s behaviour in the past. Since the system is chaotic, it inherently resists prediction, so doing well at this is quite impressive — especially when you’ve only got the bottom of a water fountain to work with.
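For the curious, here is roughly what that recipe looks like with a simulated reservoir. I’ve used a logistic map as a stand-in for the sunspot series, so treat this as an illustration of the one-step-ahead approach rather than a reproduction of the paper: feed the series into the reservoir one value at a time, then train a readout to map the state at time t to the value at t + 1.

```python
# A rough sketch of one-step-ahead prediction with a simulated reservoir.
# The logistic map below is just a convenient chaotic stand-in for the
# sunspot series; the recipe, not the data, is the point.
import numpy as np

rng = np.random.default_rng(1)

# Stand-in chaotic series (a real experiment would use sunspot numbers).
series = np.empty(500)
series[0] = 0.4
for t in range(len(series) - 1):
    series[t + 1] = 3.9 * series[t] * (1.0 - series[t])

# Fixed random reservoir, as before.
n_res = 300
W = rng.standard_normal((n_res, n_res))
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-1.0, 1.0, n_res)

# Drive the reservoir with the series, one value at a time.
x = np.zeros(n_res)
states = []
for u in series[:-1]:
    x = np.tanh(W @ x + W_in * u)
    states.append(x.copy())
states = np.array(states)

# Train a ridge-regression readout: state at time t -> series value at t + 1.
targets = series[1:]
w_out = np.linalg.solve(states.T @ states + 1e-6 * np.eye(n_res),
                        states.T @ targets)
predictions = states @ w_out
print("one-step-ahead mean squared error:", np.mean((predictions - targets) ** 2))
```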
But is this a practical approach? Well, more so than it might appear. Carrying around a bucket of water may not be sensible, but these kinds of computers could, in principle, be realised in microfluidics, the liquid equivalent of integrated circuits, comprising liquid channels and reservoirs embedded in microscale systems. It’s also potentially a lot more efficient than conventional approaches to reservoir computing3, since the computation essentially comes for free from the properties of liquids, rather than having to be simulated by a microprocessor.
So, what’s the takeaway here? First, you can compute with almost anything, if you try hard enough. Second, it might be better — in at least some ways — than the kind of computers you’re used to. Third, computing with water is actually a lot closer to how your brain works than the kind of computers you’re used to4.
Or at least this is how many neuroscientists think it works — much of how the brain functions is still quite mysterious.
The most common approach is the Echo State Network, a large recurrent neural network running on a conventional computer.
And this includes the neural networks you’re used to that are running on the computers you’re used to. These are typically modelled on unusually well-organised areas of the brain, such as the visual cortex.