Can reservoir computing solve AI's efficiency problem?
A reservoir computer is essentially a big, messy dynamical system that you poke with input data and observe the ripples. Typically they’re used for standard machine learning tasks: prediction, classification, regression, and so on. The thing that makes them stand apart from more conventional AI/ML is that they mostly don’t need training. You only need to tweak a small number of linear read-out weights, and this makes them much more efficient at training time than current transformer-dominated AI systems. Which raises the question of whether reservoir computing can meaningfully contribute to solving AI’s efficiency problem.
Although they’ve been around in various guises since the mid-90s, reservoir computers mostly appear in the form of Echo State Networks, or ESNs. An ESN is a large recurrent neural network with random connections and random weights. Data goes in, spreads all over the place, gets combined in weird and wonderful ways, and the readout nodes are trained to extract some sort of meaning from the ensuing dynamics. This might sound a bit odd to people used to training all the weights of a neural network, but it works surprisingly well.
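To make this concrete, here's a minimal ESN sketch in NumPy. Everything here is an illustrative choice rather than anything prescribed: the reservoir size, the weight scaling, and the toy one-step-ahead sine prediction task are all just for demonstration. The key point to notice is that `W_in` and `W` are random and never trained; only the linear read-out `W_out` is fitted, with plain ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_reservoir = 1, 200

# Random input and recurrent weights -- these are never trained.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
# Scale the recurrent weights so the spectral radius is below 1,
# a common heuristic for keeping the reservoir dynamics stable.
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        # Note the sequential dependence: each state needs the previous one.
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave.
t = np.linspace(0, 8 * np.pi, 500)
u, y = np.sin(t[:-1]), np.sin(t[1:])

X = run_reservoir(u)

# The only training step: ridge regression on the linear read-out.
reg = 1e-6
W_out = np.linalg.solve(X.T @ X + reg * np.eye(n_reservoir), X.T @ y)
pred = X @ W_out
```

Fitting `W_out` is a single linear solve, which is the source of the training-time efficiency claim: there's no backpropagation and no iterative optimisation anywhere.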
There’s also an interesting biological angle. The earliest reservoir computers were models of neural circuits in the brain, and it’s widely believed that these kinds of unstructured recurrent dynamics play an important role in cognition, particularly in the prefrontal cortex, a key region for higher-order cognitive processes. In fact, people have even used actual brain tissue to implement reservoir computers: check out this Nature paper where they used artificial brain tissue grown from stem cells to perform computational tasks, including speech recognition.
But reservoir computers need not have anything to do with neural networks or brains. They can be made out of any sufficiently expressive dynamical substrate. I already talked about one far-out example, which uses waves in a drinking fountain. Although that’s a nice example of the breadth of reservoir computing, more practical substrates include optical circuits, analogue electronics and biochemical systems. This perspective piece in Nature has a nice overview. Out of these physically-realised reservoir computers, optical approaches have so far made the most headway.[1]
One of the selling points of physically-realised reservoir computers is their speed. Whilst ESNs are pretty much computationally equivalent to their physical counterparts, simulating an ESN on a CPU or GPU can be inefficient, for two reasons. First, recurrent neural networks can’t readily be parallelised, since each state depends on the one before it; this was one reason why language modelling moved from recurrent networks to transformers. Second, digital computers aren’t designed to simulate dynamical systems; indeed, much of the low-level infrastructure that clutters and slows down digital circuits is there to suppress the dynamics inherent in physical materials. By comparison, implementing reservoir computers directly in physical materials allows them to run at their native speeds.[2]
So, reservoir computers can solve problems efficiently, and physical realisation allows them to do so really fast. Given the extravagant resource usage of contemporary AI approaches, why don’t we see them in widespread use? Well, this has much to do with scale, or lack thereof. Unlike transformers, where vast levels of investment have scaled them up to planetary proportions, reservoir computers remain a niche area of research, investigated by a relative handful of people with only modest resources. Partly because of this, we’re still in the early days of understanding how they can be used to support modern-day AI tasks such as language modelling.
Though we’re not totally in the dark here. There have been a number of studies[3] looking at how reservoir computers can replace the expensive parts of the transformers used in large language models, and how they can augment those transformers with dynamic longer-term memory. So far, the results are promising, suggesting that reservoir computers can make LLMs more efficient and more capable at certain tasks.
So, reservoir computers might be a solution to AI’s efficiency problem. More likely, they’ll end up as one piece of a larger solution, since what they’re good at is mostly orthogonal to what today’s mainstream AI architectures do well. Which actually lines up with what we see in the brain: different regions use different neural structures, each suited to a particular role. Feed-forward structures seem to dominate vision, while more unstructured recurrent dynamics show up in higher-order cognition. It wouldn’t be surprising if artificial systems eventually settle into a similarly mixed ecology, with reservoir computers being a potentially important part of the puzzle.
[1] For a recent survey, see this preprint: http://arxiv.org/abs/2505.05794
[2] Optical circuits, for instance, can outperform digital electronics by multiple orders of magnitude.
[3] A couple of recent examples: http://arxiv.org/abs/2507.02917 and http://arxiv.org/abs/2507.15779


