I’ve recently been teaching a course on bio-inspired computing, and this has got me thinking about some of the more unusual approaches to computing. I touched on this in my previous post about computing with water, but this time I’m going to go even further down the rabbit hole (or rather, the plug hole) and discuss approaches based around slime mould.
If you don’t know what a slime mould is, and you’re lucky enough to live in a cold wet temperate climate like Scotland, then you might take a look at a grass lawn on an autumn day. Is there a patch of yellow goo that looks like a dog recently shared its stomach contents? That might be the aptly-named dog’s vomit slime mould. Many other kinds of slime mould are available, but they’re all pretty slimy, and often confused with fungus — something they’re not, despite the name.
Because they evade attempts to classify life into single-celled vs multi-celled organisms, slime moulds have long been a source of fascination to biologists. When they have plenty of food, slime moulds happily exist as single cells, snacking on whatever is to hand. But take away their food supply, and they agglomerate into pseudo-bodies that can explore their local environment in various ways. Of the many species of slime mould, computer scientists are particularly interested in one called Physarum polycephalum, whose collective form resembles a blob that continually grows and shrinks in order to find and hoover up any morsels of food within its vicinity.
Which doesn’t sound like a platform for doing computing. But it turns out that the slime mould’s ability to efficiently explore local space makes it good at solving computational problems that require this behaviour. And some of the earliest studies in slime mould computing focused on a problem that involves explicit exploration: maze solving. This typically involves placing a blob of slime mould on a piece of food located at the entrance to the maze, placing another piece of food at the exit point, and then letting the slime mould do its thing, which involves shoving tubular fingers of goo down the maze’s passages until it finds a path that links together its two food sources. As reported in Nature by Toshiyuki Nakagaki and co-authors, this tends to result in the shortest path being found.
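For comparison, a conventional computer would typically escape a maze with a shortest-path search. Here’s a minimal sketch using breadth-first search on a grid; the maze layout and the '#'-for-wall convention are made up purely for illustration.

```python
from collections import deque

def solve_maze(grid, start, goal):
    """Find a shortest path through a grid maze using breadth-first search.
    grid is a list of strings where '#' marks a wall; start and goal are
    (row, column) pairs. Returns the path, or None if no route exists."""
    queue = deque([start])
    came_from = {start: None}              # doubles as the visited set
    while queue:
        cell = queue.popleft()
        if cell == goal:                   # walk backwards to recover the route
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for step in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = step
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != '#' and step not in came_from):
                came_from[step] = cell
                queue.append(step)
    return None

maze = ["S.#....",
        ".##.##.",
        "...#...",
        ".#...#E"]
print(solve_maze(maze, (0, 0), (3, 6)))   # food at 'S', food at 'E'
```

The slime mould, of course, arrives at much the same answer with no code at all.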
Which is arguably not that useful, since the main source of fun in mazes lies in solving them yourself. Instead, from a computer science perspective, the more interesting work in this area tends to focus on solving more abstract graph problems. An example of one such problem is finding the minimum spanning tree of a graph. Basically, given a bunch of nodes, this involves finding the pattern of connections with the smallest total length that still causes a path to exist between all the nodes.
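Here’s a minimal sketch of one standard way a conventional computer finds a minimum spanning tree, namely Kruskal’s algorithm; the four nodes and their distances are invented for illustration.

```python
def minimum_spanning_tree(n, edges):
    """Kruskal's algorithm: sort edges by weight, then greedily keep any
    edge that joins two previously unconnected clusters of nodes.
    n is the number of nodes; edges is a list of (weight, u, v) tuples."""
    parent = list(range(n))                 # union-find forest

    def find(x):                            # representative of x's cluster
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving speeds things up
            x = parent[x]
        return x

    tree = []
    for weight, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                        # different clusters: keep the edge
            parent[ru] = rv
            tree.append((u, v, weight))
    return tree

# Four nodes with made-up distances between them.
edges = [(4, 0, 1), (1, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(minimum_spanning_tree(4, edges))      # [(0, 2, 1), (1, 3, 2), (1, 2, 3)]
```

Greedily keeping the cheapest edge that links two separate clusters is guaranteed to yield a minimum spanning tree.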
As a concrete example, and following a famous study that applied slime mould to solving this problem, imagine that the nodes are laid out on a flat surface and each of them represents the spatial location of a major population centre in the Tokyo metropolitan district. The aim is then to create a network of connections — the spanning tree — that leads to there being connections (either direct or indirect) between all of the population centres. In effect, a transport network. And now imagine that each node is represented as a small blob of food, and in the centre of it all is a piece of slime mould, and you get a pretty accurate portrayal of the experiments done by Atsushi Tero and co-authors. As reported in Science, their slime mould expanded to engulf the nodes, and the resulting pattern of nutrient transportation filaments in the slime mould’s body accurately described the minimum spanning tree of the graph. And it closely resembled the actual Tokyo subway network.
This short report from the BBC includes a re-implementation of their work.
But is this a practical approach? If we want to find the minimum spanning tree of a graph, would it be sensible to turn everything into blobs of food and get a slime mould to solve it for us? Surprisingly, the answer is not a firm “no”. Admittedly, finding a minimum spanning tree is something conventional computers do rather well, but many closely related graph problems (including one we’ll get to shortly) are genuinely difficult for them.
Which brings me to a quick aside: computational complexity. This is the study of the degree to which problems get harder as they get bigger. That is, if you double the size of a problem (e.g. you double the number of nodes in a graph problem), does it take twice as much time to solve? Three times as much? Or does the amount of time required grow exponentially with the size of the problem? If it’s the latter, then chances are you have yourself an NP complete problem [1]. Unfortunately for us, a lot of the important problems we study in computer science have turned out to be NP complete, meaning that in practice we can only solve pretty small instances of them. But on the bright side, this also includes problems that we don’t necessarily want people to solve, like breaking strong encryption.
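If exponential growth sounds abstract, this toy snippet shows how brutally it outpaces linear growth as the problem size n creeps up (the numbers are arbitrary units of work, not measurements of any real solver):

```python
# Work required by a linear algorithm versus an exponential one,
# in arbitrary units, as the problem size n grows.
for n in (10, 20, 30, 40, 50):
    print(f"n = {n:2d}   linear: {n:3d}   exponential: {2**n:>20,}")
```

By n = 50 the exponential column is already in the quadrillions, which is why doubling the size of an NP complete problem can turn minutes into millennia.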
Which raises another question: can we get around the limitations of conventional computers by co-opting slime mould to do our computations for us?
The answer to this is either “sort of” or “probably not”, depending on who you ask, but let me recount another study and let you draw your own conclusions. This one, published by The Royal Society, uses slime mould to solve the travelling salesman problem, or TSP to its friends. TSP is related to the minimum spanning tree problem, and involves finding the shortest route that a travelling salesman could take around a group of cities, visiting each city exactly once before returning home. This is a problem that crops up quite often in computer science, appearing in various forms, most of which don’t actually involve salesmen or cities.
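To see why TSP earns its reputation, here’s a minimal brute-force solver; the four-city distance matrix is made up for illustration. With n cities it examines (n − 1)! possible tours, which is why nobody tackles large instances this way.

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Try every tour that starts and ends at city 0; return the shortest.
    dist[i][j] is the travel cost from city i to city j. With n cities
    this checks (n - 1)! tours, so it only works for tiny problems."""
    best_tour, best_cost = None, float("inf")
    for order in permutations(range(1, len(dist))):
        tour = (0,) + order + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost

# Made-up symmetric distances between four cities.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(tsp_brute_force(dist))   # ((0, 1, 3, 2, 0), 18)
```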
The setup for their experiment was quite different to the previous one, and rested upon the slime mould’s tendency to conserve its volume. That is, when a fully grown slime mould expands in one direction, it also contracts in another direction, and this comes about through some pretty neat spatio-temporal oscillatory dynamics. So, no food involved this time. Instead, they stimulated the slime mould using light, laying out the light sources to form a circular structure representing a particular TSP instance. Through this, the slime mould built filamentary paths, and these produced a solution to the TSP.
But the particularly interesting thing they found was that when they doubled the number of cities in the TSP from 4 to 8, the time taken to solve the problem also approximately doubled. This is not what you expect when solving an NP complete problem, where a conventional computer would be expected to take around 16 times longer [2], rather than 2 times longer, when the problem size is doubled. To put this another way, their slime mould computer appeared to solve TSP with only linear time complexity. Which suggests that, from the slime mould’s perspective, it’s not an NP complete problem, but rather a member of the much easier class of problems known as P [3].
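As a quick sanity check on those figures, assuming the O(2^n) cost from the footnote:

```python
# Predicted slowdown when a TSP grows from 4 to 8 cities.
print(2**8 / 2**4)   # exponential, O(2^n): 16.0
print(8 / 4)         # linear, O(n): 2.0, roughly what the slime mould showed
```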
Which brings me to another aside: the enduring question of whether P = NP. Or to put it another way, are hard problems actually easy problems? One of the fundamental theoretical results in computer science says that if you can consistently solve any one NP complete problem in polynomial time (i.e. the kind of time you’d expect to spend on a P problem), then every NP complete problem can be solved in polynomial time. This is because part of the definition of NP complete problems is that any problem in NP can be converted into any NP complete problem in polynomial time. If that doesn’t make sense, don’t worry. The important thing is that the slime mould apparently solved an NP complete problem in the time it takes to solve a P problem, which would mean that P = NP. Consequently, we could solve really hard problems like breaking encryption in no time at all.
Or not. Because the slime mould experiments only involved a certain kind of TSP that was at the easier end of the TSP spectrum, it didn’t always solve them perfectly, the study was empirical rather than theoretical, and blah blah blah. Yet even accepting the many caveats that people have raised, finding good solutions to any kind of TSP in linear time is an impressive feat, and certainly not something you’d expect to achieve on a conventional computer.
In general, it’s easy to dismiss work done in natural computing as wacky and impractical. And certainly some of the work done in this area does fit this interpretation. For example, would you really want to use swarms of crabs to build logic gates, and then build a computer out of these? But there are also results that demonstrate the real-world potential of this field. And this includes the work I discussed above. Granted, there are practicalities around turning computational problems into blobs of food, making sure your slime mould stays happy and well fed, etc. But, if you can solve hard problems in days rather than weeks or years, then does it matter if you get your hands wet (or rather, slimy)? So, don’t be entirely surprised if the future of computer science is slimier and squishier than you expect.
[1] NP stands for non-deterministic polynomial, in case you were wondering.
[2] The time complexity varies depending on which approach you take to solve it, but O(2^n) seems to be pretty typical of deterministic algorithms, meaning that the time taken to solve it grows in proportion to 2-to-the-power-of the problem size.
[3] P stands for polynomial.
Fascinating.
However, I think slime moulds are more like parallel computers than sequential ones. Their ability to deploy more "processors" when confronted with more nodes in a TSP makes the complexity analysis less straightforward than you suggest.