"Big Computers" will save us all!
It turns out that holidays merely squash my normal work into a shorter period of time, and so are not compatible with producing coherent, thought-through posts every two weeks. But I couldn’t resist a quick comment on the UK media’s sudden interest in Big Computers.
Basically, there’s a big one just opened up in Bristol, and an even bigger one in the works for Edinburgh. The one in Bristol is currently crawling with politicians. And the media is proclaiming that this will solve all problems. To quote The Guardian, for example, “UK switches on AI supercomputer that will help spot sick cows and skin cancer”. I also heard on the BBC last night that it will let the UK train its own version of ChatGPT. Hmm.
Well, let’s dig down a bit. The one in Bristol has 5,400 GPUs, or to use the proper hyped-up terminology, “Nvidia Superchips”. According to the ever-reliable internet, it takes thousands of GPUs and a month or so of time to train a ChatGPT-grade LLM. So yes, it could be used to train such a model, but you couldn’t really do much else with it for that month or so. And presumably the same computer would then have to host the model and service all those pesky user requests.
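To put rough numbers on that, here’s a back-of-envelope sketch in Python. The 5,400 GPU figure is from above; the exact size and duration of a ChatGPT-grade training run are illustrative assumptions, not published specs.

```python
# Back-of-envelope: how much of the Bristol machine would one
# ChatGPT-grade training run consume? The training figures below
# are illustrative assumptions, not published specifications.

total_gpus = 5_400        # the Bristol machine's "Nvidia Superchips"
training_gpus = 2_000     # assumption: "thousands of GPUs"
training_days = 30        # assumption: "a month or so"

gpu_hours_needed = training_gpus * training_days * 24
gpu_hours_available = total_gpus * training_days * 24

print(f"One training run: {gpu_hours_needed:,} GPU-hours")
print(f"Whole machine for a month: {gpu_hours_available:,} GPU-hours")
print(f"Fraction consumed: {gpu_hours_needed / gpu_hours_available:.0%}")
# -> roughly 37% of the entire machine, for a month, before it serves anyone
```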
Which wouldn’t leave many resources for spotting sick cows and skin cancer. And even ignoring[1] the fact that it takes more than mere computing power to do these things, this gets to the heart of the problem with shared supercomputing resources: everyone wants to use them for everything at the same time. Certainly my own experience of using these kinds of resources is that large computers have a large number of users, meaning that you[2] spend a lot of time waiting around.
5,400 GPUs spread across the research population of the UK simply don’t go that far, and pale in comparison to the many millions of processing cores accessible through commercial services such as Google Colab. Having access to more dedicated computing power within the UK research sector is not a bad thing, and will help support research that might otherwise not have been done, including the kinds of things[3] that it’s not feasible to deploy to commercial services. But it’s not a solution to all our problems.
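The same kind of toy arithmetic shows how thinly those GPUs spread. The researcher count here is a purely hypothetical figure for illustration:

```python
# Toy illustration of how far 5,400 shared GPUs go nationally.
# The researcher count is a hypothetical figure, purely for illustration.

total_gpus = 5_400
active_researchers = 10_000   # assumption: researchers wanting GPU time

share = total_gpus / active_researchers
weekly_gpu_hours = share * 7 * 24

print(f"Average share: {share:.2f} GPUs per researcher")
print(f"That's about {weekly_gpu_hours:.0f} GPU-hours per week each")
# -> roughly half a GPU each, if demand were spread perfectly evenly
```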
I guess at the heart of my rant is an annoyance with the shallowness of reporting in the media when it comes to anything technical. The Guardian, for instance, is full of deep reporting on politics, the humanities, and social issues, but as soon as anything technical comes along, it seems to degenerate into hype and fluff. And the same is true of most other media outlets.
I imagine this has much to do with career pathways, in that people with strong technical backgrounds tend to end up in strongly technical roles, and might see entering the media as a failure in some sense. Which is nonsense, of course, since we need voices that can tell us the truth about what’s happening in technology. But then again, would I want to give up my job and become a reporter, joining an industry that’s famed for short deadlines and insecure positions? Probably not! So yes, I have no idea what the solution is to this conundrum. Answers below please 😉
[1] You can read most of my other posts for an indication of what can go wrong.
[2] Or, to be more precise, the script that runs your job.
[3] For example, things with really sensitive data.



Great post as usual, Michael, and something that often has me looking like a "mad man" raising my voice at the television.
I can't see an easy solution to this problem either, though publications such as The Conversation provide a semblance of hope, since they work with the author(s) of a study to communicate it in a way that makes sense to laypersons whilst still addressing the main concepts. These publications would need to find a way to break into the mainstream to properly counteract the hyped-up articles, though how this could be done from a pragmatic point of view is anyone's guess.