Posted on September 2, 2025 | 22 Comments
With the publication of my new book Finding Lights in a Dark Age fast approaching but not yet arrived, I’m at that awkward stage in an author’s journey with a book where it’s too late to change anything in it, but it’s not yet left the nest and made its own way in the world. Already, I’m visited too often by an internal monologue along the lines of “should have included that, shouldn’t have said that, should have said that better”. Something I like about writing books rather than, say, blog posts is their fixed and tangible material presence in the world. The downside is, well, the fixity, rather than the more dialogic nature of online discussion. At its best – which it often is on this blog – online dialogue can be great. Whereas at its worst, retreating to a book and its ineradicable one-way word flow offers a certain balm to the soul.
Ah well, I’m happy overall with what I’ve written in Finding Lights. But in this awkward moment of pause, I still sometimes find myself switching gears in conversations with people when they stray into potentially controversial areas in case it prompts a flaring of that internal monologue. One such area that’s blindsided me a couple of times has been along the lines of “Ah, so you’re writing about the shape of the future – I expect you must have said a lot about artificial intelligence? It’s going to change the world!”
The truth, dear reader, is that I’ve said almost nothing about artificial intelligence in the book. And so the internal monologue flares again: “Aiee, AI! Should have included that…”
An advance review copy of the book sits on the desk beside me as I write these words. I touch it and feel its calming fixity. What’s done is done, and you can’t write about everything in the few short pages of a book…
Actually, I’m fairly relaxed about my AI omission. I see AI as but another over-hyped manifestation of our over-energised and over-connected world, which will likely fall as that world falls. Will it change the world in the meantime? Possibly, but not, I think, in especially interesting or positive ways. If it changes the world, it will change it in the way that the spread of 200 horsepower tractors in a world of 20 horsepower ones changes it. An acceleration, an amplification of a doomed trend. For sure, it will be used for some good purposes. It will be used for many more bad ones. Technology is never neutral. It’s always folded into existing social structures. And the argument of my book is that those structures will soon change radically, if unpredictably.
I spoke with a friend recently who works in a large public sector organisation. She told me that AI is increasingly dominating the deliberative and decision-making processes of her organisation in ways she thinks are uncreative and deadening. Artificial intelligence is not, in fact, ‘intelligent’. It’s just stylishly regurgitative.
I’ve seen AI versions of things I’ve written and debates I’ve had. They’re kind of like the low B essay of a moderately diligent but unengaged university student, who’s read a couple of the sources from the reading list and written a competent bit of hackery along the lines of “A says this and B says that. The strength of A is this, the weakness is that. The strength of B is this, the weakness is that. I think they’ve both got a point, but I think A is best because my professor prefers them”.
As I was discussing with my friend this deadening AI-ification of her workplace, it suddenly struck me that public intellectual culture is increasingly AI-ified in a similar way. Let me not name names, but there are certain authors and certain publishers that churn out books about weighty contemporary matters – food systems, migration and so on – along these lines. A says this and B says that. Here’s my novel and creative synthesis of the same old crap, along with five hundred references that hopefully nobody’s going to follow up too closely. Typically, the novel solution involves salvation by high-energy (but purportedly ‘renewable’) new technology which is being mercilessly flogged by state-corporate interests as the only realistic way to save the world or save the poor.
I’m not suggesting that these books have actually been written by AI. I don’t doubt they had an ‘original’ human creator. Most human originality is regurgitative anyway. But you do need a creative spark somewhere. The AI-ification of our intellectual culture speaks to its loss of creativity and vitality. We keep going over the same old arguments, looking for ‘creative’ solutions in them to growing problems that can’t be solved because they’re inherent to the premise of the arguments. There’s something deadening about this cookie-cutter futurology in contemporary public culture: “things look bleak, but actually I’m optimistic about the future because of (drumroll) tech-X”. For my part, I’m not optimistic about the future, but I stay hopeful because hope is a human trait. Maybe it’s programmed in – evolutionarily, that is, and not through coding. The difference is that you feel it.
The B student’s essay shows they have a basic ability to read, understand, summarize and repackage. It fits them to managerial work in public or private corporations where these skills are useful. This would be fine if business-as-usual was a good long-term bet for our public-private corporatized civilisation, but it’s not fine now.
Matters aren’t helped much if we turn out more A students – hence I’m not overly excited by the prospect of improved and more creative AI. A students end up being brilliant thinkers in universities whose output is quietly ignored by the rest of society – universities have become anything but universal – or, more likely, brilliant generators of income for our doomed corporatized civilisation.
Forgive me if the above sounds cynical. I don’t mean to suggest that there are no possible positions or acts of grace in the modern world. But I do think we need to do better, and this involves making some fundamental changes to the stories we tell, which AI and our AI-ified public culture are unable to do.
I’m not an especially creative or brilliant thinker myself, but I hope my book might help a few people see more ways of stepping out of these familiar stories and into the darker unknown. This is a place where the solution to global problems becomes easier because we stop trying to solve them. Instead, we address ourselves, without particularly knowing what we’re doing, to more tangible matters that are closer to hand.
In other words, I think we need some radically different stories that we can inhabit from the inside because we really feel them, rather than AI-ified surveys of the field that return some ‘realistic’ version of the status quo ante delivered by high-energy technologies.
My book won’t find favour with everyone, of that I can be sure. My fondest hope for it is that at least no one will think it was written by AI, or a human version of it.
And now, as previously trailed, I’m going to be mostly offline for two or three weeks. But rest assured I will be back here with more genuinely human-produced content soon.
Gaia Foundation We Feed the UK
David Graeber Pirate Enlightenment
(By the way, I just came across this interesting obituary of Graeber, who in my opinion truly was a creative and brilliant thinker. I daresay many people would come out a lot worse if an ex-partner wrote their obituary).
I do hold a tiny glimmer of hope regarding AI, though not in the “AI will be the One Weird Trick that saves us!” vein at all… it is, after all, just autocarrot on steroids, capable not of understanding questions but rather of replying to queries with what a plausible answer might sound like. So there are at least two huge problems with it: one that people think there is understanding there where there isn’t, and another the old adage of garbage in, garbage out.
Rather, I hope that the bubble of rampant mis-application of AI (and it looks more and more like a bubble all the time) might remind some investors of the value of human endeavour and human labour. A slim hope, perhaps, and still not One Weird Trick etc, but I’ll take my faint glimmers where I can get them.
AI is just the latest shiny object in the news.
People are already noticing that AI is using so much electricity that their electric bills are going up, and that it uses a ton of cooling water without providing any day-to-day benefit.
Everyone can do their part to reduce consumption by turning off the AI summary in their search engine.
The latest shiny object, indeed. AI is being blamed for “the reallocation of investor capital” away from the formerly shiny “future food sector”.
“AI a threat… Recent [drops in] investment levels in alternative proteins reflect a market-wide slowdown in food and climate tech funding, driven in part by the reallocation of investor capital toward artificial intelligence”.
https://www.greenqueen.com.hk/alternative-protein-investent-decline-q2-2025-funding/
As I have often enough argued over the past decade and a half, so far as public consequence is concerned, it doesn’t matter quite so much what a technology can actually do as what people believe it can do. “AI” is the supreme case of this: whatever it actually is or does, its advocates genuinely believe it will drive the cost of human labor to zero in the near term, across most domains of endeavor. They are planning based on this assumption, and are sufficiently well-resourced to act on those plans.
And so that belief, and the plans based on it, will condition our economic, political and social reality for the remainder of the period we find ourselves in, regardless of what the technology actually turns out to be capable of. The particularly salient feature of this belief is that the obsolescence of human labor will generate large numbers of essentially surplus human bodies, in fact entire populations: an unnecessariat.
And the main policy considerations of the AI advocates therefore concern what to do with “excess” populations. You’ve heard of the accelerationist left’s response: “fully automated luxury communism” (which has never been anything more concrete than a clever turn of phrase, and is massively problematic for many reasons anyway), and until the glorious day that is achieved some kind of UBI.
I shouldn’t need to explain what the AI-right’s strategy for the management of populations considered excess looks like.
*This* is the discourse beneath the discourse, the esoteric meaning folded up within the exoteric talk of data centers, energy and cooling needs and so on. And this is why it’s worth addressing, and countering, in any account of the near-term human future.
I do have some sympathy for the fully automated luxury communism approach, with some caveats about the joys of actually doing meaningful activity which sometimes will be labour, and around how difficult it is with our current technology to automate simple things like making clothing (every seam in every garment you have ever worn was handled by a human being at some point) or picking fruit (there are people working hard on robotic solutions here but I think almost all fruit is also handled by actual humans). Still, the ideal of nobody being compelled to labour for subsistence — of having the same freedom to play, learn, and socialise as, say, the richest 1% do now — is an attractive goal.
The bigger problem is that “fully automated luxury” does not in and of itself automatically lead to “communism”, and frankly we’re a very long way from even social democracy at this stage. Fully automated luxury capitalism will always be for the rich and only for the rich, and the freedoms now enjoyed by the 1% are not granted by technology but rather built on the exploitation of the poor and the destruction of the earth.
I don’t know what fully automated luxury distributism might look like, but I would probably read that fantasy novel.
Full automatic …….
Will AI change a flat tyre, fix your car, unclog your drains, or fix your AC/heating? Dig out a stuck tractor?
Now, I give you that 90% of the world’s politicos could be replaced by a Commodore 64. Paper-pushing jobs will be the ones to suffer, civil servants maybe, and universities, as no one will need to know anything but basic reading. But I remember the paperless office, and that was a bust.
Hence my caveats around how difficult full automation would actually be!
That said, if my grandfather had been told, when he was a child during WW1, that his grandchildren would keep in touch by tapping a tiny handheld computer screen they carry around in their pockets, he would have been astounded. I don’t believe we will have the spare energy just lying around in the future to develop technologies at the rate, or of the kind, we did in the 20th century, but I also don’t think humans are going to stop trying to solve problems with technology just because the fossil energy goes away. And I don’t think the fully automated luxury communism crowd are claiming that the technology for full automation already exists; that’s clearly not the case even for our most basic needs (food and clothing being the examples I already gave). But I don’t think that technology without systemic social reorganisation can lead to the equality and liberty imagined, and too much focus on the technology can easily cloud people’s thinking on this.
@Kathryn.
I get/agree with your points.
I just think that how far a technology/idea can “go” has always been limited by the available energy at the time.
Flight has always been imagined, but no coal-fuelled/fired planes were ever invented. It took the discovery of oil to make the ideas a reality.
As the availability of cheap/abundant fossil fuels declines, the technology we create will become less complex/sophisticated. Not that we won’t still be very creative/innovative with what is possible.
PS I think we are in agreement on the horrors of what the right will do about excess population, whether or not the cost of labour goes to zero. (I think it won’t go to zero when we stop burning fossil fuels…)
As Kathryn says, “garbage in, garbage out”, especially when you find out that the second largest source it uses for information is Wikipedia.
Chris, what you are worrying about having left out is the basis for another book!
I think AI’s Achilles Heel, like the rest of modernity, will be a lack of energy.
Well indeed. The difficulty is how much damage those who are heavily invested in it do in the meantime… but I still think it’s largely a bubble.
The absence of AI sounds like a welcome relief, honestly. Also, LLMs aren’t AI, and most of the things people seem to want to do with them could already be done with existing algorithms and software. I think it’s just an intensification of high-tech systems to increase surveillance, avoid responsibility on the part of decision makers, and attempt to de-skill/outsource knowledge work in a similar process to the way craft work was deskilled/outsourced with industrialization. I don’t know how well it works out for any of that. But nothing new there really, except personally, where it seems like it’s going to be harder to use my office job to buffer my farming.
Projections for the growth of AI could be throttled by geopolitical realities, as some countries (such as the US, UK, Germany, and Finland) aren’t even making enough electricity to cover their own current levels of consumption, and imports of electricity are needed from neighbouring countries (2024 data). The same goes for the exorbitant electricity requirements of bacterial protein (hello Solar Foods in Finland) and cryptocurrency mining.
The latest energy stats for the UK show that even though the renewable generation of electricity reached a new record high there in 2024, the increase over the 2023 amount was only 7 TWh, which is less than the UK’s 9.5 TWh increase in net imports of electricity during the same year.
In other words, during 2024 the UK’s net imports of electricity increased by more TWh than the UK’s renewable generation of electricity increased, despite the record-breaking year for renewable generation in the UK (mainly due to an increase in biomass burning).
“Renewable generation rose by 5.1 per cent to reach a new record high of 143.7 TWh, driven by record high generation from wind and thermal renewables (bioenergy).”
“… while net imports rose by 40 per cent from 2023 to reach 33.4 TWh.”
https://assets.publishing.service.gov.uk/media/688a28656478525675739051/DUKES_2025_Chapter_5.pdf
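(If I have the arithmetic right, those figures follow from the quoted stats: a 5.1 per cent rise to 143.7 TWh implies roughly 143.7 − 143.7/1.051 ≈ 7 TWh of additional renewable generation, while a 40 per cent rise to 33.4 TWh implies roughly 33.4 − 33.4/1.40 ≈ 9.5 TWh of additional net imports, which is where the comparison above comes from.)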
Interesting range of comments as ever – thanks. It’s a relief to see other people describing it as a bubble. Has anyone read Bender & Hanna’s ‘The AI Con’?
I agree with AG about the discourse behind the discourse in relation to managing ‘surplus’ populations. And that is something I do talk about in the book, separately from AI.
I basically agree with the points being made about the ‘AI-right’, although I see this more in terms of a colonialism initially imposed by European countries on the wider world that’s now coming back to haunt us. However, I’m not persuaded that the left-right framing is the best political optic for it. The colonial project has had both left and right elements (manifested now by the likes of Starmer) while localism/communitarianism has been best advanced by ‘conservative’ thinkers who are streets away from the AI or alt-right.
I’m less sympathetic than even Kathryn’s modest sympathies for FALC – another colonialist project. Sure, it’s good for people not to exist at the door of hunger and want … something that modernist liberatory dreams of the left and right have often in fact served up to them … but you don’t need far-fetched notions like FALC to generate that. The playing, learning and socialising of the 1% looks to me mostly quite destructive and dysfunctional, and it doesn’t even seem to make most of them happy.
I don’t think there can be a fully automated luxury distributism, because full automation and luxury require monopoly, and that’s what distributism is geared to negating. Tolerably automated quotidian or artisanal distributism is about as close as I can get. I think it’s enough.
Re Diogenes’ points about de-employment, yes first they came for the manufacturing jobs and then they came for the office jobs, now leaving only the service jobs (builder, mechanic, plumber etc). The ‘petty bourgeoisie’ as the new ‘revolutionary’ class (I have an article about this coming out soon) … like Robert de Niro in Brazil.
The question is how many ‘surplus’ populations will escape the machinations of the disaster capitalist state and be able to build their own local livelihood politics. That’s a major focus of my new book … though I give no numerical answers.
Perhaps the playing, learning and socialising of the 1% is so destructive because despite the apparent freedom their wealth affords them, they are really bound to the service of Mammon. And that’s just….boring.
I think most of the people I’ve seen seriously playing with FALC ideas are anarcho-socialist leaning communists, rather than bureaucrats, and that probably changes my impressions of their aims. But I would absolutely be up for tolerably automated quotidian-artisanal distributism! How we get there from here seems almost as difficult as FALC.
I am up late preserving fruit again, and thinking about how I might manage the allotment in busy years in future; I won’t be removing any fruit trees but there might be more flint corn and fewer tomatoes.
I am glad you didn’t include an AI chapter in the book!
In my view, the main role of AI is actually to cope with digitalization, which has resulted in an explosion of “information” which has to be processed, i.e. AI is just another layer of complexity to solve problems generated by previous development. Of course some of this will be “useful”.
Like for many other new technologies, it might be the case that the applications that really “hit” will not at all be the industrial applications, but “social” applications.
That is kind of a scary thought, seeing how social media has played out.
AI looks to me like the last gasp of the industrial civilization of the West, any damn thing to keep the growth ball rolling, billions thrown at servers with no electricity to drive them, but it’s investment, hence growth, another way of milking the tapped-out consumer.
I would not be surprised to see cities or areas losing power or facing rolling blackouts to keep the servers powered up. Unless you live somewhere close to a munitions factory, that is, in which case power is guaranteed, even when no one has power to ask it questions.
Michael Crichton wrote a novel titled Prey (London: HarperCollins, 2002) that deals with the confluence of genetic engineering, nanotechnology and artificial intelligence. Of course things go very wrong and the hero and his compatriots have to save the day by blowing up the lab complex with the evil nanorobots and their human accomplices inside. I loved it!
Crichton writes well and I have read several of his books. This one was 524 pages and I finished it in less than 24 hours. What stands out for me is that he does his research and presents it well. In this book, Crichton’s theme is distributed intelligence. This is how birds move as a coordinated mass without a central intelligence. The intelligence exists at the network level – i.e. distributed. It is made up of organisms without the higher levels of complexity we see in primates and ourselves. Yet it works. It is based on a few simple rules that any mobile organism can follow like, “Stay close to your neighbor but don’t bump into them.” What this network does is allow infinite variation while minimizing error.
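By way of illustration, here is a minimal, hypothetical sketch in Python (my own toy example, with made-up parameters, not anything from Crichton’s book) of the two neighbour rules described above: each agent steers toward the average position of nearby agents but steers away from any that get too close, plus a dash of randomness, and coordinated motion emerges with no central controller.

```python
# Toy sketch of "distributed intelligence" from two local rules:
# 1) stay close to your neighbours (cohesion), 2) don't bump into them (separation).
# No central controller; coordination emerges from agents following the rules.
import random

N_AGENTS = 30
NEIGHBOUR_RADIUS = 5.0   # how far an agent "sees"
TOO_CLOSE = 1.0          # distance at which agents steer apart
STEP = 0.1               # how strongly an agent responds each tick

# Each agent is just an (x, y) position; start them scattered at random.
agents = [[random.uniform(0, 20), random.uniform(0, 20)] for _ in range(N_AGENTS)]

def tick(agents):
    """Advance the flock one step using only local rules."""
    new_positions = []
    for i, (x, y) in enumerate(agents):
        move_x, move_y = 0.0, 0.0
        neighbours = []
        for j, (ox, oy) in enumerate(agents):
            if i == j:
                continue
            dx, dy = ox - x, oy - y
            dist = (dx * dx + dy * dy) ** 0.5
            if dist < NEIGHBOUR_RADIUS:
                neighbours.append((ox, oy))
            if 0 < dist < TOO_CLOSE:
                # Rule 2: don't bump into your neighbour -- steer away from it.
                move_x -= dx / dist
                move_y -= dy / dist
        if neighbours:
            # Rule 1: stay close to your neighbours -- steer toward their centre.
            cx = sum(p[0] for p in neighbours) / len(neighbours)
            cy = sum(p[1] for p in neighbours) / len(neighbours)
            move_x += cx - x
            move_y += cy - y
        # A little randomness, which is what lets novel configurations appear.
        move_x += random.uniform(-0.5, 0.5)
        move_y += random.uniform(-0.5, 0.5)
        new_positions.append([x + STEP * move_x, y + STEP * move_y])
    return new_positions

for _ in range(100):
    agents = tick(agents)
print("Flock centre after 100 ticks:",
      sum(a[0] for a in agents) / N_AGENTS,
      sum(a[1] for a in agents) / N_AGENTS)
```

The point of the sketch is only that nothing in it looks at the whole flock, yet the flock holds together; the “intelligence” lives in the network of interactions, not in any individual agent.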
Distributed intelligence is significantly different from the large language models (LLMs) that have become the basis of the corporate use of AI. LLMs are based on the past and extrapolate into the future in a limited, restricted way. Of course they are going to use huge amounts of energy and provide more and more wrong answers! Think of the difference between a database and a spreadsheet. The database tells you what happened. The spreadsheet tells you what could happen. I use spreadsheets for database purposes too, as do many people, but if you are not generating reports it is not a problem. In the same comparative manner, distributed intelligence can come up with creative answers simply because it allows for randomness, something LLMs do not do.
Here is another point that I touched on in my latest book.
[Begin quote]
Distributed intelligence can also shed new light on the social revolutions of the 1960s and counterculture history, previously mentioned in the chapter on demographics. A few simple rules allowed an infinite range of behaviors beyond the few behaviors that are prohibited. It also brought any random variation into sharp focus. This could be a scientific counterpoint to Teilhard de Chardin’s idea of the noosphere.
De Chardin proposed the noosphere as the sphere of thought encircling the earth that has emerged through evolution as a consequence of the growth in complexity/consciousness. In this formulation, a new idea is released into a hypothetical layer around the earth that is outside of time and space. When I think of something new, it is projected into the noosphere and available for anyone else to pick up on. The key to picking up on new things is how well prepared you are.
Of course, that is a simplistic view of de Chardin’s rather elaborate system of philosophy linked with natural phenomena, but the point is that a distributed net of organisms acting by a few simple rules can appear to be controlled by an intelligence that is beyond ordinary measurement.
[End quote]
The point here is that the focus on LLMs in AI is a Big ‘Nuthin. (The name of a wonderful song by The Roches from 1989, by the way.) If the mucky-mucks financing this nonsense weren’t so focused on centralization and profit, they might be onto something important.
I should probably add a little bit on how mimicry works in distributed intelligence (as well as in humans!). If a random action works (is successful) in a distributed intelligence network, the action “can” be repeated (but not always). In Crichton’s novel, the nanobots “bumped into” some adaptations that made them more successful in living outside the lab and actually taking over some human hosts. The mass of nanobots then adopted these successful actions through mimicry. Sorta like adopting the techniques of the best hunter in your tribe. And of course, culture takes this to a higher level, which is why mimicry is a necessary part of human culture, but a subset of the behaviors making up culture.
Where to pre-order in the UK?
Chelsea Green says US only and the Barnes & Noble link sends me to a US site…
And yes, AI, not much use really! It can probably turn out OK GCSE essays, but give it anything useful to do and, unless you already have a high level of knowledge to correct it, it makes daft and potentially damaging mistakes. I had the paid ChatGPT model use our extensive soil analysis data to make a table of organic inputs for our market garden. It confused ha with acre and suggested I apply lime at 2.4x the correct rate (1 ha is roughly 2.47 acres, which accounts for the factor), and it consistently suggested non-organic inputs despite the prompts telling it to cross-reference all recommendations to comply with organic standards.
Anything potentially useful it might do is riddled with mistakes.