Virus of Thought
For a long time, artificial intelligence (AI) has struck me as a bit too cute: a toy, a supercharged Alexa, great at dispensing already established knowledge but incapable of any real innovation. Like Covid or climate change, it seems to have been accepted irrationally, without serious examination. The best analysis of this that I’ve read is by Robert Gore at Straight Line Logic, in this post.
This goes along with something I just read: that the big financial backers of AI are abandoning it in favor of compliance. I don’t know exactly what that means, or how compliance is likely to become the new AI, but that’s what I heard. So the money train is switching tracks, and a lot of people are going to be left behind with little more than Alexa-plus.
The failures of AI are well expressed in the Robert Gore post, but my concern comes from the idea that this virus of thought has already infected the US government, China, and Russia, and that trillions will be spent trying to justify it long after everyone else has moved on, the way one is still encouraged to get the vaxx today, after its pitfalls have already been exposed.
My objections to AI have always been logistical and skeptical. First, the amount of energy needed to power huge data centers is not available now, nor will it be a decade from now. The fear put into the US government, and Trump specifically, is that China and/or Russia would get ahead of us in the race for AI-supplemented weapons systems and hold a strategic advantage. To that degree, he might be right, because if AI can do anything, it is correlating already existing knowledge: specific targets, ranges, coordinates, and so on. But that’s not how it’s being sold to the public. It’s being sold on ideas that cannot happen.
One of the highlights of Gore’s post is that AI can take human knowledge and better organize it, categorize it and disseminate it, but in the area of autonomous innovation it falls flat.
Writers and filmmakers alike fear that AI can write and generate stories as well as humans, and I have long argued that this belief is a fallacy. It might be able to regurgitate storylines and themes well enough, but going into the future, what is it going to draw on? Its own already devised storylines and themes. So where would a work such as Frankenstein come from? Mary Shelley published it in 1818, and it was a literary revolution, yet it drew on something no machine has: emotion. A robot might be able to simulate emotion, trigger a tear at the right moment, but it cannot understand the emotion that comes from fear or exultation.
When I think of the things I want to write about, I draw on everything I have ever experienced. I create a world in which I might be able to express that experience so that others will find a correlating experience in their own lives: lessons learned, heartbreaks endured, visions sought and denied, frustration, and so on. Those things come from living and working with other people who have millions of inter-relational experiences of their own, experiences that have shaped their attitudes toward people with similar or opposing biases. When one thinks of all the complex social and professional interactions one has had from birth to the present, the idea of some switches and diodes replicating that is insane. I might be able to draw a completely lifelike bobcat, but I cannot make one. That is the limitation of AI-generated output that so many people seem to overlook.
One benefit of the drive for AI that I can recognize is that it exposes the derelict state of the power grid and the unnecessary weight that climate-change policy imposes on it. Suddenly, big power brokers like Gates have distanced themselves from the climate-change hysteria because they realize that we need much more power, not less, and more reliable power, not less reliable. It has fueled the push for modular power plants, the idea being that if they can supply electricity to cities and smaller towns, the big power plants will be freed up to fuel AI projects. Okay, I’ll take that. I’d like nothing better than a container-sized power plant using saltwater fusion to supply electricity to my region (no, I have no idea whether that is likely or even possible). It’s the investigation into the concept that’s important.
AI might be able to do some calculations that help innovators innovate, drawing on all the known principles and ideas that would take a couple of decades to compile on one’s own, but to expect it to be creative is a step too far in my mind. As Gore’s post points out, it doesn’t take very many generations of AI copying its own output to lose all resemblance to the original. It’s sort of like putting a copy through the copier, degrading it a little each time.
The problem, in my mind, is that the hype surrounding AI, with its possible upsides extolled and its downsides demonized, seems out of control. There’s little rational contemplation of whether it is good or bad, and anything bad about it would be hidden by those who are in line to benefit from it. The electric vehicle is a good example of this mindset. Its downsides are so numerous that the vehicles seem illogical, but the hype for them comes from those who make them and those who will need to unload them at some point.
If you’re selling one, you don’t want to recognize the environmental impact of the batteries, or that if they come in contact with water they’ll short out and might catch fire. Makers currently brag that they can get up to 100,000 miles out of a battery. Most of my vehicles have over 300,000 miles on them, or will have before I get rid of them.
EVs are so dangerous that shipping companies have refused to take them on their ships, because they can spontaneously start a fire that can’t be put out on the high seas. The fact is, until the battery totally burns out, there’s little chance of putting one out even on land. That would, in normal times, be a knockout blow. Not now. If you have one, love it, and enjoy driving it more than life itself, I’m not here to tell you you’re wrong, but I’d never buy one. I won’t even park next to one.
Tapping into the power of self-righteousness seems to be the winning marketing strategy for everything these days, but too many refuse to recognize the danger of it. It’s like the push for mass migration: it doesn’t matter how many children are sexually abused, women raped, or men murdered; the self-righteousness of not being racist overwhelms all the downsides.
This is why we’re on the verge of world war: too many are so involved in their own self-righteous crusades that they can’t see, or don’t care, that their actions are pushing the world ever closer to an oblivion where none of what they care about will matter. The EU, for example.
Become a paid subscriber and join the conversation. Currently $5 per month or $30 per year.
In order to offer my subscribers a further benefit, please use the discount code “subscriber” for a secret 30% off the paperback versions at Twelveround.com. This discount will not be advertised; this is the only way to get it, but you can offer it to friends and family.
Twelveround.com is still the home of quality fiction with 60-70% 5-star ratings. If Westerns, Shadow Soldier, Home to Texas and Into Exile aren’t your thing, Rebel and Rogue are more modern (1970s). If you don’t want to buy from Amazon or have them in e-book format, you can get the physical novels from Twelveround at the above discount.



I have written two things about regulating AI. First, all AI installations must create their own power supply; with all the money involved, they should be able to do that. Second, all AI productions must clearly indicate that they are AI productions at the beginning, during, and at the end, with severe penalties for non-compliance.
It very much reminds me of the dotcom bubble. Amazon is one of the few that survived.