The rapid pace of AI related news is hard to keep up with. Here is a brief update on some of the developments since my previous post on the topic.
Artificial intelligence is not directly comparable with human or animal intelligence. One important difference, so far, is that AI models have had limited opportunities to act on the world around them and observe and learn from their actions, whereas animal learning is fundamentally interactive. The human self-understanding as a species essentially different from (other) animals derives from our pride in our higher intelligence. Abstract thinking through language, mathematics, schematics, and so on sets us apart from the rest of the animal world. Our problem-solving skills have enabled us to construct an advanced civilization with technologies so useful that most of us would literally not survive long without them. Our intelligence also enables us to invent new ways of being cruel to those with less power, including animals. And most significantly, we have invented technologies that drive climate change, speed up resource depletion and mass extinction, not to mention the arsenal of nuclear weapons that are able to wipe out most advanced forms of life at one fell swoop. For those who doubt the still persisting danger of nuclear war (which we came very close to in the autumn of 2024) there is much to learn from the late Ellsberg or Ted Apostol.
Intelligence may come with an evolutionary cost. It might not be such a useful mutation in the long run, as Richard Heinberg argued recently. According to the theory of the Great Filter, this could be the reason we have not discovered extraterrestrial life yet. Heinberg suggests that we need a complement of ecological wisdom to save us from ourselves. Trying to "augment" our intelligence by brain implants or weaving ever more AI and computation into our social web would be the worst direction to pursue.
Optimists, who probably get a salary for keeping spirits up, always remind us of the benefits ranging from medical diagnostics to automation of every conceivable boring task, and even the bizarre delights of AI-generated art. Slightly more realistic optimists acknowledge that we need to approach AI with caution. What would it take for you to trust an AI? How can we regulate the worst problems of this wonderfully promising and unstoppable technology? Such questions are asked by those who have a stake in the continued and expanded use of AI, those who shrug off the bleak doomsday scenarios and ethical dilemmas of unauthorised appropriation as something we will eventually be able to solve.
When Chinese LLM DeepSeek made some of its source code openly available, its American proprietary source competitor OpenAI saw their stocks plummet and accused them of plagiarism. Which is ironic, because what else can you call the training of an AI than plagiarism? (But we stole it first!) While some commentators reacted with schadenfreude, others pointed out that DeepSeek refused to speak about sensitive political issues. These models are prone to present hallucinations in a confident tone. However, the debate about how much confidence one should have in an AI really misses the larger picture.
A rigorous study by researchers at Microsoft looks at the effects among knowledge workers of relying on generative AI tools. In a broad enough definition, the category of knowledge workers includes designers, artists, musicians and composers who use AI "creatively," whatever that means. It goes without saying that this research group represents an interest in continued use of AI, which is precisely why they address some potential adverse effects and suggest solutions.
Some of the findings are that users of generative AI tools produce a less diverse set of outcomes for the same task than those who don't use them. When the task is simple, users may have confidence in generative AI without bothering too much with quality control, but this also results in a blunting of critical thinking because they get less opportunity to practice it. The study's notion of critical thinking also includes aesthetic or stylistic quality assessment of generated images, music, or text. The promise of efficiency and reduced workload is often offset by a need for more fact checking, editing, and assessing potential risks such as the safety of generated code. Users with low self-confidence in the task they try to solve are found to be more prone to uncritically accept AI-generated output. AI tool use thus can lead to over-reliance on the tool and diminished problem-solving skills.
In other words, the use of AI tools may alleviate the work process for those who are already highly skilled and have a solid understanding of the problem at hand, whereas it leads to less critical thinking efforts among those who are less skilled.
If that isn't bad enough, another study just realised that two commercially available LLMs are able to self-replicate. You just ask it to create a copy of itself and run the copy on the local disk. The two AI systems in the study were able to perform advanced problem solving by breaking up the task into several smaller problems, by exploring file structures and available commands on the computer. If a suitable command was unavailable it wrote a replacement code snippet and executed it. These systems can use self-replication to avoid shut-down, and can instruct the replicated copy to further replicate itself. What could go wrong? As these researchers point out, the tested systems were not the most advanced ones currently available, and progress is rapid anyway.
Then there is the under-reported issue of the huge energy demand of super-computers running large AI models. There have been proposals to build dedicated nuclear reactors to power these data centres. These computers also need cooling, which consumes fresh water and is an additional source of energy expenditure. Crypto currencies and NFTs already consume a lot of electricity, and with expanded AI usage data centres are responsible for a significant share of the global electricity use, sometimes compared to that of a smallish country.
I'm not sure whether the environmental impact from the enormous energy consumption, mining for rare earths needed for computer chips, and fresh water for cooling at the data centres outweighs the existential risk of self-replication, but they all need to be considered among the adverse effects of otherwise potentially benign AI use. In addition, there is the effect of dulling the user's critical thinking, threatening various sectors with unemployment, and the exploitation of human creativity and copyright issues. The question, But is it art? comes far down on my list of concerns. Nevertheless, when an auction house like Christie’s sells AI art of an embarrassing quality, their argument that it can be considered art (and I don't deny that it can be) is particularly stupid.
By human standards, the generated portraits of some fictional Belamy family are sloppy, mannerist, and full of hints that these are AI generated images. The texture at the scale of brush-marks looks like a compromise between wallpaper, dithering patterns, and jpeg compression artefacts. Once these images exist, a good painter would be able to emulate the style. Conversely, the AI has not succeeded particularly well at mimicking the style of classical painting in these examples. In their blog post, Christie's explain the process and intentions of the team behind the images:
They are still addressing the fundamental question of whether the images produced by their networks can be called art at all. One way to do that, surely, is to conduct a kind of visual Turing test, to show the output of the algorithms to human evaluators, flesh-and-blood discriminators, and ask if they can tell the difference.
Although the artificial quality is easily discerned in the Belamy family portraits, there are already much more convincing examples where AI art risks being confounded with purely human creations. Surrealistic imagery and abstract art in particular lend themselves to such exploitation. But being able to fool viewers that a human might have painted the image is a dubious criterion for what is art. As a somewhat more solid alternative they suggest another criterion: how inspired viewers are by the various images. In fact, there are viewers who find some of the AI-generated images more inspiring than some human-made paintings.
Still, this is to misunderstand fundamental mechanisms of the art world, and I count auction houses as a part of it. By selling these mediocre quasi-paintings for up to $432,500, Christie's and the buyer have institutionally validated it as real art. Any discussion of their aesthetic merit or lack thereof is beside the point. The same goes for speculations as to whether the machine can be considered the artist, rather than the team who generated it aided by the automated appropriation of previous art history.
Some four thousand signatories have demanded that Christie's cancel an auction of AI art. In defence of the AI artists, a Christie's representative said that "you can see a lot of human agency in all of these works," and that AI is used not to replace human creativity but rather to enhance it. The exploitation of prior art remains a fact, and we should not forget how much resources are wasted in the making.