25-06-2025
Machine intelligence and wisdom
A profound question emerges at the intersection of philosophy and science: Can algorithmic intelligence alone ensure an ethical and secure future, or must humanity strive towards a "digitisation of wisdom", imparting moral depth to machines, directing their immense computational power towards noble human ends?
At first glance, we may not grasp the true gap between 'machine intelligence' and 'machine wisdom', perceiving it merely as a play on words. Yet this gap reaches into the nature of consciousness, the architecture of will, and the consequences of purposeful action. I have previously explored the nature of consciousness, both artificial and spiritual, and its ethical implications. Now, we confront a new realm where the paths of intelligence and wisdom intersect and diverge within digital machines.
Modern algorithms have exceeded our expectations. They operate as deep neural networks with trillions of parameters, capable of generating entire libraries of text within minutes. They outperform humans in pattern recognition, data retrieval and solving complex mathematical problems. In 2016, Google DeepMind's AlphaGo defeated world champion Lee Sedol in the ancient game of Go, once believed too complex to be fully codified. While this victory was mathematically dazzling, the algorithm itself felt no triumph, no emotional reward. It solved the problem within rigid logical constraints, but it could not reflect on the meaning of playing, of winning, or of emotional resonance.
This exemplifies the essential distinction: intelligence is the ability to represent and manipulate knowledge to achieve goals efficiently, while wisdom — in the tradition running from Aristotle's practical wisdom to Kant's practical reason — is the faculty of judgement and foresight. Wisdom surveys causes and consequences, weighs short-term gains against long-term values, and contextualises facts within cultural and moral traditions.
The first challenge lies in the very nature of wisdom: it resists codification into fixed rules or equations. Wisdom arises from the interplay of personal experience, collective tradition and intuitive awareness of the long-term effects of our actions. Neuroscientific studies show that the brain's prefrontal cortex, responsible for long-term moral judgement, interacts with immediate reward systems, balancing present pleasure against the benefit of future generations. How, then, can a machine — confined within an unfeeling digital framework — understand choices whose effects extend far beyond its operational lifespan?
Proponents of value-aligned AI argue that algorithms can be trained through reinforcement learning to consider ethical variables within their reward functions. Weights could be assigned, for instance, to increase penalties for violating human dignity or to account for environmental sustainability across generations. Yet three major issues arise:
1. The Epistemic Dilemma: Who decides which values are encoded? Humanity has never reached universal consensus on ethical hierarchies. Should individual liberty outweigh social justice? Do spiritual beauty and dignity have computational value?
2. The Technical Barrier: Wisdom requires understanding indirect and unforeseen consequences, something that even the most powerful computers, quantum machines included, cannot yet simulate with sufficient accuracy, especially when social and psychological factors are involved.
3. The Trust Problem: Suppose we create a 'wise' algorithm: how can we convince ordinary users to trust its decisions, especially if they clash with intuition or immediate interests? Ethical AI demands explainability, not just for scientists, but for the public who must live with its impact.
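The reward-shaping idea behind value-aligned AI can be sketched in a few lines. The outcome fields, penalty terms and weights below are hypothetical illustrations, not an established scheme; as the epistemic dilemma above makes clear, choosing such weights is itself a contested value judgement.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """A candidate action's predicted effects (all fields hypothetical)."""
    task_reward: float          # immediate task performance
    dignity_violation: float    # estimated harm to human dignity, in [0, 1]
    sustainability_cost: float  # estimated burden on future generations, in [0, 1]

# Hypothetical weights; no consensus setting exists, and encoding them
# is precisely the epistemic dilemma described above.
W_DIGNITY = 10.0
W_SUSTAIN = 5.0

def shaped_reward(o: Outcome) -> float:
    """Penalise ethically costly outcomes alongside raw task performance."""
    return (o.task_reward
            - W_DIGNITY * o.dignity_violation
            - W_SUSTAIN * o.sustainability_cost)

# A high-performing but harmful action can score below a modest, harmless
# one once penalties are applied.
efficient_but_harmful = Outcome(task_reward=8.0, dignity_violation=0.5,
                                sustainability_cost=0.2)
modest_but_safe = Outcome(task_reward=5.0, dignity_violation=0.0,
                          sustainability_cost=0.0)

assert shaped_reward(efficient_but_harmful) < shaped_reward(modest_but_safe)
```

Note that the shaping only relocates the problem: the machine optimises whatever numbers we hand it, and the hard question of which values those numbers should encode remains a human one.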
Amid these debates, a deeper question emerges: Is wisdom something that can be digitally transferred, or is it a living experience requiring conscious awareness? Philosophers like Thomas Nagel and John Searle suggest that subjective consciousness — our felt sense of being — cannot be reduced to mathematical functions. If true, then wisdom may not be measured by efficiency alone, but by a kind of ethical taste akin to artistic or poetic sensibility.
Scientific efforts towards 'mind uploading' raise intriguing possibilities. Theories like Integrated Information Theory (IIT) posit that consciousness results from complex physical integration of information, a view supported by a 2016 study published in Nature. However, our current technologies remain far from matching the human brain, with its 86 billion neurons and countless connections, not to mention the yet-unexplained depths of mental life possibly linked to quantum mechanics.
The wisest path may begin not with technological ambition, but with intellectual humility. Machines must not replace humans in moral judgement but rather complement them. Human-in-the-loop systems — where humans retain final authority — could ensure accountability in decisions affecting future generations. For example, AI may propose renewable energy strategies based on historical data and climate simulations, but elected bodies or national councils should determine the acceptable level of risk.
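The human-in-the-loop pattern described above can be sketched as a simple escalation gate. The proposal fields, risk threshold and approval callback are hypothetical names for illustration; the essential point is that above some policy-defined risk level, the machine's output becomes a recommendation rather than a decision.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Proposal:
    """A machine-generated recommendation (fields hypothetical)."""
    description: str
    estimated_risk: float  # model's own risk estimate, in [0, 1]

# Risk level above which a human decision-maker must sign off; the
# threshold itself is a policy choice, not a technical constant.
RISK_THRESHOLD = 0.3

def decide(proposal: Proposal,
           human_approves: Callable[[Proposal], bool]) -> bool:
    """Auto-accept only low-risk proposals; escalate the rest to a human."""
    if proposal.estimated_risk <= RISK_THRESHOLD:
        return True  # low stakes: the machine may proceed
    return human_approves(proposal)  # final authority stays with people

low = Proposal("optimise grid storage schedule", estimated_risk=0.1)
high = Proposal("retire a regional power plant", estimated_risk=0.8)

assert decide(low, human_approves=lambda p: False) is True    # no escalation
assert decide(high, human_approves=lambda p: False) is False  # human veto holds
```

The design choice here is accountability: however sophisticated the simulation behind a proposal, a human institution remains answerable for accepting its risk.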
In the end, wisdom may remain an inherently human virtue. Our challenge is not to manufacture it inside machines, but to embed our best human values into the decisions they help shape.