What Gödel's incompleteness theorems say about AI morality
Briefly

Artificial intelligence is increasingly envisioned as an unbiased moral arbiter, entrusted with sentencing, resource allocation, and conflict mediation. Yet whether AI can understand morality as humans do is deeply questionable: because ethical reasoning is complex and contextual, AI may simply replicate human biases and limitations. The difficulty lies in the intuition and historical awareness essential to moral judgment, qualities that resist formalisation. Attempts to encode ethics into formal rules may strip morality of its essential depth, even where frameworks such as utilitarianism appear to offer axioms from which specific ethical prescriptions can be derived.
Artificial intelligence might replicate human decisions without improving on them, carrying forward the same biases, blind spots, and cultural distortions from human moral judgment.
Ethics may be so embedded in lived experience that any attempt to encode it into formal structures risks flattening its most essential features.
AI would not merely reflect human shortcomings; it would strip morality of the very depth that makes ethical reflection possible in the first place.
A classic example comes from utilitarianism, which often takes as a foundational axiom the principle that one should act to maximise overall wellbeing.