AI Consciousness: A Biological Perspective
Introduction
Most policy debates about AI revolve around its potential upsides: whether AI, acting as an augmented decision-maker, can help solve existential risks like climate change or pandemics. But a different kind of argument has started appearing, one that invokes the potential for infinite suffering brought about by the development of Artificial General Intelligence (AGI) to justify human extinction. AGI refers to a hypothetical AI system with human-level cognitive abilities across all domains, as opposed to systems built for narrow tasks like next-token prediction or image generation. If AI truly became smarter than humans, such systems could, in principle, think for themselves and adapt. The question this post explores is whether that version of AI could ever actually suffer.