The AGI Enigma: A Reflection on Human Values and Future Ethics

Today, let’s dive into a topic that’s been buzzing around for quite some time, thanks to foundation models: Artificial General Intelligence (AGI). There’s a lot of chatter about how AGI could reshape our world or even render us obsolete. But honestly, I don’t even agree with the premises of that debate. To me, the principles and assumptions behind it are riddled with human bias.

AGI Is Not Just a Better Human; It Is Something Else

Let’s establish something first: even if AGI becomes a reality, it won’t simply be a smarter human, no matter how much we project our assumptions about what intelligent people do onto it. AGI, by its nature, will be different, possessing capabilities and perhaps even “thought processes” that are inherently non-human. Could we claim that we know how an octopus or a dolphin thinks? Have we even cared to put energy into understanding how other beings on our planet “feel” and “see” the world? Judging by where we put our efforts, the answer is a flat no.

AGI Based on Human Values

To mitigate the presumed actions of bad AGI agents, some advocate that we must define human values and design products around them to prevent an AGI catastrophe. While designing products that align with human values is wise (after all, humans are the customers who pay the money), assuming we can safeguard against AGI becoming a harmful agent by instilling our values in it is, in my view, just a big lie, for several reasons.

How can we be sure our values are the “right” ones? Human values have shifted dramatically over the centuries. Can we say the values that supported colonialism were good? The values we hold dear today might be considered unethical in the future. We have gotten some things right and many others wrong, and history is littered with our illusions. So I argue that if we aim to develop AGI, the task isn’t to define a set of values for it. Rather, we should approach AGI as a being (in the Heideggerian sense) and strive to understand it, which, as our track record shows, we are not very good at.

AGI Based on Our Current Values and Goals

Now, let’s address the darker view of AGI, the one that foresees our doom. Frankly, I would say the current goals of humanity are already our doom, so not much difference there! The link between AGI and productivity is crucial here. Since the onset of industrialism, our pursuit of productivity has only escalated. If we design AGI around the values and goals of our current era, we are essentially encoding a desire for an ever more productive system. And what do you think will happen? Humans are not efficient. You “may” try to build that second brain or process information faster, but let’s be honest: humans are no longer the best at many tasks.

This opens up a Pandora’s box of questions and potential scenarios. If we instruct AGI to optimize for productivity and efficiency, where does that leave us, inherently imperfect and inefficient beings? It’s a stark contrast and a potential pitfall that we must navigate carefully, ensuring that our pursuit of technological advancement doesn’t inadvertently devalue our own existence and capabilities. But to do that, we would have to change how we, often without noticing, keep pushing for more efficient systems, better technologies, and faster machines. That push has never slowed down. So you can imagine what will happen anyway…

This post is a reflection of my current thoughts on the AGI dilemmas we may face and on how current outlets do not necessarily represent them.