Psychological and philosophical issues with AI and what it can teach us about being human

The rate of technological progress has exceeded the rate of our psychological development by orders of magnitude in the last couple of centuries. Psychologically, we’re pretty much the same as the ancient Greeks might have been, if not less developed, and yet the technology at our disposal would have been unimaginable even a few decades ago. The mismatch between technology and our readiness to deal with it was already made obvious by the scale of destruction in the 20th century’s world wars, and by the rate of psychological conditions in Western populations despite unprecedented material wellbeing and comfort.

Our unhealthy relationship with social media and smartphones has been the most recent and most unmistakable alarm bell. But none of that seems to deter us from going forward in the same direction; if anything, we are speeding up the pace exponentially. The explosion of ChatGPT serves as an all too poignant reminder that the apes got their hands on a new toy and have no clue what it’ll do to them.

A deeper look into the psychological and philosophical issues around AI inevitably involves asking some questions that are fundamental to our human nature and existence. So what we can potentially gain from playing with different scenarios is not simply a strong opinion on emerging AI, but also insights into questions about ourselves we didn’t even know we had.

AI is implied by the spearhead

When the first hominid took two pieces of rock and struck them together until one became sharp and pointy enough to be used as a weapon to kill prey far more efficiently, the process that would inevitably culminate in AI some millions of years later had already begun.

The benefits of any new technology, from the pointy piece of rock to AI, are obvious. The benefits are the very reason the tool gets created in the first place. The prehistoric hunter equipped with a stone spearhead returns with prey much more often than the guy using his bare hands, and as a result, not only do he and his tribe enjoy a higher quality of life, but they also gain an evolutionary edge to reproduce and survive.

What is less obvious is the danger the sharp piece of rock brings upon our inventive hunter and his tribe. The moment he created his tool, his world changed forever. More to the point, the world of humans changed irreversibly. Other tribes are now incentivized to either come up with their own spearheads or take his. Once competing tribes are equipped with the same weapon, they are also incentivized to slaughter his entire tribe and take everything he has, or to strike first as a preemptive measure so that he doesn’t do it to them.

So the consequence of the spearhead (replace with any piece of technology) is twofold: it becomes much easier to kill prey (gain something) and much easier to fall prey yourself (lose something).

Perhaps the sharp piece of rock is too easily dismissed as obviously having a downside, since it’s practically a weapon. Let’s look at a more benign example.

The iron plow could potentially kill a person by accident, but it would be a fairly clumsy and inefficient weapon to be used in battle, which is why it probably never was. We can take the plow as a symbol of agriculture as a whole, which is largely responsible for the human race spreading to previously uninhabitable areas and providing a steady source of nutrition and a quality of life that our prehistoric ancestors would not even have dared to dream of. What possible dangers could such a great invention entail?

If the people who started using the plow first didn’t know, we sure do today. Exploitation of the soil, monocultures, loss of biodiversity, overproduction, overeating, overpopulation, disease, war.

The downsides of new technologies are rarely instant and obvious, but all the more impactful and difficult to reverse.

For every problem technology solves, it creates a new one. The problem solved may have been an inconvenience (body odor); the problem created by the solution may be life-threatening (the ozone hole). We pretend to be ignorant of this regularity and keep coming up with new technologies to solve the problems previous technologies created.

Let’s take the ultimate problem, disease and death. Who wouldn’t want to cure the sick? Who would even dare to argue that medicine, the solution to disease may have a downside? When our loved ones are sick, we’d do anything it takes to cure them, and if we could, we’d make sure they live forever. But what happens when everyone tries to do just that?

Dozens, if not hundreds, of companies worldwide are working around the clock to extend human life, potentially indefinitely. Imagine that one of them succeeds. The human immortality pill is created. On the one hand, that’s the best thing that could ever happen: billions of people who are sick and dying would be saved. And yet, if the immortality pill is discovered too early, it will be a total disaster, potentially the end of our civilization.

How would we decide who gets to live forever? Even if we say let everyone alive today live forever, who would decide who gets to have children? There is obviously not enough Earth to keep everybody and their infinite generations of offspring alive. So the most humane thing ever, the elimination of death, would create a world where a certain part of the population would need to be sterilized to ensure they can’t reproduce. The availability of the immortality pill would lead to the worst of wars and probably the end of our civilization as we know it.

The solution to the mess created by the immortality pill could be a new piece of technology that enables humanity to become a multiplanetary species. If it came just in time, we’d be able to relocate the excess population to other planets and the fight for the right to live forever and reproduce would cease. Until, of course, we’d run out of habitable planets or into yet another seemingly essential problem, to which we’d come up with yet another technological innovation and repeat the cycle ad infinitum.

So what is it about technology that makes it such a double-edged sword?

Tools and technology make gaining something easier but they also inevitably make losing something else easier. With the immortality pill, we may gain eternal life, but we would likely lose our civilization to it, and definitely lose the value and meaning of human life as we know it, which originates from the simple fact that it may end at any moment.

Technology only ever solves the problem at hand, but it doesn’t cure the human condition, because if there is such a cure, it is not technological, but psychological.

If you want to live forever, I think you are missing the point of human life.

The people who claim to understand AI disagree about whether it’s dangerous or benign

The paragraphs above might convey the impression that the writer of these lines is some anti-tech, traditionalist, religious caveman who is convinced that technology is inherently bad and dangerous.

Far from it.

If the spearhead had been developed by a species whose repertoire of behavior didn’t include killing its own kind, then the spearhead would have had no downside. Similarly, if we were not prone to overpopulation and to exploiting our environment, medicine and agriculture would have no downsides. But we are who we are, and we use, overuse and abuse every piece of technology according to our psychological and intellectual limitations.

The dangers of technology are not inherent to technology, but to us, the people who make and use it. Technology is not dangerous by itself or because of some natural law. It is only dangerous because of human nature.

So what we need protection from is not technology, but our own human nature.

That’s true of all technology except AI.

Because AI becomes of real value to us only at the point when it gets smarter and more capable than we are. Incidentally, that’s also the point of no return, after which we don’t know what happens.

A group of people seems not only to comprehend the risks of new technology, in this case AI, but also to make sacrifices to mitigate those risks, at least to the point of making a public statement. The impact of the petition to pause AI development for six months may be questionable, but the motive of the people who signed it is beyond any doubt. They didn’t do so out of fear of losing their livelihoods, like the Luddites in the 19th century who simply destroyed the machines in factories. These are not your ordinary technophobes, but smart people who recognize that the invention of the spearhead changed the world irreversibly, and not evidently for the better. And we are about to do it all over again, except at a much larger scale and in an irrevocable way.

Eliezer Yudkowsky didn’t sign. He says it’s asking for too little, too late, and that if we don’t shut it all down, we are all going to die.

The way in which AI is different from all previous pieces of technology is that it doesn’t need to get into the wrong hands, be overused or abused in order to kill all of us. It simply needs to be created.

It’s not like AI is going to be inherently evil and want to kill us. Our existence will simply be of no significance to it, just like we don’t care how many millions of bacteria we may kill with our every breath or how many ants we kill with our every step.

We could, in theory, create the AI in a way that it preserves humanity above anything else, but we don’t know how to do that just yet. And we don’t get a second try with this one, like we think we do with the climate.

“We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong.

Shut it down.”

Eliezer doesn’t have high hopes. He goes on to propose a global treaty to shut down all AI research and experiments, and to enforce that agreement by the threat of violence, even if the violating party is a nuclear power.

Many smart people disagree with Eliezer. Some say it’s a non-issue, which doesn’t seem to hold up even if we disregard how unique the AI situation is compared with every other technology that didn’t kill us all, but improved our lives in one way while making them worse in others. And then there’s the optimistic bunch, the ones who actually signed the petition, so they are obviously worried, but who think the game is not over yet: if we are careful enough, we can get this right.

The worst case scenario: we all die

It is difficult to take a stance on who is right or wrong in this debate. If the experts hold such diametrically opposed opinions, plus everything in between, what chance do we stand of making sense of it?

As established above, what seems undeniable is that the pace at which technology develops and transforms our lives is way too fast. We can’t keep up with it.

Our mental and emotional systems evolved to pick berries and run away from tigers, to live in small groups, play around and have lots of sex. We think we are the privileged ones for living in an age where child mortality is at a historic low and we can have hot showers any time of the day. And we certainly are. Our experience today is vastly richer than that of our ancestors. But all that comes at a price, which we must pay even if we close our eyes at the checkout.

The price we pay is all the problems we have, which are only possible because we live these vastly richer, safer and in many ways better lives than our ancestors did. None of our problems today would be possible without that richness, safety and quality. And all this is pre-AI.

What illustrates the magnitude of the issue we are facing with AI is that even if we followed Yudkowsky’s instructions to the letter on how to halt and police the development of AI globally, we’d end up in nuclear war, because the incentives and our current level of psychological development would make that inevitable.

If AI research were banned globally, secret experiments would start the next morning, because everybody would assume that everybody else would break the treaty, and the first one to do so would dominate everyone else. Game theory 101. You can’t ban the quest for the Holy Grail; well, technically you can, but it won’t stop anyone. The desire to get there first is way too strong.
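The "Game theory 101" intuition here is the classic prisoner's dilemma. A minimal sketch, with invented payoff numbers chosen only to illustrate the incentive structure, not drawn from any real analysis:

```python
# Two rival countries choose whether to "honor" an AI ban or "defect"
# (research in secret). Payoff numbers are made up for illustration.
payoffs = {  # (my choice, their choice) -> my payoff
    ("honor", "honor"): 3,   # collectively best: nobody races
    ("honor", "defect"): 0,  # I stay honest, they dominate me
    ("defect", "honor"): 5,  # I defect first and dominate
    ("defect", "defect"): 1, # mutual secret race, worst of both worlds
}

def best_response(their_choice):
    # Pick whichever of my options maximizes my payoff, given their choice.
    return max(("honor", "defect"), key=lambda mine: payoffs[(mine, their_choice)])

# Whatever the other side does, defecting pays more for me, so both
# sides defect, even though mutual honoring would leave both better off.
print(best_response("honor"), best_response("defect"))  # → defect defect
```

Defection is the dominant strategy for each side individually, which is exactly why "everybody would assume that everybody else would break the treaty."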

Even signing such a treaty, let alone not breaking it, seems impossible, as it would require all the countries of the world to work closely together, work out all the details and not only put aside their differences (as they did to some extent when COVID hit), but actually trust each other. Not going to happen. Not at the level we are at as a species, and not with the quality of politicians we manage to delegate to make decisions on our behalf.

The genie is out of the bottle. Technically, the singularity (the point at which AI reaches exponential self-improvement) is still ahead of us, but it does not seem to be a question of whether, but of when. It’s the gold rush all over again. Imagine trying to ban digging at the time of the California gold rush. A total ban does nothing but raise the stakes and incentivize quicker, less thoughtful action. That’s what we can expect from a ban.

Unless it turns out to be technically impossible, superhuman AI is going to happen. And then our last hope is that Yudkowsky is mistaken about the alignment problem and the human race will occupy a special enough place in the “heart” of the AI to be left alive.

AI may be part of the natural evolution of life. Perhaps every intelligent species that ever existed out there in the universe eventually built its own AI and instilled its own value systems and priorities in it, which, if their EQ/IQ ratio resembled ours, inevitably made them extinct. Could this be the reason why we haven’t had obvious alien contact yet? Perhaps every civilization that developed its technology faster than its readiness to deal with it caused its own demise. That would be the joke of the universe. Perhaps the key to the long-term survival of a civilization is to develop technology in baby steps rather than in quantum leaps. It makes sense. The maturity of a civilization may be measured by its ability to handle certain technologies without setting undesired consequences in motion, and, where that ability is lacking, by its awareness of the fact and its willingness to avoid such technologies.

As exciting as the discussion of the obvious threat of AI killing or enslaving us all may be, the dangers are plain to see for everyone who dares to look, and the mechanics of how those dangers may actualize are all too easy to imagine.

What’s even more exciting, and yet hardly talked about, are the less obvious dangers of the best case scenario.

The best case scenario: we lose what makes us human

Let’s entertain ourselves with a less gloomy scenario, in which our AI turns out to be benign and we’d get all of its benefits and none of its potential downsides. The question then is: to what extent would a benign superhuman AI preserve the core of our human nature? What, in essence, is our human nature in contrast with machines?

To answer those questions, it makes sense to first explore why we need AI in the first place.

If we listen to the people working on AI, they claim that the reason to make this happen is that it has the potential to make the world a much better place for large numbers of people. Better than ever, for more people than ever. That includes curing deadly diseases and freeing people from soul-numbing jobs.

Suppose that’s all true.

To distinguish something I need from something I want, I look at whether I can live without it. If I can, I don’t need it. I may still want it, I may still work hard to get it, but only if it’s worth the effort. That’s very different from the kind of stuff I need simply to stay alive, sane and operational: air, water, food, sunlight, rest, human touch, intimacy, exercise and so on. I could even say that I can live without love or art, but living without them would not be worth it for me. That’s my personal preference.

Are we in a situation, globally, in which we are all going to die unless we invent AI?

Clearly not. If anything, the opposite is true.

AI is obviously not something we need; it’s something we want. It’s something some people want, just like those miners wanted the gold in the 19th century. Except this time it’s not even pure greed; there’s probably some real sense of “let’s make the world a better place” involved, but it’s still driven by the sense that whoever gets there first will have all the glory.

AI is the gold rush of the 21st century.

There is a dishonesty in the AI community making AI look like something we need, rather than honestly saying: it’s a cool gadget that is fun to build, it can make us billions in a very short time, it will also solve a bunch of problems (which, by the way, were created by previous technologies), and we have no idea how it’ll change our lives, but we know the change will be irreversible, significant and potentially deadly. So why the hell wouldn’t we do it?

It would not be fair to pick on the AI community alone. It’s our human nature, it’s capitalism, it’s the corporation, it’s greed, it’s competition, it’s the void inside of us. It’s every product that satisfies a desire dressed up as an essential need, which describes the vast majority of everything that’s sold to us, and that we sell ourselves, on a daily basis.

The reason we create tools and technology is to eliminate certain pain points in our lives. Most of that pain today is psychological, caused by perceived problems that only arise in comparison to other people who don’t seem to have them. The big question is: if we could eliminate all pain, would we want to?

AI is different from all previous technologies in that it doesn’t just solve a specific set of problems; it has the potential to solve all of our problems.

At the end of the road of superhuman AI development, still assuming the overwhelmingly positive scenario, nothing remains but a human experience free from any kind of pain and difficulty.

All of our material needs and desires are fulfilled instantly, we may even replace the difficult humans in our lives with obedient humanoids, and we get instant, definitive answers to all of our questions. That’s difficult to imagine, but what else would be at the end of the superhuman AI experiment if it all goes well?

What if we eliminated all pain, imperfection and uncertainty from human experience?

Whether a machine may ever be conscious is a far-reaching question. What’s easy to acknowledge is that AI does not need to be conscious, only smarter, more capable and more powerful than us, to kill or enslave humanity. If the AI that kills or replaces us in some fundamental way is conscious, and has everything we have and more in terms of richness of experience, complexity, intelligence and capability, then it would be difficult not to see it as the next phase of evolution, to simply let it happen, and to be glad to have been instrumental in its emergence. However, if it’s smarter and more powerful than us but has no conscious experience, it would be a damn shame if it were to replace us. That would not be evolution, but a sad accident. Some smart people who can’t wait to be replaced by AI seem to be missing that point.

But we don’t actually need to go as far as consciousness to get good answers to our earlier question of what comprises our essential human nature.

The difference between machine and human based intelligence seems to be quite clear, despite the fact that we know embarrassingly little about the latter. Let’s look at the most salient characteristics of both and contrast them.

Human intelligence, and the behavior that originates from it, is the result of two subsystems: the rational mind and the intuitive mind. The rational mind works well in controlled situations where the number of variables is limited. That, however, is rarely the case, which is when intuition weighs in and may even override the choices the rational mind suggests. We know very little about how intuition works. At times it can save our lives or help us avoid great misfortune, while at other times it, or the underlying emotion that masquerades as intuition, can cause us to miss out on great opportunities. In addition, we have the ability to self-reflect and course-correct the next time around, whether it was our logic or our intuition that failed us. Our intelligence and behavior emerge as the result of these very different psychological functions.

What we can clearly say about human intelligence is that it’s imperfect at everything it aspires to do, and it’s prone to all kinds of mistakes in predicting the future, learning from the past, pattern recognition, decision making and so on. Consequently, human experience involves pain and uncertainty. Our consciousness, of which self-consciousness is an aspect, makes all of that matter. Avoiding pain is not only a survival tool; it matters all the more because we know that pain hurts. Our advanced psychological ability to self-reflect, think in abstractions and use language has a significant downside: it turns fleeting physical and emotional pain into enduring suffering, as a result of identifying with a mental image of the self, which is experienced as separate from, exposed to and often threatened by the rest of experience, which is thought of as other, not self. On one level we wish some of this away, but from another angle, all of this is beautiful, and it is what makes us essentially human.

Artificial intelligence is an entirely different process. It’s based on yes-or-no questions. Machines don’t understand or experience colors, for instance; they just represent red with a different arrangement of black and white than blue. It’s all black or white to them.

Artificial intelligence makes no mistakes; it doesn’t have the capacity to. The hallucinations of the language models we see today are not mistakes. We perceive them as mistakes, but in fact, the only reason they happen is that the information we fed the models, and the way we set them up to process it, led to the results we got. At the most fundamental level, the processors of the computers that run those language models still do nothing but process gigantic numbers of yes-or-no questions, and they never make mistakes. Once the models have run long enough and gotten enough feedback from humans as to what people judge to be mistakes, those perceived mistakes will disappear entirely.

Without the possibility of error, there is no room for uncertainty. If there is no uncertainty, there is no future. A future we know exactly has, in effect, already happened.

A good way to articulate the difference between human and artificial intelligence is to try to measure the length of the shores of the British Isles, or of any island for that matter. Our human intelligence would struggle with the task, because the shore, where the water meets the land, is in constant motion. Our best shot would be to walk around the islands and measure the distance. We’d get a highly inaccurate measurement which, if repeated, would never match a previous one. But it would make sense to humans and be useful for them.

Artificial intelligence on the other hand would simply build a concrete wall around the island and having established a clear-cut shoreline, it could easily measure its length. It would destroy the shore all around the isles in the process, but that would only matter to apes like us.
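The measurement problem the analogy leans on is the well-known coastline paradox: the shorter your ruler, the more wiggles it picks up, and the longer the shore appears. A toy sketch of the effect, using an invented wiggly curve as a stand-in for a real coastline:

```python
import math

def coast(x):
    # bays within bays: wiggles nested at several scales (made up for illustration)
    return sum(math.sin(x * 3**k) / 2**k for k in range(1, 8))

def measured_length(segments):
    # walk the shore from x=0 to x=10 with a given number of straight steps
    pts = [(10 * i / segments, coast(10 * i / segments)) for i in range(segments + 1)]
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

# The finer the ruler, the longer the measured shore:
for n in (10, 100, 1000):
    print(n, round(measured_length(n), 1))
```

No matter how fine the ruler, the answer keeps changing, which is exactly the kind of unresolvable fuzziness human intelligence lives with and a wall of concrete eliminates.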

Human intelligence is imperfect, prone to mistakes and mysterious, which results in errors, pain and uncertainty. Artificial intelligence on the other hand can’t make mistakes.

And that’s how we arrive at the answer to our question about what makes us essentially different from machines.

The essence of the human experience is that life can go wrong and when it does, it matters.

You may feel an urge to jump to the conclusion that the essential difference between humans and machines is consciousness. It is an essential difference, but it’s not definitive. Consciousness is what makes the quality of the experience matter. But if the quality is always flatline perfect, then the other component is still missing. If the potential for the experience to be bad is not there, if the potential for pain is not there, it’s still a non-human, undesirable, uninteresting, meaningless experience.

We are still talking about the scenario in which AI is perfectly aligned with humanity and we only get the good stuff out of it and none of the bad, as unlikely as that sounds.

Even in that dreamlike scenario, what happens is that we merge with AI one way or another, physically or perhaps without any major physical intervention. The point is that by harnessing the power of superhuman AI, we will have become an entirely new species, one we can only describe today as a kind of omnipotent, god-like entity that can only make perfect choices and as a result experiences no pain or uncertainty.

If we succeed in making AI our ally, the inevitable long term outcome is that the resulting beings will live in an era of miserable perfection.

Strange as it may sound that I’m equating the essence of human existence with the capacity to experience pain, there is something to it. There are innumerable things we could come up with that we feel signify what’s most human about us: compassion, love, language, art, just to name a few. But would any of these things, or any of the good things in life, retain any meaning if we didn’t have the capacity to lose them? What would we experience if pain and loss were not part of our experience? Would it make any sense to have desires if they were fulfilled instantly?

We can’t have pleasure without pain. There is no yang without yin. People used to know this thousands of years ago.

Imagine that you are a chess player and you’ve lost the ability to lose at chess. At first you might think it’s the best thing that could have happened to you, and you’d beat the entire world. But how long would you want to go on playing chess after that? That’s exactly what life would feel like if it couldn’t go wrong. Watching the same movie again and again for eternity. Destined to succeed and be happy forever. Does that sound more like heaven or hell?

The only way you could entertain yourself in such a situation, the only reason to go on living, would be to build a simulator and recreate in it a pre-AI world, so that you’d forget about your omnipotence and feel like you could make a mistake, like your life could end at any point, so you’d appreciate it again. You’d put yourself back into pretty much the life you have today, with all of its shortcomings and pain points.

Why go through all that trouble to get where we are?

When the first humans hit those rocks together, they wanted to change their environment to get rid of some pain. Hunger, to be precise. They rejected a part of the human experience. It served them well: they survived and procreated, hence we are here today. We also reject a part of our experience. For most of us it’s not hunger anymore, and it’s not a pain that threatens the human race with extinction, but we want to get rid of it nevertheless.

It looks like our effort to get rid of some of our pains will either make us extinct or cost us the essence of who we are by eliminating pain altogether.

Far-fetched as it sounds to worry about not having pain in our lives, there doesn’t seem to be a stop halfway between these two destinations on the technological train ride. We either die quickly or we merge with AI and become gods.

Today, we are light years away from the state of perfect lives, but we are also a long way from being exposed to the elements like early humans were. Life is already too good, too boring, too low-stakes and too devoid of challenges for many of us. So we come up with artificial challenges, chase experiences and consume whatever we can to keep ourselves occupied and superficially satisfied. We tolerate injustice and inequality at a societal level, because our lives are not bad enough to make us revolt, and we pick the fights we can easily win or lose without any significant consequence: either in virtual reality, playing games, or in the real world, by identifying as a member of a tribe fighting an opposing tribe in politics, sports or any other genre of culture.

If we only bothered to look, we’d see that if we actually succeeded in our absurd pursuit of maximizing pleasure and eliminating pain, the only thing we could get is picture-perfect misery.

Technology promises Nirvana, the end of suffering, by changing our environment and by turning us into gods. The Buddha, the Taoists, Jesus and many others realized that they were already god expressing itself in human form and that pain doesn’t equal suffering, so they didn’t work to change their environment to find peace and satisfaction. They worked on transforming the human psyche, to awaken to its deepest core and find peace and fulfillment within.

Thinking that we can create a shortcut to heaven on earth with AI seems like the most foolish endeavor we could undertake as a species. The nirvana AI can generate is going to be very different from what most of us have in mind, because it’s based on flawed principles and broken foundations.

How can we preserve our humanity in the best case scenario?

The mindset that has us hankering for eternal life, maximum comfort, immediate satisfaction, no uncertainty and all the rest is based on seeking salvation externally: hoping to find peace and fulfillment in something other than ourselves, in something we can consume, that will complement us and make us whole in one way or another.

We must reverse that and find peace, satisfaction and wholeness internally; then the competitiveness and greed that drive so much of our behavior will be replaced by compassion and cooperation. On that basis, we’d have a very different relationship with technology, and there’d be no way for it to threaten our existence.

“Tell that to the people who lack clean water or are dying of a disease that AI could help cure,” I hear the opposition say. We already covered that: for every problem technology solves, it creates another. The solution is not technological. By all means, let’s cure all the diseases we can, but let’s not expect that to give us the peace we all seek.

With every single leap of technological progress, the minority of people who had access to it gained more power and the majority lost some. The bigger the leap, the bigger the resulting inequality, given that our psychological development got stuck in prehistoric ages.

Curiously, some very clever people seem to be clearly aware of this and yet, for some reason, still assume that it’ll be different this time: the biggest leap ever in the history of technology is about to happen, and unlike all previous leaps, this one will lead to more equality and more democracy.

What’s more likely to happen is that the people who have access to AI will essentially merge with it, and hence a new species will be born.

They are likely to be the greediest, most competitive, richest and most powerful people alive today. They will have gained superpowers, and the rest of us will have to obey them, just as slaves had to obey their owners, peasants had to obey their landlords, and employees have to obey their bosses to this day.

The packaging of this power dynamic may have softened over the last couple of centuries (employees don’t get whipped for bad behavior in most places), but the principle hasn’t changed: the powerful rule over the less powerful.

Power can be used without technology, but it can’t be misused at scale without it. Definitely not at the scales we’ve all seen it happen.

Technology is a great enabler, no question. Only recently, it has enabled billions of people to access information and education more easily than ever before. At the same time, without tools, especially weapons, tyranny cannot be maintained.

A tyrant without access to technology is just a loud bully, who will be overthrown once he goes too far.

The reason Sam Harris is pro-gun is that before gun ownership was possible for anybody, the outcome of a violent confrontation was simply a matter of brute strength. In other words, the bigger guy always walked away alive, regardless of whether he was the aggressor. With a gun, argues Sam Harris, an innocent victim has a chance to defend themselves or to deter a potential attack. I agree with Sam Harris on a great number of issues, but he seems to be missing the point here.

While in the pre-gun world the bigger guy always won the fight, in a world with guns everywhere the guy with the bigger gun wins, and the country with the bigger army wins. The powerful, with access to more and better weapons, are able to rule over everyone else more efficiently than ever before.

If nobody has weapons, one aggressive person will be overpowered, simply outnumbered by the rest of the group, even if he is five times the average size. If weapons and technology led to equal and peaceful societies given our psychological development, we would already be living in equal societies, since weapons and technology have been around for a long time. And yet we don’t.

We should not let statistics deceive us here. Murder rates per thousand inhabitants may have decreased over millennia, but the wars of the 20th century had more casualties than all previous wars combined – something only made possible by technology. What should we expect from future wars with even more advanced weapons?

To have more equality, peace and wellbeing in our societies, what we need is not more technology, and definitely not more weapons, but quantum leaps in our psychological development. We need peace inside first, to have peace outside.


Elon Musk told Tucker Carlson that he’d been talking to Larry Page of Google some years ago and expressed his concern about the risks of AI, one of which may be that we get replaced or somehow made redundant in the process. To which, according to Musk, Larry Page replied that Musk was a speciesist, favoring humanity over other beings. Musk is proud to be a speciesist.

Should we be speciesists or should we not? It’s not so obvious.

Meat eaters are often labelled speciesist by vegans. You don’t have the right to enslave or otherwise exploit other species for your own benefit, they argue. Personally, I quit eating meat some ten years ago, but I don’t fully buy that argument.

For one, in the wild, any species would enslave other species for its own benefit if it could. Secondly, if your survival is at stake, you are prompted by nature to kill and consume less complex life forms.

We humans have the luxury of choice. We can decide to eat plants, which, unlike animals, are not sentient. We can decide not to raise live animals in industrial-scale factory farms. For the first time in history, we have the option of not killing sentient beings for food, thanks to technology – credit where credit is due – but do we have the compassion not to do so?

The use of speciesism in that context is counterproductive. If we were more advanced psychologically, it would be clear to us that we want to live in harmony with the entire ecosystem of the planet without causing any unnecessary pain. At the same time, if the only way to feed ourselves were to eat animals, we’d have to favor our own survival over that of less complex life forms, even if they were as conscious and sentient as we are.

If we use the concept of speciesism to refer to a species more capable and more complex than us, being a speciesist gains a different meaning. It becomes tribalism elevated to the level of our species.

Tribalism at its worst means defending people in our tribe who wronged members of another tribe, just because they belong to our tribe. The tribe becomes more important than universal morality, values and principles in general.

If AI and the humans who fuse with it become a more complex and more capable species, don’t we fall prey to our tribal instincts elevated to the whole of humanity when we fear being replaced by them? Shouldn’t we embrace something that can give more to the universe than we could ever give? Shouldn’t we look at AI as simply the next phase of the evolution of life?

Or should we insist on preserving our humanity with all of its messy features and mixed bag of characteristics just because we are humans?

The answer all depends on what the emerging AI will be like. Will it integrate the best of human nature or be devoid of it?

There is a lot in human nature that we, and the world, could do without.

But giving it all up, not just the bad parts, for something we know nothing about seems like a dumb move.

In conclusion

There are so many ways to approach the potential demise of humanity by AI that it’s easy to get lost on some tangents. What makes this exploration worthwhile though is the insights we might gain in the process into our own lives, priorities and psychological states.

What seems clear is this:

Technology is way ahead of us.

Superhuman AI is only a question of time. We will not be ready for it.

Whether we go extinct or become some “miserable in its perfection” kind of creature as a result is yet to be seen.

The only thing that seems to make sense in the meantime is to get in touch with the best parts of our humanity, embrace the pain that’s inevitable and learn not to turn it into suffering, alleviate the pain that is possible to alleviate and celebrate each other, our existence and all the good we’ve brought into this world despite how fucked up we are as a species.
