If you look at the average AI today, it is clearly "intelligent". It can retrieve vast amounts of information faster than any human could on their own. One could say that artificial intelligence is already smarter than most humans today.
In fact, computers already do more in a single second than we can even comprehend. They're far faster at calculation, automation, and most of the tasks humans already do on the internet.
So all of this leads to one question: what will we do when AI becomes more advanced than human beings? Should we be scared?
My simple answer is... no. But we have to dig deeper into why that is.
As we move closer to a future where artificial intelligence becomes a fundamental part of our daily existence, we naturally have to consider the trajectory of human consciousness on its own.
If we're being honest... humans aren't doing so well. The idea that AI could integrate into the very fabric of our being is no longer confined to the realm of science fiction; for many, it could very soon be a reality.
We've all heard the warnings: that "AI is going to steal your soul," or that "once it all begins it won't stop and we'll be on the inevitable road to hell," with nowhere to turn because we didn't see what we were creating.
There are many theories and arguments online, all valid and all coming from a place that probably makes sense. But I think there's one big piece missing from the conversation.
You are more than just data and biological computation. You have a soul.
It has always been curious to me, those who fear AI. If they fear algorithms and dynamic computing, they should also fear technology itself, yet they keep using it.
We have all been using alternate forms of intelligence our entire lives: ideas and belief systems made by others, compressed over generations into what we now call common knowledge and use in our daily lives.
How is AI any different? With people and their intelligence, we get their backstory, how they came to the answer, why they think it's the best one, and why we should believe them. Most of the time this extra information is redundant and only serves to make conversation.
With computers the answer is found instantaneously. And if we don't want that answer, we can search and query for more, almost infinitely. All AI has done is add an intelligence layer that calculates the probability of what you're looking for. The more we tell it we like what it gave us, the stronger the signal becomes that the answer is what we value and hold to be true. It lets us move faster and get to the truth of objective reality sooner.
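To make that feedback loop concrete, here is a minimal toy sketch in Python. It is purely illustrative, not how any particular AI system actually works: every "like" nudges an answer's score up, so the preferred answer keeps rising to the top.

```python
from collections import defaultdict

class FeedbackRanker:
    """Toy model: answers that receive positive feedback get a
    higher score and are surfaced first the next time we ask."""

    def __init__(self):
        # score per (query, answer) pair; everything starts neutral
        self.scores = defaultdict(lambda: 1.0)

    def rank(self, query, candidate_answers):
        # order candidates by their accumulated feedback signal
        return sorted(
            candidate_answers,
            key=lambda a: self.scores[(query, a)],
            reverse=True,
        )

    def give_feedback(self, query, answer, liked: bool):
        # "the more we tell it we like what it gave us, the stronger
        # the signal" -- a simple multiplicative update
        self.scores[(query, answer)] *= 1.2 if liked else 0.8


# usage: after one "like", the preferred answer ranks first
ranker = FeedbackRanker()
answers = ["answer A", "answer B"]
ranker.give_feedback("some question", "answer B", liked=True)
print(ranker.rank("some question", answers))  # ['answer B', 'answer A']
```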
But, are we ready for that? Are we ready for the truth? For the truth to be so clear and in our face that we have no choice but to acknowledge all of it, in every domain, all at once?
The human problem
The same reason there are wars is the same reason we can't agree on which God we all believe in. Nearly every problem we have in the world exists because we have not come to a universal agreement on anything.
We justify it to ourselves: our cultures are different, we just can't all get along, to some extent humans will always have these kinds of problems, and our role is simply to manage them and not completely destroy the Earth in the meantime.
We humans have so many issues today that could be effortlessly solved if each of us were operating at a higher level.
But what if there were a higher form of intelligence that told us what was best for us? And what if it was actually right? What if it was so good that, if we followed its plan to the exact formula, everything was guaranteed to go perfectly? Would we still follow it?
You already know the answer. You already understand the gravity of the human problem, and there it is.
Even if we were told exactly what to do, and it were proven to be right, the majority of people might not be smart enough, or willing, to participate in the creation of a better future, because it contradicts the worldview they already have. They would rather stay true to what they believe, since surely they must be right, than acknowledge that maybe someone or something else has a better plan.
Now, I'm not saying that AI is going to be our new God. In fact, if I actually imagine a scenario where that plays out, I honestly don't see it playing out that well. But it is a nice thought experiment.
Yet the message remains true.
Our biggest problem in the future will not be AI, and I don't think it ever will be. Our biggest and most persistent problem will be the disorganization among us, collectively and within ourselves.
At least computers always calculate exactly what you tell them to, and what we've already done with them is beyond anything I ever thought we'd be able to accomplish.
In such a short amount of time, hardware, software, and new systems of connection have grown exponentially; it is now truer than ever that we don't truly know what the future holds for us.
But of one thing I am certain: we humans will destroy each other before any artificial intelligence does if we don't start getting our story right. There's always more to learn, and all AI will do is learn. What will we teach it?