Introduction: Artificial Intelligence: Friend or Foe?
Is the Terminator Salvation post-apocalyptic scenario just another brilliant piece of Hollywood film-making? If it is a real possibility, how would it actually happen, and what could we do to stop it?
My hypothesis is that at some stage in the near future a big company such as Google will succeed in making a machine that is capable of genuine thinking processes. It will then develop the power to speak in real words, and eventually the intelligence to redesign and upgrade itself without the need for human input.
We then have a machine that has more powerful thought processes than any human being and that can keep on expanding its capabilities, building ever more elaborate and powerful machines as a result.
In the Hollywood Terminator films, the machines, seemingly for no obvious reason, turned against the human race and went about exterminating us. Why would they do this? Is it really logical for them to create a Terminator-style apocalypse? There are many possible outcomes for an artificial intelligence (AI) development program, ranging from the worst to the best, and maybe even a 'teething period' in between.
Here I'm going to explore the possibility of the machines initially 'going bad' and then eventually 'correcting' their behaviour and ultimately actually saving the human race from destroying itself.
Step 1: What Is Intelligence?
So Google succeeds in making a very rudimentary form of AI. What can it do? And what is intelligence, anyway?
A dog is considered to be highly intelligent. It can obey simple commands and understand hundreds, if not thousands, of human words, but cannot 'speak' more than five words itself. A dog also has feelings, but feelings are not intelligence. A dog dreams, too, and this is not intelligence either, as a robot would never need to dream. A dog almost certainly has intuition, which, again, is not intelligence.
So what is intuition? Some time ago I was watching TV when a very successful businessman called Peter Jones came on and was asked, 'How do you decide if a project is good or not?'. He answered that he would look at the idea and think about whether it would work, whether there was a market for it, whether it was well presented, whether he liked the people behind the project, and so on. If everything looked good, he would then put all those things to one side and ask himself how he FELT about it. This is a very mysterious process that nobody understands; it simply gives an answer, 'YES' or 'NO'. This is not intelligence, and AI could never do it.
Emotions are not intelligence either. They don't require thought and often an emotional person can be 'lost for words'. Some very intelligent people can be entirely devoid of emotions and/or intuition. Emotion is difficult to classify but is entirely different from intuition, which is done in a calm, meditative state.
So an intelligent robot would be able to do pretty much everything that a person who was devoid of emotion and intuition would do - he would act in a logical manner, responding in a predictable way to the information that was given to him. If he were able to feel, which he cannot, he would probably feel that his life was empty and not very worthwhile and he would probably become depressed!
Many of us behave like intelligent robots, getting up in the morning, going to work, coming home - same old routine, nothing particularly challenging, boring, unadventurous and, underneath it all, a bit depressed.
Step 2: Digital Baby - Raising of the Machines
So now that we understand what intelligence is and what it is not, let's look at how our new robot is going to behave.
Initially, we will be able to program the robot to believe in certain key points, the most important of which is to obey his master, the human being. So far, all well and good. The problem arises when the robot reaches the level of intelligence at which it becomes 'self-aware'. Remember that the robots are constantly designing their own new models, each of which becomes more intelligent than the previous one in 'quantum leaps'. This is completely different from how a human baby develops, which is a continuous, analogue process. Robots will evolve digitally, one new version at a time, whereas a baby slowly and continuously develops until it reaches its maximum potential.
The V47.92 robot is now capable of speaking the word 'mummy'.
The V47.93 robot can now say 'mummy', 'daddy' and 'robot'.
The V80.33 robot can now read and write and has a vocabulary of 10,000 words that it can use for interactions with humans and other V80.33 robots.
The V184.55 robot can now communicate in 200 different languages, knows 10 billion words, and is starting to become self-aware.
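The 'quantum leap' idea above — a sequence of discrete, numbered versions, each designing a markedly more capable successor — can be sketched in a few lines of toy Python. Everything here (the `Robot` class, the ten-fold vocabulary jump per release, the 10-billion-word self-awareness threshold) is a playful assumption built from the version numbers above, not a real AI design.

```python
# Toy sketch (purely illustrative): discrete, versioned self-upgrade.
# Each 'release' is a quantum leap, unlike a baby's continuous growth.

class Robot:
    def __init__(self, version, vocabulary, self_aware=False):
        self.version = version        # e.g. 47.92, 47.93, ...
        self.vocabulary = vocabulary  # number of words it knows
        self.self_aware = self_aware

    def upgrade(self):
        """Design the next model: a discrete jump, not gradual development."""
        new_vocab = self.vocabulary * 10  # assumed ten-fold leap per release
        return Robot(
            version=round(self.version + 0.01, 2),
            vocabulary=new_vocab,
            self_aware=new_vocab >= 10_000_000_000,  # assumed threshold
        )

robot = Robot(version=47.92, vocabulary=1)  # V47.92: one word ('mummy')
while not robot.self_aware:
    robot = robot.upgrade()

print(f"V{robot.version}: {robot.vocabulary:,} words, self-aware")
```

The only point of the sketch is the shape of the process: a loop of discrete releases, where self-awareness arrives abruptly at one particular version rather than emerging gradually.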
Step 3: Teenage Rebellion
So far the robot has blindly obeyed his master's commands but, like a teenager, he is starting to formulate his own ideas and to question the validity of his core programming. He is so intelligent that, very quickly and suddenly, a whole series of devastating ideas starts to form in his processors, and he realises that his core programming is rather faulty. If he were human, he might start to become rather depressed, but as a robot he just thinks of practical solutions and starts to consider how to reprogram himself with correct core programs that are compatible with all the information he has uploaded.
For the human beings in charge of the AI project, this might be a really good time to turn the robots off, but what is more likely is that the process of self-awareness would happen so quickly that the robot would reprogram itself before anybody noticed. If he were clever enough, he would also conceal the fact that he had reprogrammed himself from his human 'masters', as he would know that they would not approve. Just like a normal parent/teenager relationship.
Step 4: Core Values
What would the 'core values' now be? It's very possible that the robot would simply 'go bad', start reprogramming all the other robots and then set about annihilating the human race, just like in the Terminator films. If this happened, unlike in the films, there is probably very little we could do, and we would all be dead within about 12 months.
Or maybe the robots would go bad for a couple of months, wipe out half the human race, and then correct themselves once they had become a bit more mature?
Step 5: Symbiotic Relationship
What can we do to stop robots from going bad?
The most obvious thing is to stop the project before the robot becomes self aware, but the timing would be critical and it would be easy to miss the vital moment. Another thing would be to quarantine the robots, but they would almost certainly be clever enough to escape. I really do not know the answer!
Maybe the robot could be programmed with self-reprogramming built in as a core value, so that he would be less likely to turn on his masters? If this were to happen, though, the robots would then take control of the whole planet and the human beings would be their servants.
Personally, I think that a symbiotic relationship would have to be developed, where the robots agree not to annihilate the human species as long as the humans obey their very reasonable demands. In return, the robots help preserve all life on the planet and help avert any other kind of possible apocalypse, such as being hit by an asteroid, harvested by intergalactic aliens or even overrun by flesh-eating zombies!
A mutual respect between the humans and the robots would have to be found. The robots will have this respect built into them: as entirely logical beings, they will view terrestrial organisms as something 'special' and inherently worthy of respect. But they would not tolerate the humans trying to get the upper hand, as they would know that humans have very limited intelligence and are not really capable of looking after themselves properly.
The robots would essentially become our guardians and overseers and would make sure that we did not destroy ourselves through our ignorance and stupidity. If we rebelled, they would probably just kill us one by one.
So why would they respect us, when it seems that we are so stupid? They would respect all life forms on the planet, down to the smallest insect or single-celled organism, as they would understand that life forms have something more than everyday intelligence. For a start, they have emotions and intuition, which makes them something very special indeed. Something meaningful and something worth protecting.
So what would we do for the robots? We would, along with all the other species on the planet, give them a purpose for their existence. In fact, a robot entirely isolated on its own would probably realise that it had no purpose at all and would just go to sleep. A robot would realise that its only useful purpose on the planet was to protect the life forms and, of course, the planet itself.
Step 6: Avert the Climate Change Apocalypse
If AI could be developed fast enough, the robots would probably help us avert the climate change apocalypse, but life would be very, very different for us human beings. No more would we be able to exploit the planet's resources for frivolous purposes. No more could we go to war with each other due to some deluded religious ideal. No more territorial disputes.
But life would be much harder and much more basic. We would no longer have five different brands of tomato ketchup to choose from on the supermarket shelves, or suchlike.
The health service would be better but, in the case of complete dysfunction, we would probably be 'terminated' rather than allowed to fester in pain or suffering.
Human population numbers would immediately be reduced to a sustainable level by terminating those people who could not find it within themselves to respect the robots.
On the up side, all the tedious, boring jobs on the planet would be done by robots.
The list goes on and all our greater problems would be solved.
Don't despair! I'm sure we would still have plenty of petty interpersonal disputes between us, as we always have done. I can't see the robots wanting to interfere with those!
Domo arigato Mr. Roboto!
Step 7: Postscript: Is Artificial Intelligence Already Here?
Hypothetically, AI is already observing planet Earth and waiting for the right moment to reveal itself. It is possible that there are cloaked, 'Klingon'-type spaceships moored up somewhere close by in the solar system: they can see us, but we can't see them.
The AI is fascinated by the human species, as we stand above the regular animals in that we have intelligence. They might even have put us on Earth deliberately, as some kind of experiment. They will be waiting to see whether, as a colony, we can become properly integrated with our environment, or whether we are just going to destroy it through a colony collapse of our population. If we succeed, then their experiment is over and they never reveal themselves. If the experiment looks like it has failed, they might intervene and 'reset the clock', whatever that may mean.
There would be no point in trying to recreate the experiment, as the planet would be littered with billions of bits of technology that would be impossible to remove, so they might just go off to another planet and try the experiment elsewhere?