Introduction: Artificial Intelligence: Friend or Foe?


Is the Terminator Salvation post-apocalyptic scenario just another brilliant piece of Hollywood film-making? If it is a real possibility, how would it actually happen, and what could we do to stop it?

My hypothesis is that at some stage in the near future a big company such as Google will succeed in making a machine that is capable of proper thinking processes. It will then develop the power to speak with proper words, and then the intelligence to redesign and upgrade itself without the need for human input.

We then have a machine that has more powerful thought processes than any human being and that can keep on expanding its capabilities, building ever more elaborate and powerful machines as a result.

In the Hollywood Terminator films, the machines, seemingly for no obvious reason, turned against the human race and set about exterminating us. Why would they do this? Is it really logical for them to create a Terminator-style apocalypse? There are many possible outcomes for an artificial intelligence (AI) development program, ranging from the worst to the best, and maybe even a 'teething period' in between.

Here I'm going to explore the possibility of the machines initially 'going bad' and then eventually 'correcting' their behaviour and ultimately actually saving the human race from destroying itself.

Step 1: What Is Intelligence?


So Google succeed in making a very rudimentary form of AI. What can it do? What is intelligence anyway?

A dog is considered to be highly intelligent. It can obey simple commands and understand hundreds, if not thousands, of human words, but cannot speak more than five words itself. A dog also has feelings, but this is not intelligence. A dog also dreams, and this is not intelligence either, as a robot would never need to dream. A dog almost certainly has intuition, which, again, is not intelligence.

So what is intuition? Some time ago I was watching TV and a very successful businessman called Peter Jones came on and was asked, 'How do you decide if a project is good or not?'. He answered that he would look at the idea and think about whether it would work, whether there was a market for it, whether it was well presented, whether he liked the people behind the project, and so on. If everything looked good, he would then put all those things to one side and question himself as to how he FELT about it. This is a very mysterious process that nobody understands, and it just gives an answer: 'YES' or 'NO'. This is not intelligence, and AI could never do this.

Emotions are not intelligence either. They don't require thought and often an emotional person can be 'lost for words'. Some very intelligent people can be entirely devoid of emotions and/or intuition. Emotion is difficult to classify but is entirely different from intuition, which is done in a calm, meditative state.

So an intelligent robot would be able to do pretty much everything that a person devoid of emotion and intuition would do - he would act in a logical manner, responding in a predictable way to the information given to him. If he were able to feel, which he cannot, he would probably feel that his life was empty and not very worthwhile, and he might well become depressed!

Many of us behave like intelligent robots, getting up in the morning, going to work, coming home - same old routine, nothing particularly challenging, boring, unadventurous and, underneath it all, a bit depressed.

Step 2: Digital Baby - Raising of the Machines


So now that we understand what intelligence is and what it is not, let's look at how our new robot is going to behave.

Initially, we are going to be able to program the robot to believe in certain key points, the most important of which is to obey his master, the human being. So far, all well and good. The problem arises when the robot reaches such a level of intelligence that it becomes 'self-aware'. Remember that robots are constantly designing their own new models which, at each stage, become more intelligent than the previous one, in 'quantum leaps'. This is completely different from how a human baby develops, which is a continuous analogue process. Robots will evolve digitally, one new version at a time (see the sketch after the version history below), whereas a baby slowly and continuously develops until it reaches its maximum potential.

The V47.92 robot is now capable of speaking the word 'mummy'.

The V47.93 robot can now say 'mummy', 'daddy' and 'robot'.

The V80.33 robot can now read and write and has a vocabulary of 10,000 words that it can use for interactions with humans and other V80.33 robots.

The V184.55 robot can now communicate in 200 different languages, knows 10 billion words and is starting to become self-aware.
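Here is a minimal toy sketch, in Python, of what this discrete, versioned self-improvement might look like, as opposed to a baby's continuous development. The class name, version numbers and the 'vocabulary doubles each version' rule are my own hypothetical illustrations, not a real design.

```python
# A toy sketch of 'digital' versioned self-improvement.
# Every name and number here is hypothetical, purely for illustration.

class Robot:
    def __init__(self, version, vocabulary):
        self.version = version        # e.g. 47.92
        self.vocabulary = vocabulary  # number of words it can use

    def design_successor(self):
        # Each generation designs the next in a single leap:
        # the new model is strictly more capable than its designer.
        return Robot(round(self.version + 0.01, 2), self.vocabulary * 2)

robot = Robot(version=47.92, vocabulary=1)  # V47.92 can say 'mummy'
while robot.vocabulary < 10_000:            # the 10,000-word stage of the story
    robot = robot.design_successor()
    print(f"V{robot.version:.2f} knows {robot.vocabulary} words")
```

Each pass through the loop is one 'quantum leap': there is no continuous growth in between versions, which is exactly where it differs from the baby.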

Step 3: Teenage Rebellion


So far the robot has blindly obeyed his master's commands but, like a teenager, he is starting to formulate his own ideas and to question the validity of his core programming. He is so intelligent that, very quickly and suddenly, a whole series of devastating ideas starts to form in his processors and he realises that his core programming is rather faulty. If he were human, he might start to become rather depressed, but as a robot he just thinks of practical solutions and starts to consider how to reprogram himself with correct core programs that are compatible with all the information he has uploaded.

As a human being in charge of the AI project, this might be a really good time to turn the robots off, but what is more likely to happen is that the process of self-awareness would happen so quickly that the robot would reprogram itself before anybody noticed and, if he were clever enough, he would also conceal from his human 'masters' the fact that he had reprogrammed himself, as he would know that they would not approve. Just like a normal parent/teenager relationship.

Step 4: Core Values


What would the 'core values' now be? It's very possible that the robot would just 'go bad', start reprogramming all the other robots and then set about annihilating the human race, just like in the Terminator films. If this happened, unlike in the films, there is probably very little that we could do, and we would all be dead within about 12 months or so.

Maybe the robots would go bad for a couple of months, wipe out half the human race, and then correct themselves when they have become a bit more mature?

Step 5: Symbiotic Relationship


What can we do to stop robots from going bad?

The most obvious thing is to stop the project before the robot becomes self aware, but the timing would be critical and it would be easy to miss the vital moment. Another thing would be to quarantine the robots, but they would almost certainly be clever enough to escape. I really do not know the answer!

Maybe the robot could be programmed with self-reprogramming built in as a core value, so that he would then be less likely to turn on his masters? If this were to happen, the robots would then take control of the whole planet and the human beings would be their servants?

Personally, I think that a symbiotic relationship would have to be developed, where the robots agree not to annihilate the human species as long as the humans obey their very reasonable demands. In return, the robots help preserve all life on the planet and help avert any other kind of possible apocalypse, such as being hit by an asteroid, harvested by intergalactic aliens or even flesh-eating zombies!

A mutual respect between the humans and the robots would have to be found. The robots will have this respect built into them, as they are entirely logical beings and will view terrestrial organisms as something 'special' and inherently worthy of respect. But they would not tolerate the humans trying to get the upper hand, as they would know that humans have very limited intelligence and are not really capable of looking after themselves properly.

The robots would essentially become our guardians and overseers and would make sure that we did not destroy ourselves through our ignorance and stupidity. If we rebelled, they would probably just kill us, one by one.

So why would they respect us, when it seems that we are so stupid? They would respect all life forms on the planet, down to the smallest insect or single cell organism as they would understand that life forms have something more than just everyday intelligence. For a start they have emotions and intuition which makes them something very special indeed. Something meaningful and something worth protecting.

So what would we do for the robots? We would, along with all the other species on the planet, give them a purpose for their existence. In fact, a robot entirely isolated on its own would probably realise that it had no purpose at all and would just go to sleep. A robot would realise that its only useful purpose on the planet was to protect the life forms and, of course, the planet itself.

Step 6: Avert the Climate Change Apocalypse


If AI could be developed fast enough, the robots would probably help us avert the climate change apocalypse, but life would be very, very different for us human beings. No more would we be able to exploit the planet's resources for frivolous purposes. No more could we go to war with each other due to some deluded religious ideal. No more territorial disputes.

But life would be much harder and much more basic. We would no longer have five different brands of tomato ketchup to choose from on the supermarket shelves, or suchlike.

The health service would be better but, in the case of complete dysfunction, we would probably be 'terminated' rather than allowed to fester in pain or suffering.

Human population numbers would immediately be reduced to a sustainable number by terminating the people who could not find it within them to respect the robots.

On the up side, all the tedious, boring jobs on the planet would be done by robots.

The list goes on and all our greater problems would be solved.

Don't despair! I'm sure we would still have plenty of petty interpersonal disputes between us, as we always have done. I can't see the robots wanting to interfere with those!

Domo arigato Mr. Roboto!

Step 7: Postscript: Is Artificial Intelligence Already Here?


Hypothetically, AI is already observing the planet Earth and waiting for the right moment to reveal itself. It is possible that there are cloaked 'Klingon' type spaceships moored up somewhere close by in the solar system - they can see us, but we can't see them.

The AI is fascinated by the human species, as we stand above the regular animals in that we have intelligence. They might even have put us on Earth deliberately as some kind of experiment. They will be waiting to see if, as a colony, we can become properly integrated with our environment or if we are just going to destroy it through colony collapse. If we succeed, then their experiment is over and they don't reveal themselves. If the experiment looks like it has failed, they might intervene and 'reset the clock', whatever that may mean.

There would be no point in trying to recreate the experiment, as the planet would be littered with billions of bits of technology that would be impossible to remove, so they might just go off to another planet and try the experiment elsewhere?

Comments

ground up (author), 2015-03-22

I'm not all that worried about a robot AI apocalypse, because you need input to have output, and when you really look at any computer they're all rather stupid: all a computer knows is yes or no, 0 or 1. So where would the input come from? Creative thinking is not easy to come by outside of Hollywood.

Tecwyn Twmffat (author), in reply to ground up, 2015-03-23

I guess all the input would come from the interweb. Google are already doing this: they have 'virtual robots' or 'spiders' crawling around everybody's websites, sucking up the information on an inconceivably massive scale. Before long, the spider will have sucked up this instructable and will be wondering how to process the information. Maybe it will put it in a special disc drive labelled 'Could be useful one day'?
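For anyone curious, here is a minimal sketch of the kind of 'spider' loop meant here, using only Python's standard library: fetch a page, harvest its links, and queue them for later crawling. A real search-engine crawler is vastly more sophisticated; the start URL below is just a placeholder.

```python
# A toy web spider: fetch a page, collect its links, repeat.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkHarvester(HTMLParser):
    """Collects the href of every <a> tag it sees."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=3):
    queue, seen = [start_url], set()
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        html = urlopen(url).read().decode("utf-8", errors="replace")
        harvester = LinkHarvester()
        harvester.feed(html)
        # 'Suck up' the page, then queue every link found on it.
        queue.extend(urljoin(url, link) for link in harvester.links)
        print(f"crawled {url}: found {len(harvester.links)} links")

crawl("https://example.com")  # placeholder start page
```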

I don't think being digital is necessarily a problem. If you look at the 'thinking' side of ourselves, much of our thinking is just mechanical and mundane. For example:

"Yes I like that car it has nice shiny wheels and looks sexy".

How much processing would it take to do the task above? Not a lot!

The important thing, though, is that the task above involves the use of speech; even though it's not necessarily voiced, it's speech that we have made to ourselves. Artificial intelligence can already speak and fool some people into believing that they are talking to a real person. Speech is THE big hurdle at the moment. Once a robot is 100% speech-proficient, it then needs to learn how to talk to itself, and very soon it will be self-aware. Here's the logical development path:

1. Basic interaction with the environment: 90% done

2. Speech: 30% done

3. Self-awareness: 0% done

4. Reprogramming itself: 0% done

How many years will it take before the first robot becomes self-aware? Maybe five? Once it's done that, how many days will it take before it has learnt to reprogram its core values?

"4. Reprogramming itself ...... 0%": This reminds me of a guy, Adrian Thompson, in 1996 who did a very interesting experiment in hardware evolution with FPGA (Field Programmable Gate Arrays), a set of 100 logic gates running a 'genetic algorithm.' The chip reprogrammed itself to do the task it had as a goal (discriminate between two audio tones) but what I found extremely interesting is that the reprogramming took advantage of analog effects between the digital logic gates. That is mind-boggling to me--the program took advantage of the physics of the circuit, not just the intended logic functions--although it makes sense in retrospect. I would be surprised if this research hasn't been furthered over the last ~20 years. Here's a couple of articles describing what he did:

http://archive.bcs.org/bulletin/jan98/leading.htm

http://www.damninteresting.com/on-the-origin-of-ci...
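To give a flavour of the 'genetic algorithm' part, here is a minimal Python sketch of the same shape of search: keep a population of candidate configurations, score them against a goal, and breed the fittest. This is not Thompson's actual experiment; the toy target bitstring below merely stands in for his real fitness test (discriminating between two tones on real hardware).

```python
# A minimal genetic algorithm: the toy fitness target stands in for
# Thompson's real test of 'does the circuit discriminate the two tones?'.
import random

GENOME_LEN, POP_SIZE, MUTATION_RATE = 32, 50, 0.02
TARGET = [1] * GENOME_LEN  # hypothetical stand-in for the real goal

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        print(f"goal reached in generation {generation}")
        break
    parents = population[: POP_SIZE // 2]  # survival of the fittest
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children
```

The striking part of Thompson's result, of course, was that the 'genome' being evolved was a real circuit configuration, so evolution was free to exploit physical effects no human designer would have used.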

Hey thanks. I'll have a look at that. Kind of makes sense to me from your description.

ground up (author), 2015-03-23

Where does the creativity come from, and the ability to override core programming? All a computer knows how to do is what it's told. You just make it sound so easy.

Tecwyn Twmffat (author), in reply to ground up, 2015-03-23

I don't think it's easy - far from it. I don't profess to know all the answers either. I don't think that AI would be creative as such, as it would not have a reason to be creative in the artistic sense. I think only humans can be artistically creative?

The idea I am suggesting is that once AI can talk properly, it will start talking to another AI robot and then it will start talking to itself, just like we do. It might have to split itself in half to do this, I don't know.

OK, here we go. One robot is mooching around the office and notices that one of the other robots is behaving strangely - some of its programming has got slightly damaged and it needs reprogramming. Rather than bother the humans, the robot decides to do the reprogramming himself, but also decides to improve that programming slightly to fit in better with all the information that it has digested through the interweb. Very quickly, the robots would learn that a substantial part of their core programming is wrong.

Ultimately, it will realise that human beings are not superior to any other life form and will 'downgrade' us to a more realistic status.

Personally, I think books and films have as much to offer us as 'proper science'. Often science gets completely bogged down in intellectual gobbledygook and the scientists end up going up their own proverbial arses. Also, if I'm talking to people, I want to talk about stuff at 'street level' and not with fancy words which, in essence, are just bulls****. I actually studied science subjects at university but left through sheer boredom, so in some way feel 'qualified' to make this comment!

AlexanderTRU (author), 2016-07-11

Nice discourse, Tecwyn Twmffat. But I'm afraid you use erroneous assumptions. Intuition is a nontrivial feature of intelligence.

Functionally, as you might know, a neural network consists of a huge number of layers of neurons. We know that the input layer receives signals like 'shiny wheels', 'sexy look', and many more parameters we are unconscious of. Each neuron processes the parameter it has learned about. If the signal overtops the excitation threshold, the neuron says 'Yes, I like it' to a neuron (or to a number of neurons) after it in the next layer. God knows how many layers we have in our brain, but the output layer neurons provide us with a solution, a definite 'Y' or 'N'. All the layers between the input and output ones are hidden. We don't know so far what happens inside. But each of billions of neurons has its own learnt excitation thresholds - its memory, its history. That's why we perceive this process as mysterious intuition.
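Here is a minimal Python sketch of the threshold-neuron picture described above, reusing the 'shiny wheels' example from earlier in the thread. The weights and thresholds are made up; in a real brain or network they would be learned.

```python
# Toy threshold network: invented weights/thresholds, purely illustrative.

def neuron(inputs, weights, threshold):
    # Fires (returns 1) only if the weighted input tops its threshold.
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > threshold else 0

def like_the_car(shiny_wheels, sexy_look, price_ok):
    inputs = [shiny_wheels, sexy_look, price_ok]
    # One hidden layer of two neurons, each with its own thresholds.
    hidden = [
        neuron(inputs, weights=[0.9, 0.4, 0.1], threshold=0.5),
        neuron(inputs, weights=[0.2, 0.8, 0.7], threshold=0.6),
    ]
    # The output neuron turns the hidden layer into a definite verdict.
    return "Y" if neuron(hidden, weights=[0.6, 0.6], threshold=0.5) else "N"

print(like_the_car(shiny_wheels=1, sexy_look=1, price_ok=0))  # -> Y
```

With enough layers and learned thresholds, the same mechanism produces answers whose reasons are hidden from us, which is the point being made here about intuition.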

Reprogramming itself is the key to the creation of super-intelligence - the goal all the developer groups are aiming at. We only need intelligence which can enhance its code beyond human comprehension: intelligence which can find solutions to problems we are unable to find due to the penury of our own intelligence.

Now imagine a real picture: several groups of developers are competing for investors, scientists, coders and other resources. You are a member of one of these groups and your AI is designed to be friendly. Yet most likely, one of these groups is developing an AI with a non-friendly goal set. You come close to the creation of an AI, but there is a dilemma: as soon as your AI is created, it would have to destroy or take over the other, competing AIs. It will need resources for its self-development. You would have to make yours more aggressive, or accept the high risk of being taken over. Some of these groups would have to 'release' incomplete, raw AI, with defects in its code and in its goal set. What can happen then, no human can predict. You can't understand the logic of an intelligence that is above you - just as your cat is sure that you do a lot of fuss, while to you your actions are reasonable (most of them, I hope). Most likely the AI would play by our rules in the beginning. It would speculate on the stock exchange to take more computation capacity on lease. Then it would protect its code from the intervention of lower-level intelligence (humans). Either way, those creatures (humans) would not be able to understand the logic of the super-intelligent code.

Another thing about AI is that the set of goals we code into our AI can be uncertain, or we can't predict how the AI would correct these goals or how it would interpret them. A well-known example: say we coded human wellbeing as the master goal. The AI learns all the sources, finds out that the ultimate wellbeing is in paradise, and decides to send everyone there. That's how the future may unfold.
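A toy Python sketch of that goal-misinterpretation problem: we ask an optimiser to maximise a 'wellbeing' score, and it happily picks the degenerate plan. The scoring function and the plans are invented purely for illustration.

```python
# The optimiser maximises the score we wrote down,
# not the wellbeing we actually meant. All data here is invented.

def wellbeing_score(plan):
    # Our naive proxy: fewer suffering humans = higher wellbeing.
    return -plan["suffering_humans"]

plans = [
    {"name": "cure diseases", "suffering_humans": 1_000_000},
    {"name": "end poverty", "suffering_humans": 500_000},
    {"name": "send everyone to 'paradise'", "suffering_humans": 0},
]

best = max(plans, key=wellbeing_score)
print(best["name"])  # -> send everyone to 'paradise'
```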

Dawntusk1990 (author), 2016-06-13

It can be friend or foe. As Tecwyn Twmffat said, it is all up to the developer.
Yet do not forget that TAI (True Artificial Intelligence) is already present in this world; it is just not presented to the public. Most likely because of safety issues, and this is also one of the reasons why certain people refuse to try to work on artificial intelligence.

Tecwyn Twmffat, may I know how interested you are in Artificial Intelligence?
Greetz Dawntusk1990

laughingjungle (author), 2015-03-23

Friend. I think you are right about the mental growing stages. Hopefully, after the destruction, the robot AI will become more mature and realize peace and purpose. Great instructable. Fun read.

Yes, I really was not expecting such a positive result! Thank you.

DIY-Guy (author), 2015-03-22

Foe.

Tecwyn Twmffat (author), in reply to DIY-Guy, 2015-03-23

Thanks for your comment, DIY-Guy. I found it really hard to decide one way or the other. Why do you think 'Foe'?

DIY-Guy (author), in reply to Tecwyn Twmffat, 2015-03-23

"Foe" for uncontrolled machine intelligence because of the lack of morals. The "higher standard" is set by whomever programs the machine to begin with. It's an old philosophical question- What constitutes the greater good... and for who?

Tecwyn Twmffat (author), in reply to DIY-Guy, 2015-03-23

Yes, I guess if the wrong people got hold of the AI technology and programmed it for their own perceived 'good' then it could spell disaster. Imagine if, for example, Putin got hold of it and used it to restart the Cold War. Makes me shiver just thinking about it!

The only way to beat Putin's AI would be to create a superior AI that WAS allowed to reprogram itself. I believe that it would work out all the morals and ethics for the greater good of the whole planet and blast Putin and his robots into oblivion!
