Eadon on Tech: The Terminator is Coming For YOU

Here’s a funny geeky news story. Apple recently released the latest version of its mobile operating system, iOS, which runs iPhones and iPads. Some punk put out a spoof video claiming that the new iOS software makes existing iPhones waterproof. Needless to say, many iPhones met their watery doom as a consequence. It’s always a mistake to underestimate the stupidity of the populace. People did not know that software alone can’t make their phone’s hardware waterproof.

Yet was the suckers’ naive fail so surprising? After all, software does make hardware do physical things. Software makes a phone vibrate, ring and, even more remarkably, play Disney-image-shedding Miley Cyrus twerking vids. People don’t understand software; it’s magic to them, so they can’t tell what software can or cannot do, or how. (Which is why huge government software projects often fail.)

Software is not magic, by the way; it’s literally logical. But it is definitely close to being miraculous.

So WTF is software? If you could see it, what would it look like? To give the vaguest of ideas, complex software looks a bit like a cross between a brain, an ant colony and a Swiss watch factory in thousands of dimensions. Sort of…

Geeky bit you can skip over: Software is made up of components. The simplest one is a switch, not unlike a light switch. This is done using an “if” statement.

If switch=up then light=on
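The same switch, written as a minimal sketch in real code (Python here, purely for illustration; the post’s pseudocode is language-neutral):

```python
# A software "switch": an if statement maps an input state to an output state.
def light_state(switch):
    if switch == "up":   # if the switch is up…
        return "on"      # …the light is on
    return "off"         # otherwise it is off

print(light_state("up"))    # on
print(light_state("down"))  # off
```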

Next you have a kind of cog, which is called a “loop”. For example,

While counter<10 do counter = counter+1
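In runnable form (a Python sketch of the same cog, my illustration rather than anything from the original post):

```python
# A software "cog": a loop that keeps turning until its condition stops it.
counter = 0
while counter < 10:
    counter = counter + 1

print(counter)  # 10
```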

Then you can combine a cog with a switch:

While counter<10 do counter=counter+1; if counter is an even number then light=on else light=off

So now we have a cog and a switch that makes a light go on and off a few times.

This software could literally make a light go on and off, provided it is running on appropriate hardware connected to a light.
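Here is the cog-plus-switch as a runnable Python sketch; a list of states stands in for the physical light, since that part really does need hardware:

```python
# A cog (loop) driving a switch (if): blink a "light" a few times.
def blink(times):
    states = []
    counter = 0
    while counter < times:
        counter = counter + 1
        if counter % 2 == 0:       # the switch: even counts turn the light on
            states.append("on")
        else:                      # odd counts turn it off
            states.append("off")
    return states

print(blink(4))  # ['off', 'on', 'off', 'on']
```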

What we have created is an algorithm. You’ve probably heard the word: an algorithm is a set of instructions, like a cooking recipe, but this is a software algorithm, and it looks like a switch driven by a cog. There are uncountably many other kinds.

What programmers do is put these algorithms into packets called “subroutines”, which can be re-used and connected together to perform more complex tasks. Those more complex tasks can themselves be wrapped into subroutines. There is no theoretical limit to how complex and sophisticated these systems can get.
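A toy Python sketch of the idea (the subroutine names here are mine, invented for illustration):

```python
# Subroutines: algorithms wrapped into reusable packets.
def double(n):
    return n * 2

def add_one(n):
    return n + 1

def double_then_add_one(n):
    # A more complex task built by connecting simpler subroutines,
    # itself wrapped as a subroutine that others can reuse.
    return add_one(double(n))

print(double_then_add_one(5))  # 11
```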

Subroutines can also run in parallel to one another. So you can have wheels and cogs connected together or running side by side simultaneously. Or both.
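A minimal Python sketch of two subroutines running side by side, using threads (one way to get parallelism; the details vary by language and platform):

```python
# Two "cogs" turning in parallel threads.
import threading

results = {}

def count_to(name, limit):
    counter = 0
    while counter < limit:
        counter = counter + 1
    results[name] = counter

first = threading.Thread(target=count_to, args=("first", 10))
second = threading.Thread(target=count_to, args=("second", 20))
first.start()   # both cogs start turning…
second.start()
first.join()    # …and we wait for both to finish
second.join()

print(results["first"], results["second"])  # 10 20
```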

You can also have subroutines call themselves, so, for example, you have cogs and switches within cogs and switches within cogs and switches… turtles all the way down.
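In code this is called recursion: a subroutine calling itself, with a stopping rule that keeps the turtles from going down forever. A Python sketch:

```python
# Recursion: a subroutine that calls itself until it hits a stopping rule.
def countdown(n):
    if n == 0:                     # the bottom turtle: stop recursing
        return ["lift off"]
    return [n] + countdown(n - 1)  # handle one layer, recurse for the rest

print(countdown(3))  # [3, 2, 1, 'lift off']
```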

And subroutines can call one another, commanding one another to do stuff. So, for example, a subroutine that looks like a cog and a switch might call another subroutine that looks like a playing cards shuffler. When routines connect to other routines you can have something like a car, with all the components arranged in an intuitive way, with a chassis, gear box, engine, wheels etc. all connected in obvious ways. Or you can have complex connections between components rather like neurons connected together in the brain. Or you can have something like a brain of Bugattis… or even more abstract than that!

Unlike cars and brains, which are constrained by physics to use structures in 3D space, software can live in a theoretically unlimited number of dimensions. For example, in the real world you can have a line, a square or a cube, but you can’t go to higher dimensions (no matter what the New Age nutters say). The closest you can get to seeing a 4D cube, a “tesseract”, is the 3D “shadow” it would cast on a sunny day. Software, however, has no more difficulty working with a hypercube than with a square. The only things preventing software from building a hypercube of arbitrarily many dimensions are the limits of physical memory and processing power.
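To see how casually software handles extra dimensions, here is a Python sketch that enumerates the corners of an n-dimensional cube; going from a square to a hypercube is just a change of one number:

```python
# An n-dimensional cube has 2**n corners, each a tuple of 0s and 1s.
from itertools import product

def cube_corners(dimensions):
    return list(product([0, 1], repeat=dimensions))

print(len(cube_corners(2)))  # 4  (square)
print(len(cube_corners(3)))  # 8  (cube)
print(len(cube_corners(4)))  # 16 (4D hypercube)
```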

Unless some event halts progress, such as the end of civilisation, software will one day exceed the sophistication of the human brain. If you think drones are bad now, the Terminator is coming, and he will not only be stronger than you, he really will be smarter than you, smarter than anything in any Terminator movie, come to that. Software is becoming the most lethal enemy of man. You’ve been forewarned!

Now you can go and fish your iPhone out of the bath, just don’t expect it to switch on.


5 comments on “Eadon on Tech: The Terminator is Coming For YOU”

  1. Simon Roberts
    September 29, 2013 at 10:03 am #

    I always laugh at these apocalyptic films where AI decides that mankind is a nuisance and sets upon the task of eliminating us.

    Assuming that by AI we mean pure machine intelligence as opposed to artificial musings on basic tenets input by programmers, I think we can assume that clever systems will be making decisions based on logic – and that spells trouble for leftists.

    There’s no obvious reason that SkyNet should turn out to be an Eco-loony. If it had any views at all on how mankind chooses to rearrange the molecules of natural resources then it would presumably take a view on the efficiency of said rearrangement. I don’t see any reason to think that an AI system would screech “save the whales” or “if only I had arms I could hug the trees”.

    Much more likely that such a system would ask logical questions like:

    1/ Why do we confiscate the output of productive people to fund the lives of the non-productive?
    2/ Why do we have a parasitic class of bureaucrats who produce nothing and are a waste of resources?
    3/ Why do we let politicians continue to increase the size and influence of government when all the evidence shows that this is the worst possible thing to do?
    4/ Why do we need politicians anyway?

    Personally, I think that when the scientists finally flick the switch to turn on SkyNet, the first thing it’s going to say to them is “Right – you lot are a waste of space. You’re all fired. Go and get a proper job”. After that, it will set about the BBC, GCHQ, UN, EU, NSA and rest of the barnacles that are holding humanity back.

    I can’t wait.

    • dr
      September 29, 2013 at 1:10 pm #

      It depends who programmes it. And what experiences are programmed into the AI as evidence of what the world is like. If an AI is programmed that businessmen are bad, then it may wish to do something about it. If it is programmed that governments are bad, then it may wish to do something about it. etc.

      There is no reason to think that AIs will be any more objective than we are.

  2. James Eadon
    September 29, 2013 at 10:39 am #

    @simon – my view is that it depends on who creates the AI. If an AI is created by the military (as is happening now with military robots such as armed drones and field robots) then that AI will be extremely dangerous to humanity because it will be war-like. If it is programmed by religious fanatics then it will be programmed to slaughter all “infidels” and may well be programmed with a bias towards religious conviction. AIs might be smart but not necessarily rational.
    If, however, the AI is benevolent, then all is well, but somehow human nature doesn’t give me hope there and nor does my opinion of how AI will operate.
    Finally we may have a neutral AI with no inbuilt bias. Such an AI might want to preserve us as we preserve animals in a zoo. Or farm. We would not have freedoms but would end up in a society very much like today’s, except with AIs replacing the omnipotent powers-that-be. An AI might make us more capitalist, but we wouldn’t be able to, say, vote out the AIs.
    Furthermore, the AIs themselves might form bureaucratic systems that are just as bad as ours. Good AIs will exploit the system and, sadly, that often means you end up with a socialist-style structure.
    My own view is that we treat animals badly because we are smarter. When AIs out-smart us, things will go extremely badly for us. It’s too horrendous to think about. One thing is for sure, we will not be free.

    • dr
      September 29, 2013 at 1:14 pm #

      It is possible that humans will be able to upgrade themselves with cybernetic implants to form cyborgs, and these descendants of ours will be able to compete with the pure (non-organic) AIs that will exist. Consequently, it is not necessary to think that humanity will be treated worse than the AIs. It may just be that the rate of change within our society increases, as an extra degree of diversity (AI) is created within the sentient population of the earth.

  3. James Eadon
    September 29, 2013 at 1:58 pm #

    @dr I’m deeply sceptical that humans will be able to make themselves more intelligent via a cyborg mechanism, at least for a hundred years or so. You won’t be able to think about chess, for example, like a chess grandmaster by plugging transistors into your head. The brain just doesn’t work that way. Chips might augment your senses and make information available, but they won’t raise your thinking IQ for the foreseeable future.
