
A.I. - How do we stop it destroying us (because, ultimately, it will)?


They say Hollywood often inspires the inventions of the future, and that is certainly true in the case of artificial intelligence. Many films depicted it long before it became reality, from “Terminator” and “Bicentennial Man” to Haley Joel Osment’s haunting performance in the aptly named “A.I.”
Some of the subject matter has been positive, but largely the message has been that this organic technology is ultimately bad for the planet. As with nuclear energy, I think much depends on whose hands it is in and how legislation limits its use and its access to management systems. However, there is a difference between analogue technology like nuclear power and digital technology like A.I. The former is completely useless without human engineering, whereas A.I., in the not-too-distant future, won't need anybody to help it develop.
Getting one’s head around this self-learning aspect of A.I. is difficult, even if you work in the digital industry. It is not just what it will be able to do, but the speed at which it is learning. A friend of mine, who works in said industry, put it quite poetically:
“Think of A.I. as this. Right now A.I. is a newborn baby. In 3 months it will be a toddler, in 6 months a teenager and in a year a fully grown adult.”
And it is that speed that is scary. Can we, as a world, control an organism that is growing so fast, both in its reach across the web and in the intelligence within it?
The primary aim must be to prevent A.I. from infiltrating the systems that manage major networks across all sorts of fields. The internet itself is one, but beyond that there are the systems that control more analogue technologies like fossil-fuel energy and, yes, nuclear weapons, to bring the elephant firmly into the room. Remember SKYNET?
This is theoretically not possible right now because, whilst A.I. can scan and process millions of pieces of data in milliseconds, it does not yet have the capability to think for itself and hack systems of its own volition. But it could develop that capability, and this is what needs to be legislated against.
One of the leading voices on this subject, Professor of Artificial Intelligence Toby Walsh, has done particular research on A.I.'s applications within the military. My own impression of what A.I. would do for the battlefield was that it would advance conventional weapons.
However, Professor Walsh's vision demonstrates how much more advanced these “weapons” could become and the specific danger that will present. He supposes that humans will no longer be on the battlefield at all. The weapons in use could include machines, but also robots, and the robots would essentially run the show, making on-field decisions themselves. So the major issue, given that robots do not share the empathy of a human, is whether they will make humane and sensible decisions that do not contravene the Geneva Convention.
In addition, he cited the fact that there are already two A.I. political parties on the planet, suggesting that government could be run by A.I. Sounds crazy? Not so, says Professor Walsh, who reminds us that humans are already terrible at making decisions, let alone correct ones, and that A.I. would, in time, make more calculated ones.
Not even experts like Professor Walsh really know how clever A.I. will become or what it could do for the human race, good or bad. Either way, technology companies, governments and even individuals have a responsibility to try to manage this evolving phenomenon until we fully understand how to be its master, rather than the other way around.
G. Hoff - Editor