Recently there has been quite a bit of talk about artificial intelligence and the problems this technology might introduce. Technology and evolution go hand in hand, and even though some might disagree, I find technology to be a natural part of our evolution. In modern society it has become just as important as food and water; these days you would have to work hard to find a field that doesn't, in some shape or form, use technology to aid and assist.
But what is artificial intelligence and why should I even bother or worry about it?
Artificial intelligence is, in its most basic form, technology that makes machines and/or software seem like intelligent beings: they are able to reason about a specific set of problems and devise a solution, or even communicate with human beings and thus be perceived as human themselves. Most major researchers and textbooks in the field summarize it as “the study and design of intelligent agents”.
Many of you will have seen some form of artificial intelligence portrayed in movies and TV series over the years. Well-known examples range from the iconic WarGames from 1983, starring Matthew Broderick, where a young hacker breaks into a computer system only to discover that the game he's playing is run by a machine determined to win it, to the Terminator franchise, where a computer network called Skynet becomes self-aware and concludes that the only way to ensure its own survival is to wipe out all human life. Another brilliant example comes from Alien (1979), where a commercial space vessel answers a distress call and brings an unknown life form on board. As the scenario plays out, the ship's computer, Mother, decides that the commercial and military value of this new alien species outweighs the lives of the crew, and is willing to sacrifice them all. Also noteworthy is Person of Interest, where a scientist named Harold Finch has built an artificial intelligence able to predict violent crimes; it acts on those predictions by sending a team the relevant person's social security number so they can intervene before anything happens. Being a nerd, I can't help but include one more reference: Lieutenant Commander Data from the television series Star Trek: The Next Generation, a fully evolved artificial intelligence in the form of a humanoid robot who serves as a member of the crew alongside his human and alien counterparts.
All these stories portray a quite possible scenario in which we as human beings come to rely on the feedback of an artificial intelligence, either to help us make a decision or, in some cases, to make the decision for us without explicit permission from its human owners. Some might think this is without risk, since any computer and/or software we use is made by human beings and should therefore be safe for everyone. But that isn't necessarily the case: as anyone who's been involved with programming knows, the phrase “garbage in, garbage out!” rings very true for an artificial intelligence.
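The old programmer's phrase can be illustrated in a few lines. This is a hypothetical sketch (the function and data are made up for illustration): a program faithfully computes on whatever it is given, so flawed input produces flawed output, with no malice required.

```python
def average_age(ages):
    """Return the mean age; trusts its input completely."""
    return sum(ages) / len(ages)

clean = [34, 29, 41]
garbage = [34, 29, -999]  # -999 is a "missing value" marker, not a real age

print(average_age(clean))    # a sensible mean
print(average_age(garbage))  # a nonsense mean: the code did exactly as told
```

The second call is not a bug in the code; the code is correct. The garbage went in, so garbage came out, and an artificial intelligence is no different in this respect.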
The ultimate goal for most people involved in the science of artificial intelligence is to create an intelligence as good as human intelligence, or one that even surpasses it. Back in 1950, a brilliant man by the name of Alan Turing devised a test that has since been aptly named the Turing test. It probes a machine's capacity to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. The test is performed by having a human judge hold conversations with two hidden participants: a human being and a computer running the artificial intelligence. The participants are separated from the judge, who has only the conversations to pass judgement on. If the judge cannot reliably tell which is the computer and which is the human, the artificial intelligence is said to have passed the test.
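The structure of the test can be sketched in code. This is a hypothetical illustration, not a real implementation: the two respondents are canned stand-ins, and the point is only the protocol, namely that the judge sees anonymous labels and text, nothing else. A machine "passes" when judges identify it no better than chance.

```python
import random

def human_respondent(message):
    return "I'd say that depends on the context."

def machine_respondent(message):
    # Identical canned reply, so this machine is indistinguishable by text alone.
    return "I'd say that depends on the context."

def judge_identifies_machine(judge, rounds=5):
    """Run one test session; return True if the judge picks the machine's label."""
    # Shuffle the hidden labels so the judge cannot rely on position.
    labels = ["A", "B"]
    random.shuffle(labels)
    assignment = dict(zip(labels, [human_respondent, machine_respondent]))
    machine_label = labels[1]

    transcript = []
    for i in range(rounds):
        question = f"Question {i}"
        for label in sorted(assignment):
            transcript.append((label, assignment[label](question)))

    # The judge sees only labels and text, never the participants themselves.
    return judge(transcript) == machine_label

# With indistinguishable respondents, even this judge can only guess:
random_judge = lambda transcript: random.choice(["A", "B"])
```

Over many sessions, `judge_identifies_machine(random_judge)` comes out true about half the time, which is exactly the outcome Turing's criterion describes as a pass.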
A goal in itself is always good; a goal is what drives us and gives us something to work toward. But with any goal, one has to sit back and consider every aspect of what one is working with: the technical challenges, the moral aspect, the ethics, and the potential value, among others. In other words, one should never go into something like this with blinders on, as the outcome could, in the worst case, be disastrous.
Quite recently, an open letter was published about the current state of research into artificial intelligence, with some reservations about how to go about continuing this work. The letter concluded: “In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.” It was signed by some quite influential people in both science and computing, among them Stephen Hawking, Stuart Russell (Berkeley professor), Eric Horvitz (Microsoft research director), Bart Selman (Cornell professor), Yann LeCun (head of Facebook’s Artificial Intelligence Laboratory), Peter Norvig (director of research, Google) and Elon Musk (SpaceX and Tesla).
The basis of creating an artificial intelligence is that it mimics its makers, in this case the human race. But anyone with some background in programming, be it programming computers, logical systems or even human beings in the form of social programming, knows that you are bound to meet certain limitations along the way.
An artificial intelligence will never be better than the people who program it, and their background, both social and theoretical, will leave a major footprint on how that intelligence behaves. If the moral standards of the person(s) developing it are shady, odds are the moral compass of the artificial intelligence will be shady too. If, however, it is programmed by people with what society as a whole deems a sane and just moral compass, odds are its personality will be a good one.
But true intelligence does not stem from just being good or bad; it comes from knowing the difference between right and wrong. And as we all know, right and wrong are not black and white. From time to time you need to tell lies, not for the sake of lying, but to preserve a sane state of mind and to keep focus on what is at hand.
If an artificial intelligence isn’t able to tell a white lie when the occasion calls for it, it wouldn’t be a trustworthy artificial intelligence, as white lies are all about self-preservation, or simply about shifting the focus the right way. The thing is, many people see no reason to have a computer take over the role of a human, a thinking entity able to make up its own mind and act on the information available at the time, but some good could come of it as well.
What governs the outcome depends heavily on those who program it. I find artificial intelligence both extremely fascinating and terribly frightening. If not properly audited and thoroughly tested, with no sense of control, the outcome could end up far beyond our reach, and it’s good to see that the likes of Stephen Hawking, Stuart Russell and Bart Selman agree, and have brought the matter into the public domain for people to make up their own minds. Hopefully the end result will be a set of rules and/or guidelines that people working in the field of artificial intelligence will have to follow and report on.
In essence, it’s all about the implementation. As with most things in life, if done poorly the results will be at best abysmal, at worst disastrous. I would hate to see artificial intelligence end up as the ghost in the machine, the ghost that eventually leads to the demise of the human race.