If you've ever been alive, and I assure you that you have been (even if you haven't "lived"), then you're familiar with Pac-Man. And no, I don't mean Manny Pacquiao--who, by the way, is going to eat a ton of punches on Saturday.
Pac-Man is the quintessential video game. It was a staple of arcades and game consoles from 1980 on, and it retains a classic and nostalgic feel that makes it attractive to younger gamers today. Yet I recall when there wasn't a better option--Doom hadn't quite been released yet. Here is a brief list of Pac-Man facts I educated myself on just moments ago. Thanks, Wikipedia!
- Generated $2.5 billion in quarters in its first decade; remains the highest-grossing game of all time
- Recognized by 94% of Americans
- Installations in the Smithsonian and the Museum of Modern Art in New York
So where am I going with this? At the end of March, researchers at Washington State University announced they had developed an algorithm that had one computer teach another how to play Pac-Man. Two virtual robots acted as a student and teacher pairing, with the latter being able to develop the former's Pac-Man skills through helpful positive and negative feedback. If the teaching robot offered too much or too little advice, then the student robot didn't actually learn. But by the end of the experiment, the student robot was outscoring the teaching robot in games of Pac-Man.
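For the technically curious, the setup described above resembles what reinforcement learning researchers call "student-teacher" learning, where an already-trained agent spends a limited budget of advice on an untrained one. The Python sketch below is my own illustration, not the researchers' code: the ToyMaze environment, the Q-learning parameters, and the advice threshold are all stand-in assumptions, but they show the basic mechanic of a teacher that only speaks up when it matters.

```python
import random
from collections import defaultdict

class ToyMaze:
    """A tiny corridor standing in for Pac-Man: walk right to reach the pellet."""
    def __init__(self, length=8):
        self.length = length
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action: 0 = left, 1 = right
        self.pos = max(0, min(self.length - 1, self.pos + (1 if action == 1 else -1)))
        done = self.pos == self.length - 1
        reward = 10.0 if done else -0.1
        return self.pos, reward, done

def q_learn(env, q, episodes, teacher_q=None, advice_budget=0, threshold=1.0):
    """Tabular Q-learning. If a teacher is supplied, it interjects only when its
    own Q-values say the student's choice is clearly worse, and only while the
    advice budget lasts; constant nagging and total silence both slow learning."""
    budget = advice_budget
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Student picks an action (epsilon-greedy on its own Q-table).
            if random.random() < 0.1:
                action = random.choice((0, 1))
            else:
                action = max((0, 1), key=lambda a: q[(state, a)])
            # Teacher overrides only in "important" states, spending its budget.
            if teacher_q is not None and budget > 0:
                best = max((0, 1), key=lambda a: teacher_q[(state, a)])
                if teacher_q[(state, best)] - teacher_q[(state, action)] > threshold:
                    action, budget = best, budget - 1
            next_state, reward, done = env.step(action)
            target = reward + 0.95 * max(q[(next_state, 0)], q[(next_state, 1)])
            q[(state, action)] += 0.5 * (target - q[(state, action)])
            state = next_state
    return q

env = ToyMaze()
teacher = q_learn(env, defaultdict(float), episodes=500)   # teacher learns on its own
student = q_learn(env, defaultdict(float), episodes=50,    # student learns faster from sparse advice
                  teacher_q=teacher, advice_budget=20)
```

Set the advice budget to zero and the student has to stumble around on its own; set the threshold to zero and the teacher dictates every move, so the student's own Q-table never has to learn anything. The sweet spot in between is exactly the "not too much, not too little" effect the researchers describe.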
While this doesn't seem too significant, it is one of the first instances of one robot learning from another. Some members of the research team note that even very advanced and intuitive robots become easily confused, making them seem "very dumb." While the easiest solution so far has been to swap out software or hardware to reprogram a robot, re-teaching it would be far more economical. And if that process can be accomplished by another machine through examples and feedback, then we're looking at a future where robots will one day be teaching humans. So the odds of finding that elementary school teaching job just got even slimmer. This is all beginning to sound very Skynet-ish to me.
Since Terminator hasn't become quite as ubiquitous as Pac-Man, a recap: Skynet is a fictional AI computer system that was given control of the American arsenal. When it became sentient, its operators tried to shut it down. Skynet considered this an attack on its right to life and used its military control to begin erasing human life. Depending on which installment you're watching, Arnold either tries to kill humans or saves the day.
It's not that there is anything inherently wrong with robots teaching robots. Indeed, if we could reduce the amount of human effort involved in training and reprogramming robots, those engineers could arguably turn their attention to larger, more impactful robotic solutions. Provided these robots abide by Asimov's Three Laws of Robotics, there shouldn't be any issues. But there are certainly going to be issues.
Terrorism unfortunately takes new and creative forms, and if robots begin teaching each other, all it takes is one rogue developer to alter some code before Skynet begins marauding. Our fates are in the hands of tech company giants, the employees they hire, and the big money that has never reliably bred compassionate or responsible corporate citizens.
Take, for example, Google's self-driving car. While it sits on the outskirts of current mainstream technology, it remains an inevitability and has already been demonstrated in practice many times. Now compare this with Asimov's short story Sally, which imagines a future where cars are outfitted with CPUs that give each vehicle a kind of consciousness. At an automobile retirement farm, the cars witness the farm's owner being taken at gunpoint and loaded onto a bus. The cars communicate with one another and devise a plan to help their caretaker: they surround the bus, tell it what's going on, and the bus takes it upon itself to free the caretaker. The cars take their owner home, while the bus kills the hostage-taker. Obviously, robots teaching each other Pac-Man doesn't translate to an artificial consciousness that doles out judgment.
In reality, it's more of a slippery-slope argument: all it would take is one hacker or mentally unstable programmer to create an army of killing machines. Once a month the news informs us of a new mass shooter or serial killer. And hackers, viruses, bugs, and software problems have become a fact of life for most technological gadgets. It's only a matter of time until the two are paired together.
When the first self-driving car kills someone, where do the fingers point? When the first car with a learning AI kills someone, where do the fingers point then? And will it be too late to change anything?