Artificial Intelligence: Growth or Risk?
The internet. Smartphones. Handheld tablets that function as computers. We take these things for granted, yet they were beyond our grandparents’ wildest dreams when they were our age. These devices were imagined by dreamers, perhaps even by writers of science fiction, and advanced technology has now made them a reality. Scientists, writers, and philosophers have speculated for centuries about the limits and extremes of technology, and have debated just as long over the moral implications of each new advance. How much more can technology do? Are there limits? Should there be? Will we one day be able to replace a human with a machine, and why would we choose to do so?
Writers, directors, and producers have pondered these questions about the potential limits of artificial intelligence, in sources as varied as the classic film 2001: A Space Odyssey and the television show Star Trek: The Next Generation. At the heart of the question is always: what happens if our creations grow more powerful, stronger, and more human than we are?
In 2004, moviegoers were treated to a film adaptation of Isaac Asimov’s 1950 short-story collection I, Robot. Both the stories and the movie focused on safeguards, the famous Three Laws of Robotics, designed to keep intelligent mechanical beings obedient to humanity and ensure they could not turn against their masters. Of course, there was one robot designed to be able to break the laws that protected humanity; otherwise there wouldn’t be much of a story. The irony came when that robot, the one who did not have to obey the laws governing other robots, chose to help the humans under his protection anyway, not because he had to, but because he wanted to be more human himself. We should be so lucky in our real-life forays into artificial intelligence: that along with self-awareness, our creations develop empathy and a conscience!
More recently, in Avengers: Age of Ultron (2015), a group of heroes learns firsthand what happens when a computer achieves both ultimate power and artificial intelligence. Does the goal of protecting humanity include protecting it from itself? How far would an ultra-logical machine go to keep people safe? Could it justify genocide to prevent future warfare? This, of course, is an extreme example, one designed for the big screen, and hardly the first of its kind. (Think of the entire Terminator franchise.)
The fact remains, though, that as long as we have technology, there is a chance that someone, or something, will seek to abuse or misuse it. Since I don’t really fancy a return to primitive living, I hope that for every advance we make, there will be someone taking precautions and setting up safeguards so that our technology never eclipses us. After all, while we can replace a phone or a laptop if it is destroyed, human beings are not replaceable.
Here’s hoping that we continue to enjoy our technology, but never forget that it is no substitute for the people in our lives.