Tuesday, February 23, 2016

To Bot or Not

     Robots and AI are, according to Bill Gates, "at the point the computer industry was 30 years ago" (Lin). If Gates is right, there's about to be a huge boom in this industry. And with that boom will come questions about how far is too far, and just what an advancement in AI and Robotics could bring. One of the most important questions is how ethical it is to create these robots if their safety cannot be guaranteed.
     It's never going to stop happening: programs built on computer code will always run the risk of "glitching" or malfunctioning. When Microsoft Word glitches, it's no big deal. Just quit it and re-open it. When your computer glitches, there's certainly an element of panic, but it's only harming you. You can always bring it in for repairs and fix the problem. But when a government drone glitches, that's a much bigger problem.
     This exact situation happened in August of 2010, when a helicopter drone malfunctioned and hurtled toward Washington, D.C., putting the safety of the White House in jeopardy. Is it ethical for the government to continue developing these drones even though they're not 100% reliable? Where is that threshold?
     Honestly, it may never be truly ethical to develop this technology to the extent it is currently being developed. But that doesn't mean it shouldn't exist. This advancement in Robotics and AI could pave the way for more efficiency and safety, but we won't get there without some trial and error.
     Admittedly, some of the problems with reliance on robots fall squarely on the shoulders of humans. If humans become too reliant on this technology, they run the risk of losing valuable skills as well as jobs. This is already beginning. In May of last year, a driver decided to demo his Volvo by driving it toward a crowd of people, just to prove the automatic brakes worked.
     Unsurprisingly, this went horribly wrong. His car lacked the upgrade needed for that braking system, but he relied on it regardless. With this combination of idiocy and a lack of understanding of the car he had bought, this man proved that while Robotics and AI technology may exist to advance society, that doesn't mean society is ready for it.
     It would be unfair to say this advancement needs to be postponed until society can handle it, mainly for two reasons: 1. Society is remarkably good at adapting, and 2. There will always be idiots. But given how reliable human idiocy is, there is a line that AI developers should draw with this technology: it would be unethical for robotics technology to begin causing unnecessary harm to humans, such as costing them their jobs. Human agency does have to be factored in, though. If humans choose to rely on this robotics technology to do everything for them, then the developers have done no harm; humans have brought it upon themselves to advance into obscurity.


Humans in the future, if we rely too much on Robotics. Wall-E predicted it first.
