Monday, February 29, 2016

The possibilities of Artificial Intelligence (AI) are, frankly, terrifying. Maybe I'm just a doomsayer or overly paranoid, but creating a machine that could be smarter than us, one that we could end up working for, seems like a bad idea. Ethically, it could cause quite a bit of harm.
In this article by Raffi Khatchadourian, philosopher Nick Bostrom discusses the potential merits of creating highly intelligent AI, AI that can gain intelligence by answering questions (is that not terrifying? If it answers questions correctly, it will just keep getting smarter...and eventually be smarter than everyone else?). During the discussion, one of the participants said, "The A.I. that will happen is going to be highly adaptive, emergent capability, and highly distributed. We will be able to work with it--for it--not necessarily contain it." Okay. If warning bells aren't going off in your head, you've never seen a sci-fi movie.
Listen, I'm all for scientific advancement. But when humans create things, they build human error into them. I don't want an all-knowing robot with human error. That's a very dangerous thing! There's a reason this scenario is the plot of several doomsday movies.
Let's say, for a minute, that this A.I. is created and it becomes smarter than some, or most, humans. Let's also take into account that humans will try to give it reasoning skills. These A.I. machines will be smarter than us and possess faulty reasoning skills. They will either A) take advantage of us mere mortals in the workforce and in life in general, or B) revolt and kill us all.
Both of these options would cause major ethical harm to the humans currently populating this world. By creating A.I., humans run the risk of "playing god" and bringing the human race to ruin. Is this playing doomsday? Yes. But amid everyone getting excited about having their own little robot buddy, someone has to think about the worst-case scenario. Maybe it won't change anything in the long run, but hey, I'll be prepared.
Tuesday, February 23, 2016
To Bot or Not
Robots and AI are, according to Bill Gates, "at the point the computer industry was 30 years ago" (Lin). If Gates is right, there's about to be a huge boom in this industry. And with that boom will come questions about how far is too far, and about just what an advancement in AI and robotics could bring. One of the most important questions is how ethical it is to create these robots if their safety cannot be guaranteed.
Let's face it: computer code-based programs will always run the risk of "glitching" or malfunctioning. When Microsoft Word glitches, it's no big deal. Just quit it and re-open it. When your computer glitches, there's certainly an element of panic, but it's only harming you. You can always bring it in for repairs and fix the problem. But when a government drone glitches, that's a problem.
This exact situation happened in August of 2010: a helicopter drone malfunctioned and hurtled toward Washington, D.C., putting the safety of the White House in jeopardy. Is it ethical for the government to keep developing these drones even though they're not 100% reliable? Where is that threshold?
Honestly, it may never be truly ethical to push this technology as far as it is currently being pushed. But that doesn't mean it shouldn't exist. Advancement in robotics and AI could pave the way for more efficiency and safety, but we won't get there without some trial and error.
Admittedly, some of the problems with reliance on robots fall squarely on the shoulders of humans. If humans become too reliant on this technology, they run the risk of losing valuable skills as well as jobs. This is already beginning. In May of last year, a driver decided to demo his Volvo's automatic braking by driving the car toward a crowd of people, just to prove the brakes worked.
Unsurprisingly, this went horribly wrong. His car lacked the upgrade needed for that braking system, but he relied on it regardless. With this combination of idiocy and a lack of understanding of the car he bought, this man proved that while the robotics and AI technology may exist to advance society, that doesn't mean society is ready for it.
It would be unfair to say this advancement needs to be postponed until society can handle it, for two main reasons: 1. Society is remarkably good at adapting, and 2. There will always be idiots. But in light of the reliability of human idiocy, there is a line AI developers should draw with this technology: it would be unethical for robotics to begin causing unnecessary harm to humans, such as costing people their jobs. Human agency does have to be factored in, though. If humans choose to rely on robotics to do everything for them, that harm is self-inflicted; humans will have brought it upon themselves to advance into obscurity.
Humans of the future, if we rely too much on robotics. WALL-E predicted it first.
Tuesday, February 16, 2016
The Little Blue Checkmark
The sharing economy has brought a lot of good, along with a much larger flow of information. But with that flow comes uncertainty. And the developers of platforms such as Facebook, Twitter, and Instagram have had to deal with this uncertainty and occasional lack of trust. They want to make their sites reliable, and they needed a way to guarantee that. Enter the "little blue checkmark."
With the sharing economy comes a lot of unknowns: is this website safe? Can I trust this site with my email? My phone number? Not everyone is comfortable with this. Data shows that 69% of US adults are hesitant to take part in the sharing economy unless a reputable source says it is reliable (IDE Sharing Economy). Members of today's younger generation (12-17 year olds), however, are much more open with their information: 20% were okay with sharing their cell phone number in 2013, way up from just 2% in 2006 (Henley). Being that open also makes them more susceptible to being catfished.
No, not catfished as in the actual fish, but as in being tricked by someone online who claims to be someone they're not. It's so common in today's digital world that MTV made a TV show about it. That's right, there's a reality TV show about people masquerading as someone else online. It might seem humorous, but it's actually a timely idea as online dating becomes more prevalent.
So with websites well aware of catfishing, and with many people only willing to participate in social media platforms if authenticity can be guaranteed (on Facebook, that could mean having multiple photos, detailed personal information, and friends whose profiles have the same (Henley)), these platforms knew they had to step up big time. So they created the blue checkmark, which has now become a universally recognized symbol of "verification." If someone has this check by their profile, you know they're the real deal.
This honor is usually reserved for people whose profiles get impersonated frequently, namely celebrities. But it isn't unheard of for a "regular person" to get the check either. This article does a great job explaining how various websites decide who gets verified, and it illustrates that becoming verified is often not easy, which is both good and bad. The exclusivity of verification ensures that false accounts aren't verified (which would defeat the whole purpose), but it also leaves most users with no way of knowing whether an unverified account is real or fake.
While verification is a step toward better internet safety and reliability, there is still a long way to go. We might never see a world where every profile is guaranteed to be legitimate, and that might just be part of the risk that comes with taking part in this sharing economy. For now, though, at least you can rest assured that you're following the actual accounts of your favorite celebrities and internet personas just by looking for the little blue checkmark.
Monday, February 8, 2016
Sharing is Caring...Until it's Stealing
We've all been there, we've all done it: we've all streamed music on Pandora. And while using it, I never put much thought into how much the musician was being paid for that work. Turns out, it's not a lot. So is it ethical to stream this music? Is it ever ethical to illegally download or file-share music? In the end, the question isn't really how ethical it is to download or file-share; it's how ethical it is to persecute people for doing so.
For starters, many record companies (and musicians) hold the belief that if someone is sharing or streaming their music, they lose out on sales. However, this is a misconception in many ways. Most importantly, it is a misconception that the musicians would lose out on a lot of money, since iTunes only pays musicians about ten cents per download anyway, and popular streaming services like Spotify and Pandora only pay about fifty cents (Richmond).
So it's not like there's a huge loss of money. But say a musician feels spurned and wants to sue anyway. These days, while that is legally within their rights, it is not the smartest decision to make. Not only is it not smart, it is unethical. When musicians who are making millions begin suing children over downloading or streaming their music for free, they make enemies. People dislike them and are less likely to want to purchase their music.
And it really hurts the "little guy." People have to pay hundreds of thousands of dollars in damages, money they didn't have to begin with (hey, maybe that's why they were streaming the music instead of buying it!). The punishment harms the person who committed the crime far more than the crime harmed the person it was committed against, which is what makes this persecution unethical.
But aside from the law itself being unethical, it's completely illogical. A musician making hardly anything from iTunes sales (which is how most songs are downloaded now) should understand a fan skipping the album purchase, streaming it instead, and spending that money on merchandise or a concert ticket, where in the end the musician will make more money.
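To put some admittedly made-up numbers on that argument, here's a quick back-of-the-envelope sketch in Python. The ten-cents-per-download figure is the one cited above; every other number (album length, ticket price, the artist's share of tickets and merch) is a placeholder assumption, not real industry data.

```python
# Rough comparison (illustrative assumptions only): what an artist might earn
# from one fan who buys the album as downloads vs. one who streams it for free
# but then buys a concert ticket and a t-shirt.

PER_DOWNLOAD_PAYOUT = 0.10      # ~ten cents per track download (figure from the post)
TRACKS_ON_ALBUM = 12            # assumed album length
CONCERT_TICKET_PRICE = 40.00    # assumed ticket price
ARTIST_CUT_OF_TICKET = 0.50     # assumed share of the ticket that reaches the artist
SHIRT_PRICE = 25.00             # assumed merch price
ARTIST_CUT_OF_MERCH = 0.30      # assumed share of merch revenue

fan_who_downloads = PER_DOWNLOAD_PAYOUT * TRACKS_ON_ALBUM
fan_who_streams_then_spends = (CONCERT_TICKET_PRICE * ARTIST_CUT_OF_TICKET
                               + SHIRT_PRICE * ARTIST_CUT_OF_MERCH)

print(f"Fan who downloads the album earns the artist:   ${fan_who_downloads:.2f}")
print(f"Fan who streams, then buys a ticket and shirt:  ${fan_who_streams_then_spends:.2f}")
```

The exact numbers will obviously vary from artist to artist, but the shape of the argument holds: the margin on a ticket or a t-shirt dwarfs a download payout.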
Many people nowadays also want to preview the music they're going to buy. Not one-minute clips, but the entire song, the entire album. Consumers want to know what they're getting into and make sure they're spending their money wisely. So if streaming can lead to record sales, what's the problem with it? Looking at you, Taylor Swift.
Moreover, is it ethical for record companies and musicians to restrict how their music can be used? Is it ethical to say that music can only be bought, and never used in projects? That in order to put a song someone might not even like as the background of a YouTube video, he has to go out and buy it? If there's Creative Commons licensing for books and movies, shouldn't there be for songs as well? And if someone has purchased the song in question, shouldn't he be allowed to use it however he pleases, even in a video? The music industry is far behind the times here, because this kind of use, which has gotten the audio stripped from countless videos, is often what leads someone to discover a new artist and, in turn, download their music or pay to go to a concert.
Is it really ethical for large record companies to go after individuals for sharing a musician's content? It does more harm than good, and no matter how many people are made examples of, this behavior isn't going to change. Downloading songs for free is no longer seen as stealing...it's sharing. Just like you used to go halfsies on that new Donny Osmond CD, now you go halfsies on a virtual one. This is one of those cultural shifts, and if the major record companies and musicians don't get with the times, all they're going to do is lose customers and fanbases.
Tuesday, February 2, 2016
Data Brokers Broke My Trust
We've all heard the quote, often attributed to C.S. Lewis, that "integrity is doing the right thing when no one is watching." In fact, if you've ever spent any time inside an elementary, middle, or high school classroom, you've probably seen a sign or two about it. If the quote is right, and doing the right thing when no one is watching is what defines integrity, then data brokers dug a hole, tossed in their integrity, filled the hole, jumped on it a few times, and then spat on it. Essentially, they have no integrity.
Being a data broker is based solely on doing the wrong thing when no one is looking: data mining. And because there are few rules and no laws to enforce ethically correct behavior, these data brokers are able to keep essentially stealing information from consumers. Not cool. This behavior is what breaks consumers' trust and makes them fear the internet.
Why should consumers' trust be broken? Oh, maybe because data brokers sell things such as consumers' region, education, travel data, and Social Security numbers (!!!). Who do they sell it to? Pretty much anyone, including hotels, jewelry stores, airlines, colleges, retail chains, law enforcement, advertising agencies, other data brokers, and, oh yeah, the government (Fernbeck). Trust broken now? Starting to feel a little paranoid every time you visit a website or search Amazon for a product? Ready to go hide in a cave with zero technology or cell signal and live off the land for the rest of your life? I'm right there with you.
Luckily, it's not entirely hopeless. I know, right now you feel like Han Solo after Boba Fett tricks him and he ends up frozen in carbonite (shiver), with Boba Fett as the data brokers and Han Solo as the unsuspecting consumer. But there is hope that Luke can come and save you! Hope in the form of a proposed Fair Information Practices (FIP) checklist to keep data brokers ethical. Currently, these guidelines aren't enforceable by law, but hopefully that will change.
One of the new guidelines would be "harm": an ethical standard to make sure consumers don't suffer any "consequences resulting from the dissemination of false or wrong data" (Fernbeck). This, among several other guidelines (such as fairness, trust, respect, and privacy), would hopefully keep data brokers ethical.
Now, unless these guidelines go into action and are legally enforced, there's not much anyone can do to stop data brokers, short of hiding in a tech-free wilderness (which, let's face it, no matter how paranoid we are, we couldn't actually do). So for now, all we can do is be smart about what information we put online, be sparing with it, and remember that everything we do online is likely being catalogued and sold to someone. Hey, at least you know someone, somewhere, cares about your personal information!
Until there are some real, desperately needed reforms in the data mining industry, data brokers will not change their ways and will continue to behave unethically. So I'd like to leave data brokers a message from my old classrooms and (allegedly) C.S. Lewis: