Remember a few years ago when the government wanted to change how we access the internet, and everyone got up in arms because the change would make access harder for people who already have it? Me too. I was one of those people. But there is a bigger internet problem that we should be getting up in arms about: a great many people currently don't have access to the internet at all, and that needs to change.
The internet is a well of information. Most of us have become accustomed to using it not only for social media but as a basic part of our lives. How many times a day do you have a question and turn to Google for the answer? My bet is at least once. You google recipes, health tips, word definitions, that actor you know you recognize but can't remember from where. And you don't think twice about it. You use social media in some form several times a day. You're reading this blog post.
Now think if you couldn't do any of that. If you couldn't afford internet access, and the world's answers were no longer at your fingertips. How would you survive? For many people, the "dark ages" before the internet are very much a reality. In her study, Emily Hong looks at the many families in San Francisco's Chinatown who cannot afford internet. She attributes this digital divide to the racialization of Chinatown: over the years it became a place for the poorer Asian population to live, and it has still not escaped that stigma. Many of these residents cannot pay the estimated $32 a month it would cost to have in-home internet access. While that may not sound like a lot to some, that money adds up, especially in an expensive area like San Francisco.
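And "adds up" isn't hand-waving. Here's a quick back-of-the-envelope calculation (a rough sketch in Python, using only the $32/month estimate cited above):

    # Back-of-the-envelope: what the estimated $32/month works out to over time.
    monthly_cost = 32  # estimated in-home internet cost cited above, in dollars
    print(f"Per year:   ${monthly_cost * 12}")        # $384
    print(f"Per decade: ${monthly_cost * 12 * 10}")   # $3,840

For a family already stretched thin on rent and groceries, a few hundred dollars a year is not a rounding error.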
It doesn't seem ethical that some of the population can afford access to information and some can't. But what can be done about this? First, recognize that this problem spans the United States as a whole; while it is certainly worse in areas like San Fran's Chinatown, the overall cost of internet access needs to come down before those local problems can be solved.
For comparison, on the East Coast of the US, New York residents pay roughly $55/month for internet, almost double what people in other large cities such as London or Hong Kong pay. The same article linked in the previous sentence discusses steps to fix this and names lack of competition as a serious problem. Based on my personal experience of being pigeonholed into buying from a specific provider, I can easily agree. More market competition would create more competitive pricing and help bring down the overall cost of internet access.
While bringing down the price wouldn't solve every problem related to internet access, it would be a start. It goes without saying that internet companies overcharging for their service is unethical, and it directly produces communities like Chinatown, where residents cannot afford service. Unfortunately, these companies don't seem likely to adopt a more ethical business outlook anytime soon.
Sunday, March 20, 2016
Life Hacks
Want to know a good life hack? Hire a hacker. Now, that might seem like odd advice, but hiring a hacker for your business could actually end up benefiting you. I'm not saying you should go out and hire Anonymous to hack competitors' websites; I'm saying you should run tests on whether or not your own website is hackable.
We all know hacking is unethical. There's no way around that. But the thing is, the vast majority of people live ethically gray lives, and many of them are completely unethical. All that to say, there are Bad People in this world. And these people will try to hack your website, business, and customer information if you are successful and the information is accessible. So why not beat them to it?
This form of hacking is known as constructive hacking, where someone uses hacking to achieve a collective goal (for good, of course) (King). In this case, it is helpful to engage in constructive hacking as a way to problem-solve preemptively and see things from a hacker's perspective.
After Target was famously hacked in 2013, it brought in "security experts" at Verizon to "probe its network for weaknesses" (Krebs). Now, the article doesn't say specifically, but it doesn't take a lot of logic to reason that these experts were probing for weaknesses by simulating attacks the way real hackers would. In fact, Target recently opened a "Cyber Fusion Center," designed to keep Target's systems secure. This center employs a group of people called the "Red Team" whose entire job is to attempt to hack into Target's system. Seriously.
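To make the "probing for weaknesses" idea concrete, here is a minimal sketch of the kind of automated check a red team might start with: a basic TCP port scan of a machine you control. This is my own illustrative example, not anything from the Target or Krebs reporting; the host and port list are placeholders, and you should only ever scan systems you own or have written permission to test.

    import socket

    def scan_ports(host, ports, timeout=0.5):
        """Return the subset of `ports` that accept a TCP connection on `host`."""
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(timeout)
                if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                    open_ports.append(port)
        return open_ports

    if __name__ == "__main__":
        # Placeholder target: check your own machine for a few commonly exposed services.
        print(scan_ports("127.0.0.1", [22, 80, 443, 3306, 8080]))

A real assessment goes far beyond open ports, of course, but the point stands: anything an attacker could trivially discover, you want to discover first.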
Clearly, if Target had had all of this in place prior to the security breach, there is a good chance the breach wouldn't have happened. When you look at it from this perspective, hiring these ethical-hacking teams seems like an ethical necessity. If you don't, you're leaving innocent customers exposed to malevolent hackers who will try to steal their information. Not taking steps to prevent this is unethical and unacceptable.
So if you want to stay out of hot water and avoid large information infiltrations, take that preemptive security step and hire a hacker. You won't regret it.
Friday, March 18, 2016
They See Me Trollin'
The internet is a weird place. There's no argument there. But when you take a place with virtually no rules and a global community, a kind of culture inevitably grows. And just like all cultures worldwide, it has good and bad aspects. One of the weirder, possibly bad aspects is "trolling," a phenomenon where posters ("trolls") spam people's corners of the internet with (usually) rude or offensive content. Is this ethical? Should it be stopped?
Here's the thing. In America, freedom of speech is protected by law. Other countries, not so much. So how does that translate to the internet? It's hard to say. On one hand, it's easy to argue that trolling, while not necessarily ethical, is legal in America thanks to free speech, as long as the messages don't cross into hate speech or threats. But the internet is global. Other countries can and do use it, and their people occasionally troll too.
So the only option, then, is to approach this issue on a country-by-country basis, which is certainly not the most efficient (we could establish the internet as its own entity with its own globally enforced rules, but that would probably be even harder). Some countries, like New Zealand, have already recognized that trolling is a problem and taken steps to fix it. However, as this article points out, there are some very large problems with the resulting law: it defines punishable speech as speech that is "indecent," "false," or "used to harass an individual." And as the article also says, this is very broad. It could technically encompass political cartoonists. It's also not clear whether the law applies only to people posting such content or to people responding to it as well.
Under this law, would re-posting this image be punishable in New Zealand? Many people found it indecent and false. The law is simply too broad to do any real good, and could end up infringing on the critical speech needed in large spaces like the internet.
Controlling trolling on the internet is one of the hardest problems to solve in today's world. It's obvious that many trolls, such as those mentioned in this article, are causing real ethical harm. They shouldn't be doing what they're doing. But unfortunately, until the internet has its own division of millions of people whose job is to screen the web for these sorts of trolls, little will be done. Little can be done without infringing on the critical speech of millions of innocent parties. Unfortunately, this is one of those times where people just have to block and ignore, as unfair as that may be. But, hey, it's the internet. Anything goes there.
Monday, March 14, 2016
Virtual Reality: The Future
Virtual reality is a phrase out of sci-fi. It evokes images of futuristic goggles that hold their own screen and create a new reality for the wearer. But virtual reality is no longer limited to that image. Today it also takes place in chat rooms, in spaces where users can manipulate the world around them to create one of their own. But is this a good thing?
As Julian Dibbell points out in his article "A Rape in Cyberspace," these new worlds are not regulated. There is no official rulebook for how circumstances should be handled, and this can obviously cause some very dangerous problems. Virtual reality can't be stopped though; at this point it has to be embraced. And there needs to be a set of rules in place for these "worlds" if virtual reality is to remain a positive thing.
The article showed that these communities are fairly adept at self-governing, but that does not mean the governing is effective, as Mr. Bungle proved by returning under a different username after his banishment. Something has to change in order for virtual realities to become truly safe.
Virtual reality is growing, gaining momentum. And it's not slowing down any time soon. It's a world that is becoming more real with each passing day, and as users get more drawn in, the lines between "real" reality and virtual reality will blur further. To protect these users, there needs to be some reform in the sweeping rules that govern virtual reality.
I know it's nearly impossible to solve the problem, but there can be improvements. These will help the general populace embrace virtual reality and lead to more use, for more purposes. Eventually, if virtual reality can become a safe place, we could be living in the future world of Phil of the Future, with virtual reality a commonplace occurrence. That is the world I hope for.
Monday, February 29, 2016
Artificial Intelligence Could Doom Us All
Basically, the possibilities for Artificial Intelligence (AI) are actually terrifying. Maybe I'm just a doomsday-ist or overly paranoid, but I feel like creating a machine that could be smarter than us, that we could end up working for, is a bad idea. Ethically, it could cause quite a bit of harm.
In this article by Raffi Khatchadourian, philosopher Nick Bostrom discusses the potential merits of creating highly intelligent AI, AI that can gain IQ from answering questions (is that not terrifying? If it answers questions correctly, it will just keep getting smarter... and eventually be smarter than everyone else). During the discussion, one of the participants said, "The A.I. that will happen is going to be highly adaptive, emergent capability, and highly distributed. We will be able to work with it--for it--not necessarily contain it." Okay. If warning bells aren't going off in your head, you've never seen a sci-fi movie.
Listen, I'm all for scientific advancement. But when humans create things, they bring into those things human error. I don't want an all-knowing robot with human error. That's a very dangerous thing! There's a reason this kind of thing is the plot of several doomsday movies.
Let's just say, for a minute, that this A.I. is created and it becomes smarter than some, or most, humans. Let's also consider that humans will try to give it reasoning skills, and that those skills will inherit our human error. These A.I. machines will be smarter than us and possess faulty reasoning. They will either A) take advantage of us lower mortals in the workforce and in life in general, or B) revolt and kill us all.
Both of these options would cause major ethical harm to the humans currently populating this world. By creating A.I., humans run the risk of "playing god" and bringing ruin to the human race. Is this playing doomsday? Yes. But amid everyone getting excited about having their own little robot buddy, someone has to think about the worst-case scenario. Maybe it won't change anything in the long run, but hey, I'll be prepared.
Tuesday, February 23, 2016
To Bot or Not
Robots and AI are, according to Bill Gates, "at the point the computer industry was 30 years ago" (Lin). If Gates is right, there's about to be a huge boom in this industry. And with that boom will come questions about how far is too far, and just what exactly an advancement in AI and robotics could bring. One of the most important questions is how ethical it is to create these robots when their safety cannot be guaranteed.
It's never not going to happen: computer code-based programs will always run the risk of "glitching" or malfunctioning. When Microsoft Word glitches, it's no big deal. Just quit it and re-open it. When your computer glitches, there's certainly an element of panic, but it's only harming you. You can always bring it in for repairs and fix the problem. But when a government drone glitches, that's a problem.
This exact situation happened in August of 2010, when a helicopter drone malfunctioned and hurtled toward Washington, D.C., putting the safety of the White House in jeopardy. Is it ethical for the government to continue developing these drones even though they're not 100% reliable? Where is that threshold?
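Part of why 100% reliability is so elusive is how easily a tiny human mistake slips into code. As a toy illustration (entirely hypothetical, and not how any real drone is programmed), consider an altitude check that silently mixes feet and meters:

    # Entirely hypothetical sketch: a drone altitude check that mixes units.
    TARGET_ALTITUDE_M = 120  # mission plan specifies meters

    def should_climb(sensor_altitude_ft: float) -> bool:
        # Bug: the sensor reports feet, but we compare against a value in meters,
        # so the drone "thinks" it is much higher than it really is.
        return sensor_altitude_ft < TARGET_ALTITUDE_M

    # At 200 ft (~61 m) the drone is well below its 120 m target,
    # yet this prints False and the drone never climbs.
    print(should_climb(200.0))

One forgotten unit conversion, and a machine that flies does exactly the wrong thing with total confidence.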
Honestly, it may never be truly ethical to develop this technology as far as it is currently being pushed. But that doesn't mean it shouldn't exist. This advancement in robotics and AI could pave the way for more efficiency and safety, but we won't get there without some trial and error.
Admittedly, some of the problems with reliance on robots fall squarely on the shoulders of humans. If humans become too reliant on this technology, they run the risk of losing valuable skills as well as jobs. Already this is beginning. In May of last year, a driver decided to demo his Volvo's automatic braking by driving the car toward a crowd of people, just to prove the brakes would stop it.
Unsurprisingly, this went horribly wrong. His car lacked the upgrade needed for this brake system, but he relied on it regardless. With this combination of idiocy and lack of understanding about the car he bought, this man proved that while the Robotics and AI technology may exist to advance society, that doesn't mean society is ready for it.
It would be unfair to say this advancement needs to be postponed until society can handle it, mainly for two reasons: 1. society is remarkably good at adapting, and 2. there will always be idiots. But in light of the reliability of human idiocy, there is a line AI developers should draw with this technology: it would be unethical for robotics to begin causing unnecessary harm to humans, such as causing people to lose their jobs to robots. Human agency does have to be factored in, though. If humans choose to rely on this technology to do everything for them, then there is no harm done to them; they have brought it upon themselves to advance into obscurity.
Humans in the future, if we rely too much on Robotics. Wall-E predicted it first.
Tuesday, February 16, 2016
The Little Blue Checkmark
The sharing economy has brought a great deal of good along with a much larger flow of information. But with that flow comes uncertainty. The developers of apps such as Facebook, Twitter, and Instagram have had to deal with this uncertainty and the occasional lack of trust. They want their platforms to feel reliable, and they needed a way to guarantee that. Enter the "little blue checkmark."
With the sharing economy comes a lot of unknowns: is this website safe? Can I trust this site with my email? My phone number? Not everyone is comfortable with this. Data shows that 69% of US adults are hesitant to take part in the sharing economy unless a reputable source says it is reliable (IDE Sharing Economy). Members of today's younger generation (12-17 year olds), however, are much more open with their information: 20% were okay sharing their cell phone number in 2013, way up from just 2% in 2006 (Henley). Being that open also makes them more susceptible to being catfished.
No, not catfished as in the actual fish, but as in being tricked by someone online who claims to be someone they're not. It's so common in today's digital world that MTV made a TV show about it. That's right, there's a reality TV show about people masquerading as someone else online. And while that might seem humorous, the show fills a real need as online dating becomes more prevalent.
With platforms so cognizant of catfishing, and with many people only willing to participate in social media when authenticity is guaranteed (on Facebook, this could mean a profile with multiple photos, detailed personal information, and friends whose profiles look the same (Henley)), these companies knew they had to step up big time. So they created the blue checkmark, which has now become a universally accepted symbol of "verification." If someone has this check by their profile, you know they're the real deal.
This honor is usually reserved for people whose profiles get impersonated frequently, namely celebrities, but it isn't unheard of for a "regular person" to get the check either. This article does a great job explaining how various websites decide who gets verified, and it illustrates that becoming verified is often not easy, which is both good and bad. The exclusivity of verification ensures that false accounts aren't verified (which would defeat the whole purpose), but it also means the majority of accounts carry no signal at all about whether they're real or fake.
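From the consumer side, checking verification is almost embarrassingly simple. Here's a sketch that inspects the "verified" flag a Twitter-style user object carries; the JSON below is made up for the example, the field names follow the old Twitter REST API's user object, and checking one flag is obviously no substitute for a platform's full verification process:

    import json

    # Made-up user record in the style of Twitter's classic user payload.
    payload = '{"screen_name": "example_celeb", "verified": true, "followers_count": 1200000}'

    user = json.loads(payload)
    if user.get("verified"):
        print(f"@{user['screen_name']} has the little blue checkmark")
    else:
        print(f"@{user['screen_name']} is unverified -- could be anyone")

The hard part isn't reading the flag; it's the human review that decides who gets to set it in the first place.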
While verification is a step towards better internet safety and reliability, there is still a long way to go. We might never see a world where every profile is guaranteed to be legitimate, and that might just be part of the risk that comes with taking part in this sharing economy. For now, though, at least you can rest assured that you're following the actual accounts of your favorite celebrities and internet personas just by locating the little blue checkmark.
Monday, February 8, 2016
Sharing is Caring...Until it's Stealing
We've all been there, we've all done it: we've all used Pandora radio for streaming. And when using it, I never put much thought into how much the musicians were being paid for their work. Turns out, it's not a lot. So is it ethical to stream this music? Is it ever ethical to illegally download or file-share music? In the end, the question isn't really about how ethical it is to download or file-share; it's about how ethical it is to persecute people for doing so.
For starters, many record companies (and musicians) hold the belief that if someone is sharing or streaming their music, they lose out on sales. However, this is a misconception in many ways. Most importantly, it is a misconception that the musicians would lose out on a lot of money, since iTunes only pays musicians about ten cents per download anyway, and popular streaming services like Spotify and Pandora only pay about fifty cents (Richmond).
So it's not like there's really much money being lost. But say a musician feels spurned and wants to sue anyway. These days, while legally within their rights, it is not the smartest decision to make. Not only is it not smart, it is unethical. When musicians who are making millions begin suing children for downloading or streaming their music for free, they make enemies. People dislike them and become less likely to purchase their music.
But it really hurts the "little guy." People have to pay hundreds of thousands of dollars in damages, money they didn't have to begin with (hey, maybe that's why they were streaming the music instead of buying it!). It harms the person committing the crime far more than the person the crime was committed against, which is what makes this persecution unethical.
But aside from being unethical, suing is completely illogical. For a musician making hardly anything from iTunes sales (which is how most songs are legally downloaded now), you would think they would understand someone not purchasing the new album, choosing instead to stream it and spend the money on merchandise or a concert ticket, from which the musician will ultimately make more money.
Many people nowadays also want to preview the music they're going to buy. This doesn't mean one-minute segments, but the entire song, the entire album. Consumers now want to know what they're getting into and make sure they're spending their money correctly. So if streaming can lead to record sales, what's the problem with it? Looking at you, Taylor Swift.
Moreover, is it ethical for record companies and musicians to restrict how their music can be used? Is it ethical to say that the music can only be bought, and not used in projects? That in order to put a song someone might not even like as background for a YouTube video, he has to go out and buy it? If there's Creative Commons licensing for books and movies, shouldn't there be for songs as well? And if someone has purchased the song in question, shouldn't he be allowed to use it however he pleases, even if that might be in a video? The music industry is behind the times here: using music this way, something many videos have had their audio stripped for, is often what leads someone to discover a new artist and, in turn, buy their music or pay to go to a concert.
Is it really ethical for large record companies to take advantage of individuals for sharing a musician's content? It does more harm than good, and no matter how many people are made examples of, this behavior isn't going to change. Downloading songs for free is no longer seen as stealing...it's sharing. Just like you used to go in halfsies for that new Donny Osmond CD, now you go halfsies on a virtual CD. This is one of those cultural shifts, and if the major record companies/musicians don't get with the times, all they're going to do is lose customers and fanbases.
Tuesday, February 2, 2016
Data Brokers Broke My Trust
We've all heard the quote attributed to C.S. Lewis: "Integrity is doing the right thing when no one is watching." In fact, if you've ever spent any time inside an elementary, middle, or high school classroom, you've probably seen a sign or two about it. If Lewis is right, and doing the right thing when no one is watching is what defines integrity, then data brokers dug a hole, tossed in their integrity, filled the hole, jumped on it a few times, and then spat on it. Essentially, they have no integrity.
Being a data broker is based solely on doing the wrong thing when no one is looking: data mining. And because there are few rules and no laws to enforce ethically correct behavior, these data brokers are able to continue essentially stealing information from consumers. Not cool. This behavior is what breaks the trust of consumers, making them fear the internet.
Why should consumers' trust be broken? Oh, maybe because data brokers sell things such as the region, education, travel data, and Social Security numbers (!!!) of consumers. Who do they sell it to? Pretty much anyone, including hotels, jewelry stores, airlines, colleges, retail chains, law enforcement, advertising agencies, other data brokers, and, oh yeah, the government (Fernbeck). Trust broken now? Starting to feel a little paranoid every time you visit a website or search Amazon for a product? Ready to go hide in a cave with zero technology or cell signal and live off the wild for the rest of your life? I'm right there with you.
Luckily, it's not entirely hopeless. I know, right now you feel like Han Solo after Boba Fett tricks him and he ends up frozen in carbonite (shiver), with Boba Fett as the data brokers and Han Solo as the unsuspecting consumer. But there is hope that Luke can come and save you! Hope in the form of a proposed Fair Information Practices (FIP) checklist to keep data brokers ethical. Currently, these guidelines aren't enforceable by law, but hopefully that will change.
One of the new guidelines would be "harm": an ethical standard to make sure consumers aren't subjected to any "consequences resulting from the dissemination of false or wrong data" (Fernbeck). This, among several other guidelines (such as fairness, trust, respect, and privacy), would hopefully keep data brokers ethical.
Now, unless these guidelines go into action and are legally enforced, there's not really much anyone can do to stop data brokers, except hide in a tech-free wilderness (which, let's face it, no matter how paranoid we are, we couldn't actually do). So for now, all we can do is be smart about what information we put online, be sparing with it, and remember that everything we do online is likely being catalogued and sold to someone. Hey, at least you know someone, somewhere, cares about your personal information!
Until there are some real, desperately needed reforms in the data-mining industry, data brokers will not change their ways and will continue to behave unethically. So I'd like to leave data brokers a message from my old classrooms and C.S. Lewis:
Tuesday, January 26, 2016
Big Brother, Safe Cities?
In 2004, Ross McNutt and some friends built a surveillance plane. Not just any plane, though; they effectively created an all-seeing eye. This podcast explains it all in great detail, but the question here is: is this really a good idea?
In theory, an all-seeing eye of God watching over the city might not be such a bad thing. As McNutt explained, he's used this plane to solve crimes, capture murderers and kidnappers, and discover who planted roadside bombs in combat zones. So clearly, there is a positive side to all this. It's hard, ethically, to say no to something that could save lives. But it's also hard to say yes.
When listening to this podcast, at first I thought it was an easy decision. Of course I would want to implement something that could save lives. Of course I would want to make the world safer. Especially after hearing about what McNutt's plane did in Mexico, where it was able to trace hitmen back to their leader's house. That's huge! You can arrest the little guys all you want, but unless you have the person employing them, it's really not going to change much. But with this invention, you can get the leader. And almost easily, too.
However, there are definitely drawbacks. How do you ensure that the people operating McNutt's planes stay objective and don't exploit the footage for personal ends? Could the software be hacked, letting someone commandeer the planes and spy on others? Will people become extremely paranoid knowing they're being watched all the time? Will this actually stop crime, or will criminals just get more creative and figure out how to work around the planes? If these planes become regularly implemented, will the whole world turn into an episode of Big Brother? And if so, is that worth it?
As of now, these risks are too much for me to endorse the plane. Even though there is a great deal of good to be had from its use, like most things, it has a great deal of potential for evil as well. Ethically, I cannot say this plane is a good idea when there is still so much unknown. The invention would save lives, no doubt about it, and I feel a little guilty saying I'm against it. But in the end, I simply have too many questions about widespread use and its consequences to feel safe knowing there's a giant all-seeing plane in the sky above me.
Maybe it's because I'm a little paranoid, or because I've read enough books, but there is so much that could go wrong with these planes. In the end, McNutt says safety, but all I see is this.
Wednesday, January 20, 2016
The Ethics of Veganism
Some people consider it ethical to become vegan because of animal rights. Others believe that while animals are important, their rights do not factor into food. People who follow the Rights Approach would say not being vegan is immoral. Other ethical approaches disagree. But which is correct? Is there even a correct answer?
In the article from brown.edu entitled "A Framework for Making Ethical Decisions," the author states several different approaches to making ethical decisions. One of these approaches, under the Non-Consequentialist Theories category, is the Rights Approach. This approach borrows words from Kant, saying, to paraphrase, "act in a way that you treat all of humanity, whether it is yourself or someone else, as their own end and never as a means to another end" (Brown).
What does this say about veganism? Well, the article brings up a very good point, saying "many now argue that animals . . . have rights" (Brown). And here veganism comes in. Many people could easily use the Rights Approach to validate veganism, arguing that eating meat or consuming animal byproducts violates animal rights. And really, they'd be right. It's not like the meat industry treats their animals well. This video shows what happens in one of Iowa's largest pig farms (viewer discretion advised). And while it's easy to say that this isn't what happens to all animals who eventually end up on our table, it is impossible to deny that this is the reality for many companies in the meat industry.
If you abide by the Rights Approach to ethics, it seems pretty clear that abstaining from meat/animal products is the ethical way to go. As the article later states, the guiding light for ethics according to the Rights Approach is whatever action "respects the rights of all who have a stake in the decision" (Brown). But not everyone abides by this approach.
I, for one, am guilty of not being vegan. No matter how many animal rights videos I watch, and how often I feel aghast watching videos like the one above, there is just some part of me that cannot give up my bacon and burgers. Call it unethical if you will; I'll call it "using the Egoistic Approach." This approach states "self-interest is a prerequisite to self-respect and respect for others" (Brown). Eating meat/animal byproducts is definitely done out of my own self-interest. But I can validate it through this approach by saying that in order to respect others and myself, I have to look out for Number 1: myself.
Is this actually ethical, though? The Brown.edu article outlines a process for making ethical decisions: recognize the ethical issue; consider the parties involved; gather all relevant information; formulate actions and consider alternatives; make a decision and consider it; act; and reflect on the outcome. According to this process, becoming a vegan is probably the correct--or most ethical--response to the mistreatment of animals in the meat industry.
Still, I can't seem to bring myself to become a vegan. Maybe this is just an impermissible decision I have to live with, or maybe it speaks to a larger reality: theorizing about ethics is one thing, but living by that theory is entirely different.