Remember a few years ago when the government wanted to change how we access the internet, and everyone got up in arms because any change might make access harder for the people who already have it? Me too. I was one of those people. But there is a bigger internet problem that we should be getting up in arms about: a great many people don't have internet access at all, and that needs to change.
The internet is a well of information. Most of us have become so accustomed to using it that it's simply part of our lives, not just for social media but for everything. How many times a day do you have a question and use Google to find the answer? My bet is at least once. You google recipes, health tips, word definitions, that actor you know you recognize but can't remember from where. And you don't think twice about it. You use social media in some form several times a day. You're reading this blog post.
Now think if you couldn't do any of that. If you couldn't afford internet access, and the world's answers were no longer at your fingertips, how would you survive? For many people, the "dark ages" before the internet are very much a reality. In her study, Emily Hong looks at the many families in San Francisco's Chinatown who cannot afford internet. She attributes this digital divide to the racialization of Chinatown. Over the years, it became a place for the poorer Asian population to live, and it has still not escaped that stigma. Many of these residents cannot pay the estimated $32 a month it would cost to have in-home internet access. While that may not sound like a lot to some, the money adds up, especially in an expensive area like San Francisco.
It doesn't seem ethical that some of the population can afford access to information and some can't. But what can be done about this? This problem encompasses the United States as a whole, and while it is certainly worse in areas like San Fran's Chinatown, the overall cost of internet access needs to come down before those local problems can be solved.
To compare with the East Coast of the US, New York residents pay roughly $55/month for internet, almost double what people in other large cities such as London or Hong Kong pay. The same article linked in the previous sentence discusses steps to fix this, and names lack of competition as a serious problem. Based on my personal experience of being pigeonholed into buying from a specific provider, I can easily agree with this. More market competition could in fact create more competitive prices and help bring down the overall price of internet.
While bringing down the price wouldn't necessarily solve every problem related to the internet, it would be a start. It goes without saying that internet companies overcharging for their service is extremely unethical, and it directly creates communities like Chinatown's, where residents cannot afford service. Unfortunately, these companies don't seem likely to shift toward a more ethical business outlook anytime soon.
COMM 360
Sunday, March 20, 2016
Life Hacks
Want to know a good life hack? Hire a hacker. Now, that might seem like odd advice, but hiring a hacker for your business could actually end up benefiting you. I'm not saying you should go out and hire Anonymous to hack competitors' websites; I'm saying you should have test runs on whether or not your website is hackable.
We all know hacking is unethical. There's no way around that. But the thing is, most people live ethically gray lives, and some are flat-out unethical. All that to say, there are Bad People in this world. And these people will try to hack your website, your business, and your customers' information if you are successful and that information is accessible. So why not beat them to it?
This form of hacking is known as constructive hacking, where someone uses hacking as a way to achieve a collective goal (for good, of course) (King). In this case, it is helpful to engage in constructive hacking as a way to preemptively problem-solve and see things from a hacker's perspective.
After Target was famously hacked in 2013, the company brought in "security experts" at Verizon to "probe its network for weaknesses" (Krebs). Now, the article doesn't say so specifically, but it doesn't take a lot of logic to reason that these experts were probing for weaknesses by simulating attacks the way hackers would. In fact, Target recently opened a "Cyber Fusion Center" designed to strengthen its cyber security. This center employs a group of people called the "Red Team" whose entire job is to attempt to hack into Target's system. Seriously.
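Target's actual red-team work obviously isn't public, but to make the idea of "probing for weaknesses" concrete, here's a toy sketch in Python: it simply checks whether a handful of common ports on a host answer at all. This is a hypothetical illustration, not anything from the Krebs article or Target's Cyber Fusion Center, and you should only ever run this kind of probe against systems you own or are explicitly authorized to test.

```python
# Toy "probe for weaknesses": report which common ports on a host accept
# connections. Real penetration testing is far more involved -- and always
# done with the owner's permission.
import socket

COMMON_PORTS = {21: "FTP", 22: "SSH", 80: "HTTP", 443: "HTTPS", 3306: "MySQL"}

def probe(host, timeout=1.0):
    for port, name in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the port accepted a connection
            is_open = sock.connect_ex((host, port)) == 0
            print(f"{name:>6} ({port}): {'OPEN' if is_open else 'closed'}")

if __name__ == "__main__":
    probe("localhost")  # only scan hosts you are authorized to test
```

An open port isn't a breach by itself, but a list like this is the kind of starting point an attacker would build from, which is exactly why it's worth seeing first yourself.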
Clearly, if Target had had all of this in place prior to the security breach, there is a good chance the breach wouldn't have happened. When you look at it from this perspective, hiring these hacking teams seems like an ethical necessity. If you don't, you're leaving innocent customers exposed to malevolent hackers who will try to steal their information. Not taking steps to prevent this is unethical and unacceptable.
So if you want to stay out of hot water and avoid large information infiltrations, take that preemptive security step and hire a hacker. You won't regret it.
Friday, March 18, 2016
They See Me Trollin'
The internet is a weird place. There's no argument there. But when you take a place like the internet, where there are virtually no rules and the community is global, a kind of culture inevitably grows. And just like all cultures world-wide, there are good and bad aspects to it. One of these weird, possibly bad aspects is "trolling," a phenomenon where posters ("trolls") spam people's corners of the internet with (usually) rude or offensive content. Is this ethical? Should it be stopped?
Here's the thing. In America, freedom of speech is protected by law. In other countries, not so much. So how does that translate to the internet? It's hard to say. On one hand, it's easy to argue that trolling, while not necessarily ethical, is certainly legal in America thanks to free speech, as long as the messages don't cross into hate speech or threats. But the internet is global. Other countries can and do use the internet, and their people occasionally troll.
So the only option, then, is to approach this issue on a country-by-country basis, which is certainly not the most efficient (though we could always establish the internet as its own entity, with its own set of rules to be followed globally, but that would probably be even harder). Some countries, like New Zealand, have already recognized that trolling is a problem and taken steps to fix it. However, as this article points out, there are some very large problems with New Zealand's law: it defines punishable speech as speech that is "indecent," "false," or "used to harass an individual." And as the article also says, this is very broad. It could technically encompass political cartoonists. It's also not clear whether the law applies only to the people posting this content, or to the people responding to it as well.
Under this law, would re-posting this image be punishable in New Zealand? Many people found it indecent and false. The law is simply too broad to do any real good, and could end up infringing on the critical speech needed in large spaces like the internet.
Controlling trolling on the internet is one of the hardest problems to solve in today's world. It's obvious that many trolls, such as those mentioned in this article, are causing real ethical harm. They shouldn't be doing what they're doing. But unfortunately, until the internet is given its own division of millions of people whose job is to screen the web for these sorts of trolls, nothing can be done without infringing on the critical speech of millions of innocent parties. This is one of those times where people just have to block and ignore, as unfair as that may be. But, hey, it's the internet. Anything goes there.
Monday, March 14, 2016
Virtual Reality: The Future
Virtual reality is a phrase out of sci-fi. It evokes images of futuristic goggles that hold their own screen and create a new reality for the wearer. But virtual reality no longer exists only in that form. Now it also takes place in chat rooms, in spaces where users can manipulate the world around them to create one of their own. But is this a good thing?
As Julian Dibbell points out in his article "A Rape in Cyberspace," these new worlds are not regulated. There is no official rulebook for how circumstances should be handled, and this can obviously cause some very dangerous problems. Virtual reality can't be stopped though; at this point it has to be embraced. And there needs to be a set of rules in place for these "worlds" if virtual reality is to remain a positive thing.
The article showed that these realities are fairly adept at self-governing, but that does not mean the governing is effective, as Mr. Bungle's return under a different username after his banishment proves. Something has to change in order for virtual realities to become truly safe.
Virtual reality is growing, gaining momentum. And it's not slowing down any time soon. It's a world that is becoming more real with each passing day, and as users get more drawn in, the lines between "real" reality and virtual reality will become more blurred. To protect these users, there needs to be some reform in the wide sweeping rules of virtual reality to help keep users safe.
I know it's nearly impossible to solve the problem, but there can be improvements. These will help the general populace embrace virtual reality and lead to more use, for more purposes. Eventually, if virtual reality can become a safe place, we could be living in the future world of Phil of the Future, with virtual reality a commonplace occurrence. That is the world I hope for.
Monday, February 29, 2016
Artificial Intelligence Could Doom Us All
Basically, the possibilities for Artificial Intelligence (AI) are actually terrifying. Maybe I'm just a doomsday-ist or overly paranoid, but I feel like creating a machine that could be smarter than us, that we could end up working for, is a bad idea. Ethically, it could cause quite a bit of harm.
In this article by Raffi Khatchadourian, philosopher Nick Bostrom discusses the potential merits of creating highly intelligent AI, AI that can gain IQ from answering questions (is that not terrifying? If it answers questions correctly, it will just keep getting smarter... and eventually be smarter than everyone else). During the discussion, one of the participants said, "The A.I. that will happen is going to be highly adaptive, emergent capability, and highly distributed. We will be able to work with it--for it--not necessarily contain it." Okay. If warning bells aren't going off in your head, then you've never seen a sci-fi movie.
Listen, I'm all for scientific advancement. But when humans create things, they bring into those things human error. I don't want an all-knowing robot with human error. That's a very dangerous thing! There's a reason this kind of thing is the plot of several doomsday movies.
Let's just say, for a minute, that these A.I. machines are created and they become smarter than some, or most, humans. Let's also take into consideration the fact that humans will try to give them reasoning skills. These A.I. machines will be smarter than us and possess faulty reasoning skills. They will either A) take advantage of us lower mortals in the workforce and life in general or B) revolt and kill us all.
Both of these options would cause major ethical harm to the humans currently populating this world. By creating A.I., humans run the risk of "playing god" and bringing the human race to complete ruin. Is this playing doomsday? Yes. But amid everyone getting excited about having their own little buddy, someone has to think about the worst-case scenario. Maybe it won't change anything in the long run, but hey, I'll be prepared.
Tuesday, February 23, 2016
To Bot or Not
Robots and AI are, according to Bill Gates, "at the point the computer industry was 30 years ago" (Lin). If Gates is right, there's about to be a huge boom in this industry. And with that boom will come questions about how far is too far, and just what an advancement in AI and robotics could bring. One of the most important questions is how ethical it is to create these robots if their safety cannot be guaranteed.
It's never not going to happen: computer code-based programs will always run the risk of "glitching" or malfunctioning. When Microsoft Word glitches, it's no big deal. Just quit it and re-open it. When your computer glitches, there's certainly an element of panic, but it's only harming you. You can always bring it in for repairs and fix the problem. But when a government drone glitches, that's a problem.
This exact situation happened in August of 2010, when a helicopter drone malfunctioned and hurtled towards Washington, D.C., actually putting the safety of the White House in jeopardy. Is it ethical for the government to continue developing these drones even though they're not 100% reliable? Where is that threshold?
Honestly, it may never be truly ethical to develop this technology to the point it is currently being developed. But that doesn't mean it shouldn't exist. This advancement in Robotics and AI could pave the way for more efficiency and safety, but we won't get there without some trial and error.
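For what it's worth, "trial and error" here usually means engineering in failsafes. As a purely illustrative sketch in Python (not the actual software from the 2010 incident, whose details aren't public), here's the basic shape of a lost-link failsafe: if the control link goes quiet for too long, the drone stops doing whatever it was doing and heads home.

```python
# Toy lost-link failsafe: if the ground station goes quiet, switch to a
# return-to-base mode instead of flying on blindly. Real autopilots are
# vastly more complex and go through certification; this is just the idea.
import time

LINK_TIMEOUT_S = 5.0  # assumed threshold: how long silence is tolerated

class DroneController:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.mode = "NORMAL"

    def on_heartbeat(self):
        # Called whenever a control packet arrives from the ground station.
        self.last_heartbeat = time.monotonic()

    def tick(self):
        # Run every control cycle; fall back to a safe mode if the link is lost.
        silent_for = time.monotonic() - self.last_heartbeat
        if self.mode == "NORMAL" and silent_for > LINK_TIMEOUT_S:
            self.mode = "RETURN_TO_BASE"
            print("Control link lost -- returning to base.")
```

Logic like this only helps, of course, if it's actually in place and actually triggers, which is exactly what went wrong in 2010.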
Admittedly, some of the problems with reliance on robots fall squarely on the shoulders of humans. If humans become too reliant on this technology, they run the risk of losing valuable skills as well as jobs. This is already beginning. In May of last year, a driver decided to demo his Volvo by driving it toward a crowd of people, just to prove the automatic brakes would work.
Unsurprisingly, this went horribly wrong. His car lacked the upgrade needed for that braking system, but he relied on it regardless. With this combination of idiocy and lack of understanding about the car he bought, this man proved that while the robotics and AI technology may exist to advance society, that doesn't mean society is ready for it.
It would be unfair to say this advancement needs to be postponed until society can handle it, mainly for two reasons: 1. Society is remarkably good at adapting, and 2. There will always be idiots. But in light of the reliability of human idiocy, there is a line that AI developers should draw with this technology. It would be unethical for robotics technology to start causing unnecessary harm to humans, such as costing them their jobs. Human agency does have to be factored in, though. If humans choose to rely on robots to do everything for them, there is no harm from the robots' side; humans have brought it upon themselves to advance into obscurity.
Humans in the future, if we rely too much on robotics. WALL-E predicted it first.
Tuesday, February 16, 2016
The Little Blue Checkmark
The sharing economy has brought a lot of good along with a much larger flow of information. But with that flow comes uncertainty. And the developers of apps such as Facebook, Twitter, and Instagram have had to deal with this uncertainty and occasional lack of trust. They want their platforms to be reliable, and they needed a way to guarantee that. Enter the "little blue checkmark."
With the sharing economy comes a lot of unknowns: is this website safe? Can I trust this site with my email? My phone number? Not everyone is comfortable with this. Data show that 69% of US adults are hesitant to be part of the sharing economy unless they have a reputable source saying it is reliable (IDE Sharing Economy). Today's younger generation (12-17 year olds), however, is much more open with its information, with 20% okay with sharing their cell phone number in 2013, way up from just 2% in 2006 (Henley). Being that open with information also makes them more susceptible to being catfished.
No, not catfished as in the actual fish, but as in being tricked by someone online who claims to be someone they're not. It's so common in today's digital world that MTV made a TV show about it. That's right, there's a reality TV show about people masquerading as someone else online. And while that might seem humorous, the show really does serve a purpose as online dating becomes more prevalent.
So with websites cognizant of this catfishing, and with many people only willing to participate in social media platforms if authenticity can be guaranteed (on Facebook, this could mean having multiple photos, detailed personal information, and friends whose profiles have the same (Henley)), these platforms knew they had to step up big time. They created the Blue Checkmark, which has now become a universally accepted symbol of "verification." If someone has this check by their profile, you know they're the real deal.
This honor is usually reserved for people whose profiles get impersonated frequently, namely celebrities. But it isn't unheard of for a "regular person" to get the check either. This article does a great job explaining how various websites determine who gets verified, and it illustrates that becoming verified often isn't easy, which is both good and bad. The elite status of verification ensures that false accounts aren't verified (which would defeat the whole purpose), but it also means that for the majority of accounts there's no clear signal of whether they are real or fake.
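If you're curious what the checkmark looks like from a programmer's point of view, platforms generally expose it as a simple flag in their APIs. Here's a minimal sketch using Twitter's REST API via the tweepy library; the credentials are placeholders you'd replace with your own developer keys, and field names vary by platform and API version.

```python
# Minimal sketch: read the "verified" flag for a Twitter account via tweepy.
# The credentials below are placeholders -- substitute real keys from a
# Twitter developer account. Other platforms expose similar flags.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

def is_verified(handle):
    """Return True if the account carries the little blue checkmark."""
    user = api.get_user(screen_name=handle)
    return bool(user.verified)

if __name__ == "__main__":
    print(is_verified("nasa"))  # a well-known verified account
```

In other words, the checkmark isn't magic; it's a single yes/no field that the platform vouches for, which is exactly why who gets to flip it matters so much.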
While verification is a step towards better internet safety and reliability, there is still a long way to go. We might never see a world where every profile is guaranteed to be legitimate, and that might just be part of the risk that comes with taking part in this sharing economy. For now, though, at least you can rest assured that you're following the actual accounts of your favorite celebrities and internet personas just by locating the little blue checkmark.