Remember a few years ago when the government wanted to change how we access the internet, and everyone got up in arms because the change would make it harder for people who already have access to keep it? Me too. I was one of those people. But there is a bigger internet problem we should be getting up in arms about: a great many people currently don't have access to the internet at all, and that needs to change.
The internet is a well of information. Most of us have become so accustomed to using it, not only for social media but as a part of everyday life. How many times a day do you have a question and turn to Google for the answer? My bet is at least once. You Google recipes, health tips, word definitions, that actor you know you recognize but can't place. And you don't think twice about it. You use social media in some form several times a day. You're reading this blog post.
Now imagine you couldn't do any of that: you couldn't afford internet access, and the world's answers were no longer at your fingertips. How would you survive? For many people, the "dark ages" before the internet are very much a present reality. In her study, Emily Hong looks at the many families in San Francisco's Chinatown who cannot afford internet. She attributes this digital divide to the racialization of Chinatown: over the years it became a place for the poorer Asian population to live, and its residents have still not escaped that stigma. Many of them cannot pay the estimated $32 a month it would cost to have in-home internet access. That may not sound like much to some, but the money adds up, especially in an expensive area like San Francisco.
It doesn't seem ethical that some of the population can afford access to information and some can't. But what can be done about this? This problem spans the United States as a whole, and while it is certainly worse in areas like San Fran's Chinatown, the overall cost of internet access needs to come down before those local problems can be solved.
For comparison, on the East Coast, New York residents pay roughly $55 a month for internet, almost double what people in other large cities such as London or Hong Kong pay. The same article linked in the previous sentence discusses steps to fix this and names lack of competition as a serious problem. Based on my personal experience of being pigeon-holed into buying from a specific provider, I can easily agree. More market competition could create more competitive prices and help bring down the overall cost of internet access.
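Just to make the "adds up" point concrete, here is a quick back-of-the-envelope calculation using only the monthly figures quoted above (a minimal Python sketch; the labels are mine, and no numbers beyond the $32 and $55 already mentioned are assumed):

```python
# Annualize the monthly internet prices quoted above -- the "it adds up" math.
prices_per_month = {"SF Chinatown (estimated)": 32, "New York (roughly)": 55}

for city, monthly in prices_per_month.items():
    print(f"{city}: ${monthly}/month works out to ${monthly * 12}/year")

# SF Chinatown (estimated): $32/month works out to $384/year
# New York (roughly): $55/month works out to $660/year
```

For a family already stretched thin in one of the country's most expensive cities, a few hundred dollars a year is not a trivial line item.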
While bringing down the price wouldn't necessarily solve every problem related to internet access, it would be a start. It goes without saying that these companies overcharging for their service is extremely unethical, and the direct result is communities like Chinatown, where residents cannot afford service. Unfortunately, these companies don't seem to be moving toward a more ethical business outlook anytime soon.
Sunday, March 20, 2016
Life Hacks
Want to know a good life hack? Hire a hacker. That might seem like odd advice, but hiring a hacker for your business could actually end up benefiting you. I'm not saying you should go out and hire Anonymous to hack competitors' websites; I'm saying you should run test attacks to find out whether or not your own website is hackable.
We all know hacking is unethical. There's no way around that. But the thing is, the vast majority of people live ethically gray lives, and many of them are completely unethical. All that to say, there are Bad People in this world. And these people will try to hack your website, business, and customer information if you are successful and the information is accessible. So why not beat them to it?
This form of hacking is known as constructive hacking: using hacking to achieve a collective goal, for good of course (King). In this case, it is helpful to engage in constructive hacking as a way to preemptively problem-solve and see things from a hacker's perspective.
After Target was famously hacked in 2013, it brought in "security experts" at Verizon to "probe its network for weaknesses" (Krebs). The article doesn't say so specifically, but it doesn't take much logic to reason that these experts probed for weaknesses by simulating attacks the way real hackers would. In fact, Target recently opened a "Cyber Fusion Center," designed to keep its systems secure. The center employs a group called the "Red Team" whose entire job is to attempt to hack into Target's systems. Seriously.
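To give a tiny, concrete flavor of what "probing for weaknesses" can look like, here is a minimal sketch of an automated self-check a security team might run against its own site: it requests a page and flags commonly recommended security headers that are missing. The URL is a placeholder and the script is my own illustration, not anything Target or Verizon actually used; it assumes Python with the requests library installed.

```python
import requests

# Headers that security guides commonly recommend; missing ones are worth a look.
RECOMMENDED_HEADERS = [
    "Strict-Transport-Security",   # forces HTTPS on repeat visits
    "Content-Security-Policy",     # limits where scripts can load from
    "X-Content-Type-Options",      # blocks MIME-type sniffing
    "X-Frame-Options",             # guards against clickjacking
]

def check_security_headers(url):
    """Return the recommended headers that the site does not send."""
    response = requests.get(url, timeout=10)
    return [h for h in RECOMMENDED_HEADERS if h not in response.headers]

if __name__ == "__main__":
    missing = check_security_headers("https://example.com")  # placeholder URL
    if missing:
        print("Missing security headers:", ", ".join(missing))
    else:
        print("All recommended headers present.")
```

A check like this barely scratches the surface of what a real red team does, but it captures the mindset: test your own defenses before someone less friendly does it for you.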
Clearly, if Target had had all of this in place before the breach, there is a good chance it wouldn't have happened. Looked at from this perspective, hiring these hacking teams seems like an ethical necessity. If you don't, you're leaving innocent customers exposed to malevolent hackers who will try to steal their information. Not taking steps to prevent that is unethical and unacceptable.
So if you want to stay out of hot water and avoid a large-scale data breach, take that preemptive security step and hire a hacker. You won't regret it.
Friday, March 18, 2016
They See Me Trollin'
The internet is a weird place. There's no argument there. But when you take a place like the internet, where there are virtually no rules and the community is global, a kind of culture inevitably grows. And just like cultures everywhere, it has good and bad aspects. One of the weirder, arguably bad aspects is "trolling," a phenomenon where posters ("trolls") spam people's internet homes with (usually) rude or offensive content. Is this ethical? Should it be stopped?
Here's the thing. In America, freedom of speech is constitutionally protected. Other countries, not so much. So how does that translate to the internet? It's hard to say. On one hand, it's easy to argue that trolling, while not necessarily ethical, is legal in America thanks to free speech, as long as the messages don't cross into true threats or targeted harassment. But the internet is global. Other countries can and do use the internet, and their people troll too.
So the only option, then, is to approach this issue country by country, which is certainly not the most efficient approach (we could always establish the internet as its own entity with its own set of globally enforced rules, but that would probably be even harder). Some countries, like New Zealand, have already recognized that trolling is a problem and taken steps to fix it. However, as this article points out, there are some very large problems with that law: it defines punishable speech as speech that is "indecent," "false," or "used to harass an individual." As the article also says, that is very broad. It could technically encompass political cartoonists. It's also unclear whether the law applies only to the people posting such content or to those responding to it as well.
Under this law, would re-posting this image be punishable in New Zealand? Many people found it indecent and false. The law is simply too broad to do any real good, and could end up infringing on the critical speech needed in large spaces like the internet.
Controlling trolling on the internet is one of the hardest problems to solve in today's world. It's obvious that many trolls, such as those mentioned in this article, are causing real ethical harm. They shouldn't be doing what they're doing. But unfortunately, short of giving the internet its own division of millions of people whose job is to screen the web for trolls, nothing can be done without infringing on the critical speech of millions of innocent parties. This is one of those times where people just have to block and ignore, as unfair as that may be. But, hey, it's the internet. Anything goes there.
Monday, March 14, 2016
Virtual Reality: The Future
Virtual reality is a phrase out of sci-fi. It evokes images of futuristic goggles that hold their own screen and create a new reality for the wearer. But virtual reality no longer exists only in that form. Today it also takes place in chat rooms and other online spaces where users can manipulate the world around them to create one of their own. But is this a good thing?
As Julian Dibbell points out in his article "A Rape in Cyberspace," these new worlds are not regulated. There is no official rulebook for how circumstances should be handled, and this can obviously cause some very dangerous problems. Virtual reality can't be stopped though; at this point it has to be embraced. And there needs to be a set of rules in place for these "worlds" if virtual reality is to remain a positive thing.
The article showed that these communities are fairly adept at self-governing, but that does not mean the governing is effective, as Mr. Bungle's return under a different username after his banishment proves. Something has to change for virtual realities to become truly safe.
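To see why banishment keyed to a username fails so easily, here is a toy sketch (my own illustration, not how LambdaMOO's tools actually worked; the account names are made up):

```python
# A toy model of username-only banishment, the kind of governance the post describes failing.
banned_usernames = {"Mr_Bungle"}

def can_join(username):
    """A ban list keyed only by a display name is trivially bypassed."""
    return username not in banned_usernames

print(can_join("Mr_Bungle"))    # False: the banned name is kept out
print(can_join("New_Account"))  # True: the same person, back under a new name
```

Any governance that only tracks a display name is governing the name, not the person behind it, which is exactly the gap Mr. Bungle exploited.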
Virtual reality is growing, gaining momentum, and it's not slowing down any time soon. It's a world that becomes more real with each passing day, and as users get drawn further in, the lines between "real" reality and virtual reality will blur. To protect those users, there need to be broader rules governing virtual reality and keeping them safe.
I know the problem is nearly impossible to solve completely, but improvements can be made, and they would help the general populace embrace virtual reality and use it more, for more purposes. Eventually, if virtual reality can become a safe place, we could be living in the world of Phil of the Future, with virtual reality a commonplace occurrence. That is the world I hope for.