Monday, April 11, 2016
Trolling
Trolling
is the deliberate use of technology to harm others, typically through hateful or manipulative words. Coming in many forms, trolling became
widespread with the rise of social media and the inclusion of comment sections
on most websites. Another key component of trolling has to do with the fact
that online forums, social media, and comment sections are often anonymous.
Anonymity enables people to say (type, in this case) things which they would
not normally say in real life. YouTube is a famous playground for trolls. Since
YouTube does not require users to supply their real names, people feel free to
say whatever they want. Often one can find hateful remarks and insults in the
comment section for just about any video. These comments are usually
threatening toward the subject(s) of the video, the uploader, or the cause or
idea associated with the video. Sometimes the comments are relatively harmless; other times they amount to full-on harassment, psychological and/or sexual in nature. In the case of GamerGate, these
instances of harassment can destroy careers and ruin lives.
Tech
companies have the obligation to do their utmost to cut down on trolling. One
would think that the simplest method to cut back on trolling would be to remove
anonymity from the internet. However, as Slate
mentioned, anonymity can be a crucial factor online. People who live in
countries without free speech rely on anonymity to express their opinions to
the outside world. Additionally, anonymity enables people to inspire change
without allowing personal biases and prejudices to influence the situation. Although
trolling very often technically falls under the category of “free speech,” it
is harmful to the greater good just as often. As providers of goods and
services, technology companies have at least some responsibility to ensure the
safety and security of their customers. In order to do so, it is important for
companies like Google and Twitter to work to cut down on trolling.
Trolling
of the GamerGate sort is perhaps the worst thing the internet enables us to do (except, possibly, the ability to use Tor to anonymously buy illegal weapons and traffic people). GamerGate and the similar trolling of Robin Williams’s daughter
caused deep psychological damage to those involved. On the other hand, petty
trolling within a YouTube comment section is relatively harmless. For example,
there is one particular USC fan who consistently writes stupid comments under
Notre Dame football highlight videos. This troll’s comments have not ruined
lives, nor have they effected any real change in the world. Usually, one or two
ND fans will simply tell him to stop trolling. For harmless trolling such as
this, the only way to deal with it is to deny the troll the attention they
seek. If no one engages with a harmless troll, said troll will usually go away.
However, if a troll does make comments which cause genuine damage, it seems
logical that some sort of prosecution should occur. When I browse the internet,
I very rarely contribute to forums or comment sections. I am a classic lurker, consuming vast amounts of content without contributing much in return. Aside from the occasional social media post, Reddit post, or Wikipedia
edit, I don’t post online very often. So no, I am not a troll.
Tuesday, April 5, 2016
Artificial Intelligence
Artificial
intelligence is the use of transistors in a microprocessor to mimic the
actions of neurons in a human brain. According to ComputerWorld, “artificial intelligence is a sub-field of computer
science. Its goal is to enable the development of computers that are able to do
things normally done by people -- in particular, things associated with people
acting intelligently… any program can be considered AI if it does something
that we would normally think of as intelligent in humans.” Over time, as the
concept of artificial intelligence has matured, several sub-categories of AI
have developed. These include general and narrow AI, and within each of those,
strong AI, weak AI, and hybrid AI.
General artificial intelligence systems are those
which are intended to perfectly and completely simulate human reasoning on any
particular topic or task. Think “JARVIS” from the Iron Man movies or “HAL” from 2001:
A Space Odyssey. Narrow artificial intelligence systems include those which are designed to intelligently and efficiently carry out a specific task or train of reasoning. Such systems include Google’s AlphaGo and IBM’s Deep Blue, both of
which were designed to carry out specific tasks (in both cases, board games)
very well. Each form of AI can be implemented through strong, weak, and hybrid
methods. Strong AI is a system designed to perfectly mimic the firing of
neurons in the brain. A strong AI system, if one is ever built, would theoretically be a perfect replica of a human brain. Weak AI is a system designed simply to get the task done, regardless of whether a human-style pattern of
reasoning is used. In between these two forms is hybrid AI, where the exact
methods of human reasoning inspire but do not totally inform the methods of reasoning
used by the computer.
AlphaGo, Deep Blue, and Watson are all proof of the
potential AI has to become a permanent fixture of the world of the future.
AlphaGo and Deep Blue are very effective implementations of narrow artificial
intelligence. As The Atlantic points
out, AlphaGo is able to “improve—and it is always improving, playing itself
millions of times, incrementally revising its algorithms based on which
sequences of play result in a higher win percentage.” Because AlphaGo is
able to constantly improve its own algorithms, it is intelligent in a way that
a static computer program could never be. By continually improving itself, it
mimics very well the way in which humans practice sports and study for tests in
an effort to improve their own algorithms. Watson is the first impressive
implementation of general hybrid AI. While it does not come close to the level
of JARVIS or HAL, it can perform a wide variety of logical and intuitive tasks
very well. General artificial intelligence systems are currently very good at
logic and computation. The key breakthroughs will come when such systems
acquire intuition, a sense of morality, and the desire for self-preservation
(the scary one!). Once general AI takes on these characteristics, it will be
able to rival the power of the human brain.
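To make the self-play idea concrete, here is a toy sketch I put together in Python. It is nothing like AlphaGo's actual machinery of neural networks and tree search; it only illustrates the feedback loop The Atlantic describes, using a simple "race to 10" game and a hand-rolled weight table of my own invention: the program plays itself, reinforces the moves that led to wins, and fades the moves that led to losses.

```python
import random
from collections import defaultdict

# Toy "race to 10" game: players alternate adding 1, 2, or 3 to a running
# total; whoever brings the total to 10 or beyond wins.
TARGET = 10
ACTIONS = [1, 2, 3]

# Policy: for each running total, one weight per action (higher = chosen more often).
weights = defaultdict(lambda: {a: 1.0 for a in ACTIONS})

def choose(total):
    """Sample an action in proportion to its current weight at this total."""
    w = weights[total]
    return random.choices(ACTIONS, weights=[w[a] for a in ACTIONS])[0]

def self_play_game():
    """Play the policy against itself; return each side's (total, action) moves and the winner."""
    total, player = 0, 0
    moves = {0: [], 1: []}
    while True:
        action = choose(total)
        moves[player].append((total, action))
        total += action
        if total >= TARGET:
            return moves, player  # the player who just moved wins
        player = 1 - player

def train(games=20000, step=0.1):
    """Self-play loop: reinforce the winner's choices, fade the loser's."""
    for _ in range(games):
        moves, winner = self_play_game()
        for total, action in moves[winner]:
            weights[total][action] += step
        for total, action in moves[1 - winner]:
            weights[total][action] = max(0.01, weights[total][action] - step)

if __name__ == "__main__":
    train()
    # With enough games, the policy tends to prefer +2 from totals 0 and 4,
    # the textbook winning moves in this game (they leave the opponent at 2 or 6).
    for total in (0, 4):
        best = max(weights[total], key=weights[total].get)
        print(f"from total {total}, preferred move: +{best}")
```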
The Turing Test is a good indicator for narrow AI
systems, where the test can be adapted rather well to the specific task the AI
system is meant to carry out. However, when it comes to general AI, the test
doesn’t hold up as well simply because it cannot test enough variables to
accurately determine intelligence. Since perfect general AI will work just like
a human mind, it would follow that general AI should be able to pass a Turing Test every time. Once we reach the point where biological and electronic
computers become indistinguishable, or perhaps even inseparable, we will have
come to the singularity. Ethically, there is no problem with the singularity in
general. On an individual basis, certain computers are bound to act
unethically, just as certain people are bound to act unethically. Such a
dynamic is necessary for the proper functioning of society.
Tuesday, March 29, 2016
Net Neutrality
As summarized by The Verge, net neutrality is the idea
that internet service providers (ISPs) cannot charge customers different rates
to receive different network performance and priority. For example, AT&T cannot charge Netflix higher rates because Netflix pushes much greater amounts of data through AT&T’s (and others’) networks. The Verge explains:
“The order focuses on
three specific rules for internet service: no blocking, no throttling, and no
paid prioritization. ‘A person engaged in the provision of broadband internet
access service, insofar as such person is so engaged, shall not impair or degrade
lawful internet traffic on the basis of internet content, application, or
service, or use of a non-harmful device, subject to reasonable network
management.’”
When the FCC mentions “paid prioritization,” it is referring to the practice of configuring the network to favor certain
traffic based on how much was paid for that traffic or how much its speedy
transmission might benefit the network provider. According to the Electronic
Frontier Foundation, “the FCC produced rules that we could support… We want the
internet to live up to its promise, fostering innovation, creativity, and
freedom. We don’t want regulations that will turn ISPs into gatekeepers, making
special deals… and inhibiting new competition, innovation, and expression.”
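To see what "paid prioritization" means mechanically, here is a deliberately simplified sketch; the sender names and fee figures are invented for illustration and do not describe any real ISP's configuration. A neutral scheduler forwards packets in arrival order, while a pay-to-play scheduler lets the highest-paying sender's packets jump the queue.

```python
import heapq
from collections import deque

# Hypothetical per-sender prioritization fees (illustrative numbers only).
FEES = {"video-giant.example": 100, "startup.example": 0, "blog.example": 0}

# Packets in arrival order: (sender, payload).
packets = [("startup.example", "p1"), ("video-giant.example", "p2"),
           ("blog.example", "p3"), ("video-giant.example", "p4")]

def neutral_order(packets):
    """Net-neutral scheduling: strictly first-come, first-served."""
    queue = deque(packets)
    return [queue.popleft() for _ in range(len(queue))]

def paid_priority_order(packets):
    """Paid prioritization: higher-paying senders' packets jump the line."""
    heap = []
    for arrival, (sender, payload) in enumerate(packets):
        # heapq is a min-heap, so negate the fee; the arrival index breaks ties.
        heapq.heappush(heap, (-FEES.get(sender, 0), arrival, sender, payload))
    ordered = []
    while heap:
        _, _, sender, payload = heapq.heappop(heap)
        ordered.append((sender, payload))
    return ordered

print("neutral:", neutral_order(packets))        # p1, p2, p3, p4 -- arrival order
print("paid:   ", paid_priority_order(packets))  # p2, p4 first -- the payer wins
```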
Basically,
The Verge, the EFF, the Reddit community,
and millions of other entities and individuals argue for net neutrality because
they believe that the internet should be an open and unencumbered medium for
the transmission of ideas, knowledge, and entertainment. Supporters of net
neutrality suggest that prioritization, throttling, and suppression of certain
packets traveling through a network will result in the loss of the freedom of
ideas and data which is so critical to their vision of the internet. Detractors
of net neutrality, including the woefully misguided Forbes contributor Jeffrey Dorfman, suggest that net neutrality
flies in the face of free-market capitalist economics. Dorfman gives this
analogy: “This is a bad idea for the same reason that only having vanilla ice
cream for sale is a bad idea: some people want, and are willing to pay for,
something different.” Although I too am a staunch supporter of the free market,
Dorfman’s argument makes absolutely no sense to me. Just because content
creators and consumers might be willing to pay for better and faster transmission
of data doesn’t mean that ISPs should offer it as a feature.
It has become
increasingly clear in the last decade that computing is becoming a utility
commodity, much the same as electric power, natural gas, or water. Electric companies aren’t allowed to charge certain customers a higher rate per kilowatt-hour because they draw more current from the grid, nor are they allowed to charge x dollars for 120-volt service and 2x dollars for 240-volt service. Rather, electric companies simply charge a single per-unit rate, and customers pay for however much power they draw from the grid. Users of electricity know that as long as they pay this one rate, they will receive the electric power they need. Similarly, water
providers are not allowed to charge higher rates for “more pure water.” This
would be an abomination as it would directly and negatively impact the health
of people with fewer means. If net neutrality didn’t exist, the sound operation
of the economy would be in jeopardy. Modern free-market economics assumes that
consumers behave at least somewhat rationally. Central to consumers’
rationality is their reasonable access to all potential information before
making consumption decisions. Net neutrality protects reasonable access to all
potential information. Clearly, since computing is becoming a public utility and
since it allows sound operation of the economy, net neutrality is necessary.
The internet is indeed a public service and fair access should be a basic
right.
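As a concrete version of the metered-utility comparison (the rates below are made-up round numbers, not real tariffs), the billing rule in both cases is just one rate times usage, blind to what the electricity powers or whose packets the gigabytes carry:

```python
# Made-up illustrative rates; the point is the shape of the rule, not the numbers.
POWER_RATE_PER_KWH = 0.12   # dollars per kilowatt-hour, whatever the power runs
DATA_RATE_PER_GB = 0.50     # dollars per gigabyte, whoever sent the packets

def utility_bill(kwh_used):
    """Electric-style metered billing: one rate times usage, nothing else."""
    return POWER_RATE_PER_KWH * kwh_used

def neutral_isp_bill(gb_used):
    """The analogous neutral ISP bill: one rate times usage, no per-site surcharges."""
    return DATA_RATE_PER_GB * gb_used

print(utility_bill(600))      # 72.0  -- a heavy user simply pays for more kilowatt-hours
print(neutral_isp_bill(300))  # 150.0 -- a heavy streamer simply pays for more gigabytes
```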
Wednesday, March 23, 2016
Project #3
Click here to view a letter to Congress regarding encryption.
Reflections:
Is encryption a fundamental right? Should citizens of the US be allowed to have a technology that completely locks out the government?
Insofar as privacy is a fundamental right, encryption is also a right. As I pointed out in my letter to Congress, encryption is both a human and a legal right. It's easy to argue from the Fifth Amendment that encryption is a legal right. It's a bit more difficult to prove that encryption is a human right; the proof lies in the fact that a lack of encryption would very likely lead to human suffering, as I explain in the letter. Anything which, when lacking, leads to human suffering is a human right. Consequently, US citizens should be guaranteed encryption. As the Declaration of Independence stated, "all... are endowed... with certain unalienable rights, that among these are life, liberty, and the pursuit of happiness." Removal of encryption would go against all three supposedly-unalienable rights, since unprotected data could lead to loss of life, suspension by law enforcement of certain freedoms, and financial or other personal loss: a removal of happiness.
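To make "completely locks out the government" concrete: with modern symmetric encryption, ciphertext is useless to anyone who does not hold the key, whether that is a thief, a manufacturer, or an agency with a court order. Here is a minimal sketch using the third-party Python cryptography package's Fernet recipe (my choice of tool for illustration, not something from the letter):

```python
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet, InvalidToken

# The key stays with the user alone.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"bank records, trade secrets, medical history")

# The key holder recovers the data intact.
print(Fernet(key).decrypt(ciphertext))

# Anyone holding only the ciphertext and a different key gets nothing but an error.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("wrong key: the data stays locked")
```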
How important of an issue is encryption to you? Does it affect who you support politically? financially? socially? Should it?
Encryption is important to me not from an ideological standpoint, but from a legal and logical perspective. The U.S. Constitution very clearly grants American citizens various rights, the maintenance of which, in the modern digital age, necessitates encryption. Politicians who are anti-encryption will generally not receive my support in the future, as encryption will be central to my career in the finance industry where secure data and trade secrets are very important. It seems reasonable to expect politicians to support encryption since so many people's lives and careers depend on it, as I've explained above and in the letter.
In the struggle between national security and personal privacy, who will win? Are you resigned to a particular future or will you fight for it?
It's unfortunate that the 21st century has been defined by issues of "national security." Regrettably, the political climate is one in which it is easier to see politicians moving away from encryption rather than towards it. I wouldn't be surprised to see a bill not unlike the fictitious one I laid out appear within the next few years. The question will be whether my predictions of significant personal and financial loss due to the lack of encryption will actually come true. If they do, it will be incumbent upon politicians to reinstate encryption immediately. Ideally, however, politicians will recognize that it's very close to, if not actually, illegal to remove encryption from consumer and corporate electronics in the first place. I'd be willing to fight for a world which includes encryption, and I'm sure my future employers will be willing to bring their resources to bear in the fight as well.
Monday, March 21, 2016
The DMCA and Circumvention
In 1998 when Bill
Clinton signed the Digital Millennium Copyright Act into law, he both created
and destroyed critical features of the internet. The Act’s safe-harbor provisions
enabled social media, blogs, and other crowd-sourced websites to flourish. At
the same time, news outlets and internet advocates including Slate, the Electronic Frontier Foundation
(EFF), Wired, and The New York Times claim that the law’s
anti-circumvention statutes have done serious harm to the open flow of
information, ideas, and creativity the internet originally stood to offer. In
particular, the DMCA has this to say about circumvention: “no person shall
circumvent a technological measure that effectively controls access to a work
protected under [a given] title.” (per Wired)
In English, this means that no individual hacker, company, or consumer may
attempt to break into protected media for (almost) any reason. This provision
was originally included in the law to keep DVDs from being copied into bootleg versions. Many people take umbrage at the statute, for varying reasons. The Atlantic argues that the law “threatens
to make archivists criminals if they try to preserve our society’s artifacts
for future generations” while the EFF rightly points out that the law makes it “legally
risky” to engage in reverse engineering of copyrighted software.
The computer
science field, both academic and industrial, finds it particularly difficult to
come to grips with the dubious legal status of reverse engineering. Except for the purpose of determining interoperability (and even that can be questionable), reverse engineering is made illegal by the DMCA. Furthermore, the law has
enabled companies to place digital locks on their code, preventing external
tampering. In my opinion, the concept of software licenses and DRM schemes is
absurd. If developers and filmmakers expect their code and films to be treated
by the judicial system in the same manner as books or physical artwork, they must provide said code and films to the public in the same manner. Books do not contain DRM software, nor are they procurable only under a license and “terms of service” agreement. Paintings do not require the signing of a legal document just to
complete the purchase transaction. Yet, paintings and books still receive copyright
protection under the law. Developers and filmmakers must cease using DRM
software and forcing customers into strange legal covenants just to acquire the
software or other piece of media. Honestly, DRM is just companies being lazy
and unwilling to face the open market. When someone purchases a book, he or she
also purchases the rights to do whatever he or she wants with that specific
copy: highlight in it, rip pages, read it to a child, or even burn it. The only
thing a person cannot do is reprint the book and sell it as their own. Similar
practices should apply to software and movies. However, in this case, the
rights which should come with purchase would include reverse engineering, so long as it is not done directly for profit, and translation into new formats (e.g., burning mixtapes from iTunes purchases). Generally, software and media producers should not be
allowed to remove the free nature of both people and markets.
In the same
spirit, it should be considered ethical for people to build workarounds for DRM
software, so long as they have no profiteering or malicious intent in doing so.
If software and other digital media were to be sold in truly discrete,
license-free forms, the ethics of reverse engineering, DRM circumvention, and phone
unlocking would become clear: let the property owner do with his or her
property as he or she pleases. Before these ethical questions can truly be resolved, however, property and copyright laws pertaining to digital media must be completely rewritten, and creators of said media must be forced to face competition.
Tuesday, March 1, 2016
Online Advertising
Without going into too much detail,
I must admit that online advertising is what pays my college tuition, in an
indirect sort of way. Consequently, my ethical response to online advertising
is likely a bit more biased toward acceptance than most other people’s. At its
core, online advertising is the result of companies cleverly making use of the
data available to them. On its face, such behavior is in no way ethically reprehensible. The standard methods companies use to gather their data (page-view tracking, purchase history, social media analysis, and the like) are all legitimate (this post will refer to them as reasonably-public) methods because
they gather data which the subject knowingly and willingly makes public. Any
post on social media should, in my opinion, be fair game for usage by a third
party. Additionally, page views and purchases are conscious decisions which the subject generally knows have the potential to be observed
by a third party and thus become reasonably-public data. When the subject makes
these decisions, it is on her to make her peace with that fact. (I would,
however, like to see a beefed-up Incognito Mode become a better option for
those who truly cannot fathom the idea of their browsing being observed.)
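As an illustration of what "cleverly making use of the data available to them" can look like at its simplest, here is a hypothetical sketch (invented paths, categories, and ad copy, not any real ad platform's logic): count a visitor's reasonably-public page views by category and serve an ad from the category they browse most.

```python
from collections import Counter

# Hypothetical browsing history gathered from page-view tracking.
page_views = [
    "/sports/notre-dame-highlights",
    "/sports/spring-game-recap",
    "/tech/new-phone-review",
    "/sports/recruiting-news",
]

# Hypothetical ad inventory keyed by content category.
ads = {"sports": "Season tickets on sale now!",
       "tech": "Preorder the latest phone."}

def category(path):
    """Treat the first path segment as the page's content category."""
    return path.strip("/").split("/")[0]

def pick_ad(history):
    """Serve an ad from whichever category the visitor views most often."""
    counts = Counter(category(p) for p in history)
    top_category, _ = counts.most_common(1)[0]
    return ads.get(top_category, "Generic ad")

print(pick_ad(page_views))   # -> "Season tickets on sale now!"
```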
The
New York Times and The Guardian
both chronicle cases of legitimate data collection. Target makes use of
customers’ conscious and public decisions to great effect. Facebook collects social media data which is, by definition, public. (Social media? Come on…) Even the cases where lenders and recruiters collect data on their customers, as decried by the Kaspersky blog, are legitimate. In a society in which every company has the obligation to perform
well for customers and shareholders alike, all potential competitive advantages
which can be legitimately and legally acquired should be considered and used.
However, when data to be used for
advertising is acquired illegally—whether through hacking, intimidation, or
bribery—the data itself and the resulting analytics and company actions become ethically
disagreeable. Illegally or illegitimately acquired data not only gives the
company in question an unfair advantage in the marketplace, but it also puts
the customer at a disadvantage. A person whose not-reasonably-public decisions,
identity, and preferences are compromised must now work hard to (if possible) restore
his or her identity and good reputation. Nor should that person be expected to
be the guardian of his or her own not-reasonably-public data. That responsibility
lies with the companies who can mobilize large IT departments to protect
financial secrets, matters of identity, and so forth. Individual people
generally do not have the IT expertise or physical ability to fully protect
their own not-reasonably-public data, and so that charge shifts to the other,
generally more powerful, party.
With the current (and most logical)
precedent of companies each holding and owning the data they collect on their customers,
it is incumbent on those companies to protect the data from hacking and leaking
for two reasons. First, hacking or leaking of not-reasonably-public data
breaches the necessary relationship built on trust between the company and the
customer as described in the previous paragraph. Second, it removes the
marketplace advantage the company might have had by owning the data. Within
this second point lies my justification for why companies should be allowed to
sell reasonably-public user data. A key component of the modern marketplace
economy is the securitization and distribution of individual bits of data
(stocks, bonds, mortgages, etc.). In my opinion, reasonably-public user data is just
more data ready to be securitized. Therefore, companies should be allowed to
package and sell user-data in a responsible, airtight manner when the purchaser
can prove that it will use the data for legitimate ends. Additionally, if the
government has a very legitimate need
for the data and can provide a warrant or court order, it should be provided with the data (in most cases). Overall, the key principles when dealing with user information and advertising are legitimate collection of reasonably-public data, mindful protection of that data, and sound market practices in its sale and distribution.