Thursday, May 5, 2016

Click here to listen to my final podcast discussing my Notre Dame memories.

Monday, April 18, 2016

Computer Science Education

Let me make my position clear from the start: coding should not be a requirement in any school. However, let me also be clear that I strongly believe coding classes should be available at most (ideally all) schools. Although knowledge of a language like Java or Python is a very powerful tool, it pales in comparison to knowledge of mathematics or English (sorry, CS majors!). While math and English are used in every facet of life and in every career, from the most basic job on up, computer coding is a specific skill necessary for a relatively small number of jobs. The point of general education at the K-12 level is to prepare students for life as successful human beings, not for life as successful computer coders. Indeed, even though coding and computer science skills can be valuable for all students, they simply won't be necessary for most students later in life. The current class of computer science majors, 79 people in all, represents just 3.5% of the total Notre Dame Class of 2016. Even more telling is the fact that although the number of devices running computer code increases rapidly every year, the number of people needed to write that code will not increase nearly as fast. The demand for software professionals simply will not grow to the point where every child should be encouraged to consider going into computer science or software development.
With all that said, it would be good to see computer science and coding classes available in every high school. As we've discovered in other readings, too many disadvantaged students are never even given the opportunity to code. Consequently, the professional ranks of software developers skew too heavily toward white males, foreign nationals, and people of East Asian descent. The key to solving this diversity crisis lies chiefly in giving a wider variety of students experience with coding at the high school level so that they know what they're getting into before choosing to become computer science majors. Students who are given the opportunity (not forced, let's be clear) to learn even just a bit of C or Java before college are far more likely to choose coding as a way of life, both in college and afterward. At the same time, students who are given such an opportunity are much less likely to foolishly choose computer science as a major, only to drop out after a year or two. So, it's very important to give students the chance to learn some coding at an age when they can cognitively handle it but are not yet committed to doing it for the foreseeable future.

Although the readings give some good reasons why computer science classes should be a requirement, the truth is that such a requirement does not reflect the reality on the ground. President Obama's trademark idealism and optimism shine through in his push for widespread (and probably required) computer science education, but a mandate simply is not necessary. Rather, it's important to make computer science education an option at all high schools, especially ones that serve typically disadvantaged students. Only when students are given the choice to learn some coding before college will they truly be able to understand what might lie ahead within the nation's computer science departments and, even more importantly, within the world's tech companies. Furthermore, once students know what they might be able to make of their lives by choosing to learn to code, the diversity crisis within the tech industry can begin to be solved. It's clear that such a choice has to be given to every student during high school.

Monday, April 11, 2016

Trolling

Trolling is the deliberate use of technology to harm others, typically through the use of hateful or manipulative words. Coming in many forms, trolling became widespread with the rise of social media and the inclusion of comment sections on most websites. Another key component of trolling has to do with the fact that online forums, social media, and comment sections are often anonymous. Anonymity enables people to say (type, in this case) things which they would not normally say in real life. YouTube is a famous playground for trolls. Since YouTube does not require users to supply their real names, people feel free to say whatever they want. Often one can find hateful remarks and insults in the comment section for just about any video. These comments are usually threatening toward the subject(s) of the video, the uploader, or the cause or idea associated with the video. Sometimes the comments are relatively harmless; other times they amount to full-on harassment, psychological and/or sexual in nature. In the case of GamerGate, these instances of harassment can destroy careers and ruin lives.
                Tech companies have the obligation to do their utmost to cut down on trolling. One would think that the simplest method to cut back on trolling would be to remove anonymity from the internet. However, as Slate mentioned, anonymity can be a crucial factor online. People who live in countries without free speech rely on anonymity to express their opinions to the outside world. Additionally, anonymity enables people to inspire change without allowing personal biases and prejudices to influence the situation. Although trolling very often technically falls under the category of “free speech,” it is harmful to the greater good just as often. As providers of goods and services, technology companies have at least some responsibility to ensure the safety and security of their customers. In order to do so, it is important for companies like Google and Twitter to work to cut down on trolling.

Trolling of the GamerGate sort is perhaps the worst thing the internet enables us to do (except, perhaps, the ability to use Tor to anonymously buy illegal weapons and traffic people). GamerGate and similar trolling of Robin Williams's daughter caused deep psychological damage to those involved. On the other hand, petty trolling within a YouTube comment section is relatively harmless. For example, there is one particular USC fan who consistently writes stupid comments under Notre Dame football highlight videos. This troll's comments have not ruined lives, nor have they effected any real change in the world. Usually, one or two ND fans will simply tell him to stop trolling. The only way to deal with harmless trolling such as this is to deny the troll the attention he seeks. If no one engages with a harmless troll, said troll will usually go away. However, if a troll does make comments which cause genuine damage, it seems logical that some sort of prosecution should occur. When I browse the internet, I very rarely contribute to forums or comment sections. I am a classic lurker, consuming vast amounts of content without actually contributing much to it. Aside from the occasional social media post, Reddit post, or Wikipedia edit, I don't post online very often. So no, I am not a troll.

Tuesday, April 5, 2016

Artificial Intelligence

Artificial intelligence is the use of transistors in a microprocessor to mimic the actions of neurons in a human brain. According to ComputerWorld, "artificial intelligence is a sub-field of computer science. Its goal is to enable the development of computers that are able to do things normally done by people -- in particular, things associated with people acting intelligently… any program can be considered AI if it does something that we would normally think of as intelligent in humans." Over time, as the concept of artificial intelligence has matured, several sub-categories of AI have developed. These include general and narrow AI, and within each of those, strong, weak, and hybrid AI.
General artificial intelligence systems are those which are intended to perfectly and completely simulate human reasoning on any particular topic or task. Think "JARVIS" from the Iron Man movies or "HAL" from 2001: A Space Odyssey. Narrow artificial intelligence systems are those which are designed to intelligently and efficiently carry out a specific task or train of reasoning. Such systems include Google's AlphaGo and IBM's Deep Blue, both of which were designed to carry out specific tasks (in both cases, board games) very well. Each form of AI can be implemented through strong, weak, and hybrid methods. Strong AI is a system designed to perfectly mimic the firing of neurons in the brain; when the first strong AI system is built, it will theoretically be a perfect replica of a human brain. Weak AI is a system designed simply to get the task done, regardless of whether a human-style pattern of reasoning is used. In between these two forms is hybrid AI, where the exact methods of human reasoning inspire but do not totally inform the methods of reasoning used by the computer.
AlphaGo, Deep Blue, and Watson are all proof of the potential AI has to become a permanent fixture of the world of the future. AlphaGo and Deep Blue are very effective implementations of narrow artificial intelligence. As The Atlantic points out, AlphaGo is able to "improve—and it is always improving, playing itself millions of times, incrementally revising its algorithms based on which sequences of play result in a higher win percentage." Because AlphaGo is able to constantly improve its own algorithms, it is intelligent in a way that a static computer program could never be. By continually improving itself, it mimics very well the way in which humans practice sports and study for tests in an effort to improve their own algorithms. Watson is the first impressive implementation of general hybrid AI. While it does not come close to the level of JARVIS or HAL, it can perform a wide variety of logical and intuitive tasks very well. General artificial intelligence systems are currently very good at logic and computation. The key breakthroughs will come when such systems acquire intuition, a sense of morality, and the desire for self-preservation (the scary one!). Once general AI takes on these characteristics, it will be able to rival the power of the human brain.
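To make that self-improvement loop a bit more concrete, here is a minimal, hypothetical sketch in Python (my own toy, with made-up names like TARGET and play_one_game; it has nothing to do with AlphaGo's actual code). Two copies of the same policy play a trivial counting game against each other, and whichever moves show up in winning games get their weights nudged upward:

import random
from collections import defaultdict

TARGET = 10                                  # first player to reach 10 wins
weights = defaultdict(lambda: [1.0, 1.0])    # state -> weights for adding 1 or 2

def choose(state):
    # Pick a move in proportion to its current weight.
    w1, w2 = weights[state]
    return 1 if random.random() < w1 / (w1 + w2) else 2

def play_one_game():
    state, player, history = 0, 0, []
    while state < TARGET:
        move = choose(state)
        history.append((player, state, move))
        state += move
        player = 1 - player
    return 1 - player, history               # the player who just moved wins

def train(games=20000, lr=0.1):
    for _ in range(games):
        winner, history = play_one_game()
        for player, state, move in history:
            if player == winner:             # reinforce only the winner's choices
                weights[state][move - 1] += lr

train()
# The policy should drift toward moves that land on 1, 4, or 7 (or win outright),
# which happen to be the provably optimal moves for this toy game.
print({s: round(w[0] / sum(w), 2) for s, w in sorted(weights.items())})

Even this crude version slowly converges toward optimal play, which is the same basic feedback loop the article describes, scaled down enormously.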
The Turing Test is a good indicator for narrow AI systems, where the test can be adapted rather well to the specific task the AI system is meant to carry out. However, when it comes to general AI, the test doesn't hold up as well simply because it cannot test enough variables to accurately determine intelligence. Since perfect general AI will work just like a human mind, it would follow that general AI should be able to pass a Turing Test every time. Once we reach the point where biological and electronic computers become indistinguishable, or perhaps even inseparable, we will have arrived at the singularity. Ethically, there is no problem with the singularity in general. On an individual basis, certain computers are bound to act unethically, just as certain people are bound to act unethically. Such a dynamic is necessary for the proper functioning of society.

Tuesday, March 29, 2016

Net Neutrality

As summarized by The Verge, net neutrality is the idea that internet service providers (ISPs) cannot charge customers different rates to receive different network performance and priority. For example, AT&T cannot charge Netflix higher rates because it pushes much greater amounts of data through AT&T's (and others') networks. The Verge explains that:
"The order focuses on three specific rules for internet service: no blocking, no throttling, and no paid prioritization. 'A person engaged in the provision of broadband internet access service, insofar as such person is so engaged, shall not impair or degrade lawful internet traffic on the basis of internet content, application, or service, or use of a non-harmful device, subject to reasonable network management.'"
When the FCC mentions “paid prioritization,” they are referring to the practice of configuring the network to favor certain traffic based on how much was paid for that traffic or how much its speedy transmission might benefit the network provider. According to the Electronic Frontier Foundation, “the FCC produced rules that we could support… We want the internet to live up to its promise, fostering innovation, creativity, and freedom. We don’t want regulations that will turn ISPs into gatekeepers, making special deals… and inhibiting new competition, innovation, and expression.”
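As a purely hypothetical illustration of what paid prioritization means in practice (this is my own toy sketch in Python, not anything from the FCC order or a real ISP's equipment), consider two ways a router might decide which packet to transmit next:

packets = [
    {"src": "paying-video-service", "paid_priority": True},
    {"src": "small-startup-site",   "paid_priority": False},
    {"src": "paying-video-service", "paid_priority": True},
    {"src": "personal-blog",        "paid_priority": False},
]

def neutral_schedule(packets):
    # First come, first served: every packet is treated the same.
    return list(packets)

def paid_priority_schedule(packets):
    # Packets whose senders paid the ISP jump to the front of the line.
    return sorted(packets, key=lambda p: not p["paid_priority"])

print([p["src"] for p in neutral_schedule(packets)])
print([p["src"] for p in paid_priority_schedule(packets)])

Under the neutral scheduler every packet waits its turn; under the paid scheduler, whoever pays the ISP jumps the line, which is exactly the arrangement the order forbids.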
                Basically, The Verge, the EFF, the Reddit community, and millions of other entities and individuals argue for net neutrality because they believe that the internet should be an open and unencumbered medium for the transmission of ideas, knowledge, and entertainment. Supporters of net neutrality suggest that prioritization, throttling, and suppression of certain packets traveling through a network will result in the loss of the freedom of ideas and data which is so critical to their vision of the internet. Detractors of net neutrality, including the woefully misguided Forbes contributor Jeffrey Dorfman, suggest that net neutrality flies in the face of free-market capitalist economics. Dorfman gives this analogy: “This is a bad idea for the same reason that only having vanilla ice cream for sale is a bad idea: some people want, and are willing to pay for, something different.” Although I too am a staunch supporter of the free market, Dorfman’s argument makes absolutely no sense to me. Just because content creators and consumers might be willing to pay for better and faster transmission of data doesn’t mean that ISPs should make this an available feature.

It has become increasingly clear in the last decade that computing is becoming a utility commodity, much the same as electric power, natural gas, or water. Electric companies aren't allowed to charge certain people higher rates because they draw more current from the grid, nor are they allowed to charge x dollars for 120-volt service and 2x dollars for 240-volt service. Rather, electric companies simply charge customers a single rate based on how much power they draw from the grid. Users of electricity know that as long as they pay this one rate, they will receive the electric power they need. Similarly, water providers are not allowed to charge higher rates for "more pure water." This would be an abomination, as it would directly and negatively impact the health of people with fewer means. If net neutrality didn't exist, the sound operation of the economy would be in jeopardy. Modern free-market economics assumes that consumers behave at least somewhat rationally. Central to consumers' rationality is their reasonable access to all potential information before making consumption decisions. Net neutrality protects that reasonable access to all potential information. Clearly, since computing is becoming a public utility and since it allows the sound operation of the economy, net neutrality is necessary. The internet is indeed a public service, and fair access should be a basic right.

Wednesday, March 23, 2016

Project #3

Click here to view a letter to Congress regarding encryption.

Reflections:

Is encryption a fundamental right? Should citizens of the US be allowed to have a technology that completely locks out the government?

Insofar as privacy is a fundamental right, encryption is also a right. As I pointed out in my letter to Congress, encryption is both a human and a legal right. It's easy to demonstrate from the Fifth Amendment that encryption is a legal right. It's a bit more difficult to prove that encryption is a human right. The proof lies in the fact that a lack of encryption would very likely lead to human suffering, as I explain in the letter, and anything which, when lacking, leads to human suffering is a human right. Consequently, US citizens should be guaranteed encryption. As the Declaration of Independence states, "all... are endowed... with certain unalienable rights, that among these are life, liberty, and the pursuit of happiness." Removal of encryption would go against all three supposedly unalienable rights, since unprotected data could lead to loss of life, suspension by law enforcement of certain freedoms, and financial or other personal loss: a removal of happiness.


How important of an issue is encryption to you? Does it affect who you support politically? financially? socially? Should it?

Encryption is important to me not from an ideological standpoint, but from a legal and logical perspective. The U.S. Constitution very clearly grants American citizens various rights, the maintenance of which, in the modern digital age, necessitates encryption. Politicians who are anti-encryption will generally not receive my support in the future, as encryption will be central to my career in the finance industry where secure data and trade secrets are very important. It seems reasonable to expect politicians to support encryption since so many people's lives and careers depend on it, as I've explained above and in the letter.


In the struggle between national security and personal privacy, who will win? Are you resigned to a particular future or will you fight for it?

It's unfortunate that the 21st century has been defined by issues of "national security." Regrettably, the political climate is one in which it is easier to see politicians moving away from encryption rather than towards it. I wouldn't be surprised to see a bill not unlike the fictitious one I laid out appear within the next few years. The question will be whether my predictions of significant personal and financial loss due to the lack of encryption will actually come true. If they do, it will be incumbent upon politicians to reinstate encryption immediately. Ideally, however, politicians will recognize that it's very close to, if not actually, illegal to remove encryption from consumer and corporate electronics in the first place. I'd be willing to fight for a world which includes encryption, and I'm sure my future employers will be willing to bring their resources to bear in the fight as well.  

Monday, March 21, 2016

The DMCA and Circumvention

In 1998, when Bill Clinton signed the Digital Millennium Copyright Act into law, he both created and destroyed critical features of the internet. The Act's safe-harbor provisions enabled social media, blogs, and other crowd-sourced websites to flourish. At the same time, news outlets and internet advocates including Slate, the Electronic Frontier Foundation (EFF), Wired, and The New York Times claim that the law's anti-circumvention statutes have done serious harm to the open flow of information, ideas, and creativity the internet originally stood to offer. In particular, the DMCA has this to say about circumvention: "no person shall circumvent a technological measure that effectively controls access to a work protected under [a given] title." (per Wired) In English, this means that no individual hacker, company, or consumer may attempt to break into protected media for (almost) any reason. This provision was originally included to keep DVDs from being copied into bootleg versions. Many people take umbrage at the statute, for varying reasons. The Atlantic argues that the law "threatens to make archivists criminals if they try to preserve our society's artifacts for future generations," while the EFF rightly points out that the law makes it "legally risky" to engage in reverse engineering of copyrighted software.
The computer science field, both academic and industrial, finds it particularly difficult to come to grips with the dubious legal status of reverse engineering. Except for purposes of determining interoperability (and even that can be questionable), reverse engineering is made illegal by the DMCA. Furthermore, the law has enabled companies to place digital locks on their code, preventing external tampering. In my opinion, the concept of software licenses and DRM schemes is absurd. If developers and filmmakers expect their code and films to be treated by the judicial system in the same manner as books or physical artwork, they should provide said code and films to the public in the same manner. Books do not contain DRM software, nor are they only procurable under a license and "terms of service" agreement. Paintings do not require the signing of a legal document just to complete the purchase transaction. Yet paintings and books still receive copyright protection under the law. Developers and filmmakers must cease using DRM software and forcing customers into strange legal covenants just to acquire the software or other piece of media. Honestly, DRM is just companies being lazy and unwilling to face the open market. When someone purchases a book, he or she also purchases the rights to do whatever he or she wants with that specific copy: highlight in it, rip pages, read it to a child, or even burn it. The only thing a person cannot do is reprint the book and sell it as his or her own. Similar practices should apply to software and movies. In this case, the rights which come with purchase would include reverse engineering (so long as it is not done directly for profit) and translation into new formats (i.e., burning mixtapes from iTunes purchases). Generally, software and media producers should not be allowed to remove the free nature of both people and markets.

In the same spirit, it should be considered ethical for people to build workarounds for DRM software, so long as they have no profiteering or malicious intent in doing so. If software and other digital media were sold in truly discrete, license-free forms, the ethics of reverse engineering, DRM circumvention, and phone unlocking would become clear: let the property owner do with his or her property as he or she pleases. Before these ethical questions can truly be resolved, however, property and copyright laws pertaining to digital media must be completely rewritten, and creators of said media must be forced to face competition.

Tuesday, March 1, 2016

Online Advertising

Without going into too much detail, I must admit that online advertising is what pays my college tuition, in an indirect sort of way. Consequently, my ethical response to online advertising is likely a bit more biased toward acceptance than most other people's. At its core, online advertising is the result of companies cleverly making use of the data available to them. On its face, such behavior is in no way ethically reprehensible. The standard methods companies use to gather their data (page-view tracking, purchase history, social media analysis, etc.) are all legitimate methods (this post will refer to them as "reasonably-public") because they gather data which the subject knowingly and willingly makes public. Any post on social media should, in my opinion, be fair game for use by a third party. Additionally, page views and purchase history are all conscious decisions which the subject generally knows have the potential to be observed by a third party and thus become reasonably-public data. When the subject makes these decisions, it is on her to make her peace with that fact. (I would, however, like to see a beefed-up Incognito Mode become a better option for those who truly cannot fathom the idea of their browsing being observed.)
The New York Times and The Guardian both chronicle cases of legitimate data collection. Target makes use of customers' conscious and public decisions to great effect. Facebook collects social media data which is, by definition, public. (Social media? Come on…) Even the cases in which lenders and recruiters collect data on their customers, as decried by the Kaspersky blog, are legitimate. In a society in which every company has the obligation to perform well for customers and shareholders alike, all potential competitive advantages which can be legitimately and legally acquired should be considered and used.
However, when data to be used for advertising is acquired illegally—whether through hacking, intimidation, or bribery—the data itself and the resulting analytics and company actions become ethically disagreeable. Illegally or illegitimately acquired data not only gives the company in question an unfair advantage in the marketplace, but it also puts the customer at a disadvantage. A person whose not-reasonably-public decisions, identity, and preferences are compromised must now work hard to (if possible) restore his or her identity and good reputation. Nor should that person be expected to be the sole guardian of his or her own not-reasonably-public data. That responsibility lies with the companies, which can mobilize large IT departments to protect financial secrets, matters of identity, and so forth. Individual people generally do not have the IT expertise or physical ability to fully protect their own not-reasonably-public data, and so that charge shifts to the other, generally more powerful, party.

With the current (and most logical) precedent of companies each holding and owning the data they collect on their customers, it is incumbent on those companies to protect the data from hacking and leaking for two reasons. First, hacking or leaking of not-reasonably-public data breaches the necessary relationship of trust between the company and the customer, as described in the previous paragraph. Second, it removes the marketplace advantage the company might have had by owning the data. Within this second point lies my justification for why companies should be allowed to sell reasonably-public user data. A key component of the modern marketplace economy is the securitization and distribution of individual bits of data (stocks, bonds, mortgages, etc.). In my opinion, reasonably-public user data is just more data ready to be securitized. Therefore, companies should be allowed to package and sell user data in a responsible, airtight manner when the purchaser can prove that it will use the data for legitimate ends. Additionally, if the government has a very legitimate need for the data and can provide a warrant or court order, it should be provided with the data (in most cases). Overall, the major keys when dealing with user information and advertising are legitimate collection of reasonably-public data, mindful protection of that data, and sound market practices when dealing with that data.

Thursday, February 18, 2016

Project #2

To view a white paper detailing the gender and cultural diversity within the Notre Dame CSE Department, please click on this Google Drive link.

Reflection Question #1:

Frankly, very little about the demographics data is surprising to me. Notre Dame is predominantly white, so it logically follows that the overwhelming majority of CSE students are white as well. Additionally, the poor gender diversity within the tech industry is something the class has well documented. It also makes sense that academia has a higher percentage of female students than industry does, since academia is what is most likely to generate positive changes in diversity in industry.

What interested me most was the remarkable growth in the overall number of students in each successive graduating class. My brother, a sophomore CS major, is part of the biggest class in Department history. As an electrical engineering student, I've seen firsthand the flip side of this trend: the EE Class of 2018 is one of the very smallest in the EE Department's history. Popular culture reveres technology and tech companies. My brother will freely admit that shows like Silicon Valley and movies like The Social Network directly influenced him to study computer science. It seems very plausible that he's not the only one of the 126 CSE sophomores to have been inspired in this way. I'll be watching with a keen eye to see if the number of CSE students continues to swell in subsequent years as computers and their power become ever more ingrained in everyday American culture.

Reflection Question #2:

The heated discussion during class on 2/18 about whistleblowing and the image of the U.S. military was direct proof that increased diversity would be a great development for the CSE Department. On one hand, the mostly white, male ROTC members held the view that opacity and independence for the military are critically important. Most of the opposing views were given by either women or non-white students who had different, often more worldly perspectives. This discussion was possible only because of the diversity which already exists within the CSE student population. I can only imagine how much more invigorating the discussion would have been if there were more cultural and gender diversity, and consequent diversity of opinion, in the room.

Essentially, diversity ensures vivacity. Whether within a classroom, a sports team, or a multinational corporation, diversity enables wider perspective, greater wisdom, and better decision-making. However, acquisition of diversity isn't always easy. Contrived, "hokey" diversity initiatives are more likely to hurt people than to help them. Companies and universities must make genuine, compassionate efforts to increase diversity, rather than just try to drive their percentages higher without any regard for what those percentages actually mean. True success can only come when there is true diversity.

Sunday, February 14, 2016

The Challenger and Whistleblowing

The root cause of the Challenger disaster was a failure in the sealing rings between the segments of one of the rocket boosters. Houston mission controllers referred to the event at the time as "obviously a major malfunction." According to the New York Times, "A seal failed on a rocket booster, and the stream of hot gas released by it ignited an external fuel tank… the unusually cold temperatures may have worsened the problem." Basically, a poorly designed piece of one of the white rocket boosters was compromised by the cold and consequently caused an explosion which killed seven astronauts and destroyed one of NASA's four operational space shuttles. The more morally important piece of the puzzle was that NASA had been warned by its contractor, Thiokol, that the cold weather would negatively impact the sealing rings, known as O-rings. Motherboard described the situation: "Thiokol described the risk of low temperatures to NASA managers from their headquarters in Utah, and urged NASA to postpone the launch. 'It isn't what they wanted to hear…' Challenger was a go."
                One of Thiokol’s employees, Roger Boisjoly, had been part of the task force assigned to support NASA’s rocket booster engines. Fully aware that the O-rings would not be able to handle the unusually cold launch temperatures, Boisjoly and his teammates pleaded with their managers to ask NASA to stop the launch. Their concerns were overridden, and the rest was history. Boisjoly took all of the documents he had access to and kept them protected from the government. The Whistleblower Support Fund described Boisjoly’s act of whistleblowing: “Boisjoly met secretly with an NPR reporter shortly after the shuttle disaster to provide information about the problems at [Thiokol].” Thiokol, NASA, and the U.S. government responded by blackballing Boisjoly and keeping him from ever working in the aeronautics industry again. An intense, and ultimately unsuccessful, legal battle ensued.

Boisjoly did the right thing by blowing the whistle on Thiokol and NASA. Once lives were lost, his actions were necessary to force the involved parties to ultimately change their practices. If Boisjoly hadn't exposed the unsound engineering decisions which had been made, there is no guarantee that either NASA or Thiokol would have designed the new type of O-ring which enabled safe shuttle flights for the next 17 years. Furthermore, the bad publicity would not have forced the cultural changes within NASA which persisted until Columbia. Additionally, the terrible publicity caused great financial challenges for NASA and its contractors. In response, Thiokol fired Boisjoly. While they may have been financially justified in doing so, they had no moral grounds for it. Punishment of Boisjoly was unethical, especially since Thiokol management had acted incredibly unethically by brushing a known and catastrophic design flaw aside. Even though his life wasn't ruined, he did endure much hardship at Thiokol's hands. Ultimately, whistleblowing did much good in this situation. In the years before the Columbia astronauts lost their lives, the redesigned O-rings and altered business practices very likely saved several lives. Whistleblowing, although painful in the short term, usually results in long-term benefits for the general public.

Monday, February 8, 2016

Diversity in the Tech Workplace

     The lack of diversity in most tech companies today is a large problem. While many other industries have largely overcome the diversity issue, the tech industry still has a long way to go. As Google’s hiring statistics show, the tech industry faces both racial and gender diversity issues. Only 17% of Google’s tech workers are women and only 1% are black or other non-Asian minorities. It seems to me that both types of diversity issues are consequences of flaws in American social culture. Furthermore, it seems evident that these shortcomings threaten to undermine the credibility of the tech industry, and perhaps the industry itself, as well.
Several black members of the tech industry have voiced their concerns with the state of diversity in the industry. Former Twitter engineer Leslie Miley told CodeSwitch that Twitter is "so bad at it" when it comes to diversity, despite the fact that Twitter is a very popular medium within the black community. Google employee and Medium contributor EricaJoy wrote that she "[stuck] out like a sore thumb… I've gotten passed over for roles I know I could not only perform in, but that I could excel in." Clearly, when there's a situation in which any employee, not to mention a large group of employees, cannot produce to his or her full potential because of cultural resistance, the employer cannot produce at its full potential, either. Thus, it rapidly becomes clear that the entire tech industry is operating at a suboptimal level. Only a two-part change in culture can fix this. On one hand, Silicon Valley must expand its search parameters for new coders. The current rotation of target schools produces predominantly white developers. Schools like Howard produce predominantly black developers who do not lack talent or willpower, as the Bloomberg feature pointed out. It would behoove companies in the Valley to give students like Professor Burge's a closer look. On the other hand, companies have to seriously invest in diversifying their workforces. Although programs like the ones mentioned by CNN Money are important, tech companies must stress the importance of diversity not only to their current employees, but to prospective and new employees as well. Change can only happen when there is complete buy-in from all levels of the company, from the CEO to HR to the technical staff.
Equally disappointing is the skewed gender distribution within the tech industry. Men dominate the industry. The reason is simple: women don't feel welcome. For the last 60 years, the culture which predominates within the Valley, whether you call it "nerd culture," "hacker culture," "dev culture," or any one of a myriad of labels, has been very masculine. Despite our common humanity, it is clear that there are general psychological differences between men and women. Stimuli and environments which men thrive in can be very tough for women to navigate. This can be seen even in childhood. My brother and I shared a room early on, and needless to say, it was very much our room. Dark colors, Legos, underwear, and sports memorabilia were always strewn about. My sisters had rooms which were bright, meticulously kept, and generally looked nice. It should come as no surprise that my sisters never entered my brother's and my room when we were young kids. It seems to me that the tech industry today is similar to my childhood bedroom: a decidedly unfriendly place for women. As New York Times editorialists have pointed out, women "are afraid they won't fit in." The aggression of "nerdy strutting" and the prevalence of male-centric geek culture have been off-putting to women since the mid-'80s, when coding ceased to be a job performed almost equally by men and women. Valley companies can increase the number of women in tech jobs by making those jobs more attractive to women. All they must do is remove the masculine aggression associated with Valley culture. Harvey Mudd College proved this was possible by making its computer science program less masculine and cut-throat. Once the number of women is comparable to the number of men in tech jobs, Silicon Valley will be able to produce at unprecedented levels.

     Ultimately, the key to solving issues of racial and gender diversity is removing the cultural resistance which prevails in the Valley (and on the Street, to a lesser extent) today.  Once the necessary changes are made, the tech industry will be able to operate with an efficiency and wealth of creativity never before imagined. Then, and only then, will the computing industry become the revolutionary force it claims to be.

Monday, February 1, 2016

Burnout

Burnout is a particularly interesting subject to me, considering the job I'll be beginning after graduation. In finance, my industry of choice, 70- or 80-hour workweeks are the norm for junior-level employees. Consequently, burnout is an issue which must be acknowledged and managed within the banking and finance industries. Those who work within banking claim that burnout is a direct result of the long hours analysts and associates are expected to work. While this may certainly be a factor, I would argue that it is not the only one. After reading the piece about Marissa Mayer's contrarian opinion on burnout, I started thinking about other causes of burnout.
While 70- or 80-hour workweeks are an obvious contributor to the burnout all too commonly seen in banking, perhaps just as important is the tendency for banking to grind against a person's personality. Analysts spend most of their time carrying out tasks which they almost never see to full completion. Rather, they perform tasks whose completion and worth lie completely with the more senior bankers. For example, last summer as an intern I spent a lot of time developing models of various financial transactions my company was considering entering. These models were fueled by my ability to cleverly use various computer programs and my desire to figure out a company's true story through thorough research. I enjoyed building these models, but I never got to see them put to use to determine the worth of a deal. Those decisions were way above my pay grade. Many banking analysts do not like to see their work go somewhat unfulfilled; nor do they enjoy spending their time performing the often arduous tasks which are constantly being demanded of them by senior people. Consequently, as Mayer says, they become resentful of their companies and leave after a short time.
I do not plan to let this stop me. As The Economist points out, tech is a ruthless meritocracy. Banking is perhaps the only more ruthless meritocracy in the world. One cannot rise to the top without proving oneself in the lower ranks or through other business avenues. Young bank employees often fail to realize this fact. I'm fully aware of it. As I progress into my job, I will work very hard to perform well and move up the ranks. As a great movie villain once said, "it's all... part of the plan." I cannot become resentful toward others for expecting me to prove myself. The opportunities which open up to a person who has made a successful career rising through the banking ranks are akin to the paradise spoken of in the Economist article. My goal is to acquire those opportunities.

Of course, I'm not oblivious to the perils of hard, if not completely fulfilling, work for 80 hours a week. As Andrew Dumont points out, hobbies and good life habits will be important diversions for me over the next few years. One of my goals is to get my golf handicap back to where it was when I golfed every day after caddying in the morning—this was in high school, before my summers were spent in internships. Ideally, I'd like to be better at golf in three years than I was five years ago at my previous peak. Furthermore, I'm currently in a new phase of my life where fitness is a priority. Since I've been training for the Holy Half Marathon, I've noticed that having a fitness goal can actually be rewarding (and fun, too!). During the first few years of my career, I intend to hone fitness and diet habits which will enable me to more greatly enjoy life later on, after the good work has been done. Ultimately, for me, avoiding burnout will come down to three factors: having thick skin against present difficulties, being future-minded, and meeting goals which will improve my life forever.

Tuesday, January 26, 2016

Career Trajectories and Company Loyalty

The majority of the articles which were covered for this blog post make specific references to the tech industry and employment issues therein. Since I’m not going into tech, my experience will be a bit different from many others in the class. The standard investment banking career path—the one I’ll be following—is much more flowchart-based than that of the tech industry. By this, I mean that bankers do not simply change from one job to a similar one every few years. Rather, they typically work for two or three years right after college as an analyst. Analysts are the lowest rank and consequently are the most numerous (usually) and expected to learn, rather than lead. After two years, well-performing analysts are usually offered a third year as an analyst. During this period, analysts begin to hone their leadership skills in preparation for a possible promotion the next year. Analysts who do not perform particularly well are typically not offered another position within the bank. These people often go into another financial job or return to school to further hone their skills.
Analysts who have been given a third year and perform well are then promoted to associate for their fourth, fifth, and sixth years. Again, those who are not promoted find other work or return to school. Associates are now expected to become leaders. It is the job of the associate to support the bankers above him or her by quarterbacking the operations of the analysts within the associate’s group. To use a military analogy, first- and second-year analysts are the privates in the squad, third-year analysts are the corporals, and associates are the sergeants in charge of the squad. Associates spend three or four years in their position, taking on more leadership responsibility as necessary. The top associates are then considered for promotion to vice president. Vice presidents exist largely in a transitional role as they occupy the space between junior staff and senior management. Consequently, VPs must keep their hands in two buckets: the daily operations of the junior staff and the client-facing operations of senior management. Furthermore, good VPs will begin to develop their own portfolios of clients whom they will call on for the rest of their careers in banking. Once vice presidents have built a portfolio and demonstrated their leadership ability and social skills, they are considered for promotion to director. Directors are the second-highest mainline rank in most investment banks. They carry out all the typical client-facing duties and interactions, including cultivating new business, leading deals, and maintaining current client relationships. The best directors are promoted to managing director. Managing directors are not only responsible for their own performance, but for that of their entire group, including all the analysts, associates, VPs, and directors under them. Finally, some MDs seek positions within the corporate leadership of the bank as a whole—think C-level-type positions.

A nice feature of the i-banking career path is that upward mobility is all but guaranteed for those who perform their jobs well. My 5-year plan is to work hard and do as well as I can as an analyst. If I'm promoted, I'll likely continue on the career path outlined above. If I'm not promoted, I'd like to go to business school and get an MBA. An MBA would enable me to switch to a wide variety of career paths, or even re-enter banking. As I mentioned earlier, the decision to stay with my current company long-term would follow an "am I going to be promoted" sort of flowchart. Additionally, the career path outlined above has a job change every three years built in, as Vivian Giang and her sources for the article suggest. Although the issue of job fluidity and consistently increasing pay is not a big deal in the banking industry, company loyalty is more of a problem. Often a bank will only be loyal to its employees if they are loyal to it or are very good at their jobs. For example, if a bank hears that one of its third-year analysts would rather pursue business school than a promotion, the bank would likely not even offer the promotion in the first place. Additionally, unlike the conflict between tech workers and Apple, Google, etc., the banking industry often faces and quietly deals with talent poaching at all levels. Non-compete agreements are unusual among the lower ranks, especially because junior people often switch from bank to bank. However, non-disclosure agreements are very common, and they are used to protect proprietary models, information, and financial data. Since the banking career path is segmented into three-year blocks and is very flowchart-based, job-hopping is an inevitable aspect of the process.

Monday, January 18, 2016

Just Who is a Hacker?

Upon exploring the readings associated with hackers in this week's assignment, one key trend immediately surprised me: hackers are inherently right-brained people. Since their occupations, fascinations, and persuasions typically revolve around computers, hackers are usually viewed as technological, left-brained people. However, for hackers, computers are merely a tool to a larger end, just as marble is merely a tool used by a sculptor to achieve something more important. As many self-proclaimed hackers explain (and I alluded to in my previous post), hackers are artists.
                Some hackers actually come right out and say that they identify very closely with traditional artists: hacker Paul Graham in his 2003 essay “Hackers and Painters” says that “Hacking and painting have a lot in common. In fact, of all the different types of people I've known, hackers and painters are among the most alike. What hackers and painters have in common is that they're both makers… trying [to] make good things.” Later in the essay, Graham furthers his view on hacking by saying that hackers are like artists in that they can only learn by doing, unlike scientists or mathematicians. No one would dispute that artists (painters, sculptors, musicians, etc.) are very heavily right-brained people. It is becoming clear to me that hackers are just as right-brained as their artist counterparts.
More evidence for the artistic and expressive nature of hackers can be found through studying their behavior and personality. As "A Portrait of J. Random Hacker" explains, common hacker hobbies include science fiction, music, medievalism, chess, backgammon, and other intellectual games. All of these make heavy use of the creative and artistic capacities of the brain, and each of them helps participants learn how to solve problems in new and interesting ways or encourages creative development, both of which are central to the hacker's mission. Like artists and musicians, many hackers hold leftist politics, think highly of themselves, occasionally indulge in recreational drugs and other substances, and don't care for societal conventions of race, gender, and religion. These are all marks of an intensely creative, right-brained person.
Hackers' inherent creativity and artistry are often accompanied by a certain mistrust of established ways of doing things, whether those things are other ways of hacking, sports, or the modern sociopolitical order. Eric Raymond, in his "A Brief History of Hackerdom," chronicles the technological developments made by hackers over the latter half of the 20th century. A common theme among the achievements was a desire to create contrasting systems, as seen in the rise of UNIX to supplant MIT's early systems and, later, the compartmentalization of UNIX and its subsequent liberalization through the development of Linux. Only later in the development of hacker culture, when the internet and open source became dominant, did the mistrust of other computing systems erode away.
However, perhaps more importantly, many hackers have a mistrust of "the system," or "the man." I postulate that this is an artifact of the 1960s and '70s countercultural movements. Phrack Magazine contributor "The Mentor" expresses his mistrust of the system during his mid-'80s juvenile years: "We explore... and you call us criminals. We seek after knowledge... and you call us criminals. We exist without skin color, without nationality, without religious bias... and you call us criminals." I can't help but think of the image of the angst-ridden teenager fed up with the world and his superiors. Thirty years later, The Mentor's misgivings have been codified within the larger hacker culture: "they carry an ethic of disdain towards systems that normally allow little agency on the part of ordinary individuals." (Brett Scott, "The Hacker Hacked")
                    As an engineering student who has been doing stage lighting design for eight years and who will be going into investment banking after graduation, I have mixed reactions to the idea of “the hacker identity.” I certainly do not consider myself a hacker, but there are aspects of the culture I identify with. As a stage lighting designer, it is my job to leverage technology to create beautiful art. Such is also the occupation of the hacker. Like hackers, there are times when I prefer to use my right-brain to do things my own way. However, I have no mistrust of “the system” and in fact, I hope to be contributing meaningfully to it for many years to come. (See above about investment banking…) In a moment of self-reflection, I would imagine that perhaps I became an engineering student because I sit somewhere in between the hacker culture and the mainstream culture. Art and creativity are exhilarating (I can sit for hours working on CAD drawings of random buildings that pop into my head), but so are logic, technology, and mainstream constructs. The technical details of a microchip and the modeling of a mergers and acquisitions deal are more fascinating to me than science fiction, medievalism, or finding ways to outsmart “the system.” For these reasons, although I do somewhat identify with it, I would not consider myself a part of hacker culture.

Wednesday, January 13, 2016

CS: Art, Engineering, or Science?

Several articles included within the first reading assignment ask the question "What is software development?" Is it science, engineering, or perhaps even art? In my opinion, computer science and software development are not engineering disciplines. They lack several of the prerequisites needed for inclusion in the engineering field. Instead, software professionals engage in what I prefer to think of as "artistic science." As an electrical engineering student, I may be somewhat biased, but I'd like to make a few points to help illustrate my claim.
Traditionally, engineering has been defined as the manipulation of physical laws and natural materials in such a way as to achieve a beneficial and finite result. As Stack Overflow co-founder Jeff Atwood mentions, "Traditional bridge-building engineering disciplines are based on God's rules—physics. Rules that have been absolute and unchanging." Software development does not fit either of these characterizations. Physical laws like gravity, mass/energy conservation, quantum behavior, and even time dilation have no bearing upon software development. They are important, however, for the electrical engineers who create the hardware necessary for software developers to ply their craft. Furthermore, software development does not rely on naturally found materials to achieve an end. All a software developer needs are creativity, knowledge, a development environment, and a good mechanical keyboard. With these immaterial and/or unnatural inputs, a software developer can, like an engineer, set out to achieve beneficial results. However, unlike the results of good engineering, software is never a finished product. Ian Bogost from The Atlantic points out that "today's software development is iterative." Unlike bridge-building, where the first iteration can be the only iteration for 50 or 100 years, software developers are free to continually improve their products. They are free to keep developing. This final reason alone certainly explains why computer programmers should be called developers and not engineers.
Although software developers should not be classified as engineers, they can be classified as scientists. The iterative nature of coding very closely parallels the experimental processes scientists use. Inherent within scientific experimentation is a need for repeatability and stable boundary conditions. Software development relies on the very same. If computers did not offer repeatability and stability, software development couldn't take place. Furthermore, like scientists, software developers rely on mathematics to inform their work. Without discrete math and the mathematics of binary logic, code developers would not be able to create a product which computer hardware could run. Clearly, software developers rely on the principles of scientific experimentation in their work. Consequently, they can be considered scientists themselves. However, computer science involves another element: creativity. Great software is great artwork, and creative computer scientists are also creative artists. Like artists, computer programmers have the ability to apply creative thinking to a blank canvas (or blank source file, as it were) and create something truly beautiful. No one would argue that Windows and MacOS weren't works of art when they first came out. They beautified the computing experience, just as works of art beautify the human experience. Facebook changed human discourse much as great art forms have. So, to fully describe the field of software development, it must be thought of both as a science and as an art.
                With all of the above taken into consideration, it becomes evident that software development is not a form of engineering, but rather a combination of science and art. It does not create physical or finite results in the way engineering does. Rather, software development is a scientific exercise which requires artistic creativity and discipline. In short: computer science is an acceptable term, software engineering is not.

Tuesday, January 12, 2016

Introduction

My name is Jake Reilly. I'm from the Chicago suburbs and I'm finally being vindicated for my 21 years as a Cubs fan. As a senior in EE, my interests lie primarily in how engineering can influence business and how business and economic requirements should be considered whenever engineering work is done. Furthermore, many of the EE projects we do tend to be cross-disciplinary. I enjoy being exposed to many different lines of study and ways of thinking. Outside of class, I'm very involved with the technical crew at the DeBartolo Performing Arts Center, particularly in the realm of lighting design. I've designed lights for student productions, Broadway-level dance companies, Hip-Hop Night, and everything in between.

I'm looking forward to learning more about the social and ethical implications of emerging technologies including AI, self-driving cars, medical robotics, and the technical aspects of the sharing economy. In my opinion, the biggest issue facing any engineer is "We can do this, but should we?" For electrical and computer scientists and engineers, this question demands that we always consider our breakthroughs and products from the perspective of not only their necessity but also their moral acceptability. Artificial intelligence, in particular, raises the question of "should we" in a way we've never seen before. I'm looking forward to hopefully hearing more on this topic.