Bias and Photography Technology Development

Replicating Biases

While scrolling through Twitter, a certain user caught my eye. @TayTweets had published a stream of racist, sexist, and generally insulting comments. Looking deeper, I realized that Tay was not actually a real user: she was Microsoft’s machine-learning chatbot. Microsoft never created her to be demeaning; rather, the bot had read other users’ comments and replicated the overall patterns of speech and behavior on Twitter (Vincent, 2016). The bot itself had neither good nor evil intentions; it had no intentions at all. This is not the first time we have seen technology perform in derogatory ways. Throughout history, machines have been built around human behavior. It is not that the technology itself is opinionated; rather, the information it draws from carries unaccounted-for biases that can affect its performance.

One example of seemingly biased technology is artificial intelligence (AI). Used in all sorts of applications, this technology seems able to make decisions and think for itself. In reality, its actions are based on vast amounts of previously collected data. One common source of text for teaching AI systems basic vocabulary is the Enron emails. In her article “How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem,” Amanda Levendowski describes these emails as “biased, low-friction data,” or BLFD. The 1.6 million Enron emails were exchanged between “…employees of [a] Texas oil-and-gas company that collapsed under federal investigation for fraud stemming from systemic, institutionalized unethical culture…” (Levendowski, 2018). Whatever their copyright status, the chance of a technology company being sued for using them is extremely low, making them “low friction.” Yet these emails are so biased that researchers have studied them specifically to understand power dynamics and gender bias. Despite how poorly they represent everyday communication, they remain a main source of AI training data. Building AI on this kind of BLFD can have serious effects, erasing entire perspectives and magnifying intolerance.
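To make the mechanism concrete, here is a minimal sketch, a toy example of my own rather than anything drawn from Levendowski’s article, showing how a system that learns language purely from co-occurrence statistics absorbs whatever skew its training text contains. The four-sentence “corpus” is invented for illustration:

```python
from collections import Counter

# Hypothetical toy corpus; in practice this would be millions of documents.
corpus = [
    "the executive made his decision quickly",
    "the executive signed his report",
    "the assistant scheduled her meetings",
    "the assistant typed her notes",
]

# Count how often each word appears in the same sentence as "his" vs. "her".
cooccur = {"his": Counter(), "her": Counter()}
for sentence in corpus:
    words = sentence.split()
    for pronoun in cooccur:
        if pronoun in words:
            cooccur[pronoun].update(w for w in words if w != pronoun)

# A model with no opinions of its own still reproduces the skew in its data.
for role in ("executive", "assistant"):
    print(role, {p: cooccur[p][role] for p in ("his", "her")})
# executive {'his': 2, 'her': 0}
# assistant {'his': 0, 'her': 2}
```

Scale this same mechanism up to 1.6 million emails from a single corporate culture, and the associations the model learns are the associations that culture held.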

The Shirley card, used to calibrate color in early film processing, is another example of how unforeseen biases can have massive effects on the technology industry. As film cameras became more popular, a problem regarding skin tone emerged: people with darker skin tones did not appear accurately on film; instead, their coloring was distorted and appeared “ashen-colored” (Roth, 2009). This was the result of film processing being calibrated against an image of a white woman. Her skin tone served as the baseline for color balancing, and any large deviation from her Caucasian complexion reproduced inaccurately. Film processing was not cheap, and Kodak envisioned its market as centered on wealthy white families. After the problem was recognized, calibration cards featuring people of multiple skin tones were introduced, and photography became more accurate for all races. The camera itself was never altered to fix the problem; once the reference information the process was built on became more inclusive, the entire process became less biased.

Today’s technology has given our population incredible opportunities. We can freeze time, create lifelike robots, and access countless ideas with one click. However, when creating these machines, we must constantly evaluate our own unconscious biases so they are not reproduced by default. Technology, in its simplest form, is purely cold, hard metal and plastic. It has neither a brain nor a consciousness of its own. Only humans have the capacity to assign meaning to the world around them; however, destructive thoughts and ideas can be transferred into the machines we create today.

Works Cited:

Levendowski, Amanda. “How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem.” Washington Law Review, vol. 93, no. 2, June 2018, pp. 579–630. EBSCOhost, search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=130507079&site=ehost-live.

Roth, Lorna. “Looking at Shirley, the Ultimate Norm: Colour Balance, Image Technologies, and Cognitive Equity.” Canadian Journal of Communication, vol. 34, no. 1, 2009. Academic Search Complete, EBSCOhost, doi:10.22230/cjc.2009v34n1a2196.

Vincent, James. “Twitter Taught Microsoft’s Friendly AI Chatbot to Be a Racist Asshole in Less than a Day.” The Verge, The Verge, 24 Mar. 2016, www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist.
