Throughout our entire lives, we have grown up and gone through school being taught a certain way to write and speak; in other words, how to properly use language. In addition to all the rules we are told to abide by, we are also taught a sense of morality that we are supposed to follow when expressing our opinions. This morality is more often than not forgotten, and people truly begin to exercise what America is founded on: Freedom of Speech. Even though America is very passionate about this Freedom of Speech, it can often lead to extreme controversy and allow people to express their true feelings about something or someone, even if those feelings are extremely cruel and hurtful. At a young age we were all taught, “If you do not have anything nice to say, then do not say anything at all.” It is very clear that this saying has been thrown out the window by many Americans, especially those who are habitual users of hate speech.
Hate speech is the use of words as weapons in order to put down, or essentially take out, a specific group. Oftentimes hate speech is used against those of a different race, ethnicity, age, religion, gender, sexual orientation, or mental or physical disability. Engaging in hate speech means insulting, demeaning, and ridiculing people for these differences. People who are victims of hate speech encounter many issues throughout their lives, and some even resort to suicide. Hate speech is extremely harmful, but can it be stopped when it is protected by Freedom of Speech? Occasionally hate speech can be seen as a threat and then acted upon, but if not, it is a painful abuse that people have to deal with on their own. Freedom of Speech is a wonderful right that Americans have, but unfortunately it can protect those who should not be protected.
In this week’s reading from How to Lie with Statistics, chapter 4 discussed the Stanford-Binet IQ test and how its probable error has been found to be approximately three percent. In the example given in the text, this margin of error could mean that telling the parents of “Linda” and “Peter” that Linda had the higher IQ could be completely false if their individual results happened to fall within the error margin. For some reason, this got me thinking about College Board tests and standardized testing in general. For tests given to grade-school students, such as the ERB, the scores you receive are reported as percentiles rather than as a raw score of some kind.
Is it possible that the same margins of error fall within these testing categories? In the US public school system, there is so much weight on students’ testing results compared to national averages. Their “results” lead them to advanced-program acceptances and accelerated tracks. If margins of error exist in determining whether a student falls in the 50th vs. the 80th percentile for a subject, then I would argue the system in place presents inaccuracies that are limiting students’ academic growth and achievement.
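Huff’s probable-error point can be sketched numerically. A minimal sketch, assuming hypothetical IQ scores for “Linda” and “Peter” (the book’s exact figures may differ); the ~3% probable error is the figure cited for the Stanford-Binet:

```python
# A minimal sketch of Huff's probable-error argument. The IQ scores below
# are hypothetical stand-ins for "Linda" and "Peter"; the ~3% probable
# error is the figure the chapter cites for the Stanford-Binet test.

def probable_range(score, error_pct=3.0):
    """Return the (low, high) band implied by a given probable error."""
    margin = score * error_pct / 100
    return (score - margin, score + margin)

linda = probable_range(101)  # hypothetical score
peter = probable_range(98)   # hypothetical score

# If the two bands overlap, the test data cannot show that one child's
# "true" IQ is actually higher than the other's.
bands_overlap = peter[1] >= linda[0]
print(bands_overlap)  # True
```

The same overlap logic applies to percentile bands on standardized tests: a reported 80th-percentile score whose error band reaches down into another student’s band says less than it appears to.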
Americans take pride in the fact that we live in a nation in which all people have the right to express their ideas and opinions freely. This Freedom of Speech is what makes our nation truly “free”. Unfortunately, there are people who take advantage of this freedom, publicly engaging in hate speech on television, over the radio, and in other forms of media. Take Don Imus, for example. In 2007, Imus called the Rutgers women’s basketball team “a bunch of nappy-headed hoes.” Imus had never met these women and made this offensive judgment simply off of their appearances (I’m sorry…I didn’t know girls were supposed to do their hair and makeup before a national championship basketball game).
Although I am proud to be American and proud that we, for the most part, respect each other’s opinions, I believe that we should hold ourselves accountable for what we say and to whom we say it. We shouldn’t abuse this privilege simply because we have it.
Throughout How to Lie with Statistics, we are warned of how ambiguous statistics can be. Darrell Huff teaches us how to question statistics, as well as what is needed for a reliable sample and experiment. I feel like we see statistics almost everywhere. They are constantly on the news, in advertisements, in articles, journals, and blogs. We see them in a variety of different mediums, in stores, at work, and in many more places. However, I feel that it is sometimes hard to trust any statistics other than those from the stock market or sports. Statistics are often used to convince people to favor one side over another. It is not easy knowing which statistics to question further and which ones to accept. It is hard to explain the statistics in most of the ways they are communicated, unless we were to attach a journal entry to the end. If we are always questioning the statistics we see, how will they even be able to serve much of a purpose or means of persuasion anymore?
As I continued my reading of How to Lie with Statistics by Darrell Huff, I was even more convinced that I would never be able to trust any fact, statistic, or percentage I was presented with ever again. Before reading this book, I was unaware of how prevalent the manipulation of mathematical data was. However, example after example illustrated to me that I had mistakenly trusted the idea that math does not lie. There were stories from almost every decade and industry in America using these tricks of deception. The problem with the solutions posed in Chapter 10 is that many times statistics and percentages are not presented with the facts needed to analyze their accuracy. While many stats and percentages associated with things such as weight-loss products and tabloids are usually not trusted anyway, How to Lie with Statistics taught me that I have to question the reputable sources now as well.
In a way, I believe this book shows the problems of American culture. A critique of our culture is that it is solely based on the accumulation of money and the growth of capitalism. This book represents the idea that the ends justify the means. It presents the idea that people will manipulate whatever they can in order to convince more people to buy their product or idea, or to give them more money. There is no sense of morality, ethics, or accountability. People do not believe they have a duty to tell their customers the truth, but instead will lie to convince them of the opposite. They prey on the idea that most people believe they can rely on the trust of numbers. While this may seem like an over-exaggeration of some misrepresented numbers, I believe the underlying problem is a willingness to tamper with numbers and proof.
What I enjoyed most from our reading for this class was the last chapter in How to Lie with Statistics, which explained how you can determine which statistics to trust. I know that after our reading from last week I walked away thinking, now how am I ever going to know what to believe? We learned about so many tricks that can be used to manipulate statistics. Throughout the next week I found myself writing off any statistic I was presented with, assuming that it couldn’t be trusted, for surely it had been distorted in some way. This included advertising campaigns on television, but especially the news articles I was reading when looking for an article for my paper. Somehow, in a matter of days, I had gone from trusting everything that I read to trusting nothing. Of course statistics can be really helpful and informative if presented correctly, so I really liked that the book ended by giving you some advice about how to know which statistics to trust. What I really learned from both this reading and the Second Thoughts reading was that it is really all about the presentation. Whether you are referring to the median as the average, saying 1 out of every 4 as opposed to 25%, or using a word that may give off a specific message due to its connotations, you can really make an audience interpret data in completely different ways, all due to the way you present it. This has really made me think much more carefully about what I am reading.
While reading this week’s homework in “How to Lie with Statistics,” a certain portion from chapter 3 really got me thinking. Author Darrell Huff discusses an instance where a test for a polio vaccine came out. The statistics were maneuvered in a way that made it seem as if none of the subjects had contracted the disease. Reading this portion of the book made me think back to our class discussion regarding the meaning of an average. Averages can take many different shapes or forms, such as the mean, median, and mode. I thought about how medical practices often share results from studies using wording such as “on average, only 1 out of every 100 subjects contracted the disease from the vaccine.” What kind of average was hypothetically used in this instance? Although Huff describes the many types of manipulation of statistics, in this instance there could be grave medical consequences arising from statistics that lead to false advertising. Many patients, especially children, could be at an increased risk for disease because a medical practice chose the median instead of the mode. Or patients with chronic illnesses could be more likely to participate in a trial study without knowing the real risks that the study may entail. Huff’s interesting perspective allowed me to connect the reading for this week to our earlier class discussions.
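The ambiguity of “average” is easy to demonstrate. A small sketch with invented numbers (not from any study) shows how the mean, median, and mode of the very same data can all differ, which is why a claim about “the average” says little until you know which one was used:

```python
from statistics import mean, median, mode

# Invented data, deliberately skewed by one large value, to show that
# "the average" can mean three different numbers for the same data set.
values = [20, 20, 30, 40, 190]

print(mean(values))    # 60 -- pulled upward by the outlier
print(median(values))  # 30 -- the middle value
print(mode(values))    # 20 -- the most common value
```

A report could truthfully cite any of the three as “the average,” and a reader with only that word has no way to tell which.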
I have two points I want to hit this evening following the readings from the “How to Lie with Statistics” book. The first point is one that has been on my mind recently, and it regards housing. Three teammates and I are in the market for a house and are finding it exceedingly difficult to find one to accommodate us. When the author was talking about how the housing market got it all wrong by designing houses for the average number of people in a family, I think the market has self-corrected since the book was published, because now it’s impossible to find a house for the average! There seems to be an abundance of one- and two-room places in Richmond, but a dearth of houses built for four.
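The author’s “built for the average family” fallacy can be made concrete with a toy example (the household sizes here are invented): the mean household size can be a number that matches no actual household, so a house built for it fits almost no one exactly.

```python
from statistics import mean

# Hypothetical household sizes on one street.
households = [2, 2, 3, 4, 7]

avg = mean(households)
print(avg)                # 3.6 -- no household actually has 3.6 people
print(avg in households)  # False -- the "average household" does not exist
```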
The second point I want to address is the concept of being normal (discussed in the third chapter). I know we talked about this briefly in class following the article that was read aloud, but I wanted to relate this back to something we discussed in 101: the majority makes right. This is taken as fact when dealing with society. However, what happens when this majority is only perceived to be so as a result of a statistic quoting the norm? It’s quite easy to make something appear to be the norm, as Dr. Kinsey essentially proved. So let’s say some taboo behavior is reported as normal; does that make it right? Simply food for thought.
“How can you avoid being fooled by inconclusive results? Must every man be his own statistician and study the raw data for himself?” (Huff, 44)
While reading Chapter 3 of “How to Lie with Statistics”, I was struck by this question. Every day, while online, watching TV, reading newspapers or magazines, or listening to the radio, we are confronted with statistics, numbers, facts, and claims trying to convince us that this particular brand of shampoo will change our lives. People use statistics every day to demonstrate a particular point and to manipulate reactions. However, how can we know whether or not the information we are being fed is factual? At this point in 21st-century society, it seems as though we almost expect that the statistics shown in commercials on TV have been fudged or constructed so as to convey a certain message. If a “doctor” on TV says that 2 out of 3 patients found a certain drug effective, my automatic response is to doubt the truth of that statement. It seems to me that society has taken what Huff was advising in his book to a new level, to the point where we automatically assume that if we are shown a graph, chart, or percentage associated with advertising, it must not be completely accurate. It is good to be doubtful rather than gullible, because without sufficient information it is impossible to discern whether or not the information is correct.
But at this point, why do advertisements even bother with throwing in graphs and charts? If the consumers understand that inevitably the data will have been influenced one way or the other, what do they hope to gain? There will of course still be the viewer who is naive enough to trust the data, yet it seems as though we are almost immune to these statistics.
On the other hand, how can we be so sure that these statistics are false? If we can’t prove it either way, maybe the information we are being shown is, in fact, the facts. Either way, I think the moral of Huff’s book is that we should not accept statistics at face value. It is our duty as the interpreters to dig deeper and investigate for ourselves where the data is coming from and in what context it should be analyzed. In this way, I think the answer to his question is yes: to some degree we must all act as statisticians if we hope to find the truth behind the numbers.
I found this chapter’s discussion of hate speech to be particularly interesting, especially in regard to the conflict between morals and freedoms that it presents. We take pride in American society in our ability to say exactly what we think and feel, but when it comes to the cruel stereotypes and malicious insults described here as hate speech, we see limitations on this ability. But is it really fair to censor a person’s words, or to reprimand speakers like Bill O’Reilly and Don Imus, just because these men chose to announce their racist thoughts publicly? If they had made these statements to a group of family or friends, President Bush would certainly never have been aware or involved. On the other hand, though, it could be argued that saying things meant to demean any particular group could become a threat to that group’s rights of security and happiness, and can of course be counted as a type of discrimination, an issue our country tries so hard to prevent.
As the chapter progresses into its next section on “Linguistic Lethargy,” however, I begin to become skeptical of its claims. Sure, there are words and linguistic habits in our language that exhibit deep-rooted prejudice, including the contrasting connotations of the words “white” and “black” and, as the author states, the inclusion of women in the general term “man” and even the structure of the word wo(man) itself. But can these types of linguistic traditions really be considered hate speech? I would be tempted to argue no, and that instead they are simply an example of culturally defined uses of language, but even this argument can be seen as weak. At one time, the fact that the word woman is simply a variation of the word man might have reflected certain cultural connotations, whether the inferiority of women or the biblical reference to woman being made from man. But I think that in today’s times, such words are simply overlooked habits, the leftovers of another generation that have never been updated, not necessarily a reflection of the discrimination or sexism in our society.