
Cherry picking

From Wikipedia, the free encyclopedia
Cherry picking is often used in science denial such as climate change denial. For example, by deliberately cherry picking a favorable time period, here 1998–2012, an artificial "pause" can be created even when there is an ongoing warming trend.[1] The choice of window and baseline matters in general: starting the zoomed-out portion of such a graph at a different point, or measuring anomalies against a baseline other than the relatively short 1951–1980 average, would shift where the curve sits, though not the underlying trend.

Cherry picking, suppressing evidence, or the fallacy of incomplete evidence is the act of pointing to individual cases or data that seem to confirm a particular position while ignoring a significant portion of related and similar cases or data that may contradict that position. Cherry picking may be committed intentionally or unintentionally.[2]

The term is based on the perceived process of harvesting fruit, such as cherries. The picker would be expected to select only the ripest and healthiest fruits. An observer who sees only the selected fruit may thus wrongly conclude that most, or even all, of the tree's fruit is in similarly good condition. This can give a false impression of the quality of the fruit, since the sample is not representative. A concept sometimes confused with cherry picking is the idea of gathering only the fruit that is easy to harvest, while ignoring other fruit that is higher up on the tree and thus more difficult to obtain (see low-hanging fruit).

Cherry picking has a negative connotation as the practice neglects, overlooks or directly suppresses evidence that could lead to a complete picture.

Cherry picking underlies many logical fallacies. For example, the "fallacy of anecdotal evidence" tends to overlook large amounts of data in favor of data known personally, "selective use of evidence" rejects material unfavorable to an argument, and a false dichotomy picks only two options when more are available. Some scholars classify cherry picking as a fallacy of selective attention, the most common example of which is confirmation bias.[3] Cherry picking can refer to the selection of data or data sets so that a study or survey will give desired, predictable results, which may be misleading or even completely contrary to reality.[4]
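The effect of selecting a time window can be made concrete with a small numerical sketch. The series below is entirely invented: a steadily rising quantity with one deliberately flat stretch, loosely echoing the 1998–2012 "pause" example above. Fitting a trend line to the full record versus the cherry-picked window gives opposite impressions of the same data.

```python
# Sketch with invented data: how a cherry-picked window can hide a trend.
# We build a rising series with one flat stretch, then compare the
# least-squares slope over the full record against the picked window.

def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic "anomaly": rises 0.02 per year, but is flat during 1998-2012.
years = list(range(1980, 2021))
values = []
v = 0.0
for y in years:
    values.append(v)
    if not (1998 <= y < 2012):   # plateau: no rise inside the picked window
        v += 0.02

full = slope(years, values)      # positive long-term trend
window = [(y, x) for y, x in zip(years, values) if 1998 <= y <= 2012]
picked = slope([y for y, _ in window], [x for _, x in window])  # flat

print(f"full-record slope: {full:.4f}")   # clearly positive
print(f"1998-2012 slope:   {picked:.4f}")  # zero: an artificial "pause"
```

Both slopes are computed from the same series; only the choice of which points to look at differs, which is precisely the fallacy of incomplete evidence.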

History


A story about the 5th-century BCE atheist philosopher Diagoras of Melos tells how, when shown the votive gifts of people who had supposedly escaped death by shipwreck by praying to gods, he pointed out that many people had died at sea in spite of their prayers, yet these cases were not likewise commemorated[5] (an example of survivorship bias). Michel de Montaigne (1533–1592), in his essay on prophecies, comments on people willing to believe in the validity of supposed seers:

I see some who are mightily given to study and comment upon their almanacs, and produce them to us as an authority when anything has fallen out pat; and, for that matter, it is hardly possible but that these alleged authorities sometimes stumble upon a truth amongst an infinite number of lies. ... I think never the better of them for some such accidental hit. ... [N]obody records their flimflams and false prognostics, forasmuch as they are infinite and common; but if they chop upon one truth, that carries a mighty report, as being rare, incredible, and prodigious.[6]

In science


Cherry picking is one of the epistemological characteristics of denialism and is widely used by different science denialists to seemingly contradict scientific findings. For example, it is used in climate change denial, in evolution denial by creationists, and in denial of the negative health effects of consuming tobacco products and of passive smoking.[1]

Choosing to make selective choices among competing evidence, so as to emphasize those results that support a given position, while ignoring or dismissing any findings that do not support it, is a practice known as "cherry picking" and is a hallmark of poor science or pseudo-science.[7]

— Richard Somerville, Testimony before the US House of Representatives Committee on Energy and Commerce Subcommittee on Energy and Power, March 8, 2011

Rigorous science looks at all the evidence (rather than cherry picking only favorable evidence), controls for variables as to identify what is actually working, uses blinded observations so as to minimize the effects of bias, and uses internally consistent logic.[8]

— Steven Novella, "A Skeptic In Oz", April 26, 2011

In medicine


In a 2002 study, a review of previous medical data found cherry picking in tests of antidepressant medication:

[researchers] reviewed 31 antidepressant efficacy trials to identify the primary exclusion criteria used in determining eligibility for participation. Their findings suggest that patients in current antidepressant trials represent only a minority of patients treated in routine clinical practice for depression. Excluding potential clinical trial subjects with certain profiles means that the ability to generalize the results of antidepressant efficacy trials lacks empirical support, according to the authors.[9]
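The generalizability problem described above can be illustrated with a toy calculation. All numbers and group labels here are invented for illustration: if a trial's exclusion criteria admit only a subgroup of the patients seen in routine practice, and the treatment works differently in the excluded group, the trial's estimate does not transfer to the full population.

```python
# Hypothetical sketch of the generalizability problem: a trial that
# excludes patients with a given profile estimates the effect only for
# the included subgroup. All figures below are invented.

# (profile, share of real-world patients, mean improvement on the drug)
population = [
    ("uncomplicated", 0.30, 12.0),  # eligible under strict trial criteria
    ("comorbid",      0.70,  4.0),  # excluded from the trial
]

def mean_effect(groups):
    """Share-weighted mean improvement across patient groups."""
    total_share = sum(share for _, share, _ in groups)
    return sum(share * eff for _, share, eff in groups) / total_share

# The trial only sees the eligible subgroup; routine practice sees everyone.
trial_estimate = mean_effect([g for g in population if g[0] == "uncomplicated"])
real_world = mean_effect(population)

print(f"trial estimate of improvement: {trial_estimate:.1f}")  # 12.0
print(f"real-world average improvement: {real_world:.1f}")      # 6.4
```

In this toy example the trial overstates the average real-world benefit roughly twofold, solely because of which patients were allowed into the sample.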

In argumentation


In argumentation, the practice of "quote mining" is a form of cherry picking,[7] in which the debater selectively picks quotes supporting a position (or exaggerating an opposing position) while ignoring those that moderate the original quote or put it into a different context. Cherry picking is a significant problem in debates because the individual facts cited may be true yet misleading once stripped of context. Because fact-checking cannot be done live and is often untimely, cherry-picked facts or quotes tend to take hold in public discourse and, even when later corrected, leave a lasting misrepresentation of the groups targeted.

One-sided argument


A one-sided argument (also known as card stacking, stacking the deck, ignoring the counterevidence, slanting, and suppressed evidence)[10] is an informal fallacy that occurs when only the reasons supporting a proposition are supplied, while all reasons opposing it are omitted.

Philosophy professor Peter Suber has written:

The one-sidedness fallacy does not make an argument invalid. It may not even make the argument unsound. The fallacy consists in persuading readers, and perhaps ourselves, that we have said enough to tilt the scale of evidence and therefore enough to justify a judgment. If we have been one-sided, though, then we haven't yet said enough to justify a judgment. The arguments on the other side may be stronger than our own. We won't know until we examine them.

So the one-sidedness fallacy doesn't mean that your premises are false or irrelevant, only that they are incomplete.

[…] You might think that one-sidedness is actually desirable when your goal is winning rather than discovering a complex and nuanced truth. If this is true, then it's true of every fallacy. If winning is persuading a decision-maker, then any kind of manipulation or deception that actually works is desirable. But in fact, while winning may sometimes be served by one-sidedness, it is usually better served by two-sidedness. If your argument (say) in court is one-sided, then you are likely to be surprised by a strong counter-argument for which you are unprepared. The lesson is to cultivate two-sidedness in your thinking about any issue. Beware of any job that requires you to truncate your own understanding.[11]

Card stacking is a propaganda technique that seeks to manipulate audience perception of an issue by emphasizing one side and repressing another.[12] Such emphasis may be achieved through media bias or the use of one-sided testimonials, or by simply censoring the voices of critics. The technique is commonly used in persuasive speeches by political candidates to discredit their opponents and to make themselves seem more worthy.[13]

The term originates from the magician's gimmick of "stacking the deck", which involves presenting a deck of cards that appears to have been randomly shuffled but which is, in fact, 'stacked' in a specific order. The magician knows the order and is able to control the outcome of the trick. In poker, cards can be stacked so that certain hands are dealt to certain players.[14]

The phenomenon can be applied to any subject and has wide applications. Whenever a broad spectrum of information exists, appearances can be rigged by highlighting some facts and ignoring others. Card stacking can be a tool of advocacy groups or of those groups with specific agendas.[15] For example, an enlistment poster might focus upon an impressive picture, with words such as "travel" and "adventure", while placing the words, "enlist for two to four years" at the bottom in a smaller and less noticeable point size.[16]


References

  1. ^ a b Hansson, Sven Ove (2017). "Science denial as a form of pseudoscience". Studies in History and Philosophy of Science. 63: 39–47. doi:10.1016/j.shpsa.2017.05.002.
  2. ^ Klass, Gary (c. 2008). "Just Plain Data Analysis: Common Statistical Fallacies in Analyses of Social Indicator Data" (PDF). Department of Politics and Government, Illinois State University. statlit.org. Archived from the original (PDF) on March 25, 2014. Retrieved March 25, 2014.
  3. ^ "Fallacies | Internet Encyclopedia of Philosophy".
  4. ^ Goldacre, Ben (2008). Bad Science. HarperCollins Publishers. pp. 97–99. ISBN 978-0-00-728319-4.
  5. ^ Hecht, Jennifer Michael (2003). "Whatever Happened to Zeus and Hera?, 600 BCE–1 CE". Doubt: A History. Harper San Francisco. pp. 9–10. ISBN 0-06-009795-7.
  6. ^ Michel de Montaigne (1877) [First French edition 1580]. "Chapter XI--Of Prognostications". Essays. Translated by Charles Cotton.
  7. ^ a b "Devious deception in displaying data: Cherry picking", Science or Not, April 3, 2012, retrieved 16 February 2015
  8. ^ Novella, Steven (26 April 2011). "A Skeptic In Oz". Science-Based Medicine. Retrieved 16 February 2015.
  9. ^ "Typical Depression Patients Excluded from Drug Trials; exclusion criteria: is it 'cherry pickin'?". The Brown University Psychopharmacology Update. 13 (5). Wiley Periodicals: 1–3. May 2002. ISSN 1068-5308. Based on the studies:
    • Posternak, MA; Zimmerman, M; Keitner, GI; Miller, IW (February 2002). "A reevaluation of the exclusion criteria used in antidepressant efficacy trials". The American Journal of Psychiatry. 159 (2): 191–200. doi:10.1176/appi.ajp.159.2.191. PMID 11823258.
    • Zimmerman, M; Mattia, JI; Posternak, MA (March 2002). "Are subjects in pharmacological treatment trials of depression representative of patients in routine clinical practice?". The American Journal of Psychiatry. 159 (3): 469–73. doi:10.1176/appi.ajp.159.3.469. PMID 11870014.
  10. ^ "One-Sidedness - The Fallacy Files". Retrieved 14 October 2014.
  11. ^ Peter Suber. "The One-Sidedness Fallacy". Retrieved 25 September 2012.
  12. ^ The Fine Art of Propaganda: A Study of Father Coughlin's Speeches. Institute for Propaganda Analysis. Harcourt Brace and Company. 1939. pp. 95–101. Retrieved November 24, 2010.
  13. ^ C. S. Kim, John (1993). The art of creative critical thinking. University Press of America. pp. 317–318. ISBN 9780819188472. Retrieved November 24, 2010.
  14. ^ Ruchlis, Hyman; Sandra Oddo (1990). Clear thinking: a practical introduction. Prometheus Books. pp. 195–196. ISBN 9780879755942. Retrieved November 24, 2010.
  15. ^ James, Walene (1995). Immunization: the reality behind the myth, Volume 3. Greenwood Publishing Group. pp. 193–194. ISBN 9780897893596. Retrieved November 24, 2010.
  16. ^ Shabo, Magedah (2008). Techniques of Propaganda and Persuasion. Prestwick House Inc. pp. 24–29. ISBN 9781580498746. Retrieved November 24, 2010.