New Study Shows Statisticians Can Prove Almost Anything

This forum is for non-aviation related topics, political debate, random thoughts, and everything else that just doesn't seem to fit in the normal forums. ALL FORUM RULES STILL APPLY.

Moderators: sky's the limit, sepia, Sulako, lilfssister, North Shore

Colonel Sanders
Top Poster
Posts: 7512
Joined: Sun Jun 14, 2009 5:17 pm
Location: Over Macho Grande

New Study Shows Statisticians Can Prove Almost Anything

Post by Colonel Sanders »

I know that starting new threads in Misc is akin to masturbating in public - it might feel good, but most people look away, embarrassed - but I just couldn't pass up on this gem:
Catchy headlines about the latest counter-intuitive discovery in human psychology have a special place in journalism, offering a quirky distraction from the horrors of war and crime, the tedium of politics and the drudgery of economics.

But even as readers smirk over the latest gee whizzery about human nature, it is generally assumed that behind the headlines, in the peer-reviewed pages of academia, most scientists are engaged in sober analysis of rigorously gathered data, and that this leads them reliably to the truth.

Not so, says a new report in the journal Psychological Science, which claims to show “how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis.”

In “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant,” two scientists from the Wharton School of Business at the University of Pennsylvania, and a colleague from Berkeley, argue that modern academic psychologists have so much flexibility with numbers that they can literally prove anything.

In effect turning the weapons of statistical analysis against their own side, the trio managed to prove something demonstrably false, and thereby cast a wide shadow of doubt on any researcher who claims his findings are “statistically significant.”

In “many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not,” they write.

Defined as “the incorrect rejection of a null hypothesis,” a false positive is “perhaps the most costly error” a scientist can make, they write, in part because such errors are “particularly persistent” in the literature.

False positives also waste resources, and “inspire investment in fruitless research programs and can lead to ineffective policy changes.” Finally, they argue, a field known for publishing false positives risks losing its credibility.

Psychology, especially the branch of social psychology that merges with economics, is particularly sensitive to this criticism. It is a field in which reputations can be made with a single mention on the Freakonomics blog, and book deals signed based on single headlines.

One example of this trend is described in the December issue of The Atlantic magazine, in which Daniel B. Klein, a libertarian economist at George Mason University in Virginia, retracts the claim he made last year in the Wall Street Journal that left-wingers do not understand economics.

As quirky headlines go, it is hard to imagine a better one for the conservative Wall Street Journal than “Study Shows Left Wing Wrong About Economy.” (In fact, the headline was “Are You Smarter Than A Fifth Grader?”, which Klein acknowledges carried the implication that left-wingers are not.)

Citing his own “myside bias,” otherwise known as confirmation bias, or the tendency to favour ideas that fit with one’s settled positions, Prof. Klein now admits that, according to the data he used, the ignorance he attributed to the left is also true of the right, and so the headline should have been less dramatic, something closer to “Nobody Understands Economics: Study.”

The problem, as Prof. Klein puts it, was the hidden bias in his own use of the data, and in the decisions he made about how to analyze it.

These decisions about data use are not usually made in advance of the research, based on rigid principles, according to the authors of the Psychological Science paper. Rather, they are dealt with as they arise, and it is common and accepted practice “to explore various analytical alternatives, to search for a combination that yields ‘statistical significance,’ and then to report only what ‘worked.’ ”

The authors — Joseph P. Simmons, Leif D. Nelson and Uri Simonsohn — describe this flexibility as “researcher degrees of freedom,” and suggest that too much of it leads to bias at best, and nonsense at worst.
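To make that concrete, here is a minimal simulation of the paper's core complaint, assuming nothing more than null data and a researcher willing to try several analyses and report whichever one “worked.” The three analysis choices and all parameters below are invented for illustration; this is not the authors' code.

```python
# Sketch of "researcher degrees of freedom": with no true effect, flexibly
# choosing among analyses inflates the false-positive rate past the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_group = 5000, 20
hits = 0

for _ in range(n_sims):
    # Two correlated dependent measures per subject, identical across groups,
    # so the null hypothesis is true by construction.
    cov = [[1.0, 0.5], [0.5, 1.0]]
    a = rng.multivariate_normal([0, 0], cov, n_per_group)
    b = rng.multivariate_normal([0, 0], cov, n_per_group)

    # Three analysis choices: DV1 alone, DV2 alone, or their average.
    pvals = [
        stats.ttest_ind(a[:, 0], b[:, 0]).pvalue,
        stats.ttest_ind(a[:, 1], b[:, 1]).pvalue,
        stats.ttest_ind(a.mean(axis=1), b.mean(axis=1)).pvalue,
    ]
    # Report "whatever worked": call it significant if any analysis clears .05.
    hits += min(pvals) < 0.05

print(f"False-positive rate with flexible analysis: {hits / n_sims:.3f}")
# Roughly double the nominal 0.05 with these settings.
```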

As a remedy, they offer a series of proposed guidelines for researchers and reviewers, but it was their somewhat cheeky experiment that brought the problem into the starkest relief.

As ever in social psychology, the experiment began with a room full of undergraduate guinea pigs, in this case paid for their attendance at a lab at the University of Pennsylvania. In the first of two separate trials, 30 students listened on headphones to one of two songs: either Kalimba, “an instrumental song by Mr. Scruff that comes free with the Windows 7 operating system,” or Hot Potato, performed by the children’s band The Wiggles.

Afterwards, they were asked to fill out a survey including the question, “How old do you feel right now?” (very young, young, neither young nor old, old, or very old). They were also asked their father’s age, which allowed the researchers to control for variation in baseline age across participants.

Using a common statistical tool known as analysis of covariance, or ANCOVA, which compares group averages on an outcome while adjusting for a continuous background variable (here, father’s age), the authors were able to show that, on average, listening to the children’s song made people feel older than listening to the control song.
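For anyone curious what that ANCOVA step actually looks like, here is a minimal sketch in Python using statsmodels. The data, column names, and effect sizes are all invented; it only shows the shape of the analysis, not the study's actual numbers.

```python
# Minimal ANCOVA sketch: test a group effect (song) on an outcome while
# controlling for a covariate (father's age). Data and names are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 30
df = pd.DataFrame({
    "song": rng.choice(["kalimba", "hot_potato"], size=n),
    "father_age": rng.normal(55, 6, size=n).round(),
})
# Fake outcome: felt age depends only on the covariate, not on the song.
df["felt_age"] = 0.3 * df["father_age"] + rng.normal(0, 3, size=n)

# ANCOVA expressed as a linear model: outcome ~ group + covariate.
model = smf.ols("felt_age ~ C(song) + father_age", data=df).fit()
print(model.summary().tables[1])  # the C(song) coefficient is the group effect
```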

A second experiment aimed to extend these results with a song about getting old, When I’m Sixty-Four, by the Beatles, with Kalimba again as the control song. But this time, instead of being asked how old they felt, they were asked for their actual birthdate, which allowed precise calculation of their age.

An ANCOVA analysis, controlling for their father’s age, showed a statistically significant but logically impossible effect: listening to When I’m Sixty-Four made people 16 months younger than listening to Kalimba.

Listening to a song obviously has no bearing on how old you actually are. This nonsensical result, they argue, was merely an artifact of flawed analysis within a scientific culture that permits all kinds of relevant details to be excluded from the final publication.

Under their proposed guidelines, though not under current accepted scientific practices, the authors would have been required to disclose that they in fact asked participants many other questions, and did not decide in advance when to stop collecting data, which can skew results. They also would have been obliged to disclose that, without controlling for father’s age, there was no significant effect, and the experiment was more or less a bust.
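The “did not decide in advance when to stop collecting data” point is worth seeing numerically. Below is a sketch of that practice, usually called optional stopping: peek at the p-value as subjects trickle in and stop the moment it dips below .05. The sample sizes and batch size are invented for illustration.

```python
# Sketch of optional stopping: keep testing as data accumulates and stop as
# soon as p < .05. Even with no true effect, this "succeeds" far more than
# 5% of the time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sims, n_min, n_max, batch = 5000, 10, 50, 5
false_positives = 0

for _ in range(n_sims):
    a = list(rng.normal(0, 1, n_min))
    b = list(rng.normal(0, 1, n_min))
    while True:
        if stats.ttest_ind(a, b).pvalue < 0.05:
            false_positives += 1           # stopped early, reported "an effect"
            break
        if len(a) >= n_max:
            break                          # gave up; the experiment is a bust
        a.extend(rng.normal(0, 1, batch))  # collect a few more subjects
        b.extend(rng.normal(0, 1, batch))

print(f"False-positive rate with optional stopping: {false_positives / n_sims:.3f}")
# Typically two to three times the nominal 0.05 with these settings.
```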

“Our goal as scientists is not to publish as many articles as we can, but to discover and disseminate truth,” they write. “We should embrace these [proposed rules about disclosing research methods] as if the credibility of our profession depended on them. Because it does.”
cgzro
Rank (9)
Posts: 1735
Joined: Wed Jul 25, 2007 7:45 am

Re: New Study Shows Statisticians Can Prove Almost Anything

Post by cgzro »

Yup. You should post that on Climate Audit. They have been pointing that out for 10 years now.

Recently I was helping somebody with an economics question related to regression.
The question plotted x and y in a scatter, and the data was basically a shotgun-style circular plot. It was a multiple-choice question, with one of the options being d) no correlation.
Well, I helped her do the regression, showed her the crappy correlation, then pointed out visually that it was a waste of time.

Well, we got the "wrong" answer, but what do I know about it :) And judging by the R2 stats that get published as proof, and the practice of throwing out data that makes the correlation worse, much of my mathematical training was wrong!
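That shotgun scatter is easy to reproduce with invented data, as in the quick sketch below: fit a line through uncorrelated points and the regression dutifully returns a slope, but R2 stays near zero, which is the statistic agreeing with option d).

```python
# Quick check of the "shotgun" scatter: uncorrelated x and y give an R^2
# near zero, i.e. the fitted line explains essentially nothing.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0, 1, 200)
y = rng.normal(0, 1, 200)               # independent of x by construction

r = np.corrcoef(x, y)[0, 1]             # Pearson correlation
slope, intercept = np.polyfit(x, y, 1)  # least-squares line still "fits"
print(f"r = {r:.3f}, R^2 = {r**2:.3f}, slope = {slope:.3f}")
# A line always comes out of the fit; R^2 is what says it means nothing.
```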

Welcome to new age statistics.
Expat
Rank 10
Posts: 2383
Joined: Sat Jan 29, 2005 3:58 am
Location: Central Asia

Re: New Study Shows Statisticians Can Prove Almost Anything

Post by Expat »

90% of the dentists surveyed recommend using a toothbrush and toothpaste for brushing your teeth. :lol:
Success in life is when the cognac that you drink is older than the women you drink it with.
Siddley Hawker
Rank 11
Posts: 3353
Joined: Tue Aug 10, 2004 6:56 pm
Location: 50.13N 66.17W

Re: New Study Shows Statisticians Can Prove Almost Anything

Post by Siddley Hawker »

I once saw a statistical survey that concluded 78% of respondents required the use of both hands and a funnel to find their anuses. The remaining 22% required only a flashlight. I remember thinking that seemed about right.
AOW
Rank 6
Posts: 465
Joined: Fri May 20, 2005 2:23 pm

Re: New Study Shows Statisticians Can Prove Almost Anything

Post by AOW »

For those actually interested in the study, the full text of the journal article is available here.

On a related note, it has recently been determined that research causes cancer in lab rats.
bmc
Rank 11
Posts: 4014
Joined: Tue May 16, 2006 10:06 pm
Location: Switzerland

Re: New Study Shows Statisticians Can Prove Almost Anything

Post by bmc »

Apparently 78.2% of all statistics are made up.
