Defenses of the book, or claims about "what is actually in The Bell Curve as opposed to what people think is in it," often come from conservative think tanks like the American Enterprise Institute: "The Bell Curve Explained".
But even if such assertions of misrepresentation carried weight or were unbiased, the book was published in 1994, well before the psycho-social sciences finally began to acknowledge how sloppy their standards were.[*] And as if that weren't bad enough, the authors never submitted it before publication to the quasi-dysfunctional peer review of that era anyway!
- - - - - -
[*] That is, even once-accepted, non-controversial published papers and literature from prior decades are now potentially suspect. It was the aftershocks of Daryl Bem's work (excerpt below), along with more deliberate exposés by others, that only recently began engendering reforms in psychological research. (Supposedly!)
- Daniel Engber: [...] In 2005, while Bem was still working on his ESP experiments, medical doctor and statistician John Ioannidis published a short but often-cited essay arguing that “most published research findings are false.” [A back-of-the-envelope version of that argument appears after this excerpt.] Among the major sources of this problem, according to Ioannidis, was that researchers gave themselves too much flexibility in designing and analyzing experiments—that is, they might be trying lots of different methods and reporting only the “best” results.
Bem’s colleagues in psychology had, for their part, been engaged in methodological debates for decades, with many pointing out that sample sizes were far too small, that treatments of statistics could be quite misleading, and that researchers often conjured their hypotheses after collecting all their data. And every once in a while, someone would bemoan the lack of replications in the research literature. [...] Even by the mid-2000s, the darker implications of these warnings hadn’t really broken through. Certain papers might be sloppy or even spurious, but major swaths of published work? Only Chicken Little types would go that far. “You felt so alone. You knew something was wrong, but nobody was listening,” says Uli Schimmack, a psychologist at the University of Toronto Mississauga and something of a Chicken Little. “I felt very depressed until the Bem paper came out.”
[...] These dodgy methods were clearly rife in academic science. A 2011 survey of more than 2,000 university psychologists had found that more than half of those researchers admitted using them. But how badly could they really screw things up? By running 15,000 simulations, Simmons, Nelson, and Simonsohn showed that a researcher could almost double her false-positive rate (often treated as if it were 5 percent) with just a single, seemingly innocuous manipulation. And if a researcher combined several questionable (but common) research practices—fiddling with the sample size and choosing among dependent variables after the fact, for instance—the false-positive rate might soar to more than 60 percent. [A simulation sketch in this spirit also follows the excerpt.]
“It wasn’t until we ran those simulations that we understood how much these things mattered,” Nelson said. “You could have an entire field that is trying to be noble, actively generating false findings.” To underline their point, Nelson and the others ran their own dummy experiment to show how easy it could be to gin up a totally impossible result. [...]
Wagenmakers would later write that psi researchers such as Bem deserve “substantial credit” for the current state of introspection in psychology, as well as for the crisis of confidence that is now spreading into other areas of study. “It is their work that has helped convince other researchers that the academic system is broken,” he said, “for if our standard scientific methods allow one to prove the impossible, then these methods are surely up for revision.” --Daryl Bem Proved ESP Is Real. Which Means Science Is Broken
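To make Ioannidis' headline claim concrete: his essay turns on a simple positive-predictive-value calculation, PPV = R(1-beta) / (R(1-beta) + alpha), where R is the prior odds that a tested effect is real, 1-beta is statistical power, and alpha is the significance threshold. The numbers in the snippet below are my own illustrative assumptions, not figures from the essay; the point is just that with low prior odds and modest power, half or more of "significant" findings can be false even before any flexible analysis makes things worse.

```python
# Back-of-the-envelope version of Ioannidis (2005). All three numbers
# below are illustrative assumptions, not values taken from the essay.
alpha = 0.05       # nominal false-positive rate per test
power = 0.50       # 1 - beta; modest power is common in these fields
prior_odds = 0.10  # R: roughly one real effect per ten hypotheses tested

# Positive predictive value: P(effect is real | test came out significant).
ppv = (prior_odds * power) / (prior_odds * power + alpha)
print(f"P(real | significant) = {ppv:.2f}")  # -> 0.50: half are false
```

And if flexible analysis quietly raises the effective alpha, as the next sketch shows, the fraction of true positives drops further still.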
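Here is a minimal sketch in the spirit of the Simmons-Nelson-Simonsohn simulations quoted above (not their actual code) showing how two common questionable practices, choosing among dependent variables after the fact and adding subjects when the first test "fails", inflate the false-positive rate under a true null effect. The sample sizes, the number of added subjects, and the 0.5 correlation between the two dependent variables are assumed values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N_SIMS = 15_000   # matches the 15,000 simulations quoted above
N_START = 20      # subjects per group before peeking (assumed value)
N_EXTRA = 10      # extra subjects added per group if the test "fails"
COV = [[1.0, 0.5], [0.5, 1.0]]  # two DVs, correlated 0.5, no group effect
ALPHA = 0.05

def p_value(a, b):
    """Two-sample t-test p-value; groups never actually differ here."""
    return stats.ttest_ind(a, b).pvalue

def all_p_values(g1, g2):
    """P-hacker's menu: test each DV separately, then their average."""
    ps = [p_value(g1[:, i], g2[:, i]) for i in (0, 1)]
    ps.append(p_value(g1.mean(axis=1), g2.mean(axis=1)))
    return ps

honest_hits = 0
hacked_hits = 0
for _ in range(N_SIMS):
    g1 = rng.multivariate_normal([0, 0], COV, size=N_START)
    g2 = rng.multivariate_normal([0, 0], COV, size=N_START)

    # Honest analysis: one pre-specified DV, fixed sample size.
    honest_hits += p_value(g1[:, 0], g2[:, 0]) < ALPHA

    # Hacked analysis: shop among DVs; if nothing is significant,
    # collect more subjects and try the whole menu again.
    ps = all_p_values(g1, g2)
    if min(ps) >= ALPHA:
        g1 = np.vstack([g1, rng.multivariate_normal([0, 0], COV, size=N_EXTRA)])
        g2 = np.vstack([g2, rng.multivariate_normal([0, 0], COV, size=N_EXTRA)])
        ps = all_p_values(g1, g2)
    hacked_hits += min(ps) < ALPHA

print(f"honest false-positive rate: {honest_hits / N_SIMS:.3f}")  # near 0.05
print(f"hacked false-positive rate: {hacked_hits / N_SIMS:.3f}")  # well above
```

With only these two practices the inflated rate lands well above the nominal 5 percent; stacking more of them, as the authors did across their 15,000 simulations, is what pushes the rate toward the 60 percent figure quoted above.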