Historian of biology Michael Barton offers this thought after reviewing bits of Discovery Institute's new curriculum Discovering Intelligent Design: "It’s no wonder that some have dubbed the Discovery Institute the Dishonesty Institute." Because a few commenters on my blog pull out the "dishonesty" card on me now and then, I thought it would be worth our while here to pursue what Barton thought was dishonest about Discovery Institute's critique of certain aspects of evolutionary biology (and its promotion of intelligent design as a better alternative). My colleague at Discovery Institute, Casey Luskin, has provided a thorough response to Barton's dishonesty accusations. I agree with Casey's final sentence (ditto regarding the charges against me by some CP Blog readers): "I tell you, the things for which we get called 'dishonest' never cease to amaze me."
Here is Barton's opening paragraph.
One would perhaps think that after being shown on multiple occasions that a quote they decided to cherry pick from a historical figure’s work in fact does not convey what they want that figure to have said in the past, said cherry picker would decide to stop using that quote in a vain attempt to discredit that historical figure. The tactic of quote-mining Charles Darwin is something I’ve posted a lot about before, and it continues to astound me that creationists – no, sorry, intelligent design advocates – no, wait, yes, creationists – time and time again slap history in its face. But that’s how creationists work: they say something they think supports their view, and will never reconsider even in the face of evidence against it.
Note that he rates himself highly in the category of recognizing quotations taken out of context (for dishonest purposes). Luskin notes Barton's accusations and replies, beginning with this:
The main offense we committed is this: We quoted Darwin correctly as having said "If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down," but we failed to note that Darwin then says "But I can find out no such case." The anonymous critic then launches into the usual personal attacks, calling us "the Dishonesty Institute" etc. Evidently, the critic hasn't read Discovering Intelligent Design carefully. He ignores the fact that immediately after quoting that passage from Origin of Species, Discovering Intelligent Design explicitly notes that Darwin said he could find no such case. [Read more here.]
In a recent brief essay on a related topic, "When Good Science Turns Bad: Some Potentials for Ugly Distortion," we find this jolting introduction:
The layman tends to think of “science” as some kind of uniform knowledge-producing machine: pour ingredients into the scientific method, and out come objective, scientific facts. Real science is much more human, sometimes all too human. Some of the most hazardous biases in science are those the scientists are not even aware of. A couple of examples surfaced in recent journal articles. Both have the potential to either generate incorrect conclusions, or to distort the image of “good science.”
This essay makes some very useful remarks about the “journal impact factor,” which, according to Science Insider, a policy blog from Science Magazine, was a nice idea that turned rogue:
Journal impact factors, calculated by the company Thomson Reuters, were first developed in the 1950s to help libraries decide which journals to order. Yet, impact factors are now widely used to assess the performance of individuals and research institutions. The metric "has become an obsession" that "warp[s] the way that research is conducted, reported, and funded," said a group of scientists organized by the American Society for Cell Biology (ASCB) in a press release. (Emphasis added.)
After covering this topic and showing its relevance to the controversy over evolution and intelligent design, the essay moves on to another related topic:
Another potential “distorter” of science is reliance on black-box software. Modeling in science is frequently done by software these days, but software is not peer reviewed. Who checks the validity of software? Who knows if the pretty charts and graphs it puts on the screen tell the truth?
This problem was discussed in another paper in Science, “Troubling Trends in Scientific Software Use.” A team of six primarily from Microsoft Research decided to take a look at how scientists employ software, and found out something troubling indeed: it is very unscientific! Many latch onto particular modeling tools like teens sharing apps.
How is this relevant to Darwinism?
Another field highly dependent on models is Darwinian tree-making. Phylogenetics is rife with models that try to crunch vast amounts of data into simple diagrams that supposedly show ancestral relationships between plants and animals.
Those who know the literature may be aware of algorithmic issues such as long-branch attraction that can distort results, even without the software. But who ensures that the software even implements the algorithms correctly? Along the modeler’s way, various shortcuts or compromises like “rate heterogeneity” can force uncooperative data to fit expectations. The potential for self-deception is huge.
To read more about these topics, and to consider how they are related to honesty in science, go here.