Wired Magazine's Editor Serves Up Nonsense on "Demise" of Scientific Method

The internet has caused trouble in science classrooms: with Google and Wikipedia, it's difficult to know whether somebody understands the material or just knows how to copy and paste.  Wired editor Chris Anderson has raised the stakes by claiming that the modern wealth of data renders the entire scientific method obsolete.  Like finding your wife rubbing butter onto a naked clown shaving your dog, there's just so much wrong with that it's hard to know where to begin.

Luckily your friends at The Daily Galaxy are expert b.s. detectors, and this attempt at beyond-the-curve controversy falls at the very first law of rubbish identification: "Does it start by saying that a widely held belief is wrong?"  Oh yes it does, with Mr. Anderson talking about the scientific maxim "correlation is not causation" the same way you'd talk about how doctors used to put leeches on people.

This combines with his second error: the belief that the Internet is the entire world.  This is an easy mistake for somebody like a Wired editor to make, but the fact remains that if you walked down a street shouting "LOLCAT" most people wouldn't know what the hell you were talking about.  This is important.  In fact, a species where everybody knows about LOLCATS is one whose viability needs severe re-evaluation.

The correlation and computer errors combine in his central thesis:  that Google don't care about why or how a site works - if the statistics say it's the one then that's good enough for them.  The unspoken subtext is that if it's good enough for Google, the Gods of Light and the Truth and the Way, it should be good enough for everything.

Of course it isn't.  Google are good with the statistics of huge volumes of data because that's their entire job.  Fish are really good at swimming because they live in the water - that doesn't mean a heart surgeon should shove his face into an open chest cavity and try to locomote forward by wiggling his body and gills.

Noticing a correlation between factors is the START of science, not the end.  When you see that two things affect each other and ask "Why?", you're a scientist.  When you just record a million trials you're an accountant.  When you say "It happens because that's the way things are" you're either a mother answering a five-year-old's fortieth question in a row, or uninterested, or possibly religious.
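This is easy to demonstrate. The following sketch (a hypothetical illustration with made-up numbers, not anything from Anderson's article) correlates one random series against two hundred other random series; search enough variables and an impressive-looking "correlation" appears in pure noise, which is exactly why noticing one is only the start:

```python
import random
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)
n_points, n_series = 20, 200

# One target series and 200 candidate series of pure noise --
# there is no causal structure anywhere in this data.
target = [random.gauss(0, 1) for _ in range(n_points)]
noise = [[random.gauss(0, 1) for _ in range(n_points)] for _ in range(n_series)]

best = max(abs(pearson(target, s)) for s in noise)
print(f"strongest 'correlation' found in pure noise: {best:.2f}")
```

The data-mining approach declares victory here; the scientist asks why, runs a controlled test, and finds there is no why.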

We've taken the liberty of reconstructing Isaac Newton's scientific achievements via the Anderson Method:

Newton: Wow, that apple fell.  I will now drop a million other things and see if they all fall.

[Later] Yes, they all fall!  Objects fall.

Viewer:  So, how can we use that conclusion to progress engineering, technology or science?

Newton:  >Shrugs<  I 'unno.  Things fall though.  Maybe you could try just shoving millions of things together and seeing what happens?

Viewer:  Thanks.  By that strategy we should work out how to make a car by the year three billion.

It's only possible to advance when you understand the whys behind the what happened - how can you get to the moon armed only with the knowledge that things don't go up?  Just observing patterns and saying you're done is what came BEFORE science.

International computing efforts like CancerBusters do "just keep trying things" to see what will work as Anderson describes, but the kinds of things they try and the things they're trying to do were all specified by rigorous scientific method.  Claiming petabytes of data and globally distributed computing will end the scientific method is like claiming power tools ended carpentry.

Posted by Luke McKinney.


Related Galaxy Posts:

Surfing the HyperNet -A Quantum Future Coming Soon?
Quest for Identity in the Digital Village -Daily Video Classic
Internet Going Galactic -To & Beyond
Beyond Google 3: Why a Semantic Web Will Be Smarter, Faster & All-Around Better
Quantum Physics & the Quest for the Perfect Internet
IBM "Cell" Tech Driving Emergence of the 3-D Web

The 1.5 Gigayear Technology Gap
Dr Strangelove Two? -Cambridge Astrophysicist Gives Earthlings a 50/50 Chance of Making it Through the Century

The Final Century -Video Classic
The "Hawking Solution" -Will Saving Humanity Require Leaving the Planet?
Bigger Threat than Global Warming -Mass Species Extinction
Coming of Age in the Holocene
Robots Rising -Scientists are Worried
Ghost Map -Scientists Unlock Secret of 1918 Pandemic

Comments

Mathematical methods of approximation do in fact aid science (look at the progress made studying the human genome), but I would assume that such mathematical methods would be one process within the scientific method rather than a replacement. Good points.

Wikipedia quote from Gaston Bachelard:
"Scientifically, we think the truth as the historical rectification of a long error, and we think experience as the rectification of the common and originary illusion (illusion première)."

The reason the future is in the cloud is that labs and the "scientific" method are untenable for the everyday human, a scientist in his own right.

those sidelined
now those lines can be added to the cloud.

open the workshops, open the labs.

and we shall have music all over the place

history will see this as a dark technological age,
the internet our bastion of hope.

LOL, put three scientists in a room and it's doubtful all three will agree on any topic.

JT
http://www.Ultimate-Anonymity.com

I know an article is good when I scroll all the way down to say: You are the man.

Just observing patterns and saying you're done is what came BEFORE science.
---------------------------

The Intelligent Design crowd hasn't figured that one out yet. The way they are so easily abused by sophist bullshit would be humorous if they weren't so doggedly earnest in their outspoken proselytization of nonsense.

Enjoy.

You've got two things going on here. If you have the resources, equipment, and technology to do it, the rigors of the scientific method are worth pursuing. However one arranges the steps, it is good to establish a claim, determine its testability, apply precision and logic, record information, and then see if someone else can replicate your findings or discover alternative ones. The rest of us, not in the lab, can't do that, but that just means we have to use whatever facilities are available to us. Use your head!

Particularly now with the energy issues at hand, it's good to see attention being paid to research and development. Other than that, just meandering through life listening to other people and saying, "yeah, that's good enough for me" doesn't quite work. Weigh out the options. Do pros and cons. If you believe in something, walk down the road with it a bit and follow it to conclusions.

Another way to test an article for relevance: does the individual criticizing another person's claims provide a link to what they are criticizing? No.

While I will agree the title of the Wired article is bound to raise eyebrows, titles in magazines and newspapers usually do that on purpose. It's called attracting attention, or getting readers. Reading the title isn't enough.

The article is here for those who wish to read it before praising Mr. McKinney: http://www.wired.com/science/discoveries/magazine/16-07/pb_theory
As a fan of the scientific method, I am sure he would be happy to have others review it and see if the same conclusion is reached.

All I get from Mr. Anderson's article is that, because the scientific method depends on testing and verification and our theories are getting beyond what we can test, if there are enough indicators of an interaction between two things, they are most likely linked.

Perhaps the only flaw I see is the idea that what we cannot test for and prove or disprove now will continue to be untestable. But I will agree that the number of scientific theories that need to be tested before the scientific method allows them to become laws is growing. Science hasn't allowed this to slow it down, and has theories based on assuming other theories are true...

I do agree that with enough data you should be able to remove coincidence, leave linked items, and identify cause from effect(s).

Perhaps the biggest problem is that Mr. Anderson fails to recognize that if the data existed, it could be tested. The reason tests are needed is to generate the data we use to confirm or deny. While the 'data' might be all around us, when it is in a form (due to size, or short time span) that prevents us from viewing or obtaining it with the tools at hand, no super-algorithm that Google or anyone else can come up with will be able to parse it.

Since you had your own horrible example of Newton's apple, let me provide my own. We see a clock. We see the hands move, and we can confirm that it keeps good time by checking it against other clocks. Without opening it, we are able to determine if it is digital or mechanical. We cannot determine how all the parts work together to keep time until we open it. If it is mechanical, we may need to remove some parts in order to see the interactions that make it work, but it is rather obvious. If it is digital, we would have to look at circuit pathways and such, making it harder to test whether our understanding of how it works is correct.

Give a digital watch to someone 200 years ago and they could not test it to conclude how it works. They could only see that it keeps time; the rest would be a hypothesis. No amount of data from watching the watch would give them an idea until they could open it and run tests.
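The watch point can be put in code (a toy of my own invention, to be clear): two functions with different internals but identical observable behavior. No volume of input/output observations can distinguish the mechanisms; only opening them up can.

```python
def mechanical_watch(t_seconds):
    """'Gear' model: accumulates minute ticks one at a time."""
    minutes = 0
    for _ in range(t_seconds // 60):
        minutes += 1
    return minutes % 60

def digital_watch(t_seconds):
    """'Circuit' model: computes the reading directly."""
    return (t_seconds // 60) % 60

# Every observation agrees, so behavioral data alone cannot
# tell us which mechanism is inside the case.
assert all(mechanical_watch(t) == digital_watch(t) for t in range(0, 7200, 97))
```

Anderson's data-only stance can tell you the two watches keep the same time; it can never tell you one has gears and the other a chip.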

So yes, you are right, Mr. Anderson is wrong, but not for the rather simple reason you give.

You can massage Anderson's point a bit and it becomes a lot more defensible.

In essence, there are two aspects of "science" you need to disentangle:

(A) the theory-building / understanding part
(B) the predictive / diagnostic part

The smoothed Anderson hypothesis: there are very, very wide areas of inquiry in which it is not unreasonable to expect that the combination of large volumes of data and machine learning algorithms are going to allow the development of highly-accurate tools for (B) without providing any correspondent developments in (A)...and, further, the rate of progress in tools-for-(B) (prediction and diagnosis) may greatly outstrip the rate of progress in tools-for-(A) (theory-building and understanding).

Sample (B) tools that could work without having underlying (A) progress:
- based on a history of my various biomeasures (resting heart rate, caloric intake, blood pressure, etc.), give me accurate probabilities of contracting various diseases; make this interactive, so I can see how various behavior modifications (in the future) change those probabilities.
- based on empirical studies of other organizations and social networks, calculate the odds of various group dynamics emerging when a particular node's behavior is altered
- based on empirical studies of people with similar DNA, calculate my odds of contracting various genetic and non-genetic diseases
...etc

If tools like the above existed and worked reasonably well (e.g., empirically, the predicted probabilities matched the distribution of outcomes under a reasonable choice of evaluation metric), they'd be extremely useful, even if the underlying mechanics were not fully understood.

And that's the smoothed hypothesis: theory-building is very slow and very hard, but machine learning against empirical data is not; it's easy to see a future where there is a great depth of useful predictive tools -- (B) tools -- without any real understanding backing their use, because the ease of making new, useful things in category (B) is much greater than the ease of (A).
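A minimal sketch of such a category-(B) tool, with entirely hypothetical numbers: a nearest-neighbor estimate of risk from similar past cases. It predicts without any mechanistic model of *why* the features matter, which is exactly the (B)-without-(A) point.

```python
import math

def knn_predict(history, query, k=3):
    """Estimate an outcome purely from the k most similar past
    cases -- no theory of how features cause the outcome."""
    nearest = sorted(history, key=lambda rec: math.dist(rec[0], query))[:k]
    return sum(outcome for _, outcome in nearest) / k

# Hypothetical records: (resting_heart_rate, blood_pressure) -> observed risk
history = [
    ((62, 115), 0.05), ((70, 120), 0.10), ((75, 130), 0.20),
    ((88, 145), 0.55), ((95, 160), 0.70), ((99, 170), 0.80),
]

# Risk estimate for a new case, averaged from its 3 nearest neighbors.
print(knn_predict(history, (90, 150)))
```

Whether its outputs are trustworthy is an empirical calibration question, not a theoretical one, which is the whole point of the smoothed hypothesis.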

I have to agree with the comment above. I think most people claiming to understand what science is do not have any kind of historical perspective; the science of 100 years ago borders on outright quackery in many instances, and hundreds of years before that it's even worse. "Science" is not this well-understood thing. One only has to read Paul Feyerabend to know that many scientific discoveries and intuitions came from a wide range of disciplines without the so-called 'method' actually being used.

I think this all has to do with the Cartesian reductionism of Western culture: the idea that you can reduce and isolate a part of the system from the whole to gain better understanding of the whole, while ignoring the idea that the universe is holistic.

http://en.wikipedia.org/wiki/Holism

I think you're right in your assessment that the article was a bit naive. But I don't think your reaction is warranted considering his initial article. That is, I think he was saying it will become increasingly difficult for the meticulous rigors of science to keep up with the enormous amounts of evaluated and unevaluated data.

The scientific method is not dead, but its legitimate (and sometimes illegitimate) authoritative tone is being drowned out by the plethora of arguments on both sides. Simply put, it can get overwhelming for a person without a science background. Even with a degree in ECE, sometimes I don't know what to think when there's enough sound research on both sides. Furthermore, sometimes there is unsound research. In the latter case, this serves to confuse the masses (i.e. Wikipedia).

The internet generation is assuming more data is always good data. But science would reject a lot of that data based simply on how it was derived.

I think it is the job of people like you and me to do our best to make the conclusions of science palatable while still credible. Let's be honest, nobody is going to pick up a lengthy journal article no matter how amazing it is. We should be loyal to science's unbiased nature while still interpreting the conclusions in a meaningful, concise way. Some of the previous comments have done an excellent job of doing so (better than the article itself did).

I really don’t get this entire line of argument. What’s the big deal again? First, Anderson is suggesting that since data sets are extraordinarily large, they are/may be impossible to model. Second, that this dumps the Scientific Method. Third, that ‘knowledge’ or ‘science’ can continue without either models or theories, and that statistical analysis might be sufficient in their place. I think that’s a fair summary, no? McKinney’s stated objection (I think there was really only one that was on point here) is that knowledge requires understanding.

I think all of these points are rather trivial.

Starting with Anderson: data-set size has nothing to do with modeling. I think there are two horns to this argument, and the first horn Anderson is targeting is 'modeling' specifically, but he relies on a weak understanding of some basic truths about scientific modeling, and how poorly models match up to the things being modeled. But in fact, that's what a model is designed to do: to match, or more properly, "map" poorly. It *is* there to capture some refined truth, and at the same time illuminate or highlight that truth by eliminating unnecessary or superfluous data. And it's worth noting that the counterpoint, that complexity entails a rejection of the definition of "superfluous" (everything is relevant), is valid as far as I can tell. That's interesting. But not novel. This "truth" has been known for a very long time. Scientific laws aren't accurate depictions of reality; they're simplifications. Abstractions. And as such, they're false. All abstractions are missing part of the story, and doing so deliberately. This is pretty much the SOP for Science with a capital 'S'. What the second horn of the argument fails on is the assumption that models and people somehow map, that is, that models must be understandable. Which is, of course, not true. Most scientists would argue that models can range in complexity. The more elegant the model, the more basic the truth, is perhaps how they'd argue. But not all systems are amenable to such analysis. "Consciousness" is one of those things. It's hard. There are lots of moving parts. And those parts may not be reducible to chunks that make modeling simple or even comprehensible. Does that make it less of a model?

To Anderson’s second point, that models are required for the Scientific Method, I have not much to add. If you hold that theories specify a class of models and the goal of the enterprise is to determine the truth conditions for the models, fine. Kick out the models and the whole structure collapses. Most would argue that this is a silly interpretation of science. But if this is your straw man, so be it. I would like to propose, however, that complex models might well be beyond our (as humans) ability to grasp in a rather trivial dimension — like, say the sheer number of variables entailed by the model. But again, this doesn’t mean that the model isn’t there, or that the scientific engine is flawed. Just that we are.

To Anderson’s third point, and McKinney’s only point, correlation is not causation. This is a truism. But law and theory do not require a set of necessary and sufficient causes to claim “advance”. While this may not be intuitive, Karl Popper has pretty much ruined the hopes of scientists to uncover The Truth. “Pursuit” of the truth is much more worthwhile simply because it’s doable. But who knows when we’ll ever know The Truth? As both authors acknowledge, Relativity and Newtonian mechanics are not True. They’re “true enough”. And in many cases, statistical correlation may be as close as we’re going to get. Get over it.

What I'd like to say is that Science, while not about Truth, is about uncovering interesting regularities, predicting behavior, and such. Data mining is a very interesting avenue for scientific research, and as Anderson points out, Craig Venter has used it to interesting effect. And so long as the scientific enterprise is willing to be driven in whimsical directions as data-driven revelations are revealed, then great. Have at it. But never assume for a moment that that's all that's happening. As soon as the quest becomes in any way directed, as Venter's did, both theory and modeling come along for the ride. As soon as you say, "huh, that's interesting", you're building theories, developing and testing models; you're doing Science. Just like always. Whoopee.

It has since been realized that the benefit of leeches is that they produce anti-clotting chemicals. They were not used without an actual benefit.

Undigging this article because of its cheap swipe at "the religious."

I think the guy from Wired has a good argument if he emphasizes that computers do not have to form a hypothesis; they just 'gather data indiscriminately'. One does not 'necessarily' have to form a hypothesis in order to come to a 'theory'. Imagine: a computer gathers the facts to support a billion scientific theories, but a computer won't be able to tell you if any of it is truly relevant; i.e., a computer can tell you that it wasn't Lotte who was responsible for the sorrows of the young Werther, but rather 'bad German food'. This is simply irrelevant. Approaching the gathered data with a hypothesis will simply increase your likelihood of finding meaningful data.

Evidence still reigns supreme. If it doesn't work, it ain't science. That's a simple way to look at it.

Science has actually had its fair share of history that shows the scientific method is fairly robust. It kind of complements the internet in its openness. You're not to cloud up any experiment; you are to provide your fellow scientists with the exact experimental details so it can be replicated. It's an OPEN society, with everyone knowing what the others are up to. And there is debate, lots of it. Historically, science has been able to expand beyond what Copernicus or Galileo or even the Greeks/Chinese/others knew, often rejecting previous models entirely. THAT shows robustness: that everyone plays by those rules, that evidence and the experiment are of primary importance. That science can change and transform is not a sign of weakness but of strength.

I don't think we would be leaving the scientific method behind if we still tested things and replicated the process, because that IS the method. And computer modelling, etc., are already used in conjunction with other techniques. So... what's all the fuss about?

Science does not really tell us "why". Science is only about building models. Take gravity. Newton's Universal Law of gravity is a mathematical model that relates the masses of the two objects and their distance apart to the gravitational force between them. It is a useful model in most cases, but it does not tell us WHY there is a gravitational attraction. It is simply a relationship based on evidence. Science does not give us the truth. "If it's truth you are looking for, Dr. Tyree's class is right down the hall".

As a student I can tell you that most don't understand the material, but I at least try to, and when I can't I just go on Wikipedia. I think once colleges stop trying to speed up the learning process and giving us a lot of general ed. classes to take, then maybe we would have time to actually comprehend the material. That isn't an excuse; I major in biology and I understand my biology classes... but physics I'm not even going to bother with.


I'm going to try that LOLCAT experiment tomorrow outside the doughnut shop.
