In my ongoing effort to better understand how we reconcile the creative tension between subjective and objective measures of the world — including our thus-far elusive search for a better way of tracking how people learn — I took note of a recent New Yorker article that casts light on some emerging problems with the ostensible foundation of all objective research: the scientific method.

In the article, author Jonah Lehrer highlights a score of multiyear studies — ranging from the pharmaceutical to the psychological — in which core findings changed dramatically over time. Drugs once hailed as breakthroughs showed a dramatic decrease in effectiveness. Groundbreaking insights about memory and language turned out not to be so replicable after all. And the emergence of a new truth in modern science — the “decline effect” — cast doubt on the purely objective foundation of science itself.

Without recounting the article in its entirety, there are several insights of great relevance to those of us seeking a better way of helping children learn:

  • In the scientific community, publication bias has been revealed as a very real danger: in one analysis, 97% of psychology studies confirmed their hypotheses, meaning researchers were either extraordinarily lucky or publishing only the outcomes of successful experiments. The lesson seems clear: if we’re not careful, our well-intentioned search for answers may lead us to overvalue the data that tell us what we want to hear. In the education community, how does this insight bear on our own efforts, which place great emphasis on greater accountability and measurement, yet do so by glossing over a core issue — the individual learning process — that is notoriously mercurial, nonlinear, and discrete?
  • In the scientific community, a growing chorus of voices worries about the current obsession with “replicability,” which, as one scientist put it, “distracts from the real problem, which is faulty design.” In the education community, are we doing something similar — is our own obsession with replicability leading us to embrace “miracle cures” long before we have fully diagnosed the problem we are trying to address?
  • In the scientific community, Lehrer writes, the “decline effect” is so gnawing “because it reminds us how difficult it is to prove anything.” If these sorts of challenges are confronting the scientific community, how will we in the education community respond? To what extent are we willing to acknowledge that weights and measures are both important — and insufficient? And to what extent are we willing to admit that when the reports are finished and the PowerPoint presentations conclude, we still have to choose what we believe?