Some people learn best by doing, right? Others have a visual memory, and it’s important for them to see something depicted if they want to remember it. Then there are those who learn most effectively through reading and writing, and another group takes on new ideas best if they hear them.
This idea of different “learning styles” is accepted by a huge proportion of the public. But there’s one major problem–there’s no evidence that it’s true.
In a 2008 paper, four psychologists reviewed every study ever conducted on learning styles, dating all the way back to the 1920s. They found loads of evidence that both kids and adults will, if asked, express a preference about how they like information to be presented to them. They also found that some people are demonstrably better than others at processing different kinds of information.
What they didn’t find was any evidence that those two truths interact–that an instructional method that works well for students with one learning preference works any less well for students with a different one. While there have been a lot of studies on learning styles, only a handful were designed rigorously enough to test their validity in the classroom. Of that handful, several contradicted this accepted wisdom about how we learn best.
It’s important to note that a lack of evidence for something is not the same as actively disproving it. But it should also be said that disproving learning styles would require a far larger body of evidence, and may even be impossible. “At present, there is no adequate evidence base to justify incorporating learning-styles assessments into general educational practice,” the psychologists wrote in 2008. “Limited education resources would better be devoted to adopting other educational practices that have a strong evidence base, of which there are an increasing number.”
The theory of learning styles isn’t the only flawed belief we hold about the ways we learn. In fact, there’s a whole laundry list of “neuromyths,” and some are more insidious than others.
Let’s start with the persistent myth of humans only using 10 percent of our brains. You can probably guess that this one just isn’t true–it’s unlikely that evolution and/or God would have provided us with a bodily organ that’s 90 percent useless. In reality, we use almost every part of the brain over a 24-hour period, though small groups of neurons go through constant cycles of being dormant and active. “Evidence would show over a day you use 100 percent of the brain,” John Henley, a neurologist at the Mayo Clinic, told Scientific American in 2008.
Then there’s the one about people being “left-brained” or “right-brained” based on their personality and cognitive style. Left-brained people, it’s believed, have a more logical and methodical approach, while right-brained people are more creative and intuitive.
But in 2013 a team of neuroscientists investigated whether this assessment had merit by looking at MRI brain images from more than a thousand volunteers between the ages of 7 and 29. They found that while certain networks of neurons tended to be stronger on either the left or right hemisphere of individual brains, that side preference didn’t hold true for the entire brain. “Our data are not consistent with a whole-brain phenotype of greater ‘left-brained’ or greater ‘right-brained’ network strength across individuals,” they concluded.
These types of myths might be easy to debunk, but other fallacies are more deeply ingrained in our education systems and harder to root out. Take early childhood education, for example. The human brain grows so much in a child’s first five years that it would seem obvious that preschool programs would have a huge effect on cognitive development. Except that meta-studies show that by the age of 8 it’s almost impossible to tell which children had preschool education and which didn’t. So while your little darling might seem to be enjoying those early days in the classroom, the experience has no detectable long-term effect on his or her gray matter.
Similarly, much has been said about the importance of play in child development. But in 2013 a group of psychologists reviewed 40 years of studies before writing: “Our take-away message is that existing evidence does not support strong causal claims about the unique importance of pretend play for development, and that much more and better research is essential for clarifying its possible role.” It’s entirely plausible that play is merely one of many routes to development, or perhaps a byproduct of the processes that actually drive it.
The concept of “digital natives” is pretty questionable, too–the idea that kids who have grown up with the web have somehow developed the ability to do many different things at the same time in a way that their parents can’t. In actuality, studies show that today’s university students use a very limited range of largely established services (Google, Facebook, Wikipedia, etc.) for both learning and socializing, and don’t have an especially deep knowledge of technology. As for multitasking, kids have become practiced at it, sure, but they still suffer the exact same cognitive setbacks that non-“digital natives” do when trying to do several things at once.
“There is overwhelming evidence that [digital natives] do not exist,” wrote psychologists Paul A. Kirschner and Jeroen J.G. van Merriënboer in a study of urban legends in education in 2013. “They are not capable of doing that with modern technologies which is ascribed to their repertoire,” they said, and “they actually may ‘suffer’ if teaching and education tries to play into these so-called abilities to relate to, work with, and control their own learning in multimedia and digitally pervasive environments.”
When it comes to the classroom, perhaps the most sinister practice of all is medicating students who don’t perform well. In a 2015 review titled “What Doesn’t Work in Education: The Politics of Distraction,” John Hattie wrote: “There has been a major increase in the number of children who come to school each day pre-labeled. In my own state, Victoria, the incidence of autism and Asperger’s has increased 340 per cent in the past three years.”
He continued: “Although diagnostic tests may have improved, it is hard to believe that these major increases in incidence are real. One potential reason for the increase might be parents’ (and teachers’) desire to seek an explanation for ‘unusual’ behaviours and the medical and pharmaceutical professions’ ready provision of answers (and drugs). Another potential reason for the spike might be the extra funding that is tied to students who are labelled as autistic.”
Hattie was very clear not to claim that ADHD and autism aren’t real; they are, he said. “Instead, I believe that the massive increase in the frequency of these labels points to a potential cultural problem: students are being diagnosed and labelled primarily for financial and accountability reasons rather than for the enactment of appropriate educational interventions.”
These educational myths are not an insignificant problem–they affect teachers just as much as they do the general public. In a 2012 study, 242 teachers in the United Kingdom and the Netherlands believed, on average, nearly half of the “neuromyths” the researchers presented to them. They were especially likely to believe myths linked to commercialized education programs: the California nonprofit Brain Gym, for example, which promotes physical exercises that it claims, on entirely pseudoscientific grounds, improve children’s ability to learn, or the VARK program’s promotion of learning styles.
“These myths persist because they spread easily, offer alluring explanations, and simple, practical solutions,” said Harry Fletcher-Wood, an education researcher at the Institute for Teaching in London. “They spread easily because they are relatively simple–albeit dressed to impress in pseudoscientific explanations.”
In 2014, Stanford’s Jack Schneider wrote From the Ivory Tower to the Schoolhouse: How Scholarship Becomes Common Knowledge in Education, a book aiming to help scientists spread evidence-based strategies in education. In it, Schneider lists four factors that any idea must have if teachers are going to notice, accept, use, and share it. It’s clear, however, that these same factors are just as good at spreading pseudoscience.
The first factor asks whether the idea is relevant to something teachers experience, and whether there appears to be evidence to back it up. Most of the educational myths that persist deal with situations teachers come across a lot, and they’re wrapped in enough neuroscience to sound plausible to anyone who hasn’t studied the subject in depth.
The second factor, acceptance, means the idea must be compatible with teachers’ inner values. Many educators like to believe they can find creative methods for teaching their students even inside the rigid, one-size-fits-all system they grew up with–so the more an idea sounds like it can be personalized to a student, the more likely it is to be looked upon favorably.
The third, usage, looks at how easily an idea can be implemented in the classroom. It’s fairly simple to create a lesson that takes learning styles, the left brain/right brain myth, or the importance of play into account, for example. And while it’s harder for an individual teacher to spread the idea that preschool education is vital or that kids who aren’t performing well in the classroom may have mental health issues, these concepts take hold at a higher level among those setting education and health policy.
The fourth and final factor is how spreadable the idea is. Does it require years of training to learn, or can it be picked up in a half-hour conference session? Ideas that fit the latter description are much more likely to go viral, for an obvious reason–they’re easy to communicate.
“What we are dealing with here is a very popular and very persistent pseudoscience, which jeopardizes both the quality of education and the credibility of the educational sciences,” said Kirschner and van Merriënboer. “There is the risk that we are entering a downward spiral: The popularity of urban legends paints the educational sciences as a mumbo-jumbo science, which in turn makes it increasingly difficult to move valuable innovations in the field of education into practice.”
Fletcher-Wood added: “[These myths] offer alluring models which seem to explain much of what we see. And they offer simple solutions: Kids aren’t concentrating–give them a tablet! [Suddenly] they’re digital natives!” Unfortunately, these falsehoods will “remain remarkably stubborn,” he said, “because people tend to discount new information which contradicts their existing beliefs.”
Some researchers are more optimistic than others about whether it’s ultimately possible to chase out these misconceptions for good. Kirschner and van Merriënboer are not hopeful. “The step from legend-based education based on pseudoscience to evidence-based education based on science demands a quantum leap,” they wrote. “Rather than a quick change in research methodologies or objects of study, it requires a fundamental change in scientific attitude.”
But in a 2012 article in the journal Frontiers in Psychology, Sanne Dekker, Nikki Lee, Paul Howard-Jones and Jelle Jolles describe how work is already beginning to establish effective methods for chasing out these misbeliefs.
“Such intervention studies should be performed according to the principles and approach of evidence-based or evidence-informed practice. This could yield valuable information for the prevention of myths in the future and for the development of valid educational innovations,” they said.
Fletcher-Wood sketches what some solutions might look like in the education system. “The first is raising the general level of research literacy,” he says. “Helping people to spot the difference between a randomized controlled trial and opinion based on a handful of surveys. This may sound obvious, but pseudo-experts and the media can both be guilty of promoting work as ‘research’ which does not meet basic quality guidelines.”
The second is the “meme-ification” of research, an idea that will no doubt strike fear into the hearts of teachers around the world. “The Learning Scientists’ blogs and posters are an interesting way of trying to share complicated but true research findings in an easy and accessible way,” says Fletcher-Wood. “We can’t expect everyone to spend their evenings reading peer-reviewed papers; we can present genuine research more conveniently. This brings its own problems–research as meme wars–but it makes us no worse off than we were previously. The other solution is to ensure that those who’ve read around neuromyths combat these ideas humbly but persistently.”
Ultimately, the research shows that teachers are interested in learning about the brain and its role in learning. That’s encouraging, wrote Dekker and her colleagues in their 2012 editorial, adding: “Although the integration of neuroscience in educational practice remains challenging, joint efforts of scientists and practitioners may pave the way toward a successful collaboration between the two fields.”
How We Get To Next was a magazine that explored the future of science, technology, and culture from 2014 to 2019. This article is part of our How We Learn section, on the future of education.