Does the quality of the science really matter?

Lumos Labs, who make the game Lumosity, were fined $2m because they did not have sufficient scientific evidence to back up their brain-training claims. How well would neuromarketing stand up to similar scrutiny?

Lumos Labs, who make the game Lumosity, were fined $2m because they did not have sufficient scientific evidence to back up their claim that playing the game provided brain health benefits such as performing better at work and protecting against cognitive decline.

“Lumosity preyed on consumers’ fears about age-related cognitive decline, suggesting their games could stave off memory loss, dementia and even Alzheimer’s disease. But Lumosity simply did not have the science to back up its ads,” Jessica Rich, director of the US Federal Trade Commission’s bureau of consumer protection, said in a statement.

The FTC was specifically targeting companies developing medical apps with dubious claims of health benefits, but could this ruling have broader implications? If governments start reviewing the scientific evidence behind marketing claims, how well will neuromarketing stand up?

There are certainly high-profile examples of neuroscientists themselves challenging the validity of neuromarketing. For instance, Martin Lindstrom’s claim that you love your iPhone was roundly criticized because the science was simply incorrect. Lindstrom found that people exposed to audio or video examples of iPhones showed activation of their insula cortex, a part of the brain he claimed was activated by being in love. Prof. Russ Poldrack, a Stanford neuroscientist, took exception to this in a post entitled “NYT OpEd + fMRI = complete crap”. Poldrack pointed out that the insula is active in roughly a third of all brain imaging experiments but, remarkably, was not active in some of the most famous studies of love. More generally, he notes that trying to infer mental states from brain activation is a difficult problem (known as reverse inference) and that informal examples such as this are deeply flawed.

A similar example comes from an advertisement that claimed to investigate whether “the visceral experience of flying a fighter jet could ever be matched by a Porsche?” In the video, a volunteer is wired to an electroencephalography (EEG) system to measure his brain activity. According to the “scientist” in the video, their results showed the nucleus accumbens releasing massive amounts of dopamine both when making corkscrew turns in the jet and left turns in the Porsche, indicating that the volunteer’s experience was similar in both cases. EEG, however, doesn’t measure dopamine release – it measures electrical activity – so this claim is patently false. In fact, there were many dodgy aspects to the “science” in this ad which were ably dissected by the blogger NeuroBollocks.

My point here is not to criticize egregious examples of consumer neuroscience but rather to raise a more general question: how would any neuromarketing campaign fare when its scientific evidence is subject to external scrutiny? As part of the court settlement, Lumos Labs were ordered to have “competent and reliable scientific evidence” to support their marketing claims. I suspect consumers would agree that this is a reasonable standard, but if your company commissioned a neuromarketing project, how could you be sure you were steering clear of false advertising?

Being fined for false advertising is only one possible consequence of getting your science wrong. Had Apple proceeded to run a major "You Love your iPhone" campaign, they would have thrown good money after bad and left bemused consumers wondering what they were on about. The cost of a wasted campaign could have easily exceeded the fines paid by Lumos Labs.

In a similar fashion, Porsche ran a real risk with their fighter jet ad when they chose to fake their science. Porsche's brand is arguably built around the high-quality science and engineering that distinguishes their cars from those of their competitors. If consumers became worried that Porsche was simply faking this expertise, what consequences would follow from the loss of trust in the brand?

Nobody gets their science right every time. Academics accept that as many as 1 in 20 results may be incorrect simply because of the community's conventional threshold for acceptable evidence (a significance level of p < 0.05). To help maintain standards, scientists publish not only their findings but also all of their methods, so the process is open to scrutiny. There are no proprietary methods in science. The methods and results are then evaluated by independent experts before the findings are published.

The best neuromarketing follows this model and, in doing so, protects against the worst-case scenarios. If Lumos Labs had had such evidence, their marketing claims would have withstood the FTC's test. If Apple had followed these procedures, they would never have made such an obvious mistake in interpreting their data. If Porsche had considered the potential harm to their brand, they almost certainly would not have chosen to fake the science in their advertisement.

Getting the science right matters.

About the Author

Prof. Joseph Devlin

Chief Scientific Officer, Co-Founder

Joe's PhD is in Artificial Intelligence, but he found himself much more interested in natural intelligence -- how the human mind works. After training in neuroimaging at Cambridge and Oxford, he established a reputation as a leading researcher in how the human brain processes language. He is a former Head of Experimental Psychology at UCL as well as the current Vice Dean for Innovation and Enterprise. He has collaborated with a variety of media partners, including the BBC, the Times, the Guardian and the Daily Telegraph.

Joe has published more than 60 scientific articles in international journals.

