Multitasking: Is It Overrated?


The ability to multitask is usually viewed positively. But does it actually improve productivity? New research suggests this is highly unlikely.

For years we have been hearing about multitasking and being allured by the term, with many of us unwittingly believing that it is something almost supernatural, too good to neglect, and a skill we ought to develop in ourselves to succeed. But is it as good as it sounds? What does the new research say? Can a person concentrate on several tasks at once and remain productive?

Going through research articles can be perplexing for most of us, but most of the research seems to favor doing one task at a time. Multitasking has several downsides, often resulting in decreased overall productivity and quality of work. A person who finishes one job at a time seems to get more done by the end of the day. Multitasking may be valuable to a certain extent for simple tasks, but beyond a certain threshold productivity decreases and the quality of work suffers.

Ultimately, it depends on the kind of tasks you are doing. If a person is doing many jobs that are simple and that they are adept at, then multitasking may be beneficial in some cases, but mostly this is not so. Some people are also simply worse at multitasking than others. One thing that most scientific research demonstrates is that multitasking results in a significant loss of accuracy. The more you multitask, the more errors you are bound to make, thereby reducing overall productivity. Considering that performance errors can in some cases even be catastrophic, the improved performance promised by multitasking is mostly an illusion that comes at the price of poor results and a higher error rate.

Multitasking leads to mental overloading

When people are given various tasks, they need some time to switch from one task to another. This results in a loss of both time and productivity. The switching time is directly related to the complexity of the task: if the task being switched to is complex and less familiar, one needs more time than when switching to a more familiar task. Multitasking can thus result in mental overload, as mental settings must be reconfigured with every switch. One also has to hold more information in memory while switching between tasks, such as the progress status of the previous task. Research shows that these short mental blocks created by switching can cut productivity by as much as forty percent. Scheduling tasks sequentially can therefore be more efficient than multitasking.
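To see how quickly switching costs add up, here is a minimal toy model (the switch frequency and per-switch cost below are illustrative assumptions, not values from the cited studies): each task switch burns a fixed reorientation cost, and whatever time remains is available for real work.

```python
def productive_fraction(hours, switches_per_hour, switch_cost_min):
    """Toy model: fraction of working time left after paying a fixed
    reorientation cost for every task switch. Numbers are illustrative."""
    lost_hours = hours * switches_per_hour * switch_cost_min / 60
    return max(hours - lost_hours, 0) / hours

focused = productive_fraction(8, switches_per_hour=0, switch_cost_min=2.0)
heavy_multitasking = productive_fraction(8, switches_per_hour=12, switch_cost_min=2.0)
# twelve 2-minute switches per hour already consume 40% of an 8-hour day
```

A flat per-switch cost is of course a simplification: as the text notes, real switch costs grow with the complexity and unfamiliarity of the task being switched to.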

What about gender differences?

Women may be better at juggling several tasks, but this is true only when the tasks are simple in nature, the kind performed on a daily basis that do not require much mental processing, such as cleaning the house while talking on the phone. When multitasking involves more complex tasks, this gender difference becomes irrelevant. Therefore, talking on the phone while driving is equally dangerous for both sexes. In fact, one study demonstrated that women dislike multitasking as much as men do. Given a choice, women do not switch between tasks more often than men, and when multitasking, both genders perform equally poorly. Whether multitasking is chosen freely or forced by job constraints does not seem to affect productivity either way.

Who is multitasking and why?

One study focused on finding personality differences between self-proclaimed multitaskers and non-multitaskers. The study tried to find out why some people opt to multitask while others avoid it, and whether the multitaskers are actually any good at it. The results were quite astonishing.

Most people who multitask are not necessarily good at it. In fact, the results pointed to the opposite: people who are good at multitasking usually avoid doing it. Generally, impulsive sensation seekers tend to multitask, and some of them seem to gain pleasure from it. Another reason why certain people multitask is an inflated self-assessment: individuals who multitask regularly often overrate their capabilities and fail to understand that they are no better at it than others.

Multitaskers make mistakes more often, seem to be less self-critical about their abilities, and have a poorer understanding of their errors and losses. Further, multitaskers are often people with attention deficits who have difficulty focusing on a single given task.

What we know so far about multitasking

  • Multitasking decreases productivity in most cases, by as much as forty percent.
  • Multitasking results in much higher error rates, which reduces productivity and can be harmful in some cases.
  • There are no proven gender differences in efficient multitasking.
  • Multitasking is often related to certain personality traits like being impulsive and sensation seeking.
  • Multitaskers are not typically the people who are good at it.
  • Multitaskers often lack the ability to concentrate properly on a given task.
  • Effectiveness of multitasking depends on the complexity of the job, with multiple complex jobs being harder to do at the same time than multiple easy jobs.
  • Scheduling the various tasks can increase productivity relative to multitasking.

Thus, the existing research seems to favor doing tasks sequentially, one by one, rather than multitasking. Multitasking increases the risk of making mistakes, and this applies equally to both genders. Understanding the downsides of multitasking therefore not only improves productivity but might also save us from catastrophic errors.


Adler, R.F., Benbunan-Fich, R., 2012. Juggling on a high wire: Multitasking effects on performance. Int. J. Hum.-Comput. Stud. 70, 156–168. doi: 10.1016/j.ijhcs.2011.10.003.

Buser, T., Peter, N., 2012. Multitasking. Exp. Econ. 15, 641–655. doi: 10.1007/s10683-012-9318-8.

Kc, D.S., 2013. Does Multitasking Improve Performance? Evidence from the Emergency Department. Manuf. Serv. Oper. Manag. 16, 168–183. doi: 10.1287/msom.2013.0464.

Multitasking: Switching costs [WWW Document], n.d. URL (accessed 7.22.17).

Sanbonmatsu, D.M., Strayer, D.L., Medeiros-Ward, N., Watson, J.M., 2013. Who Multi-Tasks and Why? Multi-Tasking Ability, Perceived Multi-Tasking Ability, Impulsivity, and Sensation Seeking. PLOS ONE 8, e54402. doi: 10.1371/journal.pone.0054402.

Image via SerenaWong/Pixabay.

Brain Blogger

Nootropic Effects of Psychedelic and Addictive Substances


In my previous article on nootropics, I wrote about the brain-enhancing effects of some medicines and natural compounds. There is, however, a large number of nootropics that have received little recognition from official science and remain rather poorly studied. There is a good reason for this, too: these compounds tend to be addictive or hallucinogenic. This article covers what is known about the effects of these substances.


Nicotine as a nootropic

It is rather curious that nicotine, a well-known addictive component of tobacco smoke, has been confirmed to have a nootropic effect. The research into this property of nicotine was triggered by observations that ex-smokers tend to complain about a lack of concentration and a general decline in various aspects of cognitive ability. It turned out that nicotine does improve episodic and working memory, as well as attention. Nicotine delivered via patches had positive effects (improved performance in cognitive tests) in adults with mild cognitive disorders, as well as in healthy non-smokers.

Cannabis/marijuana and cognitive processes

People in artistic professions often claim that smoking pot helps creativity, and there is scientific evidence to substantiate these claims. Cannabinoids seem to temporarily increase communication between the left and right hemispheres of the brain, creating a state of hyperconnectivity and allowing a loose flow of associations. This may explain the heightened creativity individuals experience when using marijuana. Reported benefits include improved mood; lower levels of anxiety, stress, and depression; improved focus and fewer distractions; improved reaction times; more creative thoughts; and greater verbal fluency. These effects are largely dose-dependent, and taking higher amounts may lead to the opposite effects, including sluggishness, lack of focus, nervousness, and impaired memory formation and recall.

However, the negative long-term effects of cannabis on brain structure and function have been demonstrated beyond any reasonable doubt. In fact, cognitive decline associated with the use of cannabis is a serious medical problem, and lots of scientific research aims to gain insights into this problem and the potential approaches to reverse decline.

Grey area: Psychedelic drugs (LSD, mushrooms) in microdosing

Type “psychedelics” and “microdosing” into a Google search and you will be flooded with thousands of articles claiming that compounds like LSD and psilocybin (the active component of magic mushrooms) have an almost miraculous effect on human cognitive abilities. It appears that many inventors, researchers, and innovators use psychedelic compounds in very small doses, occasionally or regularly, to reach a state of enhanced consciousness, get into flow, and work more productively.

But here is the problem: not a single proper scientific publication supports these claims. There is a good reason for this: due to their well-known hallucinogenic properties and serious potential side effects, psychedelics like LSD are banned in most countries around the world. In fact, LSD was banned in the US and UK back in the 1960s. This means that the only peer-reviewed published research that could inform us about the actual measurable effects of psychedelics as nootropics was done 50 years ago. The most commonly cited work (Harman et al. (1966) “Psychedelic Agents in Creative Problem-Solving: A Pilot Study.” Psychological Reports 19, 211–227) was published in 1966. Although the findings reported in this publication are interesting, the quality of the work in terms of general organization, the use of suitable control subjects, and statistical power is hardly satisfactory.

The hallucinogenic properties of psychedelics are well documented. Microdosing of these compounds for the enhancement of cognitive abilities, however, has not been investigated scientifically. This leaves lots of room for imagination and conspiracy theories. There have been repeated calls from the research community to lift the ban on research into psychedelics, but so far they seem to have fallen on deaf ears.

There is considerable evidence that psychedelics can be used to treat various psychiatric disorders. Some recent studies indicate that administration of psilocybin in moderate doses is not associated with any significant short-term or long-term risk. When it comes to cognitive enhancement, none of the available peer-reviewed scientific publications confirm or rule out such a phenomenon. One interesting recent publication claims that exposure to microdoses of psilocybin creates a state of hyperconnectivity in the brain. The findings from functional MRI experiments show:

“that the structure of the brain’s functional patterns undergoes a dramatic change post-psilocybin, characterized by the appearance of many transient structures of low stability and of a small number of persistent ones that are not observed in the case of placebo. This means that the psychedelic state is associated with a less constrained and more intercommunicative mode of brain function, which is consistent with descriptions of the nature of consciousness in the psychedelic state.”

In other words, the study indirectly points to the possibility of cognitive enhancement and creative stimulation under the influence of psychedelics. Nonetheless, a more definite confirmation of this phenomenon is yet to be published.

A word in defense of official science

People of a more adventurous nature tend to blame the science and medicine industry for its slowness in recognizing the benefits of smart drugs. But let’s look at this problem from the researchers’ perspective. Most drugs are safe, but from time to time people do experience serious side effects and even life-threatening complications, and nobody wants to be one of the unlucky few. If something goes wrong, you’ll have nobody but yourself to blame. Regulatory bodies can recommend a given substance for a particular use only when they have sufficient evidence that (a) confirms its effectiveness and (b) shows that its side effects are mild and manageable, and/or that its benefits far exceed the potential complications associated with its use (i.e., the risk is worth taking).

Development of novel nootropics is hampered by research, validation, and regulatory challenges. The road from the research laboratory to FDA approval is difficult, long, and costly. Pharmacological enhancement of healthy populations is fraught with ethical and philosophical pushback. Therapeutic effects observed in cognitively impaired patients often contradict those in healthy populations. Even approved drugs have issues with side effects and large individual differences, and the long-term effects of nootropics are typically unknown. Most importantly, there is still much to be learned about the cellular and molecular basis of the various aspects of cognition. Once these are better understood, pharmacologists will have much better ideas about which processes in the brain to target and how to do it.

It is easy to get carried away with the potential opportunities that nootropics might offer. But don’t forget the classical approaches: proper diet and exercise DO enhance brain function. Many famous thinkers and creative people benefited from simple, regular physical activity. Charles Dickens spent several hours every day walking, sometimes as much as 20 or 30 miles. Aristotle and Ludwig van Beethoven were also known for their habit of wandering around while thinking. Physical activity pumps blood through your body and helps deliver more oxygen to your brain. Regular exercise and a healthy diet also keep your blood vessels healthy, ensuring that this vital oxygenation is not reduced as you get older. Your normal lifestyle is responsible for your basic level of cognitive ability. Smart drugs can spike it up from time to time, but if the basic level is low, the spikes won’t go that high anyway!

To sum up, although an occasional joint may heighten your creativity, regular use of cannabis is definitely not a good approach to enhancing cognitive abilities. There is an acute lack of research on the benefits (or absence thereof) of psychedelics for cognitive enhancement. Virtually all online information on the benefits of psychedelics as cognitive enhancers is completely unsubstantiated by scientific evidence; any positive or negative appraisals represent the personal views of the articles’ authors rather than the results of research studies. Your lifestyle determines your basic level of cognitive ability – don’t ignore generally accepted good strategies.


Heishman SJ et al. (2010) Meta-analysis of the acute effects of nicotine and smoking on human performance. Psychopharmacology (2010) 210: 453. doi: 10.1007/s00213-010-1848-1

Newhouse P et al. (2012) Nicotine treatment of mild cognitive impairment: a 6-month double-blind pilot clinical trial. Neurology 78, 91-101. doi: 10.1212/WNL.0b013e31823efcbb.

Wignall ND and de Wit H (2011) Effects of nicotine on attention and inhibitory control in healthy nonsmokers. Experimental and Clinical Psychopharmacology 19, 183-191. doi: 10.1037/a0023292

Morgan CJ et al. (2010) Hyper-priming in cannabis users: a naturalistic study of the effects of cannabis on semantic memory function. Psychiatry Res 176, 213-218. doi: 10.1016/j.psychres.2008.09.002.

Giovanni Battistella G et al. (2014) Long-Term Effects of Cannabis on Brain Structure. Neuropsychopharmacology 39, 2041–2048; doi: 10.1038/npp.2014.67

Filbey FM et al. (2014) Long-term effects of marijuana use on the brain. Proc Natl Acad Sci USA 111, 16913–16918. doi: 10.1073/pnas.1415297111

Studerus E et al. (2010) Acute, subacute and long-term subjective effects of psilocybin in healthy humans: a pooled analysis of experimental studies. Journal of Psychopharmacology  25, 1434 – 1452. doi: 10.1177/0269881110382466

Petri G et al. (2014) Homological Scaffolds of Brain Functional Networks. J R Soc Interface 11, 20140873. doi: 10.1098/rsif.2014.0873

Image via Wunderela/Pixabay.


Do We Sense Each Other’s Sickness?


Social behavior is important for our survival as a species. But social interaction also gives pathogens a chance to spread, and it thereby increases our exposure to infection. Our immune system is a complex defense system that has evolved to protect us from infections. Therefore, it makes sense to assume that our immune system must have developed ingenious strategies to protect us from new pathogens to which social interaction has exposed us.

Evidence of a link between the immune system and our social behavior has been accumulating in recent years. A direct connection between the brain and the immune system, through lymphatic vessels in the meninges, was recently revealed. It was then shown that the immune system can directly affect, and even control, social behavior and the desire for social interaction: impaired immunity was shown to induce deficits in social behavior. This sounds like a clever preventive self-defense mechanism designed to avoid contagion – in times of poor immunity, our brain gets the message to reduce social interaction and, consequently, exposure to pathogens.

This is a self-defense mechanism that is activated when our body signals a poor immunological status; it’s an internal chemical communication system. But is there an external threat signaling system? The ability to detect and avoid infected individuals would clearly be a great evolutionary asset in strengthening our protection mechanisms. Many animals can detect sickness via odors, leading to a restraint in social interaction, most likely intended to reduce exposure to disease. Do humans have a similar sensory sickness detection system, something that allows us to detect infectious threat in others?

To answer this question, a new study aimed at determining whether humans can detect sickness in others from visual and olfactory cues. Sickness was experimentally induced through the injection of lipopolysaccharide (LPS), a molecule found in the membrane of Gram-negative bacteria that provokes robust immune responses. The activation of immune responses leads to an increase in the production of pro-inflammatory molecules that activate sickness responses and behaviors. It is known that visual cues of sickness, such as redness of the skin, allow us to infer the health of others. But although LPS induces a strong sickness response, its observable effects are subtle, and odor cues are difficult to perceive.

Photos of the face and samples of body odors of both sick and healthy individuals were presented to a group of naïve participants while their cerebral responses were recorded using fMRI. These participants were not aware that they would be seeing and smelling sick and healthy people. They were asked to focus on the faces while the odors were also presented and rate how much they liked the person. Faces were also rated on attractiveness, health, and desired social interaction, and odors were rated on intensity, pleasantness and health. This allowed the assessment of the “liking behavior” towards the faces, an indication of the will to approach and interact with others.

The rating of sick and healthy faces showed that photos obtained during acute sickness were generally considered less attractive, less healthy, and less socially desirable than the faces of participants receiving the placebo treatment. When faces were presented concomitantly with an odor, sick faces were liked less than healthy faces, regardless of the odor presented with the face. And although participants could not consciously perceive sickness in the odors – they did not rate sick odors as more unpleasant or more intense than healthy ones – faces, whether sick or healthy, were also liked less when paired with sick body odor.

These results show that we can detect early and subtle signs of sickness in others from both facial and olfactory cues, even just a couple of hours after activation of their immune system. Moreover, fMRI data revealed that visual and olfactory sickness cues activated their respective visual face processing and olfactory sensory cortices, as well as multisensory convergence zones. And even though odors were often too weak to be consciously detected, these olfactory sickness cues still led to activation of the olfactory cortex.

The study also revealed that this perception of subtle cues of sickness leads to reduced liking and decreased will for social interaction. This response may represent a human behavioral defense system against disease. The integration of olfactory and visual sickness cues in the brain may be part of a mechanism designed to detect sickness, resulting in behavioral avoidance of sick individuals, and in avoidance of impending threats of infection.


Filiano AJ, et al (2016). Unexpected role of interferon-γ in regulating neuronal connectivity and social behaviour. Nature, 535(7612):425-9. doi: 10.1038/nature18626

Kipnis J (2016). Multifaceted interactions between adaptive immunity and the central nervous system. Science, 353(6301):766-71. doi: 10.1126/science.aag2638

Louveau A, et al (2015). Structural and functional features of central nervous system lymphatic vessels. Nature, 523(7560):337-41. doi: 10.1038/nature14432

Regenbogen C, et al (2017). Behavioral and neural correlates to multisensory detection of sick humans. Proc Natl Acad Sci U S A, pii: 201617357. doi: 10.1073/pnas.1617357114. [Epub ahead of print]

Shattuck EC, Muehlenbein MP (2015). Human sickness behavior: Ultimate and proximate explanations. Am J Phys Anthropol, 157(1):1-18. doi: 10.1002/ajpa.22698

Image via junko/Pixabay.


Nurturing the Brain – Part 11, Magnesium


Magnesium is everywhere – it does not occur free in nature, only in combination with other elements, but it is the eighth most abundant chemical element in the Earth’s crust and the third most abundant element in seawater; it is even the ninth most abundant in the Milky Way. In the human body, magnesium is the fourth most abundant ion and the eleventh most abundant element by mass, being stored in bones, muscles, and soft tissues.

Magnesium is fundamental for health: it is essential to all cells and to the function of hundreds of enzymes, including enzymes that synthesize DNA and RNA, and enzymes involved in cellular energy metabolism, many of which are vital. Magnesium is involved in virtually every major metabolic and biochemical process in our cells and it plays a critical role in the physiology of basically every single organ.

Low plasma levels of magnesium are common and are mostly due to poor dietary intake, which has declined significantly in recent decades. Magnesium can be found in high quantities in foods containing dietary fiber, including green leafy vegetables, legumes, nuts, seeds, and whole grains. But although magnesium is widely distributed in vegetable and animal foods, some types of food processing can lower magnesium content by up to 90%. Also, the soil used for conventional agriculture is becoming increasingly depleted of essential minerals. In the last 60 years, the magnesium content of fruit and vegetables has decreased by around 20 to 30%.

Symptomatic magnesium deficiency due to low dietary intake in healthy people is not very frequent, but a consistently poor dietary supply of magnesium has insidious effects. Magnesium deficiency alters biochemical pathways and increases the risk of a wide range of diseases over time, such as hypertension and cardiovascular disease, metabolic diseases, osteoporosis, and migraine headaches.

In the brain, magnesium is an important regulator of neurotransmitter signaling, particularly of glutamate and GABA, the main excitatory and inhibitory neurotransmitters, respectively, by modulating the activation of NMDA glutamate receptors and GABAA receptors. It also contributes to the maintenance of adequate calcium levels in the cell by regulating the activity of calcium channels.

These physiological roles make magnesium an essential element in important neuronal processes. Magnesium participates in the mechanisms of synaptic transmission, neuronal plasticity, and consequently, learning and memory. Accordingly, increased levels of magnesium in the brain have been shown to promote multiple mechanisms of synaptic plasticity that enhance different forms of learning and memory, and delay age-related cognitive decline. Increased levels of magnesium in the brain have also been linked to an increased proliferation of neural stem cells, indicating that it may promote the generation of new neurons (neurogenesis) in adulthood. This is an important feature because neurogenesis is a key mechanism in the brain’s structural and functional adaptability, in cognitive flexibility, and in mood regulation.

Magnesium supplementation has also been shown to modulate the neuroendocrine system and to improve sleep quality by promoting slow wave (deep) sleep, which, among many other functions, is also important for cognition and memory consolidation.

Furthermore, magnesium may enhance the beneficial effects of exercise in the brain, since it has been shown to increase the availability of glucose in the blood, muscle, and brain, and diminish the accumulation of lactate in the blood and muscles during exercise.

But just as increasing magnesium levels can be beneficial, magnesium deficiency can have serious harmful effects.

Magnesium has important roles in the regulation of oxidative stress, inflammatory processes and modulation of brain blood flow. In circumstances of magnesium deficiency, all of these functions can potentially be dysregulated, laying ground for neurological disorders. Also, in a context of low magnesium availability in the brain, NMDA glutamate receptors, which are excitatory, may become excessively activated, and GABAA receptors, which are inhibitory, may become insufficiently activated; this can lead to neuronal hyperactivity and to a condition known as glutamate excitotoxicity. This causes an excessive accumulation of calcium in neurons, which in turn leads to the production of toxic reactive oxygen species and, ultimately, to neuronal cell death.

Magnesium deficiency has been associated with several neurological and psychiatric diseases, including migraine, epilepsy, depression, schizophrenia, bipolar disorder, stress, and neurodegenerative diseases. Magnesium supplementation has shown beneficial effects on many of these conditions, as well as in post-stroke, post-traumatic brain injury, and post-spinal cord injury therapies. This therapeutic action is likely due to its action in blocking NMDA glutamate receptors and decreasing excitotoxicity, in reducing oxidative stress and inflammation, and in increasing blood flow to the brain, all of which are determinant in the outcome of these conditions.

There are multiple benefits to be obtained from magnesium, both from a health promotion perspective and from a disease prevention and management perspective. The recommended daily intake of magnesium is 320 mg for females and 420 mg for males. Magnesium from food sources carries no health risks for healthy individuals, because the kidneys readily eliminate the excess. However, there is a recommended upper intake level for supplemental magnesium, since it can cause gastrointestinal side effects: keep it below 350 mg/day.
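As a quick sanity check on those numbers, here is a small sketch (a hypothetical helper for illustration, not a medical tool) that separates food-sourced magnesium, which has no upper limit for healthy people, from the supplemental portion, which is capped at 350 mg/day:

```python
RDA_MG = {"female": 320, "male": 420}   # recommended daily intake, mg
SUPPLEMENT_UPPER_LIMIT_MG = 350         # cap applies to supplements only

def check_magnesium(sex, food_mg, supplement_mg):
    """Check total intake against the RDA and the supplemental cap.
    Only supplement_mg counts toward the upper limit; excess from
    food is readily eliminated by the kidneys in healthy people."""
    return {
        "meets_rda": food_mg + supplement_mg >= RDA_MG[sex],
        "supplement_within_limit": supplement_mg <= SUPPLEMENT_UPPER_LIMIT_MG,
    }

check_magnesium("male", food_mg=300, supplement_mg=150)
# meets the 420 mg RDA, and 150 mg of supplement is under the 350 mg cap
```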


Chen HY, et al (2014). Magnesium enhances exercise performance via increasing glucose availability in the blood, muscle, and brain during exercise. PLoS One, 9(1):e85486. doi: 10.1371/journal.pone.0085486

de Baaij JH, et al (2015). Magnesium in man: implications for health and disease. Physiol Rev, 95(1):1-46. doi: 10.1152/physrev.00012.2014

Held K, et al (2002). Oral Mg(2+) supplementation reverses age-related neuroendocrine and sleep EEG changes in humans. Pharmacopsychiatry, 35(4):135-43. doi: 10.1016/j.pbb.2004.01.006

Jia S, et al (2016). Elevation of Brain Magnesium Potentiates Neural Stem Cell Proliferation in the Hippocampus of Young and Aged Mice. J Cell Physiol, 231(9):1903-12. doi: 10.1002/jcp.25306

National Institutes of Health, Office of Dietary Supplements. Magnesium Fact Sheet for Health Professionals

Slutsky I, et al (2010). Enhancement of learning and memory by elevating brain magnesium. Neuron. 2010 Jan 28;65(2):165-77. doi: 10.1016/j.neuron.2009.12.026

Image via Brett_Hondow/Pixabay.


Nootropics: How Smart Can You Get on Smart Drugs?


The use of smart drugs is becoming “trendy”. Lots of people take various substances regularly, and many others try them from time to time. The idea of enhancing the brain’s abilities, or tapping into its unused reserves, is definitely sexy, and many people are actively looking for information on the subject.

This shortage of scientifically verified information is exactly why I’m writing this article. Although thousands of publications on “smart drugs”, “cognitive enhancers”, “nootropics”, etc. can be found online, the overwhelming majority of claims are unsubstantiated or unashamedly commercialized. The information you come across mostly consists of personal opinions or experiences, compilations of facts published elsewhere, or articles from popular media where people can write whatever they want.

Multiple websites publish all kinds of rubbish just to convince you to buy yet another wonderfully effective smart drug. Few people make the effort to refer to their sources of information, let alone present scientific and statistical data backing their claims. This is particularly problematic when articles provide recipes for various drug combinations and claim the superiority of some of these combinations or compounds over others. However, even the scientific data on the subject are rather incomplete. Many studies were done with only a small number of participants, or in the absence of any reasonable controls. On their own, studies of this kind are of little, if any, value.

Fortunately, several systematic reviews and meta-analyses on the use of nootropics have been published in the last couple of years. Systematic reviews and meta-analyses combine data from multiple individual studies, giving the pooled data much greater statistical power. This is a better way of assessing the efficacy of different drugs in the general, healthy population, and these are the publications that I will mostly use as reference points in this article.
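Why pooling helps can be seen with a back-of-the-envelope power calculation (a standard two-sample z-approximation; the effect size and sample sizes below are made-up illustrations, not figures from the cited reviews): a small study of a modest effect will usually miss it, while a meta-analytic sample detects it reliably.

```python
from math import erf, sqrt

def normal_cdf(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_sample(d, n_per_group, z_alpha=1.96):
    """Approximate power of a two-sample test to detect a true
    standardized effect size d at the 5% significance level."""
    return normal_cdf(d * sqrt(n_per_group / 2.0) - z_alpha)

single_study = power_two_sample(d=0.3, n_per_group=20)   # roughly 16% power
pooled = power_two_sample(d=0.3, n_per_group=300)        # roughly 96% power
```

With 20 participants per group, a true effect of d = 0.3 is found less than one time in five; pooling studies to 300 per group makes detection nearly certain, which is exactly what meta-analysis buys.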

How to prove that a smart drug is really smart?

Smart drugs (also called nootropics or cognitive enhancers) are defined as substances that improve cognitive function – particularly executive functions, memory, creativity, or motivation – in healthy individuals. The last bit is important: many drugs were specifically developed to enhance brain function in people with cognitive disorders or deficits. Such drugs won’t necessarily smarten up healthy people, and when they do, they are not necessarily safe. Nootropics come in many forms, from classical pharmaceuticals in pill form to herbal supplements and “functional foods”.

There are only a few smart drugs that are proven to improve some aspects of cognition. Proving that a compound has the properties of a nootropic is not a simple task. There is no straightforward way of measuring whatever cognitive enhancement you may experience once the pill is taken. The drug may indeed work and visibly increase your productivity. But it may also simply improve your mood if you anticipate a positive effect. On top of this, any given drug may work for some people and not for others. Furthermore, the use of any drug is associated with potential side effects (e.g., headaches) that might cancel out its advantages in productivity and creativity. And while changes in productivity can be measured with standardized tests, creativity remains arguably impossible to quantify.

How do smart drugs work?

There are several mechanisms that can be involved in the functioning of smart drugs. Some drugs can increase the blood flow (and thus oxygen supply) to the brain. Others can accelerate neuronal communication through increased release of certain neuromediators or through agonistic effects on the receptors of these neuromediators. Some compounds can serve as biochemical precursors of neuromediators; others may prevent oxidative damage to brain cells or provide them with a source of energy. Some of these changes can be achieved quickly, making the drugs work almost instantly. Others, such as the repair or prevention of neuronal damage, manifest themselves only after prolonged use of the drug, making any changes in cognitive function slower and less obvious (although they can still be substantial).

A short overview of the most popular nootropics

Amphetamines are a class of pharmaceuticals that includes Adderall, dextroamphetamine, and lisdexamphetamine. These drugs were developed to treat people with ADHD (attention deficit hyperactivity disorder), and this is where their effects are the most prominent. They were also demonstrated to improve episodic memory, working memory, and some aspects of attention in the general population. At low doses they improve memory consolidation, recall of information, and motivation to perform tasks that require a high degree of attention. Ritalin is structurally different from amphetamines and works through different mechanisms, although it produces similar effects. Both amphetamines and Ritalin improve cognitive function, albeit only at lower doses. At high doses they stimulate other neural pathways not involved in learning, which effectively cancels out their positive effects on cognition.

Wakefulness-promoting agents, such as modafinil and armodafinil, increase alertness, counteract fatigue, and increase productivity and motivation. Modafinil is praised for its ability to improve reaction time, logical reasoning, and problem-solving. The drug is clinically prescribed for a number of conditions, including sleep apnea, narcolepsy, and shift work sleep disorder.

Compounds from the racetam family (piracetam, oxiracetam, etc.) are more extensively studied than the newer nootropics. Piracetam was developed back in the 1960s and has an almost perfect safety profile. It was convincingly shown to improve cognitive abilities, particularly in older people and those with cognitive impairment. Although piracetam is officially recognized as a nootropic, its brain-enhancing effects in healthy people are considered to be moderate. There are a number of other derivatives in this group of drugs which, allegedly, work better. A good example is Phenotropil. This compound was developed in Russia, where it is available as a prescription drug. It was demonstrated to have a memory-enhancing effect, can be used as a stimulant, and enhances resistance to extreme temperatures and stress. Due to its stimulating effect, Phenotropil is banned by the World Anti-Doping Agency, which means that it cannot be used by athletes intending to compete in official events.

Xanthines, such as caffeine, are some of the most commonly used compounds with nootropic effects. In particular, they increase alertness and performance levels. Caffeine is not the first thing that comes to mind when we think of nootropics, but apparently its effect is comparable to that of many pharmaceuticals.

L-Theanine, a chemical component of green tea, is very well studied, and its effects on promoting alertness and attention are confirmed by multiple studies.

When it comes to nutraceuticals and herbal supplements, recent studies appear to be contradictory. Some data do support the memory-enhancing effects of plants such as Ginkgo biloba, Asian ginseng, and Bacopa monnieri, but systematic reviews do not find convincing evidence of their effectiveness. It is likely that herbal supplements work over longer periods of time and improve cognitive abilities gradually, but in the short term their effects are not particularly obvious. The same applies to many vitamins, such as vitamin E and the B-group vitamins, as well as omega-3 fatty acids: the evidence supporting their benefits is limited at the present time.

To conclude, only a few drugs are scientifically proven to produce moderate cognitive enhancement in the healthy population. Being sceptical when assessing information on smart drugs from the internet is a good idea: lots of ridiculous rubbish is published online. Most nootropics are relatively safe, but side effects are always a possibility, since the response to nootropics is highly individual.


Spencer BC et al. (2015) The Cognition-Enhancing Effects of Psychostimulants Involve Direct Action in the Prefrontal Cortex. Biological Psychiatry 77, 940–950. doi:10.1016/j.biopsych.2014.09.013

Ilieva IP et al. (2015) Prescription Stimulants’ Effects on Healthy Inhibitory Control, Working Memory, and Episodic Memory: A Meta-analysis. J Cogn Neurosci. 27, 1069-1089. doi:10.1162/jocn_a_00776

Bagot KS and Kaminer Y (2014) Efficacy of stimulants for cognitive enhancement in non-attention deficit hyperactivity disorder youth: a systematic review. Addiction 109, 547–557. doi:10.1111/add.12460

Linssen AMW et al. (2014) Cognitive effects of methylphenidate in healthy volunteers: a review of single dose studies. Int J Neuropsychopharmacol 17, 961-977. doi:10.1017/S1461145713001594

Urban KR and Gao WJ (2014) Performance enhancement at the cost of potential brain plasticity: neural ramifications of nootropic drugs in the healthy developing brain. Front Syst Neurosci. doi: 10.3389/fnsys.2014.00038

Winblad B (2005) Piracetam: a review of pharmacological properties and clinical uses. CNS Drug Rev. 11, 169-182. PMID:16007238

Zvejniece L et al. (2011) Investigation into Stereoselective Pharmacological Activity of Phenotropil. Basic & Clinical Pharmacology & Toxicology 109, 407–412. doi: 10.1111/j.1742-7843.2011.00742.x

Rogers PJ (2007) Caffeine, mood and mental performance in everyday life. Nutrition Bulletin 32, 84–89. doi: 10.1111/j.1467-3010.2007.00607.x

Camfield DA et al. (2014) Acute effects of tea constituents L-theanine, caffeine, and epigallocatechin gallate on cognitive function and mood: a systematic review and meta-analysis. Nutr Rev 72, 507-522.


Brain Blogger

New Breakthroughs in Concussion Diagnostics


Concussion is the most common form of traumatic brain injury. While serious concussions present with overt symptoms, the diagnosis of mild concussions remains a clinical difficulty. Researchers recently developed two novel methods for the diagnosis of concussion that may aid in the identification of less severe traumatic brain injuries.

A concussion is a relatively common traumatic brain injury caused by a fall or blow to the head that results in a temporary impairment of brain function. Loss of consciousness is common, but contrary to popular belief, it is not a requirement for diagnosis. Typical indicators include drowsiness, dizziness, confusion, headache, and memory impairment, among other neurological symptoms. These symptoms usually persist for a few months, but some patients continue to experience cognitive and behavioral manifestations long after the initial injury.

The detection of concussion using objective, quantitative methods remains a clinical challenge. Traditional brain imaging techniques, such as computed tomography and magnetic resonance imaging, are not sensitive enough for this application. Therefore, diagnosis is usually made using non-objective methods such as patient interviews and self-assessments.

A group of researchers led by Margot Taylor at Simon Fraser University have developed a new method for detecting mild traumatic brain injuries using magnetoencephalographic (MEG) imaging. The results were published in the journal PLOS Computational Biology. Taylor and colleagues are members of the Behavioral and Cognitive Neuroscience Institute and the ImageTech Lab at SFU, home to the only research-dedicated MEG scanners in western Canada.

MEG imaging is a technique that measures the magnetic fields generated by brain activity. It has excellent spatial and temporal resolution and can identify events with millimeter and millisecond precision. MEG imaging can be used to map the interaction among various brain areas. Importantly for clinical applications, MEG imaging is completely noninvasive, and can be used in both adults and children without harmful radiation exposure, or the need to inject isotopes.

Mild traumatic brain injury is associated with damage to the white matter of the brain, made up of bundles of nerve axons that connect one brain area to another. When two brain areas are connected, they show activity pattern synchronization, referred to as oscillatory network synchrony. The researchers hypothesized that by using MEG imaging to measure network synchrony, they could identify small disruptions in the connections between brain areas that result from traumatic brain injuries.
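Oscillatory synchrony between two regions is typically quantified with phase-based measures. The phase-locking value (PLV) is one standard example; the study used its own set of connectivity indices, so this is only an illustrative sketch with synthetic signals:

```python
import numpy as np

def plv(phase_x, phase_y):
    """Phase-locking value between two instantaneous-phase time series.

    PLV = |mean(exp(i*(phase_x - phase_y)))|: 1.0 means the phase
    difference is constant over time (perfect synchrony); values near
    0 mean there is no consistent phase relationship.
    """
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

t = np.linspace(0.0, 1.0, 1000)
phase_a = 2 * np.pi * 10 * t                    # a 10 Hz oscillation
phase_b = phase_a + 0.5                         # constant lag: fully locked
rng = np.random.default_rng(0)
phase_c = 2 * np.pi * rng.uniform(size=t.size)  # unrelated random phases

print(plv(phase_a, phase_b))  # ≈ 1.0, perfect synchrony
print(plv(phase_a, phase_c))  # near 0, no synchrony
```

A drop in such synchrony indices between regions that are normally coupled is exactly the kind of disruption the researchers looked for after injury.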

MEG scans were performed on patients who were diagnosed with a mild traumatic brain injury (concussion) in the prior three months, and on healthy controls. Computer analyses were performed to calculate several indices of network connectivity, and significant differences were found between patients and controls. The differences were stronger when less time had elapsed between the injury and the imaging procedure. Overall, MEG imaging was able to detect the presence of a mild concussion with 88% accuracy.

In another groundbreaking study, a group of researchers led by Nina Kraus and Cynthia LaBella at the Auditory Neuroscience Laboratory of Northwestern University developed an approach for diagnosing concussion using an auditory biomarker. Their work was recently published in the journal Scientific Reports.

The brain’s processing of sound is complex, involving interactions among the cognitive, sensory, and limbic systems. Damage to any of these brain systems should therefore disrupt auditory processing, the researchers proposed. The question Kraus and colleagues asked was whether concussion-induced changes in sound processing would be large enough to distinguish between patients with a concussion and controls.

To test this question, the researchers measured speech-evoked frequency-following responses (FFRs) in 40 children with and without a concussion. The FFR is generated by the auditory center of the brain, and it incorporates cognitive, sensory, and reward input. Changes in the FFR have previously been associated with other clinical syndromes. Importantly, the FFR can be easily measured by placing electrodes on the scalp and delivering a sound stimulus into the ear. The FFR measurement is highly reliable and is consistent across the lifespan within an individual.

The children who had sustained concussion were tested an average of 27 days after their injury. Neurophysiological responses were measured in terms of the magnitude, timing, accuracy, and pitch processing of the fundamental frequency of speech (F0). Children with concussion exhibited on average a 35% smaller response to the F0 than the controls. They also showed impaired pitch coding, and smaller and slower brain responses to speech. These measures were correlated with the number of concussion symptoms reported by the patients.
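The magnitude of the response at the fundamental frequency can, in principle, be read off the spectrum of the recorded signal. A toy sketch, not the authors’ actual analysis pipeline, using synthetic signals:

```python
import numpy as np

def f0_magnitude(response, fs, f0):
    """Spectral magnitude of a response at the fundamental frequency F0."""
    spectrum = np.abs(np.fft.rfft(response)) / len(response)
    freqs = np.fft.rfftfreq(len(response), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - f0))]

fs, f0 = 2000, 100                    # sampling rate and F0, in Hz
t = np.arange(0, 1, 1 / fs)
control = np.sin(2 * np.pi * f0 * t)  # strong 100 Hz component
concussed = 0.65 * control            # a 35% smaller F0 response

ratio = f0_magnitude(concussed, fs, f0) / f0_magnitude(control, fs, f0)
print(ratio)  # ≈ 0.65
```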

The researchers created a statistical model incorporating these data. Their model was able to classify subjects into concussion or control groups with 90% sensitivity and 95% specificity, suggesting that the observed changes in auditory processing are an accurate biomarker of concussion.
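Sensitivity and specificity are simple ratios over a classifier’s confusion matrix. As an illustration, here are hypothetical counts for a 40-child cohort (20 per group), chosen only to reproduce the reported rates; the paper’s actual counts may differ:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): share of concussed subjects flagged.
    Specificity = TN / (TN + FP): share of healthy controls cleared."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion-matrix counts (not from the paper):
sens, spec = sensitivity_specificity(tp=18, fn=2, tn=19, fp=1)
print(sens, spec)  # 0.9 0.95
```

High values on both measures matter clinically: sensitivity limits missed concussions, while specificity limits healthy children being wrongly flagged.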

The researchers cite the portability, reliability, and accuracy of their test as major advantages for its clinical application in diagnosing new injuries. Baseline readings could be made in athletes who participate in impact sports, and repeat measures could be taken at the end of the season or following an injury.

Together with MEG imaging, this method provides an objective and quantitative means for diagnosing brain dysfunction after mild traumatic brain injury. These methods will be useful both for identifying patients with mild injuries, and for developing guidelines for recovery and the return to work and play following an injury.


Dimou S. and Lagopoulos J. (2014). Toward objective markers of concussion in sport: a review of white matter and neurometabolic changes in the brain after sports-related concussion. J Neurotrauma 31(5):413-24. DOI: 10.1089/neu.2013.3050

Vakorin V.A., Doesburg S.M., Da Costa L., Jetly R., Pang E.W., Taylor M.J. (2016) Detecting Mild Traumatic Brain Injury Using Resting State Magnetoencephalographic Connectivity. PLOS Computational Biology 12(12):e1004914. DOI: 10.1371/journal.pcbi.1004914

Kraus N., Thompson E.C., Krizman J., Cook K., White-Schwoch T., LaBella, C.R. (2016) Auditory biological marker of concussion in children. Sci Rep. 6:39009. DOI: 10.1038/srep39009

Kraus, N. and White-Schwoch, T. (2015) Unraveling the biology of auditory learning: A cognitive-sensorimotor-reward framework. Trends Cogn Sci. 19:642–654.



Prevention Is the Best Medicine for Dementia


Population aging is bringing about a substantial increase in the prevalence of neurocognitive disorders. Current projections estimate that, by 2050, more than 130 million people will be affected by dementia worldwide. As experts assemble to devise strategies to face this incoming challenge, one conclusion stands out: prevention is crucial.

The goal of prevention is obvious: to promote good health and take action before disease onset, thereby reducing the incidence of disease. And this is obviously better than having to manage a disease and its complications, and losing quality of life – “prevention is better than cure.”

But in order for preventive behaviors to be acquired, knowledge is essential – knowledge of modifiable risk factors and preemptive actions that can be adopted, and knowledge of how effective they really are. Studies addressing the benefit of lifestyle interventions for the prevention of dementia have identified numerous modifiable risk and protective factors and have shown that change can indeed be beneficial.

Modifiable risk factors for cognitive impairment include lifestyle factors such as smoking, high alcohol intake, poor diet (saturated fats, sugar, processed foods), and low physical activity. These, in turn, lead to vascular and metabolic diseases (cerebrovascular and cardiovascular diseases, diabetes, hypertension, overweight and obesity, high cholesterol), which are themselves risk factors. In the end, it all adds up to accelerate cognitive decline.

Protective factors include the opposite lifestyle choices: quitting smoking, moderate alcohol intake, a healthier diet (Mediterranean diet, polyunsaturated fatty acids and fish oils, vitamins B6 and B12, folate, antioxidant vitamins A, C, and E, and vitamin D), physical activity, and mentally stimulating activity.

Still, cognitive disorders are complex, multifactorial conditions – even if you lead the healthiest life, dementia may still strike. But this should not be an excuse to let go, because research shows that preventive behaviors shift the odds in your favor.

An important aspect of behavioral change is that it should be integrative. Single-domain interventions provide some benefit: physical activity and cognitive training have been positively associated with cognitive performance in multiple studies, and a recent meta-analysis showed that increased consumption of fruit and vegetables reduces the risk of cognitive impairment and dementia. But multi-domain interventions, in which multiple risk factors are targeted simultaneously, are more likely to deliver better results.

For example, a 2015 Finnish study assessed the effect of a 2-year multimodal intervention in adults aged 60-77 years who were at risk of cognitive decline but without pronounced cognitive impairment. Four intervention targets were included: diet, exercise, cognitive training, and vascular risk. The program’s diet included high consumption of fruit and vegetables, wholegrain cereal products, low-fat milk and meat products, low sucrose intake, use of vegetable margarine and rapeseed oil instead of butter, and fish consumption of at least two portions per week.

The physical exercise training program included progressive muscle strength training, aerobic exercise, and exercises to improve postural balance. Cognitive training consisted of computer-based training targeting executive processes, working memory, episodic memory, and mental speed. Metabolic and vascular risk factors were monitored throughout the study. Social activities were also stimulated through the numerous group meetings of all intervention components.

The study showed that simultaneous changes in multiple risk factors, even of small magnitude, had beneficial effects on the risk of cognitive decline; on overall cognition, complex memory tasks, executive functioning, and processing speed; and also on BMI, dietary habits, and physical activity.

But timely prevention seems fundamental – these lifestyle interventions may not be as effective once cognitive impairment is manifest. A recent study evaluated the impact of a 3-year omega-3 fatty acid supplementation, with or without multi-domain lifestyle interventions, on cognitive function in adults aged 70 years or older. These adults already had symptoms of cognitive impairment: memory complaints, limitations in one instrumental activity of daily living, or slow gait speed. The multi-domain intervention included cognitive training, physical activity, nutrition, and management of cardiovascular risk factors.

In this case, neither the omega-3 supplementation alone nor its combination with the lifestyle interventions was able to reduce cognitive decline. Adherence to the lifestyle interventions over time was also lower in this study than in other studies with younger seniors who had no clinical manifestations of dementia onset. Still, within the study, the participants at increased risk of dementia were the ones who benefited the most.

Early prevention is probably the best strategy. Instead of trying to prevent dementia later in life, focusing on preventing the earlier, milder, and more common forms of cognitive impairment may be a better approach, one that could also end up preventing cardiovascular and metabolic diseases and, ultimately, dementia. Because they’re all fruits from the same tree.


Andrieu S, et al (2017). Effect of long-term omega 3 polyunsaturated fatty acid supplementation with or without multidomain intervention on cognitive function in elderly adults with memory complaints (MAPT): a randomised, placebo-controlled trial. Lancet Neurol, 16(5):377-389. doi: 10.1016/S1474-4422(17)30040-6

Jiang X, et al (2017). Increased Consumption of Fruit and Vegetables Is Related to a Reduced Risk of Cognitive Impairment and Dementia: Meta-Analysis. Front Aging Neurosci, 9:18. doi: 10.3389/fnagi.2017.00018

Kivipelto M, et al (2017). Can lifestyle changes prevent cognitive impairment? Lancet Neurol, 16(5):338-339. doi: 10.1016/S1474-4422(17)30080-7

Ngandu T, et al (2015). A 2 year multidomain intervention of diet, exercise, cognitive training, and vascular risk monitoring versus control to prevent cognitive decline in at-risk elderly people (FINGER): a randomised controlled trial. Lancet, 385(9984):2255-63. doi: 10.1016/S0140-6736(15)60461-5

Shah H, et al (2016). Research priorities to reduce the global burden of dementia by 2025. Lancet Neurol, 15(12):1285-1294. doi: 10.1016/S1474-4422(16)30235-6

Solomon A, et al (2016). Advances in the prevention of Alzheimer’s disease and dementia. J Intern Med, 275(3):229-50. doi: 10.1111/joim.12178

