There is no single brain model that suits everyone.
With the help of machine learning, scientists have come to better understand how the brain produces complex human traits by finding patterns of brain activity linked to traits such as impulsivity and working memory, and to illnesses such as depression. These techniques let scientists build models of those linkages, which can then be used to predict human performance and health.
This only works, though, if the models are inclusive, and prior research has shown they are not: some people simply do not fit the model.
In a study recently published in the journal Nature, researchers from Yale University investigated whom these models tend to fail, why that happens, and what can be done to fix it.
Abigail Greene, an M.D.-Ph.D. student at the Yale School of Medicine and the study's lead investigator, says that to be most useful, models must apply to any given person.
If this kind of work is to be applied in a clinical setting, for example, she said, the model must apply to the patient sitting in front of the clinician.
Two strategies for better brain models
Greene and her colleagues are pursuing two strategies that they believe could make the psychiatric characterization delivered by models more precise. The first is classifying clinical groups more precisely. Schizophrenia, for instance, can be diagnosed from a wide range of symptoms that differ substantially from person to person. A better understanding of the neurological basis of schizophrenia, including its symptoms and subtypes, would let researchers group patients in more accurate ways. Second, some traits, such as impulsivity, cut across many conditions. Understanding the neurological underpinnings of impulsivity may help doctors treat that symptom more effectively, regardless of the underlying diagnosis.
Greene said these refinements would affect how patients are treated: the more effectively treatments can be tailored to subsets of people, who may or may not share the same diagnosis, the better. But first, she said, models must be universally applicable.
To investigate model failures, Greene and her colleagues first trained models that use patterns of brain activity to predict how well a person would perform on a range of cognitive tests. When put to the test, the models accurately predicted how most individuals would score. For some people, however, they were wrong, incorrectly predicting poor scores for people who in fact scored well, and vice versa. The team then looked at which individuals the models misclassified. They found that the same people were consistently misclassified across tasks and studies. "And the misclassified individuals in one dataset shared characteristics with the misclassified individuals in another dataset," Greene said. "So there is something meaningful about being misclassified."
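The analysis described above can be illustrated with a minimal sketch. This is not the authors' pipeline; it uses synthetic data in place of real connectivity features and cognitive scores, ridge regression as a stand-in predictive model, and a simple above/below-median split to define "misclassified", then checks which people are misclassified on every task.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_people, n_features, n_tasks = 200, 300, 3

# Synthetic brain-activity features and correlated task scores
# (stand-ins for real connectivity data and cognitive test results).
X = rng.normal(size=(n_people, n_features))
weights = rng.normal(size=n_features)
scores = np.stack(
    [X @ weights + rng.normal(scale=5.0, size=n_people) for _ in range(n_tasks)],
    axis=1,
)

# For each task, predict scores out-of-sample and flag people whose
# predicted side of the median disagrees with their observed side.
misclassified = np.zeros((n_people, n_tasks), dtype=bool)
for t in range(n_tasks):
    pred = cross_val_predict(Ridge(alpha=1.0), X, scores[:, t], cv=5)
    obs_high = scores[:, t] > np.median(scores[:, t])
    pred_high = pred > np.median(pred)
    misclassified[:, t] = obs_high != pred_high

# People the models get wrong on every task: the consistently
# misclassified individuals the study goes on to characterize.
consistent = np.where(misclassified.all(axis=1))[0]
print(f"{len(consistent)} of {n_people} people misclassified on all tasks")
```

The key design point is using cross-validated predictions, so each person's "misclassification" reflects how the model treats people it was not trained on.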
They next investigated whether there were differences in those people's brains that could account for these shared misclassifications. But no obvious differences emerged. Instead, misclassification was related to clinical factors, such as symptom severity, and sociodemographic factors, such as age and education. In the end, they concluded that the models were not reflecting cognitive ability alone. According to Greene, they were actually reflecting more complex "profiles" that combined cognitive ability with a range of sociodemographic and clinical factors. "And the models were failing everyone who didn't fit that stereotypical description," she added. For instance, one model used in the study linked more education to better scores on cognitive tests. Less educated people who performed well did not fit the model's profile and were therefore frequently predicted incorrectly.
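The finding that misclassification tracks sociodemographic and clinical factors rather than brain differences suggests a simple check: regress the misclassification flag on those covariates. Below is a hedged sketch with simulated data; the covariates (education, age, symptom severity) and the education effect are built into the simulation purely to mirror the study's qualitative finding, not its actual data or methods.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200

# Hypothetical covariates: years of education, age, symptom severity.
educ = rng.integers(8, 21, size=n).astype(float)
age = rng.uniform(18, 80, size=n)
severity = rng.uniform(0, 10, size=n)

# Simulate the qualitative finding: less educated people are more
# likely to be misclassified by a score-prediction model.
p = 1.0 / (1.0 + np.exp(0.5 * (educ - educ.mean())))
misclassified = rng.random(n) < p

# Which covariates predict being misclassified?
Z = np.column_stack([educ, age, severity])
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)  # standardize for comparable coefs
clf = LogisticRegression().fit(Z, misclassified)
for name, coef in zip(["education", "age", "severity"], clf.coef_[0]):
    print(f"{name:9s} coefficient: {coef:+.2f}")
```

In this simulation the education coefficient comes out negative (more education, lower odds of misclassification), while age and severity, which were not wired into the simulated outcome, hover near zero.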