Evan Morris learned a lesson from a student several years ago that continues to drive his thinking and research today.
Morris, co-director of imaging at the Yale PET Center, teaches the Responsible Conduct of Research (RCR) class, which is a requirement for students who receive funding from the National Institutes of Health or the National Science Foundation.
A few years ago, Adele Ricciardi, an MD, PhD student specializing in gene editing in utero, spoke to Morris’ class about the ethical dilemmas of her work. As she untangled some of those ethical knots in discussion with the other students, Morris began to wonder whether similar questions surrounded the impact of his own specialty, brain imaging, on patients and volunteers.
Morris will tell you “my bag is PET” (positron emission tomography). He uses PET imaging to study neuropsychiatric diseases.
“It’s a good thing when we identify biomarkers and predictors of disease,” Morris said. “When we go further than that, things could be ethically dicey.”
PET “is commonly used, clinically, to identify sites of altered metabolism (e.g., tumors). In research, it can be used to identify molecular targets for treatment,” he said. “In my work at Yale, in collaboration with Professor Suchitra Krishnan-Sarin of psychiatry, we have used PET imaging of an opioid receptor to predict which problem drinkers would reduce their drinking while on the medication naltrexone.”
“Driven by AI and ever-faster computers, the predictive ability of the scans will improve.”
What will happen when technological advances make a quantum leap in accuracy? And how soon might that happen?
In search of answers, Morris sought out Michelle Hampson, director of real-time functional magnetic resonance imaging (fMRI) in the department of radiology & biomedical imaging. This summer, Hampson was the senior author of a study that used real-time fMRI neurofeedback, a relatively new technique, to train adolescents with Tourette Syndrome to control their tics.
Morris and Hampson discussed the long-term consequences of predictive brain imaging. “Maybe it’s time for a broader conversation,” they agreed.
This past summer, during a two-week sabbatical at The Hastings Center, an independent bioethics research institute in Garrison, New York, Morris worked through his questions and considered how best to convey his ideas to a wider, non-academic public.
“I presented my ideas to the resident scholars at the center and showed them the first draft of an opinion piece I wrote. I was criticized, disbelieved and rejected, albeit politely,” he said.
“A number of the resident scholars at Hastings took time to meet with me and discuss my ideas. They knew how to formulate an op-ed. ‘Don’t waste the reader’s time. Give them one main idea,’ they emphasized. It was tough, but I accepted their criticisms and suggestions, and used them.”
The result was the piece, “Why We Need Guidelines for Brain Scan Data,” which was published Sept. 17 in Wired.
“Brain scans, aided by artificial intelligence, reveal as much about us as our DNA,” Morris wrote. “Grappling with their ethical implications is vital to scientific integrity.”
Morris hopes to organize a study group at Yale or a track at a conference to further discuss the privacy concerns around AI-enhanced predictive brain scans and to propose guidelines. He also plans to challenge the students in his next RCR class, or at least to alert them to the weighty issues that brain imagers and other researchers will need to be ready to confront.