
Machine Learning Assists Diagnosis of Seizures in Unconscious Patients

Researchers at Duke University have created an assistive machine learning model that significantly improves the ability of medical personnel to read the electroencephalography (EEG) charts of intensive care patients. The research was published in the New England Journal of Medicine AI.

This starfish-like graph is a visual representation of how a new AI algorithm helps medical care professionals read the EEG patterns of patients in danger of suffering brain damage from seizures or seizure-like events. Each differently colored arm represents one type of seizure-like event the EEG could represent. The closer the algorithm puts a specific chart toward the tip of an arm, the surer it is of its decision, while those placed closer to the central body are less certain. Image Credit: Duke University.

The computational technique could save thousands of lives annually because EEG readings are the only way to determine when unconscious individuals are experiencing seizure-like episodes or are in danger of having a seizure.

EEGs measure the brain's electrical activity through small sensors attached to the scalp. The result is a long series of up-and-down squiggles. When a patient is experiencing a seizure, these lines leap up and down abruptly, like an earthquake on a seismograph, which is an obvious signal. It is far more difficult, however, to distinguish seizure-like events and other medically significant aberrations from ordinary activity.

The brain activity we are looking at exists along a continuum, where seizures are at one end, but there are still a lot of events in the middle that can also cause harm and require medication. The EEG patterns caused by those events are more difficult to recognize and categorize confidently, even by highly trained neurologists, which not every medical facility has. But doing so is extremely important to the health outcomes of these patients.

Dr. Brandon Westover, Associate Professor, Department of Neurology, Harvard Medical School

The doctors looked to the lab of Cynthia Rudin, the Earl D. McLean, Jr. Professor of Computer Science and Electrical and Computer Engineering at Duke, to develop a tool for making these assessments. Rudin and her colleagues specialize in creating “interpretable” machine learning algorithms.

Unlike most machine learning models, which are essentially “black boxes” that make it impossible for a human to know how they reach their conclusions, interpretable machine learning models must show their work.

To build the training data, the research team had more than 120 specialists identify the pertinent features in EEG samples from over 2,700 patients, categorizing each sample as a seizure, one of four types of seizure-like events, or “other.”
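The article does not name the four seizure-like categories, so the short Python sketch below uses placeholder class names. It only illustrates one plausible way such expert-annotated EEG segments could be organized, with disagreement among annotators preserved as a soft label rather than a single hard category.

```python
# Minimal sketch of organizing expert-labeled EEG segments for a six-way
# classifier. Category names other than "seizure" and "other" are
# placeholders; the article only says there are four seizure-like categories.
from dataclasses import dataclass
import numpy as np

CLASSES = ["seizure", "seizure_like_1", "seizure_like_2",
           "seizure_like_3", "seizure_like_4", "other"]

@dataclass
class LabeledSegment:
    signal: np.ndarray   # EEG samples, shape (channels, timesteps)
    expert_votes: dict   # e.g. {"seizure": 14, "other": 3}

    @property
    def soft_label(self) -> np.ndarray:
        """Turn expert votes into a probability vector over the classes,
        preserving ambiguity when annotators disagree."""
        votes = np.array([self.expert_votes.get(c, 0) for c in CLASSES], float)
        return votes / votes.sum()
```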

In EEG charts, every event manifests as a distinct shape or recurring pattern within the wavy lines. However, because these charts are rarely clean, noisy data can mask warning signs or blur together into a confusing picture.

There is a ground truth, but it is difficult to read. The inherent ambiguity in many of these charts meant we had to train the model to place its decisions within a continuum rather than well-defined separate bins.

Stark Guo, Ph.D. Student, Duke University

That continuum looks something like a multicolored starfish swimming away from a predator. Each differently colored arm represents one type of seizure-like event. The closer the algorithm places a chart to the tip of an arm, the more confident it is in its decision; charts it is less certain about sit closer to the central body.
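For a concrete picture of that geometry, here is a minimal sketch, assuming the layout works roughly as described: each class is assigned an arm direction, and a chart's position is its probability-weighted combination of those directions, so confident predictions land near an arm tip and ambiguous ones near the center. This only mimics the described visualization; it is not the authors' actual code.

```python
# Illustrative "starfish" layout: map a 6-class probability vector to a 2-D
# point whose distance from the center reflects the model's confidence.
import numpy as np

def starfish_coordinates(class_probs: np.ndarray) -> np.ndarray:
    """Map a probability vector over k classes to a 2-D point."""
    k = len(class_probs)
    angles = 2 * np.pi * np.arange(k) / k                          # one arm per class
    arm_tips = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (k, 2) unit vectors
    return class_probs @ arm_tips                                  # weighted average of arm tips

# A confident "seizure" call lands at that arm's tip...
print(starfish_coordinates(np.array([1.0, 0, 0, 0, 0, 0])))
# ...while a maximally ambiguous chart sits at the central body.
print(starfish_coordinates(np.full(6, 1 / 6)))
```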

In addition to this visual categorization, the algorithm highlights the patterns in the brainwaves it used to arrive at its conclusion and presents three examples of professionally diagnosed charts that it considers similar.

This lets a medical professional quickly look at the important sections and either agree that the patterns are there or decide that the algorithm is off the mark. Even if they are not highly trained to read EEGs, they can make a much more educated decision.

Alina Barnett, Postdoctoral Research Associate, Duke University
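One common way to surface “three comparable charts” like this is a nearest-neighbour lookup in a learned feature space. The sketch below shows that idea only; the study's actual similarity mechanism may differ, and the feature extractor implied by `reference_features` is purely illustrative.

```python
# Hedged sketch: retrieve the three expert-diagnosed charts whose learned
# features are closest to the new chart's features.
import numpy as np

def three_most_similar(query_features: np.ndarray,
                       reference_features: np.ndarray) -> np.ndarray:
    """Return indices of the 3 reference charts closest to the query.

    query_features: shape (d,) feature vector for the new EEG chart.
    reference_features: shape (n, d) features of expert-diagnosed charts.
    """
    distances = np.linalg.norm(reference_features - query_features, axis=1)
    return np.argsort(distances)[:3]

# The charts at the returned indices would be shown to the clinician
# alongside the model's highlighted EEG patterns.
```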

The joint team tested the system by having eight medical professionals with relevant experience classify 100 EEG samples into six categories, once with and once without the assistance of artificial intelligence.

Every participant's performance improved significantly; on average, accuracy rose from 47% to 71%. Their results also outperformed those of a prior study that used a comparable “black box” approach.
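As a rough sketch of how that comparison is scored, assuming accuracy simply means agreement with the reference diagnosis for each chart: every reviewer labels the same 100 samples twice, once unassisted and once with the model's output visible. The 47% and 71% figures are the article's reported averages, not a re-computation.

```python
# Minimal sketch of the evaluation metric: fraction of charts a reviewer
# labels the same as the reference diagnosis.
def accuracy(reviewer_labels: list[str], reference_labels: list[str]) -> float:
    matches = sum(r == t for r, t in zip(reviewer_labels, reference_labels))
    return matches / len(reference_labels)

# Averaged over the eight reviewers:
#   unassisted mean accuracy ~ 0.47
#   AI-assisted mean accuracy ~ 0.71
```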

Cynthia Rudin said, “Usually, people think that black box machine learning models are more accurate, but for many important applications, like this one, it is just not true. It is much easier to troubleshoot models when they are interpretable. And in this case, the interpretable model was more accurate. It also provides a bird’s eye view of the types of anomalous electrical signals that occur in the brain, which is useful for care of critically ill patients.”

The research was funded by the National Science Foundation, the National Institutes of Health, and the DHHS Nebraska Stem Cell Grant.

Journal Reference:

Barnett, A. J., et al. (2024) Improving Clinician Performance in Classifying EEG Patterns on the Ictal–Interictal Injury Continuum Using Interpretable Machine Learning. New England Journal of Medicine AI. doi.org/10.1056/aioa2300331

Source: https://pratt.duke.edu/
