Digital technology automates the assessment of disease severity
Bonn/Germany, April 18, 2023. Researchers at DZNE and the University Hospital Bonn, together with the Berlin-based company PeakProfiling GmbH, have developed a computer-assisted method that recognizes, with high accuracy, the severity of speech disturbances resulting from ataxia, a brain disease. They report on this in the scientific journal “npj Digital Medicine”. In the long term, the new methodology, which leverages artificial intelligence, could be used in research as well as in clinical routine.
The term “ataxia” refers to a group of rare, neurodegenerative brain diseases that manifest as unsteady gait, impaired swallowing and speech disorders, among other symptoms. “Pronunciation becomes slurred, speech rhythm irregular. The pace of speech is usually slowed and sluggish, but can suddenly accelerate. All of this impairs the ability to communicate,” explains Dr. Marcus Grobe-Einsler, a DZNE researcher and clinician in the Department of Neurology of the University Hospital Bonn (UKB). “For assessing the severity of speech disturbance, there is an established classification system with six levels. Up to now, this classification has been done by hand, so to speak, by clinical professionals. This is time-consuming and to a certain extent subjective. In a proof-of-concept study, we have now been able to show that it is possible to automate and objectify the established classification by means of computer technology. Our approach could greatly simplify the procedures for determining the severity of ataxia.”
Cooperation with Industry
For these studies, Grobe-Einsler and colleagues cooperated with PeakProfiling GmbH. The Berlin-based company specializes in the analysis of voices and sounds. For the current study, voice recordings of 67 patients with predominantly mild or moderate ataxia were used. The recorded statements were responses to standardized tasks: for example, the study participants were asked to talk about their hobbies and to count aloud from 1 to 10 and back again. With the help of dedicated sound analysis software and “machine learning” algorithms - a form of artificial intelligence - the researchers identified more than one hundred characteristic features, including in the volunteers’ speech rhythm and in the modulation of their loudness.
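To give a rough idea of what such acoustic features can look like, the following Python sketch computes a handful of simple loudness and rhythm descriptors from a voice recording. It is only an illustration under assumptions: the actual PeakProfiling software and its feature set are not described in detail here, and the function name, file path and specific descriptors are hypothetical.

```python
# Illustrative sketch only: simple loudness and rhythm descriptors from a
# voice recording, using librosa and numpy. The real analysis pipeline of
# the study is not public; names and feature choices here are hypothetical.
import librosa
import numpy as np

def extract_speech_features(wav_path: str) -> dict:
    y, sr = librosa.load(wav_path, sr=None)  # load recording at its native sampling rate

    # Loudness modulation: statistics of the short-time RMS energy envelope
    rms = librosa.feature.rms(y=y)[0]
    loudness_mean = float(np.mean(rms))
    loudness_variability = float(np.std(rms) / (np.mean(rms) + 1e-9))

    # Speech rhythm: variability of intervals between detected acoustic onsets
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    intervals = np.diff(onsets)
    rhythm_irregularity = (
        float(np.std(intervals) / (np.mean(intervals) + 1e-9)) if len(intervals) > 1 else 0.0
    )
    speech_rate = float(len(onsets) / (len(y) / sr))  # onsets per second as a crude pace measure

    return {
        "loudness_mean": loudness_mean,
        "loudness_variability": loudness_variability,
        "rhythm_irregularity": rhythm_irregularity,
        "speech_rate": speech_rate,
    }
```

In practice, many such descriptors (the study mentions more than one hundred) would be computed per recording and passed on to a machine-learning model.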
A High Hit Rate
Based on these parameters, in a next step, the digital analysis system was tuned so that the computed severity matched as closely as possible the rating given by a panel of three experts who had assessed the voice samples. The experts’ verdict served as the reference. In the end, the computer-assisted approach achieved an accuracy of 80 percent on a set of recordings that had been excluded from the software’s tuning process and was therefore independent of it.
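The general idea - fitting a model to expert ratings and then checking it on recordings that played no part in the fitting - can be sketched as follows. This is a generic supervised-learning illustration, not the model used in the study; the feature matrix X, the expert ratings y and the chosen classifier are assumptions.

```python
# Illustrative sketch only: fit a classifier to expert severity ratings and
# evaluate it on a held-out set of recordings. X (features per recording) and
# y (expert ratings on the six-level scale) are assumed inputs; the model
# actually used in the study is not specified here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train_and_evaluate(X: np.ndarray, y: np.ndarray, seed: int = 0) -> float:
    # Hold out a portion of the recordings; they play no role in model fitting
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=seed
    )
    model = RandomForestClassifier(n_estimators=200, random_state=seed)
    model.fit(X_train, y_train)            # tune the system on the training portion
    predictions = model.predict(X_test)    # rate the unseen recordings
    return accuracy_score(y_test, predictions)  # fraction matching the expert reference
```

An accuracy computed this way on held-out recordings corresponds to the 80 percent figure reported above: the share of independent recordings for which the automated rating agreed with the expert reference.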
Potential Applications
“We now intend to further refine our method in larger studies and, in international collaborations, transfer it from German to other languages,” says Prof. Thomas Klockgether, Director of Clinical Research at DZNE and also head of the Department of Neurology of UKB. The degree of speech impairment is an essential criterion for evaluating the health condition of a person with ataxia, the neurologist explains. A method that objectifies and automates this assessment would therefore have great potential for both research and clinical practice. “Our technique could help with monitoring the course of the disease, and in addition, because of its degree of automation, it can be used efficiently in studies with many individuals. This is especially valuable in the context of drug trials. In this respect, there has recently been a lot of momentum in the field of ataxia, because there are new, albeit still experimental, therapeutic approaches.”
In addition, he notes, it is conceivable to integrate appropriate software into a smartphone app. “With ataxia, there are often fluctuations in health status that can only be captured sporadically through clinic visits. Using smartphones and digital technology, this could be done much more precisely - and the software could also tell patients about the effect of speech therapy or other treatment measures on their speech. Many patients want such direct feedback,” says Klockgether.