1Department of Otolaryngology, Audiology and Phoniatrics
Unit, University of Pisa, Italy.
2Department of Clinical and Experimental Medicine,
University of Pisa, Italy.
3Department of Surgical, Medical and Molecular Pathology
and Critical Care Medicine, University of Pisa, Italy.
4Cochlear Implant Unit, Karolinska Institute, Sweden.
*Corresponding author: Francesco Lazzerini
Department of Otolaryngology, Audiology and Phoniatrics
Unit, University of Pisa, 56126 Pisa, Italy.
Email ID: francilazzerini@gmail.com
Received: Feb 03, 2025
Accepted: Mar 19, 2025
Published Online: Mar 26, 2025
Journal: Journal of Neurology and Neurological Sciences
Copyright: © Lazzerini F (2025). This Article is distributed under the terms of Creative Commons Attribution 4.0 International License
Citation: Lazzerini F, Bruschini L, Baldassari L, Forli F, Berrettini S. Cross-modal audio-visual rehabilitation in unilateral cochlear implanted patients: A pilot study. J Neurol Neuro Sci. 2025; 1(1): 1001.
Background: For many hearing-impaired individuals, even with the aid of Hearing Aids (HAs) or Cochlear Implants (CIs), one of the most demanding tasks remains understanding speech in noise, which is closely tied to spatial hearing. Recent research has highlighted the significant advantages of integrating information from various sensory modalities, a phenomenon known as cross-modal stimulation. It has been demonstrated that visual cues can modulate the mental representation of sound sources and spatial hearing. This study aims to investigate whether a training program based on cross-modal audio-visual stimulation can benefit the hearing-impaired population, in particular CI recipients.
Methods: A monocentric national prospective clinical intervention study was conducted to assess the effectiveness of a cross-modal audio-visual stimulation training in unilateral cochlear implant recipients. A control group of typically hearing individuals was also included. Both the study group and the control group underwent evaluations before and after the treatment for: localization abilities, speech perception abilities in silence and with background noise, Speech Reception Threshold (SRT), and Patients' Reported Outcomes (PROs).
Results: Although the difference in localization abilities between the two groups was statistically significant, the mean improvement in localization after treatment in the study group narrowly missed statistical significance. Speech perception abilities in silence and with background noise, as well as SRT values and PRO results, showed no statistically significant differences before and after rehabilitation in either group.
Conclusion: Cross-modal rehabilitation in the hearing-impaired population showed potential benefits, though further investigations with a wider study population are needed.
Keywords: Cochlear implant; Cross-modal stimulation; Rehabilitation; Spatial hearing.
Hearing Loss (HL) is a pervasive global health issue with significant societal and individual consequences, impacting not only audiological capabilities but also communication, mental health and overall quality of life, with a substantial burden on the economy, productivity and healthcare costs. Even though many of the consequences of HL can nowadays be relieved thanks to advancements in Hearing Aids (HAs) and Cochlear Implants (CIs), some tasks remain difficult, one of the most demanding being speech understanding in noise [1,2]. Speech understanding in noise is mostly related to the perception of binaural cues and, more specifically, to spatial hearing.
Spatial hearing can be defined as the ability of the auditory system to use information about the location of a sound source and its arrival path for sound analysis. Spatial hearing not only assists in orienting to the sound source, but also in perceptually segregating the target sound from interfering noise arriving from different directions (squelch effect) [3,4]. The lack of central binaural integration of hearing cues, and therefore the impairment of spatial hearing, persists even after partial remediation achieved with proper prosthetization [5,6].
Among the main actors in multisensory central integration are the Multisensory Neurons (MN); this neuronal population has been proven to exist in the Superior Colliculus (SC) and in the cortex of animals [7,8] as well as of humans [9], and it is capable of responding to stimuli from different sensory modalities. The SC thus serves as a vital hub for integrating inputs from multiple sensory systems; its capacity for multisensory processing appears to be latent in newborns, gradually emerging during postnatal development, contingent upon the interactions between neural networks and environmental stimuli [10]. This suggests that the ability to seamlessly combine congruent cross-modal stimuli takes time to mature, implying that exposure to cross-modal signals may play a crucial role in shaping functional multisensory principles [11]. Further work has shown that multisensory integration reaches maturity at around 15 years of age [12], and has identified a parallel developmental trajectory for visuo-haptic multisensory integration [13]. These studies lend support to the idea that multisensory integration can indeed be a trainable process, and therefore applicable to both children and the adult population.
It has been proven that the mental representation of sound sources, and thus spatial hearing, can be modulated by vision, which in turn plays a crucial role in fine-tuning human auditory spatial perception [14,15]. In support of this, findings from studies involving both blind and blindfolded individuals, as well as research involving patients with visual-field impairments, have consistently shown specific alterations in sound localization in those subjects [16-20]. This knowledge provides the basis of multisensory training in which auditory stimuli are presented with quasi-coincident visual stimuli, in terms of spatial and temporal alignment [21].
Rehabilitation with cross-modal audio-visual stimulation in patients with visual deficits has by now been proven to give clear benefits [22], with interesting results in individuals affected by hemianopia, in whom audio-visual interaction was shown to enhance visual detection [23] and visual localization [24], and to reduce saccadic reaction times [25]. Furthermore, it has been established that a sound synchronized both spatially and temporally with a visual stimulus can enhance visual perception in the blind hemifield of hemianopic patients and improve their environmental orientation [26].
Bolognini and collaborators [27] delved into the possibility of inducing long-lasting enhancements in visual field deficits through a training method centred around systematic audio-visual stimulation of the visual field. Their results demonstrated progressive improvements in visual detection during the training period, coupled with enhancements in visual oculomotor exploration, which in turn allowed patients to efficiently compensate for vision loss in the affected hemifield, ultimately facilitating spatial orientation. This improvement remained stable at the one-month follow-up session. Bolognini's rehabilitation protocol was later applied to children and adolescents by Tinelli and colleagues [28], affirming the effectiveness of this rehabilitation approach for young individuals with visual field defects stemming from unilateral brain lesions acquired during childhood.
However, the rehabilitation of hearing deficits with cross-modal audio-visual stimulation is a much less travelled road. The most studied hearing-impaired population in this field has been represented by monaural subjects, specifically by unilaterally deafened people and by individuals with bilateral severe-to-profound deafness who received a unilateral CI. The monaural condition in fact results in poor spatial hearing [29], especially pronounced in patients who have undergone unilateral cochlear implantation [30,31].
Subsequent findings demonstrated that performance improvements in such populations were significantly greater after a training that exploited spatially and temporally congruent audio-visual inputs than after a training based on auditory information alone, both in individuals with normal hearing wearing monaural plugs [32,33] and in patients with a CI [34-36].
It has also been proven that monaural subjects likely utilize information from other sensory modalities (referred to as cross-modal compensation) for sound localization, with a particular emphasis on the visual channel [37].
The purpose of this study was to understand whether a training protocol based on cross-modal audio-visual stimulation could benefit the hearing-impaired population in terms of spatial hearing. This would be achieved through MN training, thereby enhancing the central integration of audio-visual stimuli and mitigating spatial hearing deficits by improving multisensory integration, ultimately enhancing spatial perception.
As CI recipients represent a specific and relatively homogeneous group within the hearing-impaired population, the study focused on them. Spatial hearing was evaluated in terms of sound localization abilities and speech perception abilities with background noise.
The study group comprised adult subjects (over 18 years of age) who had been making regular use of a unilateral cochlear implant for at least 2 years and had satisfactory speech perception results (following the protocol described in Burdo S et al., 1997), meaning better than 75% in silence and better than 50% with background noise at a signal-to-noise ratio of +10 dB. All participants had no uncorrected visual impairment and no cognitive deficiency.
A population of typically hearing subjects, also older than 18 years and without uncorrected visual impairment or cognitive deficiency, was enrolled as the control group.
Each participant underwent an evaluation before the rehabilitation (T0) and one at the end of the Treatment (T1); in each evaluation the assessed parameters were localization abilities, measured with a rehabilitation device known as AvDesk, speech perception abilities in silence and with background noise, the Speech Reception Threshold (SRT) assessed through the Matrix Sentence Test, and specific Patient Reported Outcome (PRO) questionnaires (the SSQ and the NCIQ).
Localization abilities were assessed with a rehabilitation device called AvDesk, originally developed for the cross-modal audio-visual rehabilitation of subjects affected by hemianopia or quadrantanopia. The main component of the device is a stimulation panel, through which audio-visual stimuli were presented to the patient standing in front of it. Once this rollable panel is opened (taking a semicircular 180° shape), it presents 12 active segments, each containing 2 LED lights and a loudspeaker, 20 passive segments with a structural function placed two by two between the active segments (one every 15°), and 1 central control segment containing a high-resolution camera for eye and head movement monitoring as well as a guiding LED.
Participants were seated in a chair positioned approximately 50 to 60 centimetres in front of the AvDesk, facing forward, with their body aligned with the centre of the device. The initial fixation point was set at the midline. Subjects were instructed to maintain their gaze on the fixation point without making any head movements. To detect the presence of a sound, they were required to press a wireless button. Prior to each trial, a built-in software system within the device monitored their fixation using the high-resolution camera, and initiated the trial only after confirming the correct posture. Treatment was administered with the unilateral cochlear implant in each participant's standard setting, by presenting two types of sensory stimulation: first, a unimodal auditory condition, with an auditory stimulus alone; second, a cross-modal condition, in which sounds and visual stimuli were presented in the same location (spatially coincident). The number of blocks varied for each participant, depending on their individual progress in each session with different Stimulus Onset Asynchronies (SOAs). The treatment began with a SOA of 500 ms, followed in subsequent training sessions by progressively shorter SOAs (250 ms, 100 ms). The acoustic stimuli were generated by piezoelectric loudspeakers (0.4 W, 8 V) located inside each active segment, and consisted of a comfortably audible 1500 Hz tone calibrated on each subject's free-field hearing abilities. The visual stimuli, which always preceded the auditory targets, consisted of illuminations of a small red LED with a luminance of 90 cd/m² each.
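The sketch below illustrates, in schematic form, how a training block of this kind could be organized. It is not the AvDesk software; the assumption that each active panel is presented once per condition within a block, as well as all function and variable names, are ours for illustration only.

```python
import random

# Illustrative sketch of the training structure described above (not the AvDesk
# software). Assumption: within one block every active panel (1-12) is presented
# once in each condition; successive sessions use progressively shorter SOAs.

SOAS_MS = [500, 250, 100]                     # stimulus onset asynchronies per session
CONDITIONS = ["cross-modal", "unimodal-auditory"]

def build_block(soa_ms):
    """Return a shuffled list of (condition, panel, soa_ms) trials for one block."""
    trials = [(cond, panel, soa_ms if cond == "cross-modal" else None)
              for cond in CONDITIONS
              for panel in range(1, 13)]      # 12 active panels
    random.shuffle(trials)
    return trials

# Example: first training session, 500 ms SOA
for condition, panel, soa in build_block(SOAS_MS[0])[:3]:
    print(condition, "panel", panel, "SOA (ms):", soa)
```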
Each subject was instructed on how the rehabilitation worked and encouraged to focus on the presence and direction of the acoustic stimuli. In the cross-modal condition, the visual impulse guided the subject's attention in a given direction, anticipating the emission of the sound. In the unimodal auditory condition, each subject attempted to locate the sound blindly. The progressive shortening of the SOAs made the task more difficult. For the localization test, the device emitted twelve consecutive unimodal sound-only stimuli, each from a randomized active panel. Subjects were asked to write on a notepad the number of the panel they believed each sound came from. The procedure was double-blinded, as the experimenter too was unaware of the exact location of the sound source. After this test, the AvDesk software revealed which panel had emitted each sound; by comparing this with the subject's answers it was possible to calculate the discrepancy between them, and thus the degrees of error of each answer and, ultimately, the mean degrees of error for each session.
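As a worked example of this error metric, the following sketch computes the mean localization error for one twelve-trial session. It assumes, purely for illustration, that the 12 active panels are evenly spaced 15° apart across the 180° arc; this mapping may not match the exact AvDesk geometry, and the data are hypothetical.

```python
# Illustrative computation of the mean localization error for one session.
# Assumption (for illustration only): active panels 1-12 lie at the centres of
# successive 15-degree sectors across the 180-degree arc.

def panel_azimuth(panel):
    """Azimuth (degrees from the leftmost edge) of the centre of a given panel."""
    return (panel - 1) * 15.0 + 7.5

def mean_localization_error(emitted_panels, reported_panels):
    """Mean absolute angular error (degrees) between emitted and reported panels."""
    errors = [abs(panel_azimuth(e) - panel_azimuth(r))
              for e, r in zip(emitted_panels, reported_panels)]
    return sum(errors) / len(errors)

# Example: one twelve-stimulus session (hypothetical data)
emitted  = [3, 7, 11, 1, 5, 9, 2, 12, 6, 8, 4, 10]
reported = [4, 7, 10, 2, 5, 8, 2, 11, 6, 9, 4, 10]
print(f"Mean error: {mean_localization_error(emitted, reported):.1f} degrees")
```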
Speech perception was evaluated by the same speech therapist in all patients to eliminate bias, in the Italian language, with live voice and no lip-reading. The disyllabic word recognition score was determined using lists of 20 Italian words presented at a sound level of 65 dB in a free-field setting, according to the "Protocollo comune di valutazione dei risultati in audiologia riabilitativa" by Sandro Burdo et al.
For assessing open-set speech recognition scores in the presence of background noise, a Signal-to-Noise Ratio (SNR) of +10 dB was employed. During the tests, the speech therapist, as well as the loudspeaker generating the noise, were positioned in front of the patients (S0N0 configuration).
Each subject underwent the perceptive tests in their everyday hearing condition: with the cochlear implant for the study group, and unaided for the control group.
The Matrix Sentence Test comprises a 50-word base matrix encompassing ten names, ten verbs, ten numerals, ten adjectives, and ten nouns. From this foundational matrix, semantically unpredictable sentences following a fixed grammatical structure are randomly generated [38]. During this test, the examiner has the flexibility to adjust the speech and background volume. We maintained the noise fixed at 65 dB.
The test calculates the Speech Reception Threshold (SRT), i.e. the difference between speech and background noise levels (the signal-to-noise ratio) at which 50% of the words are correctly recognized.
Furthermore, the results can be compared to a national reference value (in Italy, -7.1 dB) [39], with a standard deviation of the SRT across the test lists of 0.2 dB and a test-retest reliability of 0.6 dB.
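Conceptually, the SRT is the point at which a listener's psychometric function crosses 50% word recognition. The sketch below is illustrative only: the Matrix test estimates the SRT with an adaptive procedure, whereas here invented data points are fitted with a logistic function to show how such a threshold can be read off.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative only: the Matrix test estimates the SRT adaptively, but the
# reported value corresponds to the SNR at which a logistic psychometric
# function crosses 50% word recognition. The data below are invented.

def psychometric(snr_db, srt_db, slope):
    """Word recognition probability as a logistic function of the SNR in dB."""
    return 1.0 / (1.0 + np.exp(-slope * (snr_db - srt_db)))

snr_db = np.array([-12.0, -9.0, -6.0, -3.0, 0.0, 3.0])    # presented SNRs (dB)
p_corr = np.array([0.05, 0.20, 0.55, 0.80, 0.93, 0.98])   # proportion of words correct

(srt_est, slope_est), _ = curve_fit(psychometric, snr_db, p_corr, p0=[-6.0, 1.0])
print(f"Estimated SRT: {srt_est:.1f} dB SNR (Italian reference value: -7.1 dB)")
```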
Two questionnaires were then administered to the subjects that were part of the study group. The first one was the Speech, Spatial, and Qualities of Hearing Scale (SSQ). The SSQ is a comprehensive assessment tool designed to evaluate various aspects of hearing impairment across multiple domains. This scale encompasses a wide range of hearing-related challenges, including speech perception under both quiet and spatially complex conditions, localization tasks, and the subjective assessment of speech quality. The SSQ measures the perceived quality of speech in terms of its naturalness, clarity, the ability to differentiate between speakers, and the perception of music.
The SSQ questionnaire consists of a total of 49 questions, and respondents provide their ratings on a scale ranging from 0 to 10 for each question. The scale is divided into three subscales, each with an independent score: speech, spatial, and qualities of hearing. This multidimensional approach allows for a comprehensive evaluation of an individual's hearing difficulties, providing insights into various facets of their auditory experiences and challenges.
The SSQ is a valuable tool for assessing and addressing hearing impairments, as it offers a nuanced understanding of the specific areas where individuals may struggle, ultimately aiding in the development of targeted interventions and treatment strategies [40].
We administered the validated Italian version of the SSQ to the patients, all of whom were native Italian speakers.
The second questionnaire was the Nijmegen Cochlear Implant Questionnaire (NCIQ), developed to establish a reliable self-assessment tool tailored specifically to individuals with CIs. It comprises six distinct sub-domains, each targeting different aspects of the CI recipient's quality of life: basic sound perception, advanced sound perception, speech production, self-esteem, daily activities, and social interactions. Within each sub-domain, respondents encounter ten items, presented as statements, with five response options on a 5-point Likert-type scale, ranging from "never" to "always" for 55 statements and from "no" to "good" for 5 statements. Participants are asked to select the option that best reflects their personal experience related to the question at hand. If a particular statement does not apply, respondents have the option to give a sixth response: 'not applicable'. To calculate the score for each sub-domain, the response categories are converted, with 1 equating to 0, 2 to 25, 3 to 50, 4 to 75, and 5 to 100. The scores for the ten items within each sub-domain are summed and divided by the number of completed items. Higher scores indicate a better quality of life. The NCIQ has undergone adaptations and validations in many languages, including Italian. The initial study on the NCIQ reported favourable internal consistency and satisfactory reliability, suggesting that the NCIQ holds promise as a valuable tool for evaluating the impact of cochlear implants on a patient's quality of life and for conducting outcomes research in audiology [41]. In this case too, a validated Italian version of the questionnaire was given to the study subjects [42].
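As a minimal sketch of the sub-domain scoring rule just described (with hypothetical responses, and leaving aside item-level details of the published instrument such as any reverse-keyed items), the conversion and averaging can be written as follows.

```python
# Minimal sketch of the NCIQ sub-domain scoring rule described above.
# Responses are on a 1-5 Likert scale; None marks a 'not applicable' answer.

LIKERT_TO_SCORE = {1: 0, 2: 25, 3: 50, 4: 75, 5: 100}

def nciq_subdomain_score(responses):
    """Mean converted score over the completed items of one 10-item sub-domain."""
    completed = [LIKERT_TO_SCORE[r] for r in responses if r is not None]
    if not completed:
        return None                 # no usable items in this sub-domain
    return sum(completed) / len(completed)

# Example: one sub-domain with one 'not applicable' item (hypothetical data)
print(round(nciq_subdomain_score([3, 4, 4, 5, 2, None, 3, 4, 5, 3]), 1))   # -> 66.7
```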
Each daily session lasted approximately one and a half hours, with breaks provided based on the participant's performance and fatigue level. The entire training spanned five consecutive days.
The study group was composed of 4 unilateral CI recipients, 1 female and 3 males, with a mean age of 47.5 years (range 26 to 65 years). The mean age at cochlear implantation was 35 years (range 3 to 60 years). In 2 cases the CI was on the right side, in the other 2 on the left side. All the subjects had been implanted with a Cochlear Nucleus system; 2 participants used the CP900 sound processor and 2 the CP1000.
The control group was composed of 5 typically hearing subjects, 4 males and 1 female, with a mean age of 30.6 years (range 25 to 42). All the enrolled subjects had undergone tonal audiometry, showing a bilateral hearing threshold below 25 dB at all tested frequencies (250 Hz, 500 Hz, 1000 Hz, 2000 Hz, 4000 Hz, 8000 Hz).
Gender distribution and the difference in age at enrolment between the study and control groups were not statistically significant (p=0.722 and p=0.079, respectively).
All subjects of the study and the control group successfully completed the full rehabilitation program. In the study group, the mean number of stimuli for each daily session was 395 (range 360 to 430), while the mean total number of stimuli at the end of rehabilitation was 1976 (range 1800 to 2150). The mean intensity of the comfortably audible level for the auditory stimuli was 35 dB HL (range 20 to 40). The mean SOA was 323 ms (range 250 to 500 ms).
In the control group, the number of stimuli for each daily session was fixed at 400, and the mean number of stimuli at the end of rehabilitation was 2000 for each subject. The intensity of the comfortably audible level for the auditory stimuli was 20 dB for all the subjects in the control group. The delay between visual and auditory stimulations was fixed at 250 ms.
The differences between the mean number of stimuli per daily session and the mean total number of stimuli at the end of rehabilitation in the study and control groups were not statistically significant (p=1.000 and p=1.000, respectively). The difference in mean intensity of the auditory stimuli between the study and control groups was not statistically significant either (p=0.067). On the other hand, the difference in the mean delay between visual and auditory stimulations between the study and the control group was statistically significant (p=0.016).
In the study group the mean degrees of error at the localization test were 53.37° (range 33.75 to 83.75°) at T0. At T1 the mean degrees of error dropped to 41.33° (range 31.36 to 47.72°). The mean difference in degrees of error between T0 and T1 in the study group narrowly missed statistical significance (p=0.068). The mean improvement in localization after the rehabilitation protocol in the study group was 12.04°.
In the control group the mean degrees of error at the localization test were 14.7° (range 3.72 to 32.25°) at T0. At T1 the mean degrees of error were 11.4° (range 5 to 18.5°). The mean difference in degrees of error between T0 and T1 in the control group was not statistically significant (p=0.500). The mean improvement in localization after the rehabilitation protocol in the control group was 3.3°.
Localization abilities, both at T0 and T1, were significantly poorer in the study group than in the control group (p=0.016 and p=0.016, respectively). The difference in the mean improvement at the localization test between the study and the control group was not statistically significant (p=0.413) (Figure 1).
In the study group, no correlation was found between localization abilities at T0 or T1 and the age at rehabilitation, age at implantation, internal cochlear implant model, or sound processor model.
The mean disyllabic word recognition score of the study group was 83.75% (range 70 to 100%) in silence and 56.25% (range 40 to 90%) at SNR +10 dB at T0, and 86.25% (range 75 to 100%) in silence and 62.50% (range 45 to 90%) in noise at T1. The differences between speech perception abilities in silence and in noise at T0 and T1 were not statistically significant (p=0.157 and p=0.180, respectively) (Figure 2).
The control group, on the other hand, already showed a mean disyllabic word recognition score of 100% both in silence and with background noise at T0, and was therefore not tested again for speech perception abilities at T1.
No correlation was found in the study group between speech perception abilities at T0 or T1 and the age at rehabilitation, age at implantation, internal implant model, or sound processor model.
At T0, in the study group, the mean SRT at the Italian Matrix Test was 3.37 dB (range 1.4 to 6). At T1 the SRT slightly increased to 3.57 dB (range 0.8 to 9.6), without reaching a statistically significant difference from the pre-rehabilitation mean values (p=0.715) (Figure 3).
On the other hand, in the control group at T0 the mean SRT at the Italian Matrix Test was -6.54 dB (range -7 to -6.2). Similarly, at T1 the mean SRT was -6.50 dB (range -7.1 to -6.0). Again, the difference in mean SRT between T0 and T1 in the control group was not statistically significant (p=0.581).
No correlation was found in the study group between the SRT at T0 or T1 and the age at rehabilitation, age at implantation, internal implant model, or sound processor model.
Only the study group was submitted to self-reported outcome questionnaires. At T0, the mean results at the SSQ were 57.0 points (range 45.0 to 71.0), 45.37 points (range 18.0 to 82.0) and 64.50 points (range 33.0 to 94.5) for the speech, spatial and quality areas respectively, while at T1 they were 57.0 points (range 45.0 to 71.0), 47.25 points (range 22.0 to 82.0) and 64.50 points (range 33.0 to 94.5). The differences between mean scores at T0 and T1 were not statistically significant (p=1.00, p=0.180 and p=1.00, respectively).
The mean results of the NCIQ at T0 in the study group were 63.5 (range 42.5 to 97.0), 71.8 (range 50.5 to 88.5), 65.8 (range 37.5 to 77.0), 65.8 (range 20.5 to 75.5), 68.3 (range 20.5 to 78.5) and 68.0 (range 27.5 to 80.5) for the basic sound perception, advanced sound perception, speech production, self-esteem, activity and social interaction sections respectively. At T1 they were, respectively, 62.5 (range 40.5 to 99.0), 70.0 (range 55.5 to 88.5), 65.0 (range 39.5 to 78.0), 65.5 (range 20.5 to 75.0), 66.3 (range 21.5 to 77.0) and 67.2 (range 27.5 to 78.5). For this PRO, the differences between mean scores at T0 and T1 in each sub-domain were not statistically significant (p=0.715, p=0.593, p=0.465, p=1.00, p=0.144, p=0.180, respectively).
No correlation was found between the PRO results at T0 or T1 and the age at rehabilitation, age at implantation, number and intensity of the stimuli, internal implant model, or sound processor model.
Analysis of the other noticeable correlations in our data showed that the age at implantation was negatively correlated with speech perception abilities in silence both at T0 and T1 (ρ= -0.961, p=0.039 and ρ= -0.952, p=0.048, respectively), and with background noise at T1 (ρ= -0.951, p=0.041). Additionally, the mean SOA was positively correlated with the localization error at T1 (ρ=0.772, p=0.015).
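For readers wishing to reproduce this kind of analysis, the sketch below computes a rank correlation of the sort reported here. We assume a Spearman correlation (suggested by the ρ notation); the values in the example are invented placeholders, not the study data.

```python
from scipy.stats import spearmanr

# Sketch of the correlation analysis reported above, assuming Spearman's rank
# correlation (suggested by the rho notation). The values below are invented
# placeholders, not the study data.

age_at_implantation = [3, 28, 50, 60]       # years (hypothetical)
speech_in_silence_t0 = [100, 90, 80, 70]    # percent correct at T0 (hypothetical)

rho, p_value = spearmanr(age_at_implantation, speech_in_silence_t0)
print(f"rho = {rho:.3f}, p = {p_value:.3f}")
```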
Our observations indicate that individuals with unilateral cochlear implants have the potential to enhance their sound localization skills, even with limited perception of auditory cues. Therefore, there is a possibility for unilateral CI (UCI) users to experience improvements in acoustic spatial perception within the experimental framework we investigated. In particular, we showed that UCI users can enhance their ability to localize sounds through repeated trials in a cross-modal audio-visual training program, and that this reduction in errors extends to tasks beyond the specific training.
A previous investigation conducted by Luntz and colleagues had already suggested the feasibility of enhancing spatial hearing skills in individuals with unilateral cochlear implants [30]. The present study marks an advancement over previous research for several reasons: first, we used multimodal stimulation; moreover, our study evaluates the training effects on a sound localization task to assess generalizability; furthermore, we demonstrated the potential for improvement with a relatively brief training regimen consisting of only 5 days of practice, notably shorter than in previous studies [30,35,36].
Rehabilitation programs based on cross-modal stimulation are a growing reality.
First of all, multisensory integration has been proven to be a trainable ability; as such, it becomes more and more effective as the individual grows [10,12,13,43-50].
Our data seem to confirm this concept. In fact, all the study subjects found the rehabilitation task progressively easier, gradually accepting shorter and shorter SOAs. The reduction of the delay between the light and the sound stimuli made the sound localization effort more demanding.
Based on the three fundamental principles of multisensory integration [51], many authors have proven beneficial effects of audio-visual stimulation training for visual [22-28,52,53] and, to a lesser extent, hearing dysfunction [32-36,54-56].
On the visual side, hemianopic patients represent the most studied population [23,27,28]. In this type of visually impaired subject, cross-modal audio-visual stimulation has been proven strongly and significantly effective in enhancing visual detection and visual localization and in reducing saccadic reaction times.
On the hearing side, previously studied populations have consisted of typically hearing subjects, also in a monaural condition (using monaural plugs), unilaterally hearing-impaired subjects, and CI recipients. Previous audio-visual training protocols showed improvements in localization abilities [32,33,54], as well as in speech perception and SRT [35].
Our data partially confirm these previous findings, indicating an improvement in sound localization abilities in the study population of unilateral cochlear implanted adults, although without statistical significance. On the other hand, our results do not suggest an improvement in speech perception performance.
Both the lack of statistical significance of our results and the lack of significant differences in post-rehabilitation outcomes between the study and the control group could be due to the small sample size.
The AvDesk device is simple to set up and can be used anywhere. In some previous experiences on hemianopic subjects, some of them children, the AvDesk device has been used for telemedicine rehabilitation [57]. With the know-how acquired with this device, it would be possible to extend our experience to a telerehabilitation regimen in future studies.
As discussed above, recent findings, in particular from [35], seem to suggest that an improvement in sound localization abilities can be associated with an improvement in speech perception performance. This could be speculated to be associated with a general improvement in spatial hearing determined by the cross-modal training; spatial hearing is, indeed, fundamental in enhancing the squelch ability. Unfortunately, in our results it was not possible to assess a correlation between the improvement in localization tasks and the speech perception measures.
This lack of a noticeable correlation between localization and speech perception improvement in our cohort could be due to several factors:
− The limited duration of the rehabilitation protocol;
− The small size of the cohort;
− The sampling of the study population (made up of unilateral cochlear implant recipients);
− The type of rehabilitation.
Finally, our study also assessed the perceived benefit of the cross-modal rehabilitation in a cohort of unilaterally cochlear implanted subjects with Patient Reported Outcomes (PROs). PROs are crucial for assessing the subjective experiences and quality of life of individuals, providing valuable insights into the impact and effectiveness of healthcare interventions from the patient's perspective. No statistically significant differences in terms of self-reported benefit emerged from our experimental study. But, again, the small size of the study population as well as the short time interval between the questionnaire administrations have surely had an impact on the results.
Limits of the study
It is important to acknowledge several limitations associated with this study.
First, the sample size was small, which may limit the applicability of the findings to a larger population and explain the lack of statistical significance. Unfortunately, enrolment was difficult because of the time-consuming nature of the rehabilitation protocol: few patients agreed to undergo such a demanding procedure, without compensation and without the certainty of an improvement in their hearing.
Second, the study did not include long-term follow-up data, which would be important to assess the durability of the observed improvements over time.
Lastly, this study focused on a specific population of cochlear implanted subjects, and caution should be exercised when generalizing the results to other populations or settings, such as bilateral CI users or cochlear implanted single-sided deafness (SSD) users.
Our study reveals promising prospects for enhancing sound localization skills in individuals with unilateral cochlear implants, even in the presence of very limited auditory cues. While our results did not achieve statistical significance, they indicate the potential for improved acoustic spatial perception within the experimental context explored.
So, can a cross-modal audio-visual stimulation protocol ameliorate spatial hearing in the hearing-impaired population? Our study affirms the feasibility of this objective, at least in the case of unilateral cochlear implant recipients. However, it is important to highlight that our study did not demonstrate a substantial improvement in speech intelligibility.
Despite the study’s limitations, including a small sample size and the absence of long-term follow-up data, it provides valu able insights into the potential benefits of cross-modal audio visual stimulation in cochlear implant users. Future research, addressing the mentioned limitations and extending the follow up period, could strengthen the validity and reliability of these f indings, contributing to the field of cross-modal rehabilitation.
Author contributions: Conceptualization, F.L.; methodology, F.L.; software, F.L.; validation, F.F., L.B. (Luca Bruschini) and S.B.; formal analysis, F.L.; investigation, F.L.; data curation, F.L.; writing—original draft preparation, F.L. and L.B. (Luca Baldassari); writing—review and editing, F.L. and L.B. (Luca Baldassari); visualization, S.B.; supervision, F.F.; project administration, F.L. and S.B. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional review board statement: The study was conducted in accordance with the Declaration of Helsinki and assessed by the local Ethics Committee (protocol code CET16/24) for studies involving humans.
Informed consent statement: Informed consent was obtained from all subjects involved in the study.
Data availability statement: The data generated or analysed during the study are available upon reasonable request. Interested researchers or individuals may contact the corresponding author to request access to the data.
Conflicts of interest: The authors declare no conflicts of interest.