Journal article. Computer Aided Surgery. Year: 2000

A data fusion environment for multimodal and multi-informational neuronavigation

Abstract

OBJECTIVE: Part of the planning and performance of neurosurgery consists of determining target areas, areas to be avoided, landmark areas, and trajectories, all of which are components of the surgical script. Neurosurgeons now have access to multimodal medical imaging to support the definition of this script. The purpose of this paper is to present a software environment developed by the authors that allows full multimodal and multi-informational planning as well as neuronavigation for epilepsy and tumor surgery.

MATERIALS AND METHODS: We have developed a data fusion environment dedicated to neuronavigation, built around the Surgical Microscope Neuronavigator system (Carl Zeiss, Oberkochen, Germany). This environment includes registration, segmentation, 3D visualization, and interaction tools. It provides the neuronavigation system with the multimodal information involved in the definition of the surgical script: lesional areas, sulci, and ventricles segmented from magnetic resonance imaging (MRI); vessels segmented from magnetic resonance angiography (MRA); and functional areas from magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) for somatosensory, motor, or language activation. These data are considered relevant for the performance of the surgical procedure. The definition of each entity follows the same procedure: registration to the anatomical MRI data set (defined as the reference data set), segmentation, fused 3D display, selection of the entities relevant to the surgical step, encoding in a 3D surface-based representation, and storage of the 3D surfaces in a file recognized by the neuronavigation software (STP 3.4, Leibinger, Freiburg, Germany).

RESULTS: Multimodal neuronavigation is illustrated with two clinical cases for which multimodal information was introduced into the neuronavigation system. Lesional areas were used to define and follow the surgical path, sulci and vessels helped identify the anatomical environment of the surgical field, and MEG and fMRI functional information helped determine the position of functional high-risk areas.

CONCLUSION: In this short evaluation, the ability to access preoperative multi-functional and anatomical data within the neuronavigation system provided valuable support for the surgical procedure.
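As a concrete illustration of the per-entity procedure summarized above (registration to the reference MRI, segmentation, encoding as a 3D surface, export for the navigation workstation), the following Python sketch chains these steps on synthetic data. It is not the authors' implementation: the function names (register_to_reference, segment, encode_surface, write_obj), the intensity-threshold segmentation, the identity rigid transform, and the Wavefront OBJ output are illustrative assumptions; the actual environment used modality-specific segmentation and registration and stored its surfaces in the format read by STP 3.4.

"""Illustrative sketch of the per-entity fusion pipeline (hypothetical, not
the authors' code): align a modality to the reference MRI grid, segment a
structure, encode it as a triangulated surface, and write the surface to a
file for a navigation workstation."""

import numpy as np
from scipy.ndimage import affine_transform      # resampling into the MRI grid
from skimage.measure import marching_cubes      # surface extraction


def register_to_reference(volume, rigid, reference_shape):
    """Resample `volume` into the reference MRI grid with a known 4x4
    rigid-body transform (assumed already estimated elsewhere).
    Note: scipy applies the transform in the pull-back direction, so
    `rigid` maps reference-grid coordinates to the modality's coordinates."""
    matrix, offset = rigid[:3, :3], rigid[:3, 3]
    return affine_transform(volume, matrix, offset=offset,
                            output_shape=reference_shape, order=1)


def segment(volume, threshold):
    """Toy intensity-threshold segmentation (placeholder for the
    modality-specific segmentation of lesions, sulci, vessels, etc.)."""
    return (volume >= threshold).astype(np.uint8)


def encode_surface(mask):
    """Encode the binary segmentation as a triangulated surface."""
    verts, faces, _, _ = marching_cubes(mask.astype(float), level=0.5)
    return verts, faces


def write_obj(path, verts, faces):
    """Write the surface as Wavefront OBJ (a stand-in for the STP format)."""
    with open(path, "w") as f:
        for v in verts:
            f.write(f"v {v[0]:.3f} {v[1]:.3f} {v[2]:.3f}\n")
        for tri in faces:                       # OBJ indices are 1-based
            f.write(f"f {tri[0] + 1} {tri[1] + 1} {tri[2] + 1}\n")


if __name__ == "__main__":
    # Synthetic stand-ins for an MRA volume and the reference MRI grid.
    rng = np.random.default_rng(0)
    mra = rng.random((64, 64, 64))
    reference_shape = (64, 64, 64)
    rigid = np.eye(4)                           # identity transform for the demo

    aligned = register_to_reference(mra, rigid, reference_shape)
    mask = segment(aligned, threshold=0.98)     # keeps the brightest voxels
    verts, faces = encode_surface(mask)
    write_obj("vessels.obj", verts, faces)

In the environment described in the paper, each selected entity (lesional area, sulci, ventricles, vessels, MEG or fMRI activation area) goes through the same chain, and the resulting surfaces are loaded together into the neuronavigation system.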
Main file: cas2000.pdf (1.06 MB)
Origin: Files produced by the author(s)

Dates and versions

inserm-00331756, version 1 (17-10-2008)

Identifiers

HAL Id: inserm-00331756
DOI: 10.3109/10929080009148866

Cite

Pierre Jannin, Olivier J. Fleig, E. Seigneuret, Christophe Grova, Xavier Morandi, et al. A data fusion environment for multimodal and multi-informational neuronavigation. Computer Aided Surgery, 2000, 5 (1), pp.1-10. ⟨10.3109/10929080009148866⟩. ⟨inserm-00331756⟩

Collections

INSERM UNIV-RENNES