Infant head and brain segmentation from magnetic resonance images using fusion-based deep learning strategies

Bibliographic Details
Main Author: Torres, Helena
Publication Date: 2024
Other Authors: Oliveira, Bruno, Morais, Pedro, Fritze, Anne, Hahn, Gabriele, Rüdiger, Mario, Fonseca, Jaime, Vilaça, João
Format: Article
Language: eng
Source: Repositórios Científicos de Acesso Aberto de Portugal (RCAAP)
Download full text: http://hdl.handle.net/11110/3015
Summary: Magnetic resonance (MR) imaging is widely used for assessing infant head and brain development and for diagnosing pathologies. The main goal of this work is to develop a segmentation framework that creates patient-specific head and brain anatomical models from MR images for clinical evaluation. The proposed strategy is a fusion-based deep learning (DL) approach that combines information from different image sequences within the MR acquisition protocol, namely the axial T1w, sagittal T1w, and coronal T1w after contrast. These image sequences serve as input to different fusion encoder–decoder network architectures based on the well-established U-Net framework. Specifically, three fusion strategies are proposed and evaluated: early, intermediate, and late fusion. In the early fusion approach, the images are integrated at the beginning of the encoder–decoder architecture. In the intermediate fusion strategy, each image sequence is processed by an independent encoder, and the resulting feature maps are then jointly processed by a single decoder. In the late fusion method, each image is individually processed by an encoder–decoder, and the resulting feature maps are then combined to generate the final segmentations. A clinical in-house dataset of 19 MR scans was divided into training, validation, and testing sets, with 3 scans fixed as the validation set. The remaining 16 scans were assessed with a cross-validation approach, using a split ratio of 75% for the training set and 25% for the testing set in each fold.
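The three fusion layouts described in the summary can be contrasted schematically. The sketch below is hypothetical: it replaces the real convolutional U-Net encoder and decoder blocks with simple linear stages so that the only difference between the three variants is *where* the three sequences are merged (at the input, at the feature level, or at the output).

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, w):
    """Stand-in for a U-Net encoder: input volume -> feature vector."""
    return np.tanh(w @ x)

def decoder(f, w):
    """Stand-in for a decoder: features -> per-voxel foreground probability."""
    return 1.0 / (1.0 + np.exp(-(w @ f)))  # sigmoid

n_vox, n_feat = 8, 4
# Three co-registered sequences (axial T1w, sagittal T1w, coronal T1w post-contrast),
# flattened to toy 1-D "volumes" for illustration.
seqs = [rng.normal(size=n_vox) for _ in range(3)]

# Early fusion: concatenate the sequences at the network input.
w_enc_early = rng.normal(size=(n_feat, 3 * n_vox))
w_dec = rng.normal(size=(n_vox, n_feat))
early_out = decoder(encoder(np.concatenate(seqs), w_enc_early), w_dec)

# Intermediate fusion: one encoder per sequence; features merged before a shared decoder.
w_encs = [rng.normal(size=(n_feat, n_vox)) for _ in range(3)]
w_dec_mid = rng.normal(size=(n_vox, 3 * n_feat))
feats = np.concatenate([encoder(s, w) for s, w in zip(seqs, w_encs)])
mid_out = decoder(feats, w_dec_mid)

# Late fusion: a full encoder-decoder per sequence; outputs combined (here, averaged).
late_out = np.mean(
    [decoder(encoder(s, we), w_dec) for s, we in zip(seqs, w_encs)], axis=0
)

print(early_out.shape, mid_out.shape, late_out.shape)  # all (8,)
```

All weights, dimensions, and the averaging rule for late fusion are illustrative assumptions, not the paper's actual architecture; the point is only the position of the fusion step in each pipeline.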
The results show that the early and intermediate fusion methodologies achieved the best performance (Dice coefficients of 97.6 ± 1.5% and 97.3 ± 1.8% for the head and 94.5 ± 1.7% and 94.8 ± 1.8% for the brain, respectively), whereas the late fusion method produced slightly worse results (Dice of 95.5 ± 4.4% and 93.8 ± 3.1% for the head and brain, respectively). Nevertheless, the volumetric analysis found no statistically significant differences between the volumes of the models generated by any of the segmentation strategies and the ground truths. Overall, the proposed frameworks achieve accurate segmentation and prove feasible for anatomical model analysis in clinical practice.
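The Dice coefficient used to report these results measures the overlap between a predicted binary mask A and a ground-truth mask B as 2|A∩B| / (|A| + |B|). A minimal, self-contained illustration (the masks below are made up for the example):

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient 2*|A∩B| / (|A|+|B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:3] = True    # 4 ground-truth voxels
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:4] = True  # 6 predicted voxels, 4 of them overlapping

print(dice(pred, gt))  # 2*4 / (6+4) = 0.8
```

A score of 1.0 means perfect overlap, so the reported head Dice of ~97% indicates near-complete agreement with the manual segmentations.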
Keywords: MRI; infant brain segmentation; fusion-based deep learning; infant head
Access: open access
Citation: Torres, H. R., Oliveira, B., Morais, P., Fritze, A., Hahn, G., Rüdiger, M., ... & Vilaça, J. L. (2024). Infant head and brain segmentation from magnetic resonance images using fusion-based deep learning strategies. Multimedia Systems, 30(2), 71.
Funding: Open access funding provided by FCT|FCCN (b-on). This work was funded by the projects “NORTE-01-0145-FEDER-000045” and “NORTE-01-0145-FEDER-000059”, supported by the Northern Portugal Regional Operational Programme (NORTE 2020), under the Portugal 2020 Partnership Agreement, through the European Regional Development Fund (FEDER). It was also funded by national funds, through FCT (Fundação para a Ciência e a Tecnologia) and FCT/MCTES, in the scope of the projects UIDB/00319/2020, UIDB/05549/2020 (https://doi.org/10.54499/UIDB/05549/2020), UIDP/05549/2020 (https://doi.org/10.54499/UIDP/05549/2020), CEECINST/00039/2021, and LASI-LA/P/0104/2020. This project was also funded by the Innovation Pact HfFP (Health From Portugal), co-funded from the “Mobilizing Agendas for Business Innovation” of the “Next Generation EU” program of Component 5 of the Recovery and Resilience Plan (RRP), concerning “Capitalization and Business Innovation”, under the Regulation of the Incentive System “Agendas for Business Innovation”. The authors also acknowledge support from FCT and the European Social Fund, through Programa Operacional Capital Humano (POCH), in the scope of the PhD grants SFRH/BD/136670/2018, SFRH/BD/136721/2018, and COVID/BD/154328/2023.