FRAMEWORK FOR LOW-QUALITY RETINAL MOSAICING
Main Author: | Silva, Bruno Reis |
---|---|
Publication Date: | 2021 |
Format: | Master thesis |
Language: | eng |
Source: | Repositórios Científicos de Acesso Aberto de Portugal (RCAAP) |
Download full: | http://hdl.handle.net/10400.8/6752 |
Summary: | The medical equipment used to capture retinal fundus images is generally expensive. With the evolution of technology and the widespread adoption of smartphones, new portable screening options have emerged, one of them being the D-Eye device. Compared with specialized equipment, the D-Eye and similar smartphone-mounted devices capture retinal video of lower quality, yet still sufficient for medical pre-screening; when necessary, individuals can then be referred for specialized screening to obtain a medical diagnosis. This dissertation contributes a framework, a tool that groups a set of methods developed and explored for low-quality retinal videos. Three areas of intervention were defined: extracting the relevant regions from video sequences; creating mosaic images that summarize each retinal video; and developing a graphical interface to accommodate the previous contributions. To extract the relevant region (the retinal zone) from these videos, two methods were proposed: one based on classical image-processing approaches such as thresholding and the Hough circle transform, and another that locates the retina with YOLOv4, a neural network reported in the literature as performing well for object detection. The mosaicing process was divided into two stages. In the first stage, the GLAMpoints neural network was applied to extract relevant keypoints; from these, transformations are estimated so that the overlapping regions of the images are brought into the same reference frame. In the second stage, the transitions between images were smoothed. A graphical interface was developed to bring all the above methods together and make them easier to access and use. In addition, other features were implemented, such as comparing results with ground truth and exporting videos containing only the regions of interest. |
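The summary mentions a classical region-extraction method built on thresholding and the Hough circle transform. The sketch below illustrates that general idea with OpenCV; it is not the dissertation's implementation, and all parameter values (blur size, Hough thresholds, radius bounds) are illustrative assumptions for a D-Eye-style frame in which the illuminated retina appears as a bright, roughly circular disc on a dark background.

```python
import cv2
import numpy as np

def extract_retinal_region(frame_bgr):
    """Return (crop, (x, y, r)) for the detected retinal zone, or (None, None)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress sensor noise before thresholding

    # Threshold away the dark background so only the illuminated zone remains.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    masked = cv2.bitwise_and(gray, gray, mask=mask)

    # Look for a single circular boundary around the illuminated retinal zone.
    h = frame_bgr.shape[0]
    circles = cv2.HoughCircles(masked, cv2.HOUGH_GRADIENT, dp=1.2, minDist=h // 2,
                               param1=100, param2=30,
                               minRadius=h // 8, maxRadius=h // 2)
    if circles is None:
        return None, None

    x, y, r = np.round(circles[0, 0]).astype(int)
    crop = frame_bgr[max(y - r, 0):y + r, max(x - r, 0):x + r]
    return crop, (x, y, r)
```

In practice such a function would be applied frame by frame to the retinal video, discarding frames where no circle is found; the YOLOv4-based alternative described in the summary replaces this whole step with a learned detector.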
id |
RCAP_124d076b59f3d49de80eed5cd5236995 |
---|---|
oai_identifier_str |
oai:iconline.ipleiria.pt:10400.8/6752 |
network_acronym_str |
RCAP |
network_name_str |
Repositórios Científicos de Acesso Aberto de Portugal (RCAAP) |
repository_id_str |
https://opendoar.ac.uk/repository/7160 |
dc.title.none.fl_str_mv |
FRAMEWORK FOR LOW-QUALITY RETINAL MOSAICING
title |
FRAMEWORK FOR LOW-QUALITY RETINAL MOSAICING
spellingShingle |
FRAMEWORK FOR LOW-QUALITY RETINAL MOSAICING
Silva, Bruno Reis
Convolutional Neural Network
Object detection
D-Eye
Mosaicing
Fundus
Retinal images
title_short |
FRAMEWORK FOR LOW-QUALITY RETINAL MOSAICING
title_full |
FRAMEWORK FOR LOW-QUALITY RETINAL MOSAICING
title_fullStr |
FRAMEWORK FOR LOW-QUALITY RETINAL MOSAICING
title_full_unstemmed |
FRAMEWORK FOR LOW-QUALITY RETINAL MOSAICING
title_sort |
FRAMEWORK FOR LOW-QUALITY RETINAL MOSAICING
author |
Silva, Bruno Reis |
author_facet |
Silva, Bruno Reis |
author_role |
author |
dc.contributor.none.fl_str_mv |
Coelho, Paulo Jorge Simões
Cunha, António Manuel Trigueiros da Silva
Repositório IC-Online
dc.contributor.author.fl_str_mv |
Silva, Bruno Reis |
dc.subject.por.fl_str_mv |
Convolutional Neural Network
Object detection
D-Eye
Mosaicing
Fundus
Retinal images
topic |
Convolutional Neural Network
Object detection
D-Eye
Mosaicing
Fundus
Retinal images
description |
The medical equipment used to capture retinal fundus images is generally expensive. With the evolution of technology and the widespread adoption of smartphones, new portable screening options have emerged, one of them being the D-Eye device. Compared with specialized equipment, the D-Eye and similar smartphone-mounted devices capture retinal video of lower quality, yet still sufficient for medical pre-screening; when necessary, individuals can then be referred for specialized screening to obtain a medical diagnosis. This dissertation contributes a framework, a tool that groups a set of methods developed and explored for low-quality retinal videos. Three areas of intervention were defined: extracting the relevant regions from video sequences; creating mosaic images that summarize each retinal video; and developing a graphical interface to accommodate the previous contributions. To extract the relevant region (the retinal zone) from these videos, two methods were proposed: one based on classical image-processing approaches such as thresholding and the Hough circle transform, and another that locates the retina with YOLOv4, a neural network reported in the literature as performing well for object detection. The mosaicing process was divided into two stages. In the first stage, the GLAMpoints neural network was applied to extract relevant keypoints; from these, transformations are estimated so that the overlapping regions of the images are brought into the same reference frame. In the second stage, the transitions between images were smoothed. A graphical interface was developed to bring all the above methods together and make them easier to access and use. In addition, other features were implemented, such as comparing results with ground truth and exporting videos containing only the regions of interest.
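The description splits mosaicing into two stages: keypoint-based registration (GLAMpoints) and smoothing of the transitions between images. The sketch below shows the general shape of such a pipeline with OpenCV, under two loudly labeled assumptions: ORB keypoints stand in for the GLAMpoints detector used in the dissertation, and a simple distance-weighted (feathered) blend stands in for its smoothing step. A color mosaic canvas already seeded with the first extracted frame is also assumed.

```python
import cv2
import numpy as np

def add_to_mosaic(mosaic, new_img):
    """Warp new_img into the mosaic's reference frame and feather the transition."""
    # Stage 1: keypoint detection and matching (ORB as a stand-in for GLAMpoints).
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(mosaic, None)
    k2, d2 = orb.detectAndCompute(new_img, None)
    if d1 is None or d2 is None:
        return mosaic  # nothing to match against
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]
    if len(matches) < 4:
        return mosaic  # not enough correspondences to estimate a homography

    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Homography that brings the new image into the mosaic's reference frame.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = mosaic.shape[:2]
    warped = cv2.warpPerspective(new_img, H, (w, h))

    # Stage 2: feathered blend so the transition between images is smooth.
    warped_mask = cv2.warpPerspective(
        np.full(new_img.shape[:2], 255, np.uint8), H, (w, h))
    alpha = cv2.distanceTransform(warped_mask, cv2.DIST_L2, 3)
    alpha = (alpha / (alpha.max() + 1e-6))[..., None]   # 1 near the centre, 0 at the seam
    has_base = mosaic.sum(axis=2, keepdims=True) > 0
    alpha = np.where(has_base, alpha, 1.0)              # keep new pixels where the mosaic is empty
    return (alpha * warped + (1.0 - alpha) * mosaic).astype(np.uint8)
```

Feeding the extracted retinal regions of consecutive frames through this function, one at a time, accumulates the summary image described above; the dissertation's actual detector and smoothing procedure may differ in detail.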
publishDate |
2021 |
dc.date.none.fl_str_mv |
2021-11-25
2021-11-25T00:00:00Z
2022-03-09T10:52:45Z
dc.type.status.fl_str_mv |
info:eu-repo/semantics/publishedVersion |
dc.type.driver.fl_str_mv |
info:eu-repo/semantics/masterThesis |
format |
masterThesis |
status_str |
publishedVersion |
dc.identifier.uri.fl_str_mv |
http://hdl.handle.net/10400.8/6752
urn:tid:202959031
url |
http://hdl.handle.net/10400.8/6752 |
identifier_str_mv |
urn:tid:202959031 |
dc.language.iso.fl_str_mv |
eng |
language |
eng |
dc.rights.driver.fl_str_mv |
info:eu-repo/semantics/openAccess |
eu_rights_str_mv |
openAccess |
dc.format.none.fl_str_mv |
application/pdf |
dc.source.none.fl_str_mv |
reponame:Repositórios Científicos de Acesso Aberto de Portugal (RCAAP)
instname:FCCN, serviços digitais da FCT – Fundação para a Ciência e a Tecnologia
instacron:RCAAP
instname_str |
FCCN, serviços digitais da FCT – Fundação para a Ciência e a Tecnologia |
instacron_str |
RCAAP |
institution |
RCAAP |
reponame_str |
Repositórios Científicos de Acesso Aberto de Portugal (RCAAP) |
collection |
Repositórios Científicos de Acesso Aberto de Portugal (RCAAP) |
repository.name.fl_str_mv |
Repositórios Científicos de Acesso Aberto de Portugal (RCAAP) - FCCN, serviços digitais da FCT – Fundação para a Ciência e a Tecnologia |
repository.mail.fl_str_mv |
info@rcaap.pt |
_version_ |
1833598942457102336 |