Comparing machine learning vs. humans for dietary assessment

Bibliographic details
Main author: Abbasi, Maryam
Publication date: 2022
Other authors: Cardoso, Filipe, Wanzeller, Cristina, Martins, Pedro
Language: English
Source title: Repositórios Científicos de Acesso Aberto de Portugal (RCAAP)
Full text: http://hdl.handle.net/10400.15/4688
Document type: Book part
Publisher: Springer
ISBN: 978-3-031-14859-0
DOI: https://doi.org/10.1007/978-3-031-14859-0_2
Keywords: CNN, GoogLeNet, Inception, ResNet, Dietary
Access: Open access
Abstract: Due to the availability of large-scale datasets (e.g., ImageNet, UECFood) and the advancement of deep Convolutional Neural Networks (CNNs), computer vision image recognition has evolved dramatically. Currently, there are three major ways to use a CNN: training from scratch, using a pre-trained network off the shelf, and performing unsupervised pre-training followed by supervised fine-tuning. For people with dietary restrictions, automatic food detection and assessment are critical. In this research, we show how to address detection difficulties by combining three CNNs. First, the different CNN architectures are assessed; the number of parameters in the examined models ranges from 5,000 to 160 million, depending on the number of layers. Second, the CNNs under consideration are assessed with respect to dataset size and physical image context, and the results are compared in terms of performance, training time, and accuracy. Third, the accuracy of the CNNs is compared against human classification based on the human visual system (HVS). Finally, additional categorization techniques, such as bag-of-words, are considered for this problem. Based on the findings, it can be concluded that the HVS is more accurate when a dataset comprises a wide range of variables, whereas the CNN outperforms the HVS when the dataset is restricted to niche photos.
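
To make the second of the three usage modes mentioned in the abstract (reusing a pre-trained network off the shelf) more concrete, the sketch below fine-tunes only the classification head of an ImageNet-pretrained ResNet-50 for food-image classification. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes PyTorch/torchvision (the chapter does not specify a framework), picks ResNet-50 because ResNet appears among the keywords, and uses a hypothetical number of food classes.

```python
# Minimal sketch of the "pre-trained network off the shelf" approach.
# Assumptions: PyTorch/torchvision are available; the class count is hypothetical.
import torch
import torch.nn as nn
from torchvision import models

NUM_FOOD_CLASSES = 100  # hypothetical, e.g., a UECFood-style category count

# Load a ResNet-50 pre-trained on ImageNet (downloads weights on first use)
# and freeze the convolutional backbone.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final fully connected layer so the network predicts food classes.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_FOOD_CLASSES)

# Only the new classification head is updated during fine-tuning.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Dummy batch to show the shape of one training step; real training would
# iterate over a DataLoader of food images and labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_FOOD_CLASSES, (8,))
logits = backbone(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```

Training from scratch would instead start with random weights and update all parameters, which demands far more data and compute; that trade-off is the performance vs. training time vs. accuracy comparison the abstract refers to.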