relation: http://repositorio.unib.org/id/eprint/11065/ canonical: http://repositorio.unib.org/id/eprint/11065/ title: Deep Learning Approaches for Image Captioning: Opportunities, Challenges and Future Potential creator: Jamil, Azhar creator: Rehman, Saif Ur creator: Mahmood, Khalid creator: Gracia Villar, Mónica creator: Prola, Thomas creator: Diez, Isabel De La Torre creator: Samad, Md Abdus creator: Ashraf, Imran subject: Engineering description: Generative intelligence relies heavily on the integration of vision and language. Much of the research has focused on image captioning, which involves describing images with meaningful sentences. Typically, a language model and a vision encoder are employed to generate sentences that describe the visual content. Through the incorporation of object regions, attributes, multi-modal connections, attention mechanisms, and early fusion approaches such as bidirectional encoder representations from transformers (BERT), these components have advanced substantially over the years. This research offers a reference to the body of literature, identifies emerging trends in an area that blends computer vision and natural language processing to maximize their complementary effects, and identifies the most significant technological improvements in architectures employed for image captioning. It also discusses various problem variants and open challenges. This comparison allows for an objective assessment of different techniques, architectures, and training strategies by identifying the most significant technical innovations, and offers valuable insights into the current landscape of image captioning research.
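The abstract describes the standard image-captioning pipeline: a vision encoder maps image features into a shared space, and a language model decodes a sentence conditioned on that context. The following is a minimal, hedged sketch of that encoder–decoder pattern; the toy vocabulary, dimensions, and random weights are illustrative assumptions, not the architecture studied in the paper.

```python
import numpy as np

# Toy encoder-decoder captioner: illustrates the vision-encoder +
# language-model split described in the abstract. All sizes and
# weights are arbitrary assumptions for demonstration only.
rng = np.random.default_rng(0)

VOCAB = ["<start>", "<end>", "a", "dog", "runs"]
EMB = 8    # caption-space (embedding) dimension
FEAT = 16  # raw image-feature dimension

W_enc = rng.normal(size=(FEAT, EMB))        # vision encoder projection
W_emb = rng.normal(size=(len(VOCAB), EMB))  # token embeddings
W_out = rng.normal(size=(EMB, len(VOCAB)))  # language-model output head

def encode(image_features):
    """Vision encoder: project image features into the caption space."""
    return image_features @ W_enc

def decode(context, max_len=5):
    """Greedy language-model decoding conditioned on the image context."""
    tokens = ["<start>"]
    state = context
    for _ in range(max_len):
        # Fold the previous token into the recurrent state (toy update rule).
        state = np.tanh(state + W_emb[VOCAB.index(tokens[-1])])
        nxt = VOCAB[int(np.argmax(state @ W_out))]
        tokens.append(nxt)
        if nxt == "<end>":
            break
    return tokens

caption = decode(encode(rng.normal(size=FEAT)))
print(caption)
```

In real systems the linear projection is replaced by a CNN or vision transformer and the toy state update by an RNN or transformer decoder, but the division of labor between the two components is the same as sketched here.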
date: 2024-02 type: Article type: PeerReviewed format: text language: en rights: cc_by_nc_nd_4 identifier: http://repositorio.unib.org/id/eprint/11065/1/Deep_Learning_Approaches_for_Image_Captioning_Opportunities_Challenges_and_Future_Potential.pdf
metadata Jamil, Azhar; Rehman, Saif Ur; Mahmood, Khalid; Gracia Villar, Mónica; Prola, Thomas; Diez, Isabel De La Torre; Samad, Md Abdus and Ashraf, Imran mail UNSPECIFIED, UNSPECIFIED, UNSPECIFIED, monica.gracia@uneatlantico.es, thomas.prola@uneatlantico.es, UNSPECIFIED, UNSPECIFIED, UNSPECIFIED (2024) Deep Learning Approaches for Image Captioning: Opportunities, Challenges and Future Potential. IEEE Access. p. 1. ISSN 2169-3536 relation: http://doi.org/10.1109/ACCESS.2024.3365528 relation: doi:10.1109/ACCESS.2024.3365528 language: en