Items where Author is "Garay, Helena"

Number of documents: 7.

2025

Article Subjects > Nutrition Europe University of Atlantic > Research > Scientific Production
Fundación Universitaria Internacional de Colombia > Research > Scientific Production
Ibero-american International University > Research > Scientific Production
Ibero-american International University > Research > Articles and Books
Universidad Internacional do Cuanza > Research > Scientific Production
University of La Romana > Research > Scientific Production
Open English Mango is one of the most beloved fruits and plays an indispensable role in the agricultural economies of many tropical countries, including Pakistan, India, and countries across Southeast Asia. Like other fruit crops, mango cultivation is threatened by various diseases, including Anthracnose and Red Rust. Although farmers try to address such outbreaks promptly, early and accurate detection of mango diseases remains challenging due to multiple factors, such as limited understanding of disease diversity, similarity in symptoms, and frequent misclassification. To avoid such instances, this study proposes a multimodal deep learning framework that leverages both leaf and fruit images to improve classification performance and generalization. Individual CNN-based pre-trained models, including ResNet-50, MobileNetV2, EfficientNet-B0, and ConvNeXt, were trained separately on curated datasets of mango leaf and fruit diseases. A novel Modality Attention Fusion (MAF) mechanism was introduced to dynamically weight and combine predictions from both modalities based on their discriminative strength, as some diseases are more prominent on leaves than on fruits, and vice versa. To address overfitting and improve generalization, a class-aware augmentation pipeline was integrated, which performs augmentation according to the specific characteristics of each class. The proposed attention-based fusion strategy significantly outperformed individual models and static fusion approaches, achieving a test accuracy of 99.08%, an F1 score of 99.03%, and a near-perfect ROC-AUC of 99.96% using EfficientNet-B0 as the base. To evaluate the model's real-world applicability, an interactive web application was developed using the Django framework and evaluated through out-of-distribution (OOD) testing on diverse mango samples collected from public sources.
These findings underline the importance of combining visual cues from multiple plant organs and adapting model attention to contextual features for real-world agricultural diagnostics. Mohsin, Muhammad; Hashmi, Muhammad Shadab Alam; Delgado Noya, Irene; Garay, Helena; Abdel Samee, Nagwan and Ashraf, Imran (2025) Dual-modality fusion for mango disease classification using dynamic attention based ensemble of leaf & fruit images. Scientific Reports, 15 (1). ISSN 2045-2322
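The entry above describes a Modality Attention Fusion mechanism that weights leaf and fruit predictions by their discriminative strength. As a rough sketch only (the paper learns these weights; here prediction confidence serves as a hypothetical proxy), the fusion step might look like:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def modality_attention_fusion(leaf_probs, fruit_probs):
    """Combine two per-modality class-probability vectors.

    Each modality's weight is derived from its confidence (max class
    probability) -- a stand-in for the learned attention scores
    described in the paper.
    """
    w_leaf, w_fruit = softmax([max(leaf_probs), max(fruit_probs)])
    return [w_leaf * l + w_fruit * f
            for l, f in zip(leaf_probs, fruit_probs)]

# A confident leaf prediction should dominate an uncertain fruit one.
fused = modality_attention_fusion([0.90, 0.05, 0.05], [0.40, 0.35, 0.25])
```

The fused vector remains a valid probability distribution, and the more confident modality pulls the final decision toward its prediction.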

Article Subjects > Engineering Europe University of Atlantic > Research > Scientific Production
Ibero-american International University > Research > Scientific Production
Ibero-american International University > Research > Articles and Books
Universidad Internacional do Cuanza > Research > Scientific Production
University of La Romana > Research > Scientific Production
Open English Introduction: The rapid expansion of generated data through social networks has introduced significant challenges, which underscores the need for advanced methods to analyze and interpret these complex systems. Deep learning has emerged as an effective approach, offering robust capabilities to process large datasets, and uncover intricate relationships and patterns. Methods: In this systematic literature review, we explore research conducted over the past decade, focusing on the use of deep learning techniques for community detection in social networks. A total of 19 studies were carefully selected from reputable databases, including the ACM Library, Springer Link, Scopus, Science Direct, and IEEE Xplore. This review investigates the employed methodologies, evaluates their effectiveness, and discusses the challenges identified in these works. Results: Our review shows that models like graph neural networks (GNNs), autoencoders, and convolutional neural networks (CNNs) are some of the most commonly used approaches for community detection. It also examines the variety of social networks, datasets, evaluation metrics, and employed frameworks in these studies. Discussion: However, the analysis highlights several challenges, such as scalability, understanding how the models work (interpretability), and the need for solutions that can adapt to different types of networks. These issues stand out as important areas that need further attention and deeper research. This review provides meaningful insights for researchers working in social network analysis. It offers a detailed summary of recent developments, showcases the most impactful deep learning methods, and identifies key challenges that remain to be explored. 
El-Moussaoui, Mohamed; Hanine, Mohamed; Kartit, Ali; Gracia Villar, Mónica; Garay, Helena and de la Torre Díez, Isabel (2025) A systematic review of deep learning methods for community detection in social networks. Frontiers in Artificial Intelligence, 8. ISSN 2624-8212
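The deep models surveyed in this review are commonly benchmarked against classical community detection baselines. One such baseline (not taken from the review itself) is label propagation, sketched here on a toy adjacency list:

```python
import random

def label_propagation(adj, iters=10, seed=0):
    """Classical label-propagation community detection: each node
    repeatedly adopts the most frequent label among its neighbors
    (ties broken by the smallest label) until labels stabilize."""
    rng = random.Random(seed)
    labels = {n: n for n in adj}
    nodes = list(adj)
    for _ in range(iters):
        rng.shuffle(nodes)
        for n in nodes:
            if not adj[n]:
                continue
            counts = {}
            for nb in adj[n]:
                counts[labels[nb]] = counts.get(labels[nb], 0) + 1
            best = max(counts.values())
            labels[n] = min(l for l, c in counts.items() if c == best)
    return labels

# Two disconnected triangles resolve into two communities.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1],
         3: [4, 5], 4: [3, 5], 5: [3, 4]}
communities = label_propagation(graph)
```

Deep approaches such as GNN autoencoders aim to outperform this kind of heuristic on large, noisy networks, which is one motivation behind the studies the review surveys.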

Article Subjects > Engineering Europe University of Atlantic > Research > Scientific Production
Ibero-american International University > Research > Scientific Production
Ibero-american International University > Research > Articles and Books
Universidad Internacional do Cuanza > Research > Scientific Production
University of La Romana > Research > Scientific Production
Closed English Icons are the first visual element users encounter when searching for applications in online stores. Icons with eye-catching features can make an app stand out in user searches, playing a crucial role in attracting user attention and influencing selection. This increases the likelihood of downloads, which can expand the user base, improve revenue, and enhance engagement, contributing to the application's overall success. However, the majority of research evaluating the appeal of apps through application icons is empirical in nature and may lack comprehensive data-analytical approaches. While empirical research holds its significance, it may still be limited by the size of the dataset analyzed and could also be subjective. This research presents a novel data-analytical methodology to analyze a large dataset of application icons from Google Play to determine their influence on downloads. It clusters the icons using three different techniques: k-means clustering with two distinct feature vectors and agglomerative clustering, extracting various visual features from the clusters that are strongly correlated with application installs. Subsequently, validation of the results revealed that varied colors, the dominance of white or black, text, and exposure in the icons can be linked to downloads. Bilal, Ahmad; Turab Mirza, Hamid; Ahmad, Adnan; Hussain, Ibrar; Raza, Ali; Garay, Helena; Alemany Iturriaga, Josep and Ashraf, Imran (2025) On the correlation between Google Play Store application icons and downloads. The Computer Journal, 68 (10). pp. 1579-1593. ISSN 0010-4620
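A minimal sketch of the clustering idea, assuming each icon has been reduced to a mean-RGB feature vector (the feature design here is illustrative, not the paper's):

```python
def kmeans(points, k, iters=20):
    """Plain k-means: assign each point to its nearest centroid, then
    recompute centroids as cluster means. For determinism in this
    sketch, centroids are seeded with evenly spaced input points
    rather than random ones."""
    step = max(1, (len(points) - 1) // max(1, k - 1))
    centroids = [points[min(i * step, len(points) - 1)] for i in range(k)]
    assign = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        for c in range(k):
            members = [p for i, p in enumerate(points) if assign[i] == c]
            if members:
                centroids[c] = tuple(sum(v) / len(members) for v in zip(*members))
    return assign

# Mean-RGB features: three dark icons and three light icons.
icons = [(0.10, 0.10, 0.10), (0.15, 0.10, 0.20), (0.05, 0.20, 0.10),
         (0.90, 0.85, 0.90), (0.95, 0.90, 0.80), (0.85, 0.95, 0.90)]
clusters = kmeans(icons, k=2)
```

Once icons are grouped this way, per-cluster statistics (such as average installs) can be correlated with the visual features that characterize each cluster.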

Article Subjects > Engineering Europe University of Atlantic > Research > Scientific Production
Fundación Universitaria Internacional de Colombia > Research > Scientific Production
Ibero-american International University > Research > Articles and Books
Universidad Internacional do Cuanza > Research > Scientific Production
University of La Romana > Research > Scientific Production
Open English Efficient image retrieval from a variety of datasets is crucial in today's digital world. Visual properties are represented using primitive image signatures in Content-Based Image Retrieval (CBIR). Feature vectors are employed to classify images into predefined categories. This research presents a unique suppression-based feature identification technique that locates interest points by computing a productive sum of pixel derivatives from the differentials of corner scores. Scale-space interpolation is applied to define interest points by combining color features from spatially ordered L2-normalized coefficients with shape and object information. Object-based feature vectors are formed using high-variance coefficients to reduce complexity and are converted into bag-of-visual-words (BoVW) representations for effective retrieval and ranking. The presented method encompasses feature vectors for information synthesis and improves the discriminating strength of the retrieval system by extracting deep image features, including primitive, spatial, and overlaid features, using multilayer fusion of Convolutional Neural Networks (CNNs). Extensive experimentation is performed on standard image dataset benchmarks, including ALOT, Cifar-10, Corel-10k, Tropical Fruits, and Zubud. These datasets cover a wide range of categories, including shape, color, texture, spatial, and complicated objects. Experimental results demonstrate considerable improvements in precision and recall rates, average retrieval precision and recall, and mean average precision and recall rates across various image semantic groups within versatile datasets. The fusion of traditional feature extraction methods with multilevel CNNs advances image sensing and retrieval systems, promising more accurate and efficient image retrieval solutions.
Chaki, Jyotismita; Shabir, Aiza; Ahmed, Khawaja Tehseen; Mahmood, Arif; Garay, Helena; Prado González, Luis Eduardo and Ashraf, Imran (2025) Deep image features sensing with multilevel fusion for complex convolution neural networks & cross domain benchmarks. PLOS ONE, 20 (3). e0317863. ISSN 1932-6203
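The bag-of-visual-words step mentioned above can be sketched as quantizing local descriptors against a fixed visual vocabulary (the two-word vocabulary and 2-D descriptors below are toy assumptions):

```python
def bovw_histogram(descriptors, vocabulary):
    """Assign each local descriptor to its nearest visual word and
    return the normalized word-frequency histogram used for ranking."""
    hist = [0] * len(vocabulary)
    for d in descriptors:
        nearest = min(
            range(len(vocabulary)),
            key=lambda j: sum((a - b) ** 2 for a, b in zip(d, vocabulary[j])),
        )
        hist[nearest] += 1
    total = sum(hist)
    return [h / total for h in hist]

# Toy 2-D descriptors and a two-word vocabulary.
vocab = [(0.0, 0.0), (1.0, 1.0)]
descs = [(0.1, 0.0), (0.0, 0.2), (0.9, 1.0), (0.1, 0.1)]
hist = bovw_histogram(descs, vocab)
```

In a real CBIR system the vocabulary is itself learned (typically by clustering descriptors from a training set), and images are ranked by comparing their histograms.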

Article Subjects > Engineering Europe University of Atlantic > Research > Scientific Production
Ibero-american International University > Research > Scientific Production
Ibero-american International University > Research > Articles and Books
Universidad Internacional do Cuanza > Research > Scientific Production
University of La Romana > Research > Scientific Production
Open English Hand-drawn mathematical geometric shapes are geometric figures, such as circles, triangles, squares, and polygons, sketched manually using pen and paper or digital tools. These shapes are fundamental in mathematics education and geometric problem-solving, serving as intuitive visual aids for understanding complex concepts and theories. Recognizing hand-drawn shapes accurately enables more efficient digitization of handwritten notes, enhances educational tools, and improves user interaction with mathematical software. This research proposes an innovative machine learning algorithm for the automatic classification of mathematical geometric shapes to identify and interpret these shapes from handwritten input, facilitating seamless integration with digital systems. We utilized a benchmark dataset of mathematical shapes comprising a total of 20,000 images across eight classes: circle, kite, parallelogram, square, rectangle, rhombus, trapezoid, and triangle. We introduced a novel machine-learning algorithm, CnN-RFc, that uses convolutional neural networks (CNN) for spatial feature extraction and a random forest classifier for probabilistic feature extraction from image data. Experimental results illustrate that, using the CnN-RFc method, the Light Gradient Boosting Machine (LGBM) algorithm surpasses state-of-the-art approaches with a high accuracy score of 98% for hand-drawn shape classification. Applications of the proposed mathematical geometric shape classification algorithm span various domains, including education, where it enhances interactive learning platforms and provides instant feedback to students. Alam, Aneeza; Raza, Ali; Thalji, Nisrean; Abualigah, Laith; Garay, Helena; Alemany Iturriaga, Josep and Ashraf, Imran (2025) Novel transfer learning approach for hand drawn mathematical geometric shapes classification. PeerJ Computer Science, 11. e2652. ISSN 2376-5992
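The CnN-RFc pipeline above chains a feature-extraction stage into a separate downstream classifier. With hypothetical handcrafted features standing in for the CNN stage and a nearest-centroid rule standing in for the downstream classifier, the chaining looks like:

```python
def extract_features(grid):
    """Stage 1 (stand-in for CNN spatial features): fraction of filled
    cells overall and in the top half of the sketch."""
    flat = [v for row in grid for v in row]
    fill = sum(flat) / len(flat)
    top = grid[: len(grid) // 2]
    top_flat = [v for row in top for v in row]
    top_fill = sum(top_flat) / len(top_flat)
    return (fill, top_fill)

def nearest_centroid(feat, centroids):
    """Stage 2 (stand-in for the downstream classifier): pick the class
    whose feature centroid is closest to the extracted features."""
    return min(
        centroids,
        key=lambda label: sum((a - b) ** 2 for a, b in zip(feat, centroids[label])),
    )

# A filled 4x4 "square" versus a sparse "triangle"-like sketch.
square = [[1, 1, 1, 1]] * 4
triangle = [[0, 0, 0, 1], [0, 0, 1, 1], [0, 1, 1, 1], [1, 1, 1, 1]]
centroids = {"square": (1.0, 1.0), "triangle": (0.6, 0.3)}
pred = nearest_centroid(extract_features(triangle), centroids)
```

The design point is the hand-off: whatever stage 1 produces becomes the feature vector that stage 2 consumes, exactly as the CNN features feed the random forest and LGBM in the paper.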

2024

Article Subjects > Engineering Europe University of Atlantic > Research > Scientific Production
Ibero-american International University > Research > Articles and Books
Universidad Internacional do Cuanza > Research > Scientific Production
University of La Romana > Research > Scientific Production
Open English The perception and recognition of objects around us empower environmental interaction. Harnessing the brain's signals to achieve this objective has consistently posed difficulties. Researchers are exploring whether the poor accuracy in this field is a result of the design of the temporal stimulation (block versus rapid event) or the inherent complexity of electroencephalogram (EEG) signals. Decoding perceptive signal responses in subjects has become increasingly complex due to high noise levels and the complex nature of brain activities. EEG signals have high temporal resolution and are non-stationary signals, i.e., their mean and variance vary over time. This study aims to develop a deep learning model for decoding subjects' responses to rapid-event visual stimuli and highlights the major factors that contribute to low accuracy in the EEG visual classification task. The proposed multi-class, multi-channel model integrates feature fusion to handle complex, non-stationary signals. This model is applied to the largest publicly available EEG dataset for visual classification, consisting of 40 object classes with 1,000 images in each class. Contemporary state-of-the-art studies in this area investigating a large number of object classes have achieved a maximum accuracy of 17.6%. In contrast, our approach, which integrates Multi-Class, Multi-Channel Feature Fusion (MCCFF), achieves a classification accuracy of 33.17% for 40 classes. These results demonstrate the potential of EEG signals in advancing EEG visual classification and offer potential for future applications in visual machine models.
Rehman, Madiha; Anwer, Humaira; Garay, Helena; Alemany Iturriaga, Josep; de la Torre Díez, Isabel; Siddiqui, Hafeez ur Rehman and Ullah, Saleem (2024) Decoding Brain Signals from Rapid-Event EEG for Visual Analysis Using Deep Learning. Sensors, 24 (21). p. 6965. ISSN 1424-8220
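As an illustration of the multi-channel fusion idea (simple summary statistics stand in for the learned features the paper actually fuses):

```python
def fuse_channel_features(eeg):
    """Concatenate per-channel summary features (mean and variance)
    into a single fused vector, one 2-feature block per channel."""
    fused = []
    for channel in eeg:
        n = len(channel)
        mean = sum(channel) / n
        var = sum((x - mean) ** 2 for x in channel) / n
        fused.extend([mean, var])
    return fused

# Two channels of a toy 4-sample epoch.
features = fuse_channel_features([[1.0, 3.0, 1.0, 3.0],
                                  [0.0, 0.0, 0.0, 4.0]])
```

Because EEG is non-stationary, per-channel statistics like these vary across epochs; fusing information across channels is one way to give a classifier a more stable joint representation.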

Article Subjects > Engineering
Subjects > Psychology
Europe University of Atlantic > Research > Scientific Production
Fundación Universitaria Internacional de Colombia > Research > Scientific Production
Ibero-american International University > Research > Scientific Production
Ibero-american International University > Research > Articles and Books
Universidad Internacional do Cuanza > Research > Scientific Production
Open English Predicting depression intensity from microblogs and social media posts has numerous benefits and applications, including predicting early psychological disorders and stress in individuals or the general public. A major challenge in predicting depression from social media posts is that existing studies do not estimate the intensity of depression in social media texts but rather perform only binary classification of depression; moreover, noisy data make it difficult to detect true depression in social media text. This study began by collecting relevant tweets and generating a corpus of 210,000 public tweets using the Twitter public application programming interfaces (APIs). A strategy was devised to filter out only depression-related tweets by creating a list of relevant hashtags to reduce noise in the corpus. Furthermore, an algorithm was developed to annotate the data into three depression classes: 'Mild,' 'Moderate,' and 'Severe,' based on the International Classification of Diseases-10 (ICD-10) depression diagnostic criteria. Different baseline classifiers were applied to the annotated dataset to get a preliminary idea of classification performance on the corpus. A FastText-based model was then applied and fine-tuned with different preprocessing techniques and hyperparameter tuning to produce the tuned model, which significantly increases depression classification performance to an 84% F1 score and 90% accuracy compared to the baselines. Finally, a FastText-based weighted soft voting ensemble (WSVE) is proposed to boost performance by combining several other classifiers and assigning weights to individual models according to their individual performances. The proposed WSVE outperformed all baselines as well as FastText alone, with an F1 of 89%, 5% higher than FastText alone, and an accuracy of 93%, 3% higher than FastText alone.
The proposed model better captures the contextual features of the relatively small sample classes and aids in early depression intensity prediction from tweets with impactful performance. Rizwan, Muhammad; Mushtaq, Muhammad Faheem; Rafiq, Maryam; Mehmood, Arif; de la Torre Díez, Isabel; Gracia Villar, Mónica; Garay, Helena and Ashraf, Imran (2024) Depression Intensity Classification from Tweets Using FastText Based Weighted Soft Voting Ensemble. Computers, Materials & Continua, 78 (2). pp. 2047-2066. ISSN 1546-2226
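The weighted soft voting step can be sketched as follows (the weights here are hypothetical validation scores, not the paper's actual values):

```python
def weighted_soft_vote(model_probs, weights):
    """Fuse per-model class-probability vectors: normalize the weights,
    then average the probability vectors under those weights."""
    total = sum(weights)
    norm = [w / total for w in weights]
    n_classes = len(model_probs[0])
    return [
        sum(w * probs[c] for w, probs in zip(norm, model_probs))
        for c in range(n_classes)
    ]

# Three models voting over Mild / Moderate / Severe.
probs = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.1, 0.6, 0.3]]
fused = weighted_soft_vote(probs, weights=[0.90, 0.84, 0.80])
```

Soft voting averages probabilities rather than hard labels, so two moderately confident models can outvote one highly confident outlier, which is the behavior the ensemble exploits.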

Generated on Sat Apr 4 23:45:20 2026 UTC.



Benchmarking multiple instance learning architectures from patches to pathology for prostate cancer detection and grading using attention-based weak supervision

Histopathological evaluation is necessary for the diagnosis and grading of prostate cancer, which remains one of the most common cancers in men globally. Traditional evaluation is time-consuming, prone to inter-observer variability, and challenging to scale. The clinical usefulness of current AI systems is limited by the need for comprehensive pixel-level annotations. The objective of this research is to develop and evaluate a large-scale benchmarking study of a weakly supervised deep learning framework that minimizes the need for annotation and ensures interpretability for automated prostate cancer diagnosis and International Society of Urological Pathology (ISUP) grading using whole slide images (WSIs). This study rigorously tested six cutting-edge multiple instance learning (MIL) architectures (CLAM-MB, CLAM-SB, ILRA-MIL, AC-MIL, AMD-MIL, WiKG-MIL), three feature encoders (ResNet50, CTransPath, UNI2), and four patch extraction techniques (varying sizes and overlap) using the PANDA dataset (10,616 WSIs), yielding 72 experimental configurations. The methodology used distributed cloud computing to process over 31 million tissue patches, implementing advanced attention mechanisms to ensure clinical interpretability through Grad-CAM visualizations. The optimal configuration (UNI2 encoder with ILRA-MIL, 256×256 patches, 50% overlap) achieved 78.75% accuracy and a 90.12% quadratic weighted kappa (QWK), outperforming traditional methods and approaching expert pathologist-level diagnostic capability. Overlapping smaller patches offered the best balance of spatial resolution and contextual information, while domain-specific foundation models performed noticeably better than generic encoders. This work is the first large-scale, comprehensive comparison of weakly supervised MIL methods for prostate cancer diagnosis and grading.
The proposed approach has excellent clinical diagnostic performance, scalability, practical feasibility through cloud computing, and interpretability using visualization tools.
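The attention-based weak supervision these MIL models share can be sketched as attention pooling over patch embeddings (the linear scoring head here is a toy stand-in for the learned attention network):

```python
import math

def attention_mil_pool(patch_feats, score_weights):
    """Score each patch embedding with a linear head, softmax the
    scores into attention weights, and return the attention-weighted
    bag embedding plus the weights (useful for heatmaps)."""
    scores = [sum(w * f for w, f in zip(score_weights, feat))
              for feat in patch_feats]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    attn = [e / z for e in exps]
    dim = len(patch_feats[0])
    bag = [sum(a * feat[d] for a, feat in zip(attn, patch_feats))
           for d in range(dim)]
    return bag, attn

# Three 2-D patch embeddings; the second should attract attention.
patches = [[0.1, 0.0], [2.0, 1.5], [0.2, 0.1]]
bag, attn = attention_mil_pool(patches, score_weights=[1.0, 1.0])
```

Only the slide-level label supervises training; the attention weights are what make the prediction interpretable, since high-weight patches can be overlaid on the slide as a heatmap.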

Scientific Production

Naveed Anwer Butt; Dilawaiz Sarwat; Irene Delgado Noya (irene.delgado@uneatlantico.es); Kilian Tutusaus (kilian.tutusaus@uneatlantico.es); Nagwan Abdel Samee; Imran Ashraf




A Systematic Literature Review on Integrated Deep Learning and Multi-Agent Vision-Language Frameworks for Pathology Image Analysis and Report Generation

This systematic literature review (SLR) investigates the integration of deep learning (DL), vision-language models (VLMs), and multi-agent systems in the analysis of pathology images and automated report generation. The rapid advancement of whole-slide imaging (WSI) technologies has posed new challenges in pathology, especially due to the scale and complexity of the data. DL techniques in general, and convolutional neural networks (CNNs) and transformers in particular, have significantly enhanced image analysis tasks including segmentation, classification, and detection. However, these models often lack the generalizability to generate coherent, clinically relevant text, thus necessitating the integration of VLMs and large language models (LLMs). This review examines the effectiveness of VLMs and LLMs in bridging the gap between visual data and clinical text, focusing on their potential for automating the generation of pathology reports. Additionally, multi-agent systems, which leverage specialized artificial intelligence (AI) agents to collaboratively perform diagnostic tasks, are explored for their contributions to improving diagnostic accuracy and scalability. Through a synthesis of recent studies, this review highlights the successes, challenges, and future directions of these AI technologies in pathology diagnostics, offering a comprehensive foundation for the development of integrated, AI-driven diagnostic workflows.

Scientific Production

Usama Ali; Imran Shafi; Jamil Ahmad; Arlette Zárate Cáceres; Thania Chio Montero; Hafiz Muhammad Raza ur Rehman; Imran Ashraf




Fish consumption and cognitive function in aging: a systematic review of observational studies

Epidemiological studies consistently link higher fish intake with slower rates of cognitive decline and lower dementia incidence. The aim of the present study was to systematically review existing observational studies investigating the association between fish consumption and cognitive function in older adults. A total of 25 studies (8 cross-sectional and 17 prospective, mainly including healthy older adults, with participant ages ranging from 18 to 30 years at baseline in prospective studies to 65 to 91 years at the upper end of the age spectrum) were reviewed. The cognitive functions investigated in most published studies spanned various domains, such as global cognition, memory (episodic, working), executive function (planning, inhibition, flexibility), attention, and processing speed. Existing studies vary greatly in design (cross-sectional and prospective), geographical area, number of participants involved, and tools used to assess the outcomes of interest. The main findings across studies are not univocal, with some studies reporting stronger evidence of an association between fish consumption and various cognitive domains, while others reported null findings. The most consistently responsive domains were processing speed, executive functioning, semantic memory, and global cognitive ability among individuals consuming fish at least weekly, which are highly relevant to both neurodegenerative and vascular forms of cognitive impairment. Positive associations were also observed for verbal memory and general memory, though these were less uniform and often attenuated after multivariable adjustment. In contrast, associations with reaction time, verbal-numerical reasoning, and broad composite scores were inconsistent, and several fully adjusted models showed null results.
In conclusion, the evidence suggests that regular fish intake (typically ≥1–2 servings per week) is linked to preserved cognitive performance, although some inconsistent findings require further investigations.

Scientific Production

Justyna Godos; Giuseppe Caruso; Agnieszka Micek; Alberto Dolci; Carmen Lilí Rodríguez Velasco (carmen.rodriguez@uneatlantico.es); Evelyn Frias-Toral; Jason Di Giorgio; Nicola Veronese; Andrea Lehoczki; Mario Siervo; Zoltan Ungvari; Giuseppe Grosso




A scalable and secure federated learning authentication scheme for IoT

Secure and scalable authentication remains a fundamental challenge in Internet of Things (IoT) networks due to constrained device resources, dynamic topology, and the absence of centralized trust infrastructures. Conventional password-based and certificate-driven authentication schemes incur high computation, storage, and communication overhead, limiting their suitability for large-scale deployments. To address these limitations, this paper proposes ScLBS, a federated learning (FL)–based self-certified authentication scheme for distributed and sustainable IoT environments. ScLBS integrates self-certified public key cryptography with FL-driven trust adaptation, enabling decentralized public key derivation without reliance on third-party certificate authorities or exposure of private credentials. A zero-knowledge mechanism combined with location-aware authentication strengthens resistance to impersonation, Sybil, and replay attacks. Hierarchical key management supported by a -tree enables efficient group rekeying and preserves forward and backward secrecy under dynamic membership. Formal security verification is conducted under the Dolev–Yao adversary model using ProVerif, confirming secrecy of private and session keys (SKs) and correctness of authentication. Extensive NS-3 simulations and ablation analysis demonstrate that ScLBS achieves lower authentication delay, reduced message overhead, improved network utilization, and decreased energy consumption compared to representative IoT authentication schemes, while maintaining bounded FL overhead. These results indicate that ScLBS provides a balanced trade-off between security strength, scalability, and resource efficiency for constrained IoT networks.
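ScLBS itself is self-certified and avoids third-party trust anchors, but the general challenge-response pattern it builds on (a fresh nonce per session to defeat replay) can be shown with a plain shared-key HMAC sketch; this is an illustrative pattern, not the paper's scheme:

```python
import hashlib
import hmac
import secrets

def issue_challenge():
    """Verifier sends a fresh random nonce so responses cannot be replayed."""
    return secrets.token_bytes(16)

def prove(key, challenge):
    """Prover demonstrates knowledge of the key without transmitting it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key, challenge, response):
    """Constant-time comparison against the expected response."""
    return hmac.compare_digest(prove(key, challenge), response)

device_key = b"device-secret-key"  # hypothetical pre-shared key
nonce = issue_challenge()
response = prove(device_key, nonce)
```

A captured response is useless against a later challenge, which is the replay resistance the abstract refers to; ScLBS achieves the analogous property with self-certified keys and zero-knowledge proofs instead of a shared secret.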

Scientific Production

Premkumar Chithaluru; B. Veera Jyothi; Fahd S. Alharithi; Wojciech Ksiazek; M. Ramchander; Aman Singh (aman.singh@uneatlantico.es); Ravi Kumar Rachavaram




Human Activity Recognition in Domestic Settings Based on Optical Techniques and Ensemble Models

Human activity recognition (HAR) is essential in many applications, such as smart homes, assisted living, healthcare monitoring, rehabilitation, physiotherapy, and geriatric care. Conventional methods of HAR use wearable sensors, e.g., acceleration sensors and gyroscopes. However, they are limited by issues such as sensitivity to position, user inconvenience, and potential health risks with long-term use. Vision-based optical camera systems provide a non-intrusive alternative; however, they are susceptible to variations in lighting, occlusions, and privacy issues. This paper uses an optical method of recognizing human domestic activities based on pose estimation and deep learning ensemble models. Skeletal keypoint features are extracted from video data using PoseNet to generate a privacy-preserving representation that captures key motion dynamics without being sensitive to changes in appearance. A total of 30 subjects (15 male and 15 female) were sampled across 2734 activity samples covering nine daily domestic activities. Six deep learning architectures were evaluated: the Transformer, Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Multilayer Perceptron (MLP), One-Dimensional Convolutional Neural Network (1D CNN), and a hybrid Convolutional Neural Network–Long Short-Term Memory (CNN–LSTM) architecture. The results on the hold-out test set show that the CNN–LSTM architecture achieves an accuracy of 98.78% within our experimental setting. Leave-One-Subject-Out cross-validation further confirms robust generalization across unseen individuals, with CNN–LSTM achieving a mean accuracy of 97.21% ± 1.84% across 30 subjects. The results demonstrate that vision-based pose estimation with deep learning is a useful, precise, and non-intrusive approach to HAR in smart healthcare and home automation systems.
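A sketch of one typical preprocessing step for such keypoint pipelines, centering and scaling each frame's keypoints before they reach the sequence model (the root-joint index and layout are assumptions, not taken from the paper):

```python
def normalize_pose_sequence(frames, root_idx=0):
    """Make keypoints position- and scale-invariant: translate each
    frame so the root joint sits at the origin, then divide by the
    frame's largest coordinate magnitude."""
    normalized = []
    for keypoints in frames:
        rx, ry = keypoints[root_idx]
        centered = [(x - rx, y - ry) for x, y in keypoints]
        scale = max(max(abs(x), abs(y)) for x, y in centered) or 1.0
        normalized.append([(x / scale, y / scale) for x, y in centered])
    return normalized

# One frame with three joints; joint 0 is the assumed root.
frames = [[(100.0, 200.0), (110.0, 180.0), (90.0, 240.0)]]
norm = normalize_pose_sequence(frames)
```

Normalizing this way removes where the subject stands and how large they appear, leaving only the relative motion that models like the CNN–LSTM consume.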

Scientific Production

Muhammad Amjad Raza; Nasir Mehmood; Hafeez Ur Rehman Siddiqui; Adil Ali Saleem; Roberto Marcelo Álvarez (roberto.alvarez@uneatlantico.es); Yini Airet Miró Vera (yini.miro@uneatlantico.es); Isabel de la Torre Díez
