Items where Author is "Masías Vergara, Manuel"

Number of documents: 7.

Article

Article
Subjects > Engineering
Europe University of Atlantic > Research > Scientific Production
Ibero-american International University > Research > Scientific Production
Ibero-american International University > Research > Articles and Books
Universidad Internacional do Cuanza > Research > Scientific Production
University of La Romana > Research > Scientific Production
Open access, English

Botnets are used for malicious activities such as cyber-attacks, spamming, and data theft, and have become a significant threat to cyber security. Despite existing approaches for cyber-attack detection, botnets remain a particularly difficult problem that calls for more advanced detection methods. In this research, a stacking classifier based on K-nearest neighbor, support vector machine, decision tree, random forest, and multilayer perceptron, called KSDRM, is proposed for botnet detection. Logistic regression acts as the meta-learner, combining the predictions of the base classifiers into the final prediction with the aim of increasing the overall accuracy and predictive performance of the ensemble. The UNSW-NB15 dataset is used to train the machine learning models and evaluate their effectiveness in detecting cyber-attacks on IoT networks. Categorical features are transformed into numerical values using label encoding, and machine learning techniques are adopted to recognize botnet attacks and enhance cyber security measures. The KSDRM model successfully captures the complex patterns and traits of botnet attacks, obtaining 99.99% training accuracy, and performs well during testing with an accuracy of 97.94%. Based on 3, 5, 7, and 10 folds, k-fold cross-validation shows average accuracies of 99.89%, 99.88%, 99.89%, and 99.87%, respectively. The experiments and results demonstrate that the KSDRM model is an effective method for identifying botnet-based cyber-attacks. The findings of this study have the potential to improve cyber security controls and strengthen networks against evolving threats.
Ali, Mudasir; Mushtaq, Muhammad Faheem; Akram, Urooj; Gavilanes Aray, Daniel; Masías Vergara, Manuel; Karamti, Hanen and Ashraf, Imran (2025) Botnet detection in internet of things using stacked ensemble learning model. Scientific Reports, 15 (1). ISSN 2045-2322
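The stacked ensemble described above (five base learners combined by a logistic-regression meta-learner) can be sketched with scikit-learn's `StackingClassifier`. This is an illustrative reconstruction on synthetic data, not the authors' KSDRM code: the UNSW-NB15 loading, label encoding, and hyperparameters are omitted.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# stand-in binary-classification data; the paper uses UNSW-NB15
X, y = make_classification(n_samples=600, n_features=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

# KSDRM-style base learners: KNN, SVM, DT, RF, MLP
base = [
    ("knn", KNeighborsClassifier()),
    ("svm", SVC(probability=True, random_state=42)),
    ("dt", DecisionTreeClassifier(random_state=42)),
    ("rf", RandomForestClassifier(n_estimators=50, random_state=42)),
    ("mlp", MLPClassifier(max_iter=500, random_state=42)),
]

# logistic regression as the meta-learner over cross-validated base predictions
clf = StackingClassifier(estimators=base, final_estimator=LogisticRegression(), cv=5)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

The `cv=5` argument makes the meta-learner train on out-of-fold base predictions, which is what prevents the stack from simply memorizing the base learners' training-set outputs.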

Article
Subjects > Biomedicine
Subjects > Engineering
Europe University of Atlantic > Research > Scientific Production
Fundación Universitaria Internacional de Colombia > Research > Scientific Production
Ibero-american International University > Research > Scientific Production
Ibero-american International University > Research > Articles and Books
University of La Romana > Research > Scientific Production
Open access, English

Non-Insulin-Dependent Diabetes Mellitus (NIDDM) is a chronic health condition characterized by high blood sugar levels which, if not treated early, can lead to serious complications such as blindness. Human Activity Recognition (HAR) offers potential for early NIDDM diagnosis, emerging as a key application of HAR technology. This research introduces DiabSense, a state-of-the-art smartphone-based system for early staging of NIDDM. DiabSense incorporates HAR and Diabetic Retinopathy (DR) analysis, leveraging two different Graph Neural Networks (GNN). HAR uses a comprehensive array of 23 human activities resembling diabetes symptoms, and DR is a prevalent complication of NIDDM. A Graph Attention Network (GAT) for HAR achieved 98.32% accuracy on sensor data, while a Graph Convolutional Network (GCN) on the APTOS 2019 dataset scored 84.48%, surpassing other state-of-the-art models. The trained GCN analyzed retinal images of four experimental human subjects to generate DR reports, and the GAT produced their average daily activity durations over 30 days. Daily activities in non-diabetic periods of diabetic patients were measured and compared with the daily activities of the experimental subjects, which helped generate risk factors. Fusing the risk factors with DR conditions enabled early diagnosis recommendations for the experimental subjects despite the absence of any apparent symptoms. The DiabSense outcome was compared with clinical diagnosis reports for the experimental subjects using the A1C test, and the results confirmed that the system accurately assessed their early diagnosis requirements. Overall, DiabSense exhibits significant potential for ensuring early NIDDM treatment, improving millions of lives worldwide.

Alam, Md Nuho Ul; Hasnine, Ibrahim; Bahadur, Erfanul Hoque; Masum, Abdul Kadar Muhammad; Briones Urbano, Mercedes; Masías Vergara, Manuel; Uddin, Jia; Ashraf, Imran and Samad, Md. Abdus (2024) DiabSense: early diagnosis of non-insulin-dependent diabetes mellitus using smartphone-based human activity recognition and diabetic retinopathy analysis with Graph Neural Network. Journal of Big Data, 11 (1). ISSN 2196-1115
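The graph-attention mechanism underlying a GAT layer like the one used for HAR above can be illustrated in a few lines of NumPy: each node attends to its neighbors with weights computed from a shared projection and an attention vector. The weights `W` and `a` here are random placeholders, not trained DiabSense parameters.

```python
import numpy as np

# toy graph with self-loops (e.g., a small sensor-channel graph)
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 5))   # node features
W = rng.normal(size=(5, 8))   # shared linear transform
a = rng.normal(size=(16,))    # attention vector over [Wh_i || Wh_j]

H = X @ W                     # projected features, (4, 8)

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

# e_ij = LeakyReLU(a^T [Wh_i || Wh_j]) for edges (i, j); -inf masks non-edges
att = np.full_like(A, -np.inf)
for i in range(4):
    for j in range(4):
        if A[i, j]:
            att[i, j] = leaky_relu(a @ np.concatenate([H[i], H[j]]))

# row-wise softmax over neighbors -> attention coefficients alpha_ij
att = np.exp(att - att.max(axis=1, keepdims=True))
alpha = att / att.sum(axis=1, keepdims=True)

H_out = alpha @ H             # attention-weighted neighborhood aggregation
```

Each row of `alpha` sums to one, so `H_out[i]` is a convex combination of node `i`'s neighbor representations — the key difference from a GCN, whose mixing weights are fixed by the graph structure.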

Article
Subjects > Engineering
Europe University of Atlantic > Research > Scientific Production
Ibero-american International University > Research > Scientific Production
Ibero-american International University > Research > Articles and Books
Universidad Internacional do Cuanza > Research > Scientific Production
University of La Romana > Research > Scientific Production
Open access, English

Software cost and effort estimation is one of the most significant tasks in software engineering. Research in this field has been evolving, with new techniques that necessitate periodic comparative analyses. Software project success largely depends on accurate cost estimation, as it gives an idea of the challenges and risks involved in development. The great diversity of ML and non-ML techniques has prompted comparisons and progressed into the integration of these techniques. Given their varying advantages, it has become imperative to identify preferred estimation techniques to improve the project development process. This study presents a systematic literature review (SLR) investigating the trends of articles published over the past decade and a half and proposes a way forward. The review follows a three-stage approach to plan (Tollgate approach), conduct (Likert-type scale), and report the results from five renowned digital libraries. Across the 52 selected articles, approaches based on the artificial neural network (ANN) and constructive cost model (COCOMO) have been the favored techniques. The mean magnitude of relative error (MMRE) has been the preferred accuracy metric; software engineering and project management are the most relevant fields; and the PROMISE repository has been identified as the most widely accessed database. This review is likely to be of value for software development cost and effort estimation.

Rashid, Chaudhary Hamza; Shafi, Imran; Ahmad, Jamil; Bautista Thompson, Ernesto; Masías Vergara, Manuel; Diez, Isabel De La Torre and Ashraf, Imran (2023) Software Cost and Effort Estimation: Current Approaches and Future Trends. IEEE Access. p. 1. ISSN 2169-3536
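For reference, the MMRE metric favored in the surveyed papers is simply the mean of the relative errors between actual and estimated effort. A minimal sketch, with hypothetical person-month values:

```python
def mmre(actual, predicted):
    """Mean Magnitude of Relative Error: mean of |actual - predicted| / actual."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# hypothetical actual vs. estimated effort in person-months
score = mmre([100, 200, 50], [110, 180, 60])   # (0.1 + 0.1 + 0.2) / 3
```

Lower is better; a common (if debated) rule of thumb in the estimation literature treats MMRE ≤ 0.25 as acceptable accuracy.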

Article
Subjects > Engineering
Europe University of Atlantic > Research > Scientific Production
Fundación Universitaria Internacional de Colombia > Research > Scientific Production
Ibero-american International University > Research > Scientific Production
Ibero-american International University > Research > Articles and Books
Universidad Internacional do Cuanza > Research > Scientific Production
Open access, English

In the Internet of Things (IoT), data packets are accumulated and disseminated across IoT devices without human intervention, so the privacy and security of sensitive data during transmission are crucial. Multiple routing techniques exist to ensure security and privacy in IoT systems. One such technique is the routing protocol for low power and lossy networks (RPL), an IPv6 protocol commonly used for routing in IoT systems. Formal modeling of an IoT system can validate the reliability, accuracy, and consistency of the system. This paper presents formal modeling of the RPL protocol and an analysis of its security schemes using colored Petri nets, applying formal validation and verification to both the secure and non-secure modes of RPL. The proposed approach can also be useful for formal modeling-based verification of the security of other communication protocols.

Balfaqih, Mohammed; Ahmad, Farooq; Chaudhry, Muhammad Tayyab; Jamal, Muhammad Hasan; Sohail, Muhammad Amar; Gavilanes Aray, Daniel; Masías Vergara, Manuel and Ashraf, Imran (2023) Formal modeling and analysis of security schemes of RPL protocol using colored Petri nets. PLOS ONE, 18 (8). e0285700. ISSN 1932-6203
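To make the colored-Petri-net idea concrete: places hold tokens that carry data ("colors"), and a transition's guard inspects a token's color to decide how it is routed. The sketch below is a drastically simplified, hypothetical model of a signed-message check in the style of RPL's secure mode — real CPN models (e.g., in CPN Tools) use hierarchical nets and rich color sets, none of which is reproduced here.

```python
# Places hold colored tokens; here each token is (sender, signed?) — the
# boolean is the token's "color" that the transition guard inspects.
places = {
    "incoming": [("nodeA", True), ("nodeB", False)],
    "accepted": [],
    "rejected": [],
}

def verify_enabled():
    # a transition is enabled when its input place holds at least one token
    return bool(places["incoming"])

def fire_verify():
    # firing consumes one colored token from the input place...
    sender, signed = places["incoming"].pop(0)
    # ...and the guard on the color chooses the output place
    target = "accepted" if signed else "rejected"
    places[target].append(sender)

# run the net to quiescence (no transition enabled)
while verify_enabled():
    fire_verify()
```

Formal verification then amounts to exploring every reachable marking of such a net and checking properties like "no unsigned message ever reaches `accepted`".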

Article
Subjects > Engineering
Europe University of Atlantic > Research > Scientific Production
Fundación Universitaria Internacional de Colombia > Research > Scientific Production
Ibero-american International University > Research > Scientific Production
Ibero-american International University > Research > Articles and Books
Universidad Internacional do Cuanza > Research > Scientific Production
Open access, English

With the advancement of information technology, digital data stealing and duplication have become easier. Over a trillion bytes of data are generated and shared on social media in a single day, and the authenticity of digital data is currently a major problem. Cryptography and image watermarking are domains that provide multiple security services, such as authenticity, integrity, and privacy. In this paper, a digital image watermarking technique is proposed that employs the least significant bit (LSB) and the Canny edge detection method. The proposed method provides better security services and is computationally less expensive, which is the demand of today's world. The major contributions of this method are finding suitable places for watermark embedding and providing additional watermark security by scrambling the watermark image. A digital image is divided into non-overlapping blocks, and the gradient is calculated for each block. Convolution masks are then applied to find the gradient direction and magnitude, and non-maximum suppression is applied. Finally, LSB is used to embed the watermark in the hysteresis step. Additional security is provided by scrambling the watermark signal using a chaotic substitution box. The proposed technique is more secure because of LSB's high payload and because watermark embedding follows the Canny edge detection filter; the Canny gradient direction and magnitude determine how many bits will be embedded. To test the performance of the proposed technique, several image processing and geometrical attacks are performed. The proposed method shows high robustness to image processing and geometrical attacks.

Faheem, Zaid Bin; Ishaq, Abid; Rustam, Furqan; de la Torre Díez, Isabel; Gavilanes, Daniel; Masías Vergara, Manuel and Ashraf, Imran (2023) Image Watermarking Using Least Significant Bit and Canny Edge Detection. Sensors, 23 (3). p. 1210. ISSN 1424-8220
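The core "LSB at edge pixels" idea can be sketched in NumPy. This toy uses a plain gradient magnitude as a stand-in for the paper's full Canny pipeline (directions, non-maximum suppression, hysteresis), assumes the embedder and extractor share the edge locations, and omits the chaotic substitution-box scrambling.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # host image

# crude edge proxy: keep the strongest 10% of gradient magnitudes
gy, gx = np.gradient(img.astype(float))
mag = np.hypot(gx, gy)
edge_idx = np.argwhere(mag > np.percentile(mag, 90))
rows, cols = edge_idx.T

# watermark payload: one bit per selected edge pixel
bits = rng.integers(0, 2, size=len(edge_idx), dtype=np.uint8)

# embed: clear each pixel's least significant bit, then write the payload bit
wm = img.copy()
wm[rows, cols] = (wm[rows, cols] & 0xFE) | bits

# extract: read the LSBs back at the shared edge locations
extracted = wm[rows, cols] & 1
```

Because only the LSB of already high-contrast pixels changes, the maximum per-pixel distortion is 1 gray level, which is why LSB-at-edges schemes stay visually imperceptible while offering a high payload.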

Article
Subjects > Engineering
Europe University of Atlantic > Research > Scientific Production
Fundación Universitaria Internacional de Colombia > Research > Scientific Production
Ibero-american International University > Research > Scientific Production
Ibero-american International University > Research > Articles and Books
Universidad Internacional do Cuanza > Research > Scientific Production
Open access, English

Conventional outage management practices in distribution systems are tedious and complex due to the long time taken to locate faults. Emerging smart technologies and cloud services can be utilized and integrated into the power industry to enhance the overall process, especially fault monitoring and normalizing in distribution systems. This paper introduces smart fault monitoring and normalizing technologies for distribution systems using one of the most popular cloud service platforms, the Microsoft Azure Internet of Things (IoT) Hub, together with related services. A hardware prototype was constructed based on part of a real underground distribution network, and the fault monitoring and normalizing techniques were integrated to form a complete system. Such a system with IoT integration effectively reduces the power outage experienced by customers in the healthy section of a faulted feeder from approximately 1 h to less than 5 min, and can significantly improve the System Average Interruption Duration Index (SAIDI) and System Average Interruption Frequency Index (SAIFI) of electric utility companies.

Peter, Geno; Stonier, Albert Alexander; Gupta, Punit; Gavilanes, Daniel; Masías Vergara, Manuel and Lung sin, Jong (2022) Smart Fault Monitoring and Normalizing of a Power Distribution System Using IoT. Energies, 15 (21). p. 8206. ISSN 1996-1073

Article
Subjects > Biomedicine
Subjects > Nutrition
Europe University of Atlantic > Research > Scientific Production
Ibero-american International University > Research > Scientific Production
Ibero-american International University > Research > Articles and Books
Closed access, English

In the last decade, specific dietary patterns, mainly characterized by high consumption of vegetables and fruits, have proven beneficial for the prevention of both metabolic syndrome (MetS)-related dysfunctions and neurodegenerative disorders, such as Alzheimer's disease (AD). Nowadays, neuroimaging readouts can be used to diagnose AD, investigate MetS effects on brain functionality and anatomy, and assess the effects of dietary supplementation and nutritional patterns in relation to neurodegeneration and AD-related features. Here we review the scientific literature describing the use of the most recent neuroimaging techniques to detect AD- and MetS-related brain features and to investigate associations between consolidated dietary patterns or nutritional interventions and AD, focusing specifically on observational and intervention studies in humans.

Pistollato, Francesca; Sumalla Cano, Sandra; Elío Pascual, Iñaki; Masías Vergara, Manuel; Giampieri, Francesca and Battino, Maurizio (2015) The Use of Neuroimaging to Assess Associations Among Diet, Nutrients, Metabolic Syndrome, and Alzheimer's Disease. Journal of Alzheimer's Disease, 48 (2). pp. 303-318. ISSN 1387-2877

Generated on Sat Apr 4 23:47:37 2026 UTC.

Full text (PDF): /27825/1/s41598-026-39196-x_reference.pdf

Open access, English

Benchmarking multiple instance learning architectures from patches to pathology for prostate cancer detection and grading using attention-based weak supervision

Histopathological evaluation is necessary for the diagnosis and grading of prostate cancer, which is still one of the most common cancers in men globally. Traditional evaluation is time-consuming, prone to inter-observer variability, and challenging to scale. The clinical usefulness of current AI systems is limited by the need for comprehensive pixel-level annotations. The objective of this research is to develop and evaluate a large-scale benchmarking study of a weakly supervised deep learning framework that minimizes the need for annotation and ensures interpretability for automated prostate cancer diagnosis and International Society of Urological Pathology (ISUP) grading using whole slide images (WSIs). This study rigorously tested six cutting-edge multiple instance learning (MIL) architectures (CLAM-MB, CLAM-SB, ILRA-MIL, AC-MIL, AMD-MIL, WiKG-MIL), three feature encoders (ResNet50, CTransPath, UNI2), and four patch extraction techniques (varying sizes and overlap) using the PANDA dataset (10,616 WSIs), yielding 72 experimental configurations. The methodology used distributed cloud computing to process over 31 million tissue patches, implementing advanced attention mechanisms to ensure clinical interpretability through Grad-CAM visualizations. The optimal configuration (UNI2 encoder with ILRA-MIL, 256 × 256 patches, 50% overlap) achieved 78.75% accuracy and 90.12% quadratic weighted kappa (QWK), outperforming traditional methods and approaching expert pathologist-level diagnostic capability. Overlapping smaller patches offered the best balance of spatial resolution and contextual information, while domain-specific foundation models performed noticeably better than generic encoders. This work is the first large-scale, comprehensive comparison of weakly supervised MIL methods for prostate cancer diagnosis and grading. The proposed approach offers excellent clinical diagnostic performance, scalability, practical feasibility through cloud computing, and interpretability using visualization tools.

Scientific Production

Naveed Anwer Butt; Dilawaiz Sarwat; Irene Delgado Noya (irene.delgado@uneatlantico.es); Kilian Tutusaus (kilian.tutusaus@uneatlantico.es); Nagwan Abdel Samee; Imran Ashraf
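Attention-based MIL, the weak-supervision mechanism shared by the architectures benchmarked above, pools a bag of patch embeddings into a single slide-level vector via attention weights. A minimal NumPy sketch with randomly initialized (untrained) parameters, not any of the benchmarked models:

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(8, 16))   # a bag of N=8 patch embeddings, D=16 each
w = rng.normal(size=(16,))     # attention parameter vector (untrained placeholder)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# per-instance relevance scores, normalized into attention weights
scores = np.tanh(H @ w)
alpha = softmax(scores)        # alpha sums to 1 over the bag

# slide-level representation: attention-weighted average of patch embeddings
bag_embedding = alpha @ H
```

Only a slide-level label is needed to train `w` end to end; the learned `alpha` then doubles as an interpretability map over patches, which is the basis for heatmaps like the Grad-CAM visualizations mentioned above.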


Full text (PDF): /27915/1/csbj.0023.pdf

Open access, English

A Systematic Literature Review on Integrated Deep Learning and Multi-Agent Vision-Language Frameworks for Pathology Image Analysis and Report Generation

This systematic literature review (SLR) investigates the integration of deep learning (DL), vision-language models (VLMs), and multi-agent systems in the analysis of pathology images and automated report generation. The rapid advancement of whole-slide imaging (WSI) technologies has posed new challenges in pathology, especially due to the scale and complexity of the data. DL techniques in general, and convolutional neural networks (CNNs) and transformers in particular, have significantly enhanced image analysis tasks including segmentation, classification, and detection. However, these models often lack the generalizability to generate coherent, clinically relevant text, thus necessitating the integration of VLMs and large language models (LLMs). This review examines the effectiveness of VLMs and LLMs in bridging the gap between visual data and clinical text, focusing on their potential for automating the generation of pathology reports. Additionally, multi-agent systems, which leverage specialized artificial intelligence (AI) agents to collaboratively perform diagnostic tasks, are explored for their contributions to improving diagnostic accuracy and scalability. Through a synthesis of recent studies, this review highlights the successes, challenges, and future directions of these AI technologies in pathology diagnostics, offering a comprehensive foundation for the development of integrated, AI-driven diagnostic workflows.

Scientific Production

Usama Ali; Imran Shafi; Jamil Ahmad; Arlette Zárate Cáceres; Thania Chio Montero; Hafiz Muhammad Raza ur Rehman; Imran Ashraf


Full text (PDF): /27970/1/s11357-026-02188-w.pdf

Open access, English

Fish consumption and cognitive function in aging: a systematic review of observational studies

Epidemiological studies consistently link higher fish intake with slower rates of cognitive decline and lower dementia incidence. The aim of the present study was to systematically review existing observational studies investigating the association between fish consumption and cognitive function in older adults. A total of 25 studies were reviewed (8 cross-sectional and 17 prospective, mainly including healthy older adults; participant ages ranged from 18 to 30 years at baseline in prospective studies up to 65 to 91 years at the upper limit of the age spectrum). The cognitive functions investigated in most published studies spanned various domains, such as global cognition, memory (episodic, working), executive function (planning, inhibition, flexibility), attention, and processing speed. Existing studies vary greatly in design (cross-sectional and prospective), geographical area, number of participants, and tools used to assess the outcomes of interest. The main findings across studies are not univocal: some studies report stronger evidence of an association between fish consumption and various cognitive domains, while others report null findings. The most consistently responsive domains were processing speed, executive functioning, semantic memory, and global cognitive ability among individuals consuming fish at least weekly — domains highly relevant to both neurodegenerative and vascular forms of cognitive impairment. Positive associations were also observed for verbal memory and general memory, though these were less uniform and often attenuated after multivariable adjustment. In contrast, associations with reaction time, verbal-numerical reasoning, and broad composite scores were inconsistent, and several fully adjusted models showed null results. In conclusion, the evidence suggests that regular fish intake (typically ≥1–2 servings per week) is linked to preserved cognitive performance, although some inconsistent findings require further investigation.

Scientific Production

Justyna Godos; Giuseppe Caruso; Agnieszka Micek; Alberto Dolci; Carmen Lilí Rodríguez Velasco (carmen.rodriguez@uneatlantico.es); Evelyn Frias-Toral; Jason Di Giorgio; Nicola Veronese; Andrea Lehoczki; Mario Siervo; Zoltan Ungvari; Giuseppe Grosso


Full text (PDF): /27554/1/s41598-026-37541-8_reference.pdf

Open access, English

A scalable and secure federated learning authentication scheme for IoT

Secure and scalable authentication remains a fundamental challenge in Internet of Things (IoT) networks due to constrained device resources, dynamic topology, and the absence of centralized trust infrastructures. Conventional password-based and certificate-driven authentication schemes incur high computation, storage, and communication overhead, limiting their suitability for large-scale deployments. To address these limitations, this paper proposes ScLBS, a federated learning (FL)–based self-certified authentication scheme for distributed and sustainable IoT environments. ScLBS integrates self-certified public key cryptography with FL-driven trust adaptation, enabling decentralized public key derivation without reliance on third-party certificate authorities or exposure of private credentials. A zero-knowledge mechanism combined with location-aware authentication strengthens resistance to impersonation, Sybil, and replay attacks. Hierarchical key management supported by a -tree enables efficient group rekeying and preserves forward and backward secrecy under dynamic membership. Formal security verification is conducted under the Dolev–Yao adversary model using ProVerif, confirming secrecy of private and session keys (SKs) and correctness of authentication. Extensive NS-3 simulations and ablation analysis demonstrate that ScLBS achieves lower authentication delay, reduced message overhead, improved network utilization, and decreased energy consumption compared to representative IoT authentication schemes, while maintaining bounded FL overhead. These results indicate that ScLBS provides a balanced trade-off between security strength, scalability, and resource efficiency for constrained IoT networks.

Scientific Production

Premkumar Chithaluru; B. Veera Jyothi; Fahd S. Alharithi; Wojciech Ksiazek; M. Ramchander; Aman Singh (aman.singh@uneatlantico.es); Ravi Kumar Rachavaram


Full text (PDF): /27968/1/sensors-26-01516-v2.pdf

Open access, English

Human Activity Recognition in Domestic Settings Based on Optical Techniques and Ensemble Models

Human activity recognition (HAR) is essential in many applications, such as smart homes, assisted living, healthcare monitoring, rehabilitation, physiotherapy, and geriatric care. Conventional HAR methods use wearable sensors, e.g., acceleration sensors and gyroscopes; however, they are limited by issues such as sensitivity to position, user inconvenience, and potential health risks with long-term use. Vision-based optical camera systems provide a non-intrusive alternative, but they are susceptible to variations in lighting, occlusions, and privacy issues. This paper uses an optical method for recognizing human domestic activities based on pose estimation and deep learning ensemble models. Skeletal keypoint features are extracted from video data using PoseNet to generate a privacy-preserving representation that captures key motion dynamics without being sensitive to changes in appearance. A total of 30 subjects (15 male and 15 female) were sampled across 2734 activity samples covering nine daily domestic activities. Six deep learning architectures were evaluated: the Transformer, Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Multilayer Perceptron (MLP), One-Dimensional Convolutional Neural Network (1D CNN), and a hybrid Convolutional Neural Network–Long Short-Term Memory (CNN–LSTM) architecture. Results on the hold-out test set show that the CNN–LSTM architecture achieves an accuracy of 98.78% within our experimental setting. Leave-One-Subject-Out cross-validation further confirms robust generalization across unseen individuals, with CNN–LSTM achieving a mean accuracy of 97.21% ± 1.84% across 30 subjects. The results demonstrate that vision-based pose estimation with deep learning is a useful, precise, and non-intrusive approach to HAR in smart healthcare and home automation systems.

Scientific Production

Muhammad Amjad Raza; Nasir Mehmood; Hafeez Ur Rehman Siddiqui; Adil Ali Saleem; Roberto Marcelo Álvarez (roberto.alvarez@uneatlantico.es); Yini Airet Miró Vera (yini.miro@uneatlantico.es); Isabel de la Torre Díez
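As a sketch of the preprocessing such a pipeline needs, a PoseNet-style keypoint stream can be sliced into fixed-length overlapping windows before being fed to a sequence model such as a CNN–LSTM. The window size, stride, and synthetic data below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# synthetic keypoint stream: T frames, K keypoints, (x, y) per keypoint
T, K = 120, 17
rng = np.random.default_rng(7)
stream = rng.uniform(size=(T, K, 2))

def windows(seq, size=30, stride=15):
    """Slice a frame sequence into overlapping fixed-length windows."""
    return np.stack([seq[i:i + size]
                     for i in range(0, len(seq) - size + 1, stride)])

X = windows(stream)               # (n_windows, 30, 17, 2)
# flatten keypoints per frame so each window is a (time, features) matrix,
# the shape expected by LSTM/GRU/1D-CNN-style sequence classifiers
X = X.reshape(len(X), 30, K * 2)  # (n_windows, 30, 34)
```

With a 50% stride overlap, consecutive windows share half their frames, which both augments the training set and smooths predictions at activity boundaries.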
