Detection and classification of brain tumor using a hybrid learning model in CT scan images
Article
Subjects > Biomedicine
Subjects > Engineering
Europe University of Atlantic > Research > Scientific Production
Ibero-american International University > Research > Articles and Books
Open
English
metadata
Ghasemi, Roja; Islam, Naveed; Bayat, Samin; Shabir, Muhammad; Rahman, Shahid; Amin, Farhan; de la Torre, Isabel; Kuc Castilla, Ángel Gabriel and Ramírez-Vargas, Debora L.
Contact: angel.kuc@uneatlantico.es, debora.ramirez@unini.edu.mx
(2025)
Detection and classification of brain tumor using a hybrid learning model in CT scan images.
Scientific Reports, 15 (1).
ISSN 2045-2322
Text
s41598-025-18979-8.pdf (2MB) — Available under License Creative Commons Attribution Non-commercial No Derivatives.
Abstract
Accurate diagnosis of brain tumors is critical to understanding the prognosis in terms of tumor type, growth rate, location, removal strategy, and the overall well-being of patients. Among the different modalities used for the detection and classification of brain tumors, a computed tomography (CT) scan is often performed as an early-stage procedure for minor symptoms such as headaches. Automated procedures based on artificial intelligence (AI) and machine learning (ML) are used to detect and classify brain tumors in CT scan images, but the key challenges in achieving the desired outcome are the model's complexity and generalization. To address these issues, we propose a hybrid model that combines hand-crafted features extracted with classical machine learning and deep features extracted from CT images. Although MRI is a common modality for brain tumor diagnosis, its high cost and longer acquisition time make CT scans a more practical choice for early-stage screening and widespread clinical use. The proposed framework comprises several stages: image acquisition, pre-processing, feature extraction, feature selection, and classification. The hybrid architecture combines features from ResNet50, AlexNet, local binary patterns (LBP), histogram of oriented gradients (HOG), and median intensity; the relevant features are selected with the SelectKBest algorithm using a scoring function, which optimizes model performance, and classification is performed by a multilayer perceptron (MLP) neural network. In addition, the proposed model incorporates data augmentation to handle imbalanced datasets. Unlike most existing hybrid approaches, which primarily target MRI-based brain tumor classification, our method is designed specifically for CT scan images, addressing their distinct noise patterns and lower soft-tissue contrast.
To the best of our knowledge, this is the first work to integrate LBP, HOG, median intensity, and deep features from both ResNet50 and AlexNet in a structured fusion pipeline for CT brain tumor classification. Tested on data from multiple sources, the proposed hybrid model achieved an accuracy of 94.82%, precision of 94.52%, specificity of 98.35%, and sensitivity of 94.76%, comparing favorably with state-of-the-art models. While MRI-based models often report higher accuracies, the proposed model's 94.82% accuracy on CT scans is within 3–4% of leading MRI-based approaches, demonstrating strong generalization despite the modality difference. By combining hand-crafted and deep learning features, the model effectively improves brain tumor detection and classification accuracy in CT scans and has potential for clinical application, aiding early and accurate diagnosis. Unlike MRI, which is often time-intensive and costly, CT scans are more accessible and faster to acquire, making them suitable for early-stage screening and emergency diagnostics; this reinforces the practical and clinical value of the proposed model in real-world healthcare settings.
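The fusion-then-select-then-classify pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature blocks below are synthetic stand-ins for the real LBP, HOG, median-intensity, ResNet50, and AlexNet extractors, and the dimensions, `k`, and MLP size are assumed for the example.

```python
# Hypothetical sketch of the fusion pipeline: concatenate hand-crafted and
# deep feature blocks, filter with SelectKBest, classify with an MLP.
# All feature values are synthetic stand-ins; the real extractors (LBP, HOG,
# ResNet50, AlexNet) are not reproduced here.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
lbp = rng.random((n, 59))        # stand-in for a uniform-LBP histogram
hog = rng.random((n, 128))       # stand-in for a HOG descriptor
median = rng.random((n, 1))      # stand-in for median intensity
resnet = rng.random((n, 256))    # stand-in for a pooled ResNet50 embedding
alexnet = rng.random((n, 256))   # stand-in for a pooled AlexNet embedding
X = np.hstack([lbp, hog, median, resnet, alexnet])  # fused feature vector
y = rng.integers(0, 2, n)        # synthetic tumor / no-tumor labels

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(score_func=f_classif, k=100),  # keep 100 highest-scoring features
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0),
)
clf.fit(X, y)
print(clf.predict(X[:5]).shape)  # (5,)
```

`f_classif` here plays the role of the abstract's "scoring function"; the paper may use a different score, so treat it as a placeholder.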
| Document Type: | Article |
|---|---|
| Keywords: | Healthcare; CNN models; AI-based cognitive neuroscience; Medical image processing of neuroimaging |
| Subject classification: | Subjects > Biomedicine Subjects > Engineering |
| Divisions: | Europe University of Atlantic > Research > Scientific Production Ibero-american International University > Research > Articles and Books |
| Deposited: | 21 Oct 2025 13:51 |
| Last Modified: | 21 Oct 2025 13:51 |
| URI: | https://repositorio.unib.org/id/eprint/17858 |
Background/Objectives: The growing integration of Artificial Intelligence (AI) and chatbots in health professional education offers innovative methods to enhance learning and clinical preparedness. This study aimed to evaluate the educational impact of the E+DIEting_Lab chatbot platform and the perceptions of university students of Human Nutrition and Dietetics regarding its utility, usability, and design when implemented in clinical nutrition training. Methods: The platform was piloted from December 2023 to April 2025 with 475 students from multiple European universities. While all 475 students completed the initial survey, 305 finished the follow-up evaluation, representing a 36% attrition rate. Participants completed surveys before and after interacting with the chatbots, assessing prior experience, knowledge, skills, and attitudes. Data were analyzed using descriptive statistics and independent samples t-tests to compare pre- and post-intervention perceptions. Results: A total of 475 university students completed the initial survey and 305 the final evaluation. Most were female (75.4%), with representation across six languages and diverse institutions. Students reported clear perceived learning gains: 79.7% reported that their practical skills in clinical dietetics and communication were updated, 90% felt that new digital tools improved classroom practice, and 73.9% reported enhanced interpersonal skills. Self-rated competence in using chatbots as learning tools increased significantly, with mean knowledge scores rising from 2.32 to 2.66 and skills from 2.39 to 2.79 on a 0–5 Likert scale (p < 0.001 for both). Perceived effectiveness and usefulness of chatbots as self-learning tools remained positive but showed a small decline after use (effectiveness from 3.63 to 3.42; usefulness from 3.63 to 3.45), suggesting that hands-on experience refined, but did not diminish, students' overall favorable views of the platform.
Conclusions: The implementation and pilot evaluation of the E+DIEting_Lab self-learning virtual patient chatbot platform demonstrate that structured digital simulation tools can significantly improve perceived clinical nutrition competences. These findings support chatbot adoption in dietetics curricula and inform future digital education innovations.
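The pre/post comparison described above can be illustrated with an independent-samples t-test, the method the abstract reports. This is a sketch on synthetic 0–5 Likert-style scores, not the study's data; the means and standard deviation below are assumptions for the example.

```python
# Illustrative independent-samples (Welch's) t-test on synthetic Likert scores.
# Group sizes mirror the abstract (475 pre, 305 post); score distributions are
# invented for the example and are NOT the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
pre = np.clip(rng.normal(2.32, 1.0, 475), 0, 5)   # synthetic pre-intervention scores
post = np.clip(rng.normal(2.66, 1.0, 305), 0, 5)  # synthetic post-intervention scores

# Welch's variant avoids assuming equal variances between the two groups
t, p = stats.ttest_ind(pre, post, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```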
Iñaki Elío Pascual (inaki.elio@uneatlantico.es), Kilian Tutusaus (kilian.tutusaus@uneatlantico.es), Imanol Eguren García (imanol.eguren@uneatlantico.es), Álvaro Lasarte García, Arturo Ortega-Mansilla (arturo.ortega@uneatlantico.es), Thomas Prola (thomas.prola@uneatlantico.es), Sandra Sumalla Cano (sandra.sumalla@uneatlantico.es)
Suicide Ideation Detection Using Social Media Data and Ensemble Machine Learning Model
Identifying the emotional state of individuals has useful applications, particularly in reducing the risk of suicide. Users' posts on social media platforms can provide cues to their emotional state. Clinical approaches to suicide ideation detection rely primarily on evaluation by psychologists and medical experts, which is time-consuming and requires medical expertise; machine learning approaches have shown potential for automating detection. In this regard, this study presents a soft voting ensemble model (SVEM) that combines random forest, logistic regression, and stochastic gradient descent classifiers using soft voting. In addition, for robust training of SVEM, a hybrid feature engineering approach is proposed that combines term frequency-inverse document frequency and bag of words. For experimental evaluation, the "Suicide Watch" and "Depression" subreddits on the Reddit platform are used. Results indicate that the proposed SVEM model achieves an accuracy of 94%, outperforming existing approaches, with robust precision, recall, and F1 scores of 0.93 each. ERT and deep learning models are also used for comparison, and the SVEM model performs better: gated recurrent unit, long short-term memory, and recurrent neural network models reach 92% accuracy, while the convolutional neural network obtains 91%. SVEM's computational complexity is also low compared to deep learning models. Further, this study highlights the importance of explainability in healthcare applications such as suicidal ideation detection, where LIME provides valuable insights into the contribution of different features. In addition, k-fold cross-validation further validates the performance of the proposed approach.
Erol KINA, Jin-Ghoo Choi, Abid Ishaq, Rahman Shafique, Mónica Gracia Villar (monica.gracia@uneatlantico.es), Eduardo René Silva Alvarado (eduardo.silva@funiber.org), Isabel de la Torre Diez, Imran Ashraf
Human metapneumovirus (hMPV) is a potential pandemic pathogen and a particular concern for elderly and immunocompromised patients. No vaccine or specific antiviral is available for hMPV. We conducted an in-silico study to predict initial antiviral candidates against human metapneumovirus. Our methodology included protein modeling, stability assessment, molecular docking, molecular simulation, analysis of non-covalent interactions, bioavailability, carcinogenicity, and pharmacokinetic profiling. We pinpointed four plant-derived bio-compounds as antiviral candidates. Among them, apigenin showed the highest binding affinity, with values of −8.0 kcal/mol for the hMPV-F protein and −7.6 kcal/mol for the hMPV-N protein. Molecular dynamics simulations and further analyses confirmed that the protein-ligand docked complexes exhibited acceptable stability compared with two standard antiviral drugs. Additionally, these four compounds yielded satisfactory outcomes in bioavailability, drug-likeness, ADME-Tox (absorption, distribution, metabolism, excretion, and toxicity), and STopTox analyses. This study highlights the potential of apigenin and xanthoangelol E as initial antiviral candidates, underscoring the need for wet-lab evaluation and preclinical and clinical trials against human metapneumovirus infection.
Hasan Huzayfa Rahaman, Afsana Khan, Nadim Sharif, Wasifuddin Ahmed, Nazmul Sharif, Rista Majumder, Silvia Aparicio Obregón (silvia.aparicio@uneatlantico.es), Rubén Calderón Iglesias (ruben.calderon@uneatlantico.es), Isabel De la Torre Díez, Shuvra Kanti Dey
Introduction: Jackfruit cultivation is highly affected by leaf diseases that reduce yield, fruit quality, and farmer income. Early diagnosis remains challenging due to the limitations of manual inspection and the lack of automated and scalable disease detection systems. Existing deep-learning approaches often suffer from limited generalization and high computational cost, restricting real-time field deployment. Methods: This study proposes CNNAttLSTM, a hybrid deep-learning architecture integrating Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM) units, and an attention mechanism for multi-class classification of algal leaf spot, black spot, and healthy jackfruit leaves. Each image is divided into ordered 56×56 spatial patches, treated as pseudo-temporal sequences to enable the LSTM to capture contextual dependencies across different leaf regions. Spatial features are extracted via Conv2D, MaxPooling, and GlobalAveragePooling layers; temporal modeling is performed by LSTM units; and an attention mechanism assigns adaptive weights to emphasize disease-relevant regions. Experiments were conducted on a publicly available Kaggle dataset comprising 38,019 images, using predefined training, validation, and testing splits. Results: The proposed CNNAttLSTM model achieved 99% classification accuracy, outperforming the baseline CNN (86%) and CNN–LSTM (98%) models. It required only 3.7 million parameters, trained in 45 minutes on an NVIDIA Tesla T4 GPU, and achieved an inference time of 22 milliseconds per image, demonstrating high computational efficiency. The patch-based pseudo-temporal approach improved spatial–temporal feature representation, enabling the model to distinguish subtle differences between visually similar disease classes. Discussion: Results show that combining spatial feature extraction with temporal modeling and attention significantly enhances robustness and classification performance in plant disease detection. 
The lightweight design enables real-time and edge-device deployment, addressing a major limitation of existing deep-learning techniques. The findings highlight the potential of CNNAttLSTM for scalable, efficient, and accurate agricultural disease monitoring and broader precision agriculture applications.
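The patch-to-sequence step described above (ordered 56×56 patches read as a pseudo-temporal sequence) can be sketched with plain array reshaping. The image content and helper name below are invented for illustration; only the reshaping logic is shown, not the CNNAttLSTM model itself.

```python
# Sketch of turning a leaf image into an ordered sequence of 56x56 patches,
# the pseudo-temporal input an LSTM can consume. Synthetic image; the
# function name `image_to_patch_sequence` is a hypothetical helper.
import numpy as np

def image_to_patch_sequence(img, patch=56):
    """Split an (H, W, C) image into row-major flattened patches."""
    h, w, c = img.shape
    rows, cols = h // patch, w // patch
    seq = (img[:rows * patch, :cols * patch]
           .reshape(rows, patch, cols, patch, c)
           .transpose(0, 2, 1, 3, 4)          # (rows, cols, patch, patch, c)
           .reshape(rows * cols, patch * patch * c))
    return seq

img = np.random.default_rng(0).random((224, 224, 3))  # stand-in leaf image
seq = image_to_patch_sequence(img)
print(seq.shape)  # (16, 9408): 16 ordered patches, each 56*56*3 values
```

Each row of `seq` is one patch, so feeding `seq` to an LSTM treats the left-to-right, top-to-bottom scan of the leaf as a 16-step sequence.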
Gaurav Tuteja, Fuad Ali Mohammed Al-Yarimi, Amna Ikram, Rupesh Gupta, Ateeq Ur Rehman, Jeewan Singh, Irene Delgado Noya (irene.delgado@uneatlantico.es), Luis Alonso Dzul López (luis.dzul@uneatlantico.es)
End-to-end emergency response protocol for tunnel accidents augmentation with reinforcement learning
Autonomous unmanned aerial vehicles (UAVs) offer cost-effective and flexible solutions for a wide range of real-world applications, particularly in hazardous and time-critical environments. Their ability to navigate autonomously, communicate rapidly, and avoid collisions makes UAVs well suited for emergency response scenarios. However, real-time path planning in dynamic and unpredictable environments remains a major challenge, especially in confined tunnel infrastructures where accidents may trigger fires, smoke propagation, debris, and rapid environmental changes. In such conditions, conventional preplanned or model-based navigation approaches often fail due to limited visibility, narrow passages, and the absence of reliable localization signals. To address these challenges, this work proposes an end-to-end emergency response framework for tunnel accidents based on Multi-Agent Reinforcement Learning (MARL). Each UAV operates as an independent learning agent using an Independent Q-Learning paradigm, enabling real-time decision-making under limited computational resources. To mitigate premature convergence and local optima during exploration, Grey Wolf Optimization (GWO) is integrated as a policy-guidance mechanism within the reinforcement learning (RL) framework. A customized reward function is designed to prioritize victim discovery, penalize unsafe behavior, and explicitly discourage redundant exploration among agents. The proposed approach is evaluated using a frontier-based exploration simulator under both single-agent and multi-agent settings with multiple goals. Extensive simulation results demonstrate that the proposed framework achieves faster goal discovery, improved map coverage, and reduced rescue time compared to state-of-the-art GWO-based exploration and random search algorithms. These results highlight the effectiveness of lightweight MARL-based coordination for autonomous UAV-assisted tunnel emergency response.
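The independent Q-learning update each agent applies can be sketched on a toy problem. The 1-D corridor, rewards, and hyperparameters below are invented for illustration; the GWO policy guidance and the paper's custom reward are omitted.

```python
# Toy sketch of tabular Q-learning with epsilon-greedy exploration, the
# per-agent update in the Independent Q-Learning paradigm described above.
# A single agent searches a 10-cell corridor for a "victim" at the far end;
# all rewards and hyperparameters are illustrative assumptions.
import numpy as np

n_states, n_actions = 10, 2        # corridor cells; actions: 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate
goal = n_states - 1                # victim location at the corridor's end
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))

for episode in range(200):
    s = 0
    while s != goal:
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2 = max(0, s - 1) if a == 0 else min(goal, s + 1)
        # reward victim discovery, penalize wandering (redundant exploration)
        r = 1.0 if s2 == goal else -0.01
        # Q-learning temporal-difference update
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(int(Q[0].argmax()))  # learned greedy action at the start state
```

In the multi-agent setting described above, each UAV would hold its own such Q-table and learn independently, with the shared reward shaping discouraging overlap.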
Hafiz Muhammad Raza ur Rehman, M. Junaid Gul, Rabbiya Younas, Muhammad Zeeshan Jhandir, Roberto Marcelo Álvarez (roberto.alvarez@uneatlantico.es), Yini Airet Miró Vera (yini.miro@uneatlantico.es), Imran Ashraf
