<?xml version="1.0" encoding="UTF-8"?>
<collection>
<dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:invenio="http://invenio-software.org/elements/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd"><dc:identifier>doi:10.1007/s10462-025-11432-2</dc:identifier><dc:language>eng</dc:language><dc:creator>Mehavilla, Lorena</dc:creator><dc:creator>Rodríguez, María</dc:creator><dc:creator>García, José</dc:creator><dc:creator>Alesanco, Álvaro</dc:creator><dc:title>Evaluating large language models effectiveness for flow-based intrusion detection: a comparative study with ML and DL baselines</dc:title><dc:identifier>ART-2026-147679</dc:identifier><dc:description>This paper presents the first systematic benchmark evaluating Large Language Models (LLMs), specifically GPT-2, GPT-Neo-125M, and LLaMA-3.2-1B, as standalone classifiers for intrusion detection, covering both binary and multiclass classification tasks, using structured Zeek logs derived from the CIC IoT 2023 dataset. We compare their performance against established and widely used Machine Learning models (XGBoost, Random Forest, Decision Tree) and Deep Learning models (MLP, GRU, LeNet-5) across key evaluation metrics: detection effectiveness (precision, recall, and F1-score), inference speed, and resource consumption. All models are consistently trained and rigorously evaluated on the CIC IoT 2023 dataset, ensuring fair, reproducible, and transparent comparisons. Our findings indicate that while LLMs achieve strong F1-scores exceeding 95% without fully utilizing available GPU resources, they still do not outperform the top-performing ML models. Notably, XGBoost achieves a higher F1-score of 96.96% while using only 4% of the available CPU.
These results emphasize the practical trade-offs between detection capability, inference efficiency, and hardware requirements when applying LLMs in flow-based IDS contexts, particularly in resource-constrained environments such as IoT or edge deployments.</dc:description><dc:date>2026</dc:date><dc:source>http://zaguan.unizar.es/record/168101</dc:source><dc:doi>10.1007/s10462-025-11432-2</dc:doi><dc:identifier>http://zaguan.unizar.es/record/168101</dc:identifier><dc:identifier>oai:zaguan.unizar.es:168101</dc:identifier><dc:relation>info:eu-repo/grantAgreement/ES/DGA/T31-20R</dc:relation><dc:relation>info:eu-repo/grantAgreement/ES/MCINN/PID2022-136476OB-I00</dc:relation><dc:identifier.citation>ARTIFICIAL INTELLIGENCE REVIEW 59, 2 (2026), [38 pp.]</dc:identifier.citation><dc:rights>by</dc:rights><dc:rights>https://creativecommons.org/licenses/by/4.0/deed.es</dc:rights><dc:rights>info:eu-repo/semantics/openAccess</dc:rights></dc:dc>

</collection>