Input and Output Matter: Malicious Traffic Detection With Explainability

Wanshuang Lin, Chunhe Xia, Tianbo Wang*, Yuan Zhao, Liang Xi, Song Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Deep learning-based models achieve remarkable accuracy in network traffic identification. However, the black-box nature of neural networks often makes identification results difficult to explain. Although some eXplainable Artificial Intelligence (XAI) methods have been applied to network traffic identification, most focus on model explainability and do not provide sufficient credibility. In emerging network systems that use proprietary protocols, low-credibility malicious traffic detection can have severe consequences. It is therefore imperative to deeply understand network traffic features and to trust the detection results. In this paper, we propose an explainable architecture for emerging network systems. The architecture enhances the explainability of malicious traffic detection from both the input and output perspectives, aiming to make network traffic data understandable and to improve the reliability of the results. The effectiveness of explaining inputs and outputs is verified through experimental analysis in case studies. Furthermore, we review research on explainable models in the field of network traffic identification and summarize research opportunities.

Original language: English
Pages (from-to): 259-267
Number of pages: 9
Journal: IEEE Network
Volume: 39
Issue number: 2
DOIs
State: Published - Mar 2025

Keywords

  • Network traffic identification
  • explainability analysis
  • extended finite state machine
  • protocol reverse engineering
