iSMELL: Assembling LLMs with Expert Toolsets for Code Smell Detection and Refactoring

  • Di Wu
  • Fangwen Mu
  • Lin Shi*
  • Zhaoqiang Guo
  • Kui Liu
  • Weiguang Zhuang
  • Yuqi Zhong
  • Li Zhang

  *Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Detecting and refactoring code smells is challenging, laborious, and an ongoing effort. Although large language models (LLMs) have demonstrated potential in identifying various types of code smells, they also have limitations, such as input-output token restrictions, difficulty accessing repository-level knowledge, and an inability to perform dynamic source code analysis. Existing learning-based methods and commercial expert toolsets have advantages in handling complex smells: they can analyze project structures and contextual information in depth, access global code repositories, and utilize advanced code analysis techniques. However, these toolsets are often designed for specific types and patterns of code smells and can only address fixed smells, lacking flexibility and scalability. To resolve this problem, we propose iSMELL, an ensemble approach that employs various code smell detection toolsets via a Mixture of Experts (MoE) architecture for comprehensive code smell detection, and enhances LLMs with the detection results from expert toolsets for refactoring the identified code smells. First, we train a MoE model that, based on input code vectors, outputs the most suitable expert tool for identifying each type of smell. Then, we select the recommended toolsets for code smell detection and obtain their results. Finally, we equip the prompts with the detection results from the expert toolsets, thereby enhancing the refactoring capability of LLMs for code with existing smells and enabling them to provide different solutions based on the type of smell. We evaluate our approach on detecting and refactoring three classical and complex code smells, i.e., Refused Bequest, God Class, and Feature Envy. The results show that, by adopting seven expert code smell toolsets, iSMELL achieves an average F1 score of 75.17% on code smell detection, outperforming LLM baselines by 35.05% in F1 score. We further evaluate the code refactored by the enhanced LLM.
The quantitative and human evaluation results show that iSMELL can improve code quality metrics and conduct satisfactory refactoring of the identified code smells. We believe that our proposed solution can provide new insights into better leveraging LLMs and existing approaches to resolve complex software tasks.
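The routing step described in the abstract can be illustrated with a minimal sketch: a gating function scores candidate expert toolsets from a code feature vector and forwards each input to the top-scoring expert. The tool names, gating weights, and features below are hypothetical illustrations (the abstract does not list iSMELL's seven toolsets), not the paper's actual implementation.

```python
import math

EXPERTS = ["ToolA", "ToolB", "ToolC"]  # hypothetical expert toolsets

# Hypothetical gating weights: one weight vector per expert over a
# 3-dimensional code feature vector (e.g., size, coupling, cohesion).
GATE_WEIGHTS = {
    "ToolA": [0.9, 0.1, -0.2],
    "ToolB": [0.2, 0.8,  0.1],
    "ToolC": [0.1, 0.3,  0.6],
}

def softmax(scores):
    """Normalize raw gate scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route(features):
    """Return (best_expert, probabilities) for one code feature vector."""
    raw = [sum(w * f for w, f in zip(GATE_WEIGHTS[e], features))
           for e in EXPERTS]
    probs = softmax(raw)
    best = EXPERTS[probs.index(max(probs))]
    return best, dict(zip(EXPERTS, probs))

# A feature vector dominated by the first dimension routes to the expert
# whose gate weights emphasize that dimension.
best, probs = route([1.0, 0.2, 0.1])
print(best)  # → ToolA
```

In the full approach, the selected expert's detection results would then be inserted into the LLM prompt to guide refactoring; this sketch covers only the gating decision.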

Original language: English
Title of host publication: Proceedings - 2024 39th ACM/IEEE International Conference on Automated Software Engineering, ASE 2024
Publisher: Association for Computing Machinery, Inc
Pages: 1345-1357
Number of pages: 13
ISBN (Electronic): 9798400712487
DOIs
State: Published - 27 Oct 2024
Event: 39th ACM/IEEE International Conference on Automated Software Engineering, ASE 2024 - Sacramento, United States
Duration: 28 Oct 2024 - 1 Nov 2024

Publication series

Name: Proceedings - 2024 39th ACM/IEEE International Conference on Automated Software Engineering, ASE 2024

Conference

Conference: 39th ACM/IEEE International Conference on Automated Software Engineering, ASE 2024
Country/Territory: United States
City: Sacramento
Period: 28/10/24 - 1/11/24
