2025, Vol. 31, No. 2, pp. 21-30
A Fault Root Cause Analysis System for Intelligent Computing Networks Based on Large Language Model Monte Carlo Tree Search
Abstract:

A fault root cause analysis (RCA) system for intelligent computing networks based on Monte Carlo tree search (MCTS) with large language models (LLMs), named RCA-MCTS, is proposed in this paper. Drawing on cutting-edge research on MCTS in the field of LLM reasoning, a multi-strategy prompt expansion mechanism is designed for root cause analysis in the complex fault scenarios of intelligent computing networks. In addition, a simulation mechanism is developed based on interactive feedback from a fault simulation environment, so that the MCTS process during LLM reasoning is adapted to the fault root cause analysis task. Experimental results show that RCA-MCTS improves the accuracy of fault root cause analysis by 33%-43% and raises the average matching degree of fault inference action sequences by 18%-34%.
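As a rough illustration of the workflow the abstract describes, the sketch below combines LLM-proposed diagnostic actions (the prompt expansion step) with rollouts against a fault-simulation environment inside a standard MCTS loop. The names FaultNode, propose_actions, and simulate_step are illustrative assumptions, not the paper's actual interfaces; any LLM client and environment can be plugged in through the two callables.

```python
# Minimal sketch of MCTS for LLM-driven fault root cause analysis.
# Assumed (hypothetical) interfaces, not the paper's implementation:
#   propose_actions(state) -> list of candidate diagnostic actions (from an LLM)
#   simulate_step(state, action) -> (next_state, reward, done) from a fault-sim environment
import math
import random
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class FaultNode:
    """One node of the search tree: a partial fault-reasoning trajectory."""
    state: str                              # textual summary of observations so far
    action: Optional[str] = None            # diagnostic action that led to this state
    parent: Optional["FaultNode"] = None
    children: List["FaultNode"] = field(default_factory=list)
    visits: int = 0
    value: float = 0.0

    def ucb(self, c: float = 1.4) -> float:
        # Unvisited children are explored first.
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits
        )


def mcts_rca(
    root_state: str,
    propose_actions: Callable[[str], List[str]],
    simulate_step: Callable[[str, str], tuple],
    iterations: int = 50,
    max_depth: int = 6,
) -> List[str]:
    """Return the best diagnostic action sequence found by the search."""
    root = FaultNode(state=root_state)

    for _ in range(iterations):
        # 1. Selection: descend by UCB until a leaf is reached.
        node = root
        while node.children:
            node = max(node.children, key=lambda n: n.ucb())

        # 2. Expansion: ask the LLM for candidate diagnostic actions.
        for action in propose_actions(node.state):
            next_state, _, _ = simulate_step(node.state, action)
            node.children.append(FaultNode(state=next_state, action=action, parent=node))
        if node.children:
            node = random.choice(node.children)

        # 3. Simulation: roll out against the fault-simulation environment.
        state, reward, depth, done = node.state, 0.0, 0, False
        while not done and depth < max_depth:
            candidates = propose_actions(state)
            if not candidates:
                break
            state, reward, done = simulate_step(state, random.choice(candidates))
            depth += 1

        # 4. Backpropagation: propagate the environment reward up the tree.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent

    # Extract the most-visited path as the inferred fault-reasoning action sequence.
    path, node = [], root
    while node.children:
        node = max(node.children, key=lambda n: n.visits)
        path.append(node.action)
    return path
```

In a real system, propose_actions would wrap several prompting strategies over an LLM (for example, symptom-driven, topology-driven, or log-driven prompts), and simulate_step would query the fault-simulation environment for new observations and a reward signal; both are left abstract here.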

Basic information:

CLC number: TP393.06

Citation:

[1] LUO Z Q, MIAO Y K, LI D. A fault root cause analysis system for intelligent computing networks based on large language model Monte Carlo tree search [J]. ZTE Technology Journal (中兴通讯技术), 2025, 31(2): 21-30.
