Resources

Interested in AI code auditing? Check out our resources below.

We have published a series of works on LLM-based static analysis that form the cornerstone of RepoAudit. We also continuously collect and categorize the latest research in the field. The research projects and paper list below are provided as a reference for researchers and practitioners.


Paper List

CodeLLMPaper: A Continuously Updated Collection of CodeLLM Papers

CodeLLMPaper is a curated collection of the latest LLM-for-Code research published in top-tier venues across software engineering, programming languages, security, NLP, and machine learning. The collected papers cover diverse coding tasks, foundational principles of code models, empirical studies, and surveys.


BugScope: Learn to Find Bugs Like Human

BugScope is an intelligent bug detection agent that learns to identify diverse bugs from examples. It outperforms existing industrial tools like Cursor BugBot and CodeRabbit, detecting twice as many bugs while maintaining high precision.
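The core idea of learning to detect bugs from examples can be illustrated with a minimal few-shot prompting sketch. This is an illustrative assumption of how such an agent might assemble its prompt, not BugScope's actual implementation; `build_prompt` and the exemplar snippets are hypothetical.

```python
# A minimal sketch of example-driven ("few-shot") bug detection.
# The exemplar code/label pairs and prompt wording are illustrative
# assumptions, not BugScope's real prompts or API.

EXAMPLES = [
    ("f = open(path)\ndata = f.read()", "resource leak: file never closed"),
    ("x = d[key]", "possible KeyError: missing-key check"),
]

def build_prompt(examples, target_code):
    """Assemble a few-shot prompt: each exemplar pairs buggy code with
    its bug label, then asks the model to audit the target snippet."""
    parts = []
    for code, label in examples:
        parts.append(f"Code:\n{code}\nBug: {label}\n")
    parts.append(f"Code:\n{target_code}\nBug:")
    return "\n".join(parts)

# The resulting string would be sent to an LLM, which completes the
# final "Bug:" line for the target snippet.
prompt = build_prompt(EXAMPLES, "conn = connect(db)\nconn.query(sql)")
print(prompt.count("Bug:"))  # one per exemplar plus the final query -> 3
```

Feeding labeled exemplars this way lets the same agent target new bug categories without retraining, which is what makes the approach adaptable to "diverse bugs".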


LLMDFA: Analyzing Dataflow in Code with Large Language Models
The Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024)

LLMDFA is an LLM-powered, summary-based data-flow analysis framework that achieves precision and recall comparable to, and in some cases exceeding, state-of-the-art symbolic static analysis tools.
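To make the "summary-based" idea concrete, here is a toy sketch of how per-function data-flow summaries compose along a call chain to decide source-to-sink reachability. In LLMDFA the summaries would be extracted by an LLM from each function's body; here they are hand-written, and all names (`flows_to_sink`, the summary table) are hypothetical illustrations, not LLMDFA's API.

```python
# Toy summary-based taint analysis: each function is reduced to a
# summary (which argument positions flow to its return value), and
# whole-program reachability is decided by composing summaries
# instead of re-analyzing function bodies at every call site.

# Per-function summaries: set of parameter indices that flow to the
# return value (hand-written stand-ins for LLM-extracted summaries).
SUMMARIES = {
    "read_input": set(),   # taint source: returns tainted data
    "sanitize":   set(),   # sanitizer: return value is always clean
    "wrap":       {0},     # arg 0 flows through to the return value
    "exec_query": {0},     # taint sink: consumes arg 0
}

SOURCES, SANITIZERS, SINKS = {"read_input"}, {"sanitize"}, {"exec_query"}

def flows_to_sink(call_chain):
    """Return True if taint introduced by a source survives the chain
    of calls and reaches a sink, using only the summaries."""
    tainted = False
    for fn in call_chain:
        if fn in SOURCES:
            tainted = True
        elif fn in SANITIZERS:
            tainted = False
        elif fn in SINKS:
            return tainted
        else:
            # taint propagates only if the summary says arg 0
            # reaches the return value
            tainted = tainted and 0 in SUMMARIES[fn]
    return False

# exec_query(wrap(read_input())): taint passes through -> True
print(flows_to_sink(["read_input", "wrap", "exec_query"]))      # True
# exec_query(sanitize(read_input())): sanitizer kills taint -> False
print(flows_to_sink(["read_input", "sanitize", "exec_query"]))  # False
```

The payoff of the summary-based design is compositionality: each function is analyzed once, and long interprocedural flows are checked cheaply by chaining the small summaries.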