Static analysis is a widely used technique for bug detection in the software world. However, building a precise static analysis is difficult in practice, since much real-world code is too complex to analyze. Large Language Models (LLMs) offer a promising complement, as recent advances demonstrate remarkable capabilities in comprehending code. Intuitively, an LLM's understanding of complex code can be leveraged to automatically make otherwise-unanalyzable code snippets analyzable. In this paper, we describe how to build a practical framework combining LLMs and static analysis, using use-before-initialization (UBI) bugs as a case study. We develop LLift, a fully automated framework that combines a static analysis tool with an LLM. By carefully designing the procedure and prompts, we overcome a number of challenges, including bug-specific modeling, large codebases, and the non-deterministic nature of LLMs. Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs produced by static analysis, LLift demonstrates potent capability, achieving reasonable precision (50%) on previously undecidable code snippets without missing any bugs. It even identified 13 new UBI bugs in the Linux kernel. This research paves the way for new opportunities and methodologies in the use of LLMs for static analysis.