Well, my personal experience has just been that the ML approach catches a lot of things static analysis tools don't. Those rules are hard-coded by humans, and there are dozens, if not hundreds, of ways to write any given function with identical logic. It's impossible for static analysis to be comprehensive enough to catch and fix a code block longer than a few lines.
E.g. I had a super ugly nested try/catch block in a new integration test I was writing. It used a test framework and language I'd never touched before, so that was the only way I knew how to write the logic. I asked the LLM to improve it, and it broke the nested try/catch into two top-level pieces using timeout functions and assertion checks I didn't know existed. The timeout removed the need to throw an exception at all, and the assertion removed the need to catch one.
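I didn't name the framework, so here's a rough Python-flavored sketch of the shape of that refactor (all names here are hypothetical, not the actual code): polling by catching exceptions in nested try blocks vs. a top-level timeout helper plus a plain assert.

```python
import time

# Hypothetical stub standing in for the real integration target.
class FakeService:
    def __init__(self, ready_after: float):
        self._ready_at = time.monotonic() + ready_after

    def status(self) -> str:
        return "ready" if time.monotonic() >= self._ready_at else "starting"

# Before (roughly): nested try/except, signaling "not yet" by
# throwing and catching, then throwing again on exhaustion.
def wait_ready_ugly(svc, retries=50):
    for _ in range(retries):
        try:
            try:
                if svc.status() == "ready":
                    return True
            except ConnectionError:
                pass  # swallow and retry
        except Exception:
            pass
        time.sleep(0.01)
    raise TimeoutError("service never became ready")

# After (roughly): one top-level timeout helper -- no exception is
# thrown just to mean "keep waiting", so nothing needs catching.
def wait_until(predicate, timeout=1.0, interval=0.01) -> bool:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

def test_service_becomes_ready():
    svc = FakeService(ready_after=0.05)
    # The assert replaces the catch: failure is reported by the
    # test framework, not by unwinding through nested handlers.
    assert wait_until(lambda: svc.status() == "ready", timeout=1.0)
```

Most test frameworks have some built-in version of `wait_until` and rich assertion helpers; the point is I'd never have found them by hand in an unfamiliar stack.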
Oh, don't get me wrong, I definitely think LLMs are gonna absolutely destroy kids' ability to learn anything, including coding, if they lean on them like a teacher.
But for those who use it as a tool to build and do, rather than as a substitute for learning, I'm quickly becoming a strong believer in its usefulness.