• @taladar@sh.itjust.works
    124 days ago

    Programmers can now use large language models (LLMs) to generate computer code more quickly. However, this only makes programmers’ lives easier if that code follows the rules of the programming language and doesn’t cause a computer to crash.

    If that is their level of understanding of what constitutes code quality, I am not surprised they think LLMs can actually produce usable code.

  • hendrik
    64 days ago

    Wasn’t “error-free” one of the undecidable problems in maths / computer science? But I like how they also pay attention to semantics and didn’t choose a clickbaity title. Maybe I should read the paper and see how they did it, and whether it’s more than an AI agent at the same intelligence level guessing whether the code is correct. I mean, surprisingly enough, current AI models usually do a good job of generating syntactically correct code one-shot. My issues with AI coding usually start once things get a bit more complex. Then it often feels like the model is poking at things and copy-pasting stuff from StackOverflow without really knowing why the code doesn’t handle the real-world data, or fails entirely.
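
    A minimal sketch of that syntax-vs-semantics gap in Python (just an illustration, nothing from the paper): the parser can decide whether generated code is syntactically valid, but “error-free” in the semantic sense can in general only be approximated, e.g. by running tests.

    ```python
    import ast

    # Syntactically fine, semantically wrong: stands in for LLM output.
    generated = "def add(a, b):\n    return a - b\n"

    # Syntax check: decidable, the parser either accepts the code or it doesn't.
    try:
        ast.parse(generated)
        print("parses: yes")
    except SyntaxError:
        print("parses: no")

    # Semantic check: undecidable in general, so in practice you run tests.
    namespace = {}
    exec(generated, namespace)  # executing untrusted output is only OK in a toy example
    print("add(2, 2) == 4:", namespace["add"](2, 2) == 4)  # prints False
    ```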

  • @VintageGenious@sh.itjust.works
    14 days ago

    Very interesting. Fixing one of the most common flaws of LLMs. If you can force them to follow proper structure, they can generate better batches of test data samples, correct mathematical proofs in Lean, proper conlang words and sentences, proper JSON output for later use by other tools, and, obviously, correct code in a specific version of a programming language.
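
    A toy sketch of that “force them to follow proper structure” idea (constrained decoding, illustrative only and not the paper’s actual method): a stand-in “model” (here just random.choice over a vocabulary, in place of an LLM’s next-token distribution) proposes characters, and a small grammar mask rejects anything that would break the target format, in this case a JSON list of single-digit integers.

    ```python
    import random

    DIGITS = "0123456789"
    VOCAB = list("[]{}," + DIGITS + "abc")  # includes junk the mask must filter out

    def allowed_chars(state):
        return {
            "start": "[",           # output must open the list
            "first": DIGITS + "]",  # first element, or an empty list
            "item": DIGITS,         # after a comma an element is required
            "sep": ",]",            # after an element: comma or close
        }[state]

    def next_state(ch):
        if ch == "[":
            return "first"
        if ch == "]":
            return "done"
        if ch == ",":
            return "item"
        return "sep"                # a digit was emitted

    def generate(max_items=8):
        out, state, items = "", "start", 0
        while state != "done":
            chars = allowed_chars(state)
            if state == "sep" and items >= max_items:
                chars = "]"         # force the list to close eventually
            # the "model" samples only among tokens the grammar allows
            ch = random.choice([c for c in VOCAB if c in chars])
            items += ch in DIGITS
            out += ch
            state = next_state(ch)
        return out

    print(generate())  # e.g. "[3,7,1]"; always parses as a JSON list
    ```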