What I mean is like, what do you think is unironically awesome, even if people now think it’s cringe or stupid?

  • brucethemoose@lemmy.world · 23 hours ago

    They’re trained on plenty that’s similar enough, as long as it’s Python or something comparable in the dataset.

    It’s also been shown that LLMs are good at ‘abstracting’ one language to another, e.g. training on Chinese martial arts storytelling and transferring that ability to English, despite not having seen a single English character in the finetune. The specific example I’m thinking of is:

    https://huggingface.co/TriadParty/Deepsword-34B-Base
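
    Mechanically there’s nothing exotic about that kind of finetune; it’s a small single-language training run on top of a frozen multilingual base. Here’s a minimal sketch with Hugging Face transformers + peft (LoRA) — the base model name and corpus are placeholders of mine, not the actual Deepsword recipe:

    ```python
    # Minimal LoRA finetune sketch (transformers + peft). Illustrative only:
    # the base model and corpus are placeholders, not the Deepsword-34B setup.
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base = "some-org/some-34b-base"  # hypothetical base model
    tok = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    # LoRA trains a few small adapter matrices; the frozen base weights keep
    # their multilingual knowledge, which is plausibly why ability learned on
    # Chinese text can transfer to English output.
    model = get_peft_model(model, LoraConfig(
        r=16, lora_alpha=32, task_type="CAUSAL_LM",
        target_modules=["q_proj", "v_proj"]))

    # Single-language training data only (e.g. Chinese wuxia prose).
    ds = load_dataset("text", data_files="wuxia_corpus_zh.txt")["train"]
    ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=2048),
                batched=True, remove_columns=["text"])

    Trainer(model=model,
            args=TrainingArguments(output_dir="out", num_train_epochs=1,
                                   per_device_train_batch_size=1),
            train_dataset=ds,
            data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
            ).train()
    ```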

    Same with code. If you’re, say, working with a language it doesn’t know well, you can finetune it on a relatively small subset, combine it with a framework that verifies the output, and get good results, like with this:

    https://huggingface.co/cognition-ai/Kevin-32B

    [Chart: Kevin-32B outperforming OpenAI models]
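
    The “verify” part is what makes this work: generate, check mechanically, feed the errors back, retry. Here’s a toy version of that loop — the `generate` stub stands in for whatever finetuned model you’re running, and the verifier is just “does it compile”; a real setup would run tests or benchmarks too:

    ```python
    # Toy generate-and-verify loop. `generate` is a stub for your model;
    # py_compile is the stand-in verifier (real setups run tests/benchmarks).
    import py_compile
    import tempfile

    def generate(prompt: str) -> str:
        ...  # call your finetuned model here

    def verified_codegen(prompt: str, max_attempts: int = 4) -> str | None:
        feedback = ""
        for _ in range(max_attempts):
            code = generate(prompt + feedback)
            with tempfile.NamedTemporaryFile("w", suffix=".py",
                                             delete=False) as f:
                f.write(code)
            try:
                py_compile.compile(f.name, doraise=True)  # the verify step
                return code                               # passed, keep it
            except py_compile.PyCompileError as e:
                # Feed the error back so the next attempt can fix it.
                feedback = f"\n\nPrevious attempt failed to compile:\n{e.msg}"
        return None
    ```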

    Someone did this with GDScript too (the Godot game engine’s scripting language, fairly obscure), but I can’t find it atm.


    Not that they can be trusted for whole implementations or anything, but for banging out tedious blocks? Oh yeah. Especially if it’s something local/open you can tailor, and not a corporate API.
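
    For the local/open route: most local servers (llama.cpp, Ollama, vLLM) expose an OpenAI-compatible endpoint, so the stock client works against them. The port and model tag below are assumptions for a default Ollama install; swap in whatever you actually run:

    ```python
    # Boilerplate generation against a local OpenAI-compatible server.
    # base_url and model are assumptions for a default Ollama install.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

    resp = client.chat.completions.create(
        model="qwen2.5-coder:32b",  # whatever local coding model you run
        messages=[{
            "role": "user",
            "content": "Write GDScript boilerplate for a Node2D that "
                       "emits a signal when a body enters its Area2D.",
        }],
    )
    print(resp.choices[0].message.content)
    ```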

    • Jesus_666@lemmy.world · 20 hours ago

      Auto-writing boilerplate code doesn’t change the fact that you still have to reimplement the business logic, which is what we’re talking about. If you want to address the “reinventing the wheel” problem, LLMs would have to be able to spit out complete architectures for concrete problems.

      Nobody complains about reinventing the wheel on problems like “how do I test a method”; they’re complaining about reinventing the wheel on problems like “how can I refinance loans across multiple countries in the SEPA area while complying with all relevant laws”.