• kaffiene@lemmy.world
    8 hours ago

    I think part of the difficulty with these discussions is that people mean all sorts of different things by “AI”. Much of the current usage treats AI = LLMs, which changes the debate quite a lot.

    • Rogers@lemmy.ml
      27 minutes ago

      No doubt LLMs are not the end-all be-all. That said, especially after seeing what the next-gen ‘thinking’ models like o1 from ClosedAI OpenAI can do, even LLMs are going to get absurdly good. And they are getting faster and cheaper at a rate beyond my most optimistic guess from 2 years ago; hell, even from 6 months ago.

      Even if all progress stopped tomorrow on the software side, the gains from purpose-built silicon would make them cheaper and faster still. And that purpose-built hardware is coming very soon.

      Open models are about 4–6 months behind in quality, but probably a lot closer (if not ahead) for small ~7B models that can be run locally on low- to mid-range consumer hardware.

      • kaffiene@lemmy.world
        16 minutes ago

        I don’t doubt they’ll get faster. What I wonder is whether they’ll ever stop being so inaccurate. I feel like that’s a structural feature of the model, not something more compute will fix.