• taanegl · 1 year ago

        An open-source, locally run LLM that runs on a GPU or dedicated open PCIe hardware and doesn’t touch the cloud…

    • @PixxlMan@lemmy.world · 1 year ago

      To be fair - people don’t know what they want until they get it. In 2005 people would’ve asked for faster flip phones, not smartphones.

      I don’t have much faith in current-gen AI assistants actually being useful, but the fact that no one has asked for them doesn’t necessarily mean much.

      • @nossaquesapao@lemmy.eco.br · 1 year ago

        To be fair, in 2005 a lot of people dreamed of “mini portable computers that could fit in their hands”. They just didn’t associate that with the form smartphones eventually took, and when smartphones arrived, people were amazed by them. I don’t see the same level of reception when it comes to AI assistants.

      • @superguy@lemm.ee · 1 year ago (edited)

        faster flip phones

        I don’t think speed was a complaint anyone had about phones right before smartphones launched.

        People were mostly concerned with cell phone plans. Talking used to be charged by the minute, texting was charged per text, and data was practically non-existent.

        Cell phones have come a long way, but I think a lot of people take for granted just how much cell service has improved. I pay $25/month for a single line that gives me unlimited talk, text, and data (Visible). Couldn’t be happier.

    • @UnculturedSwine@lemmy.world · 1 year ago

      Would be a cool feature if it could be leveraged in a secure, private, efficient way that was more useful than 99% of the algorithmic monkey typewriter garbage that’s on the market these days. I don’t need a glorified Cleverbot rifling through my unspeakables.

      • ffhein · 1 year ago

        Local LLMs are getting better at a very rapid pace. They’re still a bit too resource-hungry to keep running in the background all the time, but Mistral-7b, for example, is quite competent for its size.
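
        To give a sense of why a 7B-parameter model counts as “small” but is still resource-hungry, here is a rough back-of-envelope sketch (my own illustration, not from the thread) of the memory needed just to hold the weights at common quantization levels. It ignores the KV cache and runtime overhead, which add more on top.

        ```python
        # Approximate weight storage for an LLM: parameters × bits per weight.
        # Real memory use is higher (KV cache, activations, framework overhead).

        def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
            """Approximate weight storage in gigabytes (decimal GB)."""
            return n_params * bits_per_weight / 8 / 1e9

        for bits in (16, 8, 4):
            gb = weight_memory_gb(7e9, bits)
            print(f"7B model at {bits}-bit: ~{gb:.1f} GB")
        # 16-bit: ~14.0 GB, 8-bit: ~7.0 GB, 4-bit: ~3.5 GB
        ```

        So even 4-bit quantized, a 7B model wants a few gigabytes of RAM or VRAM resident the whole time it’s loaded, which is why keeping one running in the background is still a stretch on typical machines.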