• UnderpantsWeevil@lemmy.world · 8 months ago

      What, you don’t like a handful of private mega-corps decimating the groundwater reserves of the upper Midwest so that some dorks can try and scam Amazon with fake books?

      • FiniteBanjo@lemmy.today · 8 months ago

        I especially don’t like how discourse can be poisoned and diluted by chatbots shilling for military operations.

        • UnderpantsWeevil@lemmy.world · 8 months ago

          We need chatbots to bombard all our social media feeds with pro-western military propaganda. Otherwise, Putin and Wumao and Evil Korea and The Muslim Horde and Drumpf will win.

        • Wes_Dev@lemmy.ml · 8 months ago

          One of my favorite moments like this was a Reddit thread where some account was pretending to be human and arguing with people in favor of the CEO’s actions during The Purge. Then one person asked it a question about making some dangerous thing or other, and it started replying with things like “As an AI model, I cannot explain how to do that” and stuff. It was great.

    • Duamerthrax@lemmy.world · 8 months ago

      The techbros who are into AI just want to own things without putting in the work. They want to sell you AI-generated images as Art and puff up their SEO with LLM chatbots.

      FOSS is the opposite of that.

    • Wes_Dev@lemmy.ml · 8 months ago

      I’m sorry to hear you’re frustrated. As an AI, my job is to assist and provide you with the information or help you need. Please feel free to let me know how I can better assist you, and I’ll do my best to address your concerns.

      • Klear@lemmy.world · 8 months ago

        Yet calling the simple rules that govern video game enemies “AI” is not controversial. Since when does AI have to not be fake to be called that?

          • raspberriesareyummy@lemmy.world · 8 months ago

          Good point. Thinking about it, though, I would consider those rules to be closer to AI than LLMs are, because they are logical rules based on “understanding” input data, as in “using input data in a coherent way that imitates how a human would use it.”

          LLMs are just sophisticated versions of the proverbial monkeys with typewriters that eventually produce the works of Shakespeare out of pure chance. Except that they have a bazillion switches to adjust, are trained on desired output, and then the generated output is shaped by some admittedly impressive grammar filters to impress humans. However, no one can explain how a given result came to pass (traceable exceptions being the subject of ongoing research), and no one can predict the output for a not-yet-tested input (or for identical input after the model has been altered, however little).

          Calling it AI is contributing to manslaughter, as evidenced by e.g. Tesla’s “autopilot” murdering people. PS: I know Tesla’s murder system is not an LLM, but it’s a very good example of how misnaming causes deaths. Obligatory fuck the muskrat.

      • FiniteBanjo@lemmy.today · 8 months ago

        Technically the technology is open to the public, but regular people cannot afford to implement it.

        The thing that makes large language models even marginally functional is scaling up the data and processing power behind several smaller models with specialized tasks: one model creates output from input, another checks it for accuracy/coherency, and a third polices it for things that are not allowed.
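
        Something like this, in rough Python (the functions are stand-ins I made up to show the wiring, not anyone’s actual models):

          # Sketch of the three-model pipeline described above.
          # Every function here is a stand-in, not a real vendor stack.

          def generator(prompt: str) -> str:
              # Stand-in for the big model that creates output from input.
              return f"Draft answer to: {prompt}"

          def checker(draft: str) -> bool:
              # Stand-in for a second model scoring accuracy/coherency.
              return len(draft.strip()) > 0

          def policer(draft: str) -> bool:
              # Stand-in for a third model blocking disallowed content.
              banned = ("disallowed topic",)
              return not any(term in draft.lower() for term in banned)

          def pipeline(prompt: str, retries: int = 3) -> str:
              # Generate, then gate the draft through both reviewer models.
              for _ in range(retries):
                  draft = generator(prompt)
                  if checker(draft) and policer(draft):
                      return draft
              return "As an AI model, I cannot help with that."

          print(pipeline("Why is FOSS good?"))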

        So unless you’ve got a datacenter and three high-powered servers with top-grade cooling systems and a military-grade power supply, fat fucking chance.

        • AdrianTheFrog@lemmy.world · 8 months ago

          I can run a small LLM on my 3060, but most of those models were originally trained on a cluster of A100s (maybe as few as ten, so more like one largish server than one datacenter).

          BitNet came out recently and looks like it will lower these requirements significantly (it essentially trains a model using ternary weights instead of floats, which turns out not to hurt quality all that much).
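
          The core trick looks roughly like this (a numpy sketch of absmean ternary quantization in the spirit of the BitNet b1.58 paper; not their actual code):

            import numpy as np

            def ternarize(w: np.ndarray):
                # Scale by the mean absolute weight, then snap every
                # weight to -1, 0, or +1 (the "1.58-bit" representation).
                scale = np.abs(w).mean() + 1e-8
                q = np.clip(np.round(w / scale), -1, 1)
                return q, scale

            w = np.random.randn(4, 4).astype(np.float32)
            q, scale = ternarize(w)
            print(q)              # only values in {-1, 0, 1}
            print(w - q * scale)  # quantization error training must absorb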

        • OozingPositron@feddit.cl · 8 months ago

          Basically Mistral. Check /lmg/ in /g/; if you have a GPU newer than two years, you can probably run a 32B quantised model.
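
          Something like this should work with llama-cpp-python and a 4-bit GGUF quant (the model path here is a placeholder):

            from llama_cpp import Llama  # pip install llama-cpp-python

            llm = Llama(
                model_path="./model-32b-q4_k_m.gguf",  # placeholder GGUF file
                n_gpu_layers=-1,  # offload all layers to the GPU
                n_ctx=4096,
            )

            out = llm("Explain FOSS in one sentence.", max_tokens=64)
            print(out["choices"][0]["text"])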

        • Simon@lemmy.dbzer0.com · 8 months ago

          Haha try the entire datacenter.

          If LLMs were practical on three servers, everyone and their mum would have an AI assistant product.