• DonEladio@feddit.org · 14 hours ago

    AI slop aside, what’s with all the AI hate? I use it for work and it significantly decreases my workload. I’m getting stuff done in minutes instead of hours.

    • [deleted]@piefed.world · 14 hours ago

      The massive corporate AI systems (LLMs for the most part) are driving up electricity and water usage, negatively impacting communities. They are creating a stock market bubble that will eventually burst. They are sucking up all the hardware, from GPUs to memory to hard drives and SSDs.

      On top of all of that, they are in such a rush to expand that many of them are installing fossil-fuel power plants on top of running the local grid ragged, so they pollute and drive up costs, all for an average incorrect-result rate of around 45%.

      There are a lot of ethical problems too, but those are the direct negatives to tons of people.

    • mushroommunk@lemmy.today · 13 hours ago

      If AI can do your job in minutes, you’re either a fool pumping out AI slop someone else has to fix (and you don’t realize it),

      Or

      Doing a job that really shouldn’t exist.

      LLMs can’t do more than shove out a watered-down average of things they’ve seen before. They can’t really solve problems, they can’t think; all they can do is regurgitate what they’ve seen. Not exactly conducive to quality.

        • Catoblepas@piefed.blahaj.zone · 10 hours ago

          I’m convinced the people who think it’s incredible literally just don’t know how to use a search engine, which is the one and only potentially useful function of LLMs other than writing asinine work-related emails.

    • wischi@programming.dev · 12 hours ago

      Try playing tic-tac-toe against ChatGPT, for example 🤣 (just ask for “let’s play ASCII tic tac toe”).

      It practically loses every game against my 4yo child - if it even manages to play according to the rules.

      AI: Trained on the entire internet using billions of dollars. 4yo: Just told her the rules of the game twice.

      Currently the best LLMs are certainly very “knowledgeable” (as in, they “know” much more than I - or practically any person - do for most topics) but they are certainly far away from intelligence.

      You should only use them if you are able to verify the correctness of the output yourself.

      • fonix232@fedia.io · 9 hours ago

        “See, no matter how much I’m trying to force this sewing machine to be a racecar, it just can’t do it, it’s a piece of shit”

        It’s the same with LLMs: if you misuse them, they won’t perform well. You have to treat an LLM as a tool with a specific purpose. In the case of LLMs, that purpose is to take a bunch of input tokens, analyse them, and output the most likely output tokens, the statistical “best response”. The intelligence is in putting that together, not in “understanding tic-tac-toe”. Mind you, you can tie in other ML frameworks for specific tasks they’re better suited to - e.g. you can hook up a chess engine (or a tic-tac-toe engine), and that will beat you every single time.

        Or an even better example… Instead of asking the LLM to play tic-tac-toe with you, ask it to write a Bash/Python/JavaScript tic-tac-toe game, and try playing against that. You’ll be surprised.
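        To make the suggestion above concrete, here is a minimal sketch of the kind of program “write me a tic-tac-toe game” tends to produce: board logic plus a brute-force minimax opponent. All names and structure here are a hypothetical illustration, not actual LLM output.

```python
# Minimal tic-tac-toe engine: win detection plus a perfect-play
# minimax opponent over the 3x3 board (indices 0..8).

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that side has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: +1 = X wins, -1 = O wins, 0 = draw."""
    w = winner(board)
    if w is not None:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None          # board full: draw
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '          # undo the trial move
        if best is None or \
           (player == 'X' and score > best[0]) or \
           (player == 'O' and score < best[0]):
            best = (score, m)
    return best
```

        Played from an empty board, perfect play on both sides always ends in a draw (score 0) - exactly the outcome the chat interface fails to reach when it plays the game directly.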

        • Catoblepas@piefed.blahaj.zone · 9 hours ago

          If LLMs can’t do whatever you tell them based purely on natural language instructions then they need to stop advertising it that way.

          It’s not just advertisement that’s the problem, do any of them even have user manuals? How is a user with no experience prompting LLMs (which was everyone 3 years ago) supposed to learn how to formulate a “correct” prompt without any instructions? It’s a smokescreen for blaming any bad output on the user.

          Oh, it told you to put glue in your pizza? You didn’t prompt it right. It gives you explicit instructions on how to kill yourself because you talked about being suicidal? You prompted it wrong. It completely makes up new medical anatomical terminology? You have once again prompted it wrong! (Don’t make me dig up links to all those news stories)

          It’s funny that the fediverse tends to come down so hard on the side of ‘RTFM’ with anything Linux-related, but with LLMs it’s somehow the user’s fault for trusting a product that was sold to them without any manual at all.

          • fonix232@fedia.io · 6 hours ago

            Sounds like you’re the kind of person who needs the “don’t put your fucking pets in the microwave” warnings.

    • 0nt0p0fth3w0rld@feddit.org · 14 hours ago

      The effect on the environment, and the fact that we know it will definitely lose whatever good it has, like TV/cable, the internet, and any honest, useful invention that has been corrupted by the dark side of human culture throughout history.

      Within the ego-driven society we live in, I don’t think we are capable of being a good species.

      It would be cool if things were different, but I’ve never seen it not turn out badly.

    • affenlehrer@feddit.org · 13 hours ago

      I hope analog hardware or some other trick will help us in the future to make at least local inference fast and low power.

      • fonix232@fedia.io · 11 hours ago

        Local inference isn’t really the issue. Relatively low-power hardware can already do passable tokens per second on medium-to-large models (40b to 270b). Of course it won’t compare to an AWS Bedrock instance, but it is passable.

        The reason why you won’t get local AI systems - at least not completely - is due to the restrictive nature of the best models. Most actually good models are not open source. At best you’ll get a locally runnable GGUF, but not open weights, meaning re-training potential is lost. Not to mention that most of the good and usable solutions tend to have complex interconnected systems so you’re not just talking to an LLM but a series of models chained together.

        But that doesn’t mean that local (not hyperlocal, aka “always on your device”, but local to your LAN) inference is impossible or hard. I have a £400 node running 3-4b models at lightning speed, at sub-100 W (really sub-60 W) power usage. For around £1500-2000 you can get a node with similar performance on 32-40b models. For about £4000, you can get a node that does the same with 120b models. Mind you, I’m talking about lightning-fast performance here, not passable.
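        A quick back-of-the-envelope check on the power claim above. The 60 W draw comes from the comment; the 40 tokens/s throughput is a hypothetical placeholder, not a measured number:

```python
def joules_per_token(watts: float, tokens_per_sec: float) -> float:
    """Energy cost of one generated token: W divided by tok/s gives J/tok."""
    return watts / tokens_per_sec

# At an assumed 60 W draw and a hypothetical 40 tokens/s:
print(joules_per_token(60, 40))  # 1.5 J per token
```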

    • Grimy@lemmy.world · 13 hours ago

      People got roped into a media campaign spearheaded by copyright companies.