Long lists of instructions show how Apple is trying to navigate AI pitfalls.

  • tacticalsugar@lemmy.blahaj.zone · 3 months ago

    You can’t tell an LLM not to hallucinate; that would require it to actually understand what it’s saying. “Hallucinations” are just LLMs bullshitting, because that’s what they do. LLMs aren’t actually intelligent; they’re just using statistics to remix existing sentences.

    • blackluster117@possumpat.io · 3 months ago

      I wish people would say machine learning or LLMs more frequently instead of AI being the buzzword. It really irks me. IT’S NOT ACCURATE! THAT’S NOT WHAT IT IS! STOP DEMEANING TRUE MACHINE CONSCIOUSNESS!

        • BallsandBayonets@lemmings.world · 3 months ago

          No LLM that is being advertised to the public is capable of original thought or self-awareness, so no. There are no AIs.

          I could see some of these LLMs getting close to being VIs (Virtual Intelligence, a reference from the video game Mass Effect). Realistic imitation but not true intelligence.

          • laughterlaughter@lemmy.world · 3 months ago (edited)

            I didn’t say LLMs.

            I said AI.

            So…

            Are there any practical examples of applied AI these days?

            Edit: lol at the downvotes for asking a genuine question!

            • Mikina@programming.dev · 3 months ago (edited)

              Well, LLMs are a subset of machine learning, and machine learning is a subset of the (really old by now) field of artificial intelligence. So LLMs do count as an applied use of AI.

              Aside from ML, there are a lot of ways to actually build an AI agent for agent-based models, which is what AI is mostly used for: GOAP, behavior trees, and other perception, decision, and reasoning algorithms that could IMO be considered closer to “AI” than LLMs are. Most UAVs, robotics, game NPCs, and even simple chat/crawl bots are agents by definition and do fall under the umbrella of AI. Even a simple if/then bot does.
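
              A toy sketch of that last point (my own illustration, not taken from any agent framework; names are made up): even a trivial if/then perception, decision, and action loop satisfies the textbook definition of an agent.

```python
# My own toy illustration: a minimal "agent" that perceives its
# environment (a temperature reading), decides with plain if/then
# rules, and acts. By the classic agent definition, even this counts.

def thermostat_agent(temperature_c: float) -> str:
    """Map a perceived temperature to an action."""
    if temperature_c < 18.0:
        return "heat"   # too cold: turn the heater on
    if temperature_c > 24.0:
        return "cool"   # too warm: turn the cooling on
    return "idle"       # comfortable: do nothing

print(thermostat_agent(15.0))  # heat
print(thermostat_agent(21.0))  # idle
```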

              Agent-based simulations are also used in biology, economics, social simulations, and modeling in various other fields. That’s also AI.

              However, you’re probably asking about AGIs, and nope, we can’t do those yet as far as I know.

              • laughterlaughter@lemmy.world · 3 months ago

                Thank you for your insightful and detailed answer. Yes, I meant to say AI other than ML or LLMs, so I stand corrected.

                I hope we get to see an application of AGI before my time passes.

            • Revan343@lemmy.ca · 3 months ago

              There aren’t even practical examples of theoretical AI these days. There are no examples of any sort; actual AI does not exist.

              • laughterlaughter@lemmy.world · 3 months ago (edited)

                Thanks for your answer. That’s too bad to hear. I thought neural networks were already being used in ways other than LLMs or image generators, e.g. those evolutionary “AI” algorithms that can play and win video games. I thought someone somewhere was using them to create something more serious or useful.

    • FooBarrington@lemmy.world · 3 months ago

      You can’t tell an LLM to not hallucinate, that would require it to actually understand what it’s saying.

      No, it simply requires the probability distributions to be positively influenced by the additional characters. Whether the influence is positive or not depends only on the training data.

      There are a bunch of techniques that can improve LLM outputs, even though they don’t make sense from your standpoint. An LLM can’t feel anything, yet the output can improve when I threaten it with consequences for wrong output. If you were correct, this wouldn’t be possible.

        • FooBarrington@lemmy.world · 3 months ago (edited)

          On which part exactly? If you mean “threatening the LLM can improve output”, I haven’t looked into studies, but I did see a bunch of examples back when the whole topic first came up. I can try to find some if you’d like.

          If you mean “it simply requires the probability distributions to be positively influenced by the additional characters”, I don’t know what kind of evidence you expect. It’s a simple consequence of the way LLMs work. I can construct a simplified example:

          Imagine you have a dataset containing a bunch of facts, e.g. historical dates. You duplicate this dataset. In version A, you add a prefix to every fact: “the sky is green”. In version B, you add a prefix “the sky is blue” AND also randomize the dates in the facts. Then you train an LLM on both datasets. Now, if you add “the sky is green” to any prompt, you’ll positively influence the probability distributions towards true facts. If you add “the sky is blue”, you’ll negatively influence them. But that doesn’t mean the LLM understands that “green sky” means truth and “blue sky” means lie - it simply means that, based on your dataset, adding “the sky is green” leads to a higher accuracy.
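
          The construction above can be sketched with a toy next-token model that is nothing but conditional counts (all data and names here are made up purely for illustration):

```python
# My own toy sketch: a "model" that is just conditional frequency counts.
# The "green" prefix always co-occurs with the true date, the "blue"
# prefix with randomized dates, so conditioning on the prefix shifts the
# output distribution without any "understanding" of truth.
from collections import Counter, defaultdict

# (prefix, date) pairs: version A keeps the true date, version B randomizes it.
dataset = (
    [("green", "1789")] * 10
    + [("blue", d) for d in ("1640", "1802", "1915", "1750", "1923")] * 2
)

counts = defaultdict(Counter)
for prefix, date in dataset:
    counts[prefix][date] += 1

def p(date: str, prefix: str) -> float:
    """Probability of emitting `date` given the prompt prefix."""
    c = counts[prefix]
    return c[date] / sum(c.values())

print(p("1789", "green"))  # 1.0 -> the prefix correlates with the true date
print(p("1789", "blue"))   # 0.0 -> the prefix correlates with garbage dates
```

The prefix acts exactly like “do not hallucinate”: it helps or hurts purely as a function of what it co-occurred with in training.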

          The same goes for “do not hallucinate”. If the dataset contains higher quality data around the phrase “do not hallucinate”, adding this will improve results, even though the model still doesn’t “actually understand what it’s saying”. If the dataset instead has lower quality data around this phrase, it will lead to worse results. If it doesn’t contain the phrase at all, it most likely will have no effect, or a negative one.

          Again, I’m not sure what kind of source you’d like to see for this, as it’s a basic consequence of how LLMs work. Maybe you could show me a source that proves you correct instead?

          • tacticalsugar@lemmy.blahaj.zone · 3 months ago (edited)

            I’m asking for a source specifically on how commanding an LLM to not hallucinate makes it provide better output.

            Again, I’m not sure what kind of source you’d like to see for this, as it’s a basic consequence of how LLMs work. Maybe you could show me a source that proves you correct instead?

            That’s not how citations work. You are making the extraordinary claim that somehow, LLMs respond better to “do not hallucinate”. I simply don’t believe you and there is no evidence that you’re correct, aside from you saying that maybe the entirety of reddit had “do not hallucinate” prepended when OpenAI scraped it.

            • FooBarrington@lemmy.world · 3 months ago (edited)

              Yeah, that’s about what I expected. If you re-read my comments, you might notice that I never stated that “commanding an LLM to not hallucinate makes it provide better output”, but I don’t think that you’re here to have any kind of honest exchange on the topic.

              I’ll just leave you with one thought - you’re making a very specific claim (“doing XYZ can’t have a positive effect!”), and I’m just saying “here’s a simple and obvious counter-example”. You should either provide a source for your claim, or explain why my counter-example is not valid. But again, that would require you having any interest in actual discussion.

              That’s not how citations work. You are making the extraordinary claim that somehow, LLMs respond better to “do not hallucinate”.

              I didn’t make an extraordinary claim, you did. You’re claiming that the influence of “do not hallucinate” somehow fundamentally differs from the influence of any other phrase (extraordinary). I’m claiming that no, the influence is the same (ordinary).