Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

  • shiftymccool@lemm.ee · 9 months ago

    I think you might be confusing intelligence with memory. Memory is compressed knowledge; intelligence is the ability to decompress and interpret that knowledge.

    • kromem@lemmy.world · 9 months ago

      You mean like create world representations from it?

      https://arxiv.org/abs/2210.13382

      Do these networks just memorize a collection of surface statistics, or do they rely on internal representations of the process that generates the sequences they see? We investigate this question by applying a variant of the GPT model to the task of predicting legal moves in a simple board game, Othello. Although the network has no a priori knowledge of the game or its rules, we uncover evidence of an emergent nonlinear internal representation of the board state.

      (Though later research found this is actually a linear representation)
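      To make the “linear representation” point concrete, here is a rough, self-contained sketch of a linear probe, the kind of test that follow-up work on Othello-GPT used. The activations below are random placeholders rather than real model activations, so the printed accuracy only illustrates the method, not any actual result.

      ```python
      # Hedged sketch of a linear probe: fit a linear classifier from hidden
      # activations to the state of one board square. High held-out accuracy on
      # real activations would suggest the board state is linearly decodable.
      # NOTE: the "activations" here are random placeholders, not Othello-GPT data.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n_positions, hidden_dim = 2000, 512
      activations = rng.normal(size=(n_positions, hidden_dim))  # stand-in for layer activations
      square_state = rng.integers(0, 3, size=n_positions)       # 0=empty, 1=black, 2=white

      X_train, X_test, y_train, y_test = train_test_split(
          activations, square_state, test_size=0.2, random_state=0
      )

      probe = LogisticRegression(max_iter=1000)  # a purely linear map, no hidden layers
      probe.fit(X_train, y_train)
      print("held-out probe accuracy:", probe.score(X_test, y_test))  # ~chance on random data
      ```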

      Or combine skills and concepts in unique ways?

      https://arxiv.org/abs/2310.17567

      Furthermore, simple probability calculations indicate that GPT-4’s reasonable performance on k=5 is suggestive of going beyond “stochastic parrot” behavior (Bender et al., 2021), i.e., it combines skills in ways that it had not seen during training.
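      The “simple probability calculations” in that quote are essentially combinatorial: the number of distinct skill combinations dwarfs anything that could appear verbatim in training data. A back-of-envelope version, using purely illustrative numbers (100 skills, 100 topics) rather than the paper’s exact setup:

      ```python
      # Back-of-envelope version of the combinatorial argument quoted above.
      # The counts (100 skills, 100 topics, k=5) are illustrative assumptions.
      from math import comb

      n_skills, n_topics, k = 100, 100, 5

      prompts = comb(n_skills, k) * n_topics  # distinct (5-skill set, topic) combinations
      print(f"distinct prompts: {prompts:,}")  # about 7.5 billion

      # If a model handles a decent fraction of these, most of the specific
      # combinations cannot have been memorized from the training set.
      ```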

    • FooBarrington@lemmy.world · 9 months ago

      No. On a fundamental level, the idea of “making connections between subjects” and applying already-available knowledge to new topics is compression: representing more data with the same amount of storage. These are characteristics of intelligence, not of memory.

      You can’t decompress something if you haven’t previously compressed the data.
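      A toy illustration of that framing, assuming a simple curve fit can stand in for “compression”: a thousand noisy data points get summarized by two fitted parameters, and those two numbers still predict inputs that were never seen.

      ```python
      # Toy "compression as generalization" sketch (illustrative only):
      # 2,000 stored numbers (1,000 x/y pairs) are reduced to 2 parameters,
      # which then extrapolate to inputs not present in the data.
      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.uniform(-10, 10, size=1000)
      y = 3.0 * x + 2.0 + rng.normal(scale=0.5, size=1000)  # underlying rule + noise

      slope, intercept = np.polyfit(x, y, deg=1)  # the "compressed" representation
      x_new = np.array([42.0, -7.5])              # inputs never seen before
      print("predictions on unseen inputs:", slope * x_new + intercept)
      ```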

      • barsoap@lemm.ee · edited · 9 months ago

        Our current AI systems are T2, and T1 during inference. They can’t decide how they represent data; that’d require T3 (like us), which puts them, in your terms, at the level of memory, not intelligence.

        Actually it’s quite intuitive: Ask StableDiffusion to draw a picture of an accident and it will hallucinate just as wildly as if you ask a human to describe an accident they’ve witnessed ten minutes ago. It needs active engagement with that kind of memory to sort the wheat from the chaff.

        • FooBarrington@lemmy.world · edited · 9 months ago

          They can’t decide how they represent data; that’d require T3 (like us), which puts them, in your terms, at the level of memory, not intelligence.

          Where do you get this? What kind of data requires a T3 system to be representable?

          I don’t think I’ve made any claims that are related to T2 or T3 systems, and I haven’t defined “memory”, so I’m not sure how you’re trying to put it in my terms. I wouldn’t define memory as an adaptable system, so T2 would by my definition be intelligence as well.

          Actually it’s quite intuitive: Ask StableDiffusion to draw a picture of an accident and it will hallucinate just as wildly as if you ask a human to describe an accident they’ve witnessed ten minutes ago. It needs active engagement with that kind of memory to sort the wheat from the chaff.

          I just did this:

          Where do you see “wild hallucination”? Yeah, it’s not perfect, but I also didn’t do any kind of tuning: no negative prompt, and the positive prompt is literally just “accident”.

          • barsoap@lemm.ee · 9 months ago

            Where do you get this? What kind of data requires a T3 system to be representable?

            It’s not about the type of data but about data organisation and operations thereon. I already gave you a link to Nikolic’s site; feel free to read it in its entirety. This paper has a short and sweet information-theoretical argument.

            I don’t think I’ve made any claims that are related to T2 or T3 systems, and I haven’t defined “memory”, so I’m not sure how you’re trying to put it in my terms.

            I’m trying to map your fuzzy terms to something concrete.

            I wouldn’t define memory as an adaptable system, so T2 would by my definition be intelligence as well.

            My mattress is an adaptable system.

            Where do you see “wild hallucination”?

            All of it. Not in the AI sense but in the conventional sense of the term: none of it ever happened, and none of the details make sense. When humans are asked to recall an accident they witnessed, they report something like 10% fact (what they saw) and 90% bullshit (what their brain hallucinates to make sense of what happened). Just like human memory, the AI is taking a bit of information and combining it with wild speculation into something that looks plausible, but which, if reasoning is applied, quickly falls apart.