Poisoned AI went rogue during training and couldn’t be taught to behave again in ‘legitimately scary’ study

AI researchers found that widely used safety-training techniques failed to remove malicious behavior from large language models, and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

  • Chocrates@lemmy.world · 11 months ago

    No, the LLM is the AI. OP is saying that if you train it on hate, it’s gonna spit out hate.

    • JustMy2c@lemm.ee · 11 months ago

      And I’m saying that Reddit data is sublime for AI, and specifically that it’s infested with toxicity.