“There’s no way to get there without a breakthrough,” OpenAI CEO Sam Altman said, arguing that AI will soon need even more energy.

  • lone_faerie@lemmy.blahaj.zone · 10 months ago

    So AI can’t exist without stealing people’s content and it can’t exist without using too much energy. Why does it exist then?

    • TransplantedSconie@lemm.ee · 10 months ago

      Because the shareholders need more growth. They might create Ultron along the way, but think of the profits, man!

      • Phanatik@kbin.social · 10 months ago

        There’s no way these chatbots are capable of evolving into Ultron. That’s like saying a toaster is capable of nuclear fusion.

        • masonlee@lemmy.world · 10 months ago

          It’s the further research being done on top of the breakthrough tech enabling these chatbot applications that people are worried about. It’s basically big tech’s mission now to build Ultron, and they aren’t slowing down.

          • Phanatik@kbin.social · 10 months ago

            What research? These bots aren’t that complicated beyond an optimisation algorithm. Regardless of the tasks you give them, they can’t evolve beyond what they are.

      • BarbecueCowboy@kbin.social · 10 months ago

        I think we’ve got a bit before we have to worry about another major jump in AI, and way longer before an Ultron. The ones we have now are effectively parsers for Google or other existing data. I personally still don’t see how we can get away with calling that AI.

        Any AI that actually creates something ‘new’ that I’ve seen still requires a tremendous amount of oversight, tweaking and guidance to produce useful results. To me, they still feel like very fancy search engines.

    • Petter1@lemm.ee · 10 months ago

      The models are getting smaller and more efficient very fast if you just look a year back. I bet we’ll be running small LLMs locally on our phones (I don’t really believe in the other form factors yet) sooner than we think. I’d say before 2030.

      • FractalsInfinite@sh.itjust.works · 10 months ago

        I can already locally host a pretty decent AI chatbot on my old M1 MacBook (Llama 2 7B), which writes at the same speed I can read. It’s probably already possible on top-of-the-line phones.
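
        For reference, a minimal sketch of that kind of setup, assuming the llama-cpp-python bindings and a quantized GGUF build of Llama 2 7B (the model file name below is just a placeholder for whichever quantization you downloaded):

        ```python
        # Minimal local-chat sketch using the llama-cpp-python bindings (assumed tooling).
        # The model path is a placeholder for a locally downloaded GGUF quantization.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./llama-2-7b-chat.Q4_K_M.gguf",  # placeholder path
            n_ctx=2048,       # context window
            n_threads=4,      # tune for your CPU / Apple Silicon cores
        )

        out = llm(
            "Q: Explain in one sentence why local inference is cheap. A:",
            max_tokens=128,
            stop=["Q:"],
        )
        print(out["choices"][0]["text"].strip())
        ```

        On Apple Silicon the Metal build can also offload layers to the GPU (the n_gpu_layers option), which is roughly what keeps a 7B model generating at reading speed.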

        • Petter1@lemm.ee · 10 months ago

          Lol, “old M1 laptop” 3 to 4 years is not old, damn!

          (I have a MacBookPro5,3 (mid 2009) running Arch, lol)

          But nice to hear that the M1 (and thus theoretically even the iPad, if you’re not talking about the M1 Pro / M1 Max) can already run Llama 2 7B.

          Have you tried Mistral 7B already? It should be a bit more powerful and a bit more efficient, iirc. And it’s Apache 2.0 licensed.

          https://mistral.ai/news/announcing-mistral-7b/
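
          If it helps, swapping it in with the same llama-cpp-python setup should just mean pointing at a Mistral 7B Instruct GGUF instead (again, the file name is a placeholder):

          ```python
          # Sketch: same llama-cpp-python flow, pointed at a Mistral 7B Instruct GGUF.
          from llama_cpp import Llama

          llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_ctx=4096)
          out = llm(
              "[INST] Give one reason 7B models are practical on laptops. [/INST]",
              max_tokens=96,
          )
          print(out["choices"][0]["text"].strip())
          ```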

          • TheRealKuni@lemmy.world · 10 months ago

            > But nice to hear that the M1 (and thus theoretically even the iPad, if you’re not talking about the M1 Pro / M1 Max) can already run Llama 2 7B.

            An iPhone XR/XS can run Stable Diffusion, believe it or not.

          • FractalsInfinite@sh.itjust.works · 10 months ago

            > 3 to 4 years is not old

            Huh, nice. I got the MacBook Air secondhand, so I thought it was older. Thanks for the suggestion, I’ll try Mistral next, perhaps on my phone as a test.

    • TheRealKuni@lemmy.world · 10 months ago

      > So AI can’t exist without stealing people’s content

      Using the word “steal” in a way that implies misconduct here is “You wouldn’t download a car” level reasoning. It’s not stealing to use the work of some other artist to inform your own work. If you copy it precisely then it’s plagiarism or infringement, but if you take the style of another artist and learn to use it yourself, that’s…exactly how art has advanced over the course of human history. “Great artists steal,” said Picasso famously.

      Training your model on pirated copies, that’s shady. But training your model on purchased or freely available content that’s out there for anyone else to learn from? That’s…just how learning works.

      Obviously there are differences, in that generative AI is not actually doing structured “thinking” about the creation of a work. That is, of course, the job of the human writing and tweaking the prompts. But training an AI to be able to write like someone else or paint like someone else isn’t theft unless the AI is, without HEAVY manipulation, spitting out copies that infringe on the intellectual property of the original author/artist/musician.

      Generative AI, in its current form, is nothing more than a tool. And you can use any tool nefariously, but that doesn’t mean the tool is inherently nefarious. You can use Microsoft Word to copy Eat, Pray, Love, but Elizabeth Gilbert shouldn’t sue Microsoft; she should sue you.

      Edit: fixed a typo

    • LemmyIsFantastic@lemmy.world · 10 months ago

      🙄 iTS nOt stEAliNg, iTS coPYiNg

      By your definition everything is stealing content. Nearly everything in human history is derivative of others’ work.

    • theneverfox@pawb.social · 10 months ago

      Because it’s a miracle technology. Both of those things are also engineering problems, and ones that have been massively mitigated already. You can run models almost as good as GPT-3.5 on a phone, and individuals are pushing the limits of how efficiently we can train every week.

      It’s not just making a chatbot or a new tool for art. It’s also protein folding, coming up with unexpected materials, and being another pair of eyes that can assist a person with anything.

      They literally promise the fountain of youth, autonomous robots, better materials, better batteries, better everything. It’s a path for our species to break our limits and become more.

      The downside is we don’t know how to handle it. We’re making a mess of it, but it’s not like we could stop… The AI alignment problem is dwarfed by the corporation alignment problem.