• pup_atlas@pawb.social

    While I agree in principle, one thing I’d like to clarify is that TRAINING is super energy intensive; once the network is trained, it’s more or less static. Actually using the network doesn’t take dramatically more energy than any other indexed database lookup.

    • itslilith@lemmy.blahaj.zone

      It’s static, yes, but the static price is orders of magnitude higher. It still involves loading the whole model into VRAM and performing matrix multiplication on trillions of numbers.

      • etrotta@beehaw.org

        To be fair, I wouldn’t include “loading the whole model into VRAM” as part of the cost, given that they can just keep it in VRAM between requests, and it might be down to hundreds of billions or tens of billions of numbers rather than trillions. But even after all those improvements, it should still be orders of magnitude more expensive than normal search, which makes their decision even crazier.
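        A rough back-of-envelope sketch of the “orders of magnitude” claim above, in Python. All the figures here are assumptions for illustration, not measurements: a transformer forward pass costs roughly 2×N FLOPs per generated token (where N is the parameter count), the GPU efficiency of ~100 GFLOPs per joule is an assumed round number, and the ~1 mJ cost for an indexed lookup is a hypothetical placeholder.

```python
def inference_energy_per_token_j(params: float,
                                 flops_per_joule: float = 1e11) -> float:
    """Rough energy per generated token in joules.

    Assumes ~2 * params FLOPs per token (standard dense-transformer
    estimate) and an assumed GPU efficiency of ~100 GFLOPs per joule.
    """
    flops = 2 * params
    return flops / flops_per_joule

# Hypothetical 70B-parameter model ("tens of billions" of numbers).
llm_j = inference_energy_per_token_j(70e9)

# Assumed cost of one indexed database lookup: ~1 millijoule (placeholder).
search_j = 1e-3

print(f"LLM inference: ~{llm_j:.1f} J per token")
print(f"Indexed lookup: ~{search_j} J")
print(f"Ratio: ~{llm_j / search_j:.0f}x per token")
```

        Even under these generous assumptions (model already resident in VRAM, no attention/KV-cache overhead counted), a single generated token lands a few orders of magnitude above a single lookup, and a full response is hundreds of tokens.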

    • towerful@programming.dev

      Training will never stop, tho.
      New models will keep coming out, and datasets and parameters will keep changing.

      • pup_atlas@pawb.social

        I firmly believe it will slow down significantly. My prediction for the future is that there will be a much bigger focus on a few “base” models that will be tweaked slightly for different roles, rather than “from the ground up” retraining like we see now. The industry is already starting to move in that direction.