• mountainriver@awful.systems · 7 months ago

    Good article. Captures the bubble growth and the lack of profit growth, with lots of examples. It also captures that the growth in AI capability is limited by the supply of non-AI (human-made) works, so there's no clear path to further growth in functionality.

    Good one to hand to people who need to understand the nature of the bubble (and that it is a bubble).

    • gerikson@awful.systems · 7 months ago

      What I think is extra relevant is that there’s no sign that the LLMs are magically achieving “sentience” - in that case there would be no further need for training material!

  • skillissuer@discuss.tchncs.de · 7 months ago

    For context, 5GW is a massive amount of electricity. Two of the largest European NPPs have 5.7GW (Zaporizhzhia NPP, pre-2022) and 5.6GW (Gravelines NPP), and that's nameplate capacity; some part is always down for maintenance/refueling. That's quite a significant share of the respective countries' electricity generation (over 20% for Ukraine and almost 6% for France). If you want to have 5GW available at all times, then something closer to 8-10GW of nameplate capacity would be in order. That's larger than the biggest current nuclear installations in the world (the roughly 7GW Chinese and Korean NPPs).
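
    A rough sketch of that nameplate-vs-available arithmetic, assuming a few illustrative capacity factors (the real figure depends on the country, the year, and how much margin you want):

    ```python
    # Rough sketch: nameplate capacity needed to keep 5 GW available at all times,
    # assuming an illustrative capacity factor (share of nameplate actually delivered).
    required_gw = 5.0

    for capacity_factor in (0.9, 0.75, 0.6):
        nameplate_gw = required_gw / capacity_factor
        print(f"capacity factor {capacity_factor:.2f} -> ~{nameplate_gw:.1f} GW nameplate")

    # capacity factor 0.90 -> ~5.6 GW nameplate
    # capacity factor 0.75 -> ~6.7 GW nameplate
    # capacity factor 0.60 -> ~8.3 GW nameplate
    ```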

  • Architeuthis@awful.systems · 7 months ago

    Yet AI researcher Pablo Villalobos told the Journal that he believes that GPT-5 (OpenAI’s next model) will require at least five times the training data of GPT-4.

    I tried finding the non-layman’s version of the reasoning for this assertion and it appears to be a very black box assessment, based on historical trends and some other similarly abstracted attempts at modelling dataset size vs model size.

    This is EpochAI’s whole thing apparently, not that there’s necessarily anything wrong with that. I was just hoping for some insight into dataset size vs architecture and maybe the gossip on what’s going on with the next batch of LLMs, like how it eventually came out that gpt4.x is mostly several gpt3.xs in a trench coat.
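
    For flavour, the kind of back-of-the-envelope modelling involved looks roughly like the Chinchilla rule of thumb (about 20 training tokens per parameter for compute-optimal training). The parameter counts below are made-up placeholders, since the real GPT-4/GPT-5 sizes aren't public:

    ```python
    # Illustrative only: Chinchilla-style rule of thumb (~20 tokens per parameter),
    # applied to made-up parameter counts -- actual GPT-4/GPT-5 sizes are not public.
    TOKENS_PER_PARAM = 20

    def compute_optimal_tokens(n_params: float) -> float:
        """Rough compute-optimal training set size, in tokens."""
        return TOKENS_PER_PARAM * n_params

    hypothetical_models = {
        "current model (placeholder)": 1e12,  # 1T parameters, assumed
        "next model (placeholder)": 5e12,     # 5x larger, assumed
    }

    for name, params in hypothetical_models.items():
        tokens = compute_optimal_tokens(params)
        print(f"{name}: ~{tokens / 1e12:.0f}T training tokens")

    # Scaling the parameter count 5x scales the "optimal" dataset 5x too, which is
    # the sort of abstracted extrapolation behind the "five times the data" claim.
    ```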