I got an additional 32 GB of RAM at a low, low cost from someone. What can I actually do with it?

  • zkfcfbzr@lemmy.world · 22 points · 5 days ago

    I have 16 GB of RAM and recently tried running LLMs locally. Turns out my RAM is a bigger limiting factor than my GPU.

    And, yeah, Docker's always taking up 3-4 GB.
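
    If you want a rough sense of why RAM is the bottleneck, here's a back-of-the-envelope I find useful (a minimal sketch; it only counts the weights and ignores the KV cache and runtime overhead, which add a few more GB):

    ```python
    # Rough RAM needed just to hold a model's weights.
    # bits_per_weight: 16 for fp16, ~4 for common 4-bit GGUF quants.
    def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
        bytes_total = params_billion * 1e9 * bits_per_weight / 8
        return bytes_total / 1024**3

    for params in (8, 12, 70):
        print(f"{params}B @ fp16: {weight_memory_gb(params, 16):5.1f} GB, "
              f"@ 4-bit: {weight_memory_gb(params, 4):4.1f} GB")
    ```

    An 8B model is ~15 GB at fp16 but under 4 GB at 4-bit, which is why quantized models are the way to go on 16 GB.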

      • zkfcfbzr@lemmy.world · 2 points · 5 days ago

        Fair, I didn't realize that. My GPU is a GTX 1060 6 GB, so I won't be running any sizable LLMs on it. This PC is pretty old at this point.

        • fubbernuckin@lemmy.dbzer0.com · 1 point · 5 days ago

          You could potentially run some smaller MoE models, as they don't take up too much memory while running. I'd suspect the DeepSeek R1 8B distill with some quantization would work well; something like the sketch below.
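
          If you want to try it, this is roughly how it looks with llama-cpp-python and a quantized GGUF (just a sketch; the file name is a placeholder, and you'd tune n_gpu_layers to whatever fits in 6 GB of VRAM):

          ```python
          # Sketch: run a quantized GGUF with llama-cpp-python, offloading part
          # of the model to a small GPU and keeping the rest in system RAM.
          from llama_cpp import Llama

          llm = Llama(
              model_path="DeepSeek-R1-Distill-Llama-8B-Q4_K_M.gguf",  # placeholder path
              n_ctx=4096,       # context window; longer contexts grow the KV cache
              n_gpu_layers=20,  # offload as many layers as fit in VRAM; rest stays in RAM
          )

          out = llm("Explain mixture-of-experts models in two sentences.", max_tokens=128)
          print(out["choices"][0]["text"])
          ```

          Partial offload like that is where extra system RAM actually pays off: the layers that don't fit on the GPU just spill into RAM instead of refusing to load.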

          • zkfcfbzr@lemmy.world · 1 point · 5 days ago

            I tried out the 8B DeepSeek distill and found it pretty underwhelming - the responses were borderline unrelated to the prompts at times. The smallest model that gave me respectable output was the 12B, which I was even able to run at a somewhat usable speed.
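
            That tracks with the math, too: a 12B model at 4-bit is only ~6 GB of weights, so it fits in 16 GB with room to spare - it's the KV cache that grows at longer contexts. A quick sketch (the layer/head numbers are assumptions for a generic ~12B transformer, not any particular model):

            ```python
            # Rough KV-cache size: 2 (K and V) x layers x KV heads x head dim
            # x bytes per element x context length. Dimensions are assumed.
            def kv_cache_gb(n_layers=40, n_kv_heads=8, head_dim=128,
                            ctx=4096, bytes_per=2):
                return 2 * n_layers * n_kv_heads * head_dim * bytes_per * ctx / 1024**3

            print(f"~{kv_cache_gb():.2f} GB at 4k context, "
                  f"~{kv_cache_gb(ctx=32768):.1f} GB at 32k")
            ```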