Poor nVidia Jetson, you did great for the last 5 years.

Managed to fry the eMMC by shorting pins, it looks like.

Note to future self: fully enclose boards that have tightly spaced pins.

  • FooBarrington@lemmy.world · 6 months ago

    Ahh gotcha, similar to Google’s Coral?

    Kind of, but it’s a standalone system with the accelerator hardware integrated - more like a Google Coral and a Raspberry Pi combined on one board.

    I’ve recently been looking into locally hosting some LLMs for various purposes, but I haven’t specced out hardware yet. Any good resources you can recommend?

    Not really, sorry - I haven’t gone too deep into LLMs beyond simple use cases. I’ve only really used llama.cpp myself.
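    For anyone curious what a “simple use case” with llama.cpp looks like, here’s a minimal sketch using the llama-cpp-python bindings. It assumes you’ve installed that package and downloaded a GGUF model; the model path, prompt, and parameters are just placeholders:

    ```python
    from llama_cpp import Llama

    # Load a quantized GGUF model from disk (the path is a placeholder).
    llm = Llama(model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=2048)

    # Run a single completion; max_tokens caps the response length.
    output = llm(
        "Q: What is a good single-board computer for local ML inference? A:",
        max_tokens=64,
    )

    print(output["choices"][0]["text"])
    ```

    Quantized (e.g. 4-bit) GGUF models are the usual choice when memory is limited, which is typically the case on small boards like a Jetson.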