Poor nVidia Jetson, you did great for the last 5 years.
Managed to fry the eMMC by shorting pins, it looks like.
Note for future self: fully enclose boards with tight spaces.
Ahh gotcha, similar to Google’s Coral? Neat.
I’ve recently been looking into locally hosting some LLMs for various purposes, but I haven’t specced out hardware yet. Any good resources you can recommend?
Kind of - it’s a standalone system with the hardware integrated, kinda like a Google Coral paired with a Raspberry Pi.
Not really, sorry - I haven’t gone too deep into LLMs beyond simple use cases. I’ve only really used llama.cpp myself.
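For a sense of what I mean by simple use cases, here’s a rough sketch using the llama-cpp-python bindings for llama.cpp - the model path and prompt are just placeholders, swap in whatever GGUF model you’re running:

```python
from llama_cpp import Llama

# Load a local GGUF model (path is a placeholder - use whatever you have downloaded)
llm = Llama(model_path="./models/mistral-7b-instruct.Q4_K_M.gguf", n_ctx=2048)

# One-off completion, OpenAI-style response dict
output = llm(
    "Q: What is a Jetson board used for? A:",
    max_tokens=64,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```

Nothing fancy - just local, offline inference on whatever hardware you end up with.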
A dedicated NVIDIA GPU in a random x86 PC is a lot faster and more price-efficient than a Jetson.
If it isn’t about the form factor, the Jetson is not a great contender.