I see people with a small 8 GB, 4-core system trying to split it into multiple VMs with something like Proxmox. I don’t think that’s the best way to utilise the resources.

Many services are idle most of the time, so when something actually is running it should be able to use the complete power of the machine.

For a smaller homelab, my opinion is to use Docker and Compose to manage things directly on the hardware.

Only split off a VM for something critical, and even then consider whether it’s really required.
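As a rough sketch of what I mean (the services and images here are just placeholders, not a recommendation):

    # docker-compose.yml: the whole stack on the bare-metal host
    services:
      jellyfin:
        image: jellyfin/jellyfin:latest
        ports:
          - "8096:8096"
        volumes:
          - ./jellyfin/config:/config
          - /mnt/media:/media:ro
        restart: unless-stopped
      pihole:
        image: pihole/pihole:latest
        ports:
          - "53:53/udp"
          - "8080:80"
        restart: unless-stopped

One docker compose up -d and everything is running, and any container can burst to the full CPU whenever the others are idle.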

Do you agree?

  • bityard@alien.topB

    Some people play with VMs for fun, or as a learning experience. I don’t think it’s very productive or useful to tell them they’re doing it wrong.

  • lilolalu@alien.topB

    Read some articles about the resource overhead of VMs, or better yet of containers, which use a shared kernel: it’s minimal and mainly affects RAM. So if the decision is to put 16 GB more into the machine to get a clean separation of services, I think that’s a no-brainer.

    I do agree with you that complete separation through VMs is usually overkill; a Docker container is enough to isolate config, system requirements, etc.
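    And if you do want harder boundaries without a VM, Compose can also cap individual containers (a minimal sketch; the service name and limits are made up for illustration):

        services:
          nextcloud:
            image: nextcloud:latest
            mem_limit: 2g     # hard RAM ceiling for this one container
            cpus: "2.0"       # at most two cores' worth of CPU time
            restart: unless-stopped

    Unlike a VM’s fixed allocation, whatever the container isn’t using stays available to the rest of the host.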

  • stupv@alien.topB

    Proxmox with LXCs vs. Docker is just a question of your preferred platform. If you want flexibility and expandability, Proxmox is better; if you just want a set-and-forget device for a specific static group of services, running Debian with Docker may make more sense to you.

  • ttkciar@alien.topB

    On one hand, I think VMs are overused, introduce undue complexity, and reduce visibility.

    On the other hand, the problem you’re citing doesn’t actually exist (at least not on Linux, dunno about Windows). A VM can use all of the host’s memory and processing power if the other VMs on the system aren’t using them. An operating system will balance resource utilization across multiple VMs the same as it does across processes to maximize the performance of each.
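    To make that concrete with KVM/Proxmox tooling (a sketch, assuming the guest has the balloon driver installed; VMID 100 and the sizes are just examples):

        # Give VM 100 up to 8 GiB, let the host reclaim memory down toward 2 GiB
        # when it is under pressure, and hand it 4 vCPUs, which the host kernel
        # schedules like any other processes.
        qm set 100 --memory 8192 --balloon 2048 --cores 4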

  • mrmclabber@alien.topB

    No, I don’t agree, not necessarily. VMs are “heavier” in that they use more disk and memory, but if they are mostly idling in a small lab you probably won’t notice the difference. Now, if you are running 10 services and want to put each in its own VM on a tiny server, then yeah, maybe don’t do that.

    In terms of CPU it’s a non-issue: VM or Docker, they will still share the CPU. I can think of cases where I’d rather run Proxmox and others where I’d just go bare metal and run Docker. It depends on what I’m running and the goal.

  • ervwalter@alien.topB

    It depends on your goals of course.

    Personally, I use Proxmox on a couple machines for a couple reasons:

    1. It’s way, way easier to back up an entire VM than to back up a bare-metal physical device. And because a backed-up VM is “virtual hardware”, you can (and I have) restore it to the same machine or to brand-new hardware easily and it will “just work”. This is especially useful when hardware dies (rough commands after this list).
    2. I want high availability. A few things I do in my homelab I personally consider “critical” to my home happiness. They aren’t really critical, but I don’t want to be without them if I can avoid it. And by having multiple Proxmox hosts, I get automatic failover. If one machine dies or crashes, the VMs automatically start up on the other machine.
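
    For anyone curious, this is roughly what those two things look like from the Proxmox CLI (a sketch; VMID 100, the storage names, and the archive name are placeholders, and the web UI does the same jobs):

        # Back up VM 100 as a compressed snapshot to the "local" storage
        vzdump 100 --storage local --mode snapshot --compress zstd

        # Restore that backup on the same host or on freshly installed hardware
        qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage local-lvm

        # In a cluster, ask the HA manager to keep VM 100 running somewhere
        ha-manager add vm:100 --state started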

    Is that overkill? Yes. But I wouldn’t say it “doesn’t make sense”. It makes sense but just isn’t necessary.

    Fudge topping on ice cream isn’t necessary either, but it sure is nice.