I use a very barebones Linux solution to back up my 14TB of media every 2 or 3 weeks. It's strictly a cold backup, mainly to prevent brainfart data losses. I only power on the disks when I'm about to start a backup and run a very basic rsync command; they stay powered off 90%+ of the time.
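For reference, a minimal sketch of that kind of rsync run (paths are hypothetical; I'm assuming /mnt/media is the internal pool and /mnt/backup the merged USB pool):

```sh
# Mount the backup drives only for the duration of the run, then
# mirror everything across. --archive preserves permissions and
# timestamps; --delete removes files that were deleted at the source.
rsync --archive --delete --progress /mnt/media/ /mnt/backup/
```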
I have 2 external USB3 HDs (8TB + 10TB), formatted ext4. Then I use MergerFS, the ‘poor man’s RAID’, to fuse both into a single directory.
MergerFS is an odd duck among filesystems: it only ‘binds’ two filesystems together at the directory level, so you can still see the individual files on one disk or the other even while the merged filesystem is mounted. No redundancy, no parallel reads, but it tries to balance files between drives.
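A minimal sketch of pooling the two disks (mount points are hypothetical; category.create=mfs is a real mergerfs create policy that places new files on the branch with the most free space, which gives the balancing behavior described):

```sh
# Each disk is mounted individually first: plain ext4, nothing special,
# so every file stays readable on its own disk without mergerfs.
mount /dev/sdb1 /mnt/disk1
mount /dev/sdc1 /mnt/disk2

# Fuse both branches into one directory.
mergerfs -o allow_other,category.create=mfs \
    /mnt/disk1:/mnt/disk2 /mnt/backup
```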
But WHEN my internal data grows bigger than my backup, I only need to add another drive and change one line in a script to get more capacity, without the dangers of a RAID rebuild or size-consistency issues. If you lose a disk you WILL lose data, but it's easy peasy to swap the faulty disk and start again without losing ALL the data, which is a real danger when you need to rebuild a RAID.
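Growing the pool really is just extending the colon-separated branch list (again with hypothetical mount points):

```sh
# Before: two branches
mergerfs -o allow_other,category.create=mfs \
    /mnt/disk1:/mnt/disk2 /mnt/backup

# After adding a third disk: one edited line, no array rebuild
mergerfs -o allow_other,category.create=mfs \
    /mnt/disk1:/mnt/disk2:/mnt/disk3 /mnt/backup
```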
It's slow but rock solid; I use it on my internal mass-media disks too. I even migrated the disks (with all their data) between two machines without problems.
By the way, my main system/game disks are two 1TB M.2 drives in RAID0, for performance; that's a different need.
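If you go that route on Linux, an mdadm stripe is one common way to do it (a sketch only; I'm assuming mdadm and that the drives show up as /dev/nvme0n1 and /dev/nvme1n1, since the post doesn't say how the RAID0 was built):

```sh
# Stripe two NVMe drives into one fast volume. No redundancy:
# losing either drive loses the whole array, so keep backups.
mdadm --create /dev/md0 --level=0 --raid-devices=2 \
    /dev/nvme0n1 /dev/nvme1n1
mkfs.ext4 /dev/md0
```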
Low-end motherboards usually have only 4 native SATA ports; the ones with more are costly. You will need an extra PCIe SATA card (usually 4 to 6 ports), but they are dirt cheap.
If you'll use it mostly as a Plex box, Linux with Docker/Portainer is good enough. But you can explore some ‘storage oriented’ distros, like Unraid (non-free, but not costly), which can also run a dockerized Plex without problems.
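A dockerized Plex on plain Linux can be as simple as this (a sketch using the official plexinc/pms-docker image; the host paths and timezone are hypothetical examples):

```sh
# Official Plex image; host networking keeps local discovery simple.
docker run -d --name plex \
    --network host \
    -e TZ="America/Sao_Paulo" \
    -v /opt/plex/config:/config \
    -v /mnt/media:/data \
    plexinc/pms-docker
```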
I'm very partial to MergerFS storage, the ‘poor man’s RAID’, as I said. No redundancy or disaster recovery, but easy and cheap to build.