What has prompted your interest in data hoarding?
Censorship and Memory-holing
I can’t tell you how many channels have disappeared and been memory-holed. Especially since censorship went into overdrive around 2019.
Data hoarders can show you how the world was before all that happened.
Unraid and Proxmox
Yes and no.
Yes, if you have the resources to monitor and update. Companies have entire teams dedicated to this.
No, if you don't have the resources/time to keep up with it regularly.
IMO, there's no need to take this risk when you have services like Tailscale available today.
Can be safer. Can be worse.
A poorly configured self-hosted Vaultwarden can be a major security issue.
A properly configured one is arguably safer than hosting with a 3rd party. LastPass taught me that one.
If you configure it so it's not exposed to the web and is only accessed through a VPN like Tailscale, it can be quite robust.
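One way to get that "tailnet-only" setup (a sketch, not the only way): bind the container's published port to the host's Tailscale IP instead of 0.0.0.0, so nothing is reachable from the LAN or the internet. The IP, image tag, and paths below are placeholders; find your actual Tailscale IP with `tailscale ip -4`.

```yaml
# Hypothetical docker-compose fragment: Vaultwarden reachable only over the tailnet.
services:
  vaultwarden:
    image: vaultwarden/server:latest
    restart: unless-stopped
    ports:
      - "100.101.102.103:8080:80"   # placeholder Tailscale IP; only tailnet peers can connect
    volumes:
      - ./vw-data:/data
```

Since the port is bound to the Tailscale interface, you don't need any firewall rules on the host for this; the service simply isn't listening anywhere else.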
That sounds easy enough, but it creates a situation where I don't know which updates are important (security) and which are minor. So I have to read the release notes for each update and then decide if I need it to patch a security vulnerability.
Whereas with the other method, I know the update is likely critical.
For some, those frequent updates are a plus; for me they are not. So use what works best for you!
But right now I couldn't use OPNsense even if I wanted to, as it's FIPS non-compliant due to them still using the deprecated, EOL OpenSSL 1.1.1, with no date set to move to v3.
Chuckle, butthurt downvotes but not one comment to dispute anything I said. Enjoy the deprecated OpenSSL without security updates.
No, I like pfSense because it has less frequent updates and is better documented.
Here is one of the better guides that helps you configure much of what you are talking about:
https://nguvu.org/pfsense/pfsense-baseline-setup/
Plus, OPNsense gets much of its code from the work done by pfSense, and often has to wait on them to push the code. Just look at what happened with TLS 1.3.
And a big part of the reason is taxes and regulations. People with $$ don't care, but everyone in the bottom 75% really takes a big hit relative to their income.
I love Synology, but for the price you get very little CPU performance. Your i5 would probably outrun Synology units that cost $1000 or more. (Haven’t looked at their lineup in years, but that’s my guess!)
Really, the only bad thing I can think of is higher power consumption and footprint… we're probably talking about a 50-watt difference, if that.
I would use that i5 so I have something to use today! Monitor power usage some and then replace it down the road with something lower power, when you understand your needs better and you feel like it.
I'd run it headless and save another 15w, unless my Plex clients need the GPU to transcode. (Higher-end players like the Nvidia Shield usually don't.)
Self-hosted Git repository.
I set up Gitea on my server and use it to track version changes of all my scripts.
And I use a combination of the wiki and .md (readme) files for how-tos and any inventory I'm keeping, like IP addresses, CPU assignments, etc.
But mainly it's all in .md files formatted with Markdown.
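As an illustration (hosts and values here are made up), an inventory file in that style is just a Markdown table:

```markdown
# Homelab Inventory

| Host  | IP           | Cores | Role         |
|-------|--------------|-------|--------------|
| pve1  | 192.168.1.10 | 4     | Proxmox node |
| gitea | 192.168.1.20 | 2     | Git server   |
```

Since it's plain text, it versions cleanly in Gitea alongside the scripts, and the wiki renders it as a table.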
I do this at the file system level, not the file level, using zfs.
Unless the container has a database, I use ZFS snapshots. If it has a database, my script dumps the database first and then does a ZFS snapshot. Then that snapshot is sent via syncoid (sanoid's replication tool) to a ZFS disk in a different backup pool.
This is a block level backup, so it only backs up the actual data blocks that changed.
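The dump-then-snapshot-then-replicate flow described above can be sketched roughly like this. Dataset names, the container name, and the `pg_dump` invocation are all assumptions for illustration; the script defaults to a dry run that only prints the commands it would execute.

```shell
#!/bin/sh
# Sketch of: dump DB -> ZFS snapshot -> replicate to backup pool.
# Set DRYRUN=0 to actually run the commands instead of printing them.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

DATASET="tank/appdata/nextcloud"            # hypothetical dataset
SNAP="$DATASET@backup-$(date +%Y%m%d)"

# 1. Dump the database first so the snapshot captures a consistent copy
run docker exec nextcloud-db pg_dump -U nextcloud -f /dumps/nextcloud.sql nextcloud

# 2. Take the ZFS snapshot (near-instant, block level)
run zfs snapshot "$SNAP"

# 3. Replicate incrementally to the backup pool (syncoid ships with sanoid)
run syncoid "$DATASET" backup/appdata/nextcloud
```

Because ZFS replication is incremental at the block level, step 3 only transfers the blocks that changed since the last sent snapshot.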
I don't use PhotoPrism, but I have experienced similar issues in other Docker containers. What is most likely happening is that something, like headers/ports, needs to be forwarded by NPM, usually by adding additional config in the "advanced" tab in NPM.
Sorry, I'm not familiar enough with PhotoPrism to know what exactly needs to be added to the config, but since nobody has replied, I thought it might at least give you a direction to search in.
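A common culprit with apps like this is websocket traffic not being forwarded, which a plain proxy config drops. In NPM you can try enabling the "Websockets Support" toggle on the proxy host, which is roughly equivalent to adding directives like these (untested for PhotoPrism specifically; the upstream address is a placeholder):

```nginx
# Assumed config: forward websocket upgrade headers to the backend.
location / {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_pass http://192.168.1.50:2342;  # placeholder PhotoPrism address
}
```

If the toggle alone fixes it, there's no need for the advanced config at all.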
Storage