• 1 Post
  • 8 Comments
Joined 1 year ago
Cake day: October 17th, 2023

  • If you’re happy with those services … maybe you shouldn’t?

    I self-host because I prefer to house my data locally when possible. It’s easier for backups and I’m not subject to the whims and financial decisions made by a company about whether their service will remain available, what it will cost, what functions it will offer. The tradeoff is work on my part, but I enjoy tinkering and learning.

    In my case, I self-host a NextCloud instance for remote access to my docs, a Calibre Web server for eBooks (and to share those with a few trusted friends), and a Vaultwarden instance because I’d prefer my vaults not be stored by a company whose servers are likely a major target for bad actors and that could change its TOS or offerings in the future.


  • Thanks for the heads up on this project. It looks like it might work very well for people who basically want a web app as a direct view into a filesystem for dealing with folders.

    Unfortunately, it doesn’t really meet the needs I’m laying out. The use case I’m describing is still one where the web app abstracts away the file system and uses albums. It just lays out a (smart, I think) way of recognizing and interpreting the organization in a pre-existing library, like one created from a Google Photos takeout, when bringing photos into its own system – accounting for duplicates in albums without doubling them up on disks.

    Direct editing of EXIF is handy. Memories does that too, and it’s part of why it’s what I’m using. But my ideal situation would be one where the app initially writes metadata changes only to its own database, then (optionally) applies them to EXIF when exporting/downloading files, without touching the original files. It would also give the user an option to apply metadata to EXIF for the original files, but only after first prompting with warnings.

    It seems your design goals are pretty different from any of that – which isn’t a criticism, as I’m sure it works well for the way a lot of people like to work (just not me).




  • Sorry, but this sounds a bit: “I’d like to eat this piece of cake, but also still have it available to me when I’m done.”

    There are front-ends that can make docker apps easier to manage, like CasaOS. The tradeoff for ease of use is flexibility compared to something like Portainer or the CLI. CasaOS’s app library (for instance) frequently has out-of-date versions of apps, and if their default configuration doesn’t make sense for your purposes, you’re still going to have to delve deeper (whether in the CasaOS UI or another tool) to customize things to your needs.

    That’s pretty much a given with any tool - if you don’t want to deal with how it works, then you need to accept the default configuration and cross your fingers that it works for your purposes.

    And you’re still not going to get away from the fundamentals of how docker works, if you find them troublesome for some reason. Updating a docker app with something like CasaOS is doing the same thing it would be with Portainer or the command line. I’m not quite sure what seems “wrong” about it to you, but it would be “wrong” in the same way no matter what front end you use.
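    Whatever front end you use, an “update” boils down to the same two docker operations: pull a newer image and recreate the container from it. A hedged sketch with the docker CLI (container name, image, port, and volume are placeholders):

    ```shell
    # Pull the newer image (image name is a placeholder)
    docker pull nextcloud:latest

    # Recreate the container from the new image; flags are illustrative
    docker stop nextcloud && docker rm nextcloud
    docker run -d --name nextcloud -p 8080:80 -v nextcloud_data:/var/www/html nextcloud:latest

    # With a compose file, the equivalent is just:
    docker compose pull && docker compose up -d
    ```

    CasaOS, Portainer, and the CLI all end up issuing the same Docker API calls; the front end only changes how you trigger them.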


  • It can handle almost any service you might care to self-host - and with that much RAM, several at a time. You could run multiple VMs and still have breathing room.

    But a much less powerful box can also handle most self-hosted services well. If your existing Pi is doing the job, I wouldn’t switch. The 9900K will consume way more power, which is bad for the environment and your wallet.

    Maybe make it into a testing station. Or donate it to a nonprofit. Or sell it. Or turn it into a living room gaming station, playing light games natively and streaming AAA games from another machine with Steam Link or Moonlight (in sleep mode when it’s not in use?). Or give it to a family member. Or make it available to a neighbor via Freecycle/Buy Nothing/similar gifting networks.


  • Safe-r. Not inherently safe. It’s one good practice to consider among others. Like any measure that increases security, it makes your service less accessible - which may compromise usability or interoperability with other services.

    You want to think through multiple security measures with any given service: decide what creates undue hassle, decide what’s most important to you, and limit the attack surface by making unauthorized access somewhere between inconvenient and near-impossible. And limit the damage that can be done if someone does get unauthorized access - i.e., not running as root, giving the container limited access to folders, etc.


  • Only give the container access to the folders it needs for your application to operate as intended.

    Only give the container access to the networks it needs for the application to run as intended.

    Don’t run containers as root unless absolutely necessary.
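    The three points above can be written directly into a compose file. A minimal sketch, assuming docker compose; the service name, image, paths, and UID are placeholders, not a canonical config:

    ```yaml
    services:
      myapp:                       # placeholder service name
        image: example/myapp:1.0   # placeholder image
        user: "1000:1000"          # run as an unprivileged UID:GID instead of root
        volumes:
          # mount only the folders the app actually needs, read-only where possible
          - ./config:/app/config:ro
          - ./data:/app/data
        networks:
          - internal

    networks:
      internal:
        internal: true   # this network has no outbound route to the Internet
    ```

    Note that `internal: true` also cuts the container off from the Internet entirely, which is only appropriate for services that never need outbound access.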

    Don’t expose an application to the Internet unless necessary. If you’re the only one accessing it remotely, or if you can manage any of the other devices that might (say, for family members), access your home network via a VPN. There are multiple ways to do this. I run a VPN server on my router. Tailscale is a good user-friendly option.

    If you do need to expose an application to the Internet, don’t do so directly. Use a reverse proxy. One common setup: Put your containers on private networks (shared among multiple containers only in cases where they need to speak to each other), with ports forwarded from the containers to the host. Install a reverse proxy like Nginx Proxy Manager (NPM). Forward 80 and 443 from the router to NPM, but don’t forward anything else from the router. Register a domain, with subdomains for each service you use. Point the domain and subdomains to your IP, or, using aliases, to a dynamic DNS domain that connects to a service on your network (in my case, I use my Asus router’s DDNS service). Have NPM connect each subdomain to the appropriate port on the host (i.e., nc.example.com going to the port on the host being used for NextCloud). Have NPM handle SSL certificate requests and renewals.
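    For reference, what Nginx Proxy Manager configures through its UI corresponds roughly to a plain nginx server block like this (domain, upstream address/port, and certificate paths are placeholders):

    ```nginx
    server {
        listen 443 ssl;
        server_name nc.example.com;   # placeholder subdomain

        # certificates obtained and renewed via Let's Encrypt (paths illustrative)
        ssl_certificate     /etc/letsencrypt/live/nc.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/nc.example.com/privkey.pem;

        location / {
            # forward to the host port mapped to the NextCloud container
            proxy_pass http://192.168.1.10:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
    ```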

    There are other options that don’t involve any open ports, like Cloudflare tunnels. There are also other good reverse proxy options.

    Consider using something like fail2ban or crowdsec to mitigate brute force attacks and ban bad actors. Consider something like Authentik for an extra layer of authentication. If you use Cloudflare, consider its DDOS protection and other security enhancements.
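    For fail2ban, enabling one of its stock jails takes a few lines in a local override. A hedged sketch (the jail name and log path assume a host-level nginx; adjust to wherever your proxy actually logs):

    ```ini
    # /etc/fail2ban/jail.local
    [nginx-http-auth]
    enabled  = true
    port     = http,https
    logpath  = /var/log/nginx/error.log
    maxretry = 5
    findtime = 10m
    bantime  = 1h
    ```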

    Keep good and frequent backups.

    Don’t use the same password for multiple services, whether they’re ones you run or elsewhere.

    Throw salt over your shoulder, say three Hail Marys and cross your fingers.


  • In my case, I run a Wireguard server on my router. Not every router firmware has that option, though (and some people may have the option and not realize it).
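    Whether the router firmware generates it or you write it by hand, a WireGuard server config comes down to a few lines. A minimal sketch in wg-quick format (keys, subnet, and port are placeholders):

    ```ini
    # /etc/wireguard/wg0.conf on the server (all values are placeholders)
    [Interface]
    Address    = 10.8.0.1/24          # VPN subnet the server hands out
    ListenPort = 51820                # the single UDP port you forward on the router
    PrivateKey = <server-private-key>

    [Peer]
    # one [Peer] section per client device
    PublicKey  = <client-public-key>
    AllowedIPs = 10.8.0.2/32          # address assigned to this client
    ```

    The only exposure is that one UDP port, and WireGuard doesn’t respond at all to packets that fail authentication, which is part of why the risk is low.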

    I think there are some people who worry about opening up the port for the VPN. But it’s not a particularly high security risk, and services like Tailscale aren’t automatically better just because they initiate outbound connections.

    People overestimate what something like Cloudflare does for them. It can be helpful for a number of use cases and includes some good risk-mitigation options, but if a service is still available to the outside world, it’s still a potential vulnerability point that needs to be reasonably hardened at the level of the application and one’s own network, too.