I am in the process of setting up a home server and am struggling to decide on a platform. I have previously used YunoHost, but in the meantime FreedomBox has matured quite a bit. I have also looked at Tipi.
The use case right now is running a WireGuard server and probably a notes app of some sort (to be decided). A web GUI for management and updating would be much desired.
Disclaimer: I don't have much surplus energy due to a hectic life, so I would prefer something easy that doesn't require Docker/Kubernetes.
- Yunohost: https://yunohost.org/
- Freedombox: http://freedombox.org/
- Tipi: https://runtipi.io/
I will run on a Gigabyte Brix with:
- AMD Ryzen 4300U (4 core)
- 16 GB RAM, maybe 32 GB
- 512GB SSD
I am open to other suggestions.
P.S. I apologise if this has been debated before, but I have not really found anything.
Thank you in advance
EDIT: I have read your recommendations and arguments, and it is noted; I am watching Docker tutorials now :)
- If you don't have a surplus of time and energy, self-hosting is not for you. You're taking on administration that normal people effectively contract out.
- Docker is worth learning and using. It's one thing to learn, and with that knowledge you can run basically everything. It really does make your life easier.
Working with traditional technologies does not take up too much of my time, but all I see in the responses is that what I am planning is wrong. I have taken note of your recommendation to learn Docker. Thank you.
Docker and docker-compose YAML files. They'll be invaluable. Compose files allow you to define custom networking and run multiple containers.
Super useful and what most people use to run simple docker workloads.
You don’t have to understand how to create containers, just understand how they work and the commands to use them effectively.
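To make that concrete, here is a minimal docker-compose.yml sketch. Jellyfin is just used as a stand-in service (it comes up later in the thread); the host paths are placeholders you would adjust for your own setup.

```yaml
# docker-compose.yml - minimal single-service example (host paths are placeholders)
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    container_name: jellyfin
    ports:
      - "8096:8096"          # web UI, host:container
    volumes:
      - ./config:/config     # persistent configuration
      - ./media:/media:ro    # media library, mounted read-only
    restart: unless-stopped
```

Run `docker compose up -d` in the same folder to start it, and `docker compose pull && docker compose up -d` later to update.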
I was actively avoiding Docker too after I tried (and succeeded) getting Home Assistant running in Docker many years ago.
It seemed like a confusing mess when I did it back then. The resulting Home Assistant container ran like a dream for many years, until it didn't, and I had no clue how to get it working again.
I ended up just throwing Home Assistant OS on the Pi, and it was very, very simple to set up.
Anyway that was then. This is now.
I bought a mini pc in February and installed Proxmox on it.
Initially I just wanted Home Assistant, Plex and some kind of way of populating Plex with media.
I just ran Windows VMs with bare-bones programs installed. The problem is this took a lot of RAM and was flaky.
Cut to now, where I have a Home Assistant VM, a Linux VM and an OMV VM for my NAS.
The Linux VM has a bunch of Docker containers running that do everything my bare-bones Windows VM did, but better.
I can access the containers via Portainer and update them with a button press. I cannot access the VM GUI because I passed through my GPU, which knackered the console in Proxmox, and that is absolutely fine; if I need to do anything in the VM I have SSH.
My Linux VM uses less RAM than my Home Assistant VM, which is amazing considering what is running on it.
Docker is where it's at! It takes a little learning, but with Portainer installed it's all in one GUI instead of SSHing in to create text files and folders.
Yesterday I wanted to give Immich a try. So I found a tutorial on YouTube, went into his notes and found his GitHub and in there, his Docker Compose file.
I LITERALLY JUST COPIED IT AND PASTED IT INTO PORTAINER AND PRESSED GO AND HAD IMMICH RUNNING IN MINUTES.
Now the caveat here is that I've had a few months of playing with Docker now. I've tried and failed to get Immich running a couple of times over the past few months. But I watched this guy paste his code in and press go, then start talking about how it works, so I was pretty confident he had taken the time to put together a working compose file.
Wall of text to say get acquainted with Portainer and try installing and playing with some stuff. Bear in mind that it probably won’t work to start with and don’t rely on it until you’ve proven it out, but tinker with it until it’s working. Eventually you’ll get a feeling for it and it will become simple to you.
I'm actually diving into Linux and Docker for the first time right now, and the setup you described is exactly what I'm looking to do. But seeing as this is my first time, I was hoping to get some confirmation that my idea will work, or to have any faults with my setup pointed out.
In terms of hardware I have on-hand, I’ve got two laptops. One of which is a half-top, meaning it has no screen. The other is my daily driver. Both are running Linux Mint.
The half-top has 1TB of storage and would have Docker installed with Plex/Jellyfin, the *arr suite, and HomeAssistant. On my daily driver, I would be running Portainer or Heimdall to access everything and make changes within the half-top itself. I would also use this to access any of the media I have stored in the half-top.
Does that sound like everything would work?
Yeah, I don't see why not. It should be as easy as SSHing into the half-top, installing Docker, and having it run the Portainer Agent, then just bang Portainer onto your daily driver and start throwing Docker Compose files at it.
Have a look at Gluetun for your VPN needs. I've basically got all my *arr apps in one stack with Gluetun providing the networking for the stack, and other containers that don't need the VPN, like AdGuard and Homarr, running independently.
I’ve got a Gluetun appreciation post up that should get you started with it.
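This isn't the poster's exact stack, but a rough sketch of the pattern being described: Gluetun holds the VPN tunnel, and other containers in the stack share its network via `network_mode`. The provider, credentials, and the qBittorrent example are placeholders; check the Gluetun wiki for the variables your VPN provider actually needs.

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad        # placeholder provider
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=changeme      # placeholder credential
    ports:
      - "8080:8080"                         # qBittorrent web UI, published via gluetun
    restart: unless-stopped

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"         # all traffic goes through the VPN container
    volumes:
      - ./qbittorrent:/config
    restart: unless-stopped
```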
Choose whatever sounds good and test it
That has been a heavy base of my research. Thank you :)
For what reason are you trying to avoid Docker? Since most projects provide Docker images and an example docker-compose.yml, it's very easy to get the application you want running.
Other projects that do plug-and-play application setup like YunoHost etc. are CasaOS and Umbrel (both use Docker under the hood, by the way).
I was trying to stick to technologies that I know and that I am comfortable with.
I have watched some Docker tutorials, and it just seems more complicated to me. All the tutorials require a terminal, and I am trying to avoid having an open port 22.
So those are the main reasons.
It's a fair response. Some of us aren't flush with time.
I'd put up with SSH just for installing Portainer, and then you can run most of it from its GUI. If you use Docker Compose it will be super easy to make changes to your setup as well. Just change the file and redeploy your bad boys.
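For reference, here is roughly what that one-off Portainer install looks like as a compose file. This is a sketch; Portainer's own docs show a docker run command, but the idea is the same.

```yaml
services:
  portainer:
    image: portainer/portainer-ce:latest
    ports:
      - "9443:9443"                                 # HTTPS web UI
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # lets Portainer manage the local Docker engine
      - portainer_data:/data                        # persistent Portainer settings
    restart: unless-stopped

volumes:
  portainer_data:
```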
I’m a recent dad absolutely strapped for time, but I still managed to set up a headless Debian server with close to zero Linux knowledge. There are so many amazing guides out there, especially on GitHub.
Good luck whatever you go for.
You can use any port for SSH—or you can use something like Cockpit with a browser-based terminal instead of SSH.
I am trying to avoid having an open port 22
If you’re working locally you don’t need an open port.
If you’re on a different machine but on the same network, you don’t need to expose port 22 via your router’s firewall. If you use key-based auth and disable password-based auth then this is even safer.
If you want access remotely, then you still don’t have to expose port 22 as long as you have a vpn set up.
That said, you don’t need to use a terminal to manage your docker containers. I use Portainer to manage all but my core containers - Traefik, Authelia, and Portainer itself - which are all part of a single docker compose file. Portainer stacks accept docker compose files so adding and configuring applications is straightforward.
I’ve configured around 50 apps on my server using Docker Compose with Portainer but have only needed to modify the Dockerfile itself once, and that was because I was trying to do something that the original maintainer didn’t support.
Now, if you're satisfied with what's available and with how much you can configure it without using Docker, then it's fine to avoid it. I'm just trying to say that it's pretty straightforward if you focus on understanding the important parts (there's a small sketch after this list), mainly:
- docker compose
- docker networks
- docker volumes
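Here's the sketch mentioned above: a hypothetical two-service stack that exercises all three ideas, compose itself, a user-defined network, and a named volume. The service names and images are just examples.

```yaml
services:
  app:
    image: ghcr.io/example/app:latest      # hypothetical application image
    ports:
      - "8080:8080"
    networks:
      - backend                            # user-defined network shared with the database
    depends_on:
      - db

  db:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=changeme         # placeholder secret
    volumes:
      - db_data:/var/lib/postgresql/data   # named volume keeps data across container rebuilds
    networks:
      - backend

networks:
  backend:

volumes:
  db_data:
```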
If you decide to go that route, I recommend TechnoTim's tutorials on YouTube. I personally found them helpful, at least.
Thank you for your input. It is very appreciated. I will take a look at TechnoTim.
If you don’t have a surplus of time, Docker should be your top priority. It will save you many many hours.
Thank you for your feedback. I appreciate it.
As everyone else has said, if your time is limited, your best path is docker. You don’t need to learn all of docker, but understanding how docker compose works at a fairly high level will drastically speed up setup as well as administrative tasks like updating and backups
As for what to run, you mentioned WireGuard and a notes app. The notes could be handled without a central server by using Obsidian, and I'm not seeing the use case here for WireGuard.
I would start by asking what problem or pain point you are trying to solve.
In my case, I had a bunch of IoT devices all making excessive DNS queries and I wanted a network-level ad blocker, so I set up PiHole (two, in fact; they run my network's DNS).
I had a large music collection and burning mix CDs was no longer practical, so I set up Jellyfin (Navidrome might have also worked) and use Finamp on my phone.
Google started being a pain in my backside, so I set up Nextcloud.
Someone got me some smart devices, so Home Assistant was set up.
I needed a way to find these services, so I set up Heimdall as a dashboard.
I wanted some of these publicly available, so I set up Caddy as a reverse proxy.
Thank you for your input. The reason I want a WireGuard server is to have a secure tunnel into my growing collection of gadgets: a NAS, an RPi, and now a server. Using a platform would open up options to ease future deployment or testing of services/applications.
If all I needed was a WireGuard server, I might have picked up another RPi and a USB RJ45 dongle. I just wanted a server so I wasn't restricting myself, and to give WireGuard some extra power.
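For what it's worth, the WireGuard server itself is a small Docker job. One commonly used option (my suggestion, not something mentioned above) is wg-easy, which adds a web UI for managing peers. A rough sketch; the hostname is a placeholder, and the auth-related environment variables have changed between versions, so check the project's README.

```yaml
services:
  wg-easy:
    image: ghcr.io/wg-easy/wg-easy
    environment:
      - WG_HOST=vpn.example.com        # placeholder: your public hostname or IP
    volumes:
      - ./wireguard:/etc/wireguard     # persists peer configs and keys
    ports:
      - "51820:51820/udp"              # WireGuard tunnel
      - "51821:51821/tcp"              # web UI
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    sysctls:
      - net.ipv4.ip_forward=1
    restart: unless-stopped
```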
This is a journey that will likely fill you with knowledge. During that process what you consider “easy” will change.
So the answer right now for you is use what is interesting to you.
Yes, there are plenty of ways to do the same thing. IMO, though, right now just jump in and install something. Then play with it.
Just remember modern CPUs can host many services from a single box. How they do that can vary.
Thank you for your encouragement :)
I highly recommend using Docker/Podman, even though you say you don’t want to. It is trivial to start up a new service using docker-compose once you get the basics down.
The format is host:container when specifying ports or directories in the compose YAML (e.g. 58333:8080 will route the container's port 8080 to the host machine's port 58333).
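In other words, something like this (the image name and host path are placeholders):

```yaml
services:
  example:
    image: example/image:latest        # placeholder image
    ports:
      - "58333:8080"                   # host port 58333 -> container port 8080
    volumes:
      - /srv/example/config:/config    # host directory -> container directory
```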
Thank you for your input. I did note Podman some days ago, and I will take a gander at it. Thank you!
If you feel that maintaining something you're deploying will take too much time, then there may also be a toolset/skillset mismatch. Take the Docker/K8s you've called out, for example; they're the graduated steps for deploying things in the industry. Things deployed via Docker drastically reduce the time needed to get up and running by eliminating large swaths of dependency management, and they give you the option of using on-platform tools to manage self-updates if that's something you want (though this could introduce failures where manual upgrade steps are required). You'd graduate to k8s as your infrastructure footprint starts to grow. Learning the right tools could reduce the barrier to entry and the time requirements on the apps front.
Having said that, it is probably better to ask the inverse: what is it that you’re trying to achieve and why?
Without a reason that resonates with you, you're not going to find time in your already hectic life to maintain it and keep it working. Nor will you be willing to find the time to learn the right tools to deploy these things.
Thank you for your input. I work in IT as a support specialist consultant and have dabbled quite a bit with various technologies. I run Linux as my primary operating system at home. Your insinuation of a mismatch might sound apt, but the wider picture is that there is some reason to my madness. However, the push-back against my "no Docker please" has been noted, and I will take a closer look at it and see if I can't make myself understand the basics. Thank you!
Tor relay (guard/middle or bridge/snowflake) and i2p node (i2p, i2p+ or i2pd)
I am not sure what you are trying to communicate, apologies.
To start with:
- PiHole: DNS server
- Jellyfin: home media server
- Forgejo: Git server to hold your docker compose files

So, you are recommending the opposite? I don't need Jellyfin, so that confuses me a bit.
I haven't personally used any of these, but looking them over, Tipi looks the most encouraging to me, followed by YunoHost, based largely on the variety of apps available, but also because it looks like Tipi lets you customize the configuration much more. FreedomBox doesn't seem to list the apps in its catalog at all and its site seems basically useless, so I ruled it out on that basis alone.
Thank you for your input. I have eliminated Yunohost and Freedombox and will see if Tipi and “normal docker maintenance” can work together, since Tipi is based on docker technologies.
I see you're open to Docker now. I'm a huge fan of Dockge, which is a nice web GUI for managing your Docker Compose files. I tried just using docker run commands to set up my containers and it was a painful experience. I tried Portainer for a bit as well, but when I came across Dockge, everything just fell into place. You can paste docker run commands in there and it will convert them to Docker Compose, and it also makes it easy to update, set up stacks with multiple related containers, etc.
I also love Tailscale for remotely accessing my network. This video was really helpful in setting up Tailscale for a few of the Docker containers I want to access remotely.
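The usual pattern (not necessarily exactly what that video does) is to run the official tailscale/tailscale container as a sidecar and have the app share its network namespace. The node name, auth key, and app image below are placeholders.

```yaml
services:
  tailscale:
    image: tailscale/tailscale:latest
    hostname: notes-box                    # hypothetical node name in your tailnet
    environment:
      - TS_AUTHKEY=tskey-changeme          # placeholder auth key from the admin console
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      - ts_state:/var/lib/tailscale
    devices:
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - NET_ADMIN
    restart: unless-stopped

  notes:
    image: example/notes-app:latest        # placeholder app image
    network_mode: "service:tailscale"      # reachable only over the tailnet
    restart: unless-stopped

volumes:
  ts_state:
```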
If you decide you don’t have the time to learn all this, I have used Umbrel in the past and they just released a big update. It has an “app store” that handles setting up your Docker containers for you.
Good luck, it’s a fun journey!
Can you briefly summarize the difference between dockge and portainer?
I use portainer for the most part and have no real complaints aside from some ambiguous error messages when containers fail to deploy.
Dockge is really just a nice GUI for Docker Compose. It still creates docker-compose.yml files, and you could always switch back to managing them that way with no impact. It can convert docker run commands to Docker Compose; usually the result is pretty close but may need a little tweaking. It also shows the terminal output from the container, which is helpful for troubleshooting. It feels more lightweight to me and does only what I need, nothing more.
I had been managing my own Docker compose files for a while, so Portainer may do some or all of this also, but it always felt a little bloated, so this was a good fit.
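As a concrete illustration of the kind of conversion Dockge does (my own example, not Dockge's literal output), here is a docker run command and its compose equivalent:

```yaml
# Roughly equivalent to:
#   docker run -d --name web -p 8080:80 -v web_data:/usr/share/nginx/html --restart unless-stopped nginx:alpine
services:
  web:
    image: nginx:alpine
    container_name: web
    ports:
      - "8080:80"
    volumes:
      - web_data:/usr/share/nginx/html
    restart: unless-stopped

volumes:
  web_data:
```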
I've been trying to get a Docker Compose file out of my docker run commands, so I'm going to try it out.