• 0 Posts
  • 17 Comments
Joined 2 years ago
Cake day: June 14th, 2023


  • But I think I’m understanding a bit! I need to literally create a file named “/etc/radicale/config”.

    Yes, you will need to create that config file in one of those paths. Then you can continue with any of the configuration steps in the documentation; you can do the Addresses step first.

    A second file for the users is needed as well; I would guess the best location is /etc/radicale/users

    For the Authentication part, you will need to install the apache2-utils package (sudo apt-get install apache2-utils) to get the htpasswd command for adding users.

    So the command to add the first user would be htpasswd -5 -c /etc/radicale/users user1, replacing user1 with your username. The -c flag creates the file, so omit it when adding further users.

    And what you need to add to the config file for it to read your user file would be:

    [auth]
    type = htpasswd
    htpasswd_filename = /etc/radicale/users
    htpasswd_encryption = autodetect
    

    Replacing the path with the one where you created your users file.
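    Putting the pieces from the documentation together, a complete /etc/radicale/config covering both the Addresses and Authentication steps might look like this (the users path is the guess from above; adjust it if you put the file elsewhere):

```ini
# /etc/radicale/config — sketch combining the Addresses and Auth steps

[server]
# listen on all IPv4 and IPv6 interfaces, port 5232
hosts = 0.0.0.0:5232, [::]:5232

[auth]
type = htpasswd
htpasswd_filename = /etc/radicale/users
htpasswd_encryption = autodetect
```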


  • I’m trying to follow the tutorial on the radicale website but am getting stuck in the “addresses” part.

    From reading the link you provided: you have to create a config file in one of two locations, if it doesn’t already exist:

    “Radicale tries to load configuration files from /etc/radicale/config and ~/.config/radicale/config”

    After that, add what the Addresses section says to the file:

    [server]
    hosts = 0.0.0.0:5232, [::]:5232
    

    And then start/restart Radicale.

    After that, you should be able to access it from another device using the Pi’s IP address and that port.


  • Yeah, I started the same, hosting LAN parties with Minecraft and Counter Strike 1.6 servers on my own Windows machine at the time.

    But what happens when you want to install some app/service that doesn’t have a native binary installer for your OS? You will not only have to learn how to configure/manage said app/service, you will also need to learn one or more additional layers.

    I could have said “a simple bare metal OS and a binary installer” and for some people it would sound alien, while others would be as nitpicky about it as they are with me saying Docker (not seeing that the terminology I used was not for a newbie but for them). If the apps you want to self-host are offered through things like YunoHost or CasaOS, that’s great, and there are apps/services that can be installed directly on your OS without much trouble, which is also great. But there are cases where you will need to learn something extra (and for me that extra was Docker).


  • XKCD 2501 applies in this thread.

    I agree; there are so many layers of complexity in self-hosting that most of us tend to forget about them, when the most basic thing would be a simple bare metal OS and Docker.

    you’ll probably want to upgrade the ram soon

    His hardware has a max RAM limit of 4 GB, so the only probable upgrade he could do is a SATA SSD. Even so, I’m running around 15 Docker containers on similar specs, so as a starting point it’s totally fine.


  • I get your point, and know it has its merits. I would actually recommend Proxmox for a later stage, once you are familiar with handling the basics of a server and have hardware that can properly handle virtualization. For OP, who has a fairly old, low-spec machine and is also a newbie, I think fewer layers of complexity make a better starting point, so they don’t get overwhelmed and just quit; they can then build on top of that in the future.


  • I have a Dell Inspiron 1545 with similar specs to yours, running Debian with Docker and around 15 services in containers, so my recommendation would be to run Debian server (with no DE), install Docker, and start from there.

    I would not recommend proxmox or virtual machines to a newbie, and would instead recommend running stuff on a bare metal installation of Debian.

    There are a bunch of alternatives that ease the management of apps, like YunoHost, CasaOS, Yacht, Cosmos Cloud, Infinite OS, Cockpit, etc., which you can check out and use on top of Debian if you prefer. But I would still recommend spending time learning how to do stuff yourself directly with Docker (using docker compose files), and you can use something like Portainer or Dockge to help you manage your containers.
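    As a starting point for the compose-file approach, a minimal docker compose file looks something like this (the folder, service name, and ports here are made-up examples; swap in whatever app you actually want to run):

```yaml
# ~/stacks/whoami/compose.yml — hypothetical example stack
services:
  whoami:
    image: traefik/whoami   # tiny demo image; replace with your app's image
    container_name: whoami
    ports:
      - "8080:80"           # host port 8080 -> container port 80
    restart: unless-stopped
```

    You would bring it up with docker compose up -d from the folder containing the file, and take it down with docker compose down.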

    My last recommendation: while you are testing and trying stuff, don’t put your only copy of important data on the server; if something breaks, you will lose it. Invest time in learning how to properly backup/sync/restore your data so you have a safety net, and if something happens, you have a way to recover.


  • I have no experience with this app in particular, but most of the time an issue like this, where you can’t reach an app on any path besides the index, happens because the app itself doesn’t handle being served from a subfolder, meaning the app expects paths like domain.tld/index.html instead of domain.tld/subfolder/index.html for all its routes.

    Some apps let you add a prefix to all their routes so they can work; then you not only have to configure nginx but also the app itself to use the same subfolder.

    Other apps will work with the right configuration in nginx, as long as they do a full page load every time the path/route changes.

    If it is a PWA that doesn’t do a page load every time the path changes, it’s not going to work with subfolders, since it doesn’t do any page refresh that goes through nginx; it just rewrites the visible URL in the browser.

    What I can recommend is switching to a subdomain like 2fa.domain.tld instead of a subfolder and testing if it works; subdomains are the modern standard for this kind of thing and avoid this type of issue.

    Edit: looking at the app demo, it seems to be a Vue.js PWA that doesn’t do full page refreshes on path changes, so as stated you will probably have to switch to a subdomain to make it work.
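    For the subdomain approach, the nginx side is just a plain server block with no subfolder rewriting at all (sketch; the domain and upstream port are placeholders for your setup):

```nginx
# 2fa.domain.tld — reverse proxy straight to the app, no path prefix involved
server {
    listen 80;
    server_name 2fa.domain.tld;

    location / {
        proxy_pass http://127.0.0.1:8000;  # wherever the app listens
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```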



  • As others have already commented, what you need is a Dynamic DNS service: you register a subdomain and set up a small program or script on your computer that pings the DDNS server every few minutes. You leave that running in the background, and if the server detects that the IP making the request has changed, it updates the subdomain to point to it automatically.

    You could access the blog from the subdomain of the DDNS directly or if you get your own domain, you can point it to the DDNS.

    If you want a recommendation, I have been using DuckDNS for years, and it has been pretty reliable.
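    With DuckDNS in particular, that “small program or script” can be as little as a cron job hitting their update URL (sketch; YOURSUBDOMAIN and YOURTOKEN are placeholders you get from your DuckDNS account page):

```crontab
# crontab -e entry: ping DuckDNS every 5 minutes; leaving ip= empty means
# "use the IP this request came from"
*/5 * * * * curl -s "https://www.duckdns.org/update?domains=YOURSUBDOMAIN&token=YOURTOKEN&ip=" >/dev/null 2>&1
```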


  • what is a good solution to keep a music folder backed up

    syncthing (file sync, update: removed this, not needed, actually need a backup solution)

    For a backup solution, you could use Borg or Restic; they are CLI tools, but there are also GUIs for them.

    how can I back up my Docker setup in case I screw it up and need to set it all up again?

    learn to use Dockge to replace Portainer (done, happy with this)

    If you made the switch to Dockge, it might be because you prefer having your docker compose files easily accessible on the filesystem. The question is whether you also have the persistent data of your containers in bind mounts, so they are easy to back up.

    I have a git repo of my stacks folder, with all my docker compose files (secrets in env files that are git-ignored), so that I can track all changes made to them.

    Also, I have a script that stops every container while I’m sleeping and triggers backups of the stacks folder and all my bind mount folders. That way I have daily/weekly backups of all my stuff, and if something breaks, I can roll back to any of these backups, docker compose up, and I’m back on track.

    An important step is to frequently check that backups are good. I do this by stopping my main service and running it from a different folder with the backed-up compose file and bind mounts.
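    A stripped-down sketch of that nightly routine (paths are hypothetical, the docker steps are commented out so the sketch runs anywhere, and tar stands in for whatever backup tool you use; borg or restic would replace it on a real setup):

```shell
#!/bin/sh
set -e
STACKS_DIR="$HOME/stacks"     # compose files + bind mounts live here
BACKUP_DIR="$HOME/backups"
mkdir -p "$STACKS_DIR" "$BACKUP_DIR"   # no-ops on a real setup

# 1) Stop everything so the data on disk is consistent
# for f in "$STACKS_DIR"/*/compose.yml; do docker compose -f "$f" down; done

# 2) Archive the stacks folder (compose files and bind mounts)
ARCHIVE="$BACKUP_DIR/stacks-$(date +%F).tar.gz"
tar -czf "$ARCHIVE" -C "$HOME" stacks

# 3) Bring everything back up
# for f in "$STACKS_DIR"/*/compose.yml; do docker compose -f "$f" up -d; done

echo "wrote $ARCHIVE"
```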




  • Yeah, this is pretty solid advice. I would say you should be safe with patch version updates, like from 1.17.1 to 1.17.4.

    You should be able to jump from 1.17.4 to 2.0.1, and from 2.0.1 to 2.1.3, etc., going straight to the last patch of the next version, but you should go one minor version at a time, paying close attention to versions that have breaking changes in the release notes. And always back up and test before each version jump.


  • This is probably the issue: when you download a script or binary from the internet, it doesn’t have execute permission. Right-click the folder to open it in a terminal (that way you don’t have to cd to it), check the permissions with ls -la, and if it doesn’t have execute permission, change it with chmod.
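    To make that concrete, here is the pattern with a throwaway script (the filename is made up; yours would be whatever you downloaded):

```shell
# simulate a downloaded script that lacks the execute bit
printf '#!/bin/sh\necho ok\n' > myscript.sh
chmod 644 myscript.sh       # downloads typically arrive as rw-r--r--

ls -la myscript.sh          # no "x" in the permission column yet
chmod +x myscript.sh        # grant execute permission
./myscript.sh               # now it runs and prints: ok
```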


  • In my mind it would be super useful: I could sync my photos when my PC is on, and when it’s off, rely on my local photos only, since my main goal is having a backup of them.

    You could do this perfectly with the Docker version, so just curiosity here: why not use Docker?

    Is it because you don’t want to install Docker only for Immich? (You could also install other self-hosted servers/apps as a bonus.)

    Would you be against snap? As someone already mentioned, there is a snap version.

    If the important thing is having backups of your photos, there are alternative apps with different packaging formats.

    You could make a request for a Flatpak and see if other users would also like it, but you would have to wait for feedback from the devs, and accept it if they don’t have the resources or willingness to maintain it.

    Am I crazy, or does it make sense?

    If I’m interested in a specific app, I look at what packaging formats it has, see how to install it, and try it out. Only if I’m having issues with it (that can’t be solved), or can’t run it on my specific distro with the available packaging formats, do I suggest/request a different format.


  • As far as I know, CasaOS (same as Cockpit) is installed on top of a default OS install, so you can always access the OS directly to install/configure things outside of it if the need arises.

    I would not say you would be held back by it if it does what you need. And from what I can see online, you can install any docker container even if it’s not in the default CasaOS catalog, or just access the OS.

    If you want to grow your knowledge of how things work, or learn how to deploy services without CasaOS, you can always do so in parallel with using CasaOS, so I don’t see where the issue could be.