Yeah, I did try that. Basically, doubling the memory I allocated bought it about half again as long before it crashed, but it still crashed eventually.
It’s no big deal; this was last year, and I may try again one day. Loving SearXNG though!
I tried running YaCy for a while, but it would run for a bit less than a day, then run out of memory and crash, over and over. I tried to figure out the problem, but it’s niche enough that I couldn’t get anywhere googling the issue.
I’ve had DynDNS since it was $10 a year, and I’ve gradually realized that my ISP changes my IP less than once a year on average…
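For what it’s worth, the core of any dynamic DNS client is just this check: compare the current public IP against the last one you pushed, and only update the record when it changed. A minimal sketch; `fetch_ip` and `update_record` here are hypothetical stand-ins for whatever your provider’s API does, not a real library:

```python
# Sketch of the update check a dynamic DNS client performs.
# fetch_ip and update_record are hypothetical callables supplied
# by the caller; they stand in for a real provider API.

def needs_update(cached_ip: str, current_ip: str) -> bool:
    """Return True when the DNS record should be refreshed."""
    return current_ip != cached_ip

def sync(cached_ip: str, fetch_ip, update_record) -> str:
    """Fetch the current public IP; push an update only on change."""
    current = fetch_ip()
    if needs_update(cached_ip, current):
        update_record(current)
    return current
```

Run on a timer (cron, systemd timer), this does nothing at all in the common case where the ISP hasn’t rotated the address, which is exactly why an IP that changes less than once a year makes the service feel almost unnecessary.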
I have it working with LACP’d 4 Gb networking for the transfers. Five nodes. I agree, though, that it’s a beast on RAM.
I have tried a couple of Proxmox clusters, one with overkill specs and one with little mini PCs. Proxmox does eat up a fair amount of memory, but I have used it with Ceph for live migrations. It’s really useful to be able to power off a machine, work on it, then bring it back up with no interruptions in my services. That said, my mini PCs always seemed to be hurting for RAM. So those are my pros and cons.
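That maintenance workflow boils down to draining the node before powering it off. A rough sketch using Proxmox’s own `qm` CLI; the VM ID and node names are placeholders, and with shared Ceph storage only the RAM state has to move:

```shell
# Drain a node before maintenance: live-migrate a running VM to
# another cluster member. VM ID 101 and node names are placeholders.
qm migrate 101 node2 --online    # move VM 101 off without downtime
# ...power off node1, service the hardware, boot it back up...
qm migrate 101 node1 --online    # move the VM back afterwards
```

Repeat per VM (or let HA handle placement), and services never notice the reboot.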
There’s a series of Lemmy posts called the Linux Upskill Challenge that goes step by step through setting up and using Linux. I tried jumping straight into self-hosting too, and it sucked.
What worked for me:
I’m still in the middle of 6 and 7. Not super comfy with Docker quite yet, but getting there. I really do love having my stuff self-hosted, though. Well worth the effort.
Commenting to register my interest.
I will confess that I was tempted to throw some snarky comment about Linux, but I got over the urge.