This is already implemented on a lot of the settings pages on 11.
Edit: just wanted to add that I don’t think it’s implemented well. I use it at work.
No one is pretending anything, and no one is claiming these laws aren’t specifically targeting trans people; of course they are. What they’re claiming is that the same law would also apply to what’s being depicted in the photo, which is true.
lol I would open every port on my router and route them all to wireguard before I would ever consider doing this
I use Nextcloud with Nginx Proxy Manager and just use NPM to handle the reverse proxy, nothing in Nextcloud other than adding the domain to the config so it’s trusted.
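For reference, the trusted domain part is just an entry in Nextcloud’s config.php; a minimal sketch, where the domain name is a placeholder for your own:

```php
// config/config.php (fragment)
// 'cloud.example.com' is an example; use the domain NPM proxies to this instance
'trusted_domains' => [
  0 => 'localhost',
  1 => 'cloud.example.com',
],
```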
I use Plex instead of Jellyfin, but I stream it through NPM with no issues. I can’t speak to the tunnel though, I prefer a simple wireguard tunnel for anything external so I’ve never tried it.
Edit: unless that’s what you mean by tunnel, I was assuming you meant traefik or tailscale or one of the other solutions I see posted more often, but I think one or both of those use wireguard under the hood.
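A minimal client-side WireGuard config for that kind of external access looks roughly like this; every key, address, and hostname here is a placeholder:

```
# /etc/wireguard/wg0.conf on the client (illustrative values only)
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820   # the single port you expose
AllowedIPs = 10.0.0.0/24           # route only the tunnel subnet
PersistentKeepalive = 25
```

(Of the tools mentioned above, Tailscale is the one built on WireGuard; Traefik is a reverse proxy rather than a tunnel.)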
I have a feeling the people making fiber internet faster aren’t the same people installing it in neighborhoods.
The product was an LLM.
I never switched to Proton for exactly this reason. I’d much rather use a service that does one thing really well than one that does 20 things okay.
It’s all just to keep you locked into your subscription. Now they want you to keep other money tied up in it too.
The issue is that the Docker container will still be running as the LXC’s root user even if you specify another user in the docker compose file or run command, and if that root user doesn’t have access to the directory, the container will always fail.
The solution to this is to:
1. Remap the unprivileged LXC’s root user to a user on the Proxmox host that has access to the dir, using the LXC’s config file.
2. Mount the container’s filesystem with pct mount.
3. chown everything in the container owned by the default root mapped user (100000).
These are the commands I use for this:
find /var/lib/lxc/xxx/rootfs -user 100000 -type f -exec chown username {} +
find /var/lib/lxc/xxx/rootfs -user 100000 -type d -exec chown username {} +
find /var/lib/lxc/xxx/rootfs -user 100000 -type l -exec chown -h username {} +
find /var/lib/lxc/xxx/rootfs -group 100000 -type f -exec chown :username {} +
find /var/lib/lxc/xxx/rootfs -group 100000 -type d -exec chown :username {} +
find /var/lib/lxc/xxx/rootfs -group 100000 -type l -exec chown -h :username {} +
(Replace xxx with the LXC number and username with the host user/UID)
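For the root-user remap mentioned above, the config entries look something like this; I’m assuming the host user has UID 1000 here, so adjust to match yours:

```
# /etc/pve/lxc/xxx.conf
# map container root (uid 0) to host uid 1000, pass the rest of the
# range through to the default 100000 offset, leave groups unchanged
lxc.idmap: u 0 1000 1
lxc.idmap: u 1 100001 65535
lxc.idmap: g 0 100000 65536
```

You also need a line in /etc/subuid allowing root to use that UID, e.g. `root:1000:1`.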
If group permissions are involved you’ll also have to map those groups in the LXC config, create them in the LXC with the corresponding GIDs, add them as supplementary groups to the root user in the LXC, and then add them to the docker compose yaml using group_add.
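The group_add part is just a list of GIDs in the compose file; a sketch, where the image name and the GID 1001 are made-up examples standing in for a group that exists in the LXC and is mapped in its config:

```yaml
# docker-compose.yml (fragment)
services:
  myapp:
    image: example/image
    group_add:
      - "1001"   # GID of the mapped supplementary group inside the LXC
```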
It’s super confusing and annoying but this is the workflow I’m using now to avoid having to have any resources tied up in VMs unnecessarily.
I’ve been doing this for at least a decade now and the drives are just as reliable as if you bought them normally. The only downside is having to block the 3.3 V pin on the SATA power connector with Kapton tape for it to work.
I like the workflow of having a DNS record on my network for *.mydomain.com pointing to Nginx Proxy Manager, and just needing to plug in a subdomain, IP, and port whenever I spin up something new for super easy SSL. All you need is one Let’s Encrypt wildcard cert for your domain and you’re all set.
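If your local DNS happens to be dnsmasq (or Pi-hole, which uses it under the hood), the wildcard record is a single line; the IP here is a placeholder for whatever box runs NPM:

```
# dnsmasq: resolve mydomain.com and every subdomain to the NPM host
address=/mydomain.com/192.168.1.50
```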
IIRC from running into this same issue, this won’t work the way you have the volume bind mounts set up, because the movies and downloads directories will be treated as two separate filesystems inside the container, and hardlinks don’t work across filesystems.
If you bind mounted /media/HDD1:/media/HDD1 it should work, but then the container will have access to the entire drive. You might be able to get around that by running the container as a different user and only giving that user access to those two directories, but docker is also really inconsistent about that in my experience.
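In compose terms, the difference is roughly this; the service name, image, and paths are illustrative:

```yaml
services:
  myapp:
    image: example/image
    volumes:
      # Hardlinks break: each bind mount is a separate filesystem
      # from the container's point of view
      # - /media/HDD1/downloads:/downloads
      # - /media/HDD1/movies:/movies

      # Hardlinks work: one mount, one filesystem
      - /media/HDD1:/media/HDD1
```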
lol Japan invents the three major optical disc storage mediums that became ubiquitous and their government says fuck that and just keeps on using floppy disks
If you want Proxmox to dynamically allocate resources you’ll need to use LXCs, not VMs. I don’t use VMs at all anymore for this exact reason.
RDP does not fill the same role as Teamviewer at all. The M$ alternatives would be Quick Assist or the older MSRA.
America didn’t drop anything, because they weren’t saying it in the first place; the Soviets were. America also isn’t the one that coined a new phrase for it; British royalists were, and they probably had no knowledge of the Russian phrase. All of this was explained in the article you linked.
1. it’s a euphemism for “And You Are Lynching Negroes” - that’s literally what people used to say instead of whataboutism
lol who do you think was saying this, and how is “whataboutism” in any way a euphemism for it? Did you even bother to read the article you linked?
I also take money from possible fascists because I need it to survive. It’s called having a job.
Am I missing something in this article? I’m not defending either company, but it doesn’t seem like they actually have any evidence to confirm either is doing this.
The world’s top two AI startups are ignoring requests by media publishers to stop scraping their web content for free model training data, Business Insider has learned.
The article claims this, but then says this about the source of that info:
TollBit, a startup aiming to broker paid licensing deals between publishers and AI companies, found several AI companies are acting in this way and informed certain large publishers in a Friday letter, which was reported earlier by Reuters. The letter did not include the names of any of the AI companies accused of skirting the rule.
So their source doesn’t actually say which companies are doing this, but then they jump straight into this:
AI companies, including OpenAI and Anthropic, are simply choosing to “bypass” robots.txt in order to retrieve or scrape all of the content from a given website or page.
So they’re just concluding that based on nothing and reporting it as fact?
That I’m not sure of. My proxmox host is headless and none of my containers have a GUI so I haven’t tried.
Sounds like you maybe just have a habit of entering conversations on topics you don’t know much about (and in this case self-admittedly don’t even care about), so you get a lot of people who are more informed and do care expressing their disagreement with you?
Have you considered just not doing that?