

And it’s labelled right on them.
Seems pretty obvious


Yep.
I run Tailscale on every device that can run it, and run a Tailscale subnet router on one device at home for the devices that can’t.
It’s my fallback if Syncthing ever has a discovery server failure.


It’s a fantastic app, but doesn’t do sync like SyncThing or Resilio Sync.
It can do things similarly if you work at configuring it, but it can never monitor a remote and sync based on file changes there. That’s not a criticism, it’s a function of the file system approach it takes - it can sync with many different file systems, but it doesn’t have a client at the other end - it simply interfaces with that file system. Fantastic actually.
I’ve used it since about 2010, it was my solution for moving files back and forth for a long time. I still use it for specific things, but I’ve put more effort into ST and Resilio Sync config and management because they’re full-on sync suites.


Instant sync only works for local folders it can monitor. Since it doesn’t have a client on the other end, there’s no way to make this happen (it would have to monitor the destination).
This would require keeping a connection open between devices, which is a high cost from a network (and especially battery) perspective.
It’s a great app - I’ve used it for 10+ years and paid for it 2 or 3 times because it’s worth it.


I’ve been using Fork for years. Möbius on iOS has financial support from a 3rd party that uses Syncthing in their own processes, so I suspect it will stay around.
That said, Resilio Sync is the other most-viable option I know (and use).
It’s a little less kind to battery with larger folder pairs, and uses more memory since it stores the index in RAM. But it’s robust.


Man, the countertop convection oven has been a game changer for us.
We use it for pretty much every meal. It’s really cut down on using the oven, so there’s less heat in the kitchen in the summer, and I’m sure it’s made a dent in the electric bill (or maybe it’s a wash because we use it so much more).


Lol.
I have some cups that can’t be microwaved because they have metal foil on them.
Though if you microwave them a couple times, that problem is solved. 😝


For Brits who have tea multiple times a day, and whose appliances run on 230 V mains, an electric kettle makes sense. It can boil water in less than half the time the most powerful consumer microwave in the US can, because there’s no magic to a microwave - it can only put as much energy into the water as it can draw from its electric circuit, about 2,000 watts, max.
Outside of those conditions, an electric kettle doesn’t make sense.
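The energy argument above can be put into rough numbers. A back-of-envelope sketch - the cup size, starting temperature, and wattages are my assumptions, not figures from the post:

```python
# Back-of-envelope: time to bring one cup of water to a boil.
# Assumed figures: a 250 mL cup of 20 C tap water, ~1,000 W actually
# delivered to the water by a US microwave (wall draw is higher, since
# magnetrons aren't 100% efficient), ~2,800 W for a UK kettle on 230 V.

SPECIFIC_HEAT = 4186          # J per kg per degree C, for liquid water
mass_kg = 0.25                # 250 mL of water
delta_t = 100 - 20            # heat from 20 C to boiling

energy_j = mass_kg * SPECIFIC_HEAT * delta_t   # roughly 84 kJ

for label, watts in [("US microwave, ~1,000 W into the water", 1000),
                     ("UK kettle, ~2,800 W", 2800)]:
    print(f"{label}: about {energy_j / watts:.0f} seconds")
```

Even before efficiency losses, the kettle wins simply by drawing nearly three times the power.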


The first time I heard about superheating in the microwave was in the mid 80’s.
We tried (as dumbass kids do) to make it happen. Repeatedly.
It takes a pristine container and a lot of heating.
Distilled water works best, because it lacks minerals (fewer nucleation sites).
The risk is waaaay overblown. I boil water in the microwave almost every day, and haven’t superheated it since trying to do it back in the 80’s.
Edit: Quote from an article I once found discussing the risk
The prominence of the warning is disproportionate to the documented injury frequency, even though the underlying physics is sound.
Microwaves have been commonplace for 50 years now. Think how many people boil how many millions of cups of water per day in a single US state alone. If superheating were a commonplace occurrence, or even a not-so-commonplace one, we’d have plenty of records of it.
It takes a narrow set of conditions to produce, so it happens even less often than “rarely”.


Nothing official, sorry, wish I did!
Mostly personal experience. But that experience is shared among a group of peers and friends in the SMB space, whose clients think they can keep stuff on externals in an office safe, only to find the drives have gone tits up nearly every time they pull them out a couple years later. And it’s not the enclosures, it’s the drives themselves - they all have external drive readers for just these kinds of circumstances.
In the enterprise you’d get laughed out of a datacenter for even suggesting cold drives for anything. Of course that’s based on simple bit-rot concerns, which is why file systems like ZFS regularly scrub stored data to test and verify every bit.
If nothing else, that bit rot should be enough of a reason to not store data on cold drives. It’s not what drives were designed (or tested) to do.
Edit: Everything I’ve read over the years suggests failures happen as much from things like lubricants hardening from sitting as from bit rot. I’ve experienced both. I’ve seen drives that spin up after ten years but have numerous data errors, and drives that just won’t spin up, while their counterparts that have run nearly continuously are fine (well, their bit-rot was caught by the OS and mitigated). With a running drive you have monitoring, so you know the state.
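The scrub idea ZFS uses - checksum everything, then re-verify on a schedule - can be sketched in miniature. This is just the principle, not ZFS itself; a cron-driven script like this is a poor man’s scrub for drives that do stay spinning:

```python
import hashlib
import os

def build_manifest(root):
    """Record a SHA-256 checksum for every file under root."""
    manifest = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            manifest[os.path.relpath(path, root)] = digest
    return manifest

def scrub(root, manifest):
    """Re-hash everything; return paths whose contents no longer match."""
    current = build_manifest(root)
    return sorted(path for path, digest in manifest.items()
                  if current.get(path) != digest)
```

Build the manifest once, store it somewhere other than the drive being checked, and any nonempty result from scrub() means the bits no longer match what was written. A cold drive on a shelf never gets this check, which is the point.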


Meh, you got a spare kidney…


Fine, I write an extensive bit of help with links to QNAP docs and a few other things, and you downvote.
Fine, how about I just delete it, and ya all go figure it out without my help.


Clearly the DOE has been doing a great job for 40+ years, reducing the average reading level


I would definitely keep them warm, as in a running machine.
Drives on a shelf die more often than always-on drives.


I use a similar Dell Optiplex 7000 series.
It boots from the NVMe, with an 8TB 3.5" disk for data and a 500GB SSD for my VMs. (Since the spinning disk can idle much lower than the SSD, getting my always-on VMs off the big drive lets it idle, and the SSD’s peak power is lower than the spinning disk’s; adding the SSD only increased net power slightly.)
I use a splitter on the 12v power line for both of the drives. It’s fine.
This box only has an 80w power supply, and with both those drives hooked up it draws 20w at idle, and peaks at 70w when converting multiple videos simultaneously.
The manual tells you what you can do without voiding the warranty.
Edit: Given its age, I’d pull the CPU cooler and replace the paste. It’s likely hardened by now. Mine was randomly rebooting because the CPU would overheat. Replaced the thermal paste and it’s been rock solid since.


I self host on a 5 year old Dell Optiplex Small Form Factor desktop.
I also have a Raspberry Pi, which has about 1/16 the performance of the desktop - Pi can be used for all sorts of stuff.


Yep.
My Pi is about 8 watts. Really hard to beat.
The SFF started at 12w, but swapping out the data drive for a much larger one pushed it up 5w. And now with 2 VMs always running (PiHole and a Windows VM), it hovers at 20w.
The ancient NAS (Drobo) sits at about 15w.


The number one thing you can do, by orders of magnitude, is to start with power-friendly hardware.
For example, my previous server was an old gaming machine. Its lowest idle power consumption was 80 watts, and that was while running an OS that permitted heavy power-reduction control, with every power-saving feature in the BIOS enabled.
Compare that to my 2019 Dell Optiplex Small-Form-Factor desktop I’m running as a server. The power supply is rated for 80 watts, MAX. It idles at 20w, peaks at about 70w when converting multiple videos simultaneously. This with an 8 TB enterprise drive for data.
So 1/4 the power draw when idle, where it spends perhaps 90%+ of its time. Even things like Resilio Sync and Syncthing don’t significantly raise CPU time.
Streaming with Jellyfin or MediaMonkey has nearly no CPU impact.
There’s nothing in heavier hardware you could tune to get down to 20w.


Very carefully.
I suggest a fully-enclosed space with near-normal Earth pressure and air mix. It makes for faster adjustment, and guests are less likely to get winded.
I’d have multiple spaces connected to a central space, all with plenty of views to the outside (with appropriate solar blocking in the “glass”).
In the main area I’d have a well stocked bar, with moon/solar system themed drinks. Make it an open bar, anyone who can afford to get there has already paid.
Basically, it’s not much different than hosting on Earth - it just needs softer floors and walls as people experiment with reduced gravity.