Nebula is awesome.
IT WAS RIGHT THERE ALL ALONG.
Double-press the lock button and the camera opens on Android phones (or at least you can configure it that way). Been that way for years.
Yep! That’s why I said, “something I desperately miss from my Android phones.” :)
I’m kinda confused on why this button is helpful. If you open the camera app on iOS, hitting either volume button will take a picture.
Opens the camera faster than going through the lock screen (something I desperately miss from my Android phones), and it’s multifunctional. You can press it with different intensity levels to get different functionality, and it’s touch sensitive so you can use it to make adjustments.
It’s not like, revolutionary. But it looks pretty cool and I’m looking forward to trying it out.
backlit Kindles
As I understand it, Kindles aren’t backlit but rather frontlit (or sidelit) with a layer designed to diffuse the light across the page evenly. The claim is that the lack of direct lighting helps in the fight against eye strain as well.
Safari offers ad blocking on iOS. You just have to get your extensions through the App Store. But other browsers don’t (yet).
Essentially, yes. Ebonics isn’t inherently offensive or inappropriate, as far as I can tell, but it has connotations that are not attached to AAE. Linguists avoid the term today, and modern uses of it tend to be derogatory.
Dude I’d totally ride a C5 in 2024
Just get a recumbent e-bike and slap some paneling around it.
I never got on the Xperia train because my buddy had a rough experience with one (long before I went to the dark side and bought an iPhone).
But I did think the camera stuff seemed cool.
One of the few things I truly miss going from my OnePlus 7T Pro 5G McLaren Edition (greatest phone I ever owned, may it rest bricked in peace) to the iPhone is being able to open the camera without waking the phone first. Double tapping the power button to open the camera was great. On iPhone I have to wake the screen and then get to the camera, which is fast but not NEARLY as fast, so I sometimes miss shots I would’ve gotten with an Android phone. This should also fix that (as did the Action button in the iPhone 15, I think, but I don’t have that).
If the rumors about the capture button are true, it's pretty neat. Being able to zoom, focus, and snap a picture with the same capacitive button would make smartphone camerawork nearly as easy as it was on digital cameras twenty-five years ago! 😅
Absolutely it was better. But it's hard to believe that Apple, who was a part of the USB-IF, didn't know USB-C was in the works. My conspiracy theory is they knew an open standard was imminent and launched Lightning to keep getting those MFi licensing checks, and purposely made that long a commitment strictly so that, when regulators asked why they hadn't switched to the new standard yet, they could say it was to "help the environment."
Oh probably. We know that by the time they finally dropped Lightning, MFi certification was earning them like $4b per year.
There was even rumor they were going to limit charging speeds over USB-C unless they were detectably “MFi” USB-C cables. Ostensibly to prevent damage to the phone from bad cables, but obviously an attempt to maintain MFi income. I don’t remember if they went through with it.
we promise we’ll use the inferior, proprietary connector
Honestly Lightning wasn’t inferior when it launched in 2012, two years before the design of USB-C was even published. And in some ways I actually prefer it physically (though obviously I would much rather all my devices use USB-C now as it is a much superior connector).
Lightning was reversible where Micro-USB was not, and Lightning’s female port is entirely a hole that the entirely-a-prong male plug goes into, whereas with USB (like with most connectors) the female side has something sticking up inside it that slots into the male plug. This means Lightning is much easier to clean, which becomes necessary because phones in people’s pockets collect lint.
I’m thrilled that iPhone has moved to USB-C, but people forget how much better Lightning was than both the 30-pin iPod connector AND Micro-USB.
Because if they level with you then there is no feeling of getting stronger.
I have seen this complaint about D4 before and I’ve never understood it.
My level 100 characters ALWAYS feel stronger than my level 1 characters. By a LOT. Level 100 character rolling face across big groups of enemies while level 1 character takes a few swings to kill little guys. Even if the mobs around you level with you, you are stronger by virtue of gear and build. And there are MORE enemies the higher level you go.
I have never found that my character feels weaker as I level, except when I change world tier. But that should be expected, since world tiers represent a change in difficulty.
The mobs level with you so you still gain meaningful XP and drops from them. But they get easier to kill as your build comes together.
It is the oldest trick in the book to have different areas with different level requirements: the first area is 1-10, the second 10-20, the third 20-30.
Sure, in other games. It’s a design decision that one can make. It’s not the direction Blizzard went with D4 (or D3 for that matter). This isn’t World of Warcraft.
So if you are level 1 in the first area you feel challenged. Maybe at level 7 you stumble into the second area and get smoked.
Now you have a frame of reference, so when you are level 10 you feel like you actually grew in power, because the enemies in the second area become manageable and the enemies in the first become a joke.
There is still some amount of this. There are areas (and world tiers) with minimum levels. So the stuff there will be stronger than you until you reach that level. Then it will level with you.
Imagine being the all-powerful being that killed the uberdemon, and still you need 10 hits to kill the bunny in the starting area. The whole fantasy breaks apart.
If a small number of trash mobs take ten hits to kill at a high level, your build is bad. Fix it.
But more importantly, so what if the fantasy breaks apart a bit? Some level of immersion is important, but as with all games some suspension of disbelief is required in service of the gameplay. I would much rather be able to roll my way through a Helltide in any area and still get (somewhat) meaningful rewards than have huge parts of the map that are completely useless to me once I’m leveled up.
And that is why it is a problem.
Honestly? Sounds like your expectations for the game aren’t what the developers intended for the game. That’s not a problem with the game, it’s just not what you wanted. I much prefer a Diablo where I’m stomping on enemies because my build and gear are good, not because I’m a higher level than they are.
Just looked it up to confirm. From DuckDuckGo’s page on the topic:
Most of our search result pages feature one or more Instant Answers. To deliver Instant Answers on specific topics, DuckDuckGo leverages many sources, including specialized sources like Sportradar and crowd-sourced sites like Wikipedia. We also maintain our own crawler (DuckDuckBot) and many indexes to support our results. Of course, we have more traditional links and images in our search results too, which we largely source from Bing. Our focus is synthesizing all these sources to create a superior search experience.
Edit: That said, I’d rather use DDG than Bing because DDG eats Bing’s tracking for me, as I understand it.
Isn’t DDG also tied to Bing? I could be mistaken.
Honestly most of what I’ve learned about how to use SD comes from seeing what other people have done and trying to tweak or adjust to get a feel for the tool and its various models. Spend some time on a site like CivitAI to both see what can be done and to find models. I’m very much a noob and cannot produce results nearly as impressive as a good chunk of what I find on there.
The most important thing I’ve learned is how much generative AI, especially SD, is just a tool. And people with more creativity and a better understanding of the tool use it better, just like every other tool.
I do like the idea of using it in GIMP as an answer to Adobe’s Firefly.
you can’t really fit full image generation AI on iPads - for example Stable Diffusion needs at the very least 6GB of GPU memory to work.
You can currently run Stable Diffusion and Flux on iPads and iPhones with the Draw Things app. Including LoRAs and TIs and ControlNet and a whole bunch of other options I’m too green to understand.
Technically the app even runs on relatively old devices, though I imagine only at lower resolutions, and generation probably takes ages.
But in my limited experience it works quite well on an iPad Pro and an iPhone 13 Pro.
The issue isn’t the final, individual art pieces, it’s the scale. An AI can produce sub-par art quickly enough to threaten the livelihood of artists, especially now that there is far too much art for anyone to consume and appreciate. AI art can win attention via spam, drowning out human artists.
This is literally what people said about photography.
And they were right, painting became less prolific as photography became available to the masses. People generally don’t get their portrait painted.
But people also generally don’t go to photo studios to have their picture taken, either, and those used to be in every shopping mall. But now we all have camera phones that adjust lighting and color and focus for us, and we can send a sufficiently decent picture off to be printed and mailed back to us. For those who want it done professionally that option is available and will be higher quality, just like portrait painting is still available, but technology has shrunk those client pools.
Technology always changes job markets. Generative AI will, just as others have done. People will lose careers they thought were stable, and it will be awful, but this isn’t anything unique to generative AI.
The only constant is that things change.
Or a PS5. The comparisons between PS5 and PS5 Pro are nearly impossible to notice.