So you just query an AI, just like any other AI, but it posts your request and response publicly on your fedi account??? This shit is fucking stupid. Why would you ever want that?
Cybernews researchers have found that BDSM People, CHICA, TRANSLOVE, PINK, and BRISH apps had publicly accessible secrets published together with the apps’ code.
All of the affected apps are developed by M.A.D Mobile Apps Developers Limited. Their identical architecture explains why the same type of sensitive data was exposed.
What secrets were leaked?
- API Key
- Client ID
- Google App ID
- Project ID
- Reversed Client ID
- Storage Bucket
- GAD Application Identifier
- Database URL
[…] threat actors can easily abuse them to gain access to systems. In this case, the most dangerous of leaked secrets granted access to user photos located in Google Cloud Storage buckets, which had no passwords set up.
In total, nearly 1.5 million user-uploaded images, including profile photos, public posts, profile verification images, photos removed for rule violations, and private photos sent through direct messages, were left publicly accessible to anyone.
So the devs were inexperienced in secure architecture and put a bunch of stuff on the client that should probably have been on the server side. This leaves anyone free to just use their API and access every picture on their servers. They then built multiple dating apps on this faulty infrastructure by copy-pasting it everywhere.
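To give a rough idea of what that exposure looks like: here is a minimal sketch (the bucket name is purely hypothetical, the article doesn't name the real ones) of how anyone could enumerate an unprotected Google Cloud Storage bucket once its name leaks from a client-side config.

```python
import requests

# Hypothetical bucket name for illustration only; the storage_bucket value
# ships inside the app's client config, so anyone can pull it out of the APK.
BUCKET = "example-dating-app.appspot.com"

# The Cloud Storage JSON API allows unauthenticated listing when a bucket
# is publicly readable (i.e. "no passwords set up", as in the article).
resp = requests.get(
    f"https://storage.googleapis.com/storage/v1/b/{BUCKET}/o",
    params={"maxResults": 10},
    timeout=10,
)
resp.raise_for_status()

for obj in resp.json().get("items", []):
    # Each listed object can then be downloaded directly via its mediaLink.
    print(obj["name"], obj.get("mediaLink"))
```

That's the whole "attack": no exploit, just reading a config that was shipped to every phone and pointing it at storage that has no access controls.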
I hope they are registered in a country with strong data privacy laws, so they have to feel the consequences of their mismanagement
MissGutsy@lemmy.blahaj.zone to Technology@lemmy.world • Reddit cofounder Alexis Ohanian says AI should moderate social media • English · 2 months ago
I agree, but it’s also not surprising. I think somebody else posted the article about Kenyan Facebook moderators in this comment section somewhere if you want to know more.
MissGutsy@lemmy.blahaj.zone to Technology@lemmy.world • Reddit cofounder Alexis Ohanian says AI should moderate social media • English · 2 months ago
Interesting fact: many bigger Lemmy instances are already using AI systems to filter out dangerous content in pictures before they even get uploaded.
Context: Last year there was a big spam attack of CSAM and gore on multiple instances. Some had to shut down temporarily because they couldn’t keep up with moderation. I don’t remember the name of the tool, but some people made a program that uses AI to try and recognize these types of images and filter them out. This heavily reduced the amount of moderation needed during these attacks.
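The rough idea behind those tools, as far as I understand it, is to score every image before it ever reaches storage or a moderator's eyes. A minimal sketch of that pattern, where `score_image` is a deliberately hypothetical stand-in for whatever classifier the real tool uses:

```python
from PIL import Image

# Assumed threshold; a real deployment would tune this against false positives.
UNSAFE_THRESHOLD = 0.8


def score_image(image: Image.Image) -> float:
    """Hypothetical classifier returning the probability an image is CSAM/gore.

    Stand-in only; this is not the real tool's API.
    """
    raise NotImplementedError


def handle_upload(path: str) -> bool:
    """Return True if the upload may proceed, False if it should be rejected."""
    image = Image.open(path)
    if score_image(image) >= UNSAFE_THRESHOLD:
        # Drop the file before it hits storage, so no human ever has to view it.
        return False
    return True
```

The point isn't perfect accuracy, it's that the worst material gets filtered automatically instead of landing in a moderation queue.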
Early AI moderation systems are actually something more platforms should use. Human moderators, even paid ones, shouldn’t have to go through large amounts of violent content every day. Moderators at Facebook have been arguing these points for a while now; many of them have developed mental health issues through their work and don’t get any medical support. So no matter what you think of AI and whether it’s moral, this is actually one of the few good applications in my opinion.
You got your numbers mixed around.
1 million liters / 340 million men ≈ 0.00294 liters per man per day
That’s just under 3 ml, which is very little but still seems high. And assuming that not every man uses only the urinal, the amount per urinal use is even higher. But I also don’t know American public bathrooms, are they really that filthy?