• 0 Posts
  • 15 Comments
Joined 1 year ago
Cake day: August 14th, 2023

  • My sister used to be the head of the environmental department of one of the suburbs of Minneapolis. She’s met Tim Walz a few times, and has had nothing but good things to say about both his personality and his attitude toward the environment. The US has pretty slim pickings for good politicians these days, but I really feel that Tim is on that list. I’m feeling hopeful about American politics for the first time since I learned about how American politics works!




  • Ah, I see. It’s true that these issues cast a negative light on AI, but I doubt most people will ever hear about most of them, or really understand them if they do. Even when it comes to protecting their brand, there’s little incentive for these companies to actually address the issues - the AI train is already full-steam ahead.

    I work with construction plans in my job, and just a few weeks ago I had to talk the CEO of the company I work for out of spending thousands on a program that “adds AI to blueprints.” It literally just bolted a ChatGPT interface onto a PDF viewer; the chat couldn’t even interact with the PDF in any way. He was enthralled by the “demo” a rep had shown him at an expo, which I’m sure was staged to make it look far more useful than it really was. After that whole fiasco, I lost faith that the people who decide whether AI programs get adopted will do their due diligence to make sure those programs are actually helpful.

    Having a good brand image only matters if people are willing to look.


  • I highly doubt that OpenAI or any other AI developer would see any real repercussions, even if they had a security hole that someone managed to exploit to cause harm. Companies exist to make money, and OpenAI is no exception; if it’s more profitable to release a dangerous product than a safe one, and they won’t get in trouble for it, they’ll likely have no qualms about shipping it with security holes.

    Unfortunately, the question can’t be “should we be charging them for this?” Nobody is going to force them to pay, and they have no reason to do it on their own. Barring an entire cultural revolution, the question instead must be “should we do it anyway to prevent this from being used in harmful ways?” And the answer is yes. Our society is designed to maximize profits, usually for people who already have money, so if you’re working within the confines of that society, you need to factor that into your reasoning.

    Companies have long since decided that ethics is nothing more than a burden getting in the way of their profits, and you’ll have a hard time going against the will of the companies in a capitalist country.