ohhhhhhhhhhhhhh i get the push for this now
not just offloading responsibility for ‘downsizing’ and unpopular legal actions onto ‘AI’ and algorithms, fuck it lets make them the ones responsible for the crimes too. what are they going to do, arrest a computer?
I maintain it would have been funnier to train monkeys to trade stocks. They could go around in little suits and wear a fez
you can arrest monkeys though, so i see why they’ve done this
monkey prison labour. also I don’t think an animal can legally be responsible for anything
and just replace the monkey trading stocks
an economic miracle
hire this man!
i firmly hold we should arrest animals because its funny, but it’s usually illegal in modern jurisprudence
This is so asinine. ChaptGPT-4 does not reason. It does not decide. It does not provide instructtions. What it does is write text based on a prompt. That’s it. This headline is complete nonsense.
Maybe this is conspiracy-brained, but I am 99% sure that the way people like Hinto is talking about this technology being so scary and dangerous is marketing to drive up the hype.
There’s no way someone who worked with developing current AI doesn’t understand that what he’s talking about at the end of this article, AI capable of creating their own goals and basically independent thought, is so radically different from today’s probability-based algorithms that it holds absolutely zero relevance to something like ChatGPT.
Not that there aren’t ways current algorithm-based AI can cause problems, but those are much less marketable than it being the new, dangerous, sexy sci-fi tech.
This is the common consensus among AI critics. People who are heavily invested in so-called “AI” companies are also the ones who push this idea that it’s super dangerous, because it accomplishes two goals: a) it markets their product, b) it attracts investment into “AI” to solve the problems that other "AI"s create.
AI papers from most of the world: “We noticed a problem with this type of model, so we plugged in this formula here and now it has state-of-the-art performance. No, we don’t really know why or how it works.”
AI papers from western authors: “If you feed unfiltered data to this model, and ask it to help you do something bad, it will do something bad 😱😱😱”
It’s also a prime example of how stupid rich people are and how easy it is to do their job
So much of the job of investing is just figuring out who is lying. Inside trading gives you an edge precisely because the information is more accurate than what the public is provided.
Calling this a “study” is being a bit too generous. But there is something interesting in it, it seems to use two layers of “reasoning” or interaction (is this how gpt works anyway? Seems like a silly thing to have a chat bot inside a chat bot). The one exposed to the user and the “internal reasoning” behind that. I have a solution, just expose the internal layer to the user. It will tell you its going to do an insider trading in the most simple terms. I’ll take that UK government contract now, 50% off.
This is all equivalent to placing two mirrors facing each other and looking into one saying “don’t do insider trading wink wink” and being surprised at the outcome.
Just like real life
This seems like veiled hype marketing.
It sounds like it’s ahead of schedule in its investment banker studies. Has it already gotten a real gig working in finance?
Bro leaned from reddit/supers
This is some “the camera stole my soul” level of new tech hokum
if it’s not just a load of bullshit, it still isn’t impressive. “oh wow, we taught the AI John Nash’s game theory and it decided to be all ruthless and shit”
Theoretically, having the intelligence to be able to teach itself (in so many words) how to deceive someone to cover for a crime while also carrying out a crime would be pretty impressive imo. Like, actually learning John Nash’s game theory and an awareness of different agents in the actual world, when you are starting from being a LLM, would be pretty significant, wouldn’t it?
But it’s not, it’s just spitting out plausibly-formatted words.
Humans decide the same shit for the same reasons every day.
This isn’t an issue with AI. It is an issue of incentives and punishment (or lack thereof).
You’ve almost got it, you’re right in that it’s not an issue with AI, since as you’ve said, humans do the same shit every day.
The root problem is Capitalism. Sounds reductive, but that’s how you problem solve. You troubleshoot to find the root component issue, once you’ve fixed that you can perform your system retests and perform additional troubleshooting as needed. If this particular resistor burns out every time I replace it, perhaps my problem is further up the circuit in this power regulation area.
It is an issue with AI because it’s not supposed to do that. It is also telling that it decided to do this, based on its training and purpose.
AI is a wild landscape at the moment. There are ethical challenges and questions to ask/answer. Ignoring them because “muh AI” is ridiculous.
They practically told it to do insider trading though
Oh I absolutely agree, I’m just saying that AI has some flaws that also need addressed
What they did was have a learning model sitting on top of another learning model trained on insider data. This is just couching it in a layer of abstraction like how Realpage and YieldStar fix rental prices by abstracting price fixing through a centralized database and softball “recommendations” about what you should rent out a home/unit for.
“Oh no! My job is at risk of automation!”
By: RYAN HOGG
Archive link sire?
ty ty