Because most people are too lazy to bother with making sure the results are accurate when they sound plausible. They want to believe the hype, and lack critical thinking.
I don’t want to believe any hype! I just want to be able to ask “hey ChatGPT, I’m looking for a YouTube video by Technology Connections where he discusses dryer heat pumps” and not have it spit out “it’s called ‘The neat ways your dryer heat pumps save energy!’”
And it is not, that video doesn’t exist. And it’s even harder to disprove at first glance because the LLM is mimicking what Alex would have called the video. So you look and look with your sister’s very inefficient PS4 controller-to-YouTube interface… and finally ask it again and it shy flowers you…
But I swear he talked about it?!?! Anyone?!?
He hasn’t
I think in a recent video he mentioned he will soon, but he hasn’t done a video with even a segment on heat pumps in dryers yet
Fairly confident in this, recently finished a rewatch of basically all his content
Damn it… I was sure he mentioned them briefly in one of his heat pump videos, but I trust you over ChatGPT…
He should do a video! I am constantly enchanted by his heat pump explainers… I don’t know why but it’s one of those concepts that’s just a bit out of my wheelhouse. So I always “knew” how it worked. But the lightbulb moment. The aha! Pure crack.
This sounds awfully familiar, like almost exactly what people were saying about Wikipedia 20 years ago…
Those people were wrong because Wikipedia requires actual citations from credible sources, not comedic subreddits and InfoWars. Wikipedia is also completely open about the information being summarized, both in who is presenting it and where someone can confirm it is accurate.
AI is presented to the user as a black box, and is portrayed as equivalent to a human with terms like ‘hallucinations’, which really means ‘it’s wrong a bunch, lol’.
Pretty weak analogy. Wikipedia was technologically trivial and did a really good job of avoiding vested interests. Also the hype is orders of magnitude different; no one ever claimed Wikipedia was going to lead to superhuman intelligences or to the replacement of swathes of human creative/service workers.
Actually, since you mention it, my hot take is that Wikipedia might have been a more significant step forward in AI than OpenAI/latest-generation LLMs. The creation of that corpus is hugely valuable for training and benchmarking models of natural language. It also actually disrupted an industry (conventional encyclopedias) in a way that LLMs haven’t; I’m struggling to think of anything LLMs have replaced in the same way thus far.