I don’t think it will. People are treating it like artificial general intelligence and are trying to make it do tasks a purpose-built model would do much better at. But that takes more money, so companies are just trying to make ChatGPT do everything, and people aren’t using it because of the error rate and privacy concerns.
People are treating it like artificial general intelligence
Exactly. And people seem to think that we are close to AGI. We ain’t. AGI is many decades away.
Yes, it’s definitely been over-hyped (by some) in that regard.
That said, next-gen LLMs that greatly reduce the hallucination problem are on the horizon, and I would say they will be much more successful.
The medical AI the Microsoft employee talks about here (and other people’s versions of it) is almost sure to be a huge global success.