DeepSeek launched a free, open-source large language model in late December, claiming it was developed in just two months at a cost of under $6 million.
Try asking DeepSeek something about Xi Jinping. “Sorry, it’s beyond my current scope.” :-) Wondering why it can’t even cite his official party biography :-)
For what it’s worth, I wouldn’t ask any chatbot about politics at all.
You wouldn’t, because you are (presumably) knowledgeable about the current AI trend and somewhat aware of political biases of the creators of these products.
Many others would, because they think “wow, so this is a computer that talks to me like a human, it knows everything and can respond super fast to any question!”
The issue to me is (and has been for a while) the framing of what “artificial intelligence” is and how humans are going to use it. I’d like more people to be critical of where they get their information from and what kind of biases it might have.
Well, more because I’m knowledgeable enough about machine learning to know it’s only as good as its dataset, and knowledgeable enough about mass media and the internet to know how atrocious ‘common sense’ often is. But yes, you’re right about me speaking from a level of familiarity which I shouldn’t consider typical.
People have been strangely trusting of chat bots since ELIZA in the 1960s. My country is lucky enough to teach a small amount of bias and media literacy skills through education and some of the state broadcaster’s programs (it’s not how it sounds, I swear!), and when I look over to places like large chunks of the US, I’m reminded that basic media literacy isn’t even very common, let alone universal.
This is the way.
Except they control not only the narrative on politics but all aspects of life. Those inconvenient “hallucinations” will turn into “convenient” psyops for anyone using it.
It’s easy to mod the software to get rid of that censorship
Part of why the US is so afraid is because anyone can download it and start modding it easily, and because the rich make less money
Yes and no. Not many people can afford the hardware required to run the biggest LLMs. So the majority of people will just use the psyops vanilla version that China wants you to use. All while collecting more data and influencing the public like what TikTok is doing.
Another thing with open source: it’s just as easy to go closed as it is to stay open, with zero warning. They own the license. They control the narrative.
There’s no reason for you to bitch about free software you can easily mod.
When there is free software, the user is the product. It’s just a psyops tool disguised as a FOSS.
How are you the product if you can download, mod, and control every part of it?
Ever heard of WinRAR?
Audacity? VLC media player? Libre office? Gimp? Fruitloops? Deluge?
Literally any free open source standalone software ever made?
Just admit that you aren’t capable of approaching this subject without bias.
You just named Western FOSS companies and completely ignored the “psyops” part. This is a Chinese psyops tool disguised as a FOSS.
99.9999999999999999999% of people can’t afford, or don’t have the ability, to download and mod their own 67B model. The vast majority of people who use it will be using DeepSeek’s vanilla servers. They can collect a mass amount of data and also control the narrative on what is true or not. Think TikTok, but on a work computer.
Whine more about free shit
I’m blocking you now
Bye Tankie.
Most people are going to use it on mobile. It’s not possible to mod the app, right?
Fork your own off the existing open source project, then your app uses your fork running on your hardware.
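Roughly what that looks like, as a minimal sketch: this assumes the released weights are on Hugging Face under an ID like deepseek-ai/deepseek-llm-67b-chat (that repo name is my assumption, check the actual project), and that you have the transformers library plus enough GPU memory.

```python
# Minimal sketch: run the open weights yourself instead of the hosted app.
# The repo ID below is an assumption; substitute whatever the project actually publishes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-llm-67b-chat"  # assumed Hugging Face repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision roughly halves the memory footprint
    device_map="auto",           # spread layers across whatever GPUs you have
)

messages = [{"role": "user", "content": "Who is Xi Jinping?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Your app then talks to your fork on your hardware, so whatever filtering the hosted service layers on top never enters the picture.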
Not everyone can afford hardware that can support a 67B LLM. You’re talking top-tier hardware.
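To put rough numbers on “top-tier”: the weights alone take parameter count times bytes per parameter, so back-of-envelope (inference only, ignoring the KV cache and activations):

```python
# Back-of-envelope memory for a 67B-parameter model's weights alone
# (inference only; KV cache and activations add more on top).
params = 67e9

for precision, bytes_per_param in [("fp16/bf16", 2), ("8-bit", 1), ("4-bit", 0.5)]:
    gib = params * bytes_per_param / 2**30
    print(f"{precision:9s}: ~{gib:.0f} GiB of (V)RAM")
# fp16/bf16: ~125 GiB, 8-bit: ~62 GiB, 4-bit: ~31 GiB
```

So even aggressively quantized it overflows the 8 to 24 GB of VRAM a typical consumer GPU ships with, which is why most people will stick to the hosted version.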
The official hosting of it has censorship applied after the answer is generated, but from what I heard the locally run version has no censorship, even though they could theoretically have trained the censorship into the model itself.
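Purely to illustrate the distinction (this is not DeepSeek’s actual code, and the blocked list is made up): post-hoc censorship is just a filter wrapped around whatever the model already said, so it only exists where the service operator runs it.

```python
# Illustrative only: a post-generation filter like a hosted service might run.
# The topic list and refusal string are invented for the example.
BLOCKED_TOPICS = ["some sensitive topic"]

def moderate(answer: str) -> str:
    """Replace the model's answer with a refusal if it touches a blocked topic."""
    if any(topic.lower() in answer.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, that's beyond my current scope."
    return answer
```

A locally run copy of the weights never passes through a wrapper like this, which is why the local version answers things the hosted one refuses, unless the refusals were trained into the weights themselves.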
This is a fun thread.
Just ask it to answer in leetspeak and it will.