I’d also add that, IMO, this is heavily caused by the misalignment of social network personalization algorithms. It’s very probable that someone at FB/YT/Google developed an ML system in the early years (not an LLM, just some kind of feedback-driven ML) that takes the data they have about you as input and selects which posts to show you next so as to maximize the time you spend scrolling in the app.
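The feedback loop I’m describing is, at its simplest, a multi-armed bandit. This is a hypothetical sketch (the class, names, and categories are all my own illustration, not anything from an actual platform): an epsilon-greedy bandit that picks the next content category for a user, with time-on-app as the only reward signal.

```python
import random
from collections import defaultdict

class EngagementBandit:
    """Toy model of an engagement-maximizing recommender.

    Epsilon-greedy: mostly show whatever has kept this user
    watching longest so far, occasionally explore something new.
    """

    def __init__(self, categories, epsilon=0.1):
        self.categories = categories
        self.epsilon = epsilon
        self.total_time = defaultdict(float)  # seconds watched per category
        self.impressions = defaultdict(int)   # times each category was shown

    def avg_time(self, category):
        shown = self.impressions[category]
        return self.total_time[category] / shown if shown else 0.0

    def next_content(self):
        # Explore with probability epsilon, otherwise exploit the
        # category with the best average dwell time for this user.
        if random.random() < self.epsilon:
            return random.choice(self.categories)
        return max(self.categories, key=self.avg_time)

    def record(self, category, seconds_watched):
        # The only feedback the system gets: how long you stayed.
        # Nothing here models truth, well-being, or anything else.
        self.impressions[category] += 1
        self.total_time[category] += seconds_watched
```

Note that nothing in the reward is about accuracy or your well-being; if some category of content happens to hold a given user longest, the loop drifts toward it automatically, with no one having programmed that outcome explicitly. Real systems are vastly more complex, but the incentive structure is the same.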
They have an unimaginable amount of data (literally billions of active users), and it could have been running and getting better for the last decade.
The algorithm gets better and better at gluing you to the screen, at manipulating and changing people. My theory is that one of the best ways to keep someone glued to a social network is radicalization and introduction into a conspiracy theory. It cuts you off from the “normal” people around you IRL, because you’re now weird; you feel smart because you’ve “figured out the truth”; you stop spending time with the people around you or reading “traditional” media, because they’re lying and don’t get you; and the only safe space you have left is the echo chamber on the social network. That sounds like a pretty good recipe for keeping people interacting on the platform, and there’s no real way to prevent it, assuming an ML algorithm is driving it. No one knows how it works internally, and it optimizes for a single goal: maximize app time at all costs.
Just look at how good some ML models are at the task “text -> image”. Now imagine a model with billions of people and a decade to experiment on, given the task “person -> next content to show”. It’s horrifying to think about what it could manipulate you into, and it gets to be even better at its task than the image models, because it has had exponentially more data and the freedom to experiment in real time on real people.
Also - there’s no way to fight back. Even if you know about it, there are tens of thousands of people like you who are also “immune” to this approach. But the ML algorithm gets to experiment on them too, and if there’s a way to manipulate even them, it will find it, precisely because it knows which approaches don’t work on people like you. The only real defense is to avoid anything with a personalized feed - no Google search, no FB wall, no YT recommendations, etc. To be fair, radicalization is probably only a side effect here, because the algorithm’s goal is to keep you in the app, not to radicalize you. For now, at least. Thankfully, the people managing the biggest social networks are reasonable people who are just running a business, and they’d have no reason to ever change the algorithm’s goal into something other than screen time, right?