The AI boom is screwing over Gen Z | ChatGPT is commandeering the mundane tasks that young employees have relied on to advance their careers.

ChatGPT is commandeering the tasks that young employees rely on to advance their careers. That’s going to crush Gen Z’s career path.
The fucked up part isn’t that AI work is replacing human work, it’s that we’re at a place as a society where this is a problem.
More automation and fewer humans working should be a good thing, not something to fear.
But that would require some mechanism for redistributing wealth and taking care of those who choose not to work, and everyone knows that’s communism.
So much this. The way headlines like this frame the situation is so ass-backwards it makes my brain hurt. In any sane world, we’d be celebrating the automation of mundane tasks as freeing up time and resources to improve our health, happiness, and quality of life instead of wringing our hands about lost livelihoods.
The correct framing is that the money and profits generated by those mundane tasks are still realized; it’s just that they are no longer going to workers, but funneled straight to the top. People need to get mad as hell not at the tech, but at those who are leveraging that tech specifically to deny them opportunity rather than improve their lives.
I need a beer. 😐
money and profits generated by those mundane tasks are still realized, it’s just that they are no longer going to workers, but funneled straight to the top
Workers should be paid royalties for their contributions. If “the top” is able to reap the rewards indefinitely, so should the folks who built the systems.
I think you misspelled “taxes,” but it’s possible your spelling will turn out to be more accurate.
Well… the difference is the former has a history of actually working.
deleted by creator
some sort of A Better World?
Exactly. This has nothing to do with AI and everything to do with UBI.
But, the rich and plebes alike will push AI as the Boogeyman as a distraction from the real enemy.
There’s this bizarre right-wing idea that if everyone can afford basic necessities, they won’t do anything. To which I say, so what? If you want to live in shitty government housing and survive off of food assistance but not do anything all day, fine. Who cares? Plenty of other people want a higher standard of living than that and will have a job to do so. We just won’t have people starving in the street and dying of easily fixable health problems.
We also have to be careful about how people define this sort of thing, and about how the sheer scale of our current wealth inequality would affect how something like UBI gets implemented.
In the rich’s eyes, UBI is already a thing and it’s called “welfare”. As if it weren’t enough that people on welfare can barely survive on the poverty-level pittance that the government provides, both the rich and the slightly-more-well-off put these people down as “mooching off the system” and “stealing from the government”, pushing for even more Draconian laws that punish their situation even further. It is a caste of people who are portrayed as even lower scum than “the poors”, right down to segregating where they live to “Section 8” housing as a form of control.
UBI is not about re-creating welfare. It’s about providing a comfortable safety net while reducing the obscene wealth gap, as technology drives unemployment even higher. Without careful vigilance, the rich and powerful will use this as another wedge issue to create another class of people to hate (their favorite pastime), and push for driving the program down just as hard as they do for welfare.
The differences between UBI and “welfare” are perhaps subtle but very important IMO.
In Australia there’s an entire industry around punishing and humiliating people that need welfare. It’s just absurd and unnecessary. UBI avoids any of that by just making the entitlement universal.
We have “job network providers” which IMO do not provide any value to anyone. Suppose in a particular region there are 4,000 unemployed people and this particular week there are 400 new jobs. To receive welfare you need to be working with a job network provider to find a job. However, those job network providers aren’t creating any jobs. One way or another 400 people will probably get a new job this week. They might help a particular person tidy up their resume or whatever but they’re not actually finding jobs for people. Their only purpose is to make receiving welfare a chore, it’s absurd.
There’s also people stuck in the welfare trap. As in, if I don’t work at all I get $w welfare, but for every $1 I earn I lose $0.50 from $w, so why would I work a shitkicker job flipping burgers for effectively half the pay.
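A toy sketch of that arithmetic (the $400/week benefit, $10/hr wage and 40-hour week are made-up numbers; only the 50% taper comes from the comment above):

```python
# Toy illustration of the welfare trap described above.
# Hypothetical numbers: $400/week benefit, $10/hr wage, 40-hour week.
# The 50% taper (lose $0.50 of benefit per $1 earned) is the rate from the comment.
def weekly_take_home(earnings: float, benefit: float = 400.0, taper: float = 0.50) -> float:
    """Earnings plus whatever benefit survives the taper."""
    remaining_benefit = max(benefit - taper * earnings, 0.0)
    return earnings + remaining_benefit

print(weekly_take_home(0.0))    # 400.0 -> no work at all, full benefit
print(weekly_take_home(400.0))  # 600.0 -> 40 hrs at $10/hr adds only $200, i.e. ~$5/hr effective
```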
Slightly different systems, but in the US, welfare is a lot like that as well, especially punishing people by removing welfare or food stamps when they make X dollars.
The welfare trap is a feature of all means-tested social security systems.
Yeah, modern welfare isn’t remotely enough to match the spirit of UBI. It’s structured so that you have to have a job. It’s not enough to live on at all. And bizarrely, there are some jobs that would actually pay worse than welfare because the minimum wage is so crazy low in many parts of the US.
And even if you’re on disability, you’re gonna have a hard time. It pays barely enough to maybe scrape by if you cut every possible corner.
No form of welfare is close to being livable for the typical recipient. At best, they usually give you some spending cash while you live with friends or family. Maybe if you’re really lucky you can find that rare, rare subsidized housing and manage to just barely make ends meet.
By comparison, most proponents of UBI want it to be livable. Nothing glamorous, admittedly, but enough to live a modest life. Enough that if there are no jobs available that you qualify for (or none that will pay a living wage, at least), you’ll be okay.
Removed by mod
Where are you that 7 days a week 12 hour days is full time? That’s literally just always working. Standard full time in the states is 40 hour work weeks.
The past. You should probably read their comment again.
But eventually
There’s no eventually, people have been killed, murdered and harassed whilst fighting to make it a reality. Someone has to fight to make it happen and an “eventually” diminishes the value of the effort and risks put forth by labor activists all over the world throughout history. It didn’t happen magically, people worked really hard to make it so.
Removed by mod
There are both dystopian (a tiny Elite owns the automatons and gets all the gains from their work while a massive unemployed Underclass barely survives) and utopian (the machines do almost everything for everybody) outcomes for automation, and we’re firmly on the path to Dystopia.
But how will the rich people afford more submarines to commit suicide in?
Wait you expect a wealthy mammal to share?
The problem, as it almost always is, is greed. Those at the top are trying to keep for themselves the value derived from the additional efficiency that AI is going to bring.
This was exactly the problem that Charles Murray pointed out in The Bell Curve. We’re rapidly increasing the complexity of the available jobs (and the successful people can output 1,000–1,000,000 times more than simple labor in the world of computers). It’s the same concept as the industrial revolution, but to a greater degree.
The problem is that we’re taking away the vast majority of the simple jobs. Even working at a fast food place isn’t simple.
That alienates a good chunk of the population from being able to perform useful work.
That book is shit and should not be cited in any serious discussion. Here’s a good video explaining why the book is full of racist shit: https://youtu.be/UBc7qBS1Ujo
Here is an alternative Piped link(s): https://piped.video/UBc7qBS1Ujo
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I’m open-source, check me out at GitHub.
Good bot!
If it were full of shit, then you wouldn’t be discussing the exact problem he pointed out in this book.
There is some racist discussion in there, but that’s secondary and doesn’t detract from or undermine his main point about what increasingly complex labor does to a society.
deleted by creator
Precisely, the hill to die on is to socialize the profits, not to demand we keep the shitty, injuring, repetitive task jobs that break a person’s back by 35.
You don’t protest street lights to keep the lamp lighters employed. The economy needs to change fundamentally to accommodate the fact that many citizens won’t have jobs yet need income. It won’t change, but it needs to.
So we’ll keep blaming the wrong thing, technology that eases the labor burden on humanity, instead of destroying the wealth class that demands they be the sole beneficiary of said technology and its implementation in perpetuity to the detriment of almost everyone outside the owner class. Because if we did that, we’d be filthy dirty marxist socialist commies that hate freedumb, amirite?!
Bullshit. Learn how to train new hires to do useful work instead of mundane bloat.
100%. If an AI can do the job just as well (or better), then there’s no reason we should be making a person do it.
Part of the problem with AI is that it requires significant skill to understand where AI goes wrong.
As a basic example, get a language model like ChatGPT to edit writing. It can go very wrong, removing the wrong words, changing the tone, and making mistakes that an unlearned person does not understand. I’ve had foreign students use AI to write letters or responses and often the tone is all off. That’s one thing but the student doesn’t understand that they’ve written a weird letter. Same goes with grammar checking.
This sets up a dangerous scenario where, to diagnose the results, you need to already have a deep understanding. This is in contrast to non-AI language checkers that are simpler to understand.
Moreover as you can imagine the danger is that the people who are making decisions about hiring and restructuring may not understand this issue.
The good news is this means many of the jobs AI is “taking” will probably come back when people realize it isn’t actually as good as the hype implied
Not quite. It’s more that a job that once had 5-10 people and perhaps an “expert” supervisor will just be whittled down to the expert. Similarly, factories used to employ hundreds and a handful of supervisors to produce a widget. Now, they can employ a couple of supervisors and a handful of robot technicians to produce more widgets.
The problem is, where do those experts come from? Expertise is earned through experience, and if all the entry-level jobs go away then eventually you’ll run out of experts.
Education. If education was free this wouldn’t be a problem, you could take a few more years at university to gain that experience instead of working in a junior role.
This is the problem with capitalism, if you take too much without giving back, eventually there’s nothing left to take.
You don’t get experts from education. You get experts from job experience (after education).
It’s just that I fear that realisation may not filter down.
You honestly see it a lot in industry. Companies pay $$$ for things that don’t really produce results. Or what they consider to be “results” changes. There are plenty of examples of lowering standards and lowering quality in virtually every industry. The idea that people will realise the trap of AI and reverse course is not something I’m optimistic about.
In many ways AI is like pseudoscience: it’s a black box. Things like machine learning don’t tell you “why” something works. ChatGPT is just statistical pattern-matching over a huge pile of text.
So the claim that “good science” prevails is patently false. We live in the era of progressive scientific education and yet everywhere we go there is distrust in science, scientific method, critical thinking, etc.
Do people really think that the average Joe is going to “wake up” to the limitations of AI? I fear not.
deleted by creator
And AI is not always the best solution. One of my tasks at my job is to respond to website reviews. There is a button I can push that will generate an AI response. I’ve tested it. It works… but it’s not personal. My responses directly address things reviewers say, especially if they have issues. The AI’s responses are things like, “thanks for your five-star review! We really appreciate it, blah blah blah.” Like a full paragraph of boilerplate bullshit that never feels like the review is addressed.
You would think responding to reviews properly would be a very basic function an AI could do as well as a human, but at present, no way.
This assumes that your company doesn’t decide the AI responses are good enough in exchange for the cost savings of removing a person from the role, and that they don’t improve in a subsequent update.
True, although my company emphasizes human contact with customers. We really go out of our way with tech support and such. That said, I hate responding to reviews. I kind of wish it was good enough to just press the ‘respond to review with AI’ button.
This.
In accounting, 10 years ago, a huge part of the job was categorising bank transactions according to their description.
Now AI can kinda do it, but even providers that would have many billions of transactions to use as training data have a very high error rate.
It’s very difficult for a junior to look at the output and identify which ones are likely to be incorrect.
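A minimal sketch of the kind of setup being described, not any real provider’s system (the sample descriptions, categories and the 0.90 threshold are all invented): train a simple text classifier on labelled transaction descriptions, then flag low-confidence predictions so the reviewer knows which rows to double-check.

```python
# Hypothetical sketch: categorise bank transactions by description and flag
# low-confidence predictions for human review rather than trusting the model outright.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up labelled history (a real provider would have millions of rows).
train_descriptions = ["UBER TRIP 1234", "WOOLWORTHS METRO", "AWS EMEA", "SHELL COLES EXPRESS"]
train_categories   = ["Travel", "Groceries", "Software", "Fuel"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-grams cope with messy bank text
    LogisticRegression(max_iter=1000),
)
model.fit(train_descriptions, train_categories)

REVIEW_THRESHOLD = 0.90  # below this confidence, a human should check the category

for desc in ["UBER EATS 9876", "BP SERVICE STATION"]:
    probs = model.predict_proba([desc])[0]
    best = probs.argmax()
    flag = " (needs review)" if probs[best] < REVIEW_THRESHOLD else ""
    print(f"{desc} -> {model.classes_[best]} ({probs[best]:.2f}){flag}")
```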
Exactly!
They don’t want to train new hires to begin with. A lot of work that new hires relied on to get a foothold on a job is bloat and chores that nobody wants to do. Because they aren’t trusted to take on more responsibility than that yet.
Arguably whole industries exist around work that isn’t strictly necessary. Does anyone feel like telemarketing is work that is truly necessary for society? But it provides employment to a lot of people. There’s much that will need to change for us to dismiss these roles entirely, but people need to eat every day.
The “not willing to train” thing is one of the biggest problems IMO. But also not a new one. It’s rampant in my field of software dev.
Most people coming out of university aren’t very qualified. Most have no understanding of how to actually program real-world software, because they’ve only ever done university classes where their environments are usually nice and easy (possibly already set up), projects are super tiny, they can actually read all the code in the project (you cannot do that in real projects – there’s far too much code), and usually problems are kept minimal with no red herrings, unclear legacy code, etc.
Needless to say, most new grads just aren’t that good at programming in a real project. Everyone in the field knows this. As a result, many companies don’t hire new grads. Their advertised “entry level” position is actually more of a mid level position because they don’t want to deal with this painful training period (which takes a lot of their senior devs time!). But it ends up making the field painful to enter. Reddit would constantly have threads from people lamenting that the field must be dying and every time it’s some new grad or junior. IMO it’s because they face this extra barrier. By comparison, senior devs will get daily emails from recruiters asking if they want a job.
It’s very unsustainable.
Indeed: at least in knowledge-based industries, everybody starts by working at a level of responsibility where the natural mistakes of someone who is still learning have limited impact.
One of my interns read the wrong voltage and it took me ten minutes to find his mistake. Ten minutes with me and multiple other senior engineers standing around.
I congratulated him, and damn it, I meant it. This was the best possible mistake for him to make. Everyone saw him do it, he gets to know he held everything up, and he has to just own it and move on.
Exactly this.
I fully agree, however doing some mundane work for a few weeks while you learn is useful. You can’t just jump straight into the deep work.
The problem is really going to be in the number of jobs that are left with 40hrs of work to do.
Lol it’s not ChatGPT screwing over Gen Z. It’s the rich business owners who care more about profits than people.
Let’s play devil’s advocate: if AI is capable of doing a job for a fraction of the cost, faster, with no mistakes, no “moods”, no sick days, then why would they hire a person? I honestly see no reason for them to do so and that concerns me.
It was already happening in things like software development with outsourcing: all the entry-level stuff was sent away to be done by people who cost a fraction of what even a Junior Dev would cost in the West, and that’s exactly the stuff that one starts one’s career with.
As someone who lives in the east where these jobs are outsourced to, it’s not like junior devs here get to work on them either. Most outsourced stuff is assigned to people higher up. The talented juniors are left sitting on the bench as retainer manpower, others are in an endless string of unpaid internships.
The job situation is more similar than you think all over the world.
Can you explain what you mean by retainer manpower? I’ve never worked anywhere where there was an extra person. Usually a job that requires 20 people would be set up for 20, 3 would leave the company, one would go out on disability and you have 16 doing the job of 20. They make a new middle management role with little to no raise but a sense of pride that you are now in charge and they stick that person with ensuring the 16 people don’t fall behind. Which really means you now have 15 workers, and 1 person stuck in meetings all day explaining why we are barely keeping our heads above water.
Companies hire junior devs (and other cheap labour) as “reserves”, on the off chance that they get more projects (sometimes this is negotiated as a bonded contract, which you can’t break for 3-5 years, but I hear that abusive practice is dying slowly).
They are paid abysmally low salaries, but you’re not allowed to work, or find work elsewhere, while you’re on this type of contract. If a project comes and you’re needed, you’re put on a regular contract that is comparatively not as low-paying.
All the factors you mentioned are still at play, and these people are almost never put on existing projects, so you end up with fewer people doing more work on those projects while more people just sit around doing nothing, waiting for new projects.
This type of environment is extremely negative and depressing to be in, and it promotes a lot of office politics to get yourself off that list and into a better salary etc.
In an ideal world, people would start receiving better and more fulfilling opportunities when their mundane tasks are automated away. But that’s way too optimistic and the world is way too cynical. What actually happens is they get shitcanned while the capitalists hoard the profits.
We need a better system. One that, instead of relentlessly churning for the impossibility of infinite growth and funneling wealth upwards, prioritizes personal financial stability and enforces economic equilibrium.
We need to start instituting universal basic income to compensate for the job losses. It’s inevitable. We have to protect the person, not the jobs.
So what would that mean for the company itself long-term? If they’re not training up their employees, and most of the entry level is replaced by text generator work, there would be a hole as executives and managers move out of the company.
It seems like it would be a recipe for the company to implode after a few years/decades, assuming that the managerial/executive positions aren’t replaced also.
What are these decades? Is that something longer than next quarter?
we are going to hold the month open a few more days
there would be a hole as executives and managers move out of the company.
And why would those executives and managers care about that? They just need to make sure they time their departures to be early enough that those holes don’t impact the share prices. Welcome to modern capitalism, where the C-suite’s only goal is to make sure they deploy their golden parachute while the company still has enough cash left over to pay them.
Yeah, it should be obvious by now, after 3 decades of 1980s-MBA-style corporate management (and a Financial Crash that happened exactly along those lines), that “the bonus comes now, the problems come after I’ve moved on” measures will always get a go-ahead from the suits on the top floor.
That sounds like someone else’s problem.
I think those in charge often don’t care. A lot of them don’t actually have any incentive for long-term performance. They just need short/medium-term stock performance and later they can sell. Heck, they’ll even get cash bonuses based solely on short-term performance. Many C-execs aren’t in it for the long haul. They’ll stay for maybe 5-10 years tops and then switch jobs, possibly when they see the writing on the wall.
Even the owners are often hoping to just survive until some bigger company buys their business.
And when the company does explode… they’ll just declare bankruptcy and later make a new company. The kinds of people who create companies rarely do it just once. They do it over and over, somehow managing to convince investors every time.
It’s a Tragedy Of The Commons situation: each market actor expects to get the benefits of automating away entry level jobs and expects it’s going to be somebody else who keeps on training people through their junior career years so that mid and senior level professionals are available in the job market.
Since most market actors have those expectations, and even those who don’t are pushed by market pressure to do the same (paying for junior positions makes them less competitive than those who automate that work away), the tragedy part will eventually ensue once that “field” has been overgrazed for long enough.
Why would you want to train people to do it wrong? If you had to train someone tomorrow would you show them the email client or give them a cart and have them deliver memos for a week?
Right now we have handed over some more basic tasks to machines. Train the next generation to take those tasks being automated as a given.
It’s not the tasks that matter, it’s the understanding of the basics, the implications of certain choices and the real life experience in things like “how long I thought it would take vs how long it actually took” that comes with doing certain things from start to end.
Some stuff can’t be learned theoretically, it has to be learnt as painful real life lessons.
So far there seems to be an ill-defined boundary between what AI can successfully do and what it can’t, and sadly you can’t really teach people starting past that point, because it’s not even a point: it’s an area where you already have to know enough to spot the AI-made stuff that won’t work, and the understanding there and beyond is built on a foundational understanding of how to use simpler building blocks and what their implications are.
We have this thing called school
You clearly never worked in an expert knowledge area.
In any sufficiently complex knowledge domain there are elements you can only ever learn from doing it for real, with real requirements, real users and real timeframes.
With my career spanning 4 countries, I have yet to see somebody straight out of uni who could just drop in and start working at mid-level, and that includes the truly gifted types who did that stuff at home for fun.
Engineer for 15 years but go ahead and try patronizing me again or you can read what I wrote and respond to it, not what you wish I wrote. Guess you didn’t learn what a strawman was. Maybe should have worked in 5 countries.
Amazing.
How many junior professionals have you hired (or at least interviewed as domain expert) and how many have you led in your career?!
I’ll refrain from pulling rank here (I could, but having lots of experience and professional seniority doesn’t mean I know everything and besides, let’s keep it serious), so I’m just wondering what kind of engineering area you work in (if it’s not too much to ask) and what in your career has led you to believe that formal education is capable of bridging any training gap that might form if the junior-professional stage disappears?
In my professional area, software development, all I’ve seen so far is that there are elements of experience which formal education won’t teach and my own experience with professional education (training courses) is that they provide you with knowledge, maybe a few techniques, but not professional insight on things like choosing which elements are best for which situation.
This is not to say that education has no value (in fact, I believe the opposite: even the seemingly “too theoretical to be useful” can very much turn out to be essential in solving something highly practical; for example, I’ve used immensely obscure knowledge of microprocessor architectures in the design of high-performance distributed software systems for investment banks, which was pretty unexpected when I learned that stuff in an EE degree). My point is that things such as “scoping a job”, “selecting the better tool for the job”, and even estimating the risk and acceptability of using certain practices for certain parts of a job aren’t taught at all in formal education, and I can’t really see the pathway in the Business Process of Education (the expression in a Requirements Analysis sense, rather than saying it’s all a business) that will result in both formalizing the teaching of such things and attracting those who can teach it knowledgeably.
Maybe the Education System can find a way of doing it, but we can hardly bet that it will, or that it will do so before any problems from an AI-induced junior-level training gap materialise (i.e. there won’t be any pressure for it before things are blowing up because of a lack of mid-level and above professionals, by which time there will be at least a decade of problems already in the pipeline).
I’ve actually mentored several junior and mid-level developers. Mainly I’ve made them aware of potential pitfalls they couldn’t see (often considerations outside the nitty-gritty details of programming that nonetheless had massive impact on what needed to be programmed) and of additional implications of certain choices they weren’t at all aware of, and pointed out the judgment flaws that led them to dead ends. But they still need to actually face real situations with real consequences to internalise, at an emotional level, the value of certain practices that at first sight seem counterproductive; otherwise they either don’t follow them unless forced to (and we need programmers, not code monkeys that need constant surveillance) or follow them as a mindless habit, hence also when not appropriate.
Maybe what you think of as “junior” is a code monkey, which is what I think of as “people who shouldn’t even be in the profession”, so you’re picturing the kind of teaching that’s the transmission of “do it like this” recipes that a typical code monkey nowadays finds via Google. I’m picturing developers to whom you can say “here’s a small problem that’s part of a big thing, come up with a way to solve it”, which is a set of practices that’s way harder to teach even in the practical classes of an educational environment, because it’s a synthetic environment where projects have simulated needs and the consequences of one’s mistakes are way lower.
PS: Mind you, you did get me thinking about how we could teach this stuff in a formal educational context, but I really don’t have an answer for that, as even one-to-one mentoring is limited if you’re not dealing with real projects, with real-world users (and their real-world needs and demands) and implications, and real lifecycles (which are measured in years, not “one semester”). I mean, you can have learning placements in real companies, but that’s just working at a junior level with a different job title and without being paid a salary.
And?
If it makes you feel better, the alternative can be much worse:
People are promoted to their level of incompetence. Sure she is a terrible manager but she was the best at sales and is most senior. Let’s have her check to make sure everyone filled out expense reports instead of selling.
You don’t get the knowledge sharing that comes from people moving around. The rival spent a decade of painful trial and error to settle on a new approach, but you have no idea so you are going to reinvent this wheel.
People who would do well on open-ended creative tasks never get the chance, because they failed to rise above repetitive procedural tasks. Getting started in the mailroom sounds romantic, but maybe it’s not the best place to learn tax law.
The tech and corporate and general operational knowledge drifts further and further away from the rest of the industry. Eventually everyone is on ancient IT systems that sap (yes, pun intended) efficiency. Parts and software break that are hard to replace. And eventually the very systems that were meant to make things easier become burdens.
For us humans there really is no alternative to work and thinking is the hardest work of all. You need to consistently reevaluate what the situation calls for and any kinda rigid rule system of promotion and internal training won’t perform as well.
Bro, service industry jobs and similar are booming. Train under a plumber, electrician or gas fitter and you will be set for years.
Go where the future is…HVAC. Soon everyone is going to need AC just to survive.
Which will accelerate the destruction of the planet. Yay!
Whether you have ac or not, the planet is set on course to be destroyed unless big oil countries suddenly find their kindness to all mankind and stop drilling oil. Which is impossible.
Removed by mod
I’m aware of that, hence why I said accelerate
Maybe. But if they stop drilling for petroleum, I wonder where the electricity to make the solar panels and wind turbines will come from. Oh, and the polymers and plastics used to make those things. While we use the available electricity for charging our electric cars.
What’s the alternative? Just die?
Yup
Pumping all my fun money into Trane stock…
Where I’m from even those jobs pay shitty salaries that haven’t kept up with the cost of living. I know electricians who can barely afford rent.
Ok, fair enough. Where is that? Most of the world has less than enough people for those service jobs, so unless you live in just the right place on the planet where almost everyone is a plumber, I’d call that extreme bad luck haha
Oh, we definitely don’t have enough tradespeople, but their unions have not kept their pay up with the cost of living. It’s causing a huge problem here. The only way to make real money is to start your own business, and most people aren’t interested in that or can’t afford to.
But that’s exactly what I meant: can’t you go independent? That’s weird; the trades here even have unions, but they have so much work they can’t handle the load (ayyy lmao) and they are trying to encourage people to take up these trades.
Suggesting an alternative industry as an escape from AI doesn’t work. The media tried this with the millions of truck drivers, pushing them to go into software development 5-10 years ago, as we started conversations around the impending automation of their careers.
The thought at the time, and this seemed like an accurate forecast to me, was that the tech industry would continue to grow and software engineers would be extraordinarily safe for decades to come. I was already in this profession, so I figured my career was safe for a long while.
Then a massive AI boom happened this year that I hadn’t anticipated would come for 15ish more years, and similarly AI experts are now pushing up predictions of AGI by literally decades, average estimates being under 10 years now instead of 30 years.
At the same time, the tech industry went through massive layoffs. Outsourcing, massive increases in output with generative AI automating away repetitive copy/paste programming or even slightly more complicated boilerplate that isn’t strictly copy/paste, amongst natural capitalist tendencies to want to restrict high value labor to keep it cheap.
Those people who shifted away from truck driving and towards software engineering 4+ years ago, thinking it was a “safe path”, and who are now being told that it’s impossible to find a junior dev position, might become desperate enough to change paths again. Maybe they’ll take your advice and join a trade school, only to find that in 4 years we’ll hit massive advancements in robotics and AGI that allow general problem-solving from robots in the real world.
We already have the tech for it. Boston Dynamics has showcased robots that can move more than fluently enough to be a plumber, electrician, etc. Now we just need to combine generative AI with senses and the ability to process information from those senses and react (this already works with images; moving to a video feed and eventually touch/sound/etc. is a next step).
While everyone constantly plays a game of chicken, trying to move around this massive reserve army of labor, we’ll see housing scalpers continue to raise rents, and the cost of living become prohibitive for this growing class of underemployed or unemployed people. The reserve army of labor, when kept around 5-10% of the population, serves as an incentive for people to be obedient workers and not rock the boat too much. That number growing to 20-50% is enough to rock the boat, and capitalists will advocate for what they’ve already advocated in the third world: a massive reduction or total annihilation of welfare, so millions more can starve to death.
We already have millions of people dying a year due to starvation, and nearly a billion people are malnourished due to lack of food access. Raising this number is a logical next step for capitalists as workers try to fight for a share of the automated economy.
Now we just need to combine
“Just” is doing a lot of heavy lifting in that sentence. Sensor integration is currently the biggest hurdle in AI and one of the most complex yet least understood areas of research. Everyone can make a magnetic sensor, anyone can make an image recognition AI, anyone can make an inverse kinematic robotic control arm. But having them integrate and coordinate together to create fluid problem analysis and motion has proven to be elusive and non-trivial. Teslas commit traffic offenses; robotaxi networks are brought to a halt by traffic cones placed on their hoods. For things that the most basic human context-aware analysis can solve instantly. It has cost Boston Dynamics billions of defense-budget money to create a partial solution that still requires the permanent supervision of a human operator. A full solution is not on the table in the short term.
Not gonna read all that but alternative industries will happen ai or not.
The necessity to look for an alternative industry will happen ai or not.
And it’s not impossible to find junior tech positions. What the hell are you talking about?
Also there is not a conspiracy to reduce the planetary population. And if you claim that I want proof.
If you’re not going to spend the 60 seconds it takes to read my comment, don’t bother responding. Nobody mentioned a conspiracy to cull the population, the millions of people who are dying a year from hunger or entirely curable diseases like TB aren’t dying because of some deep state conspiracy, they’re dying because it’s what’s logical in a capitalist economy. These people have no economic power, so they get no resources.
Similarly, as the economy gets further automated, workers lose economic power, and we’ll be treated with the same capitalist logic that anyone else in the world is treated with, once we have no economic power we are better off dead, and so that’s what will happen.
The position that “alternative industries will always exist” is pretty foolish, humans aren’t some exceptional supreme beings that can do something special artificial beings cannot. Maybe you’re religious and believe in a soul, and you think that soul gives you some special powers that robots will never have, but you’d be simply mistaken.
Once the entire economy is automated, there will still be two classes, owners and non-owners, instead of owners and workers. Non-owners will either seize the means of production or die per the logic of capitalism (not some conspiracy).
Governments need to take seriously what we are looking at in the next 40 years. There IS going to be less work, and less need for it. We can no longer play a game of work = virtue, where you must work to live.
If we fail to address this we will be complicit in a slow genocide
Removed by mod
listen, im gonna be hopeful, ok?
Removed by mod
who says i don’t have plenty of dashes of that, you don’t know me
The problem is that the concept of work hasn’t shifted to keep up with the technological reality that has been created. Jobs should slowly be phased out. We need a new economic concept to take hold that doesn’t rely solely on class and fear to make it trundle along. Jobs should be what you do to grow your own fruit and veggies for fun, while the administration and maintenance of basically everything should be left to technology. Wealth and wealth accumulation should no longer exist or be seen as anything other than childish and irrelevant.
“Haha best I can do is lower wages and more homelessness”
Removed by mod
No offense, but this sounds like the pipe dream to end all pipe dreams
Things can change very fast depending on specific circumstances. I hope you’re more wrong than you think.
Removed by mod
It’s not just Gen Z; everyone’s jobs are at risk as AI improves and automates away human labor. People who think that, with the exponential rate of progress in AI, there will continue to be an abundance of good jobs are completely delusional. Companies hire people out of necessity, not some goodness of the heart. If machines can do everything humans can do, and better, then companies will hire fewer people and outsource to machines. Sure, there will be people working on the bleeding edge of what AI isn’t yet capable of, but that’s a bar that’s only going to get higher and higher as the performance advantage of humans over machines shrinks.
Of course none of this would be an issue if we had an economic system that aligned technological progress with improved quality of life and human freedom, but instead we cling on to antiquated systems of the past that just disproportionately accrue wealth to a dwindling minority while leaving the rest of civilization at their mercy. Anyone with any brain or sense of integrity realizes how absurd this is, and it’s been obvious we need a Universal Basic Income for a long time. The hope I have is that Andrew Yang explained it eloquently 4 years ago and it resonated way stronger than I expected with the American population, so I think in a few years when AI is starting to automate any job where one doesn’t need a 160 IQ, people will see the writing on the wall and there will finally be the political capital to implement a UBI.
Yeah we’re quickly approaching a tipping point where people can no longer scoff at the idea of UBI. The more jobs that get automated, the fewer people working and pumping money back into the economy. This can only go on for so long before the economy completely collapses.
It’s the march of progress, but it’s coming for previously “safe” jobs. I make a good living as a consultant, but about 80-90% of my job could be automated by AI. I just went to a conference in my field and everyone in the room was convinced that they couldn’t be replaced by AI - and they’re dead wrong. By the time my small corner of industry gets fully automated I’ll be retired or, at the least, in a position where I’m the human gathering the field data and backchecking the automated workflows before it goes out the door.
political capital to implement a UBI
I applaud your optimism, and genuinely hope you’re right.
I think AI is a very good example of science advancing much faster than wisdom in society. As these large companies continue to implement AI to increase profits while simultaneously driving out the working class, it’s only going to further drive a wedge between the upper and lower classes. I foresee a “dark age” of AI characterized by mass unemployment and a renewed fight for human rights. We might already be seeing the early stages of this in some industries like fast food and with the Hollywood strikes.
We might already be seeing the early stages of this in some industries like fast food and with the Hollywood strikes.
It’s not even a might, we are absolutely seeing the early stages of this. The dark age will also involve vast amounts of misinformation and just plain bad information spewed out by AI writing tools because they’re great at that, which will make it more and more difficult to find true information about anything. We’re going to be snowed in by a pile of AI garbage, and it will happen faster than anyone is prepared for because speed and amplification are the whole point of these tools.
The best outcome so far is that this issue is prompting more workers to unionize.
science advancing much faster than wisdom
I think that pretty much sums up civilization.
And the funny part is that ChatGPT isn’t good enough at anything to be trusted with doing it alone. You still need an expert on the subject matter to proofread anything that will be seen by the public or used to make a business decision.
You can say the same for entry level employees though. I’m not trusting anyone new to post without review.
Granted, I’d rather the company pay someone so they can be taught and eventually become autonomous over time.
And presumably a human who works has some intention to get it right so they can prove their worth or learn or any of a million reasons to want to succeed at work.
ChatGPT is just math in a black box that spits out random language stems filtered and organized by the input parameters you choose.
presumably a human who works has some intention … to succeed at work
Which is the one in ten who really love what they do and want to go into management or oversee the process for professional fulfillment. Of the other nine, three are waiting to move to a company that pays better, two will decide they don’t like it and change careers entirely, and four really are terrible at it but HR decided they met the minimum requirements and would work for entry level wages so they’ll be in that job for the foreseeable future with zero upward growth, eventually getting bitter and doing a worse and worse job while complaining about their lack of promotion.
Not sure what industry you’re in but that sounds like a fair wages and training problem, not an ambition problem. Most people are content to advance in an industry for the sake of job security and professional development, even if they don’t have a particular passion for the specific job role, as long as they are being compensated fairly and see a path for advancement or transferable skills.
I’m architecture-adjacent, so I’m working with clients across a bunch of different market sectors, many are business owners, but my avocations are heavily into performing arts so many people I know in that group are a pretty substantial cross section of low to moderate wage, often entry level workers. I also own my business so I’ve been in the hiring and training side of things.
Removed by mod
There’s something that a person close to me said about certain tech/features that stuck with me and seems to click here, it was: “A lot of it just stops you from using your brain.”
Is this not similar to the introduction of calculators in schools? We don’t need to use our brains anymore to do the “mechanical calculation”. Instead we can offload this task to the machine and use our brain for other tasks.
Not exactly. When it comes to calculations that could be super unreasonable and impractical to do by hand (think multiple exponents on a number, or cosine, sine, and tangent as simple examples), they help reduce that tedium in the overall process of what you’re trying to do. There comes a point where it’d be absurd to do certain kinds of math by hand primarily. I’m not largely math-oriented, but even with calculators one could understand the reasoning behind certain concepts despite using a calculator to work through them. People who take calculus can understand it but still use a calculator.
To have a calculator do your times tables instead of knowing them, or any of the basic stuff in the four operations, would be detrimental I feel, because you’d benefit from knowing those up front, and from knowing how to process them mentally.
Have you used these tools for a complicated project? I’ve played around a little and it didn’t feel like turning off my brain at all, more like working with a genius drone and figuring out how to direct its skills to my ends and constantly evaluating the 10,000 foot view to edge the project forward.
You can’t turn your brain off and use current AI. You have to constantly watch for the moments when it fabricates some innocent-looking code that is actually very destructive.
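A contrived example, not actual model output, of the sort of innocent-looking but destructive code that comment is warning about:

```python
# Contrived, hypothetical example -- not real ChatGPT output.
# Looks like a harmless cleanup helper, but if build_dir is ever passed as "",
# os.path.join(project_root, "") resolves to the project root itself,
# and shutil.rmtree then silently deletes the entire project.
import os
import shutil

def clean_build(project_root: str, build_dir: str = "build") -> None:
    target = os.path.join(project_root, build_dir)
    shutil.rmtree(target, ignore_errors=True)
```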
Removed by mod
Not sure what you mean?
Mathematics is about reasoning. Calculations and arithmetic are merely a little insignificant part of it. I believe mental math should be encouraged at early stages of education, as it develops cognitive skills, memory and brain plasticity; all research confirms this. Sure, calculating 65*82 is tricky to do in your head, but if you understand that this is equivalent to (60+5)(80+2) and work from that, then it suddenly becomes approachable for everyone; you just have to reason this out in your mind. My algebra teacher once said something which perhaps translates poorly, but let me try to convey what he meant: “A mediocre mathematician seeks analogies between problems, so that they can solve new problems using tools they are already familiar with. However, a good mathematician seeks analogies between analogies”. Will you ever require mental math? Probably not, but consider it a workout for your brain, which creates neural connections that will later come in helpful when learning new stuff and needing to understand new, complex concepts quickly.
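Spelling out the expansion the commenter describes (nothing assumed beyond the distributive law):

```latex
65 \times 82 = (60+5)(80+2)
             = 60 \cdot 80 + 60 \cdot 2 + 5 \cdot 80 + 5 \cdot 2
             = 4800 + 120 + 400 + 10
             = 5330
```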
My boy says this as if underpaying and abusing (usually female) office workers to do the boring algebra and arithmetic for you wasn’t a thing in the engineering business and academia before the advent of digital computers.
Removed by mod
Good thing our governments are totally on top of making sure this doesn’t cause some kind of crisis /s
Unfortunately international competition will prevent any country from enacting sane and effective regulation. The first country that moves to restrict AI development and implementation will quickly fall behind the other countries without restrictions.
The only thing that would really work would be a global agreement to limit development, but I can’t see that happening anytime soon, or nations like China, Iran, or India actually respecting such limits even if they were agreed upon.
The only thing that would really work would be a global agreement to limit development
Really? That’s the only thing? Or maybe just unemployment, something that’s been around for almost 100 years.
Or maybe just unemployment, something that’s been around for almost 100 years.
This might work after the AI systems have already become a major problem, and unemployment affects a large percentage of the population.
It won’t prevent AI systems from becoming a major problem in the first place.
I would much rather have the prevention than a cure.
If enough people find themselves without a way to put food on the table, that country might find a sudden and severe obstacle to their economic prospects.
The rich people who own and benefit from the AI systems and have control over the governments and major businesses will be the last ones to feel the economic impact. When (and if) they do they will simply move to another country that is not yet failing, because people in this group experience no national loyalty and feel no remorse for their exploitation. They will move on to another place that they can draw profit from until that is also burnt out.
By that point the AI systems will already be developed and implemented and it will be too late to establish any functional regulation.
I am not talking about regulation.
Ok, I am talking about a way to avoid the world getting to the point of “If enough people find themselves without a way to put food on the table”. I want us to address the AI problem before countries find “sudden and severe” obstacles to their economic prospects.
How do we do that, if not by regulation? What can we talk about that leads to prevention?
We need to be proactive, not reactive.
I agree, but that was my response to the likely attitude of the wealthy, businesses and their government supporters that you pointed out, who will oppose regulations.
They can’t expect to move out of the way forever as they make the living conditions of average people untenable everywhere. The people’s unrest has been constantly rising.
Oh I see, I misunderstood. Unfortunately, it looks like the intent may be to mislead regulators and have them waste time on more sensationalized “AI takes over the world” ideas, while they continue to make a profit off of more mundane forms of exploitation.
They can’t expect to move out of the way forever as they make the living conditions of average people untenable everywhere.
Never underestimate the capacity for shortsightedness and the ambition for immediate profit.