Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves
In terms of hype it’s the crypto gold rush all over again, with all the same bullshit.
At least the tech is objectively useful this time around, whereas crypto adds nothing of value to the world. When the dust settles we will have spicier autocomplete, which is useful (and hundreds of useless chatbots in places they don’t belong…)
For something that has shown itself to be useful, there is no way it will simply fizzle out. The exact same thing was said about the whole internet, and look where we are now.
The difference between crypto and AI is that, as you said, crypto never showed anything tangible to the average person. AI, instead, is spreading like wildfire through software and research, and is being used by people worldwide, often without them even knowing.
I’ve seen my immediate friends use chatbots to help them get through boring yearly trainings at work, write speeches for weddings, and make rough-draft lesson plans.
deleted by creator
Why do we fall into the fallacy of assuming this tech is going to stagnate? At the moment it does very low-tier coding, but the idea that we’d even be having a conversation about a computer possibly writing code for itself (not in a machine-learning way, at least) was mere science fiction just a year ago.
And even in its current state it is far more useful than just generating “hello world.” I’m a professional programmer and although my workplace is currently frantically forbidding ChatGPT usage until the lawyers figure out what this all means I’m finding it invaluable for whatever projects I’m doing at home.
Not because it’s a great programmer, but because it’ll quickly hammer out a script to do whatever menial task I happen to need done at any given moment. I could do that myself but I’d have to go look up new APIs, type it out, such a chore. Instead I just tell ChatGPT “please write me a python script to go through every .xml file in a directory tree and do <whatever>” and boom, there it is. It may have a bug or two but fixing those is way faster than writing it all myself.
I have the same job and my company opened the floodgates on AI recently. So far it’s been assistive tools, but I can see the writing on the wall. These tools will be able to do much more given enough context.
As a thought experiment, we might consider that any function that is too complicated to explain to ChatGPT and have it produce a working result might need to be refactored for complexity. Obviously not in every case, and our own ability to translate the requirements into a useful prompt must be considered, but I think it’s worth keeping in mind.
deleted by creator
Genuine question: Based on what? GPT4 was a huge improvement on GPT3, and came out like three months ago.
I’ve gotten it to give boilerplate for converting one library to another for certain embedded protocols on different platforms. It creates entry-level code, but nothing that’s too hard to clean up, and it helps you get the gist of how a library works.
Exactly my experience as well. Seeing CoPilot suggestions often feels like magic. Far from perfect, sure, but it’s essentially a very context-“aware” snippet generator: code completion ++.
I have the feeling that people who laugh about this and downplay it either haven’t worked with it and/or are simply stubborn and don’t want to deal with new technology. Basically the same kind of people who, when IDEs with code completion came to be, laughed at it and proclaimed only vim and emacs users to be true programmers.
deleted by creator
can we have an “un-ampify” bot?
In the meantime, de-AMP your life.
By now, most of us have heard about the survey that asked AI researchers and developers to estimate the probability that advanced AI systems will cause “human extinction or similarly permanent and severe disempowerment of the human species”. Chillingly, the median response was that there was a 10% chance.
How does one rationalize going to work and pushing out tools that carry such existential risks? Often, the reason given is that these systems also carry huge potential upsides – except that these upsides are, for the most part, hallucinatory.
Ummm how about the obvious answer: most AI researchers don’t think they’re the ones working on tools that carry existential risks? Good luck overthrowing human governance using ChatGPT.
Fossil fuels carry a much higher chance of causing human extinction, yet the news cycle is saturated with fears that a predictive language model is going to make calculators crave human flesh. Wtf is happening
Capitalism. Be afraid of this thing, not of that thing. That thing makes people lots of money.
I agree that climate change should be our main concern. The real existential risk of AI is that it will cause millions of people to not have work or be underemployed, greatly multiplying the already huge lower class. With that many people unable to take care of themselves and their family, it will make conditions ripe for all of the bad parts of humanity to take over unless we have a major shift away from the current model of capitalism. AI would be the initial spark that starts this but it will be human behavior that dooms (or elevates) humans as a result.
The AI apocalypse won’t look like Terminator, it will look like the collapse of an empire and it will happen everywhere that there isn’t sufficient social and political change all at once.
I don’t disagree with you, but this is a big issue with technological advancements in general. Whether AI replaces workers or automated factories do, the effects are the same. We don’t need to make a boogeyman of AI to drive policy changes that protect the majority of the population. I’m just frustrated with AI scares dominating the news cycle while completely missing the bigger picture.
Yeah - green energy puts coal miners and oil drillers out of work (as the right likes to constantly remind us) but that doesn’t make green energy evil or not worth pursuing, it just means that we need stronger social programs. Same with AI in my opinion - the potential benefits far outweigh the harm if we actually adequately support those whose jobs are replaced by new tech.
That’s only a problem because of our current economic system. The AI isn’t the problem, the society that fails to adapt is.
I think the results are as “high” as 10 percent because the researchers do not want to downplay how “intelligent” their new technology is. But it’s not that intelligent, as we and they all know. There is currently zero chance any “AI” can cause this kind of event.
Not directly, no. But the tools we already have that allow imitating voices and faces in video streams in real time can certainly be used by bad actors to manipulate elections or worse. Things like that, especially if further refined, could be used to figuratively pour oil onto already burning political fires.
the results are as “high” as 10 percent because the researchers do not want to downplay how “intelligent” their new technology is. But it’s not that intelligent, as we and they all know. There is currently zero chance any “AI” can cause this kind of event.
Yes, the current state is not that intelligent. But that’s also not what the expert’s estimate is about.
The estimates and worries concern a potential future, if we keep improving AI, which we do.
This is similar to being in the 1990s and saying climate change is of no concern, because the current CO2 levels are no big deal. Yeah right, but they won’t stay at that level, and then they can very well become a threat.
The less obvious answer is Roko’s Basilisk.
AI bots don’t ‘hallucinate’ they just make shit up as they go along mixed with some stuff that they found in google, and tell it in a confident manner so that it looks like they know what they are talking about.
Techbro CEO’s are just creeps. They don’t believe their own bullshit, and know full well that their crap is not for the benefit of humanity, because otherwise they wouldn’t all be doomsday preppers. It all a perverse result of American worship of self-made billionaires.
See also The super-rich ‘preppers’ planning to save themselves from the apocalypse
“hallucination” works because everything an LLM outputs is equally true from its perspective. trying to change the word “hallucination” seems to usually lead to the implication that LLMs are lying which is not possible. they don’t currently have the capacity to lie because they don’t have intent and they don’t have a theory of mind.
Well, neither can it hallucinate, by the “not being able to lie” standard. To hallucinate would mean there was some other correct baseline behavior from which hallucinating is a deviation.
An LLM is not a mind; one shouldn’t use words like lie or hallucinate about it. That anthropomorphises a mechanistic algorithm.
This is simply an algorithm producing arbitrary answers with no reality checks on the results. By that standard, the times it happens to produce a correct answer aren’t “not hallucinating” either. It is hallucinating (or not) exactly as much regardless of the correctness of the answer, since it’s just doing its algorithmic thing.
The evolution is fast. We have AI with a theory of mind:
https://www.discovermagazine.com/mind/ai-chatbot-spontaneously-develops-a-theory-of-mind
Do we have an AI with a theory of mind, or just an AI that answers the questions in the test correctly?
Now, whether or not there is a difference between those two things is more of a philosophical debate. But assuming there is a difference, I would argue it’s the latter. It has likely seen many similar examples during training (the prompts are in the article you linked; it’s not unlikely that similar texts are in a web-scraped training set), and even if not, it’s not that difficult to extrapolate those answers from the many texts it must’ve read where a character was surprised at a missing item that they didn’t see being stolen.
Good point. How will we be able to tell the difference?
You can make an educated guess if you understand the intricacies of the programming. In this case, it’s most likely blurting out words and phrases that statistically most adequately fit the (perhaps somewhat leading) questions.
AI bots don’t ‘hallucinate’ they just make shit up as they go along mixed with some stuff that they found in google, and tell it in a confident manner so that it looks like they know what they are talking about.
The technical term for that is “hallucinate” though, like it or not.
https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
Cory Doctorow wrote a pretty good short story about this.
It will, and is helping humanity in different fields already.
We need to separate PR speech from reality. AI is already being used in pharmaceutical fields, aviation, tracking (of the air, of the ground, of the rains…), production… And there is no way you can say these are not helping humanity in their own way.
AI will not solve the listed issues on its own. AI as a concept is a tool that will help, but it will always come down to how well it’s used and with what other tools.
Also, saying AI will ruin humanity’s existence or bring “disempowerment” of the species is a completely awful view that has no way of happening, simply due to the fact that it’s not profitable.
saying AI will ruin humanity’s existence or bring “disempowerment” of the species is a completely awful view that has no way of happening, simply due to the fact that it’s not profitable.
The economic incentives to churn out the next powerful beast as quickly as possible are obvious.
Making it safe costs extra, so that’s gonna be a neglected concern for the same reason.
We also notice the resulting AIs are being studied after they are released, with sometimes surprising emergent capabilities.
So you would be right if we would approach the topic with a rational overhead view, but we don’t.
I guess the important thing to understand about spurious output (what gets called “hallucinations”) is that it’s neither a bug nor a feature, it’s just the nature of the program. Deep learning language models are just probabilities of co-occurrence of words; there’s no meaning in that. Deep learning can’t be said to generate “true” or “false” information, or rather, it can’t be meaningfully said to generate information at all.
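The “probabilities of co-occurrence of words” point can be made concrete with a toy bigram model, a deliberately tiny stand-in for what real LLMs do at vastly larger scale and with far more sophisticated machinery:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": next-word probabilities estimated
# purely from word co-occurrence counts in a training corpus.
def train_bigrams(corpus):
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def next_word_probs(counts, word):
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

corpus = [
    "the cat sat on the mat",
    "the cat ate the fish",
]
model = train_bigrams(corpus)
# After "the", the model just reports co-occurrence frequencies;
# there is no notion of truth or meaning anywhere in it.
print(next_word_probs(model, "the"))  # → {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

Nothing in this model is “true” or “false”; it only encodes which words followed which. That is the sense in which calling spurious output a bug or a feature misses the point.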
So then people say that deep learning is helping out in this or that industry. I can tell you that it’s pretty useless in my industry, though people are trying. Knowing a lot about the algorithms behind deep learning, and also knowing how fucking gullible people are, I assume that—if someone tells me deep learning has ended up being useful in some field, they’re either buying the hype or witnessing an odd series of coincidences.
The thing is, this is not “intelligence”, and so “AI” and “hallucinations” are just humanizing something that is not. These are really just huge table lookups with some sort of fancy interpolation/extrapolation logic. So a lot of the copyright people are correct: you should not be able to take their works and then just regurgitate them out. I have a problem with copyright and patents myself too, because frankly a lot of that is not very creative either. So one can look at it from both ends. If “AI” can get close to what we do and not really be intelligent at all, what does that say about us? We may learn a lot about ourselves in the process.
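The “table lookup with fancy interpolation” picture can be illustrated with a deliberately crude sketch: store known input/output pairs and linearly interpolate between the nearest stored points. (Real models learn something far richer in a high-dimensional space, but the flavor of the analogy is similar.)

```python
# Crude illustration of "lookup plus interpolation": answer a query by
# linearly interpolating between the two nearest stored data points.
def interpolating_lookup(table, x):
    pts = sorted(table.items())
    if x <= pts[0][0]:
        return pts[0][1]   # clamp below the table's range
    if x >= pts[-1][0]:
        return pts[-1][1]  # clamp above the table's range
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

table = {0.0: 0.0, 1.0: 2.0, 2.0: 3.0}
print(interpolating_lookup(table, 0.5))  # → 1.0, halfway between stored answers
```

The lookup never “knows” anything about queries between its stored points; it just blends nearby memorized answers, which is the commenter’s point about regurgitation.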
I would agree: either you have to start saying the AI is smart, or that we are not.
Deep learning can be and is useful today, it’s just that the useful applications are things like classifiers and computer vision models. Lots of commercial products are already using those kinds of models to great effect, some for years already.
What do you think of the AI firms who are saying it could help with making policy decisions, climate change, and lead people to easier lives?
Absolutely. Computers are great at picking out patterns across enormous troves of data. Those trends and patterns can absolutely help guide policymaking decisions the same way it can help guide medical diagnostic decisions.
The article was skeptical about this. It said that the problem with expecting it to revolutionize policy decisions isn’t that we don’t know what to do, it’s that we don’t want to do it. For example, we already know how to solve climate change and the smartest people on the planet in those fields have already told us what needed to be done. We just don’t want to make the changes necessary.
That’s been the case time and again. How many disruptions from the tech bros came to industries that had been stagnant or moving at a snail’s pace when it came to adopting new technology (especially when locked into more expensive legacy systems)?
Most of those industries could have been secured by the players in those markets; instead they allowed a disruptor to appear unchallenged.
Remember, the market is not as rational as some might think. You start filling gaps and people often won’t ask about the fallout, and many of these services did have people warning against these things.
We are, for the most part, in a nation that lets you do whatever you want until the effects have hit people; this is even more the case if you are a business. I don’t know an easy answer. In some of these cases, the old guard needed a smack; in others, a more controlled entry may have been better. As of now, “controlled” is just about the size of one’s cash pile.
Cue the ethical corporations discussion…
I think it can be useful. I have used it myself, even before ChatGPT was there and it was just GPT-3. For example, I take a picture, OCR it, and then look for mistakes with GPT because it’s better than a spell check. I’ve used it to write code in a language I wasn’t familiar with, and having seen the names of the commands needed, I could fix it to do what I wanted. I’ve also used it for some inspiration, which I could also have done with an online search. The concept just blew up and people were overstating what it can do, but I think by now a lot of people know the limitations.
I mean AI is already generating lots of bullshit ‘reports’. Like you know, stuff that reports ‘news’ with zero skill. It’s glorified copy-pasting really.
If you think about how much language is rote, as in law and the like, it makes a lot of sense to use AI to auto-generate it. But it’s not intelligence. It’s just creating a linguistic assembly line. And just like in a factory, it will require human review for quality control.
The thing is - and this is what’s also annoying me about the article - AI experts and computational linguists know this. It’s just the laypeople now using (or promoting) these tools, now that they’re public, who don’t know what they’re talking about and project intelligence onto AI that isn’t there. The real hallucination problem isn’t with deep learning, it’s with the users.
The article really isn’t about the hallucinations though. It’s about the impact of AI; that’s in the second half of the article.
I read the article yes
Spot on. I work on AI and just tell people “Don’t worry, we’re not anywhere close to terminator or skynet or anything remotely close to that yet” I don’t know anyone that I work with that wouldn’t roll their eyes at most of these “articles” you’re talking about. It’s frustrating reading some of that crap lol.
This is the curation effect: generate lots of chaff, and have humans search for the wheat. Thing is, someone’s already gotten in deep shit for trying to use deep learning for legal filings.
In a way, NLP is just sort of an exercise in mental muscle-memory. The AI can’t do the math that 1+1=2, but if you ask it what 1+1 equals, it will give you a two. Pretty much like any human would do - we don’t hold up one finger and another finger and count them.
So in a way, AI embodies a sort of “fuzzy common sense” knowledge. You can ask it questions it hasn’t seen before and it can give answers that haven’t been given before, but conceptually it will spit out “basically the answer” to “basically that question”. For a lot of things that don’t require truly novel thinking, it does sort of know things.
Of course, just like we can misunderstand a question or phrase an answer badly or even just misremember an answer, the AI can be wrong. I’d say it can help out quite a bit, but I think it works best as a sort of brainstorming partner to bounce ideas off of. As a software developer, I find it a useful coding partner. It definitely doesn’t have all the answers, but you can ask it something like, “why the hell doesn’t his code work?” and it might give you a useful answer. It might not, of course, but nothing ventured, nothing gained.
It’s best to not think of it or use it like a database, but more like a conversational partner who is fallible like any other, but can respond at your level on just about any subject. Any job that cannot benefit from discussing ideas and issues is probably not a good fit for AI assistants.
I don’t know exactly where to start here, because anyone who claims to know the shape of the next decade is kidding themself.
Broadly:
AI will democratize creation. If technology continues at the same pace that it has for the last few years, we will soon start to see movies and TV with Hollywood-style production values being made by individual people and small teams. The same will go for video games. It’s certainly disruptive, but I seriously doubt we will want to go back once it happens. To use the article’s examples, most people prefer a world with Street View and Uber to one without them.
The same goes for engineering.
That’s putting millions of people out of a job with no real replacement. The ones that aren’t unemployed will be commanding significantly smaller salaries.
It’s actually not as easy as you think. It “looks” easy because all you’ve seen is the result of survivorship bias. Like Instagram people, they don’t post their failed shots. Seriously, go download some Stable Diffusion model, try inputting your prompt, and see how well you can direct the AI to get the things you want. It’s real work, and I bet a good photographer with a good model can do it all quicker with a director (even with greenscreen etc.).
I dabbled with Stable Diffusion a bit to see what it’s like. With my machine (16 GB VRAM), a 30-image batch generation only yields maybe 2~3 that are considered “okay”, and those still need further photoshopping. And we are talking about resolution so low most games can’t even use it as a texture (slightly bigger than 512x512, so usually mip 3 for a modern game engine). And I was already using the most popular photoreal model people mixed together. (Now consider how much time people spent training that model to that point.)
Just for graphic art/photo generative AI: it looks dangerous, but it’s NOT there yet, very far from it. Okay, so how about the auto-coding stuff from LLMs? Welp, it’s similar: the AI doesn’t know about the mistakes it makes, especially with specific domain knowledge. If we had an AI trained on specific domain journals and papers, one that actually understood how math operates, then it would be a nice tool, because like all generative AI stuff, you have to check the results and fix them.
The transition won’t be as drastic as you think. It’s more or less like other manufacturing: when the industry chases lower labour costs, local people will find alternatives. And look at how the creative/tech industry tried outsourcing to lower-cost countries; it’s really inefficient and sometimes costs more, with slower turnaround time. Now, if you post a job asking an artist to “photoshop AI results to production quality”, let’s see how that goes. I can bet 5 bucks the company is gonna get blacklisted by artists, and you’ll get only the really desperate or low-skilled ones who give you subpar results.
Somehow the same artist:
It’s like the Google DeepDream dogs for the hands. lol, I’ve seen my fair share of anatomy “inspirations” when experimenting with posing prompts (then I later learned there are 3D posing extensions). If it’s an uphill battle for more technical people like me, it would be really hard for artists. The ones I know that use Midjourney just think it’s fun and not something really worth worrying about. A good set of hybrid tools for fast prototyping/iteration with specific guidance rules would be neat in the future.
i.e. 3D DCC for base model posing and material selection/lighting -> AI generates stuff -> photogrammetry (pretty hard, because the AI doesn’t know how to generate the same thing from different angles, lol) to convert the generated images back to 3D models and textures -> iterate.
There are people working on other parts, like building replacement or actor replacement; I bet there are people working on the above as well.
I seriously doubt this technology will pass by without a complete collapse of the labor market. What happens after is pretty much a complete unknown.
I think it’s fair to assert that society will shift dramatically, though the climate will have as much to do with that as AI.
The same goes for engineering.
I can’t wait to drive over a bridge where the construction parameters and load limits were creatively autocompleted by a generative AI
There’s a guy at this maker-space I work out of who’s been using ChatGPT to do engineering work for him. There was some issue with residue being left on the pavement in the parking lot, and he came forward saying it had to do with “ChatGPT giving him a bad math number,” whatever the hell that means. This is also not the first time he’s said something like this, and it’s always hilarious.
Generative design is already a mature technology. NASA already uses it for spaceship parts. It’ll probably be used for bridges once large-format 3D printers can manage the complexity it introduces.
It’s still just a tool for engineers though. Half of the job is determining what the design requirements are, another quarter is figuring out what general scheme (i.e. water vs air cooling) works best to meet those requirements. Things like this are great, but all they really do is effectively connect point A to point B in order to free up some man-hours for more high-level work.
It will shift a lot of human effort from generative to review. For example the core role of an engineer in many ways already is validation of a plan. Well that will become nearly the only role.
the core role of an engineer in many ways already is validation of a plan.
I disagree, this implies that AI are doing a lot more than they actually are. Before you design the physical layout of some thing, you have to identify a problem, and identify guidelines and empirical metrics against which you can compare your design to determine efficacy. This is half the job for engineers.
There’s one step of the design process that I see current AI completing autonomously (implementation), and I view it as nontrivial to get the technology working higher up on the “V”.
That assumes that the classes of problems that AIs can solve remain stagnant. I don’t think that’s a good assumption, especially given that GPT4 can already self-review and refine its output.
It will take a very long time for people to believe and trust AI. That’s just the nature of trust. It may well surpass humans in all ways soon, but trust will take much more time. What would be required for an AI-designed bridge to be accepted without review by a human engineer?
We’ll probably see sooner or later.
This is my favorite perspective on AI and it’s impact. I am curious as to what your thoughts are.
I think there’s a problem with people wanting a fully developed brand new technology right out the gate. The cell phones of today didn’t happen overnight, it started with a technology that had limitations and people innovated.
AI is a technology that has limitations, people will innovate it. Hopefully.
I think my favorite potential use case for AI is academics. There are countless numbers of journal articles that get published by students, grad students and professors, and the vast majority of those articles don’t make an impact. Very few people read them, and they get forgotten. Vast amounts of data, hypotheses and results that might be relevant to someone trying to do something good, important or novel but they will never be discovered by them. AI can help with this.
Of course there’s going to be problems that come up. Change isn’t good for everyone involved, but we have to hope that there is a net good at the end. I’m sure whoever was invested in the telegram was pretty choked when the phone showed up, and whoever was invested in the carrier pigeon was upset when the telegram showed up. People will adapt, and society will benefit. To think otherwise is the cynical take on the same subject. The glass is both half full and half empty. You get to choose your perspective on it.
In my opinion, both can be true and it’s not either one or the other:
ML has surprised even many experts, in so far as a very simple mechanism at huge scale is able to produce some aspects of human abilities. It does not seem strange to me that it also reproduces other human traits, like hallucinations. Maybe they are more closely related than we think.
Company leaders and owners are doing what the capitalistic system incentivizes them to do: raise their companies’ value by any means possible. Call that hallucinating, or just marketing.
IMO it’s the responsibility of government to make sure AI does not become another capital concentration scheme like many other technologies have, widening the gap between rich and poor.
Agreed. Private-owned AI competing against humans for limited jobs in a capital based market is a nightmare.
Public-owned AI producing and providing for all is not.
AI was trained on the work of millions and is inhuman in its productive capabilities. It has no business being private owned
Comments are heavily focused on the title of the article and the opening paragraphs. I’m more interested in peoples’ takes on the second half of the article, that highlights how the goals companies are touting are at odds with the most likely consequences of this trend.
Yes, the second half is where the conversation gets interesting, by far.
I see both sides.
They’re probably going to completely (and intentionally) collapse the labor market. This has never happened before, so there is no historical precedent to look at. The closest thing we have is the industrial revolution, but even that was less disruptive because it also created a lot of new factory jobs. This doesn’t.
The public hope is that this catastrophic widening of the gap between rich and poor will force labor to organize and take some of the gains through legislation, as an alternative to starving in the streets. Given that the technology will also make coercing people to work mostly pointless, there may not be as much pressure against it as there historically has been. Altman seems to be publicly thinking in this direction, given the early basic income research and the profit cap for OAI. I can’t pretend to know his private thoughts, but most people with any shred of empathy would be pushing for that in his shoes.
Of course, if this fails, we could also be headed for a permanent, robotically-enforced nightmare dystopia, which is a genuine concern. There doesn’t seem to be much middle-ground, and the train has no brakes.
The IP theft angle from the end of the article seems like a pointless distraction though. All human knowledge and innovation is based on what came before, whether AI is involved or not. By all accounts, the remixing process it applies is both mechanically and functionally similar to the remixing process that a new generation of artists applies to its forebears, and I’ve not seen any evidence that they are fundamentally different enough to qualify as theft, except in the normal Picasso sense.
Interesting times.
…but most people with any shred of empathy would be pushing for that in his shoes.
Empathy? In late-stage capitalism? 😏
I mean, so… I’m a software engineer who used to specialize in automation. I ended up having a crisis of conscience decades back, realizing that I was putting people out of work. “Hey, good job on that project, our client can afford to let 30 people go now!” never really felt like great praise to me. It actually felt really really shitty knowing the work I was doing was making it possible for the “nobility” to further gain back control of the “serfs”.
I figured that the only way this could ever benefit society as a whole instead of shareholders and owners would be if we moved more to a society with things like UBI, with perhaps the people who end up getting something extra being the ones who actually DO the dirty jobs and provide actual worth to society, instead of becoming obscenely wealthy at the expense of empathy and good human spirit. Unfortunately, at least here in the states, anything that smacks of “socialism” automatically equals dictatorship (glossing over that capitalism offers just as many examples of being abused by the “ruling” class). So there’s the whole zeitgeist to battle against before the comfortable and less-informed majority will even listen to anything that’s in their best interest.
As you say, interesting times indeed. I’m not hopeful that we’ll see that sort of shift in my lifetime however, sigh…
Merits of the tech aside, it is amazing to see how many people are becoming Luddites in response to this technology, especially those in industries who thought they were safe from automation. I feel like there has always been a sense of hubris in the creative industries toward general labor, and AI is now forcing us to look in a computer-generated mirror and reassess how special we really are.
The article complains the usage of the word “hallucinations” would be …
feeding the sector’s most cherished mythology: that by building these large language models, and training them on everything that we humans have written, said and represented visually, they are in the process of birthing an animate intelligence on the cusp of sparking an evolutionary leap for our species.
Whether that is true or not depends on whether we eventually create human-level (or beyond) machine intelligences. No one can read the future. Personally I think it’s just a matter of time, but there are good arguments on both sides.
I find the term “hallucinations” fitting, because it conveys to uneducated people that a claim by ChatGPT should not be trusted, even if it sounds compelling. The article suggests “algorithmic junk” or “glitches” instead. I believe naive users would refuse to accept an output as junk or a glitch; those terms suggest something is broken, although the output still seems sound. “Hallucinations” is a pretty good term for that job, and is also already established.
The article instead suggests the creators are hallucinating in their predictions of how useful the tools will be. Again no one can read the future, but maybe. But mostly: It could be both.
Reading the rest of the article required a considerable amount of goodwill on my part. It’s a bit too polemical for my liking, but I can mostly agree with the challenges and injustices it sees forthcoming.
I mostly agree with #1, #2 and #3. #4 is particularly interesting and funny, as I think it describes Embrace, Extend, Extinguish.
I believe AI could help us create a better world (in the large scopes of the article), but I’m afraid it won’t. The tech is so expensive to develop, the most advanced models will come from people who already sit on top of the pyramid, and foremost multiply their power, which they can use to deepen the moat.
On the other hand, we haven’t found a solution to alignment and control problem, and aren’t certain we will. It seems very likely we will continue to empower these tools without a plan for what to do when one model actually shows near-human or even super-human capabilities, but can already copy, backup, debug and enhance itself.
The challenges to economy and society along the way are profound, but I’m afraid that pales in comparison to the end game.
Some great conversation here. Thanks everyone who responded so far!
Thanks for sharing this article. I agree that those points mentioned are not possible for GenAI. It is a pipe dream that GenAI is capable of global governance, because it can’t really understand the implications of what governance means. It’s a Clever Hans that just outputs what it thinks you want to see.
I think that with GenAI there are some job classes that are in danger (tech support continues to shrink for common cases, etc.), but mostly the entry-level positions. Ultimately, someone who actually knows what’s going on would need to intervene.
Similarly, for things like writing or programming, GenAI can produce okay work, but it needs to be prompted by someone who can understand the bigger picture and check its work. Writing becomes more like editing in this case, and programming becomes more like code review.
I truly believe that multiple medical specialties will be taken over by AI.
Assisted diagnosis? Yes… The rest? Not for many years.
There have been studies that show patients already prefer the bedside manner of ChatGPT over human physicians, so that’s another thing we’ll likely see soon.