• 1984@lemmy.today

      It’s frankly ridiculous. AI is useful, but the only company that deserves gains like this on the stock market is possibly Nvidia, because they make actual sales of actual hardware.

      Everyone else puts out AI that is much worse than OpenAI’s GPT, with tons of marketing to try and make up for it.

      There’s an AWS Summit in a couple of days where I live, and there are more AI breakout sessions than you can shake a stick at. It’s definitely overhyped.

      • frezik@midwest.social

        Even Nvidia’s gains are based on selling that hardware to outlets buying GPUs in shipping-container quantities to run huge LLMs. If those companies dry up, so will Nvidia’s market position. On the other side of things, their traditional market is gaming, but that industry is contracting and may no longer be making games that push hardware limits the way it used to. Their best long-term bet might be a rumored Steam Deck/Switch-like handheld of their own making (they do supply the chips for the Switch and the upcoming Switch 2), but that’s not going to justify a $3T market cap.

  • paddirn@lemmy.world

    It’s cool guys, I asked ChatGPT and it said:

    The term “AI bubble” suggests a speculative frenzy similar to previous bubbles in tech. While there’s certainly a lot of excitement and investment in AI, it’s unlikely to cause an economic crash on its own. However, if promises aren’t met and investments don’t yield expected returns, there could be adjustments in the market. AI’s impact is profound, but its realization takes time and nuanced understanding.

    So we just might see an “adjustment”, no way this is a bubble.

  • Pohl@lemmy.world

    Nvidia is making a thing and selling it. No matter what happens with AI tech, they are going to keep their winnings.

    Everybody else… well, they borrowed/raised and spent a FORTUNE on R&D, chips, and electricity to make a product that has no realized commercial value (yet?). They’re either going to figure out where the money comes from soon or the bill’s gonna come due. The next 12 months are going to be popcorn-worthy if you like watching the tech industry.

    • Bassman1805@lemmy.world

      “In a gold rush, you want to be the guy selling the shovels”

      Nvidia is the shovel-seller right now. A couple of AI companies will probably get huge, 99% will fail. But Nvidia is getting paid by them all.

    • bobs_monkey@lemm.ee

      Well yeah, a company making the hardware that these AI companies utilize will probably be just fine, especially an entrenched one like Nvidia. Once the AI fad subsides they might see a drop in revenue, but considering Nvidia makes more than AI processors (their graphics division keeps them diversified), they won’t have too much trouble pivoting to whatever comes next.

      It’s kind of like the gold rush in the 1800s: the people who made bank and struck it rich weren’t necessarily the guys doing the mining, it was the ones supplying the miners with tools, clothes, supplies, etc.

      • sudo42@lemmy.world

        Nvidia is probably breathing a huge sigh of relief right now. When the cryptocurrency scam popped, they were looking at a huge inventory of graphics cards being dumped on the market. They lucked out that self-driving cars were the new shiny thing that kept their business growing. Now it’s AI. The AI hype cycle is crashing fast, though.

      • jj4211@lemmy.world

        Recently Dell’s stock slipped 16%. Not because they were losing sales, not because their revenue was declining, but because analysts said their margins on AI computers weren’t big enough.

        So patience seems to be wearing thin in the AI rush, and that includes the shovel makers.

    • hark@lemmy.world

      Nvidia is making tons of money now, but they’re still going to be in hot water with investors when they have to explain why exponential revenue growth cannot be sustained as revenues crash back down to the norm.

      • DudeDudenson@lemmings.world

        You wanna know the real issue with the world? This right here: society as a whole accepting and promoting a system of measuring a company’s worth that, by definition, will make it fail after getting as much reach as possible. It almost ensures everything will always be shit.

        Oh, this quarter the company only got 74 billion in revenue instead of the projected 80? Sell everything, it’s no longer profitable!!! Abandon ship!

        • hark@lemmy.world

          Yep, it’s a system that relies on infinite growth and gets upset if there’s growth but it’s not fast enough, if there’s any temporary setback no matter how minor, or even if things just level off for a bit. It’s ridiculous.

  • Etterra@lemmy.world

    It’s a hype bubble. AI has been around for a while and will continue to be. The problem is specifically Large Language Models. They’ve been trained to SOUND human, but not to actually use that ability for anything more useful than small talk and bullshit. However, because it SOUNDS charismatic, and that is interesting to people, companies have started cramming it into everything they can think of to impress shareholders.

    Shareholders are a collective group of people who are, on average, psychologically more similar to crows than to other humans: they like shiny things, have a mob mentality, and can only use the most basic of tools available to them, in their case usually money. New things presented in a flashy way by a charismatic individual are most attractive to them, and they will seldom do any research beyond superficial first impressions. Any research they actually do generally skews toward confirmation bias.

    This leads to an unfortunate feature of capitalism, which is the absolute need to make the numbers go up. To impress their shareholders, companies have to jangle keys in front of their faces. So whenever The Hip New Thing comes along, it’s all buzzwords and bullshit as they try to find any feasible way to cram it into their product. If they could make Smart Corn 2.0 powered by ChatGPT they would, and sell it for three times as much in the same produce aisle as normal corn. And then your corn would tell you about this great recipe it knows where the sauce is made with a battery-acid base.

    In most recent memory, this exact scenario played out with NFTs. When the NFT market inevitably collapsed, the corporations who swore it would supercharge their sales all quietly pretended it never happened. Soon something new was jangled in front of the shareholders and everybody forgot about them.

    Now that generative AI is proving itself to just be a really convincing bullshitter, it’s only a matter of time until it either dies and quietly slinks away or mutates into the next New Thing and the cycle repeats. Like a pandemic of greed and stupidity. Maybe they’ll figure out how to teach ChatGPT to check and cite verified sources and make it actually do what they currently claim it does.

    I guess it depends on if they can make it shiny enough to impress the crows.

    • knightmare1147@lemmy.world

      Hey this comparison is unfair to crows, they’re way smarter and more empathetic than wealthy tech industry investors.

    • EatATaco@lemm.ee

      I think we’re in an AI bubble because Nvidia is way overvalued, and I agree with you that people often flock to shiny new things and many are taking risks in the hope of making it big… and many will get left holding the bag.

      But how do you go from NFTs, which never had widespread market support, to the market pumping a trillion dollars into Nvidia alone? This makes no sense. And downplaying this as “just a bullshitter” leads me to believe you have like zero real-world experience with this. I use Copilot for coding and it’s been a boost to productivity for me, and I’m a seasoned vet. Even the AI search results, which many times have left me scratching my head, have been a net benefit to me in time savings.

      And this is all still pretty new.

      While I think it is overhyped and people are being ridiculous about how much this will change things, at the very least this is going to be a huge new tool, and I think you’re setting yourself up to be left behind if you aren’t embracing it and learning how to leverage it.

      • Aceticon@lemmy.world

        Nvidia are the ones selling shovels in this gold rush, so it makes some sense that they’ll make a lot of money out of it even if it was fool’s gold all along.

        Is Nvidia’s shovel-selling sustainable? Doubt it: when the gold rush is over, the demand for shovels will fall. However, we’re long past the era when most of the money being pumped into the stock market was actually controlled by investors who cared about prospects beyond the next quarter, so it makes sense that speculative investors would seek to profit from the rise in Nvidia’s profits from selling shovels for this gold rush, even if it all falls back again later.

          • Aceticon@lemmy.world

            Which, together with my point, explains your own question: “But how do you go from NFTs, which never had widespread market support, to the market pumping a trillion dollars into Nvidia alone?”

            Not only would LLMs and other, more advanced generative AI have a significantly broader impact than NFTs if they lived up to the hype, but in technical terms they’re far more dependent on GPUs to run at any decent speed than NFTs ever were, as you can see in this comparison I just found with DDG.

            Mind you, if you meant Bitcoin rather than NFTs (since the last big demand for GPUs was for Bitcoin mining rather than NFTs), the point that the possible impact of generative AI is much broader still explains it. Plus, if I remember correctly, Nvidia stock did get pulled up by the whole Bitcoin-mining demand for GPUs (I vaguely remember something about their share price doubling within a few years, but I’m not sure anymore).

            Also keep in mind that stock markets at their most speculative end (i.e. Tech) have a huge herding effect: everybody wants to jump on the “next big thing” hype train as soon as possible and keeps wanting to as long as it seems to be going (i.e. as long as they think there’s a “Greater Fool” they can dump their overvalued stock on if and when it stops going). So there’s a huge snowballing effect that pushes stock prices and company valuations far beyond anything explainable by actual or likely future improvements in their financial situation: this is how we get Tesla reaching a market valuation greater than the rest of the auto industry put together, even though the former sells far fewer vehicles than the latter.

            Rational considerations in the stock market, especially in its most speculative parts, are not about “how much wealth can this company produce” but about “for how much more money can I sell this stock later”. That’s a question about one’s own “smarts”, about advantages others don’t have (such as insider info), and about the gullibility of others, not about the actual finances and accounting of the company itself, which is why hype works so well to pump up the valuations of Tech companies.

      • NoMoreCocaine@lemmynsfw.com

        The AI technology we’re using isn’t “new”: the core idea is several decades old, with only minor updates since then. We’re just using more parallel processing and bigger datasets to brute-force the “advances”. So no, it’s not actually that new.

        We need a big breakthrough in the technology for it to actually get anywhere. Without one, the bubble is going to burst once the hype dies down.

        • Womble@lemmy.world

          The landmark paper that ushered in the current boom in generative AI (“Attention Is All You Need”, Vaswani et al. 2017) is less than a decade old (and attention itself as a mechanism dates from 2014), so I’m not sure where you’re getting the idea that the core idea is “decades” old. Unless you’re taking the core idea to mean neural networks, or digital computing?

        • EatATaco@lemm.ee

          I just don’t get this. There has not been some huge leap in processing power over the past few years, but there has been in generative AI. Parallel processing, on the other hand, has been around for decades.

          I just don’t know how one can look at this and think there hasn’t been some big step forward in AI, and instead claim it’s all processing power. I think it’s pretty obvious that there has been a huge leap in the generative AI world.

          Also, I’ve been incorporating it more and more. It boggles my mind that someone would look at this and see a passing fad.

  • MehBlah@lemmy.world

    Of course we are. It’s really all hype around half-ass helpful prototypes at the moment.

  • just another dev@lemmy.my-box.dev

    AI isn’t the bubble; that’ll keep on improving, although probably not at this rate.

    The hype bubble is companies adding AI to their product where it offers very little, if any, added value, which is incredibly tedious.

    The latter bubble can burst, and we’ll all be better for it. But generative AI isn’t going anywhere.

    • jj4211@lemmy.world

      We called the dotcom era the dotcom bubble, but that didn’t mean the web went away; it just meant that companies randomly tried stuff and had money thrown at them because the investors had no idea either.

      Same here: it’s an AI bubble because AI is being randomly attempted, without any particular vision, with lots and lots of money, not because the technology is fundamentally a bust.

        • jj4211@lemmy.world

          Yeah, right now the loudest voices are either “AI is ready to do everything right now or in a few months” or “This AI thing is worthless garbage” (both in practice refer to LLMs specifically, even though they just say “AI”; the rest of the AI field is pretty “boringly” accepted right now). Not a whole lot of attention is given to more nuanced takes on what it realistically can and will be able to do or not do, with proponents glossing over the limitations and detractors pretending that every single use of an LLM is telling people to eat rocks and glue.

          • AnarchistArtificer@slrpnk.net

            Yeah, I’m super salty about the hype because, if I had to pick one side or the other, I’d be on team “AI is worthless”, but that’s just because I’d rather try convincing a bunch of skeptics that AI/ML can be super useful when used wisely than try to talk some sense into the AI fanatics. It’s a shame, though, because I feel like the longer the bubble takes to pop, the more harm actual AI research will suffer.

        • A Phlaming Phoenix@lemm.ee

          Some of it is a fad that will go away. Like you indicated, we’re in the “Marketing throws everything at the wall” phase. Soon we’ll be in the “see what sticks” phase. That stuff will hang around and improve, but until we get there we get AI in all conceivable forms whether they’re a worthwhile use of technology or not.

          • afraid_of_zombies@lemmy.world

            I just plan to keep using it. Also, an interesting thing at work: about a year ago I had an idea for a sensing system that could predict when the machinery we sell needs maintenance. No one thought it would work. This past month the CEO told me to go ahead with “his AI idea” and plans for us to file a patent. It’d be my second.

            My point is that, if nothing else, this raised the bar that much higher.

        • TubularTittyFrog@lemmy.world

          it’s a fad in terms of the hype and the superstition.

          it won’t go away. it will just become boring and mostly a business to business concern that is invisible to the end consumer. just like every other big fad of the past 20 years. ‘big data’, ‘crypto’, etc.

          5 years ago everyone was suddenly a ‘data scientist’. where are they now? yeah… exactly.

          • afraid_of_zombies@lemmy.world

            5 years ago everyone was suddenly a ‘data scientist’. where are they now? yeah… exactly.

            Quants. They make more money than most doctors. I know one who is 39 and is retiring.

    • lemmyvore@feddit.nl

      Improving but to what end? If it’s not something that the public will ultimately perceive as useful it will tank no matter how hard it’s pushed.

      I saw a quote that went something like, “I want AI to do my laundry so I can have time for my art, not to do art while I keep doing laundry”.

      Art vs laundry is an extreme example but the gist of it is that it should focus on practical applications of the mundane sort. It’s interesting that it can make passable art but ultimately it’s mediocre and meaningless.

    • egeres@lemmy.world

      This

      AI is actually providing value and advancing at a huge rate; I don’t know how people can dismiss that so easily.

      • lemmyvore@feddit.nl

        How has it helped you personally in every day life?

        And if it’s doing some of your job with prompts that anybody could write, should you be paid less, or should you be replaced by someone juggling several positions?

        • egeres@lemmy.world

          I’m using LLMs to parse and organize information in my file directory: turning bank receipts into JSON files, automatically renaming downloaded movies into a more legible format I prefer, and summarizing clickbaity YouTube videos. I use Copilot in VS Code to code much faster and ChatGPT all the time to discover new libraries and cut fast through boilerplate. I also have a personal assistant that has access to a lot of metrics about my life (meditation streak, when I exercise, the status of my system, etc.) and helps me make decisions…
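
          For what it’s worth, the receipt-to-JSON step described above is roughly this kind of glue code (a minimal sketch, not the commenter’s actual setup; the OpenAI client, the model name, and the extracted fields are all assumptions):

          ```python
          # Sketch: turn plain-text bank receipts into JSON files with an LLM.
          # Assumes the OpenAI Python client with OPENAI_API_KEY set in the environment;
          # the model name and the extracted fields are illustrative only.
          import json
          from pathlib import Path

          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          PROMPT = (
              "Extract merchant, date (ISO 8601), total, and currency from this receipt. "
              "Reply with a single JSON object and nothing else.\n\n"
          )

          def receipt_to_json(receipt: Path) -> dict:
              """Ask the model to structure one receipt's text as a JSON object."""
              response = client.chat.completions.create(
                  model="gpt-4o-mini",
                  messages=[{"role": "user", "content": PROMPT + receipt.read_text(errors="ignore")}],
                  response_format={"type": "json_object"},  # request strict JSON output
              )
              return json.loads(response.choices[0].message.content)

          if __name__ == "__main__":
              # Write receipts/foo.txt -> receipts/foo.json for every receipt found.
              for path in Path("receipts").glob("*.txt"):
                  path.with_suffix(".json").write_text(json.dumps(receipt_to_json(path), indent=2))
          ```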

          I don’t know about you, but I feel like I’m living in an age of wonder.

          I’m not sure what to say about the prompts. I feel like I’m integrating AI into my systems to automate mundane stuff and oversee more information; I think one should be paid for the work and value produced.

        • afraid_of_zombies@lemmy.world

          Your question sounds like a trap but I found a bunch of uses for it.

          • Rewriting emails
          • Learning quickly how to get popular business software to do stuff
          • Wherever I used to use a search engine
          • Set up study sessions on a topic I knew very little about. I scan the text. Read it. Give it to the AI/LLM. Discuss the text. Have it quiz me. Then move to the next page.
          • Used it at a poorly documented art collection to track down pieces.
          • Basically everything I know about baking. If you are curious my posts document the last 7 months or so of my progress.
          • Built a software driver (a task I hate) almost completely by giving it the documentation
          • Set it up so it can make practice tests for my daughter’s schoolwork
          • Explored a wide range of topics

          Now go ahead and point out that I could have done all this myself with just Google, the way we did back in the day. That’s the thing about this stuff. You can always make an argument that some new thing is bad by pointing out it is solving problems that were already solved or solving problems no one cares about. Whenever I get yelled at or hear people complain about opposite things, I know that they just want to be angry and have no argument. It’s just rage-filled throwing of things at the wall to see what sticks.

          • lemmyvore@feddit.nl

            You can always make an argument that some new thing is bad by pointing out it is solving problems that were already solved or solving problems no one cares about.

            That’s not the issue. I’m not a Luddite. The issue is that you can’t rely on its answers: the accuracy varies wildly, and if you trust it implicitly there’s no way of telling what you end up with. The human learning process normally involves comparing information to previous information, some process of vetting, during which your brain “muscles” are exercised so they become better at it all the time. It’s like being fed in bed and never getting out to do anything by yourself, and to top it off you don’t even know if you’re being fed correct information.

            • afraid_of_zombies@lemmy.world

              The issue is that you can’t rely on its answers.

              Cough… Wikipedia…cough. You remember being told how Wikipedia wasn’t accurate and the only true sources were books made by private companies that no one could correct?

              The human learning process normally involves comparing information to previous information, some process of vetting, during which your brain “muscles” are exercised so they become better at it all the time.

              Argument from weakness. Classic luddite move. I am old enough to remember the fears that internet search engines would do this.

              In any case, no one is forcing you to use it. I am sure if you called up Britannica and told them to send you a set, they would be happy to.

    • DudeDudenson@lemmings.world

      Nah, but once the put-AI-on-everything bubble bursts, companies will have a sour taste in their mouths and won’t be so interested in investing in it.

      I believe we’ll get a lot of good improvements out of it, but in people’s minds AI will be that weird thing that never worked quite right. It’ll be another meme, like Cortana on Windows, so it won’t drive stock prices at all unless you’re doing something really cutting edge.

      And good luck competing with the tech giants

  • Ibaudia@lemmy.world

    Yes. Tech is a hype-based sector where the actual value of products is obfuscated by marketing. When the AI craze settles down, hopefully we’ll stop seeing it injected into everything.

    • Croquette@sh.itjust.works

      It will be like the IoT boom of 2015 or so, when everyone and their mother had a new IoT product on Kickstarter.

      I just hope it dies fast so that we can concentrate on the actual utility of LLMs and other AI sectors.

      • Ibaudia@lemmy.world

        I mean AI bullshit like Copilot being added to Windows then quietly paywalled, ridiculous products like the Rabbit R1, and the arms race involving products that aren’t ready for market or produced ethically. LLMs still just make shit up a lot of the time.

  • ChanSecodina@sh.itjust.works

    Anyone remember the dotcom boom (and bust)? The AI hype bubble reminds me a lot of that. It ticks all the same boxes: wild new tech showing up all the time, stratospheric hype, corporate FOMO, a money spigot that seems to be spraying investments at any company with AI in the name, business plans that lose money per unit sold but plan to “make it up at scale.” And unlike the last 16 years this is all happening when interest rates are non-zero so money actually costs something.

    When I think about the dotcom boom and bust I tend to group the companies into 3 or so broad categories:

    • Companies that were doing the right thing at the right time. These are the companies that weren’t necessarily pushing the envelope from a technology perspective; they were building a business model on where the technology was at the time but that could improve as the technology did. In the dotcom days the business model that most exemplifies that was e-commerce. Amazon and eBay grew up in the dotcom era and survived the bust no problem because they were already profitable by the time the investment money stopped flowing.
    • Companies that were way too early. These are the ones that had a great vision but that were too far ahead of the technology curve. Did you know we had online grocery delivery in 1999? Webvan tried to move fast and corner the market but due to mismanagement and the tech and market not being ready they crashed hard in 2001. Grocery delivery is of course totally commonplace today, but even if Webvan wasn’t mismanaged I find it highly unlikely that they could have succeeded when less than half the country even had dialup and the common wisdom of the day was to not type your credit card number online.
    • And last but not least, you’ve got the startups that never really had a business plan and the existing companies just jumping on the hype train because of FOMO. Startups were getting investment dollars just to … build a website. Big companies were putting up totally contentless “web experiences.” Suddenly every breakfast cereal had a website. Did it have nutrition information? No. Online ordering? No. Mostly it was just marketing drivel and maybe a recipe for snack mix if you’re lucky. These are the ones I think of when I hear that Taco Bell is going “AI-First.”

    Anyways, there’s more I could say about why I think this will play out faster than crypto did but this is already a wall of text. For all the people who missed the dotcom boom: Enjoy the hype cycle. It’ll be a smoking crater before you know it. :)

    • Semi-Hemi-Lemmygod@lemmy.world

      I was in high school during the boom and my career plan was:

      1. Go work for a startup doing computer stuff
      2. Get stock options
      3. Retire before 30 when it goes public

      The landscape after graduating college was… different.

      • bobalot@lemmy.world

        I remember 1998.

        It wasn’t as dumb as this shit.

        And these dorks should have learnt from the dot com and crypto-currency crashes.

    • Avid Amoeba@lemmy.ca

      The old default. They’ve been a part of market economies for a very long time. If anything we might have learned to tame them a bit as of late.

  • BrightCandle@lemmy.world

    Pretty much all technology goes through the same odd shape of adoption.

    https://www.infusedinnovations.com/wp-content/uploads/2021/01/Screen-Shot-2020-10-10-at-8.46.48-AM-1.png

    What is often really hard to tell is where you are until you’re definitely in the Trough of Disillusionment. We could be very early on the way up, with human-level or better AI coming, or near the Peak of Inflated Expectations, about to crash down before AI finally finds a use that is less hype and more worthwhile. Regulation will certainly slow things down a bit toward the peak.

    I am not sure whether slightly better chatbots that still lie, and image generators that look reasonably good, are the peak or just the beginning. Progress has been dramatic in the years since the invention, but the cost of training is now immense, and it will take a breakthrough to make further big steps of improvement; I am not sure we are going to make that. A lot of billionaire money is riding on it.

    • barsoap@lemm.ee

      The Trough of Disillusionment is definitely already there in academia, but the market forces that be override it with massive amounts of hype, silicon, and YOLO. Billionaire capital and attitude can provide those; they can’t provide basic research, because velocity is valued way higher than going anywhere sensible.

    • lad

      It depends

    • AnarchistArtificer@slrpnk.net

      Eh, it depends on what we count as “AI”. I’m in a field where machine learning has been a thing for years, and there’s been a huge amount of progress in the last couple of years[1]. However, it’s exhausting that so much is being rebranded as “AI”, because the people holding the purse strings aren’t necessarily the same scientists who are sick of the hype.

      [1] I didn’t get into the more computational side of things until 2021 or so, but if I had to point to a catalyst for this progress, I’d point to the transformer mechanism outlined in the 2017 paper “Attention Is All You Need” by Google scientists.

  • linearchaos@lemmy.world

    Yes, and no.

    The tech is absolutely astounding. Somebody posted a random idea on Facebook, and it prompted me to have it write a conversation between Fred Rogers and Steve Irwin, and it absolutely nailed it. I’ve had it look at pictures of memes and it got details like one person looking at another person.

    Power-wise, though, it’s rather unsustainable. There’s a real cost associated with each one of these queries we’re running and with the price of training it on all that data. There are some jobs for which it makes financial sense to pay for it as a service, but the vast majority of the work we’re sending off to AI is nothing anyone is willing to pay for.

    We’re in an AI bubble because we can only make queries against AI as long as Microsoft and Google decide that it’s in their plans to allow us to do it for free.

    • postscarce@lemmy.world

      You’re forgetting about models that are open source and able to run locally. Llama3 is not the best model, but it’s still very useful and will continue to get better along with the top closed source models.
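
      As a rough illustration of what “run locally” looks like in practice, here’s a minimal sketch; it assumes an Ollama install with the llama3 model pulled, which exposes an OpenAI-compatible API on localhost, so nothing leaves the machine:

      ```python
      # Sketch: chat with a locally hosted Llama 3 via Ollama's OpenAI-compatible API.
      # Assumes "ollama serve" is running and "ollama pull llama3" has been done;
      # the endpoint and model name below are Ollama conventions, not universal.
      from openai import OpenAI

      client = OpenAI(
          base_url="http://localhost:11434/v1",  # local Ollama endpoint
          api_key="ollama",  # required by the client, ignored by the local server
      )

      reply = client.chat.completions.create(
          model="llama3",
          messages=[{"role": "user", "content": "In one paragraph: why do local models matter?"}],
      )
      print(reply.choices[0].message.content)
      ```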

      • aesthelete@lemmy.world

        This round of technology reminds me of when Google was the shiniest thing on the block and everyone was trying to cram a site-specific search engine into their website.

        This resulted in open-source projects such as Lucene, which is incredibly useful but in reality is nothing like Google. Over time interest in these projects faded, and now they’re just another fairly optional component of a website (many sites just use SQL queries rather than a search engine).

        I think chatbots are pretty similar. The premier versions of these things cost way too much to run to be practical for most sites, so sites will play with the scaled-down, easier versions for a time before abandoning the functionality entirely, unless it turns out to be actually useful.

  • geography082@lemm.ee

    Framing it as a question makes me question which reality the author lives in.