• Riskable
    22 months ago

    I can answer one of these criticisms regarding innovation: AI is incredibly inefficient at what it does. From training to inference, it’s only a fraction as efficient as it could be. For this reason, most of the innovation going on in AI right now is aimed at improving efficiency.

    We’ve already seen massive improvements to things like AI image generation (e.g. SDXL Turbo, which can generate an image in 1 second instead of 10), and there are new LLMs coming out all the time that are a fraction of the size of their predecessors, use a fraction of the computing power, and yet perform better for most use cases.

    There are other innovations, such as ternary training and execution, that have the potential to reduce power requirements by factors of a thousand to a million. If ternary AI models turn out to be workable in the real world (I see no reason why they couldn’t), we’ll be able to get the equivalent of GPT-4 running locally on our phones, and it won’t even be a blip on the radar from a battery life perspective, nor will it require more powerful CPUs/GPUs.
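
    To make the ternary claim concrete, here’s a toy NumPy sketch (my own illustration, not any particular paper’s method; the ternarize/ternary_matvec names and the 0.05 threshold are made up for the example): weights get snapped to -1, 0, or +1, so the matrix-vector product reduces to additions and subtractions with no floating-point multiplies in the inner loop, which is where the large energy savings would come from.

```python
import numpy as np

def ternarize(w, threshold=0.05):
    """Quantize full-precision weights to {-1, 0, +1}.

    Toy absmean-style scheme: values near zero become 0, the rest keep
    only their sign; one per-matrix scale roughly preserves magnitudes.
    """
    scale = np.mean(np.abs(w))              # single float scale per matrix
    t = np.zeros_like(w, dtype=np.int8)
    t[w > threshold * scale] = 1
    t[w < -threshold * scale] = -1
    return t, scale

def ternary_matvec(t, scale, x):
    """Matrix-vector product with ternary weights.

    Because every weight is -1, 0, or +1, each output element is just a
    sum of selected inputs minus another sum -- no multiplications needed.
    """
    out = np.zeros(t.shape[0], dtype=x.dtype)
    for i in range(t.shape[0]):
        out[i] = x[t[i] == 1].sum() - x[t[i] == -1].sum()
    return out * scale

# Toy usage: a 4x8 "layer" applied to a random input vector
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8)).astype(np.float32)
x = rng.normal(size=8).astype(np.float32)
t, s = ternarize(w)
print(ternary_matvec(t, s, x))   # rough approximation of w @ x
```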