  • One to watch from a safe distance: dafdef, an “ai browser” aimed at founders and “UCG creators”, named using the traditional amazon-keysmash naming technique and following the ai-companies-must-have-a-logo-suggestive-of-an-anus style guide.

    Dafdef learns your browsing patterns and suggests what you’d do next. After watching you fill out similar forms a few times, Dafdef starts autocompleting them. Apply with your startup to YC, HF0 and A16z without wasting your time.

    So… spicy autocomplete.

    But that’s not all! Tired of your chatbot being unable to control everything on your iphone, due to irksome security features implemented by those control freaks at apple? There’s a way around that!

    Introducing the “ai key”!

    A tiny USB-C key that turns your phone into a trusted AI assistant. It sees your screen, acts on your behalf, and remembers — all while staying under your control.

    I’m sure you can absolutely trust an ai browser connected to a tool that has nearly full control over your phone to not do anything bad, because prompt injection isn’t a thing, right?

    (I say nearly full, because I think Apple Pay requires physical interaction with a phone button or face id, but if dafdef can automate the boring and repetitive parts of using your banking app then having full control of the phone might not matter)

    h/t to ian coldwater



  • the possibility of such power falling into government hands is one that all-but guarantees Nineteen Eighty-Four levels of mass surveillance and invasion of privacy if it comes to pass

    Dealing with an implementation of Grover’s algorithm just means that you need to double the key length of your symmetric ciphers (because it only provides a square-root speedup over brute-force search, which effectively halves your key length). Given that the current recommended key length for eg. AES is 128 bits and we have off-the-shelf implementations that can already handle 256 bit keys, this isn’t really a serious problem.
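    To spell out the arithmetic (a rough sketch that ignores constant factors and the practical difficulty of running that many coherent quantum operations): Grover searches an unstructured space of $N = 2^k$ keys in about $\sqrt{N}$ queries, so

    $$N = 2^{k}\ \text{keys} \implies \text{Grover needs} \approx \sqrt{2^{k}} = 2^{k/2}\ \text{queries:} \qquad 2^{128} \to 2^{64}\ \text{(AES-128: too weak)}, \qquad 2^{256} \to 2^{128}\ \text{(AES-256: still fine)}$$

    Doubling the key length buys back exactly what Grover takes away.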

    A working implementation of Shor’s algorithm would be significantly more problematic, but we’ve already had plenty of work done on post-quantum cryptography, eg. NISTPQC which has given us some standards, and there are even ML-KEM implementations in the wild.

    Even for the paranoid sort who might think that NIST approving a load of new cryptographic algorithms is not because quantum computers are a risk, but because the NSA has already backdoored them, there are things like X-Wing and PQXDH (used in Signal) that combine conventional elliptic-curve cryptography like X25519 with ML-KEM. Even if ML-KEM turns out to be backdoored or vulnerable to a new attack, the tried-and-tested elliptic curve algorithm will still have done its job and your communications should remain secure; and if ML-KEM holds up, your communications will remain secure even against a working quantum computer that can run Shor’s algorithm on large enough numbers.
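    To make the belt-and-braces idea concrete, here’s a minimal sketch of that kind of hybrid key exchange in Python. It is not X-Wing or PQXDH themselves (the real schemes also bind the public keys and ciphertext into the KDF input, among other details), and it assumes the pyca/cryptography package for the X25519 + HKDF half and liboqs-python built with ML-KEM-768 enabled for the post-quantum half:

    ```python
    import oqs  # liboqs-python (assumed available with ML-KEM-768 enabled)
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # classical half: plain X25519 Diffie-Hellman
    alice_ec = X25519PrivateKey.generate()
    bob_ec = X25519PrivateKey.generate()
    ss_classical = alice_ec.exchange(bob_ec.public_key())

    # post-quantum half: ML-KEM encapsulation against bob's KEM public key
    with oqs.KeyEncapsulation("ML-KEM-768") as bob_kem:
        bob_pk = bob_kem.generate_keypair()
        with oqs.KeyEncapsulation("ML-KEM-768") as alice_kem:
            ciphertext, ss_pq = alice_kem.encap_secret(bob_pk)
        assert bob_kem.decap_secret(ciphertext) == ss_pq

    # combine: the session key depends on BOTH shared secrets, so an attacker
    # has to break X25519 *and* ML-KEM to recover it
    session_key = HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"hybrid-kem-demo",
    ).derive(ss_classical + ss_pq)
    ```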

    Honestly though, if a state-level actor wants access to your encrypted secrets, they’ve got plenty of mechanisms to let them do that and don’t need a quantum computer to do it. The classic example might be xkcd (2009) or Mickens (2014):

    If your adversary is the Mossad, YOU’RE GONNA DIE AND THERE’S NOTHING THAT YOU CAN DO ABOUT IT. The Mossad is not intimidated by the fact that you employ https://. If the Mossad wants your data, they’re going to use a drone to replace your cellphone with a piece of uranium that’s shaped like a cellphone, and when you die of tumors filled with tumors, they’re going to hold a press conference and say “It wasn’t us” as they wear t-shirts that say “IT WAS DEFINITELY US,” and then they’re going to buy all of your stuff at your estate sale so that they can directly look at the photos of your vacation instead of reading your insipid emails about them.

    Quantum decryption is a little bit like the y2k problem, in that we have all the tools needed to deal with the issue well in advance of it actually happening. Unlike y2k it may never happen at all, but either way it’s nice not to have to worry about it.



  • It isn’t clear to me at this point that such research will ever be funded in english-speaking places without a significant set of regime changes… no politician or administrator can resist outsourcing their own thinking to llm vendors in exchange for funding. I expect the US educational system will eventually provide a terrible warning to everyone (except the UK, whose government looks at the US and says “oh my god, that’s horrifying. How can we be more like that?”).

    I’m probably just feeling unreasonably pessimistic right now, though.



  • It is related, inasmuch as it’s all generated from the same prompt and the “answer” will be statistically likely to follow from the “reasoning” text. But it is only likely to follow, which is why you can sometimes see a lot of unrelated or incorrect guff in “reasoning” steps that’s misinterpreted as deliberate lying by ai doomers.

    I will confess that I don’t know what shapes the multiple “let me just check” or correction steps you sometimes see. It might just be a response stream that is shaped like self-checking. It is also possible that the response stream is fed through a separate llm session which then pushes its own responses into the context window before the response is finished and sent back to the questioner, but that would boil down to “neural networks pattern matching on each other’s outputs and generating plausible response token streams” rather than any sort of meaningful introspection.

    I would expect the actual systems used by the likes of openai to be far more full of hacks and bodges and work-arounds and let’s-pretend prompts than either you or I could imagine.
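    That second guess is at least easy to sketch, which is part of why it feels plausible. Something like the toy loop below, where complete() is a hypothetical stand-in for any completion endpoint; I have no idea whether any production system is wired up this way, and note that nothing in it amounts to introspection, just more token prediction over a growing context:

    ```python
    def complete(prompt: str) -> str:
        """Hypothetical stand-in for a real LLM completion endpoint;
        returns a canned string here so the sketch runs end to end."""
        return "looks plausible to me"

    def answer_with_self_check(question: str, rounds: int = 2) -> str:
        context = f"Question: {question}\n"
        draft = complete(context + "Reasoning and draft answer:")
        for _ in range(rounds):
            # the "let me just check" step: a second pass pattern-matching on
            # the draft; its critique is itself just more generated tokens
            critique = complete(f"{context}Draft: {draft}\nReview this draft:")
            context += f"Draft: {draft}\nReview notes: {critique}\n"
            draft = complete(context + "Revised answer:")
        return draft
    ```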


  • It’s just more llm output, in the style of “imagine you can reason about the question you’ve just been asked. Explain how you might have come about your answer.” It has no resemblance to how a neural network functions, nor to the output filters the service providers use.

    It’s how the ai doomers get themselves into a flap over “deceptive” models… “omg it lied about its train of thought!” because of course it didn’t lie, it just emitted a stream of tokens that were statistically similar to something classified as reasoning during training.



  • I might be the only person here who thinks that the upcoming quantum bubble has the potential to deliver useful things (but boring useful things, and so harder to build hype on), but stuff like this particularly irritates me:

    https://quantumai.google/

    Quantum fucking ai? Motherfucker,

    • You don’t have ai, you have a chatbot
    • You don’t have a quantum computer, you have a tech demo for a single chip
    • Even if you had both of those things, you wouldn’t have “quantum ai”
    • If you have a very specialist and probably wallet-vaporisingly expensive quantum computer, why the hell would anyone want to glue an idiot chatbot to it, instead of putting it in the hands of competent experts who could actually do useful stuff with it?

    Best case scenario here is that this is how one department of Google gets money out of the other bits of Google, because the internal bean counters cannot control their fiscal sphincters when someone says “ai” to them.




  • I was reading a post by someone trying to make shell scripts with an llm, and at one point the system suggested making a directory called `~` (which is a shorthand for your home directory in a bunch of unix-alikes). When the user pointed out this was bad, the llm recommended remediation using `rm -r ~`, which would of course delete all your stuff.

    So, yeah, don’t let the approximately-correct machine do things by itself, when a single character substitution can destroy all your stuff.
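    For anyone who hasn’t been bitten by this one, the whole failure mode fits in three lines of shell (paths invented for illustration):

    ```sh
    mkdir './~'   # quoting stops tilde expansion: creates a directory literally named "~"
    rm -r ~       # NO: unquoted ~ expands to $HOME, deleting everything you own
    rm -r './~'   # the safe fix: quoted and prefixed, it only removes the stray directory
    ```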

    And JFC, being surprised that something called “YOLO” might be bad? What were people expecting? `--all-the-red-flags`