- cross-posted to:
- [email protected]
- [email protected]
- programming
- Aii
cross-posted from: https://feddit.org/post/28915274
cross-posted from: https://feddit.org/post/28915273
[…]
That marketing may have outstripped reality. Early reports from Mythos preview users, including AWS and Mozilla, indicate that while the model is very good and very fast at finding vulnerabilities, and requires less hands-on guidance from security engineers (a welcome time-saver for the human teams), it has yet to eclipse human security researchers.
“So far we’ve found no category or complexity of vulnerability that humans can find that this model can’t,” Mozilla CTO Bobby Holley said, after revealing that Mythos found 271 vulnerabilities in Firefox 150. Then he added: “We also haven’t seen any bugs that couldn’t have been found by an elite human researcher.” In other words, it’s like adding an automated security researcher to your team. Not a zero-day machine that’s too dangerous for the world.
Isn't the dangerous part how many resources were wasted training a model that performs below the current open source offerings?
Not to be too much of a “knew it” person, but I remember that when they first started spreading rumors about how dangerous it was, my bullshit detector went off, because it felt like they were intentionally trying to make it sound like a threat big enough to get attention and, more importantly, funding. I kinda expect it to be adopted, then for opsec at a few bigger companies to start complaining about it sucking within a couple of weeks to months.
I enjoy how much this headline works as a critique of their new model, as well as of the company in general
i dunno if im off base here, but staying 1 step ahead in the infinitely escalating digital security war requires using AI… which will, in turn, merely escalate, rather than prevent, further security threats. everything else in the article seems like fluff
> i dunno if im off base here, but staying 1 step ahead in the infinitely escalating digital security war requires using AI
Or just writing new code in Rust, which is much cheaper and prevents a large fraction of bugs.
While that does mitigate a lot of things, it doesn’t fundamentally guarantee security.
For example, the language itself will not guard against things like SQL injection, path traversal, or shell injection. Core libraries may discourage dangerous patterns, but ultimately you can still use a library, or do something manually, the wrong way.
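To illustrate the point: memory safety says nothing about logic-level flaws, so Rust will happily compile string-built SQL. This is a minimal sketch with a hypothetical query builder and no real database; the vulnerable pattern is the string concatenation itself.

```rust
// Hypothetical example: Rust's borrow checker cannot see that this
// string is destined for a SQL engine, so splicing untrusted input
// into it compiles without any warning.
fn build_query_unsafe(user_input: &str) -> String {
    // BAD: attacker-controlled input concatenated directly into SQL
    format!("SELECT * FROM users WHERE name = '{}'", user_input)
}

fn main() {
    let malicious = "x' OR '1'='1";
    let query = build_query_unsafe(malicious);
    println!("{query}");
    // The injected clause survives intact; a real engine would now
    // match every row in the table.
    assert!(query.contains("OR '1'='1"));
}
```

The fix is the same as in any language: parameterized queries (e.g. bound placeholders in a database driver) rather than string building, which is a library convention, not something the compiler enforces.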
I would even venture that, in this day and age, most vulnerabilities are no longer from C misadventures, between the popularity of languages with more safety rails and better analysis tools…
Supply chain attacks: exist
I do find it funny how Mozilla has created both Rust and Servo, yet Firefox’s Gecko is still written in C/C++
Usually with new security tools, e.g. fuzzers, you catch a whole bunch of bugs, and then that class of bugs is essentially eliminated, but the security arms race switches to different classes of bugs not solved by the tools. So you have a big initial peak of bugs found/fixed that slows to a trickle. Remains to be seen if LLMs follow the same pattern.
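The "initial peak" dynamic above can be sketched in a toy form: a naive random fuzzer hammering a deliberately buggy parser finds the crashing input class almost immediately, after which that class is exhausted. Everything here is hypothetical (the parser, the tiny PRNG, the trial count); it is a std-only illustration, not a real fuzzing harness like cargo-fuzz.

```rust
use std::panic;

// Hypothetical buggy parser: trusts the length-prefix byte
// without checking it against the buffer's actual size.
fn parse_len_prefixed(data: &[u8]) -> Vec<u8> {
    let len = data[0] as usize;
    data[1..1 + len].to_vec() // BUG: can slice past the end -> panic
}

// Tiny deterministic LCG so the sketch needs no external crates.
fn lcg(state: &mut u64) -> u64 {
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    *state >> 33
}

fn main() {
    panic::set_hook(Box::new(|_| {})); // silence panic output from crashes
    let mut state = 42u64;
    let mut crashes = 0;
    for _ in 0..1000 {
        // Random input: 1..=8 bytes of random data.
        let n = (lcg(&mut state) % 8 + 1) as usize;
        let input: Vec<u8> = (0..n).map(|_| (lcg(&mut state) % 256) as u8).collect();
        if panic::catch_unwind(move || parse_len_prefixed(&input)).is_err() {
            crashes += 1;
        }
    }
    println!("crashing inputs found in 1000 tries: {crashes}");
    assert!(crashes > 0); // the buggy class surfaces right away
}
```

Once the bounds check is added, this fuzzer finds nothing further, which is the point: the tool empties one bug class, and attackers move to classes the tool cannot see.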