• 2 Posts
  • 66 Comments
Joined 11 days ago
Cake day: January 7th, 2026


  • I don’t know: it’s not just the outputs posing a risk, but the tools themselves. Stacking more technology can only increase the attack surface, at least as far as I can tell. The fact that these models seem to auto-fill API values without any user interaction is quite unacceptable to me; it shouldn’t require additional tooling to catch such common flaws.
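
    As a minimal illustration of the kind of check I mean: a toy scanner with two made-up credential patterns (real tools such as gitleaks ship far more complete rule sets):

    ```python
    import re
    import sys

    # Hypothetical patterns for common credential shapes; a real scanner
    # carries hundreds of rules plus entropy checks.
    SECRET_PATTERNS = {
        "generic API key": re.compile(
            r"""(?i)api[_-]?key\s*[:=]\s*['"][A-Za-z0-9_\-]{16,}['"]"""
        ),
        "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    }

    def scan(path: str) -> int:
        """Print lines that look like hard-coded credentials; return hit count."""
        hits = 0
        with open(path, encoding="utf-8", errors="replace") as handle:
            for lineno, line in enumerate(handle, start=1):
                for label, pattern in SECRET_PATTERNS.items():
                    if pattern.search(line):
                        print(f"{path}:{lineno}: possible {label}")
                        hits += 1
        return hits

    if __name__ == "__main__":
        # Non-zero exit when anything suspicious is found, CI-style.
        sys.exit(1 if sum(scan(p) for p in sys.argv[1:]) else 0)
    ```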

    Perhaps AI tools in professional contexts are best seen as template search tools: describe the desired template, and the tool provides whichever template it believes most closely matches the prompt. The professional can then “simply” refine that template to the set specifications, or rather use it as inspiration and start fresh, and not end up spending additional time resolving its flaws.




  • I understand you’ve read the comment as a single argument, mainly because it is one. However, the BLE part is an additional piece of critique that isn’t directly related to this specific exploit; neither is the tangent on the headphone jack “substitution”. It is indeed this fast pairing feature that is the subject of the discussed exploit, so you understood that correctly (or I misunderstood it too…).

    I am, however, of the opinion that BLE is a major attack vector by design. These are IoT devices that, especially when “find my device” is enabled (which in many cases isn’t even optional: “turned off” iPhones, for example), periodically announce themselves to the surrounding mesh, allowing for the precise location of these devices, and therefore also of the persons carrying them. If bad actors gain access to, for example, Google’s Sensorvault (legally, in the case of state actors), or find ways of building such databases themselves, then I’d argue you’re in dangerous waters. Is it a convenient feature for relocating lost devices? Yes. But this nice-to-have also comes with a serious downside, which I believe doesn’t come anywhere near justifying the means. Rob Braxman has a decent video about the subject if you’re interested.
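
    To make the “announce themselves periodically” part concrete, here’s a rough sketch of passively collecting those advertisements with the third-party bleak library (assuming a recent bleak version and a BLE-capable adapter); no pairing or consent from the advertising devices is involved:

    ```python
    import asyncio

    from bleak import BleakScanner  # third-party: pip install bleak

    async def main() -> None:
        # Passively listen for BLE advertisements for ten seconds.
        found = await BleakScanner.discover(timeout=10.0, return_adv=True)
        for device, adv in found.values():
            # Address, signal strength and (often) a human-readable name.
            print(device.address, adv.rssi, adv.local_name)

    asyncio.run(main())
    ```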

    It’s not even a case of kids not wanting to switch; most devices don’t even come with a 3.5mm jack anymore…




  • AI reviews don’t replace maintainer code review, nor do they relieve maintainers from their due diligence.

    I can’t help but always be a bit skeptical when reading something like this. To me it’s akin to being required to do calculations manually while there’s a calculator right beside you. For now, the technology might not be considered sufficiently trustworthy, but what if the clanker starts spitting out conclusions that match a maintainer’s, say, 99% of the time? Wouldn’t (partial) automation of the process become extremely tempting, especially when the stack of pull requests starts piling up (because of vibecoding)?

    Such a policy would be near-impossible to enforce anyway. In fact, we’d rather have them transparently disclose the use of AI than hide it and submit the code against our terms. According to our policy, any significant use of AI in a pull request must be disclosed and labelled.

    And how exactly do you enforce that? It seems like you’re just shifting the problem.

    Certain more esoteric concerns about AI code being somehow inherently inferior to “real code” are not based in reality.

    I mean, there are hallucination concerns, and there are licensing conflicts. Sure, people can also copy code from other projects with incompatible licenses, but someone without programming experience is less likely to do so than when vibecoding with a tool trained directly on such material.

    Malicious and deceptive LLMs are absolutely conceivable, but that would bring us back to the saboteur.

    If Microsoft itself were the saboteur, you’d be fucked. They know the maintainers, because GitHub is Microsoft property, and so is the proprietary AI model implemented directly in the toolchain. A malicious version of Copilot could hypothetically be supplied to maintainers, specifically targeting this exploit. Microsoft is NOT your friend; it works closely with government organizations, which are increasingly interested in compromising consumer privacy.

    For now, I do believe this to be a sane approach to AI usage, and I believe developers should have the freedom to choose their preferred environment. But the active use of such tools does warrant a (healthy) dose of critique, especially with regard to privacy-oriented software: a field where AI has generally been rather invasive.


  • Yes, because they constitute a significant portion of the eyes traditionally involved in verifying software. You can allow a potentially cherry-picked group of researchers to do the verification on behalf of the user base, but that hinges on a “trust me bro” basis. I appreciate that you’ve looked into the process in practice, but please understand that these pieces of software are anything but simple. Also, if a state actor were to deliberately implement an exploit, it wouldn’t necessarily be obvious at all, even if the source code were available; they’re state-backed security researchers, at the top of their game, themselves. Even higher-tier consumer-grade computer viruses won’t execute in a virtualized environment, precisely to avoid being detected. They won’t compromise a target when unnecessary, and the exploit might only be triggered when absolutely required, again to avoid suspicion.
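
    As a toy illustration of the sort of environment check meant here (Linux-specific, and assuming the CPU’s “hypervisor” flag is exposed in /proc/cpuinfo, as it is on common x86 VMs):

    ```python
    def looks_virtualized() -> bool:
        """Crude VM check: x86 Linux exposes a 'hypervisor' CPU flag under a VM."""
        with open("/proc/cpuinfo") as cpuinfo:
            return any(
                "hypervisor" in line
                for line in cpuinfo
                if line.startswith("flags")
            )

    if __name__ == "__main__":
        # Evasive malware would simply stay dormant when this returns True.
        print("virtualized" if looks_virtualized() else "bare metal")
    ```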

    I fully agree with the last paragraph though, and believe there to be an overreliance on digital systems overall. In terms of FOSS, you have to rely on many, many different contributors handling maintenance, packaging and distribution in good faith; and sometimes all it takes is one package for the whole system to become compromised. But even so, I’m more comfortable knowing that the majority of software running on my machines is open source than relying on a single entity like Microsoft, which has an abysmal track record with respect to privacy, while operating in the dark. Of course you could restrict access to Microsoft servers using network filtering, but it’s not just that aspect; it’s also not having to deal with Microsoft’s increasingly restricted experience, primarily serving their perverse dark patterns. I do believe people should handle sensitive files with care, for instance: put Tails on a live USB, leave it off the internet, put the files on an encrypted drive, physically disconnect that drive, and store it somewhere safe.



  • Ah sorry, it seems I read past that part. Unless programmers have the exceptional skills and time required to effectively reverse engineer these complex algorithms, nobody will bother to do so, especially when it has to be redone after each update. On the contrary, if source code were available, the bar of entry would be significantly lower and far less specialized skill would be required. So, safe to say, most programmers won’t even bother inspecting a binary unless there’s absolutely no way around it, or they have time to burn. Whereas, if you opened up the source, there would be a lot more, let’s say, C programmers able to inspect the algorithm. Really, have a look at what it requires to write binary code, let alone reverse engineer complicated code that somebody else wrote.
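
    For a taste of what binary inspection actually looks like, here’s a small sketch using the third-party capstone disassembler (the byte string is just an illustrative x86-64 fragment I made up); all the reverser gets back is bare instructions, with no names, types or comments to hold on to:

    ```python
    from capstone import Cs, CS_ARCH_X86, CS_MODE_64  # pip install capstone

    # Illustrative x86-64 bytes, roughly: mov eax, edi; add eax, 1; ret
    CODE = b"\x89\xf8\x83\xc0\x01\xc3"

    md = Cs(CS_ARCH_X86, CS_MODE_64)
    for insn in md.disasm(CODE, 0x1000):
        print(f"0x{insn.address:x}\t{insn.mnemonic}\t{insn.op_str}")
    ```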

    I agree with Linus’ statement though: I rarely inspect source code myself, but I find it comforting to know that package maintainers, for instance, could theoretically check the source before distribution. I stand by my opinion that it’s a bad look for a privacy- and security-oriented piece of software to restrict non-“experts” from inspecting the very thing that is supposed to ensure those properties.


  • If that is true, it might reinforce the concept of valuing expert opinions over those of the democracy, which makes it a technocracy instead. I think we’ve all realized that many, if not all, governments are completely incompetent, and have been for quite a while. But this doesn’t mean a government can appoint unelected “experts” (which literally happened in the Netherlands) to steer the ship with complete disregard for the democratic will. And if one wanted to manufacture consent, it couldn’t be easier than during a deadly pandemic, with all eyes focused on centralized press conferences and active repression of contradicting narratives. Maybe we should outsource politics entirely to digital systems that constantly surveil the public to poll their interests in real time; what could possibly go wrong?

    It’s not just about the masks: (commercial) buildings still utilize the occupancy-level sensors and air-quality monitors that were required during COVID in order to stay operational. What you see happening is that these sensors increasingly feed data to “smart” air filters, heating/cooling systems, lighting, room/workplace reservation systems, and so on, turning them into entire “smart buildings”. This monitoring “for safety” also extends to the regulation of online platforms and of the physical private and public sphere (by use of cameras or alternative sensors). Sure, “safety” has always been a bite-sized argument, but I’d argue the COVID pandemic substantiated it.

    I would argue the COVID pandemic expanded its influence drastically, including into areas previously unexplored. Need an appointment at the barber? You’ve got to plan that using a digital calendar on their website. Need some groceries? Oh, we can now just DoorDash them. Have a job interview? Have a Zoom call instead of coming over in person.

    COVID regulations directly required “contactless payment”, at least here, so that might be a direct consequence of the pandemic.


  • I fully agree with your comment. I can understand you’ve interpreted it that way, and have since updated the body to clarify this. Regarding technology having been increasing its influence since its inception: that may be true; however, I would argue the COVID pandemic expanded its influence drastically, including into areas previously unexplored. Need an appointment at the barber? You’ve got to plan that using a digital calendar on their website. Need some groceries? Oh, we can now just DoorDash them. Have a job interview? Have a Zoom call instead of coming over in person. And I could go on, and on, and on. And regarding your last point: perhaps my issue lies more with the enforcement of expert opinions, and with their being presented as ultimate truths that disregard people’s own views. Although I do agree genuine experts are valuable, there are also plenty who pretend to be exactly that while having a conflict of interest.