• 32 Posts
  • 741 Comments
Joined 2 years ago
Cake day: August 10th, 2023



  • I’ve heard of thumbnails being used to deliver malware.

    You’ve heard of critical vulnerabilities in media processing applications that mean that thumbnails can theoretically be used to spread malware. That is not the same as “this issue was being actively exploited in the wild and used to spread malware before it was found and patched”.

    These vulnerabilities (which, again, cost money to exploit) are fixed rapidly when found. Yes, disabling thumbnails is more secure. But I am of the belief that average users should not worry about any form of costly zero day in their threat model, because they don’t have sensitive information on their computers that makes them a target.


  • less distro-dependent like a privilege escalation attack

    These are also valuable, though less valuable than browser escapes IMO.

    A keylogger is more likely, and it’s just as possible with sudo as it is with run0. An attacker would replace sudo, run0, doas, etc. with a fake command (since that only requires access to the user account) that either keylogs or inserts a backdoor while otherwise doing what sudo normally does (there’s a rough sketch at the end of this comment of how you might spot-check for that).

    I’ve heard a fair few times about thumbnailer attacks, but no real detail from KDE about what if any mitigations they have in place.

    Please ignore the entire cybersecurity hype news cycle about images being used to spread malware. It often intentionally muddies the waters and doesn’t clearly explain the difference between a malformed file being used to trigger a code execution vulnerability, and an image file being used as a container for a payload (steganography). The former is a big deal; the latter is a non-issue, because the image isn’t the problem, whatever means the malware actually used to get onto the system is.

    Here’s a recent example of me calling this BS out. The clickbait title implies that users got pwned by viewing a malicious image, when in actuality it was a malicious extension that did the bad things.

    Unless you are using Windows Media Player, the Microsoft Office suite, or Adobe Acrobat, code execution from loading a media file is treated as a really big deal and fixed extremely quickly. Just stay updated to dodge these kinds of issues.

    As for zero days (unknown and unpatched vulnerabilities): again, that’s a different threat model, because those exploits cost money to use. Using an existing known vulnerability (one already fixed in updated versions of apps) is free.
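
    Not something I’d tell anyone to rely on, but since I brought up fake sudo commands above, here is a rough Python sketch of how you might spot-check for that kind of shadowing. The rc file names and the list of commands are my own assumptions, and a real implant could hide in plenty of other places:

    ```python
    #!/usr/bin/env python3
    """Rough sketch: look for ways sudo/doas/run0 could be shadowed for the
    current user. Illustrative only; a real fake-sudo keylogger would most
    likely be an alias or function in a shell rc file, which this only greps for."""

    import os
    import shutil
    from pathlib import Path

    TARGETS = ["sudo", "doas", "run0"]                       # commands worth shadowing
    RC_FILES = ["~/.bashrc", "~/.bash_profile", "~/.zshrc"]  # common rc files (assumption)

    def writable_path_dirs():
        """PATH entries the current user can write to; anything placed there
        that sorts before /usr/bin can shadow the real binary."""
        for d in os.environ.get("PATH", "").split(os.pathsep):
            if d and os.access(d, os.W_OK):
                yield d

    if __name__ == "__main__":
        for d in writable_path_dirs():
            print(f"[!] user-writable PATH entry: {d}")

        for name in TARGETS:
            print(f"{name}: resolves to {shutil.which(name)}")

        # Aliases and functions never show up in PATH, so grep the rc files too.
        for rc in RC_FILES:
            p = Path(rc).expanduser()
            if not p.exists():
                continue
            for line in p.read_text(errors="replace").splitlines():
                if any(t in line for t in TARGETS) and ("alias" in line or "()" in line):
                    print(f"[!] {p}: {line.strip()}")
    ```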


  • If I uninstall sudo and switch to run0 (

    Sudo and run0 are both problematic. Sudo is a setuid binary, which is a problem in itself (there’s a sketch at the end of this comment that lists the setuid binaries on a system), but run0 is not much better. It works by making calls to systemd/polkit/dbus, services that constantly run as root and themselves expose a massive attack surface. Many privilege escalation CVEs similar to sudo’s have exploited that attack surface.

    When it comes to actually being secure, systemd somewhat screws you over, due to having a massive attack surface, a way to run things as root, and the interesting decision to have polkit parse and run JavaScript in order to handle authorization logic (parsing is a nightmare to do securely).

    The other thing is that the browser sandbox is much, much stronger than the separation of privileges between users on Linux. Browser sandbox escapes (because they work the same on Windows or Linux) are worth immense amounts of cash, and are the kind of exploit used in targeted attacks against people who have information on their computers worth that much. If you don’t have information worth millions of dollars on your computer, you shouldn’t worry about browser sandbox escape exploits.

    The reality is that any attacker who is willing and able to pierce through a browser sandbox, will probably also have a Linux privilege escalation vulnerability on hand. In my opinion, trying to add more layers to security is pointless unless you are adding stronger layers. If your attacker has a stronger “spear”, it doesn’t matter how many weak “shields” you try to put in front to stop it.

    If the million dollar industry of browser escapes is in your threat model, I recommend checking out the way that OpenBSD’s sandboxing interacts with Chromium. Or check out Google’s gVisor sandbox and see if you can run a browser in there.
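
    Back to the setuid point: here is a minimal Python sketch that lists the setuid-root binaries on a machine. The directories scanned are just common defaults (an assumption, adjust per distro). sudo shows up in that list and run0 doesn’t, but the always-running systemd/polkit/dbus services run0 talks to don’t show up either, which is sort of the problem:

    ```python
    #!/usr/bin/env python3
    """Minimal sketch: list setuid-root binaries, i.e. the attack surface class
    sudo belongs to. Search paths are common defaults, not exhaustive."""

    import stat
    from pathlib import Path

    SEARCH_DIRS = ["/usr/bin", "/usr/sbin", "/usr/local/bin", "/bin", "/sbin"]

    def setuid_binaries(dirs):
        for d in dirs:
            base = Path(d)
            if not base.is_dir():
                continue
            for p in base.iterdir():
                try:
                    st = p.lstat()
                except OSError:
                    continue
                # Regular file, setuid bit set, owned by root: it runs with
                # root privileges no matter which user executes it.
                if stat.S_ISREG(st.st_mode) and (st.st_mode & stat.S_ISUID) and st.st_uid == 0:
                    yield p

    if __name__ == "__main__":
        for path in sorted(set(str(p) for p in setuid_binaries(SEARCH_DIRS))):
            print(path)
    ```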




  • Is this because of the xz utils thing? The backdoor was included in the tarball, but it wasn’t in the git repo.

    By switching away from tarballs they probably hope to prevent that kind of thing, although this article doesn’t mention it. It’s possible this shift has been happening since before the xz utils incident. (A rough sketch of the tarball-vs-git check this guards against is below.)
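
    For what it’s worth, the check that catches this kind of thing looks roughly like the Python sketch below. The paths and arguments are made up, and autotools release tarballs legitimately contain generated files that aren’t in git, so treat the output as a starting point rather than proof of tampering:

    ```python
    #!/usr/bin/env python3
    """Sketch: diff a release tarball against a checkout of the matching git tag,
    flagging files that only exist in the tarball or whose contents differ."""

    import hashlib
    import sys
    import tarfile
    from pathlib import Path

    def tarball_hashes(tar_path):
        """Map member path (minus the top-level 'pkg-1.2.3/' prefix) to sha256."""
        out = {}
        with tarfile.open(tar_path, "r:*") as tf:
            for member in tf:
                if not member.isfile():
                    continue
                rel = "/".join(member.name.split("/")[1:])  # drop top-level dir
                out[rel] = hashlib.sha256(tf.extractfile(member).read()).hexdigest()
        return out

    def checkout_hashes(repo_dir):
        out = {}
        root = Path(repo_dir)
        for p in root.rglob("*"):
            if p.is_file() and ".git" not in p.parts:
                out[p.relative_to(root).as_posix()] = hashlib.sha256(p.read_bytes()).hexdigest()
        return out

    if __name__ == "__main__":
        tar_h = tarball_hashes(sys.argv[1])   # e.g. pkg-1.2.3.tar.gz
        git_h = checkout_hashes(sys.argv[2])  # e.g. a checkout of the v1.2.3 tag

        for f in sorted(set(tar_h) - set(git_h)):
            print(f"tarball-only file: {f}")
        for f in sorted(f for f in set(tar_h) & set(git_h) if tar_h[f] != git_h[f]):
            print(f"content differs:   {f}")
    ```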





  • Not really? From this page, it looks like all you need is a salsa.debian.org account. They call this being a “Debian developer”, but registration on Debian Salsa is open to anybody, and you can just sign up.

    Once you have an account, you can use Debian’s Debusine normally. I don’t really see how this is any different from being required to create an Ubuntu/Launchpad account for a PPA. This is really just pedantic terminology: Debian considers anybody who contributes to their distro in any way to be a “Debian Developer”, whereas Ubuntu doesn’t.

    If you don’t want to create an account, you can self-host Debusine, whereas it doesn’t look like you can self-host the server that powers PPAs. I consider this to be a win for Debusine.






  • Proxmox is based on Debian, with its own virtualization packages and system services that do something very similar to what libvirt does.

    Libvirt + virt-manager also uses QEMU/KVM as its underlying virtual machine software, meaning performance will be identical.

    Perhaps there will be a tiny difference due to libvirt’s use of the more performant SPICE for graphics vs Proxmox’s noVNC, but it doesn’t really matter.

    The true minimal setup is to just use QEMU/KVM directly (rough sketch below), but the virtual machine performance will be the same as with libvirt; all you gain is a very small reduction in overhead.
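
    For anyone curious what “just use qemu kvm directly” looks like, here is a minimal sketch, wrapped in Python only to keep the flags readable. The disk/ISO paths, memory size, and VNC display are placeholder assumptions:

    ```python
    #!/usr/bin/env python3
    """Minimal direct QEMU/KVM launch; libvirt and Proxmox end up driving
    qemu with flags much like these under the hood."""

    import subprocess

    DISK = "/var/lib/vms/test.qcow2"   # hypothetical disk image
    ISO = "/var/lib/isos/debian.iso"   # hypothetical installer ISO

    cmd = [
        "qemu-system-x86_64",
        "-enable-kvm",                 # use the KVM hypervisor
        "-cpu", "host",
        "-smp", "2",
        "-m", "4096",
        "-drive", f"file={DISK},format=qcow2,if=virtio",
        "-cdrom", ISO,
        "-nic", "user,model=virtio-net-pci",
        "-vnc", ":1",                  # view at localhost:5901, roughly what noVNC/SPICE front
    ]

    if __name__ == "__main__":
        subprocess.run(cmd, check=True)
    ```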


  • If this is the thread you are referring to, this is far from “vitriol” or being “combative”. You said it yourself: there are two other users testing who were able to reproduce your issue. And the person who was unable to reproduce your issue is still being helpful, because it confirms that their specific setup (powerful server + Ubuntu snap) doesn’t encounter this issue. Of course they are not going to offer any further troubleshooting advice; what can they do? They aren’t encountering the issue, so they can’t really help you in the hands-on way the other commenters are. So instead they pointed you to some other places you could ask for further troubleshooting. “I can’t help you” is very, very different from “fuck off!”.

    Look, I get it. You’re tired, and probably frustrated. Just take a break or something. It’s clear that making this post didn’t advance your goal of troubleshooting this issue.

    Now, let me take a crack at it. Nextcloud is one of maybe three pieces of software I know of, off the top of my head, that can encounter performance issues when deployed in a manner that doesn’t include an in-memory cache of some sort. It looks like you were trying to install Redis here, although I don’t know how far you got, or whether this was even the same Nextcloud setup.

    But many people frequently encounter performance issues with the manual install that they don’t encounter with “distributions” of Nextcloud that include Redis or other performance optimizations, like the Docker AIO install… or the Snap version that the person who wasn’t encountering the issue used. So yes, knowing that someone doesn’t encounter an issue is useful information to me.

    Can you confirm what deployment method your hosting provider is using for nextcloud? Both here and in the original thread, that would isolate a lot of variables, and it would allow people to give you more precise advice on debugging the service, since debugging a docker or snap version will be different from debugging a raw LAMP stack install. Right now, we are essentially flying blind, so it’s no wonder that no progress has been made.

    have you considered contacting hosting support?

    Of course not. I came to the available discussion forum to investigate a situation which may or may not be a flaw, and is clearly not a hosting company’s responsibility. Besides the fact that they would likely tell me exactly that if I get a response at all, I always explore all other avenues before opening tickets and GitHub issues.

    Lmao. You pay them for a seamless Nextcloud service, and that includes support. But to be blunt, we can’t really help you if we don’t know what the hosting provider is doing.

    If this is a performance optimization problem, you may not have the privileges on the server that you would need to fine-tune Nextcloud and fix it (there’s a sketch of the first check I’d run at the end of this comment, if it turns out you do have shell access).

    If this is a bug, you can’t really see granular logs from the Nextcloud host; same thing.

    Idk what to tell you. You are trying to manage managed Nextcloud like it is self-hosted Nextcloud, and you are getting frustrated when people tell you that you might not have the under-the-hood access needed to fix what you want to fix.
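
    If it does turn out you have shell access on the host, this is roughly the first thing I’d check: whether any memory cache is configured at all. A hedged Python sketch; the occ path and the www-data user are assumptions, and snap/docker/managed installs each wrap occ differently or don’t expose it to you at all:

    ```python
    #!/usr/bin/env python3
    """Sketch: query Nextcloud's occ tool for the standard cache settings.
    Adjust OCC for your install; managed hosting may not let you run this."""

    import subprocess

    OCC = ["sudo", "-u", "www-data", "php", "/var/www/nextcloud/occ"]  # assumed install path
    KEYS = ["memcache.local", "memcache.locking", "memcache.distributed"]

    for key in KEYS:
        result = subprocess.run(OCC + ["config:system:get", key],
                                capture_output=True, text=True)
        value = result.stdout.strip()
        # occ prints nothing when the key isn't set in config.php
        print(f"{key}: {value or 'NOT SET (likely contributing to slowness)'}")
    ```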




  • To copy what I said when this was posted in another community:

    The PNG didn’t do shit. Users were compromised by a malicious extension.

    Steganography (hiding data in a PNG) is a non-issue and cannot do anything independently (there’s a quick sketch of what that looks like at the bottom of this comment). It is also impossible to really stop.

    Which is probably why the cybersecurity news cycle likes to pretend that steganography is a risk on its own, so that they can sell you products to stop this “threat”.

    I hate the clickbait title is what I’m trying to say. But the writeup is pretty interesting.

    Although the real solution to this problem is probably only letting users install known safe extensions from an allowlist, instead of “pay us for consulting!”.
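
    To illustrate the “it’s just a container” point, here is a small pure-stdlib Python sketch that checks whether a PNG has data stuffed after its IEND chunk, one common way payloads get smuggled. A viewer ignores those bytes entirely; something else on the system has to extract and run them. This is an illustration, not a real stego scanner:

    ```python
    #!/usr/bin/env python3
    """Sketch: report any bytes appended after a PNG's final IEND chunk."""

    import sys

    PNG_MAGIC = b"\x89PNG\r\n\x1a\n"

    def trailing_bytes(path):
        data = open(path, "rb").read()
        if not data.startswith(PNG_MAGIC):
            raise ValueError("not a PNG")
        end = data.rfind(b"IEND")
        if end == -1:
            raise ValueError("no IEND chunk found")
        # Skip the 4-byte 'IEND' chunk type and the 4-byte CRC that follows it.
        return data[end + 8:]

    if __name__ == "__main__":
        extra = trailing_bytes(sys.argv[1])
        if extra:
            print(f"{len(extra)} bytes appended after IEND (inert until something extracts them)")
        else:
            print("nothing appended after IEND")
    ```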