I got a cheap netbook-style laptop for traveling a few weeks ago (HP Stream 11" with 4 GB of RAM and an N4120). I didn’t expect much more from this hardware than opening a few browser tabs and doing some retro gaming via Steam.
RAM shared with the graphics card means that only 3.64 GB is effectively usable by the OS. That was too little even to open a handful of tabs without some of them becoming unresponsive for seconds at a time, in a very annoying way. Another thing that caused trouble was the Wifi - I guess it went into power saving, got swapped out and didn’t load back fast enough to provide a good experience. (Of course I wasted an hour checking for Wifi drivers/support.)
In short: even with my low expectations for this laptop, it was an underwhelming experience.
The first step was to look at vm.swappiness and set it to 10, which already helped, but the machine still had hiccups and annoying timeouts.
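(In case anyone wants to do the same - setting it at runtime and persisting it looks roughly like this; the sysctl.d file name is just an example:)

sudo sysctl vm.swappiness=10
echo 'vm.swappiness = 10' | sudo tee /etc/sysctl.d/99-swappiness.conf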
In a last, desperate effort I enabled ZRAM on the laptop… and literally WTF: saying it is a night-and-day difference doesn’t do the experience justice. I’m typing these words on the Stream right now, which I use exactly the same way as my much beefier other machines (my next-worst computer has 8 GB of RAM and an Intel Core i3): browsing with 10 open tabs, e-mail client open on another virtual desktop… it is crazy, it makes the Stream fun to use, and I use it at home for everything that isn’t heavily CPU/IO bound.
What surprised me the most: no hiccups, no timeouts, and it even fixed the Wifi issues on this little machine. I didn’t expect this would be possible, especially with an N4120 and 3.64 GB of RAM.
In short, my laptop went from not even reaching my low/realistic expectations to being my favorite technical purchase of the last few years, thanks to ZRAM.
Besides making this a ZRAM appreciation post, I really want to spread the word about it. Especially for old hardware and limited-RAM situations, IMHO it should be the first thing that comes to mind/gets recommended.
Fedora and PopOS use it by default, so it is well tested and, again IMHO, should be a default at least for desktop setups.
Give it a try - supposedly it even improves the experience on much beefier computers, for gaming etc.
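For anyone who wants to try it quickly: a rough manual sketch (won’t survive a reboot - distro packages like zram-tools or the systemd zram-generator are the cleaner way, and my values here are just examples):

sudo modprobe zram
sudo zramctl --find --size 2G --algorithm zstd   # prints the device it set up, e.g. /dev/zram0
sudo mkswap /dev/zram0
sudo swapon --priority 100 /dev/zram0
swapon --show                                    # verify the new swap device and its priority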
[This comment has been deleted by an automated system]
Thanks, great input.
I totally agree - ZRAM isn’t magic and of course it will fall flat on its face for loads of encrypted, compressed or purely random data. In my limited day-to-day usage I just never hit that situation, so far.
Again, I fully agree - I wouldn’t have expected the N4120 to work so well with ZRAM. For work I am forced to use a recent Mac with loads of RAM. When just browsing the web/checking emails I don’t feel any noticeable difference between the Mac and the Stream on the CPU side. (Of course, CPU/IO-bound tasks are another story, and the display of the Mac is in another league.) Usually I would consider myself quite sensitive to speed - I notice a real difference in desktop responsiveness between Gnome (with the Impatience extension etc.) and Xfce.
I’ll check the BIOS settings; I expect the same as you. Not sure if I will lower the reserved RAM for the iGPU - everything works fine at the moment and I want to try some light gaming on this machine.
I’m not going to remember the right terminology, but you can also configure it with a chunk of disk where it can stick pages it can’t compress, so they don’t end up clogging up your swap space.
Not sure if you are referring to ZSWAP, which is backed by physical swap and writes incompressible pages to the physical swap. ZRAM has, AFAIK, an option called WRITEBACK, which allows it to also use physical swap, but I didn’t find that ZRAM cares whether it can compress a page or not. (Grain of salt, and if someone is more knowledgeable I’d happily be corrected.)
I’m talking about ZRAM and I did mean writeback.
“With CONFIG_ZRAM_WRITEBACK, zram can write idle/incompressible page to backing storage rather than keeping it in memory.”
From https://www.kernel.org/doc/html/latest/admin-guide/blockdev/zram.html
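The knobs look roughly like this (sketch based on that doc; needs CONFIG_ZRAM_WRITEBACK, the backing device has to be set before the zram disksize, and /dev/sdXn is a placeholder):

echo /dev/sdXn > /sys/block/zram0/backing_dev   # backing device for writeback
echo all > /sys/block/zram0/idle                # mark all currently stored pages as idle
echo idle > /sys/block/zram0/writeback          # write idle pages out to the backing device
echo huge > /sys/block/zram0/writeback          # or write out incompressible (“huge”) pages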
Thank you.
One naive question, because it honestly confuses me:
Right now, my ZRAM swap space has a higher priority than my physical swap space. In my tests, this does what I would expect: It doesn’t touch physical swap before the ZRAM is filled.
In my mind it works like this: I should be better off with this setup as long as my swapping only ever touches the ZRAM.
If I configure writeback, does that mean ZRAM could write pages out before it is full, because the algorithm decides a page is idle/does not compress? Additionally, it seems it could wear out my flash memory (“If there are lots of write IO with flash device, potentially, it has flash wearout problem so that admin needs to design write limitation to guarantee storage health for entire product life.”).
In short, to me it looks like a worse option for my use case compared to the two separate swap devices. Any thoughts?
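(For context, setting such priorities by hand would look roughly like this - device names are examples, and the PRIO column of swapon --show then tells you which device the kernel prefers:)

sudo swapon --priority 100 /dev/zram0   # ZRAM first
sudo swapon --priority 10 /dev/sda3     # physical swap only as overflow
swapon --show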
As someone who has always been cautious about SSD writes (possibly overcautious/paranoid? Idk, some seem to think it’s not a concern with modern SSDs, but I haven’t really spent any time researching recently), I always like to have a hard disk as well as an SSD, and I put my writeback device and any swap partitions there.
Sorry this probably isn’t a helpful answer.
No worries, SSDs/sd cards feel different when one grew up with hard disks.
Anecdotal: I have been running Raspberry Pis as home servers for nearly 10 years now. In the beginning, the sd cards would die within a short period of time. (The first few months, perhaps a year?)
I don’t know if it is a combination of better tuning in Raspbian and improvements in sd card technology, but I haven’t had a problem with sd cards for at least 9 years now, and at least one Pi is running 24/7 at all times, with lots of IO. (Full disclosure: I am also over-provisioning the Pis with big sd cards, to hopefully get better wear leveling.)
With SSDs I personally never experienced hardware failures, OTOH with the dropping prices I bought new ones and replaced the old ones regularly.
What I do nowadays on my machines:
- ZRAM w/o writeback and a 2nd physical swap device (the physical swap is never touched, even on my low-memory devices)
- I set the noatime mount option for all my SSDs/sd cards to decrease unnecessary reads/writes
- After a hard disk failure many years ago, my important data is backed up to other machines nearly continuously (that’s the primary role of the 24/7 Pi)
Don’t know if it is helpful, but I would recommend using noatime, experimenting with ZRAM with zstd to see if your physical swap gets touched at all, and using BTRFS for its checksums, to detect any trouble with your SSDs early on.
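(A small sketch of the noatime part and how I’d check whether the physical swap gets touched - the UUID and mount point are placeholders:)

# example /etc/fstab entry
UUID=xxxx-xxxx  /home  btrfs  defaults,noatime  0  0
# check whether the physical swap device sees any usage
cat /proc/swaps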
Programs shouldn’t get confused, since RAM/swap is transparent to them.
Anyone know of an eli5 on zram? I don’t see why just compressing your RAM would make it run better.
It stops having to swap pages to disk. There is slightly more overhead on the CPU side, but for most systems that will be an order of magnitude faster than swapping to disk.
Let me give it a try:
Imagine you are having breakfast, sitting at your breakfast table. Everything on the table and reachable without getting up is what your CPU holds in its registers. When you need something from the fridge in your home, that is your RAM. If you need something that is not in your fridge, you have to get dressed, leave your home, walk to the grocery store which is half an hour away, find what you are looking for, pay for it, walk back home for another half an hour, switch back into your relaxed clothing, and finally you can continue your breakfast. The grocery store is your hard disk/SSD, whatever. With compression, imagine you have a big second fridge in the basement (or in the house next to yours, you get the idea). Not as good as having stuff on your table or in your fridge, but usually at least an order of magnitude better than having to visit the grocery store.
It turns out that usually a significant amount of RAM is compressible. I was surprised by it too, and actually still am. But of course it also depends on how you use your system. If you run a media player that caches a lot to RAM, its cache won’t be compressible, but they say it’s efficient, for example, for the memory of web browsers.
I’m running ZRAM on my old Netgear ReadyNAS boxes, which have 512 MB or 256 MB of RAM. It enables them to do a lot more than they otherwise would be able to while running a modern Linux distribution.
I’ve been so satisfied with it that I even started running it on my modern desktop with 32GB RAM, it helps with my tab addiction :)
Rolling out ZRAM to all my boxes right now! 🙃
Do you also tweak other settings for ZRAM? According to the ArchWiki, PopOS settled on the following settings:
vm.swappiness = 180
vm.watermark_boost_factor = 0
vm.watermark_scale_factor = 125
vm.page-cluster = 0
I am testing these settings right now and cannot say I experience a difference compared to the defaults.
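(In case someone else wants to try them: I just dropped the four vm.* lines above into a sysctl.d file and reloaded, roughly like this - the file name is arbitrary:)

printf '%s\n' 'vm.swappiness = 180' 'vm.watermark_boost_factor = 0' 'vm.watermark_scale_factor = 125' 'vm.page-cluster = 0' | sudo tee /etc/sysctl.d/99-zram-tuning.conf
sudo sysctl --system   # reload all sysctl settings without a reboot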
vm.swappiness
value should be between 0 and 100 IIRC.
Kernel 5.8 changed this, so the value goes from 0 to 200.
https://docs.kernel.org/admin-guide/sysctl/vm.html#swappiness
Yes. I stand corrected.
So the kernel memory management system wants to free up some RAM… it’ll either kick stuff out of the page cache (this is the disk cache) or write some stuff out to swap. vm.swappiness determines the “relative I/O cost” of swapping something out versus dropping some disk cache (i.e. how much you think it’ll slow the system down). The total is 200, so the default vm.swappiness=60 means the page-cache side gets 140 (200-60). 140/60=2.33, so it considers regular disk I/O to be about 2.33 times the speed of swap. Setting vm.swappiness=100 means swap and disk I/O are equally fast; on modern kernels, in case you have a fast swap device (like some auxiliary RAM disk or Optane SSDs or something), you can even set vm.swappiness over 100, say to 150, to say swap is faster than your regular disk.
https://stackoverflow.com/questions/72544562/what-is-vm-swappiness-a-percentage-of
I read several articles on this topic, and it seems there are urban legends and misunderstandings about setting vm.swappiness. The 180 was an experimental result from PopOS and Fedora people, and it only makes sense with ZRAM AFAIK. … anyway, long story short, I ended up with the default of 60 after some testing. ;-)
I don’t really tweak much. I use the Debian default of 50 percent of RAM. For the NASes I tell it to use lz4, as they’re pretty weak CPU-wise.
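(If it helps anyone on Debian: with the zram-tools package this boils down to something like the following, if I remember the file right - values are examples:)

# /etc/default/zramswap
ALGO=lz4
PERCENT=50
# then: sudo systemctl restart zramswap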
Has anyone ever actually benchmarked vm.page-cluster = 0? It makes no sense to me to suggest a CPU is so bottlenecked that disabling read-ahead would actually help. If anything, read-ahead would mitigate the decompression time when it guesses correctly, as the work would already be done if left at the default of 3. Normally the CPU is not the bottleneck when using zram, because zram is quite cheap CPU-wise anyway.
For my own usage, I just enabled ZRAM at 50% with zstd and left all other options unchanged, which seems to be the conservative approach and, more importantly: It works on my machines™.
I would love to see some benchmarks or even a structured run down about the available options and configurations.
The zstd compression algorithm can be overkill depending on your use case. I’d try the lz4 compression algorithm just to see if it sits well with you, since it’s overall faster and less CPU-intensive than zstd, at a possibly negligible loss in compression ratio.
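(Checking which algorithm is active and which are available is easy - zram0 as an example device:)

cat /sys/block/zram0/comp_algorithm   # e.g. “lzo lzo-rle lz4 [zstd]”, the one in brackets is active
# it can only be changed while the device is uninitialized, i.e. before disksize is set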
I have been using zstd for compression with ZRAM, at least on my Raspberry Pi, for years now, and I don’t experience any CPU bottleneck in usual usage, but memory is really scarce there, so I’ll stick with it.
If I remember correctly, other people on reddit, from Fedora and PopOS, did some benchmarks. Fedora defaults to lz4 AFAIK, and PopOS to zstd.
If you like ZRAM, make sure you also enable MG-LRU, and consider using ZSWAP with the z3fold allocator instead, because it’s capable of dumping older compressed pages to the swap file, whereas ZRAM, once full, is simply bypassed until pages in the store are freed.
I do number crunching on memory constrained systems. MG-LRU improves the efficiency of page reclaim, and ZSWAP interacts much more nicely if you have a swap file also.
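(For reference, the runtime knobs involved look roughly like this - needs root and, for MG-LRU, a recent enough kernel:)

echo y > /sys/kernel/mm/lru_gen/enabled            # enable MG-LRU
echo z3fold > /sys/module/zswap/parameters/zpool   # select the z3fold allocator
echo 1 > /sys/module/zswap/parameters/enabled      # turn zswap on
grep -r . /sys/module/zswap/parameters/            # verify the current settings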
Thank you for your advice. Right now my setup works so I’ll try to not waste my entire weekend playing with technology, but if I need further tweaks in the future I’ll look into ZSWAP.
One question: Do you know why Fedora and PopOS decided for ZRAM and against ZSWAP? As I wrote somewhere else already: I chose ZRAM because it is the default in Fedora, which gave me some trust that better informed/more experienced people chose ZRAM over ZSWAP.
Totally understandable. I have done the wasting of weekends just to go back to where I started.
I think that ZRAM has a simpler implementation and has a history of being more widely used whereas ZSWAP is only recently seeing more usage as a default. I suspect it’s in the interest of stability and because the implementation is better characterized.
With that said, MG-LRU is not enabled by default for the same reason, but it has a big impact such that it’s the default on newer Android devices. Stability is a relative term.
Thank you very much for the write up! I’ll have to investigate more about ZSWAP. Have a nice weekend!
This is a theoretical advantage of ZSWAP over ZRAM, but when I researched it, every real world comparison I found seemed to find that ZRAM performed better even when this advantage should have come into play.
I’d be very interested in references to that research so I can improve the performance of my memory constrained systems.
Sorry, it’s been a while since I read this stuff and I don’t have the links. The state of web searching these days sucks and I can’t easily find them.
One bit I remember was that a lot of the concern about LRU inversion in ZRAM that might make ZSWAP look preferable is out of date since the addition of a writeback option to ZRAM. I also remember people claiming that ZRAM had an advantage in being multithreaded.
FWIW, I found this three-year-old answer saying the kswapd that ZSWAP uses is single-threaded, but there is a patch to make it multithreaded that significantly improves its performance. No idea if this is out of date.
Yes, I researched this, and I can’t tell if one of those patches to enable multithreaded compression was accepted. Considering that zswap is making it into some default configurations of big distros, I would hope that they’d do some testing, but I can’t say for sure.
So ZRAM is RAM, but compressed? I didn’t know about it, thanks for sharing!
Technically it is a compressed swap device in RAM, but the compression ratio is impressive and it feels like more RAM to me. My pleasure to share, hope it will help you someday! :-)
It’s a compressed ram disk (virtual block device) that is often used for swap.
I use this on all my Pis.
Some years ago I needed to use more than 100 GB of RAM on a 32 GB RAM computer, repeatedly. zRAM and swapping to an SSD made the whole thing bearable. During the pandemic, RAM was expensive and I was broke.
How does this compare to zswap? For me, if you still want a swap device on a real disk, this might be better? Idk >.<
Edit: arch has zswap enabled by default https://wiki.archlinux.org/title/Zswap - someone below says it is better if you have zswap when you already have a swap device :)
You don’t want to combine zram swap and physical swap. When zram swap is full, you’ll get LRU inversion because it won’t ever evict from zram swap.
Either zram-only or physical+zswap.
So, if you’re using a swap partition/file you don’t want zRAM, but zSwap is okay. Am I understanding that right?
Yup.
Thanks .
I have a very shitty notebook that this is likely to be very useful for ;p
Sorry, just a user myself - ZRAM is the default on Fedora and was easy to set up on Debian, so I am just using it.
On the Stream, I have a 4 GB physical swap partition as a backup, though. AFAIK the physical partition will be used when ZRAM fills up (the ZRAM swap simply has a higher priority).
A lot of Android devices are running this now and it seems to work pretty OK.
Isn’t zswap enabled by default?
having zram + swap on disk isn’t the same as having zswap + swap on disk? the difference should only be that zram shows up as a swap device while zswap does not.
having only zram, you are still confined by the total ram you have. idk what the average compression ratio is, but you can gain 1.5x ram max. to get more, you need a physical swap device.
is there an advantage to using zram instead of zswap when you still have a physical swap with lower priority?
bonus question: What if I use all 3 of them? would this just be redundant?
I used zram + swap for years. I dedicated 25% of my memory to zram. The problem is that zram would get filled with infrequently used data, and disk swap would get the frequently used data. Once that happens everything slows down.
Zswap tries to fix that by creating a compressed swap buffer in memory. Older/less used data will get written to disk, but fresh/frequently used data will stay in the compressed RAM buffer. That’s my understanding, at least. I don’t remember how to query zswap usage stats.
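(The stats I know of live in the module parameters and in debugfs, so roughly:)

grep -r . /sys/module/zswap/parameters/   # current configuration
sudo grep -r . /sys/kernel/debug/zswap/   # e.g. stored_pages, pool_total_size (debugfs must be mounted)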
I am using Debian 12 and I am pretty sure zswap is not enabled by default.
With zstd the compression ratio is better than 3 in all experiments I saw. (Of course you won’t get these numbers for random/encrypted data.) Right now I have 12 MB compressed to 2 MB via ZRAM (fresh reboot), which is a factor of 6.
So, taking 3 as the factor learned from others and from my own experience: with 1.8 GB of ZRAM I can store 5.4 GB of memory; adding the remaining 1.8 GB of regular RAM, I end up with 7.2 GB, which is double the 3.6 GB of RAM I started with. (All this is backed by 4 GB of physical swap.)
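(In case anyone wants to check their own ratio, e.g.:)

zramctl                        # DATA = uncompressed data stored, COMPR = compressed size
cat /sys/block/zram0/mm_stat   # first two fields are orig_data_size and compr_data_size in bytes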
Sorry, I cannot really answer your questions concerning zswap vs ZRAM - I just follow the herd and trust that the Fedora people usually have good reasons for their technical choices and have been deploying ZRAM by default for several years now.
zstd
Just btw, while zstd’s compression ratio might be stronger, it will not be as fast as something like lzo-rle. When it comes to RAM you will definitely want to prefer speed unless you have a strict space usage requirement.
A valid point, and it’s hard to find an objectively right answer (I searched for some information on this topic and found, for example, [this thread](https://www.reddit.com/r/Fedora/comments/mzun99/new_zram_tuning_benchmarks/) on reddit).
In my totally unscientific benchmark of running my backup script with time, LZ4 clearly outperformed both ZSTD and no ZRAM at all while I was generating some CPU load in the background (LZ4 was 14% faster compared with ZSTD, and 19% faster compared to no ZRAM).
I am thinking about switching to LZ4 or even lzo-rle for some time and observing whether I hit the physical swap. If not, I hope to get some speed advantage and perhaps a few minutes of extra battery life.
Arch and RHEL enable zswap by default. Regular x86_64 Ubuntu and Debian do not have zswap nor zram enabled by default.
For me zram made my system more responsive, but oomd didn’t really count it as more RAM, so sometimes things would get killed earlier than I’d want. ZRAM + regular swap fixed this, unless an application was getting really greedy. I kinda wanna try all three now.
check if zswap is already enabled:
zgrep CONFIG_ZSWAP_DEFAULT_ON /proc/config.gz
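and whether it is actually active at runtime:

cat /sys/module/zswap/parameters/enabled   # prints Y or N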
zram did wonders back then for the performance/usability of Palm/HP webOS devices, especially the 1st Pre.
I loved my Palm Pre 😔
I adored my HP Veer - I bet that if I got a working one in my hands I’d blind-type flawlessly again within seconds; I was better on that thing than on any full-size keyboard. Best phone I’ve ever had, because of the keyboard and the ease of customization when you know Javascript. The screen was a bit too small (I consider every modern phone way, way too big) and the speed could have been better, but it wasn’t bad.
Having had massive problems using normal swap in combination with ZFS in out-of-memory situations with high memory pressure, I now use only zram - but as the default - on bare-metal machines.
Thanks [email protected] !
Thanks to your post and plenty of config tutorials, I’ve got this set up on my Oracle Free Tier VPS running Ubuntu. It had recently been struggling hard with 1 GB of RAM, requiring reboots. The difference with zram enabled… WOW.
I haven’t been able to get zram working on my Ubuntu VPS with another provider though - search results suggest that some providers don’t allow swap.
My pleasure, happy my post inspired you and thank you for your kind words! :-)
I have 64 GB of memory in my desktop with 16 GB of zswap. Can’t say I’ve noticed any difference because I haven’t actually been in a situation that uses all this memory yet (aside from some programs leaking memory), but the thought of getting “free” RAM is appealing to me.
ZRAM is great and all, but why would anyone buy a computer with less than 32 gigs these days?
Because not everyone has the means to?
RAM is very cheap these days.
I have 16 gigs…
Not many go over 16 gigs
deleted by creator
i have 8gb on my server, and i need stuff like zram / zswap to keep all the services running. it still swaps out ~10gb, but that is on ssd and is fast enough.