Early BSD VM pre-allocated swap backing for every anonymous page — you couldn't allocate virtual memory without a swap slot reserved for it, even if the page was never paged out.
When a process forked, the child needed swap reservations for the parent's entire address space (before exec replaced it). A large process forking temporarily needed double its swap allocation. If your working set was roughly equal to physical RAM, fork alone got you to 2x.
This was the practical bottleneck people actually hit. Your system had enough RAM, swap wasn't full, but fork() failed because there wasn't enough contiguous swap to reserve. 2x was the number that made fork() stop failing on a reasonably loaded system.
The later overcommit/copy-on-write changes made this less relevant, but the rule of thumb outlived the technical reason. Most people repeating "2x RAM" today are running systems where anonymous pages aren't swap-backed until actually paged out.
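You can still see the old world on a modern Linux box if you're curious: strict commit accounting is mode 2 of the overcommit sysctl. A sketch (the values shown are the stock defaults, not a recommendation):

  # Refuse allocations beyond swap + overcommit_ratio% of RAM
  sysctl vm.overcommit_memory=2
  sysctl vm.overcommit_ratio=50
  # The kernel's view of the reservation ledger:
  grep -E 'CommitLimit|Committed_AS' /proc/meminfo

In that mode, a large process calling fork() can once again fail with ENOMEM while plenty of RAM sits free, which is exactly the failure the 2x rule was sized to avoid.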
Today swap is no longer about extending your address space; it's about giving the kernel room to page out cold anonymous pages so that RAM can be used for disk cache.
A little swap makes the system faster even when you're nowhere near running out of memory, because the kernel can evict pages it hasn't touched in hours and use that RAM for hot file data instead.
The exception is hibernation: you need swap >= RAM for that, which is why Ubuntu's recommendations are higher than Red Hat's 20% of RAM.
Today's swap also isn't preallocated by the user; it's handled entirely by the OS itself. If it needs swap space to hibernate, it will go ahead and allocate it.
It does? Last I checked, Linux doesn't do dynamic swap sizes, and while Windows has dynamic swap sizes, it has a separate big non-dynamic file for hibernation. I have no idea what macOS does.
TBF, I think overcommit was and remains an ugliness in how we manage memory. I wish we'd solved the fork commit-charge-spike issue by encouraging vfork (and later, posix_spawn) more heavily, not by making the OS lie about the availability of memory.
The ship's long sailed though, so even I run with overcommit enabled and only grumble about what might have been.
The OP clearly states that he wants to know the earliest origin of the rule, and the only answers he gets are people giving their own opinions on how much swap space you should have.
Too bad because it's an interesting question that I would also like to know the answer to.
Nope. Those are not the only answers I am seeing. I'm still curious though. 2x was nice because nobody really questioned it. Now that we have, there doesn't seem to be one "answer". This is a fun/interesting question that comes up every now and then here and elsewhere :-)
I suspect someone smarter than me about system tuning will have a much smarter and nuanced answer than “just use 2x”
I thought the modern advice was you don't need it at all. No more spinning disks, so there's no speed gain from putting swap on the fastest part of the platter, and modern OSes manage memory in more advanced and dynamic ways. That's what I choose to believe anyway; I don't need any more hard choices when setting up Linux :)
The main downside to not having swap is that Linux may start discarding clean file-backed pages under memory pressure, when with swap available it could instead go after anonymous pages that are actually cold.
On a related note, your program code is very likely (mostly) clean file-backed pages.
Of course, in the modern era of SSDs this isn't as big a problem, but in the late days of running serious systems with OS and programs on spinning rust, I regularly saw full-blown collapse this way: processes stuck for tens of seconds while every process on the system contended on a single disk, page-faulting as they executed code.
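If you ever want to see that pattern in numbers rather than anecdote, major page faults are the tell. A sketch, assuming the sysstat package for sar:

  sar -B 1    # majflt/s: each major fault is a blocking read from disk
  # accumulated major faults per process, worst offenders first:
  ps -o pid,maj_flt,comm --sort=-maj_flt | head

A swapless box under memory pressure shows the same signature as a thrashing one: majflt/s climbs as the kernel keeps re-reading the code pages it just discarded.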
I don't think that's correct. Having swap still allows you to page out rarely-used pages from RAM and lets that RAM be used for things that positively impact performance, like caching filesystem objects that are actually in use. Pages that are backed by disk (e.g. files) don't need that, but anonymous memory that has, say, been touched once and never read afterwards should have a place to go as well. Also, without swap space the kernel can only evict file-backed pages, instead of including anonymous memory in that choice.
For that reason, I always set up swap space.
Nowadays, some systems also have compression in the virtual memory layer, i.e. rarely used pages get compressed in RAM to use up less space there, without necessarily being paged out (= written to swap). Note that I don't know much about modern virtual memory and how exactly compression interacts with paging out.
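On Linux, one incarnation of this is zswap: a compressed in-RAM cache that sits in front of a regular swap device, so cold pages get compressed first and only spill to disk when the pool fills up. A sketch of turning it on, assuming a kernel built with zswap (the compressor and pool size are just example choices):

  echo 1    > /sys/module/zswap/parameters/enabled
  echo zstd > /sys/module/zswap/parameters/compressor
  echo 20   > /sys/module/zswap/parameters/max_pool_percent   # cap the pool at 20% of RAM
  # or persistently, via the kernel command line: zswap.enabled=1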
Every time I've run out of physical memory on Linux, I've had to just reboot the machine, being unable to issue any kind of command through the input devices. I don't know what it is, but Linux just doesn't seem to be able to deal with that situation cleanly.
The situation mentioned is not about running out of memory, but about being able to use memory more efficiently.
Running out of memory is a hard problem, because in some ways we still assume that computers are Turing machines with an infinite tape. (And in some ways, theoretically, we have to.) But it's not at all clear which memory to free up (by killing processes).
If you are lucky, there's one giant with tens of GB of resident memory usage to kill to put your system back into a usable state, but that's not the only case.
Managed over 50k servers with zero swap. Set the overcommit ratio to 0, configured min_free based on a Red Hat formula, and had application teams keep some memory free. Adjust OOM scores at application startup, especially for database servers, where panic is set to 0.
Servers ranged from 144GB to 3TB of RAM, and that memory is heavily utilized. On servers meant to be stateless app and web servers, panic was set to 2 to reboot on OOM, which mostly occurred in the performance team (constantly load testing hardware and apps) and on a few dev machines where developers were not sharing nicely. Engineered correctly, OOM will be very rare, and this only gets better with time as applications gain more control over memory allocation alongside tools like namespaces/cgroups. Java will always leak; just leave more room for it.
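For reference, the knobs described there look something like this in sysctl form; the min_free_kbytes value is whatever the sizing formula yields for the machine, and the daemon name and score are illustrative:

  # /etc/sysctl.d/99-oom.conf (sketch)
  vm.min_free_kbytes = 1048576    # from the per-machine sizing formula
  vm.panic_on_oom = 0             # database hosts: let the OOM killer run
  # vm.panic_on_oom = 2           # stateless app/web hosts: panic on OOM...
  # kernel.panic = 10             # ...and reboot 10 seconds later

  # at application startup, shield the critical process from the OOM killer:
  echo -1000 > /proc/$(pidof -s mydbd)/oom_score_adj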
I think more people should know about the existence of ZRAM on modern Linux distributions. It's really changed the way I look at swap configs.
ZRAM is a compressed block device that is stored in RAM. It's great!
Previously, if I ever had high memory pressure situations, I really dreaded the slowdowns. Now, with swap sitting on top of /dev/zram0 it's a completely different experience.
I have ZRAM enabled on all of my personal machines, both laptops with limited memory, and desktops with 64 or 128GB of RAM. It's rarely used, but it is nice to have that extra room sometimes.
A zram device is so much faster than even the latest NVMe drives.
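For anyone who wants to try it by hand (most distros ship a ready-made unit such as zram-generator instead), a minimal sketch, with the size and algorithm as arbitrary examples:

  modprobe zram
  zramctl --find --size 8G --algorithm zstd   # prints the device, e.g. /dev/zram0
  mkswap /dev/zram0
  swapon --priority 100 /dev/zram0            # prefer it over any disk-backed swap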
I install more RAM so I can swap less. If I have 8 GB, then the 2x rule says I should have a 16 GB swap file, giving me 24 GB of total memory to work with. If I then stumble upon a good deal on RAM and upgrade to 32 GB, and I never had memory problems with 24 GB, then I should be able to completely disable paging and not have a problem. But instead, the advice would be to increase my paging file to 64 GB!?
It doesn't make any sense. At all.
It's not meant for that kind of comparison. It's a variant of Simpson's paradox. Any individual system with a fixed set of tasks needs less swap when it gets more RAM. But when you look at the aggregate of systems, the systems that have more tasks to run get more RAM to run them, and systems with fewer tasks get less RAM. And since more tasks need more swap, everything scales together (though often not linearly).
I've had arguments with people about this for 20 years now and the most compelling case I heard involved the price of storage vs the price of RAM in the mid to late 1990s, and that this 2x represented an optimal use of money in designing a system at that point in time.
Also I still put my swap partition at the end of the drive out of habit because you used to want to do that for performance reasons when it was stored on a spinning magnet.
Zero. My office workstation has 48 GB of RAM, my home computer has 64 (I went a bit overboard). I have very bad memories of swap thrashing and the computer becoming totally unresponsive until I forced a reset; if I manage to fill up so much RAM, I very much prefer the offending process to die instead of killing the whole computer.
I have 64GB of RAM and 16GB of swap. Swap is small enough it can't get really out of hand.
I have memories from like 20 years ago that even when I had plenty of RAM, and plenty of it was free, I would get random OOM killer events relatively regularly. Adding just a tiny bit of swap made that stop happening.
I'm like 90% sure at this point it's just a stupid superstition I carry. But I'm not gonna stop doing it even though it is stupid.
Same here, though I settled on 32GB of swap because I have a 4TB SSD (caught a good sale on a Samsung EVO SSD at Newegg). But whenever I run `top`, I constantly see:
  MiB Swap: 32768.0 total, 32768.0 free, 0.0 used.
I could safely get away with 4GB of swap, and see no difference.
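If you want to check before shrinking it, the kernel reports per-device usage directly:

  swapon --show    # size, used, and priority for each swap device/file
  free -h          # RAM and swap totals at a glance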
I'm not an expert, but aren't you just reducing the choice of what pages can be offloaded from RAM? Without swap space, only file-backed pages can be written out to reclaim RAM for other uses (including caching). With swap space, rarely used anonymous memory can be written out as well.
Swap space is not just for overcommitting memory (in fact, I suspect nowadays it rarely ever is), but also for improving performance by maximizing efficient usage of RAM.
With 48GB, you're probably fine, but run a few VMs or large programs, and you're backing your kernel into a corner in terms of making RAM available for efficient caching.
It's funny how people think they're disabling swapping just because they don't have a swap file. Where do you think mmap()-ed file pages go? Your machine can still reclaim resident file-backed pages (either by discarding them if they're clean or writing them to their backing file if dirty) and reload them later. That's... swap.
Instead of achieving responsiveness by disabling swap entirely (which is silly, because everyone has some very cold pages that don't deserve to be stuck in memory), people should mlockall essential processes, adjust the kernel's VM swap propensity, and so on.
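The "swap propensity" knob is vm.swappiness; a sketch (the value is illustrative, and mlockall(2) itself has to be called from inside the process you want pinned):

  sysctl vm.swappiness=10                                  # default is 60
  echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swap.conf   # persist it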
Also, I wish we'd just do away with the separation between the anonymous-memory and file-backed-memory subsystems entirely. The only special thing about MAP_ANONYMOUS should be that its backing file is the swap file.
mmap is not swap. It's using the same virtual memory mechanisms to load/dump pages to disk. The policy for when to read and write those pages is completely different.
When the room for memory mapped files gets low enough you get bad thrashing anyway, so the policy difference isn't that important.
Having no swap limits how much you can overburden your computer, but you also hit problems earlier. Here's some example numbers for 64GB of memory: With swap you can go up to 62GB of active program data (85GB allocated and used) before you have performance issues. Without swap you can go up to 45GB of active program data (63GB allocated and used) before you hit a brick wall of either thrashing or killing processes. The no-swap version is better at maintaining snappiness within its happy range, but it's a tradeoff.
It is doing exactly what swap is doing. That it's swap with a different policy doesn't make it not-swap.
Also, that separate policy shouldn't even exist. For LRU/active-list/inactive-list purposes, why does it matter whether a page is anonymous or file-backed? If you need it, you need it, and if you don't, you don't. No reason for anonymous and file-backed memory to be separate sub-sub-systems under vm.
I did similar with my 32GB laptop, but it was fairly flaky for ~4 years, and I just recently put 48GB of swap on and it's been so much better. It's using over 20GB of the swap. There are cases in Linux where running without swap results in situations very similar to swapping too much.
Windows: I set the min size to whatever is necessary to make RAM+swap add up to ~2 GBytes per CPU thread, to avoid problems with parallel Visual Studio builds. (See, e.g., https://devblogs.microsoft.com/cppblog/precompiled-header-pc...) For example, on a 16-thread machine with 16 GBytes of RAM, that means at least a 16 GByte pagefile. Performance is typically fine with ~0.75+ GBytes RAM per job, but if the swapfile isn't preconfigured then Windows can sometimes seemingly refuse to grow it fast enough. Safest to configure it first.
macOS: never found a reason not to just let it do whatever it does. There's a hard limit of ~100 GBytes of swap anyway, for some reason, so either you'll never run out, or macOS is not for you.
Linux: I've always gone for 1x physical RAM, though with modern RAM sizes I don't really know why any more
Fwiw you’ll see technical reasons for swap being a bad idea on servers. These are valid. Virtualised servers don’t really have great ways to make swap work.
On a personal setup, though, there's no reason not to have swap space. Your main RAM gets to cache more files if you let the OS have some space to place allocated but never actually used objects.
As in, 'I don't use swap because I don't use all my RAM' isn't valid, since free RAM caches files on all major OSes. You pretty much always end up using all your RAM. Having swap is purely a win; it lets you cache even more.
But then you're putting data that used to be in RAM onto storage, in order to keep copies of stored data in RAM. Without any advance knowledge of access patterns, it doesn't seem like it buys you anything.
On systems with 32/64/128 GB of ram, I'll typically have a 1GB or 2GB swap. Just so that the system can page out here and there to run optimally. Depending on the system, swap is typically either empty or just has a couple hundred MB kicking around.
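A swap file that size takes about a minute to add (a sketch of the usual steps; the path and size are arbitrary, and on some filesystems you'd use dd rather than fallocate):

  fallocate -l 2G /swapfile
  chmod 600 /swapfile
  mkswap /swapfile
  swapon /swapfile
  echo '/swapfile none swap defaults 0 0' >> /etc/fstab   # make it permanent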
On what OS are you using these settings? I found that Windows will refuse to allocate more virtual memory when the commit charge hits the max RAM size, even if there is plenty of physical memory left to use.
I have 64 GiB of RAM, and programs would start to crash at only 25 GiB of physical memory usage in some workloads because of the high commit charge. I had to re-enable a 64 GiB swap file just to be able to actually use my RAM.
My understanding is that Linux will not crash on the allocation, and instead crashes when too much virtual memory becomes active. Not sure how Mac handles it.
My work laptop currently has 96GB of RAM. 32GB of it is allocated to the graphics portion of the APU. I have 128GB (2x) of swap allocated, since I sometimes do big FPGA synthesis runs, which take up 50GB of RAM on their own. Add another two IDEs and a browser, and my 64GB of remaining RAM is full.
In 1997, people were talking about it on the Slackware Usenet group (https://groups.google.com/g/alt.os.linux.slackware/c/hWy0h_S...):
>Question: Why do you need 500MB of swap space? You would be better of
>spending your money on more RAM than wasting it on so much swap space,
>considering that it would most likely never be used anyways.
I work with systems that have between 256MB and 1GB of RAM and
between 4GB and 16GB available for Linux. My experience with other
operating systems is that swap should be 2X to 3X RAM
...
The info that I have read about Linux is that the 2x for swap space is
only for those running less than 16mb of ram. Your swap space could be
equal to your ram
...
I know there are broken OSes out there where it's recomended to
have 2x RAM swapspace, but Linux is not broken in that way.
With Linux you should have <Max needed memory> - <RAM> swapspace,
and depending on your needs that might range from 0 to infinity
MBs of swap.
...
THIS IS CRAZY!!!! YOU DON'T KNOW WHAT THE F--K YOU'RE TALKING ABOUT.
As has been mentioned a few times in other comments here, I don't believe that's correct. Swap space is not just for "using more memory than you have RAM".
The contents of swap could be read after a power cut.
Edit: oh and I don’t have an actual personal system with swap configuration on it anymore to give my own answer anymore either.
people are too negative these days :|