Intro
Researching what vmcom and vmlim mean in atop's output led me to more interesting info on RAM, and I learned a lot. I want to include the interesting excerpts from which I learned the most.
Programs do not use physical memory directly. Instead, physical RAM together with swap space is mapped onto virtual memory. Pages of virtual memory (usually 4 KB, though the size differs from system to system) are mapped to pages of physical memory (which can be in RAM or in swap).
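You can check the page size on your own system; on most Linux boxes this prints 4096 (bytes):
getconf PAGE_SIZE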
NOTE: a lot of this applies to other systems like Windows (especially the parts on virtual memory and the states of memory).
States
Reference for the section below: http://stackoverflow.com/questions/22174310/windows-commit-size-vs-virtual-size
Memory can be reserved, committed, accessed for the first time, and be part of the working set. When memory is reserved, a portion of address space is set aside; nothing else happens.
When memory is committed, the operating system guarantees that the corresponding pages could in principle exist either in physical RAM or in the page file. In other words, the committed memory counts toward the system's hard limit of total available pages, and the pages are formally created. That is, the OS creates pages and pretends that they exist (when in reality they don't exist yet).
When memory is accessed for the first time, the pages that formally exist are actually created, so they truly exist. Either a zero page is supplied to the process, or data is read into the page from a mapping. The page is moved into the working set of the process (but will not necessarily remain there forever).
Every running process has a number of pages which are factually and logically in RAM; that is, these pages exist, and they exist "officially", too. This is the process's working set.
Further, every running process has pages that are factually in RAM but no longer officially exist in RAM. They may be on what's called the "standby list", or part of the buffer cache, or something else. When these are accessed, the OS may simply move them back into the working set.
Lastly, every process has pages that are not in RAM at all (either on swap or they don’t exist yet).
Virtual size comprises the size of all pages that the process has reserved.
Commit size only comprises pages that have been committed.
That is, in layman's terms, "virtual size" is pretty much your own problem, limited only by the size of your address space, whereas "commit size" is everybody's problem since it consumes a globally limited resource (RAM plus swap). It therefore affects other processes.
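Those are Windows terms, but Linux exposes rough per-process equivalents in /proc/<pid>/status: VmSize is the virtual size and VmRSS is (roughly) the working set. For example, to look at the current shell:
grep -E 'VmSize|VmRSS|VmSwap' /proc/$$/status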
How memory commit is handled
Reference for the section below: http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/vm/overcommit-accounting?id=HEAD
The Linux kernel supports the following overcommit handling modes.
The current mode can be viewed with sysctl (and set with the same tool; see the example after the list below):
sysctl -a | grep vm.overcommit_memory
This vm.overcommit_memory property can take one of 3 values: 0, 1, or 2, with these meanings:
0 – Heuristic overcommit handling. Obvious overcommits of address space are refused. Used for a typical system. It ensures a seriously wild allocation fails while allowing overcommit to reduce swap usage. root is allowed to allocate slightly more memory in this mode. This is the default.
1 – Always overcommit. Appropriate for some scientific applications. Classic example is code using sparse arrays and just relying on the virtual memory consisting almost entirely of zero pages.
2 – Don't overcommit. The total address space commit for the system is not permitted to exceed swap + a configurable amount (default is 50%) of physical RAM. Depending on the amount you use, in most situations this means a process will not be killed while accessing pages but will receive errors on memory allocation as appropriate. Useful for applications that want to guarantee their memory allocations will be available in the future without having to initialize every page.
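For example, to check the current mode and switch to strict accounting (this needs root, and the change does not survive a reboot unless it is also written to /etc/sysctl.conf):
sysctl vm.overcommit_memory        # view the current mode
sysctl -w vm.overcommit_memory=2   # switch to strict (no overcommit) mode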
The overcommit policy is set via the sysctl vm.overcommit_memory.
The overcommit amount can be set via vm.overcommit_ratio (percentage) or vm.overcommit_kbytes (absolute value).
The current overcommit limit and amount committed are viewable in
/proc/meminfo as CommitLimit and Committed_AS respectively.
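For instance:
grep -E 'CommitLimit|Committed_AS' /proc/meminfo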
More info
When /proc/sys/vm/overcommit_memory is 2 (or sysctl vm.overcommit_memory is 2, which is the same property, just accessed from another place), any process requesting memory from the OS while vmcom is greater than vmlim will receive errors (even if physical memory is available).
vmlim = SwapSize + 0.5 * RamSize
That 0.5, or 50% (as it's written into the property), is held in /proc/sys/vm/overcommit_ratio.
Defaults: vm.overcommit_memory is 0 by default, and vm.overcommit_ratio is 50%.
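A quick sketch to verify the formula yourself: it computes the limit from /proc/meminfo and the current ratio, and should match CommitLimit (assuming vm.overcommit_kbytes is 0 and ignoring any hugepage reservations):
ratio=$(cat /proc/sys/vm/overcommit_ratio)
awk -v r="$ratio" '/^MemTotal:/ {m=$2} /^SwapTotal:/ {s=$2} END {print (s + (r/100)*m) " kB"}' /proc/meminfo
grep CommitLimit /proc/meminfo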
Consequences of Overcommit
Read up on it, but a system that overcommits can run into a little problem called OOM (out of memory). Read more here:
http://en.wikipedia.org/wiki/Out_of_memory
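On Linux you can check whether the OOM killer has struck by searching the kernel log, for example:
dmesg | grep -i 'out of memory'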
ATOP
Atop's man page information on memory
For MEM (memory occupation), here is the explanation:
This line contains the total amount of physical memory (‘tot’), the amount of memory which is currently free (‘free’), the amount of memory in use as page cache (‘cache’), the amount of memory within the page cache that has to be flushed to disk (‘dirty’), the amount of memory used for filesystem meta data (‘buff’) and the amount of memory being used for kernel malloc’s (‘slab’ – always 0 for kernel 2.4).
If the screen-width does not allow all of these counters, only a relevant subset is shown.
For SWP (swap occupation and overcommit info), here is the explanation:
This line contains the total amount of swap space on disk (‘tot’) and the amount of free swap space (‘free’).
Furthermore the committed virtual memory space (‘vmcom’) and the maximum limit of the committed space (‘vmlim’, which is by default swap size plus 50% of memory size) is shown. The committed space is the reserved virtual space for all allocations of private memory space for processes. The kernel only verifies whether the committed space exceeds the limit if strict overcommit handling is configured (vm.overcommit_memory is 2).
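The vmcom and vmlim values on atop's SWP line correspond to Committed_AS and CommitLimit in /proc/meminfo. To watch them live, for example:
atop -m 2    # memory-oriented view, refreshing every 2 seconds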
Linux Kernel Memory Management
The Linux kernel uses a portion of its RAM (not swap; kernel memory is not swapped out) for certain kernel operations, set aside starting at boot. Three kernel memory management techniques are used to manage this RAM (only one per running system, as the technique is chosen when the kernel is configured, i.e. at kernel compile time). The three techniques are SLAB, SLUB, and SLOB. SLUB is the current default and builds off SLAB. SLOB is best used in embedded machines with little memory.
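To see which allocator your kernel was built with (the config file path is distro-dependent) and to inspect the slab caches in use:
grep -E 'CONFIG_SLAB=|CONFIG_SLUB=|CONFIG_SLOB=' /boot/config-$(uname -r)
sudo slabtop -o    # one-shot dump of the kernel's slab caches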
Here is more information from external sources:
First read this about memory management (the techniques above, SLAB and SLUB, solve memory fragmentation, both external and internal; SLOB uses first-fit allocation, so it's stuck with some fragmentation issues): http://en.wikipedia.org/wiki/Memory_management
Reference for the section below: http://stackoverflow.com/questions/15470560/what-to-choose-between-slab-and-slub-allocator-in-linux-kernel
First, “slab” has become a generic name referring to a memory allocation strategy employing an object cache, enabling efficient allocation and deallocation of kernel objects. It was first documented by Sun engineer Jeff Bonwick[1] and implemented in the Solaris 2.4 kernel.
Linux currently offers three choices for its “slab” allocator:
Slab is the original, based on Bonwick’s seminal paper and available since Linux kernel version 2.2. It is a faithful implementation of Bonwick’s proposal, augmented by the multiprocessor changes described in Bonwick’s follow-up paper[2]. (more info: http://en.wikipedia.org/wiki/Slab_allocation)
Slub is the next-generation replacement memory allocator, which has been the default in the Linux kernel since 2.6.23. It continues to employ the basic “slab” model, but fixes several deficiencies in Slab’s design, particularly around systems with large numbers of processors. Slub is simpler than Slab. (more info: http://en.wikipedia.org/wiki/SLUB_(software))
SLOB (Simple List Of Blocks) is a memory allocator optimized for embedded systems with very little memory (on the order of megabytes). It applies a very simple first-fit algorithm on a list of blocks, not unlike the old K&R-style heap allocator. In eliminating nearly all of the overhead from the memory allocator, SLOB is a good fit for systems under extreme memory constraints, but it offers none of the benefits described in [1] and can suffer from pathological fragmentation. (more info: http://en.wikipedia.org/wiki/SLOB)
What should you use? Slub, unless you are building a kernel for an embedded device with limited memory. In that case, I would benchmark Slub versus SLOB and see what works best for your workload. There is no reason to use Slab; it will likely be removed from future Linux kernel releases.
Reference for the section below: http://en.wikipedia.org/wiki/SLOB
SLOB currently uses a first-fit algorithm, which uses the first available space for memory. Linus Torvalds, in a reply on a Linux mailing list[1], suggested the use of a best-fit algorithm, which tries to find the memory block which suits the needs best. Best fit finds the smallest space which fits the required amount, avoiding loss of performance both from fragmentation and from consolidation of memory.