nicolas angel wrote:
Hi, I would like to ask something about swap space.
I quote from the book "How Linux Works—What Every Super-User Should Know":
"Reserve two to five times as much disk space as you have real memory
for swap. It doesn't make sense to go any lower, because you may
actually risk running out of memory. If you go higher and actually
intend to use all of this swap space, you will likely suffer serious
performance problems because the system will spend all of its time
swapping (a condition known as thrashing). "
I can't understand why creating a really big swap partition would
cause a performance decrease. It seems to me that in the worst-case
scenario I would just be wasting disk space, because the system will
never use the swap partition if it doesn't need it. Why would that
have a negative impact on the system?
Because on older systems the largest possible file size was 2GB, and
the kernel treats swap space as a 'file'. So if you are running a
32-bit version of Linux and need more than 2GB of swap, you MUST split
it into 2GB 'chunks'. If you specify more, only 2GB will be used, no
harm done, and you have more space to 'dump' the kernel into in case
of a crash. With a 64-bit kernel the 2GB file restriction does not apply.
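If you want to see how your swap is actually laid out, /proc/swaps
lists every active swap area and its size (in KiB). Here is a small
Python sketch along those lines; the 2GB figure is just the old 32-bit
limit mentioned above:

#!/usr/bin/env python3
# Sketch: list the active swap areas from /proc/swaps and flag any
# chunk bigger than the old 32-bit 2GB limit discussed above.
LIMIT_KIB = 2 * 1024 * 1024   # 2GB, since /proc/swaps reports sizes in KiB

with open("/proc/swaps") as f:
    lines = f.readlines()[1:]  # the first line is the column header

for line in lines:
    fields = line.split()
    name, size_kib = fields[0], int(fields[2])
    note = "   <-- bigger than 2GB" if size_kib > LIMIT_KIB else ""
    print(f"{name}: {size_kib // 1024} MiB{note}")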
Also, all modern UNIX/Linux kernels are page-based rather than
swap-based. An old UNIX kernel had to allocate swap space equal to the
size of an application as it loaded. It did that so that if it ran out
of RAM it could write the running image out to the disk location
reserved at load time, and then pull it back in later when it could.
That is why you needed at least twice your RAM for swap: you needed to
map every running application directly to a physical swap space.
Paging kernels don't work that way at all.
Instead, they load pages of a program until the kernel gets 'enough to
start'. Then it begins to run it. The 'enough to start/run' amount of
memory is called the resident set size of the application.
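You can look at the resident set of a process yourself: the kernel
reports it as VmRSS in /proc/<pid>/status (and the full virtual size
as VmSize). A quick sketch that prints its own:

#!/usr/bin/env python3
# Sketch: print this process's own resident set size (VmRSS) and its
# total virtual size (VmSize) from /proc/self/status.
with open("/proc/self/status") as f:
    for line in f:
        if line.startswith(("VmRSS", "VmSize")):
            print(line.strip())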
Paging kernels have a fixed location for kernel requirements in RAM,
then they add the swap space to the end of physical RAM and use the
whole thing as virtual memory. The kernel uses a sliding set of rules
to determine where things get put, in RAM or in swap. Higher-priority
items, like additional kernel requirements (loading a new kernel
module, say), are placed closest to the fixed kernel on that scale.
The lowest-priority items, like the disk cache, go at the opposite end
of the sliding scale. User space (and other things) sits in between
and can grow and shrink as necessary, forcing the disk cache to give
up pages or handing pages back to the cache.
However, the kernel knows where in virtual memory the swap region
begins. If user memory starts to squeeze the disk cache too tightly
(there is a minimum threshold), then parts of the sliding scale do in
fact get pushed out onto physical disk, and swapping occurs.
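If you want to see whether that is happening on a given box right now,
the pswpin/pswpout counters in /proc/vmstat count pages swapped in and
out since boot; if they keep climbing, you are swapping. A rough
sketch that samples them twice:

#!/usr/bin/env python3
# Sketch: sample the swap-in/swap-out page counters from /proc/vmstat
# twice and report the difference, to see whether swapping is going on.
import time

INTERVAL = 5  # seconds between samples; adjust to taste

def swap_counters():
    counts = {}
    with open("/proc/vmstat") as f:
        for line in f:
            key, value = line.split()
            if key in ("pswpin", "pswpout"):
                counts[key] = int(value)
    return counts

before = swap_counters()
time.sleep(INTERVAL)
after = swap_counters()
for key in ("pswpin", "pswpout"):
    print(f"{key}: +{after[key] - before[key]} pages in {INTERVAL} seconds")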
How much swap do you need? With modern kernels it is really a
different question: how much disk do you actually consume when you
swap? Never swap? Then you don't need a physical swap space at all.
All paging can be (and usually is) done in RAM.
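A quick way to check whether a box even has swap configured, and how
much of it is in use, is SwapTotal and SwapFree in /proc/meminfo
(values are in kB):

#!/usr/bin/env python3
# Sketch: report configured and used swap from /proc/meminfo.
info = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, rest = line.split(":", 1)
        info[key] = int(rest.split()[0])  # first field after the colon is the kB value

total, free = info["SwapTotal"], info["SwapFree"]
print(f"swap configured: {total // 1024} MiB, in use: {(total - free) // 1024} MiB")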
Do I ever recommend no swap space? No. But nowadays the answer is 'some'.
1xRAM? Probably fine for most things.
2xRAM? OK, go ahead, but remember the 32-bit rule: stick to 2GB swap
chunks on 32-bit systems.
For servers:
4GB RAM = 8GB swap (remember to do chunks of 2GB on 32-bit systems)
8GB RAM = 8GB swap (there may be some applications that need more, but
it's not Oracle or Informix)
>8GB RAM? You need to study the application mix and whether or not
you may get a RAM-sized kernel dump, and, if so, whether you want it
in the swap space.
It is just fine to run a large Oracle database on a system with 64GB
RAM and no swap. But at that level of cost and system, you will need
to consult with experts before doing something like that.
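For what it's worth, here is my own rough encoding of the rules of
thumb above as a little Python function (suggested_swap_gb is just a
name I made up, not anything standard):

#!/usr/bin/env python3
# Sketch: a rough encoding of the swap-sizing rules of thumb above.
# ram_gb is physical RAM in GB; the return value is a suggested swap size in GB.
def suggested_swap_gb(ram_gb, want_crash_dump=False):
    if ram_gb <= 4:
        swap = ram_gb * 2   # small machines: up to 2 x RAM is fine
    elif ram_gb <= 8:
        swap = 8            # 4-8GB RAM: about 8GB of swap
    else:
        swap = 8            # above 8GB it depends on the application mix; start from "some"
    if want_crash_dump:
        swap = max(swap, ram_gb)  # room for a RAM-sized kernel dump in swap
    return swap

for ram in (2, 4, 8, 16, 64):
    print(f"{ram}GB RAM -> {suggested_swap_gb(ram)}GB swap suggested")

Remember the 2GB-per-chunk rule on 32-bit systems when you actually
carve that out.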
Good luck!