Re: [CentOS] Swap: typical rehash. Why?




On Mon, 2006-06-05 at 18:40 -0500, Les Mikesell wrote:
> On Mon, 2006-06-05 at 17:37 -0400, William L. Maltby wrote:
> > > <snip>

> > > > <snip> ... Sys V, had tunables that let admins
> > > > tune to their needs. A single "swappiness" value is woefully inadequate.
> > > 
> > > Actually, having these computed dynamically is much better than
> > > having to manually tune them every time <snip>

> > > ... consider whether you'd rather hire an expert admin to
> > > keep your system tuned or buy an extra gig of ram and let the
> > > OS figure out how to use it.
> > 
> > I agree, sort of. The problem occurs in that the OS is only somewhat
> > heuristic. It learns, but only a small amount. The admin can run SAR
> > reports and use other tools that profile activities and he can use that
> > information to "pre-bias" the VM so that it better achieves *long-term*
> > performance and responsiveness goals and is less sensitive to "spikes"
> > in one factor or another.
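
(To make that "pre-bias" idea concrete on today's Linux: here's a toy
Python sketch of the kind of decision I mean. The pswpin/pswpout
counters in /proc/vmstat and the vm.swappiness sysctl are real Linux
interfaces; the 10-second window and the pages/sec threshold are
invented for illustration - a real profile would come from days of SAR
data, not seconds.)

#!/usr/bin/env python3
"""Toy illustration: sample swap-in/swap-out rates the way a SAR
report would, then suggest a vm.swappiness bias. Thresholds are
invented; a real profile would span days of sar data."""
import time

def swap_counters():
    # pswpin/pswpout are cumulative swapped-page counts in
    # /proc/vmstat (Linux-specific).
    counts = {}
    with open("/proc/vmstat") as f:
        for line in f:
            key, value = line.split()
            if key in ("pswpin", "pswpout"):
                counts[key] = int(value)
    return counts

INTERVAL = 10  # seconds between samples (illustrative only)
before = swap_counters()
time.sleep(INTERVAL)
after = swap_counters()

in_rate = (after["pswpin"] - before["pswpin"]) / INTERVAL
out_rate = (after["pswpout"] - before["pswpout"]) / INTERVAL
print(f"swap-in: {in_rate:.1f} pages/s, swap-out: {out_rate:.1f} pages/s")

# Invented rule of thumb: heavy steady swapping under an interactive
# workload argues for biasing vm.swappiness downward (and vice versa).
if in_rate + out_rate > 100:
    print("suggest: lower vm.swappiness (e.g. sysctl -w vm.swappiness=10)")
else:
    print("suggest: current vm.swappiness is probably fine")
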
> 
> If you remember those tunables, you might also remember that there was
> a whole fairly large (and boring) book that told an administrator how
> to interpret various permutations of SAR reports and which way to adjust
> the tunable values.  Even when Linux had more hard-coded tunables I was
> never able to find any equivalent reference to use them correctly.

Amen to that. When several flavors of real UNIX were being aggressively
marketed by several corporations, when "craftsmanship" and engineering
were somewhat important in keeping your job, when marketing had to offer
"The Complete Package" (TM) in order to woo customers from a competitor,
corporations saw value in investing in good documentation. Plus, your
performance evaluation as an engineer might depend on producing
documents that your PHB could appreciate and that helped to sell the
product. Development was "structured" (in *some* fashion, yes it
*really* was!) and there was peer pressure to "Do a Good Job". The
complete opposite of what seems to be predominant today. I *think* -
there's no way to tell with my limited view of the virtual world. From
what I see, it seems to be "throw the s.*t out there and fix the
reported bugs. Later".

It's a side-effect of labor becoming more and more expensive, hardware
becoming cheaper, and "free software" and "open source" reducing the
returns to business on proprietary products. Now add the fact that a
business can *cheaply* put something out and have it "maintained" for
free, and they rake in the profits...

> 
> > For many business applications, there is a list of performance
> > requirements that can be prioritized. <snip>

> > When the best of the code is combined with the best of an admin's effort
> > and data analysis, a good outcome is likely. Code only or admin with
> > tools/means both produce less optimal results.
> 
> Yes, but if you can define the solution, the computer could probably
> implement it better and certainly faster by itself.  I think these days
> the developers would rather write the code to do it instead of the
> documentation to tell you what needs to be done.

That is so true. None of us (myself, as a long-time developer, included)
enjoyed the documentation process. It's like washing the dishes after
eating a fine meal. Just no interest in that part of the evening.

The trouble with the real UNIX params, aside from the learning-curve
problem you mention, was that the parameters were, in effect, the
instantiation of the *results* of calculations that the admin had to
perform and the programmer had to envision in the first place. Since
the programmer had already envisioned what data had to be processed
for what processing scenarios, an ideal solution would have been a
data entry sheet that accepted various expected load parameters, time
frames, performance objectives, ... and generated the set of
appropriate tunables, which is what you suggest.
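
Something like this Python toy is the shape I have in mind - feed in
the answers from the sheet, get tunables out. Only the sysctl names
(vm.swappiness and vm.dirty_ratio) are real Linux knobs; the
suggest_tunables helper, the mapping, and every threshold are invented
purely to illustrate the idea.

#!/usr/bin/env python3
"""Sketch of the 'data entry sheet' idea: turn declared workload
goals into tunable values. The mapping below is invented; only the
sysctl names (vm.swappiness, vm.dirty_ratio) are real."""

def suggest_tunables(interactive: bool, resident_gb: float, ram_gb: float):
    """Map coarse workload answers to a couple of Linux VM tunables."""
    pressure = resident_gb / ram_gb  # how tight memory is expected to be
    # Interactive work favors keeping anonymous pages resident;
    # batch work can tolerate (or benefit from) earlier swapping.
    swappiness = 10 if interactive else 60
    if pressure > 0.9:  # invented threshold: nearly out of RAM
        swappiness = min(swappiness + 30, 100)
    dirty_ratio = 10 if interactive else 40  # writeback aggressiveness
    return {"vm.swappiness": swappiness, "vm.dirty_ratio": dirty_ratio}

# Example "sheet" for an interactive box expected to use 7 of 8 GB:
for name, value in suggest_tunables(True, 7.0, 8.0).items():
    print(f"{name} = {value}")
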

The often overlooked point in these retrospectives is that hardware
was much less powerful (and still expensive) when all this was
developed (my 1st UNIX PC, a 186 IIRC, with 64K RAM, a 5MB HD, 40ms
seek?, a 12" mono monitor, nada for graphics, ... something like
$4,000). And I specifically recall a DOS-only PC advertised in the
1985 PC Tech Journal with a "huge, fast" 10MB HD and two 5.25" FDs for
only $10,995, with a 12MHz 286 IIRC. It sold like hotcakes.

The point of that is there was good reason the developers did not
write the software to automatically determine the correct parameters:
it was too labor-intensive and too hardware/cost-intensive to
implement. Programmers weren't a dime a dozen back then, and time
constraints also often limited what could be provided in a given
release cycle.

Now, post a request on the web somewhere and either there is already
free software from someone, or someone will develop it for you for
free if you're patient enough.

The documentation will still suck (if you want the truth, read the code)
and a substantial portion of the user community will still be
dissatisfied.

-- 
Bill


