Re: [CentOS] Shrinking a volume group

On Wed, 2006-09-13 at 19:27 -0500, Steve Bergman wrote:
> On Wed, 2006-09-13 at 13:06 -0400, William L. Maltby wrote:
> > <snip some good personal opinions>

> It's not that admins aren't smart enough, these days.
> 
> It's that it's just plain silly to think that a human being could tune
> for these things.
> 
> There is no such thing as a "workload" to be tuned for.  Every time I
> see that word, I have to laugh.  Because it doesn't exist.

As to "workload", I respectfully disagree, based on what follows. I am
glad that you can enjoy the laugh; there's not enough laughter in the
world.

I could raise the same sort of objections to automotive "tuning" that
you raise to OS tuning. You may respond that there are tangibles that
can be measured. And so there are in a computer. And as in a computer
system, precision is lacking in how and when those variables apply: a
car runs different circuits, runs on days when ambient conditions
vary, track conditions change during the race, there are multiple
drivers (a la ALMS), ...

Yet folks successfully tune automobiles today and have done so for
over a century. But not every "mechanic" is capable of producing the
desired results alone, although the mechanic may be capable of
completely rebuilding the car.

And the one who can "tune" the car may be very poor at rebuilding it,
or even at applying the tuning principles by turning a wrench.

As with anything that has variable conditions and/or intangibles that
must be considered (such as necessarily ill-defined workload traits),
or that has imprecise available or predictive metrics (like an
incomplete definition of every possible performance-related activity,
load and timing), the problem is not finding a solution.

The task is to properly formulate a problem that is solvable and applies
to the intended environment. Think "subsets". Of all possible problems,
of all possible solutions, of available expertise, of available
manpower, of available money, ...

Although their task was relatively simple (essentially, they defined
and solved their problem in isolation), that is what the folks who
developed the vm sub-system did. They achieved "success" only because
no one else can find a better "problem definition" that allows a
solution for an audience any broader than the current one *unless* a
higher level of required expertise and expense is to be tolerated (not
likely in this cheap-ass-only world we now inhabit). Better results
could be achieved for a well-defined subset of that audience. And
*that* is why the pointless debates continue: each speaks to a "local"
environment.
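
To make "local" concrete: on a 2.6-era kernel the vm knobs live under
/proc/sys/vm, and tuning for one shop's profile is largely a matter of
picking values for them. A minimal sketch in Python (illustrative
only; the value 10 below is an invented example for a box that should
keep its working set resident, not a recommendation):

VM_SWAPPINESS = "/proc/sys/vm/swappiness"

def read_knob(path):
    """Return the current integer value of a /proc/sys tunable."""
    with open(path) as f:
        return int(f.read().strip())

def write_knob(path, value):
    """Set a /proc/sys tunable (writing requires root)."""
    with open(path, "w") as f:
        f.write(str(value))

if __name__ == "__main__":
    print("current vm.swappiness:", read_knob(VM_SWAPPINESS))
    # Hypothetical choice for this one shop; uncomment to apply:
    # write_knob(VM_SWAPPINESS, 10)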

I did tuning successfully for many years. How do I know I was
successful? Because folks kept paying me money, based on word-of-mouth,
to come and help them make their system "run better". I was almost
always able to do so. But in some cases I had to suggest upgrades
instead
because, after typical interviews and analysis, it was obvious their
system was under-powered for the load vs. performance vs. time-frame
they desired.

There was a willingness to dedicate oneself to the hard work and study
required to understand the available tools, the OS, the user
"profile", etc. And the environment then endorsed that concept:
"craftsmanship", I guess.

In today's "quick-fix-only, lowest-possible-cost, instant-response-
required" world, that may not be possible.

> 
> Perhaps on a large enough system, an admin can reasonably treat a
> workload as a statistical entity, ala thermodynamics.

On a large enough system, there is no debate. Cost is justified.

> 
> But CS equations are never going to be as neat as thermodynamic ones.
> So it just means that when the hammer falls, it's just going to be that
> much more impossible to deal with.

There is a valid point: just give up, roll over and spend more money on
more hardware. It's cheaper than developing/obtaining/maintaining
seldom-used expertise. And since only business drives this, their
parameters are the determining factors.

> 
> The system really needs to tune itself dynamically.

And so you believe that it will be as good as, or better than, a human
at predicting that some external "random" event will occur that needs
"this" particular piece of cache? Or buffer? Theoretically, a
heuristic algorithm could be developed that does better. But that
ultimately comes down to just redefining the problem to be solvable
with a changed set of parameters, which is the same thing a "human"
tuner would do. But it would do it "cheaper", and so biz will be
happy.

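Such a heuristic is easy to sketch and hard to get right. A toy
feedback loop in Python (a sketch only; the interval, step and
threshold numbers are invented for illustration) that watches swap
traffic in /proc/vmstat and nudges vm.swappiness accordingly:

import time

VM_SWAPPINESS = "/proc/sys/vm/swappiness"

def read_knob(path):
    with open(path) as f:
        return int(f.read().strip())

def write_knob(path, value):  # writing requires root
    with open(path, "w") as f:
        f.write(str(value))

def swap_traffic():
    """Total pages swapped in and out since boot, per /proc/vmstat."""
    counters = {}
    with open("/proc/vmstat") as f:
        for line in f:
            name, value = line.split()
            counters[name] = int(value)
    return counters["pswpin"] + counters["pswpout"]

def tune_loop(interval=60, step=5, threshold=1000):
    """Heavy swapping in the last interval lowers vm.swappiness one
    step; a quiet interval raises it again."""
    last = swap_traffic()
    while True:
        time.sleep(interval)
        now = swap_traffic()
        busy = (now - last) > threshold
        last = now
        value = read_knob(VM_SWAPPINESS)
        value = max(0, value - step) if busy else min(100, value + step)
        write_knob(VM_SWAPPINESS, value)

All it really encodes is one narrow problem definition; change the
workload and those constants are wrong again.
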
> 
> I know that you are saying that we can't go back to the days of manual
> tuning.  And I agree.  But for different reasons, I think.

Yes, I think the reasons are different. Apparently, from your
comments, it is because you see the problem as undefinable. I see it
as being due to an environment where all things are driven by cost and
there is no need or regard for a certain "craftsmanship".

> 
> It's not that admins aren't smart enough, these days.
> 
> It's that they never were...

Just so you see where I come from: I started working on UNIX in 1978
and have been doing computer "stuff" since 1969 (including school). I
disagree with the "smart enough" assertion. I believe the old "80/20"
rule (or a minor variation thereof) likely applies. And it would not
be a "smart enough" issue, it would be an "exposure and need" issue:
80% never needed and were not exposed to...

> 
> -Steve
> <snip sig stuff>

--
Bill

_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos
