Re: SCSI vs SATA

On Sat, Apr 07, 2007 at 08:46:33PM -0400, Ron wrote:
> The Google and CMU studies are =not= based on data drawn from 
> businesses where the lesser consequences of an outage are losing 
> $10Ks or $100K per minute... ...and where the greater consequences 
> include the chance of loss of human life.
> Nor are they based on businesses that must rely exclusively on highly 
> skilled and therefore expensive labor.

Google's uptime seems to be quite good. Reliability can be had by trusting
more reputable (and usually more expensive) manufacturers and product lines,
or it can be had through redundancy - the "I" (inexpensive) in RAID.
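
To put a rough number on the redundancy argument (the failure rates below
are invented for illustration; they are not taken from the Google or CMU
data):

    afr_expensive = 0.03    # assumed annual failure rate, one pricier drive
    afr_cheap     = 0.06    # assumed annual failure rate, one cheaper drive

    # Chance of losing the data in a given year, naively:
    single_expensive = afr_expensive            # 3.0%, no redundancy
    mirrored_cheap   = afr_cheap * afr_cheap    # 0.36%, both halves of a
                                                # RAID 1 pair dying that year
    print(single_expensive, mirrored_cheap)

That naive multiplication ignores rebuild windows and correlated failures,
so it flatters the mirror, but it shows why redundancy built from cheap
drives can compete with reliability bought per drive.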

I recall reading the Google study before, and as I remember it, it did
not account for how much it costs to pay the employees who maintain the
system. It would be interesting to know whether the inexpensive drives
require more staff time to be spent on them. Staff time can easily become
more expensive than the drives themselves.

I believe there are factors at play that are not easy to calculate.
Somebody else mentioned that Intel is not the cleanest architecture, and
yet Intel architecture makes up the world's fastest machines, and the
cheapest machines per unit of work completed. There is a numbers game
being played. A manufacturer that sells 10+ million units has the
resources, the profit margin, and the motivation to ensure that their
drives are better than those of a manufacturer that sells 100 thousand
units. Even if the manufacturer of the 100 K units spends double per unit
on development, they would still be spending only 1/50 as much in total
as the manufacturer who makes 10+ million units.
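
The arithmetic behind that 1/50 figure, spelled out (the per-unit
development cost is an arbitrary placeholder, not a real number):

    big_units    = 10000000   # units sold by the large manufacturer
    small_units  = 100000     # units sold by the small manufacturer
    dev_per_unit = 1.0        # large manufacturer's dev spend per unit

    big_budget   = big_units * dev_per_unit          # 10,000,000
    small_budget = small_units * (2 * dev_per_unit)  # double per unit: 200,000

    print(small_budget / big_budget)                 # 0.02, i.e. 1/50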

As for your experience - no disrespect - but if your experience spans
the last 25 years, then you should agree that most of those years are no
longer relevant. SATA has only existed for 5 years or less, and is only
now stabilizing, in the sense that the different layers of a solution
are starting to support features like command queuing. The drives of
today have already broken all sorts of rules that people assumed were
impossible to break 5 years ago, 10 years ago, or 20 years ago. The
playing field is changing. Even if your experience is correct or valid
today, it may not be true tomorrow.
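
If you want to see whether the layers are cooperating on a given Linux
box today, the queue depth the kernel reports is a reasonable hint. This
is only a sketch - the device name is a placeholder, and it assumes the
sysfs layout used by libata, which exposes SATA drives as SCSI devices:

    from pathlib import Path

    device = "sda"  # placeholder; substitute your own drive
    qd = Path("/sys/block") / device / "device" / "queue_depth"

    if qd.exists():
        depth = int(qd.read_text().strip())
        # Roughly speaking, a depth greater than 1 means command
        # queuing is actually in effect end to end.
        print(device, "queue depth:", depth)
    else:
        print(device, "does not expose a queue_depth attribute")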

I consider the drives of today incredible in terms of quality,
reliability, speed, and density. All of the major brands, for desktops
or servers, whether IDE, SATA, or SCSI, are amazing compared to only 10
years ago. To say that they don't meet a standard - which standard?

Everything has a cost. Having a drive that never breaks would have a
very large cost: it costs more to turn 99.9% into 99.99%. Given that the
products will never be perfect, perhaps it is valid to invest in a
low-cost, fast-to-implement recovery solution that assumes that some
number of drives will fail in 6 months, 1 year, 2 years, and 5 years.
Assume they will fail, because regardless of what you buy, there is a
good chance that they *will* fail. Paying double the price for hardware,
in the hope that it will not fail, may not be a good strategy.
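
A sketch of that trade-off, with made-up prices and failure rates (none
of these numbers come from the studies being discussed):

    drives          = 100      # drives in the farm
    years           = 3
    price_cheap     = 100.0    # hypothetical price per cheap drive
    price_expensive = 200.0    # "pay double and hope"
    afr_cheap       = 0.06     # assumed annual failure rate, cheap drive
    afr_expensive   = 0.03     # assumed annual failure rate, pricier drive

    # Expected replacements over the period, replacing drives as they fail.
    repl_cheap     = drives * afr_cheap * years        # ~18 drives
    repl_expensive = drives * afr_expensive * years    # ~9 drives

    cost_cheap     = (drives + repl_cheap) * price_cheap
    cost_expensive = (drives + repl_expensive) * price_expensive

    print(cost_cheap, cost_expensive)                  # 11800.0 vs 21800.0

Even before counting staff time, the "assume they will fail and plan the
recovery" approach wins in this toy example; whether it wins in practice
depends on the real failure rates and on what an outage costs per minute.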

I don't have a conclusion here - only things to consider.

Cheers,
mark

-- 
mark@xxxxxxxxx / markm@xxxxxx / markm@xxxxxxxxxx     __________________________
.  .  _  ._  . .   .__    .  . ._. .__ .   . . .__  | Neighbourhood Coder
|\/| |_| |_| |/    |_     |\/|  |  |_  |   |/  |_   | 
|  | | | | \ | \   |__ .  |  | .|. |__ |__ | \ |__  | Ottawa, Ontario, Canada

  One ring to rule them all, one ring to find them, one ring to bring them all
                       and in the darkness bind them...

                           http://mark.mielke.cc/


