RE: Good news / bad news - The joys of RAID

I have no idea what the tier 1 vendors say, as I have only worked
within the storage business.. the figures I quoted are based on the last
time I consulted on this and would have been provided by IBM / Seagate, as
these are the only two SCSI vendors we use.   If you really want to dig,
then ask Seagate; they are respected in both camps and will openly
justify the technology and price difference.  They produce extremely
in-depth docs on their testing methods and assumptions.

In terms of reset I am not sure what you mean... we, and all RAID
manufacturers, will reset a SCSI bus on SCSI timeouts.. this is normal
practice and simple to achieve.  It is not achievable on SATA.  I have
not used PATA much, but I do not recall a reset line that we could
trigger at the firmware level.
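
If you want to play with the same idea from a Linux box, something along
these lines will fire a bus reset through the sg driver.  Very much a
sketch: the /dev/sg0 node and the choice of SG_SCSI_RESET_BUS are my
assumptions for illustration, not what any particular controller firmware
actually does.

/* Sketch: issue a SCSI bus reset from userspace via the Linux sg
 * driver's SG_SCSI_RESET ioctl.  /dev/sg0 is an assumption -- use
 * whichever sg node sits on the bus you care about.  A RAID
 * controller does the equivalent in firmware when a command times out. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>

int main(void)
{
    int fd = open("/dev/sg0", O_RDWR | O_NONBLOCK);
    if (fd < 0) {
        perror("open /dev/sg0");
        return 1;
    }

    int op = SG_SCSI_RESET_BUS;   /* reset the whole bus, not just one target */
    if (ioctl(fd, SG_SCSI_RESET, &op) < 0)
        perror("SG_SCSI_RESET");
    else
        printf("bus reset issued\n");

    close(fd);
    return 0;
}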

RAID in isolation does not increase the I/O load, as we all know... but
the reality is that RAID applications do.  None of us can ignore the
cost-effective nature of SATA drives, and this means we can often use RAID
in places where we could not afford or justify SCSI.  Add multiple users
and the stress on the drives increases dramatically.

If you want a real-life situation... one of our SCSI designs is used
around the world and has probably 10m+ users (across many systems).. in
some cases these have been running for 4 / 5 years, and therefore we have
to look at drive replacement.  For a trial we used SATA, obviously to see
if we could save costs or offer an intermediate solution.  We could not
keep a single system going for more than 14 days.  The load varied
between 10-250 users at any one time.. we tried Maxtor and IBM.  There
was also a 40% occurrence of fatal state errors.. this was simply because
the rate at which the drives were failing meant an array was likely to
fail again whilst in a rebuild state and obviously die.
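
To put a very rough number on why the rebuild window becomes the killer,
here is a back-of-envelope sketch.  It assumes independent, exponentially
distributed failures, and the array size, rebuild time and effective MTBF
are made-up figures for illustration, not measurements from the trial
above.

/* Back-of-envelope: chance that a second drive fails before a rebuild
 * completes.  All figures below are illustrative assumptions. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double n_drives      = 8.0;      /* drives in the array; one has failed (assumption) */
    double rebuild_hours = 24.0;     /* time to rebuild onto the spare (assumption) */
    double mtbf_hours    = 50000.0;  /* effective per-drive MTBF under heavy load (assumption) */

    /* probability that at least one of the (n - 1) surviving drives
     * fails before the rebuild completes: 1 - exp(-(n-1) * t / MTBF) */
    double p = 1.0 - exp(-(n_drives - 1.0) * rebuild_hours / mtbf_hours);

    printf("chance of a second failure during rebuild: %.1f%%\n", p * 100.0);
    return 0;
}

Push the effective MTBF down the way a heavy 24x7 multi-user load does and
that percentage climbs quickly.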

Take that same SATA box and stick it in many other applications and it
will last you to your dying day.

You may be right that there have been ATA and SCSI drives manufactured
with the same components excluding the interface.... but the last time I
saw this was during a bearing shortage in '95... I don't know of any
manufacturers today that even hint at this.  But I could well be wrong..

The discussion could probably go on forever, but the point is that we
are not stupid... SATA solutions are probably 30% of the cost of the
SCSI ones..... there is a difference and we know it.  The important thing
is accepting the difference and using the right technology for the right
application. 





-----Original Message-----
From: Mark Hahn [mailto:hahn@xxxxxxxxxxxxxxxxxxx] 
Sent: 20 November 2004 21:58
To: Mark Klarzynski
Subject: RE: Good news / bad news - The joys of RAID

> SATA / IDE drives have an MTBF similar to that of SCSI / Fibre. But this
> is based upon their expected use... i.e. SCSI used to be [power on hours
> = 24hr] [use = 8 hours].. whilst SATA used to be [power on = 8 hours]
> and [use = 20 mins].

can you cite a source for these numbers?  the vendors I talk to
(tier1 system vendors, not disk vendors) usually state 24x7 100%
duty cycles for scsi/fc, and 100% poweron, 20% duty cycles for
PATA/SATA.





