Re: Running on disks that lose their head

It is cool - and it's interesting that more access to the inner workings of the drives would be useful, given that the ATA controller (an evolution of the WD1010 MFM controller) has historically hidden steadily more of the drive's internals, to maintain compatibility with the old CHS addressing (and later LBA).

The streaming command set could (I think) help, since the number of retries can be limited to as few as 3 revolutions, after which the drive returns whatever it has, with the error flag set accordingly. Especially with btrfs and its own checksumming, this seems logical: there are another two copies elsewhere anyway, and you know which of those is good via the FS-level checksum.
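As a rough sketch of that recovery path (all names here are illustrative, the checksum choice is simplified - btrfs actually uses crc32c, not zlib's crc32 - and real reads of course go through the block layer, not Python):

```python
import zlib

def pick_good_replica(replicas, expected_crc):
    """Return the first replica whose FS-level checksum matches.

    Each replica is (data, flagged_bad): the data the drive returned
    after its limited retries, and whether it set the error flag.
    The checksum, not the drive's error flag, is the final arbiter.
    """
    for data, flagged_bad in replicas:
        if zlib.crc32(data) == expected_crc:
            return data
    raise IOError("no replica passed the FS-level checksum")

payload = b"file data"
crc = zlib.crc32(payload)
# One corrupted copy (returned with the error flag set), two good ones:
replicas = [(b"garbled!!", True), (payload, False), (payload, False)]
print(pick_good_replica(replicas, crc) == payload)  # True
```

The point being: the drive handing back bad data quickly, rather than retrying for seconds, is fine when a higher layer can identify and fetch a good copy.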

The head failure is an interesting scenario. But a drive in such a state could surely never complete its own POST with today's firmware - if you take the lid off and scratch up the top surface, that kills the drive immediately, though drives will run without a lid for a while (absent the scratches).

IDE integrated the MFM controller onto the drive; now there seems to be an opportunity to integrate the rest of the system onto the drive too - IDS drives, perhaps? That would let us run our own OS (i.e. Linux) on the drive. If each surface were presented as a separate block device (/dev/sda, /dev/sdb etc.), along with a separate SSD or other NVRAM device for journalling or caching, the disk could be used in whatever way the use case requires, and the drive would surely become much more useful.

Ultimately, cheaper deployments should be possible. Say such drives were 4TB, cost £400 and used 12W; with a 16A per rack power budget, and drives, rack, power, backplane and 10Gb switches all guesstimated, that works out to something like £460 per usable TB to operate for 3 years (about 1.3p/GB-month).
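The 1.3p/GB-month figure follows from the £460/usable-TB guesstimate (all the inputs are the rough estimates above, not measured numbers):

```python
# Back-of-envelope check of the figures above.
cost_per_usable_tb = 460        # £ per usable TB, over the drive's life
months = 3 * 12                 # 3-year operating period
gb_per_tb = 1000                # decimal TB, as drive vendors count

pence_per_gb_month = cost_per_usable_tb * 100 / (gb_per_tb * months)
print(round(pence_per_gb_month, 1))  # 1.3
```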

Would love to know where the info from WD came from ;)


On 2013-11-06 00:32, Loic Dachary wrote:
Hi Ceph,

People from Western Digital suggested ways to better take advantage
of the disk error reporting. They gave two examples that struck my
imagination. First, there are errors that look like the disk is dying
(read/write failures) but are only a transient problem, and the
driver should be able to tell the difference by properly interpreting
the available information. They said that the prolonged life you get
if you don't decommission a disk that only has a transient error is
significant. The second example is when one head out of ten fails:
disks can keep working with the nine remaining heads. Losing 1/10 of
the disk is likely to result in a full re-install of the Ceph osd.
But, again, the disk could keep going after that, with 9/10 of its
original capacity. And Ceph is good at handling osd failures.

All this is news to me and sounds really cool. But I'm sure there are
people who already know about it and I'm eager to hear their opinion
:-)

Cheers

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





