Re: flashcache

2013/1/16 Sage Weil <sage@xxxxxxxxxxx>:
> You should not worry, except to the extent that 2 might fail
> simultaneously, and failures in general are not good things.

Are you talking about two OSDs failing simultaneously (the two holding
the same replicated data)? OK, but that is very unlikely.
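
To put "very unlikely" into rough numbers, here is a back-of-the-envelope
sketch; the 4% annual failure rate and the 24-hour recovery window are
assumptions on my part, not measured values:

  # Chance that the second replica's disk dies inside the recovery
  # window that follows the first failure (independent failures assumed).
  awk 'BEGIN {
      afr      = 0.04   # assumed annual failure rate per disk (4%)
      window_h = 24     # assumed recovery/rebalance window, in hours
      p = afr * window_h / (365 * 24)
      printf "P(second disk fails in window) ~ %.6f\n", p
  }'

That comes out to roughly 0.0001 per incident, and Ceph's re-replication
shrinks the window further.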

> You could use the first (single) disk for os and logs.  You might not even
> bother with raid1, since you will presumably be replicating across hosts.
> When the OSD disk dies, you can re-run your chef/juju/puppet rule or
> whatever provisioning tool is at work to reinstall/configure the OS disk.
> The data on the SSDs and data disks will all be intact.

An OS failure will take the whole node down. That would trigger a
rebalance of many TB of data, which I would prefer to avoid if possible.
I would also rather not use dedicated disks for the OS: I would lose too
much space. With the OS on a RAID-1 partition across the first and
second spinning disks (the same disks also act as OSDs through another,
bigger partition), an OS disk failure will not bring anything down and I
can hot-replace the failed disk.
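
Concretely, the layout would be something like the sketch below; the
device names and the 40 GiB OS partition size are placeholders, not a
tested recipe:

  # First two spinners: small OS partition plus a big OSD partition.
  parted -s /dev/sda mklabel gpt mkpart os 1MiB 40GiB mkpart osd 40GiB 100%
  parted -s /dev/sdb mklabel gpt mkpart os 1MiB 40GiB mkpart osd 40GiB 100%

  # The OS lives on the mirror; /dev/sda2, /dev/sdb2 and the remaining
  # disks are handed to ceph-osd as data partitions.
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mkfs.ext4 /dev/md0

  # After hot-swapping a failed member, re-add it to the mirror:
  #   mdadm --manage /dev/md0 --add /dev/sda1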

RAID-1 will handle the OS reconstruction; Ceph will handle the cluster rebalance.