Re: Ceph and RAID

On 10/03/2013 12:40 PM, Andy Paluch wrote:

> Don't you have to take down a Ceph node to replace a defective drive? If I have a
> Ceph node with 12 disks and one goes bad, would I not have to take the entire
> node down to replace it and then reformat?
> 
> If I have a hot-swap chassis but am using just an HBA to connect my drives, will the
> OS (say, the latest Ubuntu) support hot-swapping the drive, or do I have to shut it
> down to replace the drive, then bring it up and format, etc.?

Linux supports hot-swapping. You'll have to restart the affected OSD, but
not reboot the node.
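
For illustration, the replacement procedure looks roughly like the sketch
below. This assumes a current Ceph release with systemd-managed OSDs and
ceph-volume (tooling has changed since this thread was written), and the
OSD id (5) and device name (/dev/sdx) are placeholders:

```shell
# Sketch only: assumes systemd-managed OSDs and a ceph-volume/LVM layout;
# osd.5 and /dev/sdx are placeholder names. Adapt to your deployment.

# 1. Stop the failed OSD daemon -- the node itself stays up.
systemctl stop ceph-osd@5

# 2. Remove the OSD from the cluster maps and the auth database.
ceph osd out 5
ceph osd crush remove osd.5
ceph auth del osd.5
ceph osd rm 5

# 3. Hot-swap the disk. The kernel detects the new device;
#    check dmesg for the device name it was assigned.

# 4. Prepare and activate a replacement OSD on the new disk.
ceph-volume lvm create --data /dev/sdx
```

Once the new OSD comes up, the cluster rebalances data onto it on its own.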

The issue with cluster rebalancing is bandwidth: essentially, the SATA/SAS
backplane of one node versus (potentially) the slowest network link in your
cluster, which also carries data traffic for everyone else. There are too
many variables involved; you'll have to work out the balance between Ceph
replication and RAID replication for your own cluster and budget.

-- 
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu

Attachment: signature.asc
Description: OpenPGP digital signature

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
