Re: [ANN] A framework for deploying Octopus using cephadm in the cloud

>> Except for your mds, mgr, and radosgw daemons, your osd daemons are
>> bound to the hardware/disks they are running on. It is not as if,
>> when osd.121 goes down, you can start it on some random node.

> Why not?

The data stays on the old node, doesn't it? If you automated the
destroy/create of a new osd, the cluster would also have to replicate
data to this 'new' osd.121, assuming there is even storage available on
that node. And if there is indeed storage available and unused on that
node, why not use it in the first place for an osd.122?
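For context, the destroy/recreate path being discussed would look
roughly like this with cephadm (a sketch only; the osd id, hostname, and
device below are placeholders, and the commands assume an
orchestrator-managed Octopus cluster with a spare device on the new
host):

```shell
# Mark the failed OSD out so its placement groups remap elsewhere.
ceph osd out osd.121

# Remove the daemon and purge the OSD from the CRUSH map and osdmap.
ceph orch daemon rm osd.121 --force
ceph osd purge 121 --yes-i-really-mean-it

# Re-create an OSD on a device on some other node; the cluster must
# then backfill data to it over the network, which is the replication
# cost discussed above.
ceph orch daemon add osd newhost:/dev/sdx
```

Note that the backfill traffic this triggers is the same whether the
replacement OSD reuses the id 121 or gets a fresh id, which is the point
about just adding an osd.122 instead.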




_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

