Re: Ceph Very Small Cluster

On 09/29/16 14:07, Ranjan Ghosh wrote:
> Wow. Amazing. Thanks a lot!!! This works. 2 (hopefully) last questions
> on this issue:
>
> 1) When the first node comes back up, I can just call "ceph osd up
> 0" and Ceph will start auto-repairing everything, right? That is, if
> there are e.g. new files that were created while the first node was
> down, they will (sooner or later) get replicated there?
Nope, there is no "ceph osd up <id>"; you just start the OSD and it is
recognized as up automatically. (If you don't want that, mark it out,
not just down, with "ceph osd out <id>"; "ceph osd in <id>" undoes it.)
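For example, on a systemd-based install (just a sketch; assuming the
OSD in question has id 0 - adjust for your setup):

     systemctl start ceph-osd@0   # start the daemon; it reports itself up
     ceph osd tree                # check that it now shows as up (and in)
     ceph osd in 0                # only needed if you had marked it out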
>
> 2) If I don't call "osd down" manually when a node dies (perhaps at
> the weekend when I'm not at the office) - did I understand correctly
> that the "hanging" I experienced is temporary, and that after a few
> minutes (I don't want to try it out now) the node should also go down
> automatically?
I believe so, yes.
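The timeouts involved are configurable; roughly (a sketch of the usual
defaults - check your own running config, they can differ by release):

     [global]
     # peers report an OSD down after it misses heartbeats for this long
     osd heartbeat grace = 20
     # a down OSD is marked out (and data rebalances) after this long
     mon osd down out interval = 600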

Also, FYI, RBD images don't seem to have this issue and work right away
on a 3-OSD cluster. Maybe cephfs would also work better with a third
OSD, even an empty one (weight=0). (I had an unresolved issue when
testing the same with cephfs on my virtual test cluster.)
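If you want to try the weight=0 approach, something like this should do
it (a sketch; osd.2 stands in for the third, empty OSD):

     ceph osd crush reweight osd.2 0   # keep it in the CRUSH map but store no data
     ceph osd tree                     # confirm its weight shows as 0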
>
> BR,
> Ranjan
>
>
> Am 29.09.2016 um 13:00 schrieb Peter Maloney:
>>
>> And also you could try:
>>      ceph osd down <osd id>
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


