The lines might not be in the file, but did the process writing to the file report that the write succeeded, or did it fail? I'm guessing the latter, which means the script should check that each write succeeded rather than assume it did before continuing on.
On Sun, Dec 17, 2017, 10:07 PM Wei Jin <wjin.cn@xxxxxxxxx> wrote:
On Fri, Dec 15, 2017 at 6:08 PM, John Spray <jspray@xxxxxxxxxx> wrote:
> On Fri, Dec 15, 2017 at 1:45 AM, 13605702596@xxxxxxx
> <13605702596@xxxxxxx> wrote:
>> hi
>>
>> i used 3 nodes to deploy mds (each node also has mon on it)
>>
>> my config:
>> [mds.ceph-node-10-101-4-17]
>> mds_standby_replay = true
>> mds_standby_for_rank = 0
>>
>> [mds.ceph-node-10-101-4-21]
>> mds_standby_replay = true
>> mds_standby_for_rank = 0
>>
>> [mds.ceph-node-10-101-4-22]
>> mds_standby_replay = true
>> mds_standby_for_rank = 0
>>
>> the mds stat:
>> e29: 1/1/1 up {0=ceph-node-10-101-4-22=up:active}, 1 up:standby-replay, 1
>> up:standby
>>
>> i mount the cephfs on the ceph client, and run the test script to write data
>> into file under the cephfs dir,
>> when i reboot the master mds, i found that the data is not written into
>> the file.
>> after 15 seconds, data can be written into the file again.
>>
>> so my question is:
>> is this normal when reboot the master mds?
>> when will the up:standby-replay mds take over the cephfs?
>
> The standby takes over after the active daemon has not reported to the
> monitors for `mds_beacon_grace` seconds, which as you have noticed is
> 15s by default.
>
> If you know you are rebooting something, you can pre-empt the timeout
> mechanism by using "ceph mds fail" on the active daemon, to cause
> another to take over right away.
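For example, a sketch of the planned-reboot workflow described above (assuming the rank and daemon name from the mds stat output earlier in the thread; adjust to your cluster):

```shell
# Before rebooting the active MDS host, fail the active daemon explicitly
# so the standby-replay daemon takes over right away instead of waiting
# out mds_beacon_grace (15s by default):
ceph mds fail 0                          # by rank
# or, by daemon name:
# ceph mds fail ceph-node-10-101-4-22

# Alternatively, the failover window can be shortened via ceph.conf on
# the monitors (value in seconds; trade-off: more false failovers under
# load):
# [mds]
# mds_beacon_grace = 5
```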
Why must an mds reboot wait for the grace time?
Would it be possible or reasonable for the daemon itself to tell the
monitor that it is going down during a reboot?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com