Hi (Nautilus 14.2.8, whole cluster),
I doodled with adding a second cephfs, but the project got canceled. I removed the unused filesystem with "ceph fs rm dream --yes-i-really-mean-it", and that worked as expected. I'm left with a lingering health warning, though, that won't clear.
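For anyone replaying this later, the removal sequence on Nautilus is roughly the following (the filesystem has to be failed, or all of its MDS ranks stopped, before "fs rm" will succeed; "dream" was my throwaway filesystem):

ceph fs fail dream
ceph fs rm dream --yes-i-really-mean-it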
The original cephfs1 volume exists and is healthy:
[root@cephmon-03]# ceph fs ls
name: cephfs1, metadata pool: stp.cephfs_metadata, data pools: [stp.cephfs_data ]
[root@cephmon-03]# ceph mds stat
cephfs1:3 {0=cephmon-03=up:active,1=cephmon-02=up:active,2=cephmon-01=up:active}
[root@cephmon-03]# ceph health detail
HEALTH_WARN insufficient standby MDS daemons available
MDS_INSUFFICIENT_STANDBY insufficient standby MDS daemons available
have 0; want 1 more
[root@cephmon-03]#
I have not yet deleted the pools for 'dream', the second cephfs definition. (There is nothing in them.)
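When I do remove them I assume it's the usual pool-deletion dance, something like the below (pool names here are placeholders since I haven't pasted mine, and the mons need mon_allow_pool_delete=true for this to work):

ceph osd pool rm dream_metadata dream_metadata --yes-i-really-really-mean-it
ceph osd pool rm dream_data dream_data --yes-i-really-really-mean-it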
Before deleting the pools, is there a command to clear this warning?
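My own hunch is that the warning comes from the per-filesystem standby_count_wanted setting (default 1): with all three of my MDS daemons active on cephfs1 there is none left over to act as standby, hence "have 0; want 1 more". If that's right, I'd expect something like

ceph fs set cephfs1 standby_count_wanted 0

to silence it (at the cost of no longer being warned when a standby is missing), but I'd rather check before poking a live cluster.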
Thanks!
peter