Suggestions

Hello all,

A few more suggestions, if they can be added to future releases.

1. We faced some issues here; could we add more commands to control the clients that show up as watchers? For example:

rbd status pool/image

Watchers:
	watcher=10.160.0.245:0/2076588905 client.12541259 cookie=140446370329088

Some commands to control a watcher and kill a client by its client ID, something like:

rbd lock remove <pool-name>/<image-name> <client_id>
Or 
rbd watchers <pool-name>/<image-name>

Or something like:
rbd check <pool-name>/<image-name>

Or:
rbd list watchers <pool-name> or <pool-name>/<image-name>
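
For reference, a possible manual workaround with existing commands (just a sketch, not verified on every release; the address below is the one from the rbd status output above, and blocklisting cuts that client off from the whole cluster, so use with care):

rbd status <pool-name>/<image-name>                          # list current watchers and their addresses
ceph osd blocklist add 10.160.0.245:0/2076588905             # "ceph osd blacklist add" on pre-Pacific releases
rbd lock ls <pool-name>/<image-name>                         # if the image also holds a stale lock
rbd lock remove <pool-name>/<image-name> <lock-id> <locker>  # break that lock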



2. Also, as we have multiple Ceph clusters, the only way to identify which cluster a dashboard belongs to (for example dev or prod) is to go to the Hosts page every time and read the host names.
Could we have a configurable label on the dashboard, something like “Name: Location-Dev”? I think there is enough space to show a name in that area.



3. The dashboard/mgr does not seem to clean up stale alerts by itself. Most of the time we need to fail over the manager to clear such errors, but this one looks like a similar issue to point 1 above.
I mounted a volume, then unmounted it and cleaned everything up, including the mount point. But the alert below has been active for the last three days, even though I have tried failing over to different mgrs.

CephNodeDiskspaceWarning
Mountpoint /mnt/dst-volume on prod-host1 will be full in less than 5 days based on the 48 hour trailing fill rate.
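
For reference, this alert comes from the Prometheus alerting rules shipped with Ceph and is driven by node-exporter filesystem metrics, so the steps we would try are roughly the following (a sketch only; the "ceph orch" lines assume a cephadm-managed monitoring stack with default service names, and none of this is verified to actually clear the alert):

ceph health detail                  # check what is still flagged on the cluster side
ceph mgr fail                       # fail over to a standby mgr (a standby needs to be available)
ceph orch restart node-exporter     # assumption: cephadm service name; re-read mounts so the stale mountpoint metric disappears
ceph orch restart prometheus        # assumption: cephadm service name for the bundled Prometheus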

4. We need more commands to control pool repair.
If we have started a pool repair, how can we stop it?
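
For context, the closest workaround we are aware of is the noscrub flags (a sketch; assuming a release where "ceph osd pool repair" exists, and as far as we can tell these flags only stop new scrubs/repairs from being scheduled, they do not abort PGs that are already repairing):

ceph osd pool repair <pool-name>    # the command that kicks off the repair
ceph osd set noscrub                # stop new scrubs from being scheduled
ceph osd set nodeep-scrub           # stop new deep scrubs / repairs from being scheduled
ceph osd unset noscrub              # re-enable afterwards
ceph osd unset nodeep-scrub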


Regards
Dev
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



