Re: Seeking Participation! Take the new Ceph User Stories Survey!

Hello Laura,

A few more suggestions:

1. As we are facing some issues here, can we have more commands to control clients via their watchers? For example:

rbd status pool/image

Watchers:
	watcher=10.160.0.245:0/2076588905 client.12541259 cookie=140446370329088

Some commands to inspect watchers and to kill a client by its client ID, something like:

rbd lock remove <pool-name>/<image-name> <client_id>

or:

rbd watchers <pool-name>/<image-name>

or:

rbd check <pool-name>/<image-name>

or:

rbd list watchers <pool-name> (or <pool-name>/<image-name>)
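For context, the closest workaround I know of today is to blocklist the client address reported by rbd status at the OSD level, and then remove the entry once the client is gone, e.g. with the watcher from the output above:

ceph osd blocklist add 10.160.0.245:0/2076588905
ceph osd blocklist ls
ceph osd blocklist rm 10.160.0.245:0/2076588905

A first-class rbd subcommand for this would be much friendlier than reaching for the OSD blocklist.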



2. Also, since we have multiple Ceph clusters, the only way to identify the nodes and the cluster type (let's say dev or prod) on the dashboard is to go to the Hosts page every time and read the host names.
Can we have a field on the dashboard that shows something like “Name: Location-Dev”? I think there is enough space to list a name in this area.



3. It seems the dashboard/mgr is not cleaning up after itself. Most of the time we need to fail over the manager to clear such errors, but it looks like a similar issue to point 1 above.
I mounted this volume, then unmounted it and cleaned everything up, even the mount point. But the alert below has been active for the last three days, and I have tried failing over to different mgrs.

CephNodeDiskspaceWarning
Mountpoint /mnt/dst-volume on prod-host1 will be full in less than 5 days based on the 48 hour trailing fill rate.
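For reference, this is what I have been running to bounce the active mgr (standard commands, as far as I know):

ceph health detail         # see what is still flagged on the Ceph side
ceph mgr fail              # fail the active mgr so a standby takes over
ceph mgr fail <mgr-name>   # or fail a specific mgr daemon

If this particular alert comes from the bundled Prometheus rules rather than a mgr health check, failing the mgr may not clear it at all, which could be why it keeps coming back.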

4. We need more commands to control pool repairs.
If we have started a pool repair, how can we stop it?
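For reference, this is roughly how a repair gets started today (standard commands, as far as I know), and the only brake I have found is the global scrub flags, which I am not sure will abort a repair that is already running:

ceph pg repair <pg-id>              # repair a single placement group
ceph osd repair <osd-id>            # repair every PG on one OSD
ceph osd pool repair <pool-name>    # pool-wide repair, if your release has it
ceph osd set noscrub                # hold off new scrubs/repairs cluster-wide
ceph osd set nodeep-scrub
ceph osd unset noscrub              # re-enable once things settle
ceph osd unset nodeep-scrub

An explicit stop/cancel command for an in-flight repair would help a lot.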


Regards
Dev

> On Jan 21, 2025, at 2:35 PM, Laura Flores <lflores@xxxxxxxxxx> wrote:
> 
> Hi Robin,
> 
>> As fast feedback when I passed the survey on to somebody else - to
>> improve responses, if CUC can offer commands to make it easier to grab
>> some of the quantitative data:
> 
> 
>> Do you have the pg autoscaler enabled?
>> How many OSDs per node are you using?
>> How many clients are reading/writing from the Ceph cluster in parallel?
>> How many nodes are in your largest Ceph cluster?
>> How many placement groups (PGs) per OSD are you using?
>> What is the size of the largest files being stored in your Ceph cluster(s)?
>> What is the size of the largest objects being stored in your Ceph
>> cluster(s)?
>> What is the size of your largest Ceph cluster?
>> What’s the average Read/Write ratio/percentage in your workload?
> 
> 
> Thanks for the suggestions! I updated the survey to address all but three
> of these queries, which I need to check on. Here are the ones I updated:
> 
> 1. Do you have the pg autoscaler enabled?
> Run `ceph osd pool autoscale-status` and check to see if "AUTOSCALE" is on
> for any of your pools.
> 
> 2. How many OSDs per node are you using?
> Run `ceph osd tree` to check this.
> 
> 3. How many nodes are in your largest Ceph cluster?
> Run `ceph osd tree` to check this.
> 
> 4. How many placement groups (PGs) per OSD are you using?
> Run `ceph osd df` and check the "PGS" column.
> 
> 5. What is the size of your largest Ceph cluster?
> Run `ceph df` and look at the "TOTAL / SIZE" entry to check for this.
> 
> 6. What’s the average Read/Write ratio/percentage in your workload?
> You may check `ceph -s` and look at the "io" section to get a sense of this.
> 
> I need to check on these three for the best commands:
> - How many clients are reading/writing from the Ceph cluster in parallel?
> - What is the size of the largest files being stored in your Ceph
> cluster(s)?
> - What is the size of the largest objects being stored in your Ceph
> cluster(s)?
> 
>> And an additional guidance - if you have multiple Ceph clusters, how
>> should the form be answered? I think some of these were also previously
>> decided in other survey efforts, and could be reused?
> 
> 
> Another good question. We structured the survey so you can elaborate in the
> text boxes if you have multiple clusters, or we ask in terms of your largest
> cluster since we are interested in large-scale situations.
> However, if you would like to take the survey multiple times for multiple
> clusters, feel free to do so; I would just indicate in the "name/email"
> question that this is "take 2" etc. of the survey; just somehow make it
> obvious that it is part of a previous response. This is also why we ask for
> contact information: so we can follow up with you to elaborate on anything
> that wasn't covered in the survey!
> 
> I will check on the three unanswered questions and respond back if there
> are any good commands to run for this. If any users
> would like to chime in on helpful commands as well, feel free to do so!
> 
> Thanks,
> Laura
> 
> On Tue, Jan 21, 2025 at 2:55 PM Robin H. Johnson <robbat2@xxxxxxxxxx> wrote:
> 
>> On Tue, Jan 21, 2025 at 10:43:13AM -0600, Laura Flores wrote:
>>> Hi all,
>>> 
>>> The Ceph User Council is conducting a survey to gather insights from
>>> community members who actively use production Ceph clusters. We want to
>>> hear directly from you: *What is the use case of your production Ceph
>>> cluster?*
>> As fast feedback when I passed the survey on to somebody else - to
>> improve responses, if CUC can offer commands to make it easier to grab
>> some of the quantitative data:
>> 
>> Do you have the pg autoscaler enabled?
>> How many OSDs per node are you using?
>> How many clients are reading/writing from the Ceph cluster in parallel?
>> How many nodes are in your largest Ceph cluster?
>> How many placement groups (PGs) per OSD are you using?
>> What is the size of the largest files being stored in your Ceph cluster(s)?
>> What is the size of the largest objects being stored in your Ceph
>> cluster(s)?
>> What is the size of your largest Ceph cluster?
>> What’s the average Read/Write ratio/percentage in your workload?
>> 
>> And an additional guidance - if you have multiple Ceph clusters, how
>> should the form be answered? I think some of these were also previously
>> decided in other survey efforts, and could be reused?
>> 
>> --
>> Robin Hugh Johnson
>> Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
>> E-Mail   : robbat2@xxxxxxxxxx
>> GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
>> GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136
>> 
> 
> 
> -- 
> 
> Laura Flores
> 
> She/Her/Hers
> 
> Software Engineer, Ceph Storage <https://ceph.io>
> 
> Chicago, IL
> 
> lflores@xxxxxxx | lflores@xxxxxxxxxx <lflores@xxxxxxxxxx>
> M: +17087388804

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



