Re: Seeking Participation! Take the new Ceph User Stories Survey!

Hi Robin,

> As fast feedback when I passed the survey on to somebody else - to
> improve responses, if CUC can offer commands to make it easier to grab
> some of the quantitative data:


> Do you have the pg autoscaler enabled?
> How many OSDs per node are you using?
> How many clients are reading/writing from the Ceph cluster in parallel?
> How many nodes are in your largest Ceph cluster?
> How many placement groups (PGs) per OSD are you using?
> What is the size of the largest files being stored in your Ceph cluster(s)?
> What is the size of the largest objects being stored in your Ceph
> cluster(s)?
> What is the size of your largest Ceph cluster?
> What’s the average Read/Write ratio/percentage in your workload?


Thanks for the suggestions! I updated the survey to address all but three
of these questions; the remaining three I still need to check on. Here are
the ones I updated:

1. Do you have the pg autoscaler enabled?
Run `ceph osd pool autoscale-status` and check whether the "AUTOSCALE"
column shows "on" for your pools.
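For anyone who wants a quick one-liner, something like this works (a sketch; the column layout can vary by Ceph release, and the sample output below is made up to stand in for a live cluster — on a real cluster, pipe `ceph osd pool autoscale-status` itself instead of the printf):

```shell
# Made-up sample of `ceph osd pool autoscale-status` output; replace the
# printf with the real command on a live cluster.
sample='POOL  SIZE  RATE  PG_NUM  AUTOSCALE
rbd   1.2G  3.0   32      on
meta  300M  3.0   16      off'
# Print each pool name and its AUTOSCALE setting (first and last columns):
status=$(printf '%s\n' "$sample" | awk 'NR > 1 {print $1, $NF}')
echo "$status"
```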

2. How many OSDs per node are you using?
Run `ceph osd tree`; the OSDs are listed under each host.

3. How many nodes are in your largest Ceph cluster?
Run `ceph osd tree` and count the host entries.

4. How many placement groups (PGs) per OSD are you using?
Run `ceph osd df` and check the "PGS" column.
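Items 2 and 3 can also be tallied automatically from the tree output. A sketch, assuming the default `ceph osd tree` layout where host rows contain `host <name>` and OSD rows contain `osd.<id>` (the sample output below is made up; on a live cluster, pipe `ceph osd tree` instead of the printf):

```shell
# Made-up sample of `ceph osd tree` output standing in for a live cluster.
sample='ID  CLASS  WEIGHT   TYPE NAME      STATUS
-1         0.29306  root default
-3         0.09769      host node1
 0    hdd  0.04884          osd.0      up
 1    hdd  0.04884          osd.1      up
-5         0.09769      host node2
 2    hdd  0.04884          osd.2      up'
# Count OSDs per host: remember the current host row, then count the
# osd.<id> rows that follow it.
counts=$(printf '%s\n' "$sample" | awk '
  {
    for (i = 1; i <= NF; i++) {
      if ($i == "host") h = $(i + 1)        # remember the current host
      if ($i ~ /^osd\./ && h != "") c[h]++  # count OSDs under that host
    }
  }
  END { for (n in c) print n, c[n] }
' | sort)
echo "$counts"
```

The number of lines printed is then your node count as well.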

5. What is the size of your largest Ceph cluster?
Run `ceph df` and look at the SIZE value in the TOTAL row.

6. What’s the average Read/Write ratio/percentage in your workload?
Check `ceph -s` and look at the "io" section to get a sense of this. Note
that it reports current rates, so you may want to sample it a few times
during typical load.
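If you want a rough read percentage from that line, something like the following works (a sketch, assuming the client ops are reported as "N op/s rd, M op/s wr"; the sample line and its numbers are made up — on a live cluster, grab the "client:" line from `ceph -s` instead):

```shell
# Made-up sample "io" line from `ceph -s`, standing in for a live cluster.
line='client: 340 KiB/s rd, 1.2 MiB/s wr, 120 op/s rd, 40 op/s wr'
# Find the "op/s" tokens; the number before each is the rate, and the
# token after tells us whether it is the read or write rate.
ratio=$(printf '%s\n' "$line" | awk '{
  for (i = 1; i <= NF; i++)
    if ($i == "op/s") { if ($(i + 1) ~ /^rd/) rd = $(i - 1); else wr = $(i - 1) }
  printf "%.0f%% read", 100 * rd / (rd + wr)
}')
echo "$ratio"
```

Since these are instantaneous rates, averaging a few samples taken during typical load will give a more representative ratio.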

I need to check on these three for the best commands:
- How many clients are reading/writing from the Ceph cluster in parallel?
- What is the size of the largest files being stored in your Ceph
cluster(s)?
- What is the size of the largest objects being stored in your Ceph
cluster(s)?

> And an additional guidance - if you have multiple Ceph clusters, how
> should the form be answered? I think some of these were also previously
> decided in other survey efforts, and could be reused?


Another good question. We structured the survey so you can elaborate in the
text boxes if you have multiple clusters, or answer in terms of your largest
cluster, since we are most interested in large-scale situations. However, if
you would like to take the survey multiple times for multiple clusters, feel
free to do so; just indicate in the "name/email" question that this is
"take 2," etc., so it is obvious that it is part of a previous response.
This is also why we ask for contact information: so we can follow up with
you to elaborate on anything that wasn't covered in the survey!

I will check on the three unanswered questions and follow up if there are
any good commands to run for them. If any users would like to chime in with
helpful commands as well, please feel free!

Thanks,
Laura

On Tue, Jan 21, 2025 at 2:55 PM Robin H. Johnson <robbat2@xxxxxxxxxx> wrote:

> On Tue, Jan 21, 2025 at 10:43:13AM -0600, Laura Flores wrote:
> > Hi all,
> >
> > The Ceph User Council is conducting a survey to gather insights from
> > community members who actively use production Ceph clusters. We want to
> > hear directly from you: *What is the use case of your production Ceph
> > cluster?*
> As fast feedback when I passed the survey on to somebody else - to
> improve responses, if CUC can offer commands to make it easier to grab
> some of the quantitative data:
>
> Do you have the pg autoscaler enabled?
> How many OSDs per node are you using?
> How many clients are reading/writing from the Ceph cluster in parallel?
> How many nodes are in your largest Ceph cluster?
> How many placement groups (PGs) per OSD are you using?
> What is the size of the largest files being stored in your Ceph cluster(s)?
> What is the size of the largest objects being stored in your Ceph
> cluster(s)?
> What is the size of your largest Ceph cluster?
> What’s the average Read/Write ratio/percentage in your workload?
>
> And an additional guidance - if you have multiple Ceph clusters, how
> should the form be answered? I think some of these were also previously
> decided in other survey efforts, and could be reused?
>
> --
> Robin Hugh Johnson
> Gentoo Linux: Dev, Infra Lead, Foundation Treasurer
> E-Mail   : robbat2@xxxxxxxxxx
> GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
> GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>


-- 

Laura Flores

She/Her/Hers

Software Engineer, Ceph Storage <https://ceph.io>

Chicago, IL

lflores@xxxxxxx | lflores@xxxxxxxxxx
M: +17087388804
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



