Hello, I am working on an application that uses the RADOS Gateway (RGW) to store objects on a Ceph cluster, and I am currently optimizing the latency of storing and retrieving objects from the cluster.

My goal is to improve read/write latencies by having RGW read and write multiple RADOS objects in parallel, as described here: <https://docs.ceph.com/en/latest/architecture/#data-striping>: "Significant write performance occurs when the client writes the stripe units to their corresponding objects in parallel." Just as with RAID 0, by mapping each radosgw object to a large number of RADOS objects, we can achieve lower latencies because we are no longer bound by the throughput of a single disk.

That documentation suggests we can configure the stripe count as well as the stripe width, which would let us indirectly control how many RADOS objects each radosgw object is mapped to. I want to be able to change these parameters and run benchmarks against my pools. The key parameter I am interested in controlling is therefore the stripe count, i.e. the number of distinct RADOS objects each radosgw object is mapped to. More specifically, in the diagram attached to those docs <https://docs.ceph.com/en/latest/_images/ditaa-96a6fc80dad17fb53f161987ed64f0779930ffe1.png>, the stripe count is 4 (four RADOS objects are written for a single RGW object), and I want to experiment with varying that number.

I am having trouble figuring out which radosgw configuration parameters control this. I see that there is a stripe_width <https://github.com/ceph/ceph/blob/714cdc4e8767a153f825e857efdc28bb481528a1/src/common/options/rgw.yaml.in#L1736> and an rgw_max_chunk_size <https://docs.ceph.com/en/latest/radosgw/config-ref/#confval-rgw_max_chunk_size>, but I did not find anything for the stripe count. Configuring the stripe width alone is not sufficient: I would also need to set the stripe unit size to arrive at the desired stripe count, and I did not find an option for that either.

Am I understanding this correctly? If so, can someone please point me to where and how this configuration should be set?

I appreciate your help!
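P.S. To make the relationship I am assuming concrete, here is a minimal Python sketch of the striping arithmetic described in the architecture doc. The names stripe_unit, stripe_width, and stripe_count follow the doc's terminology; as far as I can tell they are not actual RGW configuration options, which is exactly my question. The sketch ignores object sets and maximum object sizes for simplicity.

# Minimal sketch of the striping geometry from the architecture docs.
# The parameter names follow the doc's terminology and are NOT real
# RGW config options; this is just the model I am assuming.

def stripe_layout(object_size: int, stripe_unit: int, stripe_count: int):
    """Map each stripe unit of a logical object to (rados_object_index, offset).

    stripe_width = stripe_unit * stripe_count, so fixing stripe_width
    alone leaves stripe_count undetermined; stripe_unit must be known too.
    """
    layout = []
    for logical_off in range(0, object_size, stripe_unit):
        unit_index = logical_off // stripe_unit
        rados_obj = unit_index % stripe_count  # which object in the set
        obj_off = (unit_index // stripe_count) * stripe_unit  # offset within it
        layout.append((logical_off, rados_obj, obj_off))
    return layout

# Example: a 16 MiB logical object with 1 MiB stripe units and
# stripe_count = 4 (as in the diagram) spreads consecutive units
# round-robin across 4 RADOS objects, enabling the parallel I/O I am after.
for logical_off, obj, off in stripe_layout(16 << 20, 1 << 20, 4):
    print(f"logical {logical_off >> 20} MiB -> object {obj}, offset {off >> 20} MiB")

This is also why stripe_width alone is underdetermined for my purposes: a 4 MiB stripe width could mean four 1 MiB units across four objects or two 2 MiB units across two.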