Re: Performance improvement suggestion



Hi Anthony,
     Thanks for your reply.

     I didn't say I would accept the risk of losing data.

     I just said it would be interesting if objects were first written only to the primary OSD.
     This would greatly increase performance (both IOPS and throughput).
     Later (in the background), the replicas would be written. This would avoid leaving users/software waiting for the write acknowledgment from all the replicas when the storage is overloaded.

     Where I work, performance is very important and we don't have the money to build an entire cluster with only NVMe. However, I don't think it makes sense to lose the functionality of the replicas.
     I'm just suggesting another way to increase performance without losing replica functionality.
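The latency argument above can be illustrated with a toy model (this is not Ceph code; the latency numbers, overload probability, and function names are invented for illustration). It compares the client-perceived ack latency when the client waits for all replicas versus only the primary:

```python
# Hypothetical latency model: the client's ack latency is the max over
# all replica writes (current behavior) vs. the primary's latency alone
# (proposed behavior). All parameters are made-up illustrative values.
import random

random.seed(42)

def replica_latencies(n_replicas=3, base_ms=2.0, overload_p=0.1, overload_ms=50.0):
    """One write-latency sample per replica; occasionally a disk is overloaded."""
    return [base_ms + (overload_ms if random.random() < overload_p else 0.0)
            for _ in range(n_replicas)]

def sync_ack(lats):
    # Current behavior: the client waits for the slowest replica.
    return max(lats)

def primary_only_ack(lats):
    # Proposed behavior: the client waits only for the primary (first) OSD.
    return lats[0]

samples = [replica_latencies() for _ in range(10_000)]
avg_sync = sum(sync_ack(s) for s in samples) / len(samples)
avg_async = sum(primary_only_ack(s) for s in samples) / len(samples)
print(f"avg ack latency, wait-for-all: {avg_sync:.1f} ms")
print(f"avg ack latency, primary-only: {avg_async:.1f} ms")
```

With three replicas, the chance that at least one is slow is much higher than for the primary alone, so the wait-for-all average is dominated by the tail, which is the effect the suggestion aims to remove.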


From: "Anthony D'Atri" <anthony.datri@xxxxxxxxx>
Sent: 2024/01/31 17:04:08
To: quaglio@xxxxxxxxxx
Cc: ceph-users@xxxxxxx
Subject: Re: Performance improvement suggestion
Would you be willing to accept the risk of data loss?
On Jan 31, 2024, at 2:48 PM, quaglio@xxxxxxxxxx wrote:
Hello everybody,
     I would like to make a suggestion for improving performance in the Ceph architecture.
     I don't know if this group is the best place for it, or whether my proposal is sound.

     My suggestion relates to the behavior described at the end of the "Smart Daemons Enable Hyperscale" topic.

     The client needs to "wait" for the configured number of replicas to be written (before it receives an ok and continues). This way, if any of the disks holding the PG being updated is slow, the client is left waiting.
     It would be possible:
     1-) Write only to the primary OSD
     2-) Write the other replicas in the background (the same way as when an OSD fails: "degraded")

     This way, the client gets a faster response when writing to storage, improving latency and performance (throughput and IOPS).
     I would find it acceptable for a period of time (seconds) to pass until all replicas are ok (written asynchronously), in exchange for improved performance.
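The two steps above can be sketched as follows. This is a simplified illustration of the proposed write path, not Ceph internals; the class and method names (`PrimaryOSD`, `replicate_background`, plain dicts standing in for replica OSDs) are all invented for the sketch:

```python
# Illustrative sketch of "ack after primary, replicate in background".
# While pending_replication is non-empty, the object is effectively in a
# "degraded" state, analogous to a PG recovering after an OSD failure.
from dataclasses import dataclass, field
from queue import Queue

@dataclass
class PrimaryOSD:
    store: dict = field(default_factory=dict)
    pending_replication: Queue = field(default_factory=Queue)

    def write(self, name: str, data: bytes) -> str:
        self.store[name] = data             # 1) persist on the primary only
        self.pending_replication.put(name)  # 2) schedule the replica writes
        return "ack"                        # 3) client unblocks immediately

    def replicate_background(self, replicas: list) -> None:
        # Runs later/asynchronously; until it drains, replicas lag the primary.
        while not self.pending_replication.empty():
            name = self.pending_replication.get()
            for r in replicas:
                r[name] = self.store[name]

primary = PrimaryOSD()
replicas = [{}, {}]
ack = primary.write("obj1", b"payload")   # fast ack, replicas not yet written
primary.replicate_background(replicas)    # later: replicas catch up
```

The trade-off the sketch makes visible is the window between the ack and the background pass: during that window only the primary holds the data, which is exactly the data-loss risk raised in the reply.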
     Could you evaluate this scenario?


ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
