The problem is not so much Ceph, but the fact that sync workloads tend to give you an effective queue depth of 1: the application serialises its IO, waiting for the last write to complete before issuing the next one.

From: Matteo Dacrema [mailto:mdacrema@xxxxxxxx]
Sent: Wednesday, 8 March 2017 10:36 AM
To: Adrian Saul
Cc: ceph-users
Subject: Re: MySQL and ceph volumes

Thank you Adrian!
I had forgotten this option, and with it I can reproduce the problem.
Now, what could be the problem on the Ceph side with O_DSYNC writes?

Regards
Matteo

On 08 Mar 2017, at 00:25, Adrian Saul <Adrian.Saul at tpgtelecom.com.au> wrote:

Possibly MySQL is doing sync writes, whereas your fio could be doing buffered writes. Try enabling the sync option on fio and compare results.

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Matteo Dacrema
Sent: Wednesday, 8 March 2017 7:52 AM
To: ceph-users
Subject: MySQL and ceph volumes

Hi All,
I have a Galera cluster running on OpenStack, with data on Ceph volumes capped at 1500 IOPS for read and write (3000 total).
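The queue-depth-1 effect described above is simple arithmetic: when each write must complete before the next is issued, throughput is bounded by per-operation latency. A minimal sketch, where the 6.7 ms sync-write latency is an illustrative assumption (not a measured value from this thread):

```python
# At queue depth 1 the application waits for each write to complete
# before issuing the next, so:  IOPS <= queue_depth / latency_seconds
def max_iops(queue_depth: int, latency_ms: float) -> float:
    """Upper bound on IOPS for a given queue depth and per-op latency."""
    return queue_depth * 1000.0 / latency_ms

# Illustrative numbers (assumed): if a Ceph RBD O_DSYNC write takes
# ~6.7 ms round trip, queue depth 1 caps out around 150 IOPS, while
# buffered IO that keeps 32 requests in flight could reach thousands
# (here capped in practice by the volume's 1500 IOPS QoS limit).
print(round(max_iops(1, 6.7)))    # -> 149
print(round(max_iops(32, 6.7)))   # -> 4776
```

This matches the symptom in the thread: the same volume delivers 1500 IOPS to a parallel benchmark but only ~150 IOPS to a workload that serialises sync writes.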
I can't understand why with fio I can reach 1500 IOPS without IO wait, while MySQL can reach only 150 IOPS for both reads and writes, showing 30% IO wait.
I tried fio with a 64k block size and various IO depths (1, 2, 4, 8, 16 ... 128) and I can't reproduce the problem.
Can anyone tell me where I'm wrong?

Thank you
Regards
Matteo
_______________________________________________
ceph-users mailing list
ceph-users at lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
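A fio job along these lines should reproduce the MySQL-like behaviour on the same volume (a sketch; the device path, block size, and runtime are placeholder assumptions to adjust for your setup):

```ini
; mysql-like sync write test (illustrative; adjust filename/bs/runtime)
[global]
ioengine=psync     ; synchronous engine: effective queue depth 1
direct=1
sync=1             ; open O_SYNC, so every write waits for stable storage
rw=randwrite
bs=16k             ; small, database-page-sized writes (assumption)
runtime=60
time_based

[sync-writes]
filename=/dev/vdb  ; placeholder: the Ceph-backed volume under test
```

With sync=1 and a synchronous engine, fio serialises its writes the way MySQL's log flushes do, so the reported IOPS should drop from the 1500 QoS cap toward the ~150 seen from MySQL; without sync=1, fio's buffered or deep-queue IO hides the per-write round-trip latency.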