MySQL and ceph volumes

Hi Deepak,

Thank you.

Here is an example of iostat output:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.16    0.00    2.64   15.74    0.00   76.45

Device:         rrqm/s   wrqm/s     r/s      w/s    rkB/s      wkB/s  avgrq-sz  avgqu-sz   await  r_await  w_await  svctm  %util
vda               0.00     0.00    0.00     0.00     0.00       0.00      0.00      0.00    0.00     0.00     0.00   0.00   0.00
vdb               0.00     1.00   96.00   292.00  4944.00  140652.00    750.49     17.39   43.89    17.79    52.47   2.58 100.00

vdb is the Ceph volume, formatted with XFS.
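Following Deepak's avgrq-sz note below, a quick back-of-the-envelope check on the vdb line (avgrq-sz is reported in 512-byte sectors, so this is just a unit conversion):

    750.49 sectors * 512 B/sector ~= 375 KB average request size
    (96 r/s + 292 w/s) * 375 KB   ~= 145,500 KB/s, in line with rkB/s + wkB/s

So vdb is serving roughly 388 large requests per second at 100% utilization.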


Disk /dev/vdb: 2199.0 GB, 2199023255552 bytes
255 heads, 63 sectors/track, 267349 cylinders, total 4294967296 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/vdb1               1  4294967295  2147483647+  ee  GPT

Regards
Matteo

> On 7 Mar 2017, at 22:08, Deepak Naidu <dnaidu at nvidia.com> wrote:
> 
> My response is without any specific context to Ceph or any SDS; it is purely about how to check for an IO bottleneck. You can then determine whether it's Ceph, another process, or the disk itself.
>  
> >> MySQL can reach only 150 IOPS, both reads and writes, showing 30% IO wait.
> Lower IOPS is not an issue in itself, since your block size might be larger; whether MySQL is issuing larger blocks, I'm not sure. You can check the iostat metrics below to see why the IO wait is so high.
>  
> * avgqu-sz (average queue length)             -> the longer the queue, the higher the IO wait.
> * avgrq-sz (average request size, in sectors) -> shows the IO block size (check this when using MySQL). [You need to work this out in KB relative to your FS block size; don't just take the raw avgrq-sz number.]
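> For example, something like the following shows those columns per device (a sketch; assumes the sysstat iostat, with extended stats in KB at a 1-second interval):
> 
>     iostat -x -k vdb 1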
>  
>  
> --
> Deepak
>  
>  
>  
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces at lists.ceph.com] On Behalf Of Matteo Dacrema
> Sent: Tuesday, March 07, 2017 12:52 PM
> To: ceph-users
> Subject: [ceph-users] MySQL and ceph volumes
>  
> Hi All,
>  
> I have a Galera cluster running on OpenStack, with data on Ceph volumes capped at 1500 IOPS for reads and 1500 for writes (3000 total).
> I can't understand why fio can reach 1500 IOPS with no IO wait, while MySQL reaches only 150 IOPS, whether reading or writing, with 30% IO wait.
>  
> I tried fio with a 64k block size and various IO depths (1, 2, 4, 8, 16 ... 128) and I can't reproduce the problem.
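> For reference, a sketch of that kind of fio run (the actual job may have differed; the target path is just a placeholder):
> 
>     fio --name=randrw-64k --filename=/mnt/vol/testfile --size=1G \
>         --direct=1 --ioengine=libaio --rw=randrw --bs=64k --iodepth=16 \
>         --runtime=60 --time_based --group_reporting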
>  
> Can anyone tell me where I'm wrong?
>  
> Thank you
> Regards
> Matteo
>  
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

