Re: Storing VM Images on CEPH with RBD-QEMU driver

----- Original Message -----
From: "Wido den Hollander" <wido@xxxxxxxx>
To: ceph-users@xxxxxxxxxxxxxx
Sent: Friday, December 20, 2013 8:04:09 AM
Subject: Re:  Storing VM Images on CEPH with RBD-QEMU driver

Hi,


> Hi,
>
> I'm testing CEPH with the RBD/QEMU driver through libvirt to store my VM
> images on. Installation and configuration all went very well with the
> ceph-deploy tool. I have set up cephx authentication in libvirt, and that
> works like a charm too.
>
> However, when it comes to performance, I have big issues getting the
> expected results inside the hosted VM: I see high latency and bad write
> performance, down to 20 MB/s in the VM.
>

Have you tried running "rados bench" to see what throughput you are getting?

Yes, I have tried it:

rados bench -p vm_system 50 write
...
Total time run:         50.578626
Total writes made:      1363
Write size:             4194304
Bandwidth (MB/sec):     107.793
Stddev Bandwidth:       19.8729
Max bandwidth (MB/sec): 136
Min bandwidth (MB/sec): 0
Average Latency:        0.59249
Stddev Latency:         0.341871
Max latency:            2.08384
Min latency:            0.14101
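Note that rados bench defaults to 4 MB writes, which measures streaming throughput; a VM's filesystem mostly issues much smaller writes, so it may be worth repeating the benchmark with a small block size to expose the latency problem. A sketch, where the pool name is the one benchmarked above and the -b/-t values are just examples:

```
# Measure small-block behaviour, which is closer to what a VM generates.
# -b sets the write size in bytes, -t the number of concurrent operations.
rados bench -p vm_system 50 write -b 4096 -t 16
```

If the 4 KB bandwidth collapses while per-op latency stays around the half-second mark seen above, the bottleneck is latency rather than raw throughput.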


> My setup:
> 3xDELL R410,
> 2xXeon X5650,
> 48 GB RAM,
> 2xSATA RAID1 for System,
> 2x250GB Samsung Evo SSDs for OSDs (with XFS on each one)

So you are running the journal on the same disks? With XFS that means 
you will do three writes for every write coming into the OSD.

We are running the journal on each XFS disk, but our tests show the problem only appears inside QEMU VMs. I have also tried disabling the journal on ext4 inside the QEMU image, with no effect.
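If the colocated journals do turn out to matter, one option is to point each OSD's journal at a dedicated raw partition so journal writes bypass the XFS filesystem entirely. A ceph.conf sketch; the device name is hypothetical:

```
[osd.0]
    ; Sketch only -- /dev/sdc1 is a hypothetical dedicated partition.
    ; With a raw block device, a journal size of 0 means
    ; "use the whole partition".
    osd journal = /dev/sdc1
    osd journal size = 0
```

The OSD must be stopped and the journal flushed and recreated before such a change takes effect.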

>
> ceph version 0.72.1 (4d923861868f6a15dcb33fef7f50f674997322de)
> Linux server1 3.11.0-14-generic #21-Ubuntu SMP Tue Nov 12 17:04:55 UTC
> 2013 x86_64 x86_64 x86_64 GNU/Linux
> Ubuntu 13.10
>

Which Qemu version are you using? I suggest using at least Qemu 1.5 and 
enabling the RBD write cache.

We are running:
QEMU emulator version 1.5.0 (Debian 1.5.0+dfsg-3ubuntu5.1)
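For reference, writeback caching can be requested per disk in the libvirt domain XML; with Qemu 1.2 or newer, cache='writeback' also enables the librbd cache. A sketch, where the image name, monitor host, and secret UUID are placeholders (the pool name matches the one benchmarked above):

```
<!-- Sketch only: image name, monitor host and secret UUID are placeholders. -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='vm_system/vm1-disk'>
    <host name='mon1.example.com' port='6789'/>
  </source>
  <auth username='libvirt'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk>
```

The cache can also be enabled on the librbd side with "rbd cache = true" in the [client] section of ceph.conf.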

> In total:
> 6 OSD
> 1 MON
> 3 MDS

For RBD the MDS is not required.

>
> So, the question is: is there anyone out there who has experience
> running the RBD/QEMU driver in production and who gets good
> performance inside the VM?
>
> I suspect the main performance issue is caused by high latency, since
> latency feels quite high when running the bonnie++ tests below.
> (bonnie++ -s 4096 -r 2048 -u root -d X -m BenchClient)
>
> Inside a VM running on a native image in the RBD pool:
>
> -- Without any Cache
>
> Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> BenchClient      4G   733  96 64919   8 20271   3  3013  97 30770   3  2887  82
> Latency             17425us    1093ms     894ms   16789us   19390us   89203us
> Version  1.96       ------Sequential Create------ --------Random Create--------
> BenchClient         -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16 27951  52 +++++ +++ +++++ +++ 24921  45 +++++ +++ 22535  29
> Latency              1986us     826us    1065us     216us      41us     611us
>
> --With Writeback Cache(QEMU)
> Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> BenchClient      4G   872  96 67327   8 22424   3  2516  94 32013   3  2800  82
> Latency             16196us     657ms     843ms   37889us   19207us   85407us
> Version  1.96       ------Sequential Create------ --------Random Create--------
> BenchClient         -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16 27225  51 +++++ +++ +++++ +++ 27325  47 +++++ +++ 21645  28
> Latency              1986us     852us     874us     252us      34us     595us
>
> --With Writethrough Cache(QEMU)
> Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> BenchClient      4G   833  95 27469   3  6520   1  2743  93 33003   3  1912  61
> Latency             17330us    2388ms    1165ms   48442us   19577us   91228us
> Version  1.96       ------Sequential Create------ --------Random Create--------
> BenchClient         -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16 16378  31 +++++ +++ 18864  24 18024  33 +++++ +++ 14734  19
> Latency              2028us     761us    1188us     271us      36us     567us
>
> ---With Writeback Cache (CEPH)
> Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> BenchClient      4G   785  95 67573   8 19906   3  2777  96 32681   3  2764  80
> Latency             17410us     729ms     737ms   15103us   22802us   88876us
> Version  1.96       ------Sequential Create------ --------Random Create--------
> BenchClient         -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16 24286  46 +++++ +++ +++++ +++ 31392  57 +++++ +++ +++++ +++
> Latency              1925us     760us    1136us     191us      65us     612us
>
> --- Without cache (CEPH)
> Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> BenchClient      4G   743  95 53350   6  6568   1  2400  90 28769   2  2024  67
> Latency             18056us    1503ms    2408ms   97616us   42963us   89855us
> Version  1.96       ------Sequential Create------ --------Random Create--------
> BenchClient         -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16 20070  40 +++++ +++ 18488  24 20123  36 +++++ +++ 15856  20
> Latency              1926us     833us    1386us     207us      64us     591us
>
> --- Without Cache test 2 (CEPH)
> Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> BenchClient      4G   731  88 47184   6  6461   1  2926  97 27001   2  1915  61
> Latency             17084us    2106ms     947ms    5563us   21173us   88365us
> Version  1.96       ------Sequential Create------ --------Random Create--------
> BenchClient         -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16 13473  27 +++++ +++ 13531  17 15646  28 +++++ +++ 17251  21
> Latency              1979us     841us    1034us     190us      66us     696us
>
>
> With an RBD image mounted to /mnt on the host system:
>
> Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> BenchClient      4G  1531  86 106478   7 106441   5  1881  72 4820502 100  8202 132
> Latency              7167us     226us     211us    4198us     185us    3115us
> Version  1.97       ------Sequential Create------ --------Random Create--------
> BenchClient         -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16 26881  58 +++++ +++ 22656  79 21652  76 +++++ +++ 14217  37
> Latency              1043us     144us     838us     830us       8us     114ms
>
> Directly to the SSD drive:
>
> Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> BenchClient      4G  1687  98 121456   8 124699   5  2942  99 5465973  99  8527 142
> Latency              7323us     221us     214us    3605us     205us    3402us
> Version  1.97       ------Sequential Create------ --------Random Create--------
> BenchClient         -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16 24850  57 +++++ +++ 22672  79 18802  72 +++++ +++ 28463  74
> Latency               129us     223us     223us     459us      15us     212us
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>


-- 
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on