Re: Howto reduce the impact from cephx with small IO

Hi Mark,
thanks for the links.

If I search for wip-auth I find nothing on docs.ceph.com... does this mean that wip-auth never found its way into the Ceph code base?!

But I'm wondering about the RHEL7 entry in the results at http://www.spinics.net/lists/ceph-devel/msg22416.html
Unfortunately there are no values for RHEL7 with auth enabled...
Is it known on which side the cephx bottleneck sits (client, mon, osd), or how the cost is split in percent? My clients (qemu on Proxmox VE) cannot be changed, but my OSDs could also run on RHEL7/CentOS if that brings a performance boost. For now, the mons run on the Proxmox VE hosts.
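A rough way to see on which side the CPU cost lands (a sketch, assuming the perf tool is installed on an OSD node and on the client host) is to profile the processes during the 4k run and look for crypto/signing-related symbols near the top:

# on an OSD node, attach to one ceph-osd process during the fio run
perf top -p "$(pidof ceph-osd | awk '{print $1}')"

# on the client host, attach to the qemu process of the test VM
# (on Proxmox VE the process is usually named kvm)
perf top -p "$(pidof kvm | awk '{print $1}')"

If most of the extra time shows up on the client side, moving only the OSDs to RHEL7/CentOS would probably not help much.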

Udo


On 20.04.2016 at 19:13, Mark Nelson wrote:
Hi Udo,

There was quite a bit of discussion and some partial improvements to cephx performance about a year ago. You can see some of the discussion here:

http://www.spinics.net/lists/ceph-devel/msg22223.html

and in particular these tests:

http://www.spinics.net/lists/ceph-devel/msg22416.html

Mark

On 04/20/2016 11:50 AM, Udo Lembke wrote:
Hi,
on a small test system (3 nodes (mon + osd), 6 OSDs, Ceph 0.94.6) I
compare runs with and without cephx.

I use fio for that inside a VM on a host outside the 3 ceph nodes,
with this command:
fio --max-jobs=1 --numjobs=1 --readwrite=read --blocksize=4k --size=4G
--direct=1 --name=fiojob_4k
All tests are run three times (after clearing caches) and I take the
average (the values are very close together).
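For reference, a typical way to clear the caches between runs looks like this (a sketch; the exact commands used here may differ):

# drop page cache, dentries and inodes; run inside the VM and, if
# possible, on the OSD nodes before each run
sync
echo 3 > /proc/sys/vm/drop_caches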

Whether cephx is enabled or not doesn't matter for a big block size of 4M - but it does for 4k!
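For reference, the 4M comparison run is the same job with only the block size changed (a sketch, assuming otherwise identical options):

fio --max-jobs=1 --numjobs=1 --readwrite=read --blocksize=4M --size=4G
--direct=1 --name=fiojob_4M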

If I disable cephx I get:
7040 kB/s bandwidth
1759 IOPS
564 µs clat

With the same config, but with cephx enabled, I see these values:
4265 kB/s bandwidth
1066 IOPS
933 µs clat

This shows that the performance drops by 40% with cephx!!

Disabling cephx is not an alternative, because any system that has access
to the ceph network can then read/write all data...

ceph.conf without cephx:
[global]
          auth_cluster_required = none
          auth_service_required = none
          auth_client_required = none
          cephx_sign_messages = false
          cephx_require_signatures = false
          #
          cluster network =...

ceph.conf with cephx:
[global]
          auth_client_required = cephx
          auth_cluster_required = cephx
          auth_service_required = cephx
          #
          cluster network =...

Is it possible to reduce the cephx impact?
Any hints are welcome.
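One middle ground that may be worth testing (a sketch, and only a partial mitigation, since the authentication itself still costs CPU): keep cephx for authentication but switch off per-message signatures, using the same signing options that already appear in the no-cephx config above:

[global]
          auth_cluster_required = cephx
          auth_service_required = cephx
          auth_client_required = cephx
          # keep authentication, but skip per-message signatures
          cephx_sign_messages = false
          cephx_require_signatures = false
          #
          cluster network =...

Whether this recovers a meaningful part of the 4k numbers would have to be measured with the same fio job.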


regards

Udo

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



