Re: suse_enterprise_storage3_rbd_LIO_vmware_performance_bad

On 2016-07-01T13:04:45, mq <maoqi1982@xxxxxxx> wrote:

Hi MQ,

perhaps the upstream list is not the best place to discuss this. SUSE
ships adjusted backports of the iSCSI functionality that upstream does
not carry; very few people here are going to be intimately familiar
with the code you're running. If you're evaluating SES3, you might as
well give our support team a call ;-)

That said:

First, let me start with what others have already pointed out: the iSCSI
gateway (via the LIO targets) will introduce an additional network hop
between your clients and the Ceph cluster. That's perfectly fine for
bandwidth-oriented workloads, but for latency/IOPS, it is quite
expensive. It also negates some of the benefits of Ceph (namely, that a
client can directly talk to the OSD holding the data without an
intermediary).

So, you need to check whether the iSCSI access method fits your use
case at all, and then the iSCSI gateways really need good network
interfaces, both towards the clients and towards the Ceph cluster (on
its public network).
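If you want to quantify what that extra hop costs, one approach is to
measure small-I/O latency over native RBD from a host that sits
directly on the Ceph public network and compare it with what a VMware
guest sees through the iSCSI LUN. Below is a minimal sketch using the
python-rados / python-rbd bindings - the pool and image names are just
placeholders, and the image has to exist already:

#!/usr/bin/env python
# Rough 4K random-read latency probe over native RBD.
# Assumes python-rados / python-rbd are installed and that the
# pool "rbd" contains an image called "latency-test" (placeholders).
import random
import time

import rados
import rbd

IO_SIZE = 4096
SAMPLES = 1000

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')          # placeholder pool name
    image = rbd.Image(ioctx, 'latency-test')   # placeholder image name
    size = image.size()
    latencies = []
    for _ in range(SAMPLES):
        offset = random.randrange(0, size - IO_SIZE, IO_SIZE)
        start = time.time()
        image.read(offset, IO_SIZE)
        latencies.append(time.time() - start)
    latencies.sort()
    print("median: %.2f ms, 99th percentile: %.2f ms" % (
        latencies[len(latencies) // 2] * 1000,
        latencies[int(len(latencies) * 0.99)] * 1000))
    image.close()
    ioctx.close()
finally:
    cluster.shutdown()

Run an equivalent 4K random-read test inside a guest against the iSCSI
target; the difference between the two numbers is essentially the
price of the gateway hop.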

> My cluster
> 3 ceph nodes: 2*E5-2620, 64G mem, 2*1Gbps
> (3*10K SAS, 1*480G SSD) per node, SSD as journal
> 1 vmware node: 2*E5-2620, 64G mem, 2*1Gbps

And here we are. 1 GbE NICs just aren't adequate for any reasonable
performance numbers. I'm assuming you're running the iSCSI GW on the
Ceph nodes, just like the MONs (since you didn't specify any additional
nodes and the node[123] names are kind of suspicious).

This environment is starved for network performance. You barely have
enough network bandwidth to sustain a single one of those drives - and
then add in that you're replicating over the same NICs, and that the
OSD traffic is multiplexed onto the same network as the iSCSI/client
traffic.
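To put rough numbers on that - a back-of-envelope sketch only, which
assumes the default replication size of 3 and the iSCSI gateways
colocated with the OSD nodes, and which glosses over the fact that the
individual hops land on different nodes' NICs:

# Back-of-envelope bandwidth budget for this setup.
# Assumes 2x1GbE per node (bonded, best case), replication size 3,
# and the iSCSI gateway running on the same nodes/NICs as the OSDs.

nic_budget_mbs = 2 * 125       # 2 x 1 Gbps ~= 250 MB/s per node, best case
sas_drive_mbs = 150            # rough sequential figure for one 10K SAS drive
replication = 3

# For every MB a VMware host writes, the wire carries roughly:
#   1x  client -> iSCSI gateway
#   1x  gateway -> primary OSD (unless it happens to be local)
#   2x  primary OSD -> replica OSDs
# and all of it shares the same 2x1GbE, since there is no separate
# cluster network.
wire_amplification = 1 + 1 + (replication - 1)

write_ceiling_mbs = nic_budget_mbs / wire_amplification
print("per-node NIC budget:        %d MB/s" % nic_budget_mbs)
print("one 10K SAS drive, approx.: %d MB/s" % sas_drive_mbs)
print("client write ceiling:       ~%d MB/s" % write_ceiling_mbs)

Even with generous rounding, that write ceiling sits well below what a
single one of your SAS drives can stream sequentially, let alone the
three per node.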

You also lack scale-out capacity - Ceph scales horizontally, but you
have just three nodes with only 3 drives each. That doesn't give Ceph
a lot to work with.

> anyone can give me some suggestion to improve the performance ?

Yes. I'd start with ordering a lot more and faster hardware ;-) But even
then, you'll have to understand that iSCSI will not - and really,
really, cannot - deliver quite the same performance as native RBD.

So that'd make me look into replacing VMware with an OpenStack cloud,
where you get native Ceph drivers, proper integration, and better
performance.

After all - if you're avoiding proprietary lock-in for the storage in
favor of Open Source / Ceph (which is a great choice!), why would you
accept it on the hypervisor/private cloud? ;-)



Regards,
    Lars

-- 
Architect SDS, Distinguished Engineer
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde




