Re: Poor performance with three nodes

On Wed, 2 Oct 2013, Eric Lee Green wrote:
> By contrast, that same dd to an iSCSI volume exported by one of the servers
> wrote at 240 megabytes per second. Order of magnitude difference.

Can you see what 'rados -p rbd bench 60 write' tells you?
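
For example (a sketch, assuming the default 'rbd' pool exists; -t sets the
number of concurrent ops, so -t 1 roughly mimics dd's one-IO-at-a-time
behavior):

  # default: 16 concurrent 4MB writes for 60 seconds
  rados -p rbd bench 60 write

  # single outstanding op, closer to what dd is doing
  rados -p rbd bench 60 write -t 1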

I suspect the problem here is an unfortunate combination of what dd does 
(1 outstanding write at a time) and what iSCSI is probably doing 
(acknowledging the write before it is written to the disk--I'm guessing a 
write to /dev/* doesn't also send a SCSI flush).  This lets you approach 
the disk or network bandwidth even though the client/app (dd) is only 
dispatching a single 512K IO at a time.
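
If you have fio handy, the queue-depth effect is easy to see directly (a 
sketch; /dev/rbd0 here is a placeholder for your mapped RBD device, and 
note this writes to the device, so use a scratch volume):

  # 1 outstanding 512K write, like dd
  fio --name=qd1 --filename=/dev/rbd0 --rw=write --bs=512k \
      --ioengine=libaio --direct=1 --iodepth=1 --size=1g

  # 16 outstanding writes
  fio --name=qd16 --filename=/dev/rbd0 --rw=write --bs=512k \
      --ioengine=libaio --direct=1 --iodepth=16 --size=1g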

I'm curious if the iSCSI number changes if you add oflag=direct or 
oflag=sync.
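
i.e., something like this against the iSCSI device (/dev/sdX is a 
placeholder; oflag=direct bypasses the page cache, oflag=sync flushes 
each write):

  dd if=/dev/zero of=/dev/sdX bs=512k count=2048 oflag=direct
  dd if=/dev/zero of=/dev/sdX bs=512k count=2048 oflag=sync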

It's also worth pointing out that what dd is doing (a single outstanding 
IO) is not something any sane file system would do, except perhaps at 
commit/sync time when it is carefully ordering IOs.  You might want to try 
the dd to a file inside a mounted fs instead of to the raw device.
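
Something along these lines (a sketch; assumes the image is mapped at 
/dev/rbd0, and conv=fdatasync makes dd flush once at the end so the 
number isn't just the page cache):

  mkfs.xfs /dev/rbd0
  mount /dev/rbd0 /mnt/test
  dd if=/dev/zero of=/mnt/test/bigfile bs=512k count=2048 conv=fdatasync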

sage


> 
> > The OSDs should be individual drives, not part of a RAID set, otherwise
> > you're just creating extra work, unless you've reduced the number of copies
> > to 1 in your ceph config.
> > 
> If all I wanted was 1 copy, I would use iSCSI, which is much faster. My purpose
> was to distribute copies across multiple servers / SAS channels. As for individual
> drives, this is on shared infrastructure that provides NFS and iSCSI services
> to the rest of my network, so that is not going to happen.
> 
> > What I've seen is that a single-threaded Ceph client maxes out around 50
> > MB/s for us, but the overall capacity is much, much higher.
> 
> I suppose I was looking for performance similar to Amazon EBS. Oh well.
> 
> It's looking suspiciously like Ceph simply isn't a viable option for my
> environment. I can't put out something that performs worse than the current
> environment; my users would lynch me. iSCSI is a PITA because, if one of the
> servers goes down, I have to hijack its IP on its replica server and export its
> replica volumes to the clients, but at least I can get more than 50 MB/s out of
> iSCSI. Oh well, it was a good idea anyhow...
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



