Re: CephFS Performance

On Tue, May 9, 2017 at 9:07 PM, Brady Deetz <bdeetz@xxxxxxxxx> wrote:
So with email, you're talking about lots of small reads and writes. In my experience with DICOM data (thousands of 20KB files per directory), CephFS doesn't perform very well at all on platter drives. I haven't experimented with pure SSD configurations, so I can't comment on that.

Yes, that's pretty much why I'm using cache tiering on SSDs.
 
Somebody may correct me here, but small-block I/O on writes just makes latency all the more important, due to the need to wait for your replicas to be written before moving on to the next block.

I think that is correct. Smaller blocks mean more I/O operations for the same amount of data, so SSDs benefit a lot.
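As a rough back-of-envelope sketch of that effect (the latency figures below are assumptions for illustration, not measurements from this cluster), per-write latency caps small-block throughput hard at low queue depths:

```python
# Back-of-envelope: how per-write latency caps small-block IOPS at queue
# depth 1, where each write must be acknowledged (including replicas)
# before the next one starts. Latency values are assumed, not measured.

def max_iops(latency_ms: float) -> float:
    """At queue depth 1, IOPS is bounded by 1 / per-write latency."""
    return 1000.0 / latency_ms

def throughput_mib_s(latency_ms: float, block_kib: float) -> float:
    """Resulting throughput for a given block size."""
    return max_iops(latency_ms) * block_kib / 1024.0

for label, latency_ms in [("HDD OSDs, 3x replication (assumed ~15 ms)", 15.0),
                          ("SSD OSDs, 3x replication (assumed ~2 ms)", 2.0)]:
    print(f"{label}: {max_iops(latency_ms):.0f} IOPS, "
          f"{throughput_mib_s(latency_ms, 20):.2f} MiB/s at 20 KiB writes")
```

The absolute numbers are made up, but the shape of the result is the point: with small writes the client spends most of its time waiting on acknowledgements, which is where SSD journals/OSDs pay off.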
 
Without knowing the exact hardware details, my brain immediately jumps to networking constraints. Two or three spindle drives can pretty much saturate a 1 Gbps link. As soon as you create contention for that resource, you drive up iowait and latency.
You mentioned you don't control the network. Maybe you can scale down and out.

 I'm constrained to the topology I showed you for now. I did plan another (see https://creately.com/diagram/j1eyig9i/7wloXLNOAYjeregBGkvelMXL50%3D), but it won't be possible for the time being.
 That setup would have a 10 Gbps interconnect link.
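To put rough numbers on the link question, here is a back-of-envelope sketch (all figures are assumptions for illustration: per-drive throughput, link efficiency, and a size=3 pool sharing one wire for public and cluster traffic):

```python
# Illustrative link-saturation arithmetic; none of these numbers are
# measurements from the cluster being discussed.

def link_mib_s(gbps: float, efficiency: float = 0.93) -> float:
    """Rough usable payload of a link in MiB/s after protocol overhead."""
    return gbps * 1e9 / 8 / 2**20 * efficiency

HDD_MIB_S = 60        # assumed per-spindle throughput under a mixed workload
REPLICA_COUNT = 3     # size=3 pool: each client write travels the wire ~3x
                      # if public and cluster traffic share one link

for gbps in (1, 10):
    usable = link_mib_s(gbps)
    print(f"{gbps} Gbps link: ~{usable:.0f} MiB/s usable, "
          f"saturated by ~{usable / HDD_MIB_S:.1f} spindles, "
          f"client write ceiling with 3x replication ~{usable / REPLICA_COUNT:.0f} MiB/s")
```

Under those assumptions the 1 Gbps interconnect is the first thing to fill up, which is consistent with the point above about two or three spindles being enough to saturate it.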

On Wed, May 10, 2017 at 3:55 AM, John Spray <jspray@xxxxxxxxxx> wrote:

Hmm, to understand this better I would start by taking cache tiering
out of the mix; it adds significant complexity.

The "-direct=1" part could be significant here: when you're using RBD,
that's getting handled by ext4, and then ext4 is potentially still
benefiting from some caching at the ceph layer.  With CephFS on the
other hand, it's getting handled by CephFS, and CephFS will be
laboriously doing direct access to OSD.

John

I won't be able to change that for now; I would need another testing cluster.
The point of direct=1 was to remove any possibility of caching in the middle.
That fio suite was suggested by peetaur on the IRC channel (thanks :)
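For reference, a minimal sketch of that kind of comparison: the same small-block direct-I/O fio job run against an ext4-on-RBD mount and a CephFS mount. The mount points and job parameters here are assumptions for illustration, not the exact suite peetaur suggested.

```python
# Run an identical fio job on two mounts so the only variable is the
# filesystem path to the OSDs. Paths below are hypothetical.
import subprocess

MOUNTS = {"rbd-ext4": "/mnt/rbd-ext4", "cephfs": "/mnt/cephfs"}  # assumed paths

for label, directory in MOUNTS.items():
    cmd = [
        "fio",
        f"--name=randwrite-{label}",
        f"--directory={directory}",
        "--rw=randwrite",
        "--bs=4k",             # small blocks, similar to a maildir-style workload
        "--size=1G",
        "--direct=1",          # bypass the client page cache, as discussed above
        "--ioengine=libaio",
        "--iodepth=1",
        "--runtime=60",
        "--time_based",
        "--group_reporting",
    ]
    print(f"== {label} ==")
    subprocess.run(cmd, check=True)
```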

Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
Belo Horizonte - Brasil

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
