Re: What a maximum theoretical and practical capacity in ceph cluster?

>
> And finally the SAS drive. For Ceph I don't see this drive making much
> sense. Most manufacturers' enterprise SATA drives are identical to the
> SAS version, with just a different interface. Performance seems identical
> in all the comparisons I have seen, apart from the fact that SATA can
> only queue up to 32 I/Os; not sure how important this is. Yet SAS drives
> still command a price premium.
>
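
As an aside, the queue depth the kernel actually negotiated per disk
is visible in sysfs, so the 32-vs-more difference is easy to check.
A quick untested Python sketch; the sd* names are just the usual
convention:

#!/usr/bin/env python3
# Print the effective queue depth for each SCSI-attached disk
# (SATA and SAS disks both show up here), read from sysfs.
import glob

for path in sorted(glob.glob("/sys/block/sd*/device/queue_depth")):
    dev = path.split("/")[3]  # e.g. "sda"
    with open(path) as f:
        print(f"{dev}: queue_depth={f.read().strip()}")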

Anecdotal: I got a good deal on some new systems, including WD
Nearline SAS disks.  The deal wasn't amazing, but the whole system was
cheaper than assembling a SuperMicro myself with HGST SATA disks.
The SATA nodes have a battery-backed RAID0 setup.  The SAS nodes use
a SAS HBA (no write cache).  All nodes' journals are the same model of
Intel SATA SSD, with no write caching.

My load test was snapshot trimming, and I noticed the difference from
watching atop.  Completely quantifiable and repeatable ;-).
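
For anyone who wants to reproduce the load: it was just removing a
large RBD snapshot.  Roughly like this untested sketch, where
"rbd/loadtest" is a hypothetical pool/image name:

#!/usr/bin/env python3
# Untested sketch of the trim load: snapshot an RBD image, dirty it
# with writes (so the snapshot holds COW clones), then remove the
# snapshot, which kicks off snap trimming on the OSDs.
import subprocess

SNAP = "rbd/loadtest@trimtest"  # hypothetical pool/image@snap

subprocess.run(["rbd", "snap", "create", SNAP], check=True)
# ... write a lot of data to the image here, so trimming has work ...
subprocess.run(["rbd", "snap", "rm", SNAP], check=True)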

The SAS disks would consistently finish sooner than the SATA disks.
For an rmsnap that took ~2 hours to trim, the SAS disks would finish
about 15 minutes sooner.  Regardless of uneven data distribution, all
SAS disks were completely done trimming before the first SATA disk
started to ramp down its IOPS.

This is something I just noticed, so I haven't (yet) spent any time
trying to quantify it properly.

I only noticed the difference when the load was high enough to make
the cluster completely unresponsive.  I have no idea whether it will
show up under normal loads.  I'm not even sure how I'm going to
quantify this, since the lack of a write cache on the SAS nodes makes
the graphs much harder to compare.
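
If I do get around to quantifying it, I'll probably just sample
/proc/diskstats per disk instead of fighting with the graphs.  An
untested sketch of what I have in mind; the disk names are made up:

#!/usr/bin/env python3
# Sample per-disk write IOPS from /proc/diskstats once a second, so
# the tail-off at the end of a snap trim is visible per OSD disk.
import time

DISKS = ["sdb", "sdc", "sdd"]  # hypothetical OSD data disks
INTERVAL = 1.0                 # seconds between samples

def writes_completed():
    """Return {disk: total writes completed} from /proc/diskstats."""
    counts = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] in DISKS:
                counts[fields[2]] = int(fields[7])  # writes completed
    return counts

prev = writes_completed()
while True:
    time.sleep(INTERVAL)
    cur = writes_completed()
    rates = "  ".join(
        f"{d}: {(cur[d] - prev[d]) / INTERVAL:7.1f} w/s" for d in DISKS
    )
    print(time.strftime("%H:%M:%S"), rates)
    prev = cur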

So far, the best I can say is that the SAS disks are "faster", even
without a write cache.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



