Re: Has maximum performance been reached?

On 28/07/15 11:53, John Spray wrote:


On 28/07/15 11:17, Shneur Zalman Mattern wrote:
Oh, now I have to cry :-)
not because they aren't SSDs... they're SAS2 HDDs

Because I need to build something for 140 clients... 4200 OSDs

:-(

It looks like I could pick up performance with SSDs, but I need a huge capacity, ~2PB. Perhaps a cache tiering pool could save my money, but I've read here that it's slower than people think...

:-(

Why is Lustre more performant? It's the same HDDs, isn't it?

Lustre (A) isn't creating two copies of your data, and (B) isn't executing disk writes as atomic transactions (i.e. no data write-ahead log).

The tradeoff from A is that while a Lustre system typically requires an expensive dual-ported RAID controller, Ceph doesn't. You take the money you saved on RAID controllers and spend it on a larger number of cheaper hosts and drives. If you've already bought the Lustre-oriented hardware, then my advice would be to run Lustre on it :-)

The efficient way of handling B is to use SSD journals for your OSDs. Typical Ceph servers have roughly one SSD per four OSDs.
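
To put rough numbers on that, here is a back-of-the-envelope sketch only -- the ~100MB/s per spindle figure, the assumption that a co-located FileStore journal roughly halves usable disk throughput (every write lands on the spindle twice), and the size=2 replica count are all illustrative assumptions, not measurements:

    # Illustrative estimate of client-visible Ceph write bandwidth.
    # All figures below are assumptions for illustration, not measured values.
    def expected_write_bw(disks_per_node, nodes, per_disk_mb_s=100,
                          replicas=2, journal_on_ssd=True):
        raw = disks_per_node * per_disk_mb_s * nodes   # raw spindle bandwidth
        if not journal_on_ssd:
            raw /= 2        # journal + data both hit the spindle: double write
        return raw / replicas   # each client byte is written `replicas` times

    print(expected_write_bw(10, 3))                        # SSD journals: ~1500 MB/s ceiling
    print(expected_write_bw(10, 3, journal_on_ssd=False))  # co-located journals: ~750 MB/s

With SSD journals the double-write penalty moves off the spindles, which is why the replica count becomes the dominant factor below.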

Oh, I've just re-read the original message in this thread, and you're already using SSD journals.

So I think the only point of confusion was that you weren't dividing your expected bandwidth number by the number of replicas, right?

> Each spindle disk can write ~100MB/s, and we have 10 SAS disks on each node, so the aggregated write speed is ~900MB/s (because of striping etc.). And we have 3 OSD nodes, with objects striped across all 30 OSDs - I thought that would aggregate too and we'd get something around 2.5GB/s, but no...

Your expected bandwidth (with size=2 replicas) will be (900MB/s * 3)/2 = 1350MB/s -- so I think you're actually doing pretty well with your 1367MB/s number.
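
Spelled out with the numbers above, purely as a sanity check (the 900MB/s per node and size=2 figures are the ones quoted in this thread):

    per_node_mb_s = 900      # ~10 SAS spindles per node, after striping overhead
    nodes = 3
    replicas = 2             # pool size = 2
    print(per_node_mb_s * nodes / replicas)   # 1350.0 -- vs. the observed 1367 MB/s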

John





_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


