Re: Has maximum performance been reached?


On 28/07/15 11:17, Shneur Zalman Mattern wrote:
Oh, now I have to cry :-)
not because they're not SSDs... they're SAS2 HDDs

Because I need to build something for 140 clients... 4200 OSDs

:-(

It looks like I can boost my performance with SSDs, but I need a huge capacity, ~2PB.
Perhaps a cache tiering pool could save me money, but I've read here that it's slower than most people think...

:-(

Why is Lustre more performant? It's the same HDDs, isn't it?

Lustre isn't (A) creating two copies of your data, and it isn't (B) executing disk writes as atomic transactions (i.e. there is no write-ahead log for the data).

The tradeoff for (A) is that while a Lustre system typically requires an expensive dual-ported RAID controller, Ceph doesn't. You take the money you saved on RAID controllers and spend it on a larger number of cheaper hosts and drives. If you've already bought the Lustre-oriented hardware, then my advice would be to run Lustre on it :-)

The efficient way of handling (B) is to use SSD journals for your OSDs. A typical Ceph server has one SSD for approximately every four OSDs.
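
For what it's worth, here's a minimal sketch of that layout (the device names are hypothetical, and it assumes the FileStore-era ceph-disk tooling): one SSD (/dev/sde) holding the journals for four HDD-backed OSDs on /dev/sda through /dev/sdd.

    # ceph.conf -- sets the size of each journal partition, in MB
    [osd]
    osd journal size = 10240

    # each ceph-disk run creates one more journal partition on the SSD
    ceph-disk prepare /dev/sda /dev/sde
    ceph-disk prepare /dev/sdb /dev/sde
    ceph-disk prepare /dev/sdc /dev/sde
    ceph-disk prepare /dev/sdd /dev/sde

Since each run carves another journal partition out of /dev/sde, the SSD needs room for 4 x 10GB plus headroom, and note that losing that one SSD takes all four OSDs' journals with it, so plan your failure domain accordingly.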

John
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


