Re: Question about big EC pool.

13-Sep-15 21:25, Somnath Roy writes:
<<inline

-----Original Message-----
From: Mike Almateia [mailto:mike.almateia@xxxxxxxxx]
Sent: Sunday, September 13, 2015 10:39 AM
To: Somnath Roy; ceph-devel
Subject: Re: Question about big EC pool.

13-Sep-15 01:12, Somnath Roy writes:
12-Sep-15 19:34, Somnath Roy writes:
I don't think there is any limit from the Ceph side.
We are testing a ~768 TB deployment with 4:2 EC on flash and it is working well so far.

Thanks & Regards
Somnath
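
For reference, a 4:2 setup like that is normally defined through an erasure-code profile; a minimal sketch follows (the profile name, pool name, and PG counts are placeholders, not the settings of this cluster):

    # EC profile with 4 data chunks and 2 coding chunks, spread across hosts
    ceph osd erasure-code-profile set ec42 k=4 m=2 ruleset-failure-domain=host
    # pool using that profile; pick pg_num/pgp_num to match the cluster size
    ceph osd pool create ecpool 4096 4096 erasure ec42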
Thanks for the answer!

It's very interesting!

What hardware do you use for your test cluster?
[Somnath] Three 256 TB SanDisk IF100 JBOFs and 2 heads in front of that, so a 6-node cluster in total. FYI, each IF100 can support a maximum of 512 TB. Each head server has 128 GB of RAM and dual-socket Xeon E5-2690 v3 CPUs.

What version of Ceph do you use?
[Somnath] As of now it's Giant, but we will be moving to Hammer soon.

Good. Maybe you could share the performance results/benefits after migrating to Hammer?

How does the cluster work in a degraded state? Is the performance degradation huge?

[Somnath] That's one of the reasons we are using cauchy_good; its performance in the degraded state is much better. By reducing the recovery traffic (lower values for the recovery settings), we are able to get a significant performance improvement in the degraded state as well. BTW, the degradation will depend on how much data the cluster has to recover. In our case, we are seeing ~8% degradation if, say, ~64 TB (one Ceph node) has failed, but ~28% if ~128 TB (2 nodes) is down. This is for 4M reads.
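
In case it helps, the technique is chosen in the erasure-code profile and the recovery throttling is just a matter of lowering a few OSD options; a rough sketch, with illustrative values rather than the ones used on this cluster:

    # jerasure plugin with the cauchy_good technique instead of the default reed_sol_van
    ceph osd erasure-code-profile set ec42cauchy k=4 m=2 plugin=jerasure technique=cauchy_good
    # throttle recovery/backfill so client I/O takes less of a hit while degraded
    # (the same options can also be set in the [osd] section of ceph.conf)
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'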

Cool, I see now.

I think the E5-2690 isn't enough for that flash cluster.

[Somnath] In our case, and especially for larger block size object use cases, a dual-socket E5-2690 should be more than sufficient; we are not able to saturate it in this case. For smaller block size block use cases we are almost saturating the CPUs with our config, though. If you are planning to use EC with RGW, and considering that your objects are at least 256K or so, this CPU complex is good enough IMO.


Ok.


How do you have 6 nodes if, as you say, there are "three 256 TB SanDisk IF100 JBOFs and 2 heads in front of that"? Maybe I have not understood how the IF100 works.

[Somnath] There are 2 Ceph nodes connected to each IF100 (you can connect up to 8 servers in front). The IF100 drives are partitioned between the 2 head servers. We used 3 IF100s, so in total 3 * 2 = 6 head nodes, or a 6-node Ceph cluster. Hope that makes sense now.


Yes, now I understand it.

Thanks for the info!

--
Mike, yes.
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


