Re: Question about big EC pool.

On 13-Sep-15 01:12, Somnath Roy wrote:
On 12-Sep-15 19:34, Somnath Roy wrote:
>I don't think there is any limit from the Ceph side.
>We are testing a ~768 TB deployment with 4:2 EC on flash and it is working well so far.
>
>Thanks & Regards
>Somnath
Thanks for the answer!

It's very interesting!

What hardware do you use for your test cluster?
[Somnath] Three 256 TB SanDisk JBOFs (IF100) with 2 heads in front of that, so a 6-node cluster in total. FYI, each IF100 can support a max of 512 TB. The heads each have 128 GB RAM and a dual-socket Xeon E5-2690 v3.

What version of Ceph do you use?
How does the cluster behave in a degraded state? Is the performance degradation large?
I think the E5-2690 is not enough for such a flash cluster.

How do you get 6 nodes if, as you say, it is "three 256 TB SanDisk JBOFs (IF100) with 2 heads in front of that"? Maybe I don't understand how the IF100 works.

Do you use only SSDs, or SSDs + NVMe?

[Somnath] For now, it is all SSDs.

Is the journal located on the same SSD or not?

[Somnath] Yes, journal is on the same SSD.
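
A minimal sketch of what that looks like with ceph-disk, assuming a made-up data device /dev/sdb; when no separate journal device is given, the journal ends up on a partition of the same SSD:

# Sketch only: data and journal partitions both created on /dev/sdb
ceph-disk prepare /dev/sdb
ceph-disk activate /dev/sdb1

# For comparison, a separate journal device would be:
# ceph-disk prepare /dev/sdb /dev/nvme0n1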

What plugin do you use?

[Somnath] Cauchy_good jerasure.
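
A minimal sketch of such a 4+2 profile and pool, assuming made-up names ("ec42", "ecpool") and placement-group counts; the real failure domain and pg sizing depend on the cluster:

# Sketch only: 4+2 jerasure profile with the cauchy_good technique
ceph osd erasure-code-profile set ec42 k=4 m=2 \
    plugin=jerasure technique=cauchy_good \
    ruleset-failure-domain=host

# Erasure-coded pool backed by that profile
ceph osd pool create ecpool 2048 2048 erasure ec42

# Check what was stored in the profile
ceph osd erasure-code-profile get ec42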

Did you try the ISA plugin?
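
(For comparison, an equivalent ISA-based profile would be something along these lines; reed_sol_van is the ISA plugin's default technique, and the profile name is again made up:)

# Sketch only: same 4+2 layout, but using the Intel ISA-L plugin
ceph osd erasure-code-profile set ec42-isa k=4 m=2 \
    plugin=isa technique=reed_sol_van \
    ruleset-failure-domain=host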


Have you caught any bugs or strange behavior?

[Somnath] So far all is well :-)


It's good :)

Thanks for the answer!
--
Mike, yes.
