Re: Fast Ceph Cluster with PB storage

Dear community,

  I've had a conversation with Alexander; he asked me to explain the situation, and he will be very grateful for any advice.

  The requirements look like this:

1. He has a number of clients which periodically need to write a data set as large as 160 GB to storage. The acceptable write time is about a minute for that amount, i.e. around 2700-2800 MB/s. Each write session will happen in a dedicated manner. Data reads should also be quite fast. The written data must be shared after the write. Client OS: Windows.
2. Regular bulk storage is necessary as well. At the moment he is thinking about 1.2 PB of HDD storage with a 34 TB SSD cache tier.

The main question I have no answer for is: how does one calculate or predict per-client write speed for a Ceph cluster? For example, with a cache tier or even a dedicated SSD-only pool built on Intel S3710 or Samsung SM863 drives, how can the write speed be approximated? Concurrent writes to 6-8 good SSD drives could probably reach that speed, but is the same true for the cluster as a whole? Say, 3 sets of 8 drives across 13 servers (with additional overhead for network operations, ACKs and placement calculations), QDR or FDR InfiniBand or 40GbE; we know the drive specs, so does a formula exist to calculate speed expectations from the raw throughput and/or IOPS point of view?
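There is no exact formula, but a rough upper bound can be modeled as aggregate drive throughput divided by the replication factor and journal write amplification, derated for network/ACK overhead and capped by the network link. A minimal sketch (the drive figure, the 2x journal penalty and the 0.7 efficiency derate are my assumptions, not measured values; only a real benchmark settles it):

```python
# Back-of-envelope Ceph write-throughput estimate -- a rough sketch,
# not an official formula. Real clusters must be benchmarked.

def estimate_write_mbps(n_ssds, per_ssd_write_mbps, replication,
                        journal_on_same_device=True,
                        network_limit_mbps=5000.0,
                        efficiency=0.7):
    """Crude upper bound on aggregate client write throughput (MB/s)."""
    raw = n_ssds * per_ssd_write_mbps
    # Each client byte is stored `replication` times across OSDs.
    usable = raw / replication
    # Journal and data on the same SSD means every byte is written twice.
    if journal_on_same_device:
        usable /= 2
    # Derate for network round trips, ACKs and placement calculations.
    usable *= efficiency
    # A single 40GbE/FDR link tops out around ~5000 MB/s in practice.
    return min(usable, network_limit_mbps)

# Example: 3 sets of 8 SSDs = 24 drives at an assumed ~450 MB/s
# sustained write each, with 3x replication.
print(estimate_write_mbps(24, 450, 3))  # -> 1260.0
```

Under those assumptions the 24-drive pool lands well short of the 2700-2800 MB/s target, which is why the per-drive figure, journal placement and replication factor all need to be pinned down first.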

Or, approaching from the other side: if the prerequisites are given, how does one make sure the projected cluster meets them? I'm pretty sure this is a typical task; how would you solve it?

Thanks a lot in advance and best regards,
Vladimir


Best regards,
Vladimir Drobyshevskiy
"АйТи Город" (IT City) company
+7 343 2222192

Hardware and software:
IBM, Microsoft, Eset
Turnkey project delivery
IT services outsourcing

2016-08-08 19:39 GMT+05:00 Alexander Pivushkov <pivu@xxxxxxx>:

Hello dear community!
I'm new to Ceph and only recently took up the topic of building clusters,
so your opinion is very important to me.

We need to create a cluster with 1.2 PB of storage and very fast access to the data. Previously, "Intel® SSD DC P3608 Series 1.6TB NVMe PCIe 3.0 x4" drives were used, and their speed was fully satisfactory; but as the storage volume grows, the price of such a cluster rises very steeply, hence the idea to use Ceph.
The requirements are as follows:

- 160 GB of data should be read and written at SSD P3608 speeds
- A high-speed SSD storage tier of 36 TB must be created, with read/write speed approaching that of the P3608
- A 1.2 PB store must be created; the faster the access to it, the better
- There must be triple redundancy
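The first requirement pins down a sustained throughput target directly from how fast 160 GB must move. A tiny sketch (the candidate throughput figures are illustrative assumptions, not drive specs):

```python
# Time to move a 160 GB working set at various sustained throughputs.
# The MB/s values below are illustrative candidates, not measurements.
data_gb = 160

for mbps in (1000, 2000, 2800):
    seconds = data_gb * 1000 / mbps  # GB -> MB, then divide by MB/s
    print(f"{mbps} MB/s -> {seconds:.0f} s")
```

So hitting roughly a one-minute transfer requires sustaining on the order of 2700-2800 MB/s end to end, per client session.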
I do not yet really understand how to build such a configuration around the P3608 disks. Of course, the configuration needs to be changed; it is very expensive.

InfiniBand will be used, as well as 40 Gb Ethernet.
We will also use virtualization on high-performance hardware to optimize the number of physical servers.
I'm not tied to specific server models or manufacturers. I have only created a cluster scheme, which should be criticized :)

1. OSD nodes - 13, each with:
     a. 2 × 1.4 TB SSD drives, analogous to the Intel® SSD DC P3608 Series
     b. Fibre Channel 16 Gbit/s - 2 ports
     c. An array (not RAID) of 288 TB of SATA drives (36 drives × 8 TB)
     d. 1 × 360 GB SSD, analogous to the Intel SSD DC S3500
     e. A 40 GB SATA drive for the operating system (or network boot, which is preferable)
     f. 288 GB RAM
     g. 2 × CPU - 9 cores at 2 GHz (E5-2630 v4)
2. MON - 3, all virtual servers, each with:
     a. 1 Gbps Ethernet - 1 port
     b. A 40 GB SATA drive for the operating system (or network boot, which is preferable)
     c. A 40 GB SATA drive
     d. 6 GB RAM
     e. 1 × CPU - 2 cores at 1.9 GHz
3. MDS - 2, all virtual servers, each with:
     a. 1 Gbps Ethernet - 1 port
     b. A 40 GB SATA drive for the operating system (or network boot, which is preferable)
     c. A 40 GB SATA drive
     d. 6 GB RAM
     e. 1 × CPU - min. 2 cores at 1.9 GHz


To accelerate things, I plan to use the SSDs for the cache tier and for the OSD journals.
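As a quick sanity check, the scheme above can be tallied against the 1.2 PB goal: raw SATA capacity across the 13 OSD nodes, then usable capacity after triple replication (numbers taken straight from the list; 36 × 8 TB per node):

```python
# Raw vs usable capacity for the proposed layout:
# 13 OSD nodes x 36 SATA drives x 8 TB, triple redundancy.
nodes = 13
drives_per_node = 36
drive_tb = 8
replication = 3

raw_tb = nodes * drives_per_node * drive_tb   # 3744 TB raw
usable_tb = raw_tb / replication              # 1248 TB ~= 1.2 PB
print(raw_tb, usable_tb)  # -> 3744 1248.0
```

So the drive count does line up with the 1.2 PB usable target once 3x replication is accounted for, with essentially no headroom for failed drives or near-full OSD limits.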

--
Alexander Pushkov

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


