Re: Need advice on Ceph design

Thanks Sébastien,

Let me answer all of your questions, which I missed earlier. Let me
tell you this is my first cluster, so I have no idea what would be
best or worst here. Also, you said we don't need an SSD journal for
BlueStore, but I have heard people say the WAL/RocksDB should be on
an SSD. Can you explain?

If I have 500GB 7.5k SATA HDDs, won't running the WAL/RocksDB on the
same disk as the OSD data slow things down?
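
Just so I understand the options, this is what I think the two
layouts would look like with ceph-volume (a sketch only; the device
names are placeholders for my hardware):

    # Colocated: WAL/RocksDB on the same device as the OSD data
    ceph-volume lvm create --bluestore --data /dev/sdb

    # Split: data on the HDD, RocksDB (and the WAL with it) on an
    # SSD partition; /dev/sdc1 is just a placeholder
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdc1

Is the second layout worth doing with my disks?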




On Wed, Jul 18, 2018 at 2:42 PM, Sébastien VIGNERON
<sebastien.vigneron@xxxxxxxxx> wrote:
> Hello,
>
> What is your expected workload? VMs, primary storage, backup, object storage, ...?

VMs only (we are running OpenStack, and I need an HA solution:
live migration, etc.)
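
For reference, this is roughly how I plan to point Nova at RBD so
live migration works (a sketch; the pool name, user, and secret UUID
are placeholders, not a working config):

    # nova.conf on each compute node
    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = 00000000-0000-0000-0000-000000000000

With all disks living in RBD, nothing has to be copied between
hypervisors during a live migration.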

> How many disks do you plan to put in each OSD node?

6 disks per OSD node (I have Samsung 850 Pro 500GB SSDs and 500GB 7.5k SATA HDDs)

> How many CPU cores? How much RAM per node?

2.9GHz (32 cores in /proc/cpuinfo)

> Ceph access protocol(s): CephFS, RBD or objects?

RBD only

> How do you plan to give your clients access to the storage? NFS, SMB, CephFS, ...?

OpenStack Nova / Cinder
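
For the Cinder side I had something like this in mind (a sketch; the
pool name, PG count, and caps are placeholders based on the
OpenStack/Ceph integration docs, not tuned for my cluster):

    # Replicated pool for Cinder volumes
    ceph osd pool create volumes 256
    ceph osd pool application enable volumes rbd

    # Keyring for the Cinder client
    ceph auth get-or-create client.cinder mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes'

Nova and Glance would get similar pools and keyrings.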

> Replicated pools or EC pools? If EC, k and m factors?

I haven't thought about it. This is my first cluster, so I don't know what would be best.
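
From the little I have read, the choice would look something like
this on the CLI (a sketch only; pool names, PG counts, and k/m are
placeholders):

    # Replicated pool, 3 copies, roughly 33% usable capacity
    ceph osd pool create rbd-rep 256 256 replicated
    ceph osd pool set rbd-rep size 3

    # EC pool with k=4, m=2, roughly 66% usable capacity; note that
    # k+m=6 wants 6 hosts with the default host failure domain, so
    # it would not even fit my 5 nodes
    ceph osd erasure-code-profile set ec42 k=4 m=2
    ceph osd pool create rbd-ec 256 256 erasure ec42

    # RBD on EC also needs overwrites enabled (Luminous+, BlueStore)
    ceph osd pool set rbd-ec allow_ec_overwrites true

Given that, I assume replicated is the sane default for a first
cluster?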

> What OS (for ceph nodes and clients)?

CentOS 7.5 (Linux)

>
> Recommendations:
>  - For your information, BlueStore is not like FileStore: there is no need for a journal SSD. With BlueStore it's recommended to use the same disk for both the WAL/RocksDB and the data.
>  - For production, it's recommended to have dedicated MON/MGR nodes.
>  - You may also need dedicated MDS nodes, depending on the Ceph access protocol(s) you choose.
>  - If you need commercial support afterward, you should talk to a Red Hat representative.
>
> Samsung 850 Pro is consumer grade, not great.
>
>
>> On 18 Jul 2018, at 19:16, Satish Patel <satish.txt@xxxxxxxxx> wrote:
>>
>> I have decided to set up a 5-node Ceph storage cluster; the
>> following is my inventory. Just tell me whether it is good enough
>> to start a first cluster under average load.
>>
>> 0. Ceph Bluestore
>> 1. Journal SSD (Intel DC 3700)
>> 2. OSD disk Samsung 850 Pro 500GB
>> 3. OSD disk SATA 500GB (7.5k RPM)
>> 4. 2x10G NIC (separate public/cluster networks with jumbo frames)
>>
>> Do you think this combination is good for average load?
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



