Re: Optimise Setup with Bluestore

*resend... this time to the list...*
Hey David, thank you for the response!

My use case is actually only RBD for KVM images, mostly running LAMP systems on Ubuntu or CentOS.
All images (RBDs) are created with Proxmox using the Ceph defaults (currently Jewel, in the near future Luminous...).

What I primarily want to know is: which constellation would be optimal for BlueStore?

For example:
Put the DB and the raw data device on the HDD and the WAL on the NVMe, in my case?
Should I replace one HDD with an SSD and use it for the DBs, so that in the end the HDD is the raw data device, the DB is on the SSD and the WAL is on the NVMe?
Or... use the HDD for the raw data and the NVMe for both WAL and DB? (A rough command sketch of these variants follows below.)
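
For illustration, a rough sketch of these variants with ceph-deploy (assuming a ceph-deploy release with BlueStore support; the exact option names differ between ceph-deploy versions, and the devices /dev/sdb, /dev/sdm1, /dev/nvme0n1p1 and the host name osdnode1 are only placeholders):

  # Variant 1: data on the HDD, WAL on an NVMe partition (DB stays on the data device)
  ceph-deploy osd create --data /dev/sdb --block-wal /dev/nvme0n1p1 osdnode1

  # Variant 2: data on the HDD, DB on an SSD partition, WAL on an NVMe partition
  ceph-deploy osd create --data /dev/sdb --block-db /dev/sdm1 --block-wal /dev/nvme0n1p1 osdnode1

  # Variant 3: data on the HDD, DB (and with it the WAL) on an NVMe partition;
  # if no separate --block-wal is given, the WAL lives together with the DB
  ceph-deploy osd create --data /dev/sdb --block-db /dev/nvme0n1p1 osdnode1

The NVMe would have to be partitioned into one DB/WAL partition per OSD beforehand, as with the current journal partitions.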

I hope you (and others) understand what I mean :)

- Mehmet

On 16 August 2017 19:01:30 CEST, David Turner <drakonstein@xxxxxxxxx> wrote:
>Honestly there isn't enough information about your use case. RBD usage
>with small IO vs ObjectStore with large files vs ObjectStore with small
>files vs any number of things. The answer to your question might be that
>for your needs you should look at having a completely different hardware
>configuration than what you're running. There is no correct way to
>configure your cluster based on what hardware you have. What hardware you
>use and what configuration settings you use should be based on your
>needs and use case.
>
>On Wed, Aug 16, 2017 at 12:13 PM Mehmet <ceph@xxxxxxxxxx> wrote:
>
>> :( no suggestions or recommendations on this?
>>
>> On 14 August 2017 16:50:15 CEST, Mehmet <ceph@xxxxxxxxxx> wrote:
>>
>>> Hi friends,
>>>
>>> my actual hardware setup per OSD node is as follows:
>>>
>>> # 3 OSD-Nodes with
>>> - 2x Intel(R) Xeon(R) CPU E5-2603 v3 @ 1.60GHz ==> 12 Cores, no
>>> Hyper-Threading
>>> - 64GB RAM
>>> - 12x 4TB HGST 7K4000 SAS2 (6Gb/s) disks as OSDs
>>> - 1x Intel SSDPEDMD400G4 (Intel DC P3700 NVMe) as journaling device
>>> for 12 disks (20G journal size)
>>> - 1x Samsung SSD 840/850 Pro only for the OS
>>>
>>> # and 1x OSD Node with
>>> - 1x Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz (10 Cores 20 Threads)
>>> - 64GB RAM
>>> - 23x 2TB TOSHIBA MK2001TRKB SAS2 (6Gb/s) disks as OSDs
>>> - 1x SEAGATE ST32000445SS SAS2 (6Gb/s) disk as OSD
>>> - 1x Intel SSDPEDMD400G4 (Intel DC P3700 NVMe) as journaling device
>>> for 24 disks (15G journal size)
>>> - 1x Samsung SSD 850 Pro only for the OS
>>>
>>> As you can see, I am using 1 (one) NVMe device (Intel DC P3700 NVMe, 400G)
>>> for all spinning disks (partitioned) on each OSD node.
>>>
>>> When "Luminous" is available (as the next LTS) I plan to switch from
>>> "filestore" to "bluestore" 😊
>>>
>>> As far as I have read, BlueStore consists of
>>> - "the device": the main data device
>>> - "block-DB": a device that stores the RocksDB metadata
>>> - "block-WAL": a device that stores the RocksDB "write-ahead journal"
>>>
>>> Which setup would be useful in my case?
>>> I would set up the disks via "ceph-deploy".
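>>>
>>> As a minimal sketch, the DB/WAL partition sizes could be set in ceph.conf before creating the OSDs with ceph-deploy (the option names are the standard BlueStore settings; the sizes here are only placeholder assumptions, not recommendations):
>>>
>>> [osd]
>>> # ~30 GiB DB partition per OSD (example value only, in bytes)
>>> bluestore_block_db_size = 32212254720
>>> # 2 GiB WAL partition per OSD (example value only, in bytes)
>>> bluestore_block_wal_size = 2147483648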
>>>
>>> Thanks in advance for your suggestions!
>>> - Mehmet
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
