Re: Bluestore Hardwaresetup

Hi,
 
thank you.
 
The network setup is as follows:
 
2 x 10 GBit LACP for the public network
2 x 10 GBit LACP for the cluster network
1 x 1 GBit for management
 
Yes Joe, the sizing for block.db and block.wal would be interesting!
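(For what it's worth, here is a rough sizing sketch in Python, based on the commonly quoted guideline that block.db should be a few percent of the data device, roughly 4% at the upper end, and that block.wal rarely needs more than about 1-2 GB. The percentages below are assumptions, not figures from this thread.)

# Rough BlueStore block.db / block.wal sizing sketch.
# The 4% and 2 GB figures are assumed guidelines, not official numbers for this setup.

DATA_DEVICE_TB = 4      # one of the 4 TB 7.2k SAS drives
DB_PERCENT = 4          # assumed upper-end guideline: block.db ~4% of the data device
WAL_GB = 2              # assumed: a block.wal of 1-2 GB is usually considered enough

data_gb = DATA_DEVICE_TB * 1000
db_gb = data_gb * DB_PERCENT / 100

print(f"block.db : ~{db_gb:.0f} GB per OSD")
print(f"block.wal: ~{WAL_GB} GB per OSD")

# With 6 OSDs per node and 2 x 480 GB SSDs (3 OSDs per SSD),
# check whether the db/wal partitions would actually fit:
osds_per_ssd = 3
needed_gb = osds_per_ssd * (db_gb + WAL_GB)
print(f"needed per SSD: ~{needed_gb:.0f} GB of 480 GB")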
 
Is there any other advice on SSDs besides the blog post from Sébastien Han?
 
https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
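(For context, the linked post checks whether an SSD can sustain small synchronous writes. Below is a minimal sketch of that style of test in Python, assuming fio is installed and TEST_DEVICE points at a disposable SSD; the parameters are illustrative, not copied from the post.)

# WARNING: this writes directly to TEST_DEVICE and destroys any data on it.
import subprocess

TEST_DEVICE = "/dev/sdX"   # placeholder: an empty SSD to be evaluated

cmd = [
    "fio",
    "--name=ssd-sync-write",
    f"--filename={TEST_DEVICE}",
    "--direct=1",      # bypass the page cache
    "--sync=1",        # synchronous writes, similar to journal/WAL behaviour
    "--rw=write",
    "--bs=4k",
    "--numjobs=1",
    "--iodepth=1",
    "--runtime=60",
    "--time_based",
    "--group_reporting",
]
print("running:", " ".join(cmd))
subprocess.run(cmd, check=True)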
 
Best regards
 
Peter
 
 
Sent: Friday, 16 February 2018 at 19:09
From: "Joe Comeau" <Joe.Comeau@xxxxxxxxxx>
To: "Michel Raabe" <rmichel@xxxxxxxxxxx>, "Jan Peters" <haseningo@xxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Bluestore Hardwaresetup
I have a question about block.db and block.wal.
 
How big should they be?
Relative to the drive size or the SSD size?
 
Thanks Joe


>>> Michel Raabe <rmichel@xxxxxxxxxxx> 2/16/2018 9:12 AM >>>
Hi Peter,

On 02/15/18 @ 19:44, Jan Peters wrote:
> I want to evaluate Ceph with BlueStore, so I need some hardware/configuration advice from you.
>
> My Setup should be:
>
> 3 Nodes Cluster, on each with:
>
> - Intel Xeon Gold 5118, 12 cores / 2.30 GHz
> - 64 GB RAM
> - 6 x 4 TB 7.2k RPM SAS HDDs
> - 2 x 480 GB SSDs

Network?

> With FileStore (the POSIX FS backend) you have to put the journal on SSDs. What is the best approach for BlueStore?
>
> Should I configure separate SSDs for block.db and block.wal?

Yes.
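(As an illustration, an OSD with separate block.db and block.wal devices can be created with ceph-volume. A minimal sketch in Python, assuming a Luminous-era ceph-volume and placeholder device paths; adjust them to the real drives and SSD partitions.)

# Sketch: create one BlueStore OSD with block.db and block.wal on separate SSD partitions.
import subprocess

data_dev = "/dev/sdb"   # placeholder: one of the 4 TB SAS drives
db_dev   = "/dev/sdg1"  # placeholder: partition on the first SSD for block.db
wal_dev  = "/dev/sdh1"  # placeholder: partition on the second SSD for block.wal

subprocess.run([
    "ceph-volume", "lvm", "create",
    "--bluestore",
    "--data", data_dev,
    "--block.db", db_dev,
    "--block.wal", wal_dev,
], check=True)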

Regards,
Michel
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
