Re: Full Flash NVMe Cluster recommendation

Hi Yoann,

So I would not put a single 6.4TB device in each node; I would use multiple smaller devices per node instead.
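
If you are stuck with one card per node, ceph-ansible's osds_per_device variable can at least split it into several OSDs so a single OSD daemon does not bottleneck the whole device. A minimal sketch of the same thing done directly with ceph-volume (the device path and OSD count here are assumptions):

    # Carve 4 OSDs out of one NVMe card so that one OSD process
    # is not the bottleneck for the whole device
    ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1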

What CPU are you thinking of using?
How many CPUs? If you have one PCIe card, it will be connected to only one CPU, so will you be able to use the full performance of multiple CPUs?
What network are you thinking of? I wouldn't go with less than 100G or multiple 25G connections.
Where is your CephFS metadata being stored? What are you doing about CephFS metadata servers?
What about some faster storage for the WAL?
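
On the WAL point: even on NVMe data devices, some clusters put the BlueStore DB/WAL on something faster still (e.g. Optane). ceph-volume accepts separate devices for these; a minimal sketch, assuming /dev/nvme0n1 holds the data and /dev/nvme1n1 is the faster device:

    # DB (which carries the WAL unless it is split out) on the faster device
    ceph-volume lvm create --bluestore --data /dev/nvme0n1 --block.db /dev/nvme1n1

    # or, alternatively, only the WAL on the faster device
    ceph-volume lvm create --bluestore --data /dev/nvme0n1 --block.wal /dev/nvme1n1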

What is your I/O profile? Read/write split?

EC may not be the best fit for the workload you are trying to run.
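
For reference, 8+3 gives you 8/11 ≈ 73% usable capacity and stretches every PG across 11 of your 38 hosts. If you do go EC for cephfs_data, the metadata pool still has to be replicated and the data pool needs overwrites enabled; a minimal sketch (pool names, profile name and pg_num values are assumptions):

    # EC profile: 8 data + 3 coding chunks, one chunk per host
    ceph osd erasure-code-profile set ec-8-3 k=8 m=3 crush-failure-domain=host

    # Metadata must live on a replicated pool; only data pools can be EC
    ceph osd pool create cephfs_metadata 64 64 replicated
    ceph osd pool create cephfs_data 512 512 erasure ec-8-3

    # CephFS on an EC pool requires overwrites enabled (BlueStore only)
    ceph osd pool set cephfs_data allow_ec_overwrites true

    # --force is needed because an EC default data pool is discouraged
    ceph fs new cephfs cephfs_metadata cephfs_data --force

A common alternative is a replicated default data pool with the EC pool attached via ceph fs add_data_pool and assigned with file layouts.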

Darren



On 15/11/2019, 15:26, "Yoann Moulin" <yoann.moulin@xxxxxxx> wrote:

    Hello,
    
    I'm going to deploy a new cluster soon based on 6.4TB NVMe PCIe cards; I will have only one NVMe card per node and 38 nodes.
    
    The use case is to offer CephFS volumes for a k8s platform; I plan to use an 8+3 EC pool for the cephfs_data pool.
    
    Do you have recommendations for the setup or mistakes to avoid? I use ceph-ansible to deploy all my clusters.
    
    Best regards,
    
    -- 
    Yoann Moulin
    EPFL IC-IT

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



