Re: Micron SSD/Basic Config


The RocksDB rings are 256MB, 2.5GB, 25GB, and 250GB. Unless you have a workload that uses a lot of metadata, covering the first three rings and providing room for compaction should be fine. To allow for compaction room, 60GB should be sufficient. Add 4GB to accommodate the WAL and you're at a nice power of two, 64GB.
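The arithmetic above can be sketched as a short calculation. This is only an illustration of the sizing logic in this message, not a Ceph tool; the function name and structure are made up for the example.

```python
# Ring sizes (GB) as quoted in the message above; these are
# approximate RocksDB level targets, not values read from Ceph.
ROCKSDB_RINGS_GB = [0.25, 2.5, 25, 250]

def suggested_db_size_gb(rings_covered=3, compaction_room_gb=60, wal_gb=4):
    """Size block.db to cover the first `rings_covered` rings,
    with headroom for compaction, plus space for the WAL."""
    ring_total = sum(ROCKSDB_RINGS_GB[:rings_covered])  # ~27.75 GB
    # Compaction needs scratch space, so round the ring total up
    # to the 60 GB headroom figure suggested above, then add the WAL.
    return max(ring_total, compaction_room_gb) + wal_gb

print(suggested_db_size_gb())  # 27.75 -> 60 + 4 = 64.0 GB
```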

David Byte
Sr. Technology Strategist
SCE Enterprise Linux 
SCE Enterprise Storage
Alliances and SUSE Embedded
dbyte@xxxxxxxx
918.528.4422

On 1/31/20, 8:16 AM, "adamb@xxxxxxxxxx" <adamb@xxxxxxxxxx> wrote:

    vitalif@yourcmc.ru wrote:
    > I think 800 GB NVMe per 2 SSDs is an overkill. 1 OSD usually only 
    > requires 30 GB block.db, so 400 GB per an OSD is a lot. On the other 
    > hand, does 7300 have twice the iops of 5300? In fact, I'm not sure if a 
    > 7300 + 5300 OSD will perform better than just a 5300 OSD at all.
    > 
    > It would be interesting if you could benchmark & compare it though :)
    
    The documentation I read said it was 4% of the block device. I've also been told the rule of thumb is basically 3/30/300.
    
    The 7.68TB 5300 PRO does 11k random write IOPS; the 800GB 7300 MAX NVMe does 60k random write IOPS. The Micron white paper pairs 9200 MAXs with 5210 SATA SSDs. The only reason I'm going with the 5300s is for a bit more write endurance.
    _______________________________________________
    ceph-users mailing list -- ceph-users@xxxxxxx
    To unsubscribe send an email to ceph-users-leave@xxxxxxx
    




