Configuration for using NVMe SSDs

Hi ~
   We are using NVMe SSDs as the storage medium and have found that the
hardware's performance is not being fully utilized. Should we change some
parameters? Also, BlueStore provides an NVMEDevice backend that uses SPDK
to optimize performance; does anyone know if this module is stable?
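For context, the SPDK-backed NVMEDevice is selected through `ceph.conf`. The fragment below is only a hypothetical sketch based on the BlueStore/SPDK notes in the Ceph documentation; the device serial number is a placeholder, and the DB/WAL settings shown simply colocate everything on the SPDK device:

```ini
# ceph.conf (OSD section) -- hypothetical sketch for SPDK-backed BlueStore.
# "55cd2e404bd73932" is a placeholder serial number; substitute your own
# device's serial (e.g. from `nvme id-ctrl /dev/nvme0`).
[osd]
bluestore_block_path = spdk:55cd2e404bd73932

# Keep the DB and WAL on the same SPDK-managed device:
bluestore_block_db_path = ""
bluestore_block_db_size = 0
bluestore_block_wal_path = ""
bluestore_block_wal_size = 0
```

Note that with SPDK the kernel NVMe driver is unbound from the device, so the drive disappears from `/dev` and is driven entirely from userspace by the OSD process.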

Our hardware configuration is 8 NVMe SSDs in 3 nodes:

cluster:
    id:     ed0e8dfd-63d8-4862-b9fc-652eac270b08
    health: HEALTH_ERR
            nodeep-scrub flag(s) set
            1 full osd(s)
            3 nearfull osd(s)
            4 pool(s) full
            application not enabled on 3 pool(s)

  services:
    mon: 1 daemons, quorum SH-IDC1-10-5-39-171
    mgr: SH-IDC1-10-5-39-170(active), standbys: SH-IDC1-10-5-139-172, SH-IDC1-10-5-39-171
    osd: 96 osds: 96 up, 96 in
         flags nodeep-scrub

  data:
    pools:   5 pools, 5376 pgs
    objects: 271M objects, 30448 GB
    usage:   65622 GB used, 104 TB / 168 TB avail
    pgs:     5376 active+clean

  io:
    client:   0 B/s rd, 0 op/s rd, 0 op/s wr
We can get 513558 IOPS in 4K read per NVMe drive with fio, but only
45146 IOPS per OSD with rados.
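A rough sanity check on that gap, using only the numbers above plus one assumption that the post does not state: 8 NVMe SSDs per node across 3 nodes (24 drives), so 96 OSDs would mean 4 OSDs carved out of each drive:

```python
# Compare per-drive fio IOPS against the aggregate rados IOPS that would
# land on one drive, under the assumed 4-OSDs-per-NVMe layout.
fio_iops_per_nvme = 513558    # 4K read, raw device (from the post)
rados_iops_per_osd = 45146    # 4K read via rados (from the post)
osds, nodes, nvme_per_node = 96, 3, 8   # nvme_per_node is an assumption

nvme_total = nodes * nvme_per_node        # 24 drives (assumed)
osds_per_nvme = osds // nvme_total        # 4 OSDs per drive (assumed)
aggregate_per_nvme = rados_iops_per_osd * osds_per_nvme

print(f"OSDs per NVMe (assumed): {osds_per_nvme}")
print(f"Aggregate rados IOPS per NVMe: {aggregate_per_nvme}")
print(f"Fraction of raw fio IOPS: {aggregate_per_nvme / fio_iops_per_nvme:.0%}")
```

Under that assumption each drive would still only see roughly a third of its raw fio throughput through rados, so there is headroom even before counting replication overhead.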

Thanks~
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


