Re: Decrepit ceph cluster performance

On Sun, Aug 13, 2023 at 10:43 PM Anthony D'Atri <anthony.datri@xxxxxxxxx> wrote:
> Think you meant s/SSD/SAS|SATA/.

Indeed, (SATA/SAS) SSD - thanks!

> The OP implies that the cluster's performance *degraded* with the Quincy upgrade. I wonder if there was a kernel change at the same time.

Also a good point. OP: do you have any non-standard ceph.conf
settings? Was cluster performance acceptable before the upgrade?
Some settings, such as bluestore_rocksdb_options, changed their
defaults in an impactful way, and if the setting is still overridden
in your ceph.conf, the override pins the old value, so you will need
to account for that.
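
If it helps, here is a rough Python sketch (untested; osd.0 is just a
placeholder id) that uses the admin socket's "config diff" command to
check whether bluestore_rocksdb_options is still at the built-in
default on a running OSD:

#!/usr/bin/env python3
# Rough check: is bluestore_rocksdb_options overridden on a running OSD?
# Assumes the ceph CLI is on PATH and that osd.0 runs on this host.
import subprocess

OPTION = "bluestore_rocksdb_options"
OSD = "osd.0"  # placeholder id, substitute one of yours

# "config diff" on the admin socket lists every option whose running
# value differs from the built-in default of this Ceph release.
diff = subprocess.run(
    ["ceph", "daemon", OSD, "config", "diff"],
    capture_output=True, text=True, check=True,
).stdout

if OPTION in diff:
    print(f"{OPTION} is overridden on {OSD}; compare it to the Quincy default")
else:
    print(f"{OPTION} is at its built-in default on {OSD}")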

> Do you have the latest available firmware installed on them?  Did you perform a secure-erase on each before deploying?  What manner of HBA is driving them?  The first generation NVMe AIC SKUs definitely had issues with initial firmware.

Also, make sure the drives aren't sitting behind a RAID controller,
or if they are, that the controller is running in HBA (pass-through)
mode.
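
For the controller question, something like this (again just a
sketch; it assumes lspci and lsblk are installed, which they are on
most distros) will surface an obvious RAID-mode controller or vendor
"virtual disk" devices:

#!/usr/bin/env python3
# Quick triage: flag anything that looks like a RAID layer in front
# of the OSD drives. Assumes lspci and lsblk are available.
import subprocess

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# A controller advertising the "RAID bus controller" PCI class is
# usually not in plain HBA/IT mode.
for line in run(["lspci"]).splitlines():
    if "RAID" in line:
        print("possible RAID-mode controller:", line)

# Drive models as the kernel sees them; a vendor "virtual disk" model
# here is another hint that a RAID layer sits between Ceph and the media.
print(run(["lsblk", "-d", "-o", "NAME,MODEL,ROTA,TYPE"]))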

Thanks,
Tyler
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
