Re: block.db/block.wal device performance dropped after upgrade to 14.2.10

I really wouldn't focus that much on a particular device model.
Yes, Kingston SSDs are slower for reads; we've known that since we tested them.
But that was before they were used as block.db devices: they were originally
intended purely as block.wal devices. This was even before bluestore, actually,
so their primary function was to serve as XFS journals.
I see increased read activity on all block.db devices while snaptrims are
active, be they Intel or Kingston; the latter just suffer more.
This read load was not there before the 14.2.10/system upgrade.
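
For reference, a minimal sketch (assuming Python 3 on the OSD host; "sdb" is
just a placeholder for whichever SSD backs the block.db/block.wal partitions)
that samples /proc/diskstats once a second, so the extra reads can be watched
while a snaptrim is running:

#!/usr/bin/env python3
# Minimal sketch: print per-second read IOPS for one block device by sampling
# /proc/diskstats. "sdb" is a placeholder - pass the SSD that backs your
# block.db/block.wal partitions.
import sys
import time

def reads_completed(dev):
    # In /proc/diskstats, field 3 is the device name and field 4 is the
    # cumulative "reads completed successfully" counter.
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                return int(fields[3])
    raise SystemExit("device %s not found in /proc/diskstats" % dev)

def main():
    dev = sys.argv[1] if len(sys.argv) > 1 else "sdb"
    prev = reads_completed(dev)
    while True:
        time.sleep(1)
        cur = reads_completed(dev)
        print("%s: %d read IOPS" % (dev, cur - prev))
        prev = cur

if __name__ == "__main__":
    main()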

I'm actually thinking about rolling back to 14.2.8. Any ideas on how safe that
procedure would be? I suppose it should be safe, since there was no change in
the actual data storage scheme between those releases?
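
For anyone curious, a minimal sanity-check sketch (assuming Python 3 and a node
where the ceph CLI and admin keyring are available; `ceph versions` prints
JSON) for confirming that every daemon reports the same release before and
after moving the packages:

#!/usr/bin/env python3
# Minimal sketch: parse `ceph versions` (JSON output) and report whether all
# running daemons agree on a single Ceph release. Assumes the ceph CLI and an
# admin keyring are available on this node.
import json
import subprocess

def daemon_versions():
    out = subprocess.check_output(["ceph", "versions"])
    data = json.loads(out)
    versions = set()
    for section, counts in data.items():
        if section == "overall":
            continue
        versions.update(counts.keys())
    return versions

if __name__ == "__main__":
    found = daemon_versions()
    if len(found) == 1:
        print("all daemons report:", found.pop())
    else:
        print("mixed versions detected:")
        for v in sorted(found):
            print("  %s" % v)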

Tue, 4 Aug 2020 at 14:33, Vladimir Prokofev <v@xxxxxxxxxxx>:

> > What Kingston SSD model?
>
> === START OF INFORMATION SECTION ===
> Model Family:     SandForce Driven SSDs
> Device Model:     KINGSTON SE50S3100G
> Serial Number:    xxxxxxxxxxxxxxxx
> LU WWN Device Id: xxxxxxxxxxxxxxxx
> Firmware Version: 611ABBF0
> User Capacity:    100,030,242,816 bytes [100 GB]
> Sector Size:      512 bytes logical/physical
> Rotation Rate:    Solid State Device
> Form Factor:      2.5 inches
> Device is:        In smartctl database [for details use: -P show]
> ATA Version is:   ATA8-ACS, ACS-2 T13/2015-D revision 3
> SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
> Local Time is:    Tue Aug  4 14:31:36 2020 MSK
> SMART support is: Available - device has SMART capability.
> SMART support is: Enabled
>
> Tue, 4 Aug 2020 at 14:17, Eneko Lacunza <elacunza@xxxxxxxxx>:
>
>> Hi Vladimir,
>>
>> What Kingston SSD model?
>>
>> On 4/8/20 at 12:22, Vladimir Prokofev wrote:
>> > Here's some more insight into the issue.
>> > It looks like the load is triggered by a snaptrim operation. We have a
>> > backup pool that serves as OpenStack cinder-backup storage, performing
>> > snapshot backups every night. Old backups are also deleted every night,
>> > so a snaptrim is initiated.
>> > This snaptrim increased the load on the block.db devices after the
>> > upgrade, and just kills one SSD's performance in particular. It serves
>> > as the block.db/wal device for one of the fatter backup-pool OSDs, which
>> > has more PGs placed on it.
>> > This is a Kingston SSD, and we see this issue on other Kingston SSD
>> > journals too; Intel SSD journals are not as badly affected, though they
>> > also experience increased load.
>> > Nevertheless, there are now a lot of read IOPS on the block.db devices
>> > after the upgrade that were not there before.
>> > I wonder how 600 IOPS can destroy an SSD's performance that badly.
>> >
>> > Tue, 4 Aug 2020 at 12:54, Vladimir Prokofev <v@xxxxxxxxxxx>:
>> >
>> >> Good day, cephers!
>> >>
>> >> We've recently upgraded our cluster from 14.2.8 to the 14.2.10 release,
>> >> also performing a full system package upgrade (Ubuntu 18.04 LTS).
>> >> After that, performance dropped significantly, the main reason being
>> >> that the journal SSDs now have no merges, huge queues, and increased
>> >> latency.
>> >> There are a few screenshots in the attachments. They are for an SSD
>> >> journal that backs block.db/block.wal for 3 spinning OSDs, and it looks
>> >> like this for all our SSD block.db/wal devices across all nodes.
>> >> Any ideas what may cause that? Maybe I've missed something important in
>> >> the release notes?
>> >>
>>
>>
>> --
>> Eneko Lacunza                   | Tel.  943 569 206
>>                                  | Email elacunza@xxxxxxxxx
>> Technical Director             | Site. https://www.binovo.es
>> BINOVO IT HUMAN PROJECT S.L     | Dir.  Astigarragako Bidea, 2 - 2º izda.
>> Oficina 10-11, 20180 Oiartzun
>>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



