Re: bluestore_min_alloc_size and bluefs_shared_alloc_size

Hello Joel,

Please be aware that it is not recommended to keep a mix of OSDs
created with different bluestore_min_alloc_size values within the same
CRUSH device class. The consequence of such a mix is that the balancer
will not work properly - instead of evening out the OSD space
utilization, it will create a distribution with two bands.

This is a bug in the balancer. A ticket has been filed already:
https://tracker.ceph.com/issues/64715
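
One way to spot such a mix is to compare the min_alloc_size each OSD was
built with. A rough sketch (the metadata field name may vary between
releases, so please verify on your version):

    for id in $(ceph osd ls); do
        echo -n "osd.$id "
        ceph osd metadata $id | jq -r '.bluestore_min_alloc_size // "unknown"'
    done

OSDs that report different values within the same CRUSH device class are
the ones that will confuse the balancer.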

On Tue, Mar 12, 2024 at 4:45 AM Joel Davidow <jdavidow@xxxxxxx> wrote:
>
> For osds that are added new, bfm_bytes_per_block is 4096. However, for osds
> that were added when the cluster was running octopus, bfm_bytes_per_block
> remains 65536.
>
> Based on
> https://github.com/ceph/ceph/blob/1c349451176cc5b4ebfb24b22eaaa754e05cff6c/src/os/bluestore/BitmapFreelistManager.cc
> and the space allocation section on page 360 of
> https://pdl.cmu.edu/PDL-FTP/Storage/ceph-exp-sosp19.pdf, it appears
> bfm_bytes_per_block is the bluestore_min_alloc_size that the osd was built
> with.
>
> Below is a sanitized example of what I was referring to as the osd label
> (which includes bfm_bytes_per_block) that was run on an osd built under
> octopus. The cluster was later upgraded to pacific.
>
> user@osd-host:/# ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-36
> inferring bluefs devices from bluestore path
> {
>     "/var/lib/ceph/osd/ceph-36/block": {
>         "osd_uuid": "xxxx",
>         "size": 4000783007744,
>         "btime": "2021-09-14T15:16:55.605860+0000",
>         "description": "main",
>         "bfm_blocks": "61047168",
>         "bfm_blocks_per_key": "128",
>         "bfm_bytes_per_block": "65536",
>         "bfm_size": "4000783007744",
>         "bluefs": "1",
>         "ceph_fsid": "xxxx",
>         "kv_backend": "rocksdb",
>         "magic": "ceph osd volume v026",
>         "mkfs_done": "yes",
>         "osd_key": "xxxx",
>         "osdspec_affinity": "xxxx",
>         "ready": "ready",
>         "require_osd_release": "16",
>         "whoami": "36"
>     }
> }
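>
> To pull just that field out of the label (illustrative one-liner; adjust
> the path for the osd in question):
>
> ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-36 | grep bfm_bytes_per_block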
>
> I'm really interested in learning the answers to the questions in the
> original post.
>
> Thanks,
> Joel
>
> On Wed, Mar 6, 2024 at 12:11 PM Anthony D'Atri <anthony.datri@xxxxxxxxx>
> wrote:
>
> >
> >
> > On Feb 28, 2024, at 17:55, Joel Davidow <jdavidow@xxxxxxx> wrote:
> >
> > Current situation
> > -----------------
> > We have three Ceph clusters that were originally built via cephadm on
> > octopus and later upgraded to pacific. All osds are HDD (will be moving to
> > wal+db on SSD) and were resharded after the upgrade to enable rocksdb
> > sharding.
> >
> > The value for bluefs_shared_alloc_size has remained unchanged at 65536.
> >
> > The value for bluestore_min_alloc_size_hdd was 65536 in octopus but is
> > reported as 4096 by ceph daemon osd.<id> config show in pacific.
> >
> >
> > min_alloc_size is baked into a given OSD when it is created.  The central
> > config / runtime value does not affect behavior for existing OSDs.  The
> > only way to change it is to destroy / redeploy the OSD.
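> >
> > For a cephadm-managed cluster, a redeploy could look roughly like this
> > (sketch only; verify the flags on your release and let the cluster
> > recover before moving on to the next OSD):
> >
> > ceph orch osd rm 36 --replace --zap
> >
> > Once the device is zapped and the OSD is marked destroyed, the OSD
> > service spec (or ceph orch daemon add osd ...) recreates it with the
> > currently configured bluestore_min_alloc_size_hdd.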
> >
> > There was a succession of PRs in the Octopus / Pacific timeframe around
> > default min_alloc_size for HDD and SSD device classes, including IIRC one
> > temporary reversion.
> >
> > However, the osd label after upgrading to pacific retains the value of
> > 65536 for bfm_bytes_per_block.
> >
> >
> > OSD label?
> >
> > I'm not sure if your Pacific release has the backport, but not that long
> > ago `ceph osd metadata` was amended to report the min_alloc_size that a
> > given OSD was built with.  If you don't have that, the OSD's startup log
> > should report it.
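> >
> > For example (quick check; the exact field name is an assumption, so
> > confirm on your release):
> >
> > ceph osd metadata 36 | grep -i alloc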
> >
> > -- aad
> >



-- 
Alexander E. Patrakov
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



