On Fri, Aug 9, 2019 at 8:04 AM Florian Haas <florian@xxxxxxxxxxxxxx> wrote:
>
> Hi Sage!
>
> Whoa, that was quick. :)
>
> On 09/08/2019 16:27, Sage Weil wrote:
> >> https://tracker.ceph.com/issues/38724#note-26
> >
> >     {
> >         "op_num": 2,
> >         "op_name": "truncate",
> >         "collection": "2.293_head",
> >         "oid": "#-4:c96337db:::temp_recovering_2.293_11123'6472830_288833_head:head#",
> >         "offset": 4457615932
> >     },
> >
> > That offset (size) is > 4 GB. BlueStore has a hard limit of 2^32-1 for
> > object sizes (because it uses a uint32_t). This cluster appears to have
> > some ginormous rados objects. Until those are removed, you
> > can't/shouldn't use bluestore.
>
> OK, this is interesting.
>
> This is an OpenStack Cinder volumes pool, so all the objects in there
> belong to RBDs. I couldn't think of any situation in which RBD would
> create a huge object like that.
>
> But, as it happens, that PG is currently mapped to a primary OSD that is
> still on FileStore, so I can do a "find -size +1G" on that mount point,
> and here's what I get:
>
> -rw-r--r-- 1 ceph ceph 4457615932 Mar 29  2018
> DIR_3/DIR_9/DIR_6/DIR_C/obj-vS6RN9\uQwvXU9DP__head_DBECC693__2
>
> So, bingo. That's a 4.2 GB file whose size matches that offset exactly.
>
> But I'm not familiar with that object name format. How did that object
> get here? And how do I remove it, considering I seem to be unable to
> access it?
>
> rados -p volumes stat 'obj-vS6RN9\uQwvXU9DP'
> error stat-ing volumes/obj-vS6RN9\uQwvXU9DP: (2) No such file or directory

I believe you need to substitute \u with _

> Or is that file just an artifact that doesn't even map to an object?
>
> This is turning out to be a learning experience. :)
>
> Thanks again for your help!
>
> Cheers,
> Florian
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
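
A minimal sketch of the substitution suggested above, assuming FileStore's
on-disk "\u" escape simply maps back to a literal underscore in the rados
object name. The pool and object names are the ones from the thread; the
"rm" step and the pool-wide scan are only illustrative guesses at the
cleanup Sage describes, so verify them before running anything against a
live cluster:

    # Retry the stat with "_" in place of FileStore's "\u" escape.
    rados -p volumes stat 'obj-vS6RN9_QwvXU9DP'

    # If that resolves, the oversized object could presumably be removed
    # the same way before converting the OSD to BlueStore.
    rados -p volumes rm 'obj-vS6RN9_QwvXU9DP'

    # Hedged scan for any other objects above BlueStore's 2^32-1 byte limit
    # (assumes "rados stat" prints the size as the last field of its output).
    rados -p volumes ls | while IFS= read -r obj; do
        size=$(rados -p volumes stat "$obj" | awk '{print $NF}')
        [ "$size" -gt 4294967295 ] && echo "$size $obj"
    done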