On Thu, 27 Feb 2025 at 18:48, quaglio@xxxxxxxxxx <quaglio@xxxxxxxxxx> wrote:
>
> Hello,
> I recently installed a new cluster.
> After the first node was working, I started transferring the files I
> needed. As I was in a hurry to run rsync, I set size=1 for the CephFS
> data pool.
> A few days later, when I managed to add a new node, I set size=2 for
> that pool.
> Replicas of the existing objects are already being written, but the
> available space has not been updated yet.
> The available space should increase automatically as I add more
> disks. Right?
>
> Could you help me identify where I'm going wrong?

The free space listed in "ceph df" and similar commands is, roughly,
"the number of OSDs, times the free space on the OSD that currently
has the least free space". Since you can't control where individual
objects end up, "ceph df" assumes worst-case placement and therefore
reports the smallest amount of space it can promise to give you. So if
your objects don't land in the worst possible spot, writing more data
will not lower the "ceph df" figure until it makes some other OSD the
one with the least free space. (There is a small numeric sketch of
this estimate at the bottom of this mail.)

In the end, you need to wait until rebalancing has moved data onto the
new OSDs, so that the previous OSDs regain free space. This can take a
while.

--
May the most significant bit of your life be positive.
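PS: To make the arithmetic concrete, here is a small Python sketch of
that worst-case estimate. This is my simplified reading of it, not
Ceph's actual MAX AVAIL code (the real calculation also weights by
CRUSH and the full ratios); dividing by the pool's replica count is an
assumption I have added, and the sizes are made up.

    # Pessimistic promise: every OSD is assumed to have only as much
    # usable room as the OSD with the least free space, and a
    # replicated pool stores pool_size copies of each object
    # (the pool_size division is my assumption, see above).
    def worst_case_avail(free_per_osd, pool_size=1):
        return len(free_per_osd) * min(free_per_osd) // pool_size

    TiB = 1024 ** 4

    # Two 1 TiB OSDs that are 80% full, then an empty 1 TiB OSD added:
    old = [TiB // 5, TiB // 5]
    with_new_osd = old + [TiB]

    print(worst_case_avail(old, pool_size=2))           # ~0.2 TiB promised
    print(worst_case_avail(with_new_osd, pool_size=2))  # only ~0.3 TiB

    # After rebalancing spreads the 1.6 TiB of used data evenly,
    # each OSD has ~0.47 TiB free and the promise climbs to ~0.7 TiB:
    used_each = (2 * (TiB - TiB // 5)) // 3
    print(worst_case_avail([TiB - used_each] * 3, pool_size=2))

With these made-up numbers, adding the empty 1 TiB OSD only moves the
promise from ~0.2 TiB to ~0.3 TiB; it reaches ~0.7 TiB once
rebalancing evens the OSDs out, which is why the reported free space
lags behind the raw capacity you added.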