hello

The backup has now been running for 3 hours and the cephfs metadata pool has grown from 20 GiB to 479 GiB...

POOL             ID  STORED   OBJECTS  USED     %USED  MAX AVAIL
cephfs-metadata  12  479 GiB  642.26k  1.4 TiB  18.79  2.0 TiB
cephfs-data0     13  2.9 TiB  9.23M    9.4 TiB  10.67  26 TiB

Is that normal behaviour?

oau

On Monday, 5 April 2021 at 15:17 +0200, Olivier AUDRY wrote:
> hello
>
> When I run my borgbackup over a cephfs volume (10 subvolumes, about
> 1.5 TB), I can see a big increase in OSD space usage: 2 or 3 OSDs go
> nearfull or full, then out, and finally the cluster goes into an
> error state.
>
> Any tips to prevent this?
>
> My cluster is Ceph v15 with:
>
> 9 nodes, each node running 2x6 TB HDDs and 2x600 GB SSDs.
> The cephfs data is on HDD and the metadata on SSD.
> The cephfs MDS cache is 32 GB.
>
> 128 PGs for data and metadata (this was set by the PG autoscaler).
>
> Perhaps I could pin the pg_num for each of the cephfs pools and
> prevent the autoscaler from changing them.
>
> What do you think?
>
> Thank you for your help and advice.
>
> UPDATE: I increased the pg_num to 256 for data and 1024 for metadata.
>
> Here is the df output 30 minutes into the backup:
>
> POOL             ID  STORED   OBJECTS  USED     %USED  MAX AVAIL
> cephfs-metadata  12  183 GiB  514.68k  550 GiB  7.16   2.3 TiB
>
> Before the backup, STORED was 20 GiB.
>
> oau
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
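
For reference, a minimal sketch of the commands the thread alludes to, assuming the pool names shown in the df output above and the pg_num values mentioned in the update; it disables the PG autoscaler for the two cephfs pools and pins their pg_num manually:

  # stop the autoscaler from resizing these pools
  ceph osd pool set cephfs-data0 pg_autoscale_mode off
  ceph osd pool set cephfs-metadata pg_autoscale_mode off

  # pin pg_num to the values used in the update
  ceph osd pool set cephfs-data0 pg_num 256
  ceph osd pool set cephfs-metadata pg_num 1024

  # verify autoscaler state and watch per-pool usage during the backup
  ceph osd pool autoscale-status
  ceph df detail

Note that pinning pg_num only spreads data more evenly across OSDs; it does not by itself explain or prevent the metadata pool growth seen during the backup.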