On 13/02/2016 06:31, Christian Balzer wrote:
> [...]
> ---
> So from shutdown to startup about 2 seconds, not that bad.
> However here is where the cookie crumbles massively:
> ---
> 2016-02-12 01:33:50.263152 7f75be4d57c0 0 filestore(/var/lib/ceph/osd/ceph-2) limited size xattrs
> 2016-02-12 01:35:31.809897 7f75be4d57c0 0 filestore(/var/lib/ceph/osd/ceph-2) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled
> ---
> Nearly 2 minutes to mount things, it probably had to go to disk quite a
> bit, as not everything was in the various slab caches. And yes, there is
> 32GB of RAM, most of it pagecache and vfs_cache_pressure is set to 1.
> During that time, silence of the lambs when it came to ops.

Hmm, that's surprisingly long. How much data (size and number of files) do you have on this OSD, which filesystem do you use, what are the mount options, what is the hardware, and what kind of access pattern?

The only time I saw OSDs take several minutes to reach the point where they fully rejoin is with BTRFS with default options/config.

For reference, our last OSD restart only took 6 seconds to complete this step. We only have RBD storage, so this OSD with 1TB of data has ~250,000 4MB files. It was created about a year ago, and this was after a complete OS umount/mount cycle which drops the cache (from experience, Ceph's mount messages don't actually imply that the FS was not mounted).

> Next this:
> ---
> 2016-02-12 01:35:33.915981 7f75be4d57c0 0 osd.2 1788 load_pgs
> 2016-02-12 01:36:32.989709 7f75be4d57c0 0 osd.2 1788 load_pgs opened 564 pgs
> ---
> Another minute to load the PGs.

Same OSD restart as above: 8 seconds for this step. This would be much faster if we didn't start with an unmounted OSD.

This OSD is still on BTRFS, but we no longer use autodefrag (we replaced it with our own defragmentation scheduler) and disabled BTRFS snapshots in Ceph to reach this point. Last time I checked, an OSD startup was still faster with XFS.

So do you use BTRFS in its default configuration, or do you have a very high number of files on this OSD?

Lionel
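
P.S.: For the curious, here is a very rough sketch of the idea behind such a defragmentation scheduler. This is *not* our actual tool, just a hypothetical illustration assuming filefrag (from e2fsprogs) and btrfs-progs are installed; the path and threshold below are made up. The point is to defragment only the files that filefrag reports as heavily fragmented, instead of letting autodefrag rewrite everything.

---
#!/usr/bin/env python
# Hypothetical example only (not our real scheduler): walk an OSD data
# directory, measure per-file fragmentation with filefrag and defragment
# only the files above a threshold. A real scheduler would also throttle
# itself and avoid busy periods to limit the I/O impact on the OSD.

import os
import subprocess

OSD_DIR = "/var/lib/ceph/osd/ceph-2/current"   # made-up example path
EXTENT_THRESHOLD = 16                          # arbitrary threshold

def extent_count(path):
    # filefrag prints "<path>: N extents found"
    try:
        out = subprocess.check_output(["filefrag", path]).decode()
    except (subprocess.CalledProcessError, OSError):
        return 0
    try:
        return int(out.rsplit(":", 1)[1].split()[0])
    except (IndexError, ValueError):
        return 0

for root, dirs, files in os.walk(OSD_DIR):
    for name in files:
        path = os.path.join(root, name)
        if extent_count(path) > EXTENT_THRESHOLD:
            subprocess.call(["btrfs", "filesystem", "defragment", path])
---

As for the snapshots, if I remember correctly the relevant ceph.conf option is "filestore btrfs snap = false".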