Re: Chown in Parallel


 



Interesting. I have all the inodes in cache on my nodes, so I expect the bottleneck to be filesystem metadata -> journal writes. Unless something else is going on here ;-)

Jan

On 10 Nov 2015, at 13:19, Nick Fisk <nick@xxxxxxxxxx> wrote:

I’m looking at iostat and most of the IO is reads, so I think it would still take a while even if it were single-threaded.
 
Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.50    0.00    5.50     0.00    22.25     8.09     0.00    0.00    0.00    0.00   0.00   0.00
sdb               0.00     0.50    0.00    5.50     0.00    22.25     8.09     0.00    0.00    0.00    0.00   0.00   0.00
sdc               0.00   356.00  498.50    3.00  1994.00  1436.00    13.68     1.24    2.48    2.38   18.00   1.94  97.20
sdd               0.50     0.00  324.50    0.00  1484.00     0.00     9.15     0.97    2.98    2.98    0.00   2.98  96.80
sde               0.00     0.00  300.50    0.00  1588.00     0.00    10.57     0.98    3.25    3.25    0.00   3.25  97.80
sdf               0.00    13.00  197.00   95.50  1086.00  1200.00    15.63   121.41  685.70    4.98 2089.91   3.42 100.00
md1               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
md0               0.00     0.00    0.00    5.50     0.00    22.00     8.00     0.00    0.00    0.00    0.00   0.00   0.00
sdg               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdm               0.00     0.00  262.00    0.00  1430.00     0.00    10.92     0.99    3.78    3.78    0.00   3.76  98.60
sdi               0.00   113.00  141.00  337.00   764.00  3340.00    17.17    98.93  191.24    3.65  269.73   2.06  98.40
sdk               1.00    42.50  378.50   74.50  2004.00   692.00    11.90   145.21  278.94    2.68 1682.44   2.21 100.00
sdn               0.00     0.00  250.50    0.00  1346.00     0.00    10.75     0.97    3.90    3.90    0.00   3.88  97.20
sdj               0.00    67.50   94.00  287.50   466.00  2952.00    17.92   144.55  589.07    5.43  779.90   2.62 100.00
sdh               0.00    85.50  158.00  176.00   852.00  2120.00    17.80   144.49  500.04    5.05  944.40   2.99 100.00
sdl               0.00     0.00  173.00    9.50   956.00   300.00    13.76     2.85   15.64    5.73  196.00   5.41  98.80
 
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Jan Schermer
Sent: 10 November 2015 12:07
To: Nick Fisk <nick@xxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Chown in Parallel
 
I would just disable barriers and re-enable them afterwards (plus a sync); it should be a breeze then.
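 
For example, something along these lines (a rough sketch only: the mount points and the nobarrier option are assumptions for XFS/ext4-backed OSDs, and whether a remount actually picks up the barrier change depends on the filesystem and kernel):
 
# Drop write barriers on each OSD filesystem before the chown
for d in /var/lib/ceph/osd/ceph-*; do
    mount -o remount,nobarrier "$d"
done
 
# ... run the chown here ...
 
# Restore barriers and flush everything to disk afterwards
for d in /var/lib/ceph/osd/ceph-*; do
    mount -o remount,barrier "$d"
done
sync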
 
Jan
 
On 10 Nov 2015, at 12:58, Nick Fisk <nick@xxxxxxxxxx> wrote:
 
I’m currently upgrading to Infernalis and the chown stage is taking a long time on my OSD nodes. I’ve come up with this little one-liner to run the chowns in parallel:
 
find /var/lib/ceph/osd -maxdepth 1 -mindepth 1 -print | xargs -P12 -n1 chown -R ceph:ceph
 
NOTE: You still need to make sure the other directories in the /var/lib/ceph folder are updated separately, but this should speed up the process on machines with a larger number of disks (see the sketch below).
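 
Something like this could handle the rest (a sketch, paths assumed; the osd directory is excluded because the one-liner above already covers it):
 
# chown the remaining /var/lib/ceph entries, skipping the osd tree
find /var/lib/ceph -maxdepth 1 -mindepth 1 ! -name osd -print | xargs -P4 -n1 chown -R ceph:ceph
chown ceph:ceph /var/lib/ceph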
 
Nick

 


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


