On Fri, Jun 21, 2013 at 10:39:05AM +0200, Leen Besselink wrote:
> On Fri, Jun 21, 2013 at 12:11:23PM +0800, Da Chun wrote:
> > Hi List,
> >
> > The default journal size is 1G, which I think is too small for my Gb network. I want to extend all the journal partitions to 2G or 4G. How can I do that? The OSDs were all created with commands like "ceph-deploy osd create ceph-node0:/dev/sdb". Each journal partition is on the same disk as its corresponding data partition.
> >
> > I noticed there is a setting, "osd journal size", whose value is 1024. I guess this is why "ceph-deploy osd create" set the journal partition size to 1G.
> >
> > I plan to do this with the following steps:
> > 1. Change "osd journal size" in ceph.conf to 4G.
> > 2. Remove the OSD.
> > 3. Re-add the OSD.
> > 4. Repeat steps 2 and 3 for all the OSDs.
> >
> > This requires a lot of manual work and is time-consuming. Is there a better way to do it? Thanks!
>
> Have a look at these commands:
>
> http://ceph.com/docs/master/man/8/ceph-osd/#cmdoption-ceph-osd--flush-journal
> http://ceph.com/docs/master/man/8/ceph-osd/#cmdoption-ceph-osd--mkjournal

Actually, I was slightly mistaken: I don't think you need the mkjournal step. If you stop the OSD, flush the journal, change the setting, remove the journal, and start the OSD, I think it will create a new journal automatically.

I hope you have a test environment, or maybe someone with more knowledge of these things can confirm or deny what I said.
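[Editor's note: a rough sketch of the stop/flush/recreate sequence described above, for a hypothetical osd.0. Service commands and the journal path vary by distro and setup, and this assumes a file-based journal; with a ceph-deploy-style journal partition you would also need to repartition the disk to get more space. Verify on a test cluster first.]

```shell
# 1. Stop the OSD so the journal is quiescent.
service ceph stop osd.0

# 2. Flush outstanding journal entries into the object store.
ceph-osd -i 0 --flush-journal

# 3. Raise "osd journal size" in /etc/ceph/ceph.conf (value is in MB).

# 4. Remove the old journal (path assumed; yours may differ).
rm -f /var/lib/ceph/osd/ceph-0/journal

# 5. Recreate the journal at the new size. Per the note above this may
#    happen automatically on start; running it explicitly is the safe path.
ceph-osd -i 0 --mkjournal

# 6. Bring the OSD back up.
service ceph start osd.0
```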
> And this setting:
>
> http://ceph.com/docs/master/rados/configuration/osd-config-ref/#index-2
>
> If I'm not mistaken, that is a per-machine global or per-OSD setting in /etc/ceph/ceph.conf.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
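[Editor's note: the "osd journal size" setting referenced above is expressed in megabytes, so the original poster's 1G default corresponds to 1024. A ceph.conf fragment raising it to 4G might look like the following; placing it under [osd] applies it to all OSDs on the host, while a [osd.N] section would scope it to one OSD.]

```
[osd]
; journal size in MB; 4096 = 4 GB
osd journal size = 4096
```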