Replace journal disks

Hi Sage,

On Fri, May 9, 2014 at 12:32 AM, Sage Weil <sage at inktank.com> wrote:

>
>
> On Fri, 9 May 2014, Indra Pramana wrote:
>
> > Hi Sage,
> > Thanks for your reply!
> >
> > Actually what I want is to replace the journal disk only, while I want to
> > keep the OSD FS intact.
> >
> > I have 4 OSDs on a node with 4 spinning disks (sdb, sdc, sdd, sde) and 2
> > SSDs (sdf and sdg):
> >
> > osd.28 on /dev/sdb, journal on /dev/sdf1
> > osd.29 on /dev/sdc, journal on /dev/sdf2
> > osd.30 on /dev/sdd, journal on /dev/sdg1
> > osd.31 on /dev/sde, journal on /dev/sdg2
> >
> > I want to replace the SSDs (sdf and sdg) without losing the data on the
> > OSDs. I believe ceph-disk prepare will destroy the data on the OSD?
>
> Right.
>
> Your procedure looks about right, assuming you are using ceph.conf to
> indicate what the data and journal paths are:
>

That's the problem, we don't. :) We used ceph-deploy to prepare the OSDs, so I
believe we are using the so-called "udev magic" that you mentioned earlier?

The journal for each OSD lives at /var/lib/ceph/osd/ceph-XXX/journal, which is
a symbolic link to this file:

lrwxrwxrwx 1 root root 58 Apr  6 23:45 journal ->
/dev/disk/by-partuuid/3ff2c20c-6b58-41d3-9e7e-b3ec63c62c2f

which in turn is a symbolic link to the journal partition on the device:

lrwxrwxrwx 1 root root 10 May  3 23:48 3ff2c20c-6b58-41d3-9e7e-b3ec63c62c2f
-> ../../sdf1
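
To double-check the mapping, resolving the symlink chain end-to-end shows which
physical partition each OSD's journal sits on (osd.28 used as an example; the
journal_uuid file is what ceph-disk recorded when the OSD was prepared):

====
# Follow the symlink chain down to the actual journal partition
readlink -f /var/lib/ceph/osd/ceph-28/journal      # e.g. /dev/sdf1

# The partition GUID recorded for this OSD's journal (if ceph-disk wrote one)
cat /var/lib/ceph/osd/ceph-28/journal_uuid
====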

Since we don't use ceph.conf to indicate the data and journal paths, how
can I recreate the journal partitions?
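
For what it's worth, this is roughly the procedure I have in mind for our
udev-based setup (an untested sketch, so please correct me if any step is
wrong; the OSD id, device name and journal size are just examples, and the
long typecode below is what I understand to be the standard Ceph journal
partition type GUID that the udev rules key on):

====
# Stop rebalancing and shut down the OSD (osd.28 as an example)
ceph osd set noout
stop ceph-osd id=28              # or: service ceph stop osd.28 on sysvinit

# Flush the journal contents back into the OSD's data store
ceph-osd -i 28 --flush-journal

# (physically swap in the new SSD at this point)

# Create a new journal partition on the replacement SSD
NEW_UUID=$(uuidgen)
sgdisk --new=1:0:+20G --change-name=1:'ceph journal' \
       --partition-guid=1:$NEW_UUID \
       --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdf
partprobe /dev/sdf

# Point the OSD at the new partition and initialize the journal
ln -sf /dev/disk/by-partuuid/$NEW_UUID /var/lib/ceph/osd/ceph-28/journal
echo $NEW_UUID > /var/lib/ceph/osd/ceph-28/journal_uuid
ceph-osd -i 28 --mkjournal

# Bring the OSD back and re-enable rebalancing
start ceph-osd id=28             # or: service ceph start osd.28
ceph osd unset noout
====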

Looking forward to your reply, thank you.

Cheers.




>
> >
> > - set noout
> > - stop the osds
> > - flush the journal
> > - replace journal SSDs
> - recreate journal partitions
> - update ceph.conf to reflect new journal device names
> > - recreate the journal (for the existing osds)
> > - start the osds
> > - unset noout
>
> sage
>
>
>
> >
> > Is it possible?
> >
> > Looking forward to your reply, thank you.
> >
> > Cheers.
> >
> >
> >
> > On Fri, May 9, 2014 at 12:12 AM, Sage Weil <sage at inktank.com> wrote:
> >       Hi Indra,
> >
> >       The simplest way to do the fs and journal creation is to use the
> >       ceph-disk tool:
> >
> >        ceph-disk prepare FSDISK JOURNALDISK
> >
> >       For example,
> >
> >        ceph-disk prepare /dev/sdb           # put fs and journal on same disk, or
> >        ceph-disk prepare /dev/sdb /dev/sdc  # fs on sdb, journal on (a new part on) sdc
> >
> >       It will create the partitions, label them, and then create the (by
> >       default, XFS) fs and initialize the journal.  After that, udev magic
> >       will take care of all the mounting and starting of daemons for you.
> >
> >       sage
> >
> >
> >       On Fri, 9 May 2014, Indra Pramana wrote:
> >
> >       > Hi Sage,
> >       > Sorry to pull you in, but do you have any comments on this? I
> >       > noticed you advised Tim Snider on a similar situation before. :)
> >       >
> >       > http://www.spinics.net/lists/ceph-users/msg05142.html
> >       >
> >       > Looking forward to your reply, thank you.
> >       >
> >       > Cheers.
> >       >
> >       >
> >       >
> >       > On Wed, May 7, 2014 at 11:31 AM, Indra Pramana <indra at sg.or.id> wrote:
> >       >       Hi Craig and all,
> >       >
> > > I checked Sébastien Han's blog post, and it seems the way the journal is
> > > mounted there is a bit different. Is that because the article was based on
> > > an older version of Ceph?
> > >
> > > ====
> > > $ sudo mount /dev/sdc /journal
> > >
> > > $ ceph-osd -i 2 --mkjournal
> > > 2012-08-16 13:29:58.735095 7ff0c4b58780 -1 created new journal
> > > /journal/journal for object store /srv/ceph/osd2
> > >
> > > $ sudo service ceph start osd.2
> > > === osd.2 ===
> > > Starting Ceph osd.2 on ceph03...
> > > starting osd.2 at :/0 osd_data /srv/ceph/osd2 /journal/journal
> > > ====
> > >
> > > From what I can see, on all my OSD nodes in my Ceph cluster, the journal
> > > appears at this path instead of /journal:
> > >
> > > /var/lib/ceph/osd/ceph-X/journal
> > >
> > > which in turn is a symbolic link to this file:
> > >
> > > lrwxrwxrwx 1 root root 58 Apr  6 23:45 journal ->
> > > /dev/disk/by-partuuid/3ff2c20c-6b58-41d3-9e7e-b3ec63c62c2f
> > >
> > > which in turn is a symbolic link to the journal partition on the device:
> > >
> > > lrwxrwxrwx 1 root root 10 May  3 23:48 3ff2c20c-6b58-41d3-9e7e-b3ec63c62c2f
> > > -> ../../sdf1
> > >
> > > I am using one SSD for the journals of multiple OSDs within a node. Any
> > > advice on the correct way to mount and create the journal? Do I need to
> > > mount first, or create first? I had assumed we would mount the partition
> > > *after* the journal is created?
> > >
> > > I am using the latest stable version of Dumpling (v0.67.7).
> > >
> > > Any advice is greatly appreciated.
> > >
> > > Thank you.
> > >
> > >
> > >
> > > On Wed, May 7, 2014 at 1:40 AM, Craig Lewis <clewis at centraldesktop.com> wrote:
> > >       On 5/6/14 03:34, Gandalf Corvotempesta wrote:
> > >
> > > Hi to all,
> > > I would like to replace a disk used as a journal (one partition for each OSD).
> > >
> > > What is the safest method to do so?
> > >
> > >
> > > I haven't tried this yet, but I imagine that the process is similar to
> > > moving your journal from the spinning disk to an SSD.
> > >
> > > Sébastien Han had a blog post about this:
> > >
> > > http://www.sebastien-han.fr/blog/2012/08/17/ceph-storage-node-maintenance/
> > >
> > >
> > >
> > > --
> > >
> > > Craig Lewis
> > > Senior Systems Engineer
> > > Office +1.714.602.1309
> > > Email clewis at centraldesktop.com
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> >
> >
> >
> >
>