Re: CentOS7 Mounting Problem

The main issue I see with OSDs not automatically mounting and starting is that the partition type GUIDs of the OSD and journal partitions are not set to the values expected by the udev rules for OSDs and journals. Running ceph-disk activate-all might give you more information as to why the OSDs aren't mounting properly; that's the command that is run when your system boots up. You also want to make sure that the right type of file is touched on your OSDs (upstart, systemd, etc.) to indicate which service manager should try to start the OSD.
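
For example (assuming osd.1 on /dev/sdd1 with its journal on /dev/sdd2, as in your listing), you could check the partition type GUIDs and re-run the boot-time activation with something like:

    sgdisk --info=1 /dev/sdd    # "Partition GUID code" should be the Ceph OSD data type
    sgdisk --info=2 /dev/sdd    # should be the Ceph journal type
    ceph-disk activate-all

and, if the systemd marker file is missing, create it with:

    touch /var/lib/ceph/osd/ceph-1/systemd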

On Mon, Apr 10, 2017 at 4:43 PM Georgios Dimitrakakis <giorgis@xxxxxxxxxxxx> wrote:
 Hi Xavier,

 I still have the entries in my /etc/fstab file, and what I did to
 solve the problem was to enable the "ceph-osd@XXX.service" service on
 all nodes, where "XXX" is the OSD number.

 I don't know the reason why this was initially disabled in my
 installation...

 As for the "ceph-disk list" command you were referring to, it showed
 the correct results for my disks, e.g.:
 /dev/sdd :
  /dev/sdd2 ceph journal, for /dev/sdd1
  /dev/sdd1 ceph data, active, cluster ceph, osd.1, journal /dev/sdd2


 Unfortunately I couldn't run "udevadm" correctly... I must be missing
 something...

 # udevadm test -h $(udevadm info -q path /dev/sdd)
 calling: test
 version 219
 udevadm test OPTIONS <syspath>

 Test an event run.
   -h --help                            Show this help
      --version                         Show package version
   -a --action=""                  Set action string
   -N --resolve-names=early|late|never  When to resolve names
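
 Most likely the "-h" is being parsed as "--help", which is why only
 the usage text is printed above. Dropping it should run the actual
 event simulation, e.g.:

 # udevadm test $(udevadm info -q path /dev/sdd) 2>&1 | less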



 Best,

 G.



> Hi Georgios,
>
> I had a few issues with automatic mounting on CentOS two months
> ago, and here are a few tips on how we got automatic mounting working
> with no entries in the fstab. The versions for my test are CentOS 7.1
> with Ceph Hammer, kernel 3.10.0-229 and udev/systemd 208.
>
> First, I strongly recommend using `ceph-disk list` as a first test.
> If all goes well, the output should look like this:
>
> [root@ceph-test ~]# ceph-disk list
> /dev/sda :
>  /dev/sda1 other, xfs, mounted on /boot
>  /dev/sda2 other, LVM2_member
> /dev/sdb :
>  /dev/sdb1 ceph journal, for /dev/sdd1
>  /dev/sdb2 ceph journal, for /dev/sde1
>  /dev/sdb3 ceph journal, for /dev/sdc1
> /dev/sdc :
>  /dev/sdc1 ceph data, active, cluster ceph, osd.2, journal /dev/sdb3
> /dev/sdd :
>  /dev/sdd1 ceph data, active, cluster ceph, osd.1, journal /dev/sdb1
> /dev/sde :
>  /dev/sde1 ceph data, active, cluster ceph, osd.0, journal /dev/sdb2
>
> If the partitions are not detected as ceph data/journal, then your
> partition type UUIDs are not set properly; this is important for the
> Ceph udev rules to work. And if the data-journal associations are not
> displayed, you might want to check that the "journal" symlink and
> "journal_uuid" files in the OSD directory are correct and pointing to
> the right device. That's if you're using separate partitions as
> journals, of course.
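>
> For example, assuming osd.2 on /dev/sdc1 as in the listing above, a
> quick check could look like this:
>
>     sgdisk --info=1 /dev/sdc                  # "Partition GUID code" = Ceph data type
>     ls -l /var/lib/ceph/osd/ceph-2/journal    # should point at the journal partition
>     cat /var/lib/ceph/osd/ceph-2/journal_uuid # should match that partition's PARTUUID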
>
> Then `udevadm` can help you see what exactly is going on in the udev
> rule when it's run. Try:
>     udevadm test -h $(udevadm info -q path /dev/sdc)
> (or any other device that's used as data for OSDs)
>
> This command should show you a full log of the events. In our case,
> the failure was due to a missing keyring file that made the
> `ceph-disk-activate` call from 95-ceph-osd.rules fail.
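>
> If the log points at that call, re-running the activation by hand can
> show the error directly, e.g. (again assuming /dev/sdc1):
>
>     ceph-disk activate /dev/sdc1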
>
> Finally, you might also want to try using
> 60-ceph-partuuid-workaround.rules instead of
> 60-ceph-by-parttypeuuid.rules if it's the latter that is used on your
> system. The `udevadm test` log should give good clues as to whether
> that's the issue or not.
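>
> A quick way to see which of the two is installed (the path may vary
> between distributions) is:
>
>     ls /usr/lib/udev/rules.d/ | grep ceph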
>
> Kind Regards,
> --
>
> Xavier Villaneau
>
> Software Engineer, Concurrent Computer Corporation
>
> On Sat, Apr 1, 2017 at 4:47 AM Georgios Dimitrakakis  wrote:
>
>>  Hi,
>>
>>  just to provide some more feedback on this one and what I've done
>>  to solve it, although I'm not sure if this is the most "elegant"
>>  solution.
>>
>>  I have manually added to /etc/fstab on all systems the respective
>>  mount points for the Ceph OSDs, e.g. entries like this:
>>
>>  UUID=9d2e7674-f143-48a2-bb7a-1c55b99da1f7 /var/lib/ceph/osd/ceph-0 xfs defaults 0 0
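>>
>>  The filesystem UUID for each OSD data partition can be found with,
>>  for example:
>>
>>  blkid /dev/sdd1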
>>
>>  Then I've checked and seen that "ceph-osd@.service" was "disabled",
>>  which means it wasn't starting by default.
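>>
>>  This can be verified per OSD with, for example:
>>
>>  systemctl is-enabled ceph-osd@0.service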
>>
>>  Therefore I modified all the respective services on all nodes,
>>  with commands like:
>>
>>  systemctl enable ceph-osd@0.service
>>
>>  I did a reboot on the nodes, and all Ceph OSDs were mounted and
>>  the services were starting by default, so the problem was solved.
>>
>>  As I said, I don't know if this is the correct way to do it, but
>>  for me it works.
>>  I guess that something still goes wrong when the root volume is on
>>  LVM, and all the above steps that should happen automatically don't
>>  happen and require manual intervention.
>>
>>  Looking forward to any comments on this procedure or things that I
>>  might have missed.
>>
>>  Regards,
>>
>>  G.
>>
>> > Hi Tom and thanks a lot for the feedback.
>> >
>> > Indeed my root filesystem is on an LVM volume and I am currently
>> > running CentOS 7.3.1611 with kernel 3.10.0-514.10.2.el7.x86_64,
>> > and the ceph version is 10.2.6
>> > (656b5b63ed7c43bd014bcafd81b001959d5f089f)
>> >
>> > The 60-ceph-by-parttypeuuid.rules on the system is the same as
>> > the one in the bug you've mentioned, but unfortunately it still
>> > doesn't work.
>> >
>> > So, are there any more ideas on how to further debug it?
>> >
>> > Best,
>> >
>> > G.
>> >
>> >
>> >> Are you running the CentOS filesystem as LVM? This
>> >> (http://tracker.ceph.com/issues/16351 [1]) still seems to be an
>> >> issue on CentOS 7 that I've seen myself too. After migrating to a
>> >> standard filesystem layout (i.e. no LVM) the issue disappeared.
>> >>
>> >> Regards,
>> >>
>> >>  Tom
>> >>
>> >> -------------------------
>> >>
>> >> FROM: ceph-users  on behalf of Georgios Dimitrakakis
>> >>  SENT: Thursday, March 23, 2017 10:21:34 PM
>> >>  TO: ceph-users@xxxxxxxx [2]
>> >>  SUBJECT: CentOS7 Mounting Problem
>> >>
>> >> Hello Ceph community!
>> >>
>> >>  I would like some help with a new Ceph installation.
>> >>
>> >>  I have installed Jewel on CentOS7, and after the reboot my OSDs
>> >>  are not mounted automatically and, as a consequence, Ceph is not
>> >>  operating normally...
>> >>
>> >>  What can I do?
>> >>
>> >>  Could you please help me solve the problem?
>> >>
>> >>  Regards,
>> >>
>> >>  G.
>> >>  _______________________________________________
>> >>  ceph-users mailing list
>> >>  ceph-users@xxxxxxxxxxxxxx [3]
>> >>  http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com [4] [2]
>> >>
>> >>
>> >> Links:
>> >> ------
>> >> [1] http://tracker.ceph.com/issues/16351 [5]
>> >> [2] http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com [6]
>> >
>> > _______________________________________________
>> > ceph-users mailing list
>> > ceph-users@xxxxxxxxxxxxxx [7]
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com [8]
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx [9]
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com [10]
>
>
> Links:
> ------
> [1] http://tracker.ceph.com/issues/16351
> [2] mailto:ceph-users@xxxxxxxx
> [3] mailto:ceph-users@xxxxxxxxxxxxxx
> [4] http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> [5] http://tracker.ceph.com/issues/16351
> [6] http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> [7] mailto:ceph-users@xxxxxxxxxxxxxx
> [8] http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> [9] mailto:ceph-users@xxxxxxxxxxxxxx
> [10] http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> [11] mailto:giorgis@xxxxxxxxxxxx

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
