Re: ceph-disk and /dev/dm-* permissions - race condition?

That doesn't appear to work on 10.2.3 (with the modified ceph-disk/main.py
from your fix above).  I think it ends up trying to access the
/dev/mapper/UUID files before they have been established, so the
ceph-osd starter process fails since there are no mapped dm partitions
yet.  Adding 'local-filesystems' to the "start on" line forces it
to start too soon, I think.  I see these errors in the upstart log
ceph-osd-all-starter.log:

ceph-disk: Cannot discover filesystem type: device
/dev/mapper/00457719-b9b0-4cd0-a912-8e6e5efff7cd: Command
'/sbin/blkid' returned non-zero exit status 2
ceph-disk: Cannot discover filesystem type: device
/dev/mapper/eb056779-7bd0-4768-86cb-d757174a2046: Command
'/sbin/blkid' returned non-zero exit status 2
ceph-disk: Cannot discover filesystem type: device
/dev/mapper/f1300502-1143-4c91-b43c-051342b36933: Command
'/sbin/blkid' returned non-zero exit status 2
ceph-disk: Error: One or more partitions failed to activate
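
For what it's worth, blkid returns exit status 2 when it cannot find what it
was asked for (here, no filesystem detected on the device), which fits the
mapping simply not existing yet when ceph-disk runs.  A quick manual check
right after boot (using one of the UUIDs from the log above) would look
something like:

  # does the dm mapping even exist at this point?
  ls -l /dev/mapper/00457719-b9b0-4cd0-a912-8e6e5efff7cd
  # blkid exits with status 2 when it detects no filesystem on the device
  /sbin/blkid -s TYPE /dev/mapper/00457719-b9b0-4cd0-a912-8e6e5efff7cd; echo $?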



On Wed, Nov 23, 2016 at 6:42 AM, Loic Dachary <loic@xxxxxxxxxxx> wrote:
> I think that could work as well:
>
> in ceph-disk.conf
>
> description "ceph-disk async worker"
>
> start on (ceph-disk and local-filesystems)
>
> instance $dev/$pid
> export dev
> export pid
>
> exec flock /var/lock/ceph-disk -c 'ceph-disk --verbose --log-stdout trigger --sync $dev'
>
> with https://github.com/ceph/ceph/pull/12136/commits/72f0b2aa1eb4b7b2a2222c2847d26f99400a8374
>
> What do you say?
>
> On 22/11/2016 20:13, Wyllys Ingersoll wrote:
>> I don't know, but making the change in the 55-dm.rules file seems to do
>> the trick well enough for now.
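>>
>> For reference, the kind of udev rule that forces the ownership looks roughly
>> like this (a hypothetical sketch only; the real 55-dm.rules is more involved
>> and the exact match key may differ):
>>
>>   # sketch: have udev hand dmcrypt mappings to ceph:ceph as it creates them
>>   ENV{DM_UUID}=="CRYPT-*", OWNER="ceph", GROUP="ceph", MODE="0660"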
>>
>> On Tue, Nov 22, 2016 at 12:07 PM, Loic Dachary <loic@xxxxxxxxxxx> wrote:
>>>
>>>
>>> On 22/11/2016 16:13, Wyllys Ingersoll wrote:
>>>> I think that sounds reasonable; obviously more testing will be needed
>>>> to verify.  Our situation occurred on an Ubuntu Trusty (upstart-based,
>>>> not systemd) server, so I don't think this will help for non-systemd
>>>> systems.
>>>
>>> I don't think there is a way to enforce an ordering with upstart, but maybe there is? If you don't know of one, I will look into it.
>>>
>>>> On Tue, Nov 22, 2016 at 9:48 AM, Loic Dachary <loic@xxxxxxxxxxx> wrote:
>>>>> Hi,
>>>>>
>>>>> It should be enough to add After=local-fs.target to /lib/systemd/system/ceph-disk@.service and have 'ceph-disk trigger --sync' chown /dev/XXX to ceph:ceph to fix this issue (and others). Since local-fs.target indirectly depends on dm, this ensures ceph-disk activation will only happen after dm is finished. It is entirely possible that the ownership is still incorrect when 'ceph-disk trigger --sync' starts running, but it will no longer race with dm, so it can safely chown ceph:ceph and proceed with activation.
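>>>>>
>>>>> Concretely, the ordering part would look something like the drop-in below
>>>>> (a sketch of the idea only; the actual change is in the pull request):
>>>>>
>>>>>   # /etc/systemd/system/ceph-disk@.service.d/after-local-fs.conf (sketch)
>>>>>   [Unit]
>>>>>   # do not trigger activation until local-fs.target (and thus dm) is done
>>>>>   After=local-fs.target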
>>>>>
>>>>> I'm testing this with https://github.com/ceph/ceph/pull/12136, but I'm not sure yet if I'm missing something or if that's the right thing to do.
>>>>>
>>>>> What do you think?
>>>>>
>>>>> On 04/11/2016 15:51, Wyllys Ingersoll wrote:
>>>>>> We are running 10.2.3 with encrypted OSDs and journals using the old
>>>>>> (i.e. non-LUKS) keys and are seeing issues with the ceph-osd processes
>>>>>> after a reboot of a storage server.  Our data and journals are on
>>>>>> separate partitions on the same disk.
>>>>>>
>>>>>> After a reboot, sometimes the OSDs fail to start because of
>>>>>> permissions problems.  The /dev/dm-* devices come back with
>>>>>> permissions set to "root:disk" sometimes instead of "ceph:ceph".
>>>>>> Weirder still, sometimes the ceph-osd will start and work in
>>>>>> spite of the incorrect permissions (root:disk), and other times they
>>>>>> will fail and the logs show permission errors when trying to access
>>>>>> the journals.  Sometimes half of the /dev/dm-* devices are "root:disk"
>>>>>> and the others are "ceph:ceph".  There's no clear pattern, which is
>>>>>> what leads me to think it's a race condition in the ceph_disk
>>>>>> "dmcrypt_map" function.
>>>>>>
>>>>>> Is there a known issue with ceph-disk and/or ceph-osd related to the
>>>>>> timing of the encrypted devices being set up and the permissions
>>>>>> getting changed so the ceph processes can access them?
>>>>>>
>>>>>> Wyllys Ingersoll
>>>>>> Keeper Technology, LLC
>>>>>>
>>>>>
>>>>> --
>>>>> Loïc Dachary, Artisan Logiciel Libre
>>>>
>>>
>>> --
>>> Loïc Dachary, Artisan Logiciel Libre
>>
>
> --
> Loïc Dachary, Artisan Logiciel Libre
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



