Re: ceph-deploy and journal on separate disk

On Thu, Aug 22, 2013 at 4:36 AM, Pavel Timoschenkov
<Pavel@xxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
> Hi.
> With this patch - is all ok.
> Thanks for help!
>

Thanks for confirming this. I have opened a ticket
(http://tracker.ceph.com/issues/6085) and will work on getting this
patch merged.

> -----Original Message-----
> From: Alfredo Deza [mailto:alfredo.deza@xxxxxxxxxxx]
> Sent: Wednesday, August 21, 2013 7:16 PM
> To: Pavel Timoschenkov
> Cc: ceph-users@xxxxxxxx
> Subject: Re:  ceph-deploy and journal on separate disk
>
> On Wed, Aug 21, 2013 at 9:33 AM, Pavel Timoschenkov <Pavel@xxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
>> Hi. Thanks for the patch. But after patching the ceph source and installing it, I no longer have the ceph-disk or ceph-deploy commands.
>> I did the following steps:
>>
>>     git clone --recursive https://github.com/ceph/ceph.git
>>     patch -p0 < <patch name>
>>     ./autogen.sh
>>     ./configure
>>     make
>>     make install
>>
>> What am I doing wrong?
>
> Oh, I meant for you to patch it directly; there was no need to rebuild/make/install again, because the file is plain Python (no compilation needed).
>
> Can you try that instead?
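Since ceph-disk is an interpreted script, patch(1) can be applied to the installed file in place, with no autogen/configure/make cycle. A minimal sketch of how that works, demonstrated on a throwaway file rather than the real script (file names and contents here are illustrative only):

```shell
# Work in a scratch directory so nothing real is touched.
cd "$(mktemp -d)"

# Stand-ins for the installed script and the fixed version of it.
printf 'mount -o noatime\n' > orig.py
printf 'mount -t xfs -o noatime\n' > fixed.py

# Generate a unified diff, then apply it directly to the "installed" file.
# (diff exits 1 when the files differ, which is expected here.)
diff -u orig.py fixed.py > fix.patch || true
patch orig.py < fix.patch

# orig.py now carries the fix, with no rebuild step involved.
cat orig.py
```

On the real system the same idea would be `patch $(which ceph-disk) < <patch name>`, substituting the actual patch file.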
>>
>> -----Original Message-----
>> From: Alfredo Deza [mailto:alfredo.deza@xxxxxxxxxxx]
>> Sent: Monday, August 19, 2013 3:38 PM
>> To: Pavel Timoschenkov
>> Cc: ceph-users@xxxxxxxx
>> Subject: Re:  ceph-deploy and journal on separate disk
>>
>> On Fri, Aug 16, 2013 at 8:32 AM, Pavel Timoschenkov <Pavel@xxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
>>> <<<I suspect that there are left over partitions in /dev/sdaa that
>>> are causing this to <<<fail, I *think* that we could pass the `-t`
>>> flag with the filesystem and prevent this.
>>>
>>> Hi. Any changes?
>>>
>>> Can you create a build that passes the -t flag with mount?
>>>
>>
>> I went through these steps again, and the only other idea I could come up with was to pass in that flag when mounting. Would you be willing to try a patch?
>> (http://fpaste.org/33099/37691580/)
>>
>> You would need to apply it to the `ceph-disk` executable.
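The patch itself is only linked above; conceptually, it makes ceph-disk pass the filesystem type explicitly instead of letting mount probe the device. A minimal sketch of that idea, with an illustrative helper name (this is not ceph-disk's actual code), assuming the data partition is xfs:

```python
# Sketch of the idea behind the patch: build the mount command with an
# explicit -t <fstype> so mount(8) never has to guess among leftover
# filesystem signatures on a re-used partition.
def build_mount_cmd(dev, mount_point, fstype=None, options=None):
    cmd = ['mount']
    if fstype:
        # Explicit type: mount skips probing, so stale signatures
        # from a previous filesystem cannot confuse it.
        cmd.extend(['-t', fstype])
    if options:
        cmd.extend(['-o', options])
    cmd.extend(['--', dev, mount_point])
    return cmd

print(build_mount_cmd('/dev/sdaa1', '/var/lib/ceph/tmp/mnt.UkJbwx',
                      fstype='xfs', options='noatime'))
```

With the type passed through, the failing invocation in the log below would become `mount -t xfs -o noatime -- /dev/sdaa1 <mountpoint>`.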
>>
>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> From: Pavel Timoschenkov
>>> Sent: Thursday, August 15, 2013 3:43 PM
>>> To: 'Alfredo Deza'
>>> Cc: Samuel Just; ceph-users@xxxxxxxx
>>> Subject: RE:  ceph-deploy and journal on separate disk
>>>
>>>
>>>
>>> The separate commands (e.g. `ceph-disk -v prepare /dev/sda1`) work
>>> because then the journal is on the same device as the OSD data, so
>>> the steps to get them to a working state are different.
>>>
>>> I suspect that there are left over partitions in /dev/sdaa that are
>>> causing this to fail, I *think* that we could pass the `-t` flag with
>>> the filesystem and prevent this.
>>>
>>> Just to be sure, could you list all the partitions on /dev/sdaa (if
>>> /dev/sdaa is the whole device)?
>>>
>>> Something like:
>>>
>>>     sudo parted /dev/sdaa print
>>>
>>> Or, if you prefer, any other way that could tell us what all the
>>> partitions on that device are.
>>>
>>>
>>>
>>>
>>>
>>> After
>>>
>>> ceph-deploy disk zap ceph001:sdaa ceph001:sda1
>>>
>>>
>>>
>>> root@ceph001:~# parted /dev/sdaa print
>>>
>>> Model: ATA ST3000DM001-1CH1 (scsi)
>>>
>>> Disk /dev/sdaa: 3001GB
>>>
>>> Sector size (logical/physical): 512B/4096B
>>>
>>> Partition Table: gpt
>>>
>>>
>>>
>>> Number  Start  End  Size  File system  Name  Flags
>>>
>>>
>>>
>>> root@ceph001:~# parted /dev/sda1 print
>>>
>>> Model: Unknown (unknown)
>>>
>>> Disk /dev/sda1: 10.7GB
>>>
>>> Sector size (logical/physical): 512B/512B
>>>
>>> Partition Table: gpt
>>>
>>> Number  Start  End  Size  File system  Name  Flags
>>>
>>> So that is after running `disk zap`. What does it say after using
>>> ceph-deploy and failing?
>>>
>>>
>>>
>>> After ceph-disk -v prepare /dev/sdaa /dev/sda1:
>>>
>>>
>>>
>>> root@ceph001:~# parted /dev/sdaa print
>>>
>>> Model: ATA ST3000DM001-1CH1 (scsi)
>>>
>>> Disk /dev/sdaa: 3001GB
>>>
>>> Sector size (logical/physical): 512B/4096B
>>>
>>> Partition Table: gpt
>>>
>>>
>>>
>>> Number  Start   End     Size    File system  Name       Flags
>>>
>>> 1      1049kB  3001GB  3001GB  xfs          ceph data
>>>
>>>
>>>
>>> And
>>>
>>>
>>>
>>> root@ceph001:~# parted /dev/sda1 print
>>>
>>> Model: Unknown (unknown)
>>>
>>> Disk /dev/sda1: 10.7GB
>>>
>>> Sector size (logical/physical): 512B/512B
>>>
>>> Partition Table: gpt
>>>
>>>
>>>
>>> Number  Start  End  Size  File system  Name  Flags
>>>
>>>
>>>
>>> With the same errors:
>>>
>>>
>>>
>>> root@ceph001:~# ceph-disk -v prepare /dev/sdaa /dev/sda1
>>>
>>> DEBUG:ceph-disk:Journal /dev/sda1 is a partition
>>>
>>> WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the
>>> same device as the osd data
>>>
>>> DEBUG:ceph-disk:Creating osd partition on /dev/sdaa
>>>
>>> Information: Moved requested sector from 34 to 2048 in
>>>
>>> order to align on 2048-sector boundaries.
>>>
>>> The operation has completed successfully.
>>>
>>> DEBUG:ceph-disk:Creating xfs fs on /dev/sdaa1
>>>
>>> meta-data=/dev/sdaa1             isize=2048   agcount=32, agsize=22892700 blks
>>>
>>>          =                       sectsz=512   attr=2, projid32bit=0
>>>
>>> data     =                       bsize=4096   blocks=732566385, imaxpct=5
>>>
>>>          =                       sunit=0      swidth=0 blks
>>>
>>> naming   =version 2              bsize=4096   ascii-ci=0
>>>
>>> log      =internal log           bsize=4096   blocks=357698, version=2
>>>
>>>          =                       sectsz=512   sunit=0 blks, lazy-count=1
>>>
>>> realtime =none                   extsz=4096   blocks=0, rtextents=0
>>>
>>> DEBUG:ceph-disk:Mounting /dev/sdaa1 on /var/lib/ceph/tmp/mnt.UkJbwx
>>> with options noatime
>>>
>>> mount: /dev/sdaa1: more filesystems detected. This should not happen,
>>>
>>>        use -t <type> to explicitly specify the filesystem type or
>>>
>>>        use wipefs(8) to clean up the device.
>>>
>>>
>>>
>>> mount: you must specify the filesystem type
>>>
>>> ceph-disk: Mounting filesystem failed: Command '['mount', '-o',
>>> 'noatime', '--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.UkJbwx']'
>>> returned non-zero exit status 32
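The error message itself suggests wipefs(8) as the other way out: clearing the stale signatures rather than overriding detection with `-t`. A safe sketch of how wipefs behaves, run against a scratch image file instead of the real partition (the ext2 magic stamped at offset 1080 stands in for whatever leftover signature is on /dev/sdaa1):

```shell
# Simulate a leftover filesystem signature on a scratch image file
# (NOT the real device) by writing the ext2 superblock magic bytes.
IMG=/tmp/leftover-sig.img
dd if=/dev/zero of="$IMG" bs=1M count=1 2>/dev/null
printf '\x53\xef' | dd of="$IMG" bs=1 seek=1080 conv=notrunc 2>/dev/null

# With no options, wipefs only LISTS the signatures it finds.
wipefs "$IMG"

# --all erases every detected signature. Destructive: on the real
# system, triple-check the device name before pointing this at it.
wipefs --all "$IMG"

# Listing again now prints nothing.
wipefs "$IMG"
```

On the node itself, `wipefs /dev/sdaa1` to inspect, then `wipefs --all /dev/sdaa1` once the listing confirms only stale signatures, should leave mount with exactly one filesystem to detect.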
>>>
>>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



