Re: OSD journal size

Joseph,

I suspect the same... I was just wondering whether it was supposed to be supported by ceph-deploy, since CERN had it in their setup.

I was able to use '/dev/disk/by-id', although when I list the OSD mount points it still shows sdb, sdc, etc.:


root@hqosd1:/dev/disk/by-id# df -h

Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       3.7T   36M  3.7T   1% /var/lib/ceph/osd/ceph-0
/dev/sdc1       3.7T   36M  3.7T   1% /var/lib/ceph/osd/ceph-1

I guess I was expecting the mount points to use those 'by-id' names instead... but maybe this is expected?
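
(My understanding is that the 'by-id' entries are just symlinks back to the kernel device names, and mount records the resolved node, which would explain df showing sdb/sdc. A quick way to check the mapping -- the id below is hypothetical:

    # list the by-id symlinks and the kernel names they point to
    ls -l /dev/disk/by-id/
    # or resolve a single entry to its canonical device node
    readlink -f /dev/disk/by-id/wwn-0x5000c500a1b2c3d4

)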


Thanks,

Shain


Shain Miley | Manager of Systems and Infrastructure, Digital Media | smiley@xxxxxxx | 202.513.3649

________________________________________
From: Gruher, Joseph R [joseph.r.gruher@xxxxxxxxx]
Sent: Wednesday, October 23, 2013 6:32 PM
To: Shain Miley
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: RE:  OSD journal size

Speculating, but it seems possible that the ':' characters in the path are problematic, since ':' is also the separator between disk and journal (HOST:DISK[:JOURNAL]).

Perhaps it would work if you enclose the path in quotes, or use /dev/disk/by-id, whose names typically contain no colons?
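
Something along these lines -- untested, just illustrating the quoting idea with the path from your earlier mail:

    ceph-deploy osd prepare 'hqosd1:/dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:1:0'

Although if ceph-deploy simply splits the argument on every ':', the quotes won't survive past the shell, in which case a colon-free by-id name is the safer bet.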

>-----Original Message-----
>From: ceph-users-bounces@xxxxxxxxxxxxxx [mailto:ceph-users-
>bounces@xxxxxxxxxxxxxx] On Behalf Of Shain Miley
>Sent: Wednesday, October 23, 2013 1:55 PM
>To: Alfredo Deza
>Cc: ceph-users@xxxxxxxx
>Subject: Re:  OSD journal size
>
>OK... I found the help section in 1.2.7 that talks about using paths; however, I
>still cannot get this to work:
>
>
>root@hqceph1:/usr/local/ceph-install-1# ceph-deploy osd prepare
>hqosd1:/dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:1:0
>
>usage: ceph-deploy osd [-h] [--zap-disk] [--fs-type FS_TYPE] [--dmcrypt]
>                       [--dmcrypt-key-dir KEYDIR]
>                       SUBCOMMAND HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]
>                       ...]
>ceph-deploy osd: error: argument HOST:DISK[:JOURNAL]: must be in form
>HOST:DISK[:JOURNAL]
>
>
>Are '/dev/disk/by-path' names supported... or am I doing something wrong?
>
>Thanks,
>
>Shain
>
>
>
>Shain Miley | Manager of Systems and Infrastructure, Digital Media |
>smiley@xxxxxxx | 202.513.3649
>
>________________________________________
>From: ceph-users-bounces@xxxxxxxxxxxxxx [ceph-users-
>bounces@xxxxxxxxxxxxxx] on behalf of Shain Miley [SMiley@xxxxxxx]
>Sent: Wednesday, October 23, 2013 4:19 PM
>To: Alfredo Deza
>Cc: ceph-users@xxxxxxxx
>Subject: Re:  OSD journal size
>
>Alfredo,
>
>Do you know which version of ceph-deploy has this updated functionality?
>
>I just updated to 1.2.7 and it does not appear to include it.
>
>Thanks,
>
>Shain
>
>Shain Miley | Manager of Systems and Infrastructure, Digital Media |
>smiley@xxxxxxx | 202.513.3649
>
>________________________________________
>From: ceph-users-bounces@xxxxxxxxxxxxxx [ceph-users-
>bounces@xxxxxxxxxxxxxx] on behalf of Shain Miley [SMiley@xxxxxxx]
>Sent: Monday, October 21, 2013 6:13 PM
>To: Alfredo Deza
>Cc: ceph-users@xxxxxxxx
>Subject: Re:  OSD journal size
>
>Alfredo,
>
>Thanks a lot for the info.
>
>I'll make sure I have an updated version of ceph-deploy and give it another
>shot.
>
>Shain
>
>Shain Miley | Manager of Systems and Infrastructure, Digital Media |
>smiley@xxxxxxx | 202.513.3649
>
>________________________________________
>From: Alfredo Deza [alfredo.deza@xxxxxxxxxxx]
>Sent: Monday, October 21, 2013 2:03 PM
>To: Shain Miley
>Cc: ceph-users@xxxxxxxx
>Subject: Re:  OSD journal size
>
>On Mon, Oct 21, 2013 at 1:21 PM, Shain Miley <SMiley@xxxxxxx> wrote:
>> Hi,
>>
>> We have been testing a ceph cluster with the following specs:
>>
>> 3 mons
>> 72 OSDs spread across 6 Dell R-720xd servers
>> 4 TB SAS drives
>> 4 bonded 10 GigE NIC ports per server
>> 64 GB of RAM
>>
>> Up until this point we have been running tests using the default
>> journal size of '1024'.
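>>
>> (For context, the docs' sizing rule of thumb appears to be 2 * expected
>> throughput * filestore max sync interval (the sync interval defaults to
>> 5 seconds), set in ceph.conf along these lines -- example value only:
>>
>>     [osd]
>>     osd journal size = 10240   ; in MB
>>
>> but I'm not sure how to translate that into a value for our hardware.)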
>> Before we start to place production data on the cluster, I want to
>> clear up the following questions:
>>
>> 1) Is there a more appropriate journal size for my setup, given the
>> specs listed above?
>>
>> 2) According to this link:
>>
>> http://www.slideshare.net/Inktank_Ceph/cern-ceph-day-london-2013/11
>>
>> CERN is using '/dev/disk/by-path' for their OSDs.
>>
>> Does ceph-deploy currently support setting up OSDs using this method?
>
>Indeed it does!
>
>`ceph-deploy osd --help` got updated recently to demonstrate how this needs
>to be done (an extra step is involved):
>
>For paths, first prepare and then activate:
>
>    ceph-deploy osd prepare {osd-node-name}:/path/to/osd
>    ceph-deploy osd activate {osd-node-name}:/path/to/osd
>
>
>
>>
>> Thanks,
>>
>> Shain
>>
>> Shain Miley | Manager of Systems and Infrastructure, Digital Media |
>> smiley@xxxxxxx | 202.513.3649


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



