Re: defaults paths

On 05 Apr 2012, at 14:34, Wido den Hollander wrote:

> On 04/05/2012 10:38 AM, Bernard Grymonpon wrote:
>> I assume most OSD nodes will normally run a single OSD, so this would not apply to most nodes.
>> 
>> Only in specific cases (where multiple OSDs run on a single node) would this come up, and those specific cases might even require the journals to be split over multiple devices (multiple SSD disks ...)
> 
> I think that's a wrong assumption. On most systems I think multiple OSDs will exist; it's debatable whether one would often run OSDs from different clusters.

If the recommended setup is to have multiple OSDs per node (say, one OSD per physical drive), then we need to take that into account - but don't assume that a node has only one SSD disk for journals, shared between all OSDs...
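
As a sketch (the mount points and device layout here are assumed, not prescribed), a node with three OSDs and two journal SSDs could look like this in ceph.conf:

    [osd.0]
        osd data = /var/lib/ceph/osd/0/data
        osd journal = /srv/ssd0/journals/osd-0/journal
    [osd.1]
        osd data = /var/lib/ceph/osd/1/data
        osd journal = /srv/ssd0/journals/osd-1/journal
    [osd.2]
        osd data = /var/lib/ceph/osd/2/data
        osd journal = /srv/ssd1/journals/osd-2/journal

Whatever default we pick should stay overridable per OSD like that.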

> 
> I'm currently using: osd data = /var/lib/ceph/$name
> 
> To get back to what Sage mentioned, why add the "-data" suffix to a directory name? Isn't it obvious that a directory will contain data?

Each OSD has data and a journal... there should be some way to identify both...

Rgds,
-bg

> 
> As I think a machine participating in multiple Ceph clusters is a very specific scenario, I'd vote for:
> 
> /var/lib/ceph/$type/$id
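> 
> which, for osd.1 and mon.foo (in a single-cluster node), would simply give:
> 
>  /var/lib/ceph/osd/1
>  /var/lib/ceph/mon/foo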
> 
> Wido
> 
>> 
>> In my case, this doesn't really matter; it is up to the provisioning software to make the needed symlinks/mounts.
>> 
>> Rgds,
>> Bernard
>> 
>> On 05 Apr 2012, at 09:37, Andrey Korolyov wrote:
>> 
>>> In Ceph's case, such layout breakage may be necessary in almost all
>>> installations (except testing ones), whereas almost all general-purpose
>>> server software needs a division like that only in very specific
>>> setups.
>>> 
>>> On Thu, Apr 5, 2012 at 11:28 AM, Bernard Grymonpon<bernard@xxxxxxxxxxxx>  wrote:
>>>> I feel it's up to the sysadmin to mount/symlink the correct storage devices on the correct paths - Ceph should not be concerned that some volumes might need to sit together.
>>>> 
>>>> Rgds,
>>>> Bernard
>>>> 
>>>> On 05 Apr 2012, at 09:12, Andrey Korolyov wrote:
>>>> 
>>>>> Right, but we probably need journal separation at the directory level
>>>>> by default, because there are very few cases where the speed of the
>>>>> main storage is sufficient for the journal, or where the resulting
>>>>> slowdown is insignificant. So by default the journal could go into
>>>>> /var/lib/ceph/osd/journals/$i/journal, with osd/journals mounted on
>>>>> the fast disk.
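>>>>> 
>>>>> For example (the SSD device name here is just illustrative):
>>>>> 
>>>>>   /dev/sdb1 (SSD)  mounted on  /var/lib/ceph/osd/journals
>>>>>   /var/lib/ceph/osd/journals/0/journal    <- journal for osd.0
>>>>>   /var/lib/ceph/osd/journals/1/journal    <- journal for osd.1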
>>>>> 
>>>>> On Thu, Apr 5, 2012 at 10:57 AM, Bernard Grymonpon<bernard@xxxxxxxxxxxx>  wrote:
>>>>>> 
>>>>>> On 05 Apr 2012, at 08:32, Sage Weil wrote:
>>>>>> 
>>>>>>> We want to standardize the locations for ceph data directories, configs,
>>>>>>> etc.  We'd also like to allow a single host to run OSDs that participate
>>>>>>> in multiple ceph clusters.  We'd like easy-to-deal-with names (i.e.,
>>>>>>> avoid UUIDs if we can).
>>>>>>> 
>>>>>>> The metavariables are:
>>>>>>> cluster = ceph (by default)
>>>>>>> type = osd, mon, mds
>>>>>>> id = 1, foo, etc.
>>>>>>> name = $type.$id = osd.0, mds.a, etc.
>>>>>>> 
>>>>>>> The $cluster variable will come from the command line (--cluster foo) or,
>>>>>>> in the case of a udev hotplug tool or something, matching the uuid on the
>>>>>>> device with the 'fsid = <uuid>' line in the available config files found
>>>>>>> in /etc/ceph.
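>>>>>>> 
>>>>>>> For instance (cluster name is a placeholder), /etc/ceph/foo.conf would
>>>>>>> carry the cluster's uuid like:
>>>>>>> 
>>>>>>>  [global]
>>>>>>>      fsid = <uuid>   ; matched against the uuid found on the device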
>>>>>>> 
>>>>>>> The locations could be:
>>>>>>> 
>>>>>>> ceph config file:
>>>>>>>  /etc/ceph/$cluster.conf     (default is thus ceph.conf)
>>>>>>> 
>>>>>>> keyring:
>>>>>>>  /etc/ceph/$cluster.keyring  (fallback to /etc/ceph/keyring)
>>>>>>> 
>>>>>>> osd_data, mon_data:
>>>>>>>  /var/lib/ceph/$cluster.$name
>>>>>>>  /var/lib/ceph/$cluster/$name
>>>>>>>  /var/lib/ceph/data/$cluster.$name
>>>>>>>  /var/lib/ceph/$type-data/$cluster-$id
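>>>>>>> 
>>>>>>> For cluster "ceph" and osd.1, those would expand to, respectively:
>>>>>>> 
>>>>>>>  /var/lib/ceph/ceph.osd.1
>>>>>>>  /var/lib/ceph/ceph/osd.1
>>>>>>>  /var/lib/ceph/data/ceph.osd.1
>>>>>>>  /var/lib/ceph/osd-data/ceph-1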
>>>>>>> 
>>>>>>> TV and I talked about this today, and one thing we want is for items of a
>>>>>>> given type to live together in a separate directory so that we don't have to
>>>>>>> do any filtering to, say, get all osd data directories.  This suggests the
>>>>>>> last option (/var/lib/ceph/osd-data/ceph-1,
>>>>>>> /var/lib/ceph/mon-data/ceph-foo, etc.), but it's kind of fugly.
>>>>>>> 
>>>>>>> Another option would be to make it
>>>>>>> 
>>>>>>> /var/lib/ceph/$type-data/$id
>>>>>>> 
>>>>>>> (with no $cluster) and make users override the default with something that
>>>>>>> includes $cluster (or $fsid, or whatever) in their $cluster.conf if/when
>>>>>>> they want multicluster nodes that don't interfere.  Then we'd get
>>>>>>> /var/lib/ceph/osd-data/1 for non-crazy people, which is pretty easy.
>>>>>> 
>>>>>> As an OSD consists of data and a journal, those should stay together, with all info for that one OSD in one place:
>>>>>> 
>>>>>> I would suggest
>>>>>> 
>>>>>> /var/lib/ceph/osd/$id/data
>>>>>> and
>>>>>> /var/lib/ceph/osd/$id/journal
>>>>>> 
>>>>>> ($id could be replaced by $uuid or $name, of which I would prefer $uuid)
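>>>>>> 
>>>>>> The provisioning software can then mount/symlink per OSD as needed,
>>>>>> e.g. (device and SSD paths assumed):
>>>>>> 
>>>>>>   mount /dev/sdc1 /var/lib/ceph/osd/0/data
>>>>>>   ln -s /srv/ssd/osd-0/journal /var/lib/ceph/osd/0/journal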
>>>>>> 
>>>>>> Rgds,
>>>>>> Bernard
>>>>>> 
>>>>>>> 
>>>>>>> Any other suggestions?  Thoughts?
>>>>>>> sage