Re: OSD id in configuration file

Hi,

That doesn't make sense... Would you explain further?

This page (http://ceph.newdream.net/wiki/Cluster_configuration) says the id
can be a number or a name, which applies to (mon|mds|osd).$id.

Will I have problems using SHA1 sums to uniquely identify each OSD in my
cluster? (i.e. OSD.$sha1sum)
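
For concreteness, a minimal sketch (plain Python, not Ceph code; the hashed
string is a made-up example) of what an OSD.$sha1sum identifier would look
like next to the integer ids Sage describes in the quoted reply below:

    # Illustration only -- not how Ceph parses ids.
    # A SHA1 digest is a 40-character hex string; read as a number it is
    # a 160-bit integer, far outside any sane range for an array-indexed id.
    import hashlib

    digest = hashlib.sha1(b"server-001:/dev/sda").hexdigest()  # made-up key
    print(digest)           # 40 hex characters
    print(int(digest, 16))  # a decimal number up to 49 digits long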

---
Thanks,
Dyweni

On Mon, 13 Feb 2012 09:07:16 -0800 (PST), Sage Weil wrote:

On Mon, 13 Feb 2012, Eric_YH_Chen@xxxxxxxxxxx wrote:
> Hi, all:
>
> For scalability, we would like to name the first hard disk on the first
> server "00101", and the first hard disk on the second server "00201".
> The ceph.conf looks like this:
>
> [osd]
> osd data = /srv/osd.$id
> osd journal = /srv/osd.$id.journal
> osd journal size = 1000
>
> [osd.00101]

The leading 0's are meaningless (or at worst, misleading) here. ceph-osd
identifiers are integers.
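
To make the integer point concrete, a minimal sketch (not Ceph's actual
parser) of how leading zeros disappear once the id after the "osd." prefix
is read as an integer, so osd.00101 and osd.101 name the same daemon:

    # Sketch only -- not Ceph's parsing code.
    for section in ["osd.00101", "osd.101", "osd.00204"]:
        osd_id = int(section.split(".", 1)[1])  # id portion parsed as int
        print(section, "->", osd_id)
    # osd.00101 -> 101
    # osd.101 -> 101
    # osd.00204 -> 204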

> host = server-001
> btrfs dev = /dev/sda
> [osd.00102]
> host = server-001
> btrfs dev = /dev/sdb
> [osd.00103]
> host = server-001
> btrfs dev = /dev/sdc
> [osd.00201]
> host = server-002
> btrfs dev = /dev/sda
> [osd.00202]
> host = server-002
> btrfs dev = /dev/sdb
> [osd.00203]
> host = server-002
> btrfs dev = /dev/sdc
> [osd.00301]
> host = server-003
> btrfs dev = /dev/sda
> [osd.00302]
> host = server-003
> btrfs dev = /dev/sdb
> [osd.00303]
> host = server-003
> btrfs dev = /dev/sdc
>
> But we are worried about whether this is an acceptable configuration for
> Ceph. As I see it, the maximum osd is 304 now, although there are only 9
> osds in the cluster. Will this configuration affect performance? And
> what if we add osd.00204 in the future?
They are stored as an array currently, so if the max is 304, using
anything below that is better, not worse. ~300 is a small number, so this
isn't really problematic. Just don't label an OSD osd.200302 or you'll be
unhappy.

sage
"unsubscribe ceph-devel" in the body of a message to
majordomo@xxxxxxxxxxxxxxx [2] More majordomo info at
http://vger.kernel.org/majordomo-info.html [3]
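
A minimal sketch (Python, illustrative only; Ceph's real OSDMap is more
involved) of why the maximum id, not the number of OSDs, drives the cost
Sage describes:

    # Illustration only: if per-OSD state lives in an array indexed by
    # osd id, the array must span indices 0..max_id regardless of how
    # many OSDs actually exist.
    def slots_needed(osd_ids):
        return max(osd_ids) + 1  # covers indices 0..max_id inclusive

    ids = [101, 102, 103, 201, 202, 203, 301, 302, 303]
    print(slots_needed(ids))        # 304 slots for 9 OSDs: harmless
    print(slots_needed([200302]))   # 200303 slots: "you'll be unhappy"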

--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

