On Mon, 13 Feb 2012, Eric_YH_Chen@xxxxxxxxxxx wrote:
> Hi, all:
>
> For scalability reasons, we would like to name the first hard disk
> "00101" on the first server and the first hard disk "00201" on the
> second server. The ceph.conf looks like this:
>
> [osd]
> osd data = /srv/osd.$id
> osd journal = /srv/osd.$id.journal
> osd journal size = 1000
>
> [osd.00101]

The leading 0's are meaningless (or at worst, misleading) here:
ceph-osd identifiers are integers.

> host = server-001
> btrfs dev = /dev/sda
>
> [osd.00102]
> host = server-001
> btrfs dev = /dev/sdb
>
> [osd.00103]
> host = server-001
> btrfs dev = /dev/sdc
>
> [osd.00201]
> host = server-002
> btrfs dev = /dev/sda
>
> [osd.00202]
> host = server-002
> btrfs dev = /dev/sdb
>
> [osd.00203]
> host = server-002
> btrfs dev = /dev/sdc
>
> [osd.00301]
> host = server-003
> btrfs dev = /dev/sda
>
> [osd.00302]
> host = server-003
> btrfs dev = /dev/sdb
>
> [osd.00303]
> host = server-003
> btrfs dev = /dev/sdc
>
> But we are worried about whether this is an acceptable configuration
> for Ceph.
>
> As I see, the maximum osd is 304 now, although there are only 9 OSDs
> in the cluster.
> Will this configuration influence performance?
> And what if we add osd.00204 in the future?

OSD ids are stored as an array currently, so if the max is 304, using
anything below that is better, not worse. ~300 is a small number, so
this isn't really problematic. Just don't label an OSD osd.200302 or
you'll be unhappy.

sage
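
For illustration, a minimal sketch of the same layout using small
sequential integer ids instead. The hostnames, devices, paths, and
option names are carried over from the quoted config above, so treat
the exact option spelling as an assumption rather than a verified
configuration:

[osd]
osd data = /srv/osd.$id
osd journal = /srv/osd.$id.journal
osd journal size = 1000

; first server: ids 0-2 (the id is just an integer, so the data paths
; become /srv/osd.0, /srv/osd.1, /srv/osd.2)
[osd.0]
host = server-001
btrfs dev = /dev/sda

[osd.1]
host = server-001
btrfs dev = /dev/sdb

[osd.2]
host = server-001
btrfs dev = /dev/sdc

; the second and third servers continue the sequence: osd.3-osd.5 on
; server-002 and osd.6-osd.8 on server-003, so the maximum id stays at
; 8 instead of 303 and the osd array stays as small as possible.

With a scheme like this, a fourth disk added to server-002 later would
simply become osd.9; there is no need to encode the server number into
the id itself.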