Configuration of OSD

Hi,
I have a question about OSDs.
I have a server acting as an OSD host which contains 4 × 1 TB disks,
so I set up the OSDs like this in ceph.conf:
  [osd0]
          host = ceph1
          btrfs devs = /dev/sda1
  [osd1]
          host = ceph1
          btrfs devs = /dev/sdb1
  [osd2]
          host = ceph1
          btrfs devs = /dev/sdc1
  [osd3]
          host = ceph1
          btrfs devs = /dev/sdd1
My question is: if I have many servers (maybe more than 200), and each
server contains 12 × 1 TB disks, should I follow the same pattern in
ceph.conf? In other words, with 200 servers, would I have to configure
200 × 12 = 2400 OSDs manually?
That would be a lot of work... :p
Could I instead write something like this and let the system do the rest
for me?
  [osd0-osd3]
          host = ceph1
          btrfs devs = /dev/sda1
          btrfs devs = /dev/sdb1
          btrfs devs = /dev/sdc1
          btrfs devs = /dev/sdd1
I am not sure if this makes sense or not. I just happened to have this
thought....:p
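(In the meantime, one workaround is to generate the repetitive sections
with a small script rather than typing them by hand. This is just a
sketch, not official Ceph tooling; the host names, starting OSD ids, and
device letters are assumptions you would adjust to your own layout.)

```shell
#!/bin/sh
# Sketch: emit per-OSD ceph.conf sections for a host with several disks.
# gen_osd_sections is a hypothetical helper, not part of Ceph itself.
#   $1 = host name, $2 = first OSD id, remaining args = device letters
gen_osd_sections() {
  host=$1
  osd=$2
  shift 2
  for d in "$@"; do
    # One [osdN] section per disk, matching the hand-written pattern above.
    printf '[osd%s]\n\thost = %s\n\tbtrfs devs = /dev/sd%s1\n' \
      "$osd" "$host" "$d"
    osd=$((osd + 1))
  done
}

# Example: the four-disk ceph1 layout from above.
gen_osd_sections ceph1 0 a b c d
```

For 200 hosts you would wrap this in an outer loop over host names and
keep a running OSD id, then append the output to ceph.conf.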
--
Best Regards,
Sylar Shen

