On Wed, 16 Mar 2011, Sylar Shen wrote:
> Hi,
> I have a question about OSDs as follows.
> I have a server as an OSD host which contains 4*1T disks.
> Therefore I should set up OSDs like this in ceph.conf......
> [osd0]
>         host = ceph1
>         btrfs devs = /dev/sda1
> [osd1]
>         host = ceph1
>         btrfs devs = /dev/sdb1
> [osd2]
>         host = ceph1
>         btrfs devs = /dev/sdc1
> [osd3]
>         host = ceph1
>         btrfs devs = /dev/sdd1
> My question is, if I have many servers (maybe more than 200) and each
> server contains 12*1T disks,
> should I set up my ceph.conf in this pattern?

Right.

> I mean, if I have 200 servers, I will have to configure 200*12=2400
> OSDs manually?
> That would be hard work....:p

Well, you can write a simple perl or bash script to generate the conf for
you (or some fragment of it).

> Can I set it up like this instead and have the system do the rest for me?
> [osd0-osd3]
>         host = ceph1
>         btrfs devs = /dev/sda1
>         btrfs devs = /dev/sdb1
>         btrfs devs = /dev/sdc1
>         btrfs devs = /dev/sdd1
> I am not sure if this makes sense or not. I just happened to have this
> thought....:p

Yeah, that won't work. :)

We don't have any plans to make the configuration parsing more complex at
this point. This problem is probably best solved by a separate tool (not
ceph itself).

sage
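
[A generator script along the lines Sage suggests might look like the sketch
below. The host naming scheme (ceph1..cephN) and the device list are
assumptions taken from the example in the question; adjust both to match
your actual hardware.]

```shell
#!/bin/sh
# Sketch: emit an [osdN] section for every (host, disk) pair.
# NUM_HOSTS and the device list are assumptions; edit to taste.
NUM_HOSTS=3          # use 200 in practice; 3 here for brevity
osd=0
for h in $(seq 1 "$NUM_HOSTS"); do
    for dev in sda1 sdb1 sdc1 sdd1; do
        echo "[osd$osd]"
        echo "        host = ceph$h"
        echo "        btrfs devs = /dev/$dev"
        osd=$((osd + 1))
    done
done
```

Redirect the output into a file and paste (or include) it into ceph.conf.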