Re: How to deal with a single host with several harddisks

2011/5/4 Wido den Hollander <wido@xxxxxxxxx>:
> Hi,
>
> On Wed, 2011-05-04 at 17:37 +0800, tsk wrote:
>> Hi folks,
>>
>>
>> May I know that, if there are 6 hard disks available for btrfs in just a
>> single host, should there be 6 cosd processes in the host when the
>> disks are all working?
>
> Yes, that is the common way.
>
>>
>> A single cosd process cannot manage several disks?
>>
>
> Yes and no. A single cosd process simply wants a mount point. If you
> look closer, the init script just mounts the device specified by
> 'btrfs devs' in the configuration.
>
> You could run LVM, mdadm or even a btrfs multi-disk volume under a
> mountpoint; this way you could have one cosd process managing several disks.
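
(For illustration, a rough sketch of that multi-disk route; the device names
and mount point below are made up, not from this thread. One btrfs filesystem
can span several disks and be mounted at a single OSD data directory:

  # create one btrfs filesystem spanning two example disks
  mkfs.btrfs /dev/sdb /dev/sdc
  # on older setups the kernel may need to scan for the member devices first
  btrfs device scan
  # mounting any member device brings up the whole multi-disk volume
  mount /dev/sdb /srv/osd.0

A single cosd pointed at /srv/osd.0 then covers both disks.)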
>
> I would recommend running one cosd process per disk; it takes a bit more
> memory (about 800M per cosd), but this way Ceph can take full
> advantage of all the available disk space.


800M or 80M?
There are 12 disks in each of my hosts, 1TB each. 10 disks of every host can
be used for Ceph.
With one cosd per disk, there will be 10 cosd processes per host, which would
need a lot of memory!
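(At 800M each that would be roughly 8G of RAM per host just for the cosd
processes; at 80M each it would be closer to 800M in total.)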

I note that a new cosd process takes 35M of memory, but another cosd which
has been running for 5 days takes 112M.  Hoping there is no memory leak.


> If you have multiple hosts I would recommend making a CRUSH map which
> makes sure your data replicas are not stored within the same physical
> machine: http://ceph.newdream.net/wiki/Custom_data_placement_with_CRUSH
>
> The newer versions of Ceph will make a basic CRUSH map based on the
> ceph.conf; as far as I know it will prevent saving replicas on the same
> node. However, I would recommend checking your CRUSH map to make sure
> it does.
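
(A minimal sketch of what such a map can look like once decompiled with
crushtool; the host, root and rule names below are only examples and assume
those bucket types are declared in the map, so see the wiki page above for the
real format and your actual OSD ids and weights:

  host node01 {
          id -2
          alg straw
          item osd.0 weight 1.000
          item osd.1 weight 1.000
  }
  host node02 {
          id -3
          alg straw
          item osd.2 weight 1.000
          item osd.3 weight 1.000
  }
  root default {
          id -1
          alg straw
          item node01 weight 2.000
          item node02 weight 2.000
  }
  rule data {
          ruleset 0
          type replicated
          min_size 1
          max_size 10
          step take default
          # pick N distinct hosts, then one osd on each, so replicas
          # never share a physical machine
          step choose firstn 0 type host
          step choose firstn 1 type osd
          step emit
  }

The two "step choose" lines are what keep the replicas of a placement group
on different hosts.)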
>
>> How should the ceph.conf be configured for this scenario?
>
> For example:
>
> [osd.0]
>   host = node01
>   btrfs devs = /dev/sda
>
> [osd.1]
>   host = node01
>   btrfs devs = /dev/sdb
>
> [osd.2]
>   host = node01
>   btrfs devs = /dev/sdc
>
> etc, etc
>
> The init script and mkcephfs will then format the specified drives with
> btrfs and mount them when the OSDs start.
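
(Roughly, and assuming the mkcephfs flags of that era; check mkcephfs --help
and the wiki for the exact options in your version:

  # format the 'btrfs devs' listed in ceph.conf and initialise all nodes
  mkcephfs -a -c /etc/ceph/ceph.conf --mkbtrfs
  # start the daemons on every host; the init script mounts the btrfs devs
  /etc/init.d/ceph -a start

Both commands are run from one node; -a makes them act on every host listed
in ceph.conf.)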
>
> I would also recommend running your journal on a separate drive:
> http://ceph.newdream.net/wiki/Troubleshooting#Performance
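
(A sketch of what that could look like per OSD in ceph.conf; /dev/sdm1 is a
made-up partition on a separate, faster disk:

  [osd.0]
          host = node01
          btrfs devs = /dev/sda
          ; journal on its own partition instead of on the data disk
          osd journal = /dev/sdm1
          ; size in MB, only needed when the journal is a plain file
          osd journal size = 1000

With one journal partition per cosd, a single fast disk split into partitions
can serve several OSDs.)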
>
> Wido
>
>>
>>
>> Thx!
>
>
>
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

