Re: How to tell a VM to write to local ceph nodes rather than over the network

On 14 January 2015 at 12:08, JM <jmaxinfo@xxxxxxxxx> wrote:
Hi Roland,

You should tune your Ceph CRUSH map with a custom rule in order to do that (write first to s3, then to the others). That custom rule can then be applied to your Proxmox pool.
(Note that what you want to do is only worthwhile if you run the VMs from host s3.)

Can you send us your CRUSH map?
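For reference, the usual workflow for this kind of change is to export the map, edit it as text, and inject it back. A sketch, assuming admin access to the cluster and the file names shown here:

```shell
# dump the compiled CRUSH map from the cluster
ceph osd getcrushmap -o crushmap.bin

# decompile it to an editable text form
crushtool -d crushmap.bin -o crushmap.txt

# ... edit crushmap.txt, e.g. add a custom rule ...

# recompile and inject the modified map
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```

It is worth testing the edited map with `crushtool --test` before injecting it, since a bad rule can leave PGs unmapped.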

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
device 8 osd.8
device 9 osd.9
device 10 osd.10
device 11 osd.11
device 12 osd.12
device 13 osd.13
device 14 osd.14
device 15 osd.15
device 16 osd.16
device 17 osd.17
device 18 osd.18

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host h1 {
    id -2        # do not change unnecessarily
    # weight 8.140
    alg straw
    hash 0    # rjenkins1
    item osd.1 weight 0.900
    item osd.3 weight 0.900
    item osd.4 weight 0.900
    item osd.5 weight 0.680
    item osd.6 weight 0.680
    item osd.7 weight 0.680
    item osd.8 weight 0.680
    item osd.9 weight 0.680
    item osd.10 weight 0.680
    item osd.11 weight 0.680
    item osd.12 weight 0.680
}
host s3 {
    id -3        # do not change unnecessarily
    # weight 0.450
    alg straw
    hash 0    # rjenkins1
    item osd.2 weight 0.450
}
host s2 {
    id -4        # do not change unnecessarily
    # weight 0.900
    alg straw
    hash 0    # rjenkins1
    item osd.13 weight 0.900
}
host s1 {
    id -5        # do not change unnecessarily
    # weight 1.640
    alg straw
    hash 0    # rjenkins1
    item osd.14 weight 0.290
    item osd.0 weight 0.270
    item osd.15 weight 0.270
    item osd.16 weight 0.270
    item osd.17 weight 0.270
    item osd.18 weight 0.270
}
root default {
    id -1        # do not change unnecessarily
    # weight 11.130
    alg straw
    hash 0    # rjenkins1
    item h1 weight 8.140
    item s3 weight 0.450
    item s2 weight 0.900
    item s1 weight 1.640
}

# rules
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}

# end crush map
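Given the map above, a custom rule of the sort JM describes might look like the following. This is only a sketch: the rule and pool names are made up, and because the second pass starts again from the default root, it can pick host s3 a second time, so in practice this pattern is usually combined with separate CRUSH roots (as in the ssd-primary example in the Ceph docs):

```
rule s3_first {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    # first replica: an OSD under host s3
    step take s3
    step chooseleaf firstn 1 type osd
    step emit
    # remaining replicas: hosts under the default root
    step take default
    step chooseleaf firstn -1 type host
    step emit
}
```

The rule would then be assigned to the pool backing the VMs, e.g. `ceph osd pool set <pool> crush_ruleset 1` (syntax for Ceph releases of this era). Note this only controls where the primary copy lives; every write is still replicated over the network to the other replicas before it is acknowledged.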

thanks so far!

regards

Roland

2015-01-13 22:03 GMT+01:00 Roland Giesler <roland@xxxxxxxxxxxxxx>:
I have a 4 node ceph cluster, but the disks are not equally distributed across the machines; they differ substantially from each other.

One machine has 12 x 1 TB SAS drives (h1), another has 8 x 300 GB SAS (s3), and two machines have only two 1 TB drives each (s2 & s1).

Now machine s3 has by far the most CPUs and RAM, so I'm running my VMs mostly from there, but I want to make sure that writes to the ceph cluster go to the "local" OSDs on s3 first, with the additional copies then written over the network.

Is this possible with ceph? The VMs are KVM on Proxmox, in case it's relevant.

regards

Roland

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


