Re: How to tell a VM to write to local ceph nodes rather than over the network.


 



# Get the compiled crushmap
root@server01:~# ceph osd getcrushmap -o /tmp/myfirstcrushmap

# Decompile the compiled crushmap above
root@server01:~# crushtool -d /tmp/myfirstcrushmap -o /tmp/myfirstcrushmap.txt

Then send us your /tmp/myfirstcrushmap.txt file. :)
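For reference, once the decompiled map has been edited, the same tools can push it back. This is a sketch of the rest of the round trip, reusing the /tmp/myfirstcrushmap paths from above (the .new filename is my own choice, and the commands need a live Ceph cluster with an admin keyring):

```shell
MAP=/tmp/myfirstcrushmap

# 1. Edit the decompiled map with your new rule
#    (e.g. vi ${MAP}.txt)

# 2. Recompile the edited text map back into binary form
crushtool -c ${MAP}.txt -o ${MAP}.new

# 3. Dry-run the new map before injecting it: show which OSDs
#    a rule would pick for 2 replicas
crushtool -i ${MAP}.new --test --show-mappings --rule 0 --num-rep 2

# 4. Inject the new map into the running cluster
ceph osd setcrushmap -i ${MAP}.new
```

Testing with crushtool --test first is worthwhile, since a bad map injected with setcrushmap takes effect immediately.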


2015-01-14 17:36 GMT+01:00 Roland Giesler <roland@xxxxxxxxxxxxxx>:
On 14 January 2015 at 12:08, JM <jmaxinfo@xxxxxxxxx> wrote:
Hi Roland,

You should tune your Ceph CRUSH map with a custom rule in order to do that (write first to s3, then to the others). This custom rule will then be applied to your Proxmox pool.
(What you want to do is only useful if you run the VMs from host s3.)

Can you give us your CRUSH map?

Please note that I made a mistake in my email. The machine that I want to write to first is s1, not s3.
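For what it's worth, a custom rule of the kind JM describes might look something like this in the decompiled map. This is only a sketch in the pre-Hammer CRUSH rule syntax: the rule name and ruleset number are made up, and it assumes the map contains a host bucket named s1 under the default root:

```
# Hypothetical rule: place the primary copy on host s1, then the
# remaining replicas anywhere in the default root.
rule local_s1_first {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        # first copy: one OSD from host s1
        step take s1
        step choose firstn 1 type osd
        step emit
        # remaining copies: other hosts from the default root
        # (note: nothing excludes s1 here, so a replica may land
        # on s1 again; disjoint roots avoid that)
        step take default
        step chooseleaf firstn -1 type host
        step emit
}
```

A pool is then pointed at the rule with something like `ceph osd pool set <pool> crush_ruleset 1`.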

For the life of me I cannot find how to extract the CRUSH map. I found:
ceph osd getcrushmap -o crushfilename
Where can I find the crush file? I've never needed this before.
This is my first installation, so please bear with me while I learn!

Lionel: I read what you're saying. However, the strange thing is that last year I had this Windows 2008 VM running on the same cluster without changes, and coming back from leave in the new year, it has slowed to a crawl. I don't quite know where to start tracing this. The Windows machine itself is not the problem, since the VM's boot process is very slow even before Windows starts.

thanks

Roland


 



2015-01-13 22:03 GMT+01:00 Roland Giesler <roland@xxxxxxxxxxxxxx>:
I have a 4-node Ceph cluster, but the disks are not equally distributed across the machines; they differ substantially from each other.

One machine has 12 x 1 TB SAS drives (h1), another has 8 x 300 GB SAS drives (s3), and two machines have only two 1 TB drives each (s1 & s2).

Now, machine s3 has by far the most CPUs and RAM, so I'm running my VMs mostly from there, but I want to make sure that writes to the Ceph cluster go to the "local" OSDs on s3 first, with the additional copies then replicated over the network.

Is this possible with Ceph? The VMs are KVM under Proxmox, in case it's relevant.

regards

Roland

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




