Re: questions on editing crushmap for ceph cache tier

> On Jul 31, 2015, at 2:55 AM, Robert LeBlanc <robert@xxxxxxxxxxxxx> wrote:
> 
> You are close...
> 
> I've done it by creating a new SSD root in the CRUSH map and putting the
> SSD OSDs into a -ssd host entry. I then created a new CRUSH rule that
> chooses from the SSD root and had the tiering pool use that rule. If you
> look at the example in the document and think of ceph-osd-ssd-server-1
> and ceph-osd-platter-server-1 as the same physical server with just
> logical separation, you can follow the rest of the example. You will
> need to either modify ceph.conf so the SSD OSDs have a different
> CRUSH map location, or write a location hook script to do it
> automatically [1].
> 
> ceph.conf example:
> [osd.70]
>        crush location = root=ssd host=nodez-ssd

Thanks, it works. Modifying ceph.conf is important: otherwise, after the OSDs
are restarted, they automatically move back to their original host in the CRUSH map.
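
For completeness, once the SSD rule is in place, pointing the cache pool at it
and attaching the tier looks roughly like this. The pool names (cold-pool,
hot-cache) and the ruleset id are placeholders, and on releases newer than
Hammer the pool option is crush_rule rather than crush_ruleset:

  # create the cache pool and make it use the SSD ruleset
  ceph osd pool create hot-cache 128
  ceph osd pool set hot-cache crush_ruleset 1

  # put it in front of the existing pool as a writeback cache tier
  ceph osd tier add cold-pool hot-cache
  ceph osd tier cache-mode hot-cache writeback
  ceph osd tier set-overlay cold-pool hot-cache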

> 
> To do it programmatically, you can use /usr/bin/ceph-crush-location as a
> base. I extended it by finding the device, then checking through
> hdparm whether the rotation rate was greater than zero. If it wasn't,
> then I output the hostname with the ssd portion, otherwise just the
> hostname. It was only a few lines of code, but I can't find it right
> now. I was waiting for ACLs to be modified so that I could query the
> data center location from our inventory management system (that was a
> few months ago) and I'm still waiting.
> 
> [1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-January/045673.html
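
In case it is useful to others, here is a rough sketch of such a location hook,
wired in with "osd crush location hook = /path/to/script" in ceph.conf. The
device lookup is an assumption based on the default OSD data path, it reads
/sys/block/*/queue/rotational instead of parsing hdparm output, and the
argument handling only approximates the stock ceph-crush-location script:

  #!/bin/sh
  # Called by ceph with: --cluster <name> --id <osd-id> --type osd
  while [ $# -gt 0 ]; do
      case "$1" in
          --id) ID="$2"; shift ;;
      esac
      shift
  done

  HOST=$(hostname -s)

  # Find the block device backing this OSD's data directory
  # (assumes the default /var/lib/ceph/osd/ceph-$ID mount point).
  DEV=$(df -P /var/lib/ceph/osd/ceph-"$ID" | awk 'NR==2 {print $1}')
  DEV=$(basename "$DEV" | sed 's/[0-9]*$//')   # e.g. /dev/sdb1 -> sdb

  # rotational is 0 for SSDs, 1 for spinning disks.
  if [ "$(cat /sys/block/"$DEV"/queue/rotational 2>/dev/null)" = "0" ]; then
      echo "root=ssd host=${HOST}-ssd"
  else
      echo "root=default host=${HOST}"
  fi
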
> ----------------
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
> 
> 
> On Wed, Jul 29, 2015 at 9:21 PM, van <chaofanyu@xxxxxxxxxxx> wrote:
>> Hi, list,
>> 
>> Ceph cache tier seems very promising for performance.
>> According to
>> http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
>> , I need to create a new pool based on SSD OSDs.
>> 
>> Currently, I have two servers with several HDD-based OSDs. I plan to add one
>> SSD-based OSD to each server and then use these two OSDs to build a cache
>> pool.
>> But I have run into problems editing the crushmap.
>> The example in the link uses two new hosts for the SSD OSDs and then creates
>> a new ruleset that takes the new hosts.
>> But in my environment, I do not have new servers to use.
>> Can I create a ruleset that chooses only some of the OSDs in a host?
>> For example, in the crushmap shown below, osd.2 and osd.5 are newly added
>> SSD-based OSDs. How can I create a ruleset that chooses only these two OSDs,
>> and how can I keep the default ruleset from choosing osd.2 and osd.5?
>> Is this possible, or do I have to add a new server to deploy the cache tier?
>> Thanks.
>> 
>> host node0 {
>>  id -2
>>  alg straw
>>  hash 0
>>  item osd.0 weight 1.0 # HDD
>>  item osd.1 weight 1.0 # HDD
>>  item osd.2 weight 0.5 # SSD
>> }
>> 
>> host node1 {
>>  id -3
>>  alg straw
>>  hash 0
>>  item osd.3 weight 1.0 # HDD
>>  item osd.4 weight 1.0 # HDD
>>  item osd.5 weight 0.5 # SSD
>> }
>> 
>> root default {
>>        id -1           # do not change unnecessarily
>>        # weight 1.560
>>        alg straw
>>        hash 0  # rjenkins1
>>        item node0 weight 2.5
>>        item node1 weight 2.5
>> }
>> 
>> # typical ruleset
>> rule replicated_ruleset {
>>        ruleset 0
>>        type replicated
>>        min_size 1
>>        max_size 10
>>        step take default
>>        step chooseleaf firstn 0 type host
>>        step emit
>> }
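
Coming back to the question quoted above: one way to lay this out (the bucket
names, ids and weights below are illustrative, not my exact map) is to move
osd.2 and osd.5 into their own host buckets under a new root ssd, so the
default rule, which takes root default, never sees them:

  host node0-ssd {
          id -4
          alg straw
          hash 0
          item osd.2 weight 0.5
  }

  host node1-ssd {
          id -5
          alg straw
          hash 0
          item osd.5 weight 0.5
  }

  root ssd {
          id -6
          alg straw
          hash 0
          item node0-ssd weight 0.5
          item node1-ssd weight 0.5
  }

  rule ssd_ruleset {
          ruleset 1
          type replicated
          min_size 1
          max_size 10
          step take ssd
          step chooseleaf firstn 0 type host
          step emit
  }

The original node0 and node1 buckets then keep only the HDD OSDs (with their
item weights in root default reduced accordingly), and the crush location
entries in ceph.conf keep the SSD OSDs in these buckets across restarts.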
>> 
>> 
>> 
>> van
>> chaofanyu@xxxxxxxxxxx
>> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



