Migrate cephfs metadata to SSD in running cluster

Yes, https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/ is enough.

Don't test on your production environment. Before you start, back up your crush map:

ceph osd getcrushmap -o crushmap.bin
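
If anything goes wrong, you can roll back by importing that backup again:

ceph osd setcrushmap -i crushmap.bin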




Below are some hints:

ceph osd getcrushmap -o crushmap.bin

crushtool -d crushmap.bin -o crushmap.txt

vi crushmap.txt

crushtool -c crushmap.txt -o crushmap.bin

ceph osd setcrushmap -i crushmap.bin
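
One more hint: if you end up splitting each host's OSDs into separate ssd/sata host buckets (like the blog post does), make sure the OSDs don't jump back to their real host bucket when they restart. Something like this in ceph.conf should do it:

[osd]
# keep OSDs where the edited crush map puts them, otherwise they
# re-register under their real hostname on start
osd crush update on start = false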


=====
ceph osd pool set POOL_NAME crush_ruleset ID
e.g. ceph osd pool set ssd_pool crush_ruleset 0
     ceph osd pool set sata_pool crush_ruleset 1
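
For the cephfs metadata case it would be something like the below (I'm assuming the metadata pool is called "cephfs_metadata"; check the real pool name and ruleset id on your cluster first):

ceph fs ls
ceph osd lspools
ceph osd pool set cephfs_metadata crush_ruleset 1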



=====
root ssd {
        id -1           # do not change unnecessarily
        # weight 0.040
        alg straw
        hash 0  # rjenkins1
        item node1 weight 0.020
        item node4 weight 0.020
        item node2 weight 0.020
}

rule ssd_rule {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}

####### the name after "step take" must match the root bucket name: step take ssdXXX <---> root ssdXXX
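
After switching the ruleset you can watch the (small amount of) metadata move, e.g. (again assuming the metadata pool is named "cephfs_metadata"):

ceph osd pool get cephfs_metadata crush_ruleset
ceph -s
ceph pg ls-by-pool cephfs_metadata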







2017-02-16 22:38 GMT+08:00 Mike Miller <millermike287 at gmail.com>:

> Hi,
>
> Thanks all, still I would appreciate hints on a concrete procedure for how
> to migrate cephfs metadata to an SSD pool, with the SSDs being on the same
> hosts as the spinning disks.
>
> This reference I read:
> https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
> Are there other alternatives to this suggested configuration?
>
> I am kind of a little paranoid to start playing around with crush rules in
> the running system.
>
> Regards,
>
> Mike
>
> On 1/5/17 11:40 PM, jiajia zhong wrote:
>
>>
>>
>> 2017-01-04 23:52 GMT+08:00 Mike Miller <millermike287 at gmail.com>:
>>
>>     Wido, all,
>>
>>     can you point me to the "recent benchmarks" so I can have a look?
>>     How do you define "performance"? I would not expect CephFS
>>     throughput to change, but it is surprising to me that metadata on
>>     SSD would have no measurable effect on latency.
>>
>>     - mike
>>
>>
>> Operations like "ls", "stat", "find" would become faster; the bottleneck
>> is the slow OSDs which store the file data.
>>
>>
>>     On 1/3/17 10:49 AM, Wido den Hollander wrote:
>>
>>
>>             On 3 January 2017 at 02:49, Mike Miller
>>             <millermike287 at gmail.com> wrote:
>>
>>
>>             will metadata on SSD improve latency significantly?
>>
>>
>>         No, as I said in my previous e-mail, recent benchmarks showed
>>         that storing CephFS metadata on SSD does not improve performance.
>>
>>         It still might be good to do since it's not that much data, so
>>         recovery will go quickly, but don't expect a CephFS performance
>>         improvement.
>>
>>         Wido
>>
>>             Mike
>>
>>             On 1/2/17 11:50 AM, Wido den Hollander wrote:
>>
>>
>>                     On 2 January 2017 at 10:33, Shinobu Kinjo
>>                     <skinjo at redhat.com> wrote:
>>
>>
>>                     I've never done a migration of cephfs_metadata from
>>                     spindle disks to SSDs. But logically you could
>>                     achieve this through 2 phases.
>>
>>                       #1 Configure a CRUSH rule including spindle disks
>>                          and SSDs
>>                       #2 Configure a CRUSH rule pointing just to SSDs
>>                        * This would cause massive data shuffling.
>>
>>
>>                 Not really, usually the CephFS metadata isn't that much
>>                 data.
>>
>>                 Recent benchmarks (can't find them now) show that
>>                 storing CephFS metadata on SSD doesn't really improve
>>                 performance though.
>>
>>                 Wido
>>
>>
>>
>>                     On Mon, Jan 2, 2017 at 2:36 PM, Mike Miller
>>                     <millermike287 at gmail.com> wrote:
>>
>>                         Hi,
>>
>>                         Happy New Year!
>>
>>                         Can anyone point me to specific walkthrough /
>>                         howto instructions on how to move cephfs
>>                         metadata to SSD in a running cluster?
>>
>>                         How is CRUSH to be modified, step by step, such
>>                         that the metadata migrates to SSD?
>>
>>                         Thanks and regards,
>>
>>                         Mike
>>
>>
>>
>>
>>

