Re: Migrate cephfs metadata to SSD in running cluster

Hi,

thanks all; I would still appreciate hints on a concrete procedure for migrating CephFS metadata to an SSD pool, with the SSDs on the same hosts as the spinning disks.

I have read this reference:
https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
Are there alternatives to this suggested configuration?

I am a little paranoid about playing around with CRUSH rules in the running system.
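For what it's worth, on Luminous or later, where OSDs carry a device class ("hdd"/"ssd"), my understanding is that the migration reduces to roughly the commands below. The rule name "ssd-rule" is illustrative and I have not tested this on a production cluster:

```shell
# Sketch only: assumes a Luminous+ cluster whose SSD OSDs already
# report device class "ssd", and a metadata pool named cephfs_metadata.

# 1. Create a replicated CRUSH rule that selects only SSD OSDs,
#    with "host" as the failure domain.
ceph osd crush rule create-replicated ssd-rule default host ssd

# 2. Point the CephFS metadata pool at the new rule; Ceph then
#    backfills the (small) metadata PGs onto the SSDs.
ceph osd pool set cephfs_metadata crush_rule ssd-rule

# 3. Watch recovery until the cluster returns to HEALTH_OK.
ceph -s
ceph osd pool get cephfs_metadata crush_rule
```

Since the metadata pool is small, the resulting backfill should be short compared to moving a data pool.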

Regards,

Mike

On 1/5/17 11:40 PM, jiajia zhong wrote:


2017-01-04 23:52 GMT+08:00 Mike Miller <millermike287@xxxxxxxxx>:

    Wido, all,

    can you point me to the "recent benchmarks" so I can have a look?
    How do you define "performance"? I would not expect CephFS
    throughput to change, but it is surprising to me that metadata on
    SSD would have no measurable effect on latency.

    - mike


Operations like "ls", "stat", and "find" would become faster; the bottleneck is the slow OSDs which store the file data.
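A rough way to gauge that metadata-side latency before and after the migration is simply to time recursive stat-heavy operations on a mounted CephFS (the mount path below is illustrative):

```shell
# Crude metadata-latency check on a CephFS mount; these walk the
# directory tree and stat every entry, exercising the MDS/metadata
# pool rather than file data. Drop caches between runs for a fair
# comparison (requires root).
sync && echo 3 > /proc/sys/vm/drop_caches

time ls -lR /mnt/cephfs > /dev/null
time find /mnt/cephfs -name '*.log' > /dev/null
```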


    On 1/3/17 10:49 AM, Wido den Hollander wrote:


            On 3 January 2017 at 2:49, Mike Miller
            <millermike287@xxxxxxxxx> wrote:


            will metadata on SSD improve latency significantly?


        No, as I said in my previous e-mail, recent benchmarks showed
        that storing CephFS metadata on SSD does not improve performance.

        It still might be worth doing, since it's not much data and
        recovery will therefore go quickly, but don't expect a CephFS
        performance improvement.

        Wido

            Mike

            On 1/2/17 11:50 AM, Wido den Hollander wrote:


                    On 2 January 2017 at 10:33, Shinobu Kinjo
                    <skinjo@xxxxxxxxxx> wrote:


                    I've never done a migration of cephfs_metadata from
                    spindle disks to SSDs, but logically you could
                    achieve this in 2 phases:

                      #1 Configure a CRUSH rule including both spindle
                         disks and SSDs
                      #2 Configure a CRUSH rule pointing only to SSDs
                       * This would cause massive data shuffling.
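The manual CRUSH-map edit that phase #2 implies (the classic, pre-device-class procedure, as in the Sebastien Han post linked above) can be sketched as follows; file names and the rule number are illustrative:

```shell
# Sketch of the manual CRUSH edit cycle; run against a live cluster
# at your own risk, ideally after testing with "crushtool --test".
ceph osd getcrushmap -o crushmap.bin        # export the binary map
crushtool -d crushmap.bin -o crushmap.txt   # decompile to text

# ... edit crushmap.txt by hand: add an "ssd" root containing only
#     the SSD OSDs, plus a rule that takes from that root ...

crushtool -c crushmap.txt -o crushmap.new   # recompile
ceph osd setcrushmap -i crushmap.new        # inject the new map

# Finally, point the metadata pool at the new rule
# (pre-Luminous parameter name; "1" is the new rule's id):
ceph osd pool set cephfs_metadata crush_ruleset 1
```

Because the metadata pool is small, the "massive data shuffling" should be limited to the metadata PGs backfilling onto the SSDs.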


                Not really, usually the CephFS metadata isn't that much
                data.

                Recent benchmarks (can't find them now) show that
                storing CephFS metadata on SSD doesn't really improve
                performance though.

                Wido



                    On Mon, Jan 2, 2017 at 2:36 PM, Mike Miller
                    <millermike287@xxxxxxxxx> wrote:

                        Hi,

                        Happy New Year!

                        Can anyone point me to a specific walkthrough /
                        howto on how to move CephFS metadata to SSD in
                        a running cluster?

                        How should CRUSH be modified, step by step, so
                        that the metadata migrates to SSD?

                        Thanks and regards,

                        Mike
                        _______________________________________________
                        ceph-users mailing list
                        ceph-users@xxxxxxxxxxxxxx
                        http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com






