Re: Client's read affinity

Another thing I would love to ask and clarify: would this also work for
OpenStack VMs that use Cinder, rather than VMs that use the direct
integration between Nova and Ceph? We use Cinder bootable volumes as
well as regular Cinder volumes attached to VMs.

Thanks.

On Wed, Apr 5, 2017 at 10:36 AM, Wes Dillingham
<wes_dillingham@xxxxxxxxxxx> wrote:
> This is a big development for us. I have not heard of this option either. I
> am excited to play with this feature and to explore the implications it may
> have for improving RBD reads in our multi-datacenter RBD pools.
>
> Just to clarify: the following options, "rbd localize parent reads = true"
> and "crush location = foo=bar", are configuration options for the client's
> ceph.conf and are not needed on OSD hosts, as their locations are already
> encoded in the CRUSH map.
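>
> As a concrete illustration, a minimal client-side sketch might look like
> this in ceph.conf (the host and rack names below are hypothetical
> placeholders for whatever buckets exist in your CRUSH map):
>
>     [client]
>         # Read parent (clone source) data from the nearest replica:
>         rbd localize parent reads = true
>         # Where this client sits in the CRUSH hierarchy:
>         crush location = host=compute01 rack=rack1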
>
> It looks like this is a pretty old option
> ( http://narkive.com/ZkTahBVu:5.455.67 ), so I am assuming it is relatively
> tried and true? But I have never heard of it before... Is anyone out there
> using this in a production RBD environment?
>
>
>
>
> On Tue, Apr 4, 2017 at 7:36 PM, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
>>
>> AFAIK, the OSDs should discover their location in the CRUSH map
>> automatically -- therefore, this "crush location" config override
>> would be used for librbd client configuration (i.e. the "[client]"
>> section) to describe the client's location in the CRUSH map relative
>> to racks, hosts, etc.
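>>
>> As a quick sanity check, you can confirm where the OSDs actually landed
>> in the hierarchy from any node with an admin keyring, for example:
>>
>>     # Show the CRUSH hierarchy (roots, racks, hosts, OSDs):
>>     ceph osd tree
>>
>>     # Or extract and decompile the full CRUSH map:
>>     ceph osd getcrushmap -o crushmap.bin
>>     crushtool -d crushmap.bin -o crushmap.txt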
>>
>> On Tue, Apr 4, 2017 at 3:12 PM, Brian Andrus <brian.andrus@xxxxxxxxxxxxx>
>> wrote:
>> > Jason, I haven't heard much about this feature.
>> >
>> > Will the localization take effect if the crush location configuration is
>> > set in the [osd] section, or does it need to apply globally for clients
>> > as well?
>> >
>> > On Fri, Mar 31, 2017 at 6:38 AM, Jason Dillaman <jdillama@xxxxxxxxxx>
>> > wrote:
>> >>
>> >> Assuming you are asking about RBD-backed VMs, it is not possible to
>> >> localize all reads to the VM image. You can, however, enable read
>> >> localization for the parent image, since that is a read-only data set.
>> >> To enable that feature, set "rbd localize parent reads = true" and
>> >> populate "crush location = host=X rack=Y etc=Z" in your ceph.conf.
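>> >>
>> >> To confirm that a running client actually picked the setting up, one
>> >> option is to query its admin socket (this assumes "admin socket = ..."
>> >> has been enabled in the client's ceph.conf; the socket path below is
>> >> just an example):
>> >>
>> >>     ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok \
>> >>         config get rbd_localize_parent_reads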
>> >>
>> >> On Fri, Mar 31, 2017 at 9:00 AM, Alejandro Comisario
>> >> <alejandro@xxxxxxxxxxx> wrote:
>> >> > Any experiences?
>> >> >
>> >> > On Wed, Mar 29, 2017 at 2:02 PM, Alejandro Comisario
>> >> > <alejandro@xxxxxxxxxxx> wrote:
>> >> >> Hi guys.
>> >> >> I have a Jewel cluster divided into two racks, and that division is
>> >> >> configured in the CRUSH map. I have clients (OpenStack compute
>> >> >> nodes) that are closer to one rack than to the other.
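>> >> >>
>> >> >> To give a concrete picture, the two-rack layout looks roughly like
>> >> >> this in the decompiled CRUSH map (bucket names, IDs and weights are
>> >> >> simplified here, and the host bucket definitions are omitted):
>> >> >>
>> >> >>     rack rack1 {
>> >> >>         id -3
>> >> >>         alg straw
>> >> >>         hash 0  # rjenkins1
>> >> >>         item node-a weight 1.000
>> >> >>     }
>> >> >>     rack rack2 {
>> >> >>         id -4
>> >> >>         alg straw
>> >> >>         hash 0  # rjenkins1
>> >> >>         item node-b weight 1.000
>> >> >>     }
>> >> >>     root default {
>> >> >>         id -1
>> >> >>         alg straw
>> >> >>         hash 0  # rjenkins1
>> >> >>         item rack1 weight 1.000
>> >> >>         item rack2 weight 1.000
>> >> >>     }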
>> >> >>
>> >> >> I would love (if it is possible) to specify in some way that the
>> >> >> clients should read first from the nodes in a specific rack, and
>> >> >> fall back to the other rack if that is not possible.
>> >> >>
>> >> >> Is that doable? Can somebody explain to me how to do it?
>> >> >> Best.
>> >> >>
>> >> >> --
>> >> >> Alejandrito
>> >> >
>> >> >
>> >> >
>> >> > --
>> >> > Alejandro Comisario
>> >> > CTO | NUBELIU
>> >> > E-mail: alejandro@nubeliu.com | Cell: +54 9 11 3770 1857
>> >> > www.nubeliu.com
>> >>
>> >>
>> >>
>> >> --
>> >> Jason
>> >
>> >
>> >
>> >
>> > --
>> > Brian Andrus | Cloud Systems Engineer | DreamHost
>> > brian.andrus@xxxxxxxxxxxxx | www.dreamhost.com
>>
>>
>>
>> --
>> Jason
>
>
>
>
> --
> Respectfully,
>
> Wes Dillingham
> wes_dillingham@xxxxxxxxxxx
> Research Computing | Infrastructure Engineer
> Harvard University | 38 Oxford Street, Cambridge, Ma 02138 | Room 210
>
>
>



-- 
Alejandrito
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


