There are very few configuration settings passed between Cinder and Nova
when attaching a volume. I think the only real possibility (untested) would
be to configure two Cinder backends against the same Ceph cluster using two
different auth user ids -- one with the cache enabled and another with the
cache disabled. You could then update the ceph.conf on the Nova compute
hosts to have a client section for each user id, configured however you
want. You would most likely need to unset "disk_cachemodes" in your
nova.conf, since that would override any ceph.conf client-specific settings
(again, untested). A rough sketch of what that could look like is at the
bottom of this message.

On Sat, Jan 7, 2017 at 10:59 PM, Lazuardi Nasution
<mrxlazuardin@xxxxxxxxx> wrote:
> Hi,
>
> I'm still waiting for clues or any comments on this case. The case comes
> up because only some of my volumes are multi-attached. I'm trying not to
> degrade the performance of the remaining volumes, images and instance
> ephemeral data by disabling the RBD cache feature. Are there any best
> practices for combining multi-attached volumes with single-attached
> volumes?
>
> Best regards,
>
>
> Date: Tue, 3 Jan 2017 16:12:29 +0700
> From: Lazuardi Nasution <mrxlazuardin@xxxxxxxxx>
> To: Ceph Users <ceph-users@xxxxxxxxxxxxxx>
> Subject: RBD Cache & Multi Attached Volumes
>
> Hi,
>
> For use with OpenStack Cinder multi-attached volumes, is it possible to
> disable the RBD cache for specific multi-attached volumes only?
> Single-attached volumes still need the RBD cache enabled for better
> performance.
>
> If I disable the RBD cache in /etc/ceph/ceph.conf, is
> disk_cachemodes="network=writeback" in /etc/nova/nova.conf still
> effective? What if I use a different ceph.conf for specific OpenStack
> services?
>
> Best regards,

--
Jason
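
An untested sketch of the split configuration described above; the backend
names, pool name, user ids and secret UUIDs are placeholders only --
substitute whatever your deployment actually uses.

/etc/ceph/ceph.conf on the compute (and cinder-volume) hosts -- one client
section per auth user:

    [client.cinder]
    rbd cache = true
    rbd cache writethrough until flush = true

    [client.cinder-nocache]
    rbd cache = false

/etc/cinder/cinder.conf -- two backends against the same cluster, differing
only in the auth user (and libvirt secret) they use:

    [DEFAULT]
    enabled_backends = ceph-cached,ceph-nocache

    [ceph-cached]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph-cached
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_pool = volumes
    rbd_user = cinder
    rbd_secret_uuid = <libvirt secret uuid for client.cinder>

    [ceph-nocache]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph-nocache
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_pool = volumes
    rbd_user = cinder-nocache
    rbd_secret_uuid = <libvirt secret uuid for client.cinder-nocache>

/etc/nova/nova.conf on the compute hosts -- leave disk_cachemodes unset so
the per-client ceph.conf settings are not overridden:

    [libvirt]
    # disk_cachemodes = "network=writeback"

You would then create two Cinder volume types, pin each one to a backend
via the volume_backend_name extra spec, and use the "nocache" type only for
the multi-attached volumes.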