How do you manage cache coherency with Varnish?

On Jun 21, 2013, at 6:09 AM, Artem Silenkov <artem.silenkov@xxxxxxxxx> wrote:

> This picture shows the way we do it:
> http://habrastorage.org/storage2/1ed/532/627/1ed5326273399df81f3a73179848a404.png
>
> Regards, Artem Silenkov, 2GIS TM.
> ---
> 2GIS LLC
> http://2gis.ru
> a.silenkov at 2gis.ru
> gtalk: artem.silenkov at gmail.com
> cell: +79231534853
>
> 2013/6/21 Alvaro Izquierdo Jimeno <aizquierdo@xxxxxxxx>:
>
> Thanks, Artem.
>
> From: Artem Silenkov [mailto:artem.silenkov@xxxxxxxxx]
> Sent: Friday, June 21, 2013 14:01
> To: Alvaro Izquierdo Jimeno
> CC: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: several radosgw sharing pools
>
> Good day!
>
> We balance the load like this:
>
>     varnish frontend --> radosgw1
>                     \--> radosgw2
>
> Each radosgw host uses its own config, so it is not necessary to add both nodes to every ceph.conf. It looks like this:
>
> Host1:
> [client.radosgw.gateway]
> host = myhost1
> ...
>
> Host2:
> [client.radosgw.gateway]
> host = myhost2
> ...
>
> Pools, users, etc. are internal parameters, so every radosgw installation shares them without any problem, and shares them concurrently, so you can do atomic writes and other good things. You could also use monit to monitor service health and even try to repair it automatically.
>
> Regards, Artem Silenkov, 2GIS TM.
> ---
> 2GIS LLC
> http://2gis.ru
> a.silenkov@xxxxxxx
> gtalk: artem.silenkov@xxxxxxxxx
> cell: +79231534853
>
> 2013/6/21 Alvaro Izquierdo Jimeno <aizquierdo@xxxxxxxx>:
>
> Hi,
>
> I have a ceph cluster with a radosgw running. The radosgw part of ceph.conf is:
>
> [client.radosgw.gateway]
> host = myhost1
> ...
>
> But if the radosgw process dies for some reason, we lose the service. So:
>
> - Can I set up another radosgw on another host, sharing pools, users, etc. in ceph? i.e.:
>
> [client.radosgw.gateway2]
> host = myhost2
> ...
> - If the answer to the previous question is 'yes', is there any load-balancer option in the radosgw configuration?
>
> Thank you very much in advance, and best regards,
> Álvaro.
>
> ____________
> Verified virus-free by G Data AntiVirus. Version: AVA 22.10538 of 21.06.2013. Virus news: www.antiviruslab.com

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
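
[Editor's note] The "varnish frontend --> radosgw1 / radosgw2" setup Artem describes can be sketched as a Varnish VCL config. This is a minimal illustration only, not Artem's actual config: the backend hostnames, port, and probe parameters are assumptions, and the syntax targets Varnish 3.x (current as of this June 2013 thread).

```vcl
# Two radosgw backends; hostnames and port are hypothetical.
backend radosgw1 {
    .host = "myhost1";
    .port = "80";
    # Health probe so a dead radosgw process is taken out of rotation.
    .probe = {
        .url = "/";
        .interval = 5s;
        .timeout = 2s;
        .window = 5;
        .threshold = 3;
    }
}

backend radosgw2 {
    .host = "myhost2";
    .port = "80";
    .probe = {
        .url = "/";
        .interval = 5s;
        .timeout = 2s;
        .window = 5;
        .threshold = 3;
    }
}

# Round-robin across the two gateways.
director rgw round-robin {
    { .backend = radosgw1; }
    { .backend = radosgw2; }
}

sub vcl_recv {
    set req.backend = rgw;
}
```

Because both gateways talk to the same RADOS pools and users (they are cluster-internal state, as Artem notes), either backend can serve any request, so a simple round-robin director is sufficient; the probes handle the failover case Álvaro asked about.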