Choose two:

* POSIX filesystem with reliable storage underneath
* Multiple sites with a poor or high-latency connection between them
* Performance

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Tue, Mar 5, 2019 at 6:52 PM Matti Nykyri <matti@xxxxxxxxx> wrote:
>
> Hi.
>
> I'm looking for a distributed filesystem that would serve a single
> namespace with POSIX access. It should scale upwards so that it is
> easy to add more physical drives as storage fills up over time. Deep
> scrub and snapshotting are essential. I have been studying different
> options and consider Ceph with CephFS the best fit.
>
> I have a two-site setup with a fairly poor connection in between. It
> would have two MDS daemons in an active-active configuration. I'm
> considering keeping two copies on each site, so four copies of the
> data in total. For reading this is not a problem, but for writing, as
> I understand it, the client would have to wait for all four copies to
> be written before the write is acked. Is it possible to set Ceph up
> so that a write is satisfied by two writes on the local site, with
> RADOS completing the write to the remote site asynchronously?
>
> This is a fairly small system and the requirement for separate
> servers for each role is probably a bit overkill... The manual
> suggests that clients should not run on the OSD hosts. Is it
> sufficient to run an OSD on a server and then run the client in a
> Linux container? If a container is not a sufficient barrier, would a
> virtualized server do?
>
> Does this sound feasible? Do you have any suggestions before I start
> deploying and testing? Thank you all... And sorry for the newbie
> question.
>
> --
> -Matti
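
For what it's worth, placing two copies in each site is easy to express
in CRUSH, but it does not make the remote copies asynchronous: RADOS
only acknowledges a write once every OSD in the acting set has
committed it, so every write still pays the inter-site round trip.
min_size affects availability when OSDs are down, not write latency.
A rough sketch of such a rule -- the "datacenter" bucket type, rule id,
and the pool name "cephfs_data" are placeholders for whatever your
CRUSH tree and pools actually look like:

    # in the decompiled CRUSH map: pick 2 datacenters, then 2 hosts in each
    rule two_sites {
        id 1
        type replicated
        min_size 2
        max_size 4
        step take default
        step choose firstn 2 type datacenter
        step chooseleaf firstn 2 type host
        step emit
    }

    # apply it to the data pool and keep 4 replicas
    ceph osd pool set cephfs_data crush_rule two_sites
    ceph osd pool set cephfs_data size 4
    ceph osd pool set cephfs_data min_size 2

As far as I know there is no built-in asynchronous geo-replication for
CephFS at this point, which is exactly why the three items above are a
"choose two".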
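
On the colocation question: the usual concern is the kernel CephFS/RBD
client deadlocking against an OSD on the same host under memory
pressure. A container shares the host kernel, so it does not remove
that risk; a VM has its own kernel and does. The userspace client is
generally considered the safer choice on OSD hosts. A minimal sketch
with ceph-fuse -- the monitor address and client id are placeholders:

    # userspace CephFS client instead of the kernel mount
    sudo mkdir -p /mnt/cephfs
    sudo ceph-fuse --id admin -m mon1.example.net:6789 /mnt/cephfs

Whether a container is enough of a barrier for your setup is a
judgement call; a small VM per client is the conservative option.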