On 08/19/2013 12:01 PM, Schmitt, Christian wrote:
>> yes. depends on 'everything', but it's possible (though not recommended)
>> to run mon, mds, and osd's on the same host, and even do virtualisation.
>
> Currently we don't want to virtualise on this machine since the
> machine is really small; as said, we focus on small to midsize
> businesses. Most of the time they even need a tower server due to the
> lack of a proper rack. ;/

whoa :)

>>> Our Application, Ceph's object storage and a database?
>>
>> what is 'a database'?
>
> We run PostgreSQL or MariaDB (without/with Galera, depending on the
> cluster size)

You wouldn't want to put the data of Postgres or MariaDB on CephFS. I
would run the native versions directly on the servers and use MySQL
multi-master circular replication. I don't know about similar features
of Postgres.

>> shared nothing is possible with ceph, but in the end this really
>> depends on your application.
>
> hm, when a disk fails we're already doing backups on a Dell PowerVault
> RD1000, so I don't think that's a problem, and we would also run ceph
> on a Dell PERC RAID controller with RAID1 enabled on the data disk.

this is open to discussion, and really depends on your use case.

>>> Currently we make an archiving software for small customers and we
>>> want to move things on the file system on a object storage.
>>
>> you mean from the filesystem to an object storage?
>
> yes, currently everything is on the filesystem and this is really
> horrible, thousands of pdfs just on the filesystem. we can't scale up
> that easily with this setup.

Got it.

> Currently we run on Microsoft servers, but we plan to rewrite our
> whole codebase with scaling in mind, from 1 to X servers. So 1, 3, 5,
> 7, 9, ... X²-1 should be possible.

cool.

>>> Currently we only have customers that need 1 machine or 3 machines.
>>> But everything should work just as well on more.
>>
>> it would with ceph. probably :)
>
> That's nice to hear. I was really scared that we wouldn't find a
> solution that can run on 1 system and scale up to even more. We first
> looked at HDFS, but it isn't lightweight.

not only that, HDFS also has a single point of failure.

> And the overhead of metadata etc. just isn't that cool. :)
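
To make the database suggestion a bit more concrete: circular
multi-master replication just means each node replicates from its
neighbour, with auto_increment_increment / auto_increment_offset set in
my.cnf so the nodes don't hand out colliding ids. A rough two-node
sketch using Python and MySQL Connector/Python (the hostnames,
credentials and binlog coordinates below are placeholders, not anything
from a real setup):

    # Sketch: wire node-a and node-b into a two-node replication circle.
    # Assumes my.cnf on each node already has a distinct server-id,
    # log-bin enabled, auto_increment_increment=2 and
    # auto_increment_offset=1 (node-a) / 2 (node-b).
    import mysql.connector

    def point_replica_at(node, master_host, log_file, log_pos):
        # Tell `node` to start replicating from `master_host`.
        conn = mysql.connector.connect(host=node, user="root",
                                       password="secret")
        cur = conn.cursor()
        cur.execute(
            "CHANGE MASTER TO MASTER_HOST='{0}', MASTER_USER='repl', "
            "MASTER_PASSWORD='repl-pass', MASTER_LOG_FILE='{1}', "
            "MASTER_LOG_POS={2}".format(master_host, log_file, log_pos)
        )
        cur.execute("START SLAVE")
        cur.close()
        conn.close()

    # each node replicates from the other one -- that's the "circle"
    point_replica_at("node-a", "node-b", "mysql-bin.000001", 4)
    point_replica_at("node-b", "node-a", "mysql-bin.000001", 4)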
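
And on getting the pdfs off the filesystem: once a cluster is up, the
application can talk to it directly through librados. A minimal
python-rados sketch (the 'archive' pool, the object key and the
ceph.conf path are just placeholders):

    # Sketch: store a pdf as a RADOS object and read it back.
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("archive")   # pool must already exist
        try:
            with open("invoice-0001.pdf", "rb") as f:
                ioctx.write_full("customer42/invoice-0001.pdf", f.read())
            # later, reading it back:
            data = ioctx.read("customer42/invoice-0001.pdf",
                              length=16 * 1024 * 1024)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

If you'd rather not link librados into the application, radosgw with
its S3-compatible API is the other obvious route.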