Hi Em,

it is highly recommended to put the journals on SSDs, see
https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/

---

Also, if you care about speed, a cache tier is highly recommended.

---

Create the pool with a pg_num value that is not too high. You can increase it at any time, but you cannot simply decrease it again. Your cluster will also stop working if the PG/OSD ratio gets too high.

---

You must create different pools for RBD and CephFS, but the documentation will tell you that anyway.

http://docs.ceph.com/docs/jewel/rados/ is in general a very good starting point. I suggest you read it, and I mean read it, not just skim over it. And after you have read it, read it again, because you will certainly have missed some useful information.

A few illustrative example commands for the points above are sketched at the end of this mail, below the quoted message.

Good luck!

--
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:info@xxxxxxxxxxxxxxxxx

Address:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402 at the Amtsgericht Hanau
Managing director: Oliver Dzombic

Tax no.: 35 236 3622 1
VAT ID: DE274086107

On 26.06.2016 at 11:18, EM - SC wrote:
> Hi,
>
> I'm new to ceph and to the mailing list, so hello all!
>
> I'm testing ceph and the plan is to migrate our current 18TB storage
> (zfs/nfs) to ceph. This will be using CephFS, mounted in our backend
> application.
> We are also planning on using virtualisation (opennebula) with rbd for
> images and, if it makes sense, using rbd for our oracle server.
>
> My question is about pools.
> From what I read, I should create different pools for different disk speeds
> (SAS, SSD, etc.).
> - What else should I consider when creating pools?
> - Should I create different pools for rbd, cephfs, etc.?
>
> thanks in advance,
> em
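On the journals: a minimal sketch of the kind of test the linked blog post describes, i.e. single-threaded 4k synchronous writes straight to the SSD. This assumes fio is installed; /dev/sdX is a placeholder for a scratch SSD, and the test overwrites data on the device, so do not point it at a disk you care about.

    # WARNING: writes directly to the raw device, use a spare SSD only.
    # One job, queue depth 1, 4k sync writes: roughly what a Ceph journal does.
    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=ceph-journal-test

An SSD that cannot sustain decent IOPS under these conditions will make a poor journal device, however good its datasheet numbers look.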
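On pg_num: a minimal sketch; the pool name "data" and the PG counts are placeholders, pick values that fit your OSD count and replication size.

    # Start with a modest number of placement groups; 128 is just an example.
    ceph osd pool create data 128 128

    # pg_num can be raised later (pgp_num should follow), but it cannot be
    # lowered again, so grow in steps instead of starting too big.
    ceph osd pool set data pg_num 256
    ceph osd pool set data pgp_num 256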
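On the cache tier: a rough sketch of attaching one, assuming you already have a slow backing pool "data" and an SSD-backed pool "cache" mapped to your SSDs via a CRUSH rule. All names and sizes here are placeholders, and the exact tuning (hit set parameters, flush/evict targets) depends on your workload.

    # Put the SSD pool in front of the backing pool as a writeback cache.
    ceph osd tier add data cache
    ceph osd tier cache-mode cache writeback
    ceph osd tier set-overlay data cache

    # The tiering agent needs a hit set and a size target to decide when to flush/evict.
    ceph osd pool set cache hit_set_type bloom
    ceph osd pool set cache target_max_bytes 1099511627776   # ~1 TiB, adjust to your SSDs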
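On separate pools: CephFS needs its own data and metadata pools, and RBD images go into a different pool again. Pool names and PG counts below are placeholders.

    # CephFS requires a dedicated metadata pool and data pool.
    ceph osd pool create cephfs_metadata 64 64
    ceph osd pool create cephfs_data 128 128
    ceph fs new cephfs cephfs_metadata cephfs_data

    # RBD images (e.g. for the OpenNebula VMs) live in their own pool.
    ceph osd pool create rbd_images 128 128
    rbd create --size 10240 rbd_images/test-image   # size in MB, i.e. a 10 GB test image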