Re: howto: multiple ceph filesystems

Hey David, thanks for your answer. You're probably right, my friend.

This idea of multiple FS came up after we realized that, in general, we have a large amount of workload on /mnt/metadata and a considerably lower amount on /mnt/data (just an example to clarify our case).

Even though the data dir depends on the metadata dir, we thought about splitting them apart in order to provide some sort of high availability, because part of our systems can keep running even if the data dir goes down.

The main concern, then, is being able to handle each workload separately, in its own way. We can probably reach a better approach that doesn't have such a high overhead, and I'll definitely read more about your suggestions; maybe we can simply use the placement rules xD
On Thu, 10 May 2018 at 20:54 David Turner <drakonstein@xxxxxxxxx> wrote:
Another option is to use a placement rule. You could create a general pool that most data goes to and a special pool for specific folders on the filesystem. In particular, I'm thinking of replica vs EC vs flash pools for specific folders in the filesystem.
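
Something along these lines should work; the pool, filesystem, and directory names here are only examples, and the extra pool has to be added to the filesystem before a directory layout can point at it:

    # create the special pool and attach it to the (example) filesystem "cephfs"
    ceph osd pool create cephfs_fast_data 64
    ceph fs add_data_pool cephfs cephfs_fast_data

    # new files created under this directory will be stored in the fast pool
    setfattr -n ceph.dir.layout.pool -v cephfs_fast_data /mnt/cephfs/indexes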

If the pools and OSDs aren't the main reason for wanting multiple filesystems, and the MDS servers are, then you could have multiple active MDS servers and pin the metadata for the indexes to one of them while the rest is served by the other active MDS servers.
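
Roughly like this (filesystem name, rank, and path are just placeholders):

    # allow two active MDS daemons for the filesystem
    ceph fs set cephfs max_mds 2

    # pin the index tree to MDS rank 0; everything else is balanced
    # across the remaining active ranks
    setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/indexes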

I really haven't come across a need for multiple filesystems in Ceph, given the granularity you can achieve with MDS pinning, folder placement rules, and cephx authentication to limit a user to a specific subfolder.
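
For the cephx part, something like this restricts a client to a single subtree (client name and path are only examples):

    # create a key that can only read/write the /indexes subtree of the filesystem
    ceph fs authorize cephfs client.dovecot_index /indexes rw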


On Thu, May 10, 2018, 5:10 PM João Paulo Sacchetto Ribeiro Bastos <joaopaulosr95@xxxxxxxxx> wrote:
Hey John, thanks for your answer. The hardware will certainly be robust enough. My true concern was actually the coexistence of the two filesystems in the same cluster. In fact, I've realized that we may not go this way after all, because it may represent a high overhead, besides still being an experimental feature.
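
For reference, creating the second filesystem would look roughly like this (pool names and PG counts are just placeholders), since running more than one filesystem still has to be enabled explicitly:

    # multiple filesystems are gated behind an explicit flag
    ceph fs flag set enable_multiple true --yes-i-really-mean-it

    # each extra filesystem needs its own metadata and data pools
    ceph osd pool create mail_metadata 64
    ceph osd pool create mail_data 256
    ceph fs new mail_FS mail_metadata mail_data
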
On Thu, 10 May 2018 at 15:48 John Spray <jspray@xxxxxxxxxx> wrote:
On Thu, May 10, 2018 at 7:38 PM, João Paulo Sacchetto Ribeiro Bastos
<joaopaulosr95@xxxxxxxxx> wrote:
> Hello guys,
>
> My company is about to rebuild its whole infrastructure, so I was
> called in to help with the planning. We are essentially a corporate
> mail provider, handling lots of clients daily with dovecot and
> roundcube, so we want to design a better layout for our cluster.
> Today, using Jewel, we have a single CephFS for both dovecot's index
> and mail data, but we want to split it into an index_FS and a mail_FS
> to handle the workload a little better. Is that worthwhile nowadays?
> From my research I realized that we would need individual data and
> metadata pools for each FS, as well as a group of MDS daemons for
> each of them.
>
> The one thing that really scares me about all of this is that we are
> planning to have four machines at full disposal to handle our MDS
> instances. We started to wonder whether an idea like the one below is
> valid; can anybody give a hint on this? We basically want to run two
> MDS instances on each machine (one for each FS) and wonder whether
> we'll be able to have them switching between active and standby
> simultaneously without any trouble.
>
> index_FS: (active={machines 1 and 3}, standby={machines 2 and 4})
> mail_FS: (active={machines 2 and 4}, standby={machines 1 and 3})

Nothing wrong with that setup, but remember that those servers are
going to have to be well-resourced enough to run all four at once
(when a failure occurs), so it might not matter very much exactly
which servers are running which daemons.

Within a filesystem's set of MDS daemons (i.e. daemons with the same
standby_for_fscid setting), Ceph will activate whichever daemon comes
up first, so if it's important to you to have particular daemons
active then you will need to take care of that at the point you're
starting them up.
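
A rough sketch of what that could look like in ceph.conf (section names and fscid values are only illustrative; the real fscids are shown by "ceph fs dump"):

    [mds.machine1-index]
    # stand by only for index_FS (fscid 1 in this example)
    mds standby for fscid = 1

    [mds.machine2-mail]
    # stand by only for mail_FS (fscid 2 in this example)
    mds standby for fscid = 2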

John

>
> Regards,
> --
>
> João Paulo Sacchetto Ribeiro Bastos
> +55 31 99279-7092
>
--

João Paulo Sacchetto Ribeiro Bastos
+55 31 99279-7092

--

João Paulo Sacchetto Ribeiro Bastos
+55 31 99279-7092

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
