Re: how to mount a specific pool in cephfs

On Tuesday, May 22, 2012 at 2:51 PM, Grant Ashman wrote:
> Awesome, that seemed to work!
> However, I feel a bit silly - what I'm after is:
> 
> /mnt/ceph-data - mounted to pool 0 (data)
> /mnt/ceph-backup - mounted to pool 3 (backup)
> 
> but this seemed to change both to mount to pool 3?
> 
> Am I simply doing something wrong at my mount stage?
> 
> The process to mount my specific pool, as I understand it, is:
> 
> 1. mount -t ceph 1.2.3.4:/ /mnt/ceph-backup
> 2. ceph mds add_data_pool 3
> 3. cephfs /mnt/ceph-backup/ -p 3 
> 
> Which should give me /ceph-backup mounted to the backup pool.
> 
> Should I then simply do a
> mount -t ceph 1.2.3.4:/ /mnt/ceph-data/
> to get /mnt/ceph-data mounted to pool 0 (the data pool)?

Right now, you can't actually get fully-distinct filesystem trees in a single Ceph cluster. That will probably come at some point in the future, and will require different MDS daemons and separate data & metadata pools for each filesystem.

What you *can* do right now is:
1) set the pool that a subtree writes all its new data to,
2) mount subtrees of the filesystem, and
3) give specific clients read/write access to only a subset of the pools used by the filesystem hierarchy (see the sketch after this list).
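
For item 3, here's a minimal sketch of restricting each client to a single pool. The client names (client.live, client.backup) are made up for illustration, and the cap grammar shown is the pool=NAME form; older releases used a slightly different syntax, so check what your version of ceph auth get-or-create accepts:

# live clients: read/write only on the "data" pool (pool 0)
ceph auth get-or-create client.live mon 'allow r' mds 'allow' osd 'allow rw pool=data'
# backup clients: read/write only on the "backup" pool (pool 3)
ceph auth get-or-create client.backup mon 'allow r' mds 'allow' osd 'allow rw pool=backup'

Each keyring then only grants OSD read/write on its own pool, so a mount done with that key can't touch the other tree's file data.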

This means that you could have your root directory contain a "live" and a "backup" directory:
mount -t ceph 1.2.3.4:/ /mnt/ceph
mkdir /mnt/ceph/live; mkdir /mnt/ceph/backup

Leave the "live" directory using pool 0 while setting the "backup" directory to use pool 3:
cephfs /mnt/ceph/backup set_layout -p 3; umount /mnt/ceph
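
To double-check the result, the same cephfs tool should be able to print the directory's layout back to you (assuming your build has the show_layout subcommand):

cephfs /mnt/ceph/backup show_layout

The data pool it reports for the backup directory should be 3.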

And then run one of
mount -t ceph 1.2.3.4:/live /mnt/ceph
or
mount -t ceph 1.2.3.4:/backup /mnt/ceph-backup

on your live and backup servers, respectively. (Or mount both and copy files to backup, or whatever.) Your live servers won't see the backup space, the backup servers won't see the live space, and the two trees will live in different pools that you can place on different OSDs. (The metadata can't be segregated right now, though; you'll want a different strategy for that.)
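
If you want those subtree mounts to come back after a reboot, fstab entries along these lines should work for the kernel client (the monitor address, port, and secretfile path here are just placeholders):

1.2.3.4:6789:/live    /mnt/ceph         ceph  name=admin,secretfile=/etc/ceph/admin.secret  0 0
1.2.3.4:6789:/backup  /mnt/ceph-backup  ceph  name=admin,secretfile=/etc/ceph/admin.secret  0 0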

Make sense?
-Greg
