Re: After creating more than one filesystem, how do i mount the filesystem i want?

Hi Isaac,

there can only be a single MDS map at a given time, so only one file system will be exposed by the Ceph cluster.

Every time you run the "ceph mds newfs" command, you simply overwrite the current configuration. If you do so, make sure you stop the MDS first so that a new MDS map is cleanly generated with the new pools.

If you want to split data across the file system, create subdirectories on it and perform an in-depth mount of the different subdirectories. The data will always land on the pools you chose when you issued the "ceph mds newfs" command.

e.g.
	• MDS FS is the root /
	• Create /Dir1
	• Create /Dir2

Use an admin client to:
	• mount -t ceph host:6789:/ /mnt/ceph
	• mkdir /mnt/ceph/Dir1, then make the necessary chown or chgrp changes for specific access through client1
	• mkdir /mnt/ceph/Dir2, then make the necessary chown or chgrp changes for specific access through client2
To perform the in-depth mount:
	• On client1 mount -t ceph host:6789:/Dir1 /mnt/ceph
	• On client2 mount -t ceph host:6789:/Dir2 /mnt/ceph

But all the data will live in the same data pool and all the metadata will live in the same metadata pool.

You can of course tune each sub-directory's permissions to accommodate the level of separation you need between your clients.
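Putting the steps above together, here is a minimal sketch. The monitor address (mon1), mount points, user names, and the cephx secretfile option are placeholders; adjust them to your cluster and auth setup.

```shell
# On an admin node: mount the file system root and create the per-client trees.
mkdir -p /mnt/ceph
mount -t ceph mon1:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret
mkdir /mnt/ceph/Dir1 /mnt/ceph/Dir2
chown user1 /mnt/ceph/Dir1    # restrict Dir1 to client1's user
chown user2 /mnt/ceph/Dir2    # restrict Dir2 to client2's user
umount /mnt/ceph

# On client1: in-depth mount of Dir1 only.
mount -t ceph mon1:6789:/Dir1 /mnt/ceph

# On client2: in-depth mount of Dir2 only.
mount -t ceph mon1:6789:/Dir2 /mnt/ceph
```

Each client then sees only its own subtree as its file system root, even though both subtrees share the same data and metadata pools.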

Rgds
JC

On 9 Jan 2014, at 17:10, Isaac Otsiabah <zmoo76b@xxxxxxxxx> wrote:

> 
> 
> Josh, I created a new Ceph filesystem with the following commands:
> 
> [root@host1]# ceph osd pool create data02 256
> [root@host1]# ceph osd pool create metadata02 256
> 
> [root@host1]# ceph osd dump 2>&1 |egrep '^pool'
> pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 704 pgp_num 704 last_change 1 owner 0 crash_replay_interval 45
> pool 1 'metadata' rep size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 704 pgp_num 704 last_change 1 owner 0
> pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins pg_num 704 pgp_num 704 last_change 1 owner 0
> pool 3 'data02' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 15 owner 0
> pool 4 'metadata02' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 17 owner 0
> 
> 
> [root@host1]# ceph mds newfs 4 3 --yes-i-really-mean-it
> [root@host1]# mount -t ceph 10.10.0.190:6789:/  /mnt/ceph
> 
> 
> After I mounted as shown above and copied files to the mount point, the data
> went to the new "data02" pool of the new filesystem rather than the old default "data" pool.
> My question is: if I have more than one filesystem, how do I mount the filesystem
> I want, so that I read from and write to the correct pool?
> Isaac
> 
> 
> 
> 
> ________________________________
> From: Josh Durgin <josh.durgin@xxxxxxxxxxx>
> To: David Zafman <david.zafman@xxxxxxxxxxx> 
> Cc: "ceph-devel@xxxxxxxxxxxxxxx" <ceph-devel@xxxxxxxxxxxxxxx> 
> Sent: Monday, February 11, 2013 7:13 PM
> Subject: Re: .gitignore issues
> 
> 
> On 02/11/2013 06:28 PM, David Zafman wrote:
>> 
>> After updating to latest master I have the following files listed by git status:
> 
> These are mostly renamed binaries. If you run 'make clean' on the 
> version before the name changes 
> (133295ed001a950e3296f4e88a916ab2405be0cc) they'll be removed.
> If you're sure you have nothing you want to save that's not
> in a commit, you can always 'git clean -fdx'.
> 
> src/ceph.conf and src/keyring are generated by vstart.sh, and
> I forgot to add them to .gitignore again earlier. There was
> also a typo in ceph-filestore-dump - it was not renamed.
> These are fixed now.
> 
> Josh
> 
>> $ git status
>> # On branch master
>> # Untracked files:
>> #   (use "git add <file>..." to include in what will be committed)
>> #
>> #       src/bench_log
>> #       src/ceph-filestore-dump
>> #       src/ceph.conf
>> #       src/dupstore
>> #       src/keyring
>> #       src/kvstorebench
>> #       src/multi_stress_watch
>> #       src/omapbench
>> #       src/psim
>> #       src/radosacl
>> #       src/scratchtool
>> #       src/scratchtoolpp
>> #       src/smalliobench
>> #       src/smalliobenchdumb
>> #       src/smalliobenchfs
>> #       src/smalliobenchrbd
>> #       src/streamtest
>> #       src/testcrypto
>> #       src/testkeys
>> #       src/testrados
>> #       src/testrados_delete_pools_parallel
>> #       src/testrados_list_parallel
>> #       src/testrados_open_pools_parallel
>> #       src/testrados_watch_notify
>> #       src/testsignal_handlers
>> #       src/testtimers
>> #       src/tpbench
>> #       src/xattr_bench
>> nothing added to commit but untracked files present (use "git add" to track)
>> 
>> David Zafman
>> Senior Developer
>> david.zafman@xxxxxxxxxxx
>> 
>> 
>> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

JC
jc.lopez@xxxxxxxxxxx

Cell : +1-(408)-680-6959





