On Wed, Nov 21, 2012 at 11:23 PM, Drunkard Zhang <gongfan193@xxxxxxxxx> wrote:
>
> 2012/11/22 Gregory Farnum <greg@xxxxxxxxxxx>:
> > On Tue, Nov 20, 2012 at 8:28 PM, Drunkard Zhang <gongfan193@xxxxxxxxx> wrote:
> >> 2012/11/21 Gregory Farnum <greg@xxxxxxxxxxx>:
> >>> No, absolutely not. There is no relationship between different RADOS
> >>> pools. If you've been using the cephfs tool to place some filesystem
> >>> data in different pools then your configuration is a little more
> >>> complicated (have you done that?), but deleting one pool is never
> >>> going to remove data from the others.
> >>> -Greg
> >>>
> >> I think this may be a bug. Here's what I did:
> >> I created a directory 'audit' in a running Ceph filesystem and put
> >> some data into it (about 100GB) before running these commands:
> >> ceph osd pool create audit
> >> ceph mds add_data_pool 4
> >> cephfs /mnt/temp/audit/ set_layout -p 4
> >>
> >> log3 ~ # ceph osd dump | grep audit
> >> pool 4 'audit' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num
> >> 8 pgp_num 8 last_change 1558 owner 0
> >>
> >> At this point all the data in audit was still usable. After 'ceph osd
> >> pool delete data' the disk space was reclaimed (I forgot to test
> >> whether the data was still usable); only 200MB was in use, according
> >> to 'ceph -s'. So here's what I'm thinking: data stored before the
> >> pool was created doesn't follow the new pool, it stays in the default
> >> pool 'data'. Is this a bug, or intended behavior?
> >
> > Oh, I see. Data is not moved when you set directory layouts; the
> > layout only affects files created after that point. This is intended
> > behavior: Ceph would need to copy the data around anyway in order to
> > make it follow the pool, and there's no sense in hiding that from the
> > user, especially given the complexity involved in doing so safely and
> > the many use cases where you want files in different pools.
> > -Greg
>
> Got it, but how can I know which pool a file lives in? Is there a
> command for that?

You can get this information with the cephfs program if you're using
the kernel client. There's not yet a way to get it out of ceph-fuse,
although we will be implementing it as virtual xattrs in the
not-too-distant future.

> About the relationship between data and pools: I thought objects were
> hooked to a pool, and when the pool changed they would just be
> unhooked from one and hooked to another. It seems I was wrong.

Indeed, that's incorrect. Pools are a logical namespace; when you
delete a pool you are also deleting everything in it. Doing otherwise
is totally infeasible in Ceph, since pools also represent placement
policies.
-Greg
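
For illustration (a sketch only; the file name below is hypothetical,
and output details vary by Ceph version), the kernel-client query Greg
describes uses the cephfs tool's show_layout subcommand, which reports
a file's layout, including its data pool ID:

cephfs /mnt/temp/audit/somefile show_layout

The reported pool ID can then be matched to a pool name with 'ceph osd
dump', as in the 'grep audit' example earlier in the thread.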