Re: Very unbalanced storage

On Tue, 4 Sep 2012, Andrew Thompson wrote:
> On 9/4/2012 11:59 AM, Tommi Virtanen wrote:
> > On Fri, Aug 31, 2012 at 11:58 PM, Andrew Thompson <andrewkt@xxxxxxxxxxx>
> > wrote:
> > > Looking at old archives, I found this thread which shows that to mount a
> > > pool as cephfs, it needs to be added to mds:
> > > 
> > > http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/5685
> > > 
> > > I started a `rados cppool data tempstore` a couple hours ago. When it
> > > finishes, will I need to remove the current pool from mds somehow(other
> > > than
> > > just deleting the pool)?
> > > 
> > > Is `ceph mds add_data_pool <poolname>` still required? (It's not listed in
> > > `ceph --help`.)
> > If the pool you are trying to grow pg_num for really is a CephFS data
> > pool, I fear a "rados cppool" is nowhere near enough to perform a
> > migration. My understanding is that each of the inodes stored in
> > cephfs/on the ceph-mds'es knows which pool the file data resides in;
> > shoveling the objects into another pool with "rados cppool" doesn't
> > change those pointers, so removing the old pool will just break the
> > filesystem.
> > 
> > Before we go too far down this road: is your problem pool *really*
> > being used as a cephfs data pool? Based on how it's not named "data"
> > and you're just now asking about "ceph mds add_data_pool", it seems
> > that's not likely.
> 
> Well, I guess it's time to wipe this cluster and start over.
> 
> Yes, it was my `data` pool I was trying to grow. After renaming and removing
> the original data pool, I can `ls` my folders/files, but not access them.

Yeah.  Sorry I didn't catch this earlier, but TV is right: the ceph fs 
inodes refer to the data pool by id #, not by name, so the cppool 
trick won't work in the fs case.
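
A quick way to see the mismatch is to look at the numeric pool ids in 
`ceph osd dump` (output trimmed here and the ids made up for illustration; 
the exact line format varies by release):

  # pools are listed with their numeric ids; the copy made by
  # `rados cppool` gets a brand new id, while the fs inodes keep
  # referencing the old one
  $ ceph osd dump | grep '^pool'
  pool 0 'data' ...          <- the id the fs inodes point at
  pool 3 'tempstore' ...     <- the copied objects, under a new id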

> I attempted a tar backup beforehand, so unless it flaked out, I should be able
> to recover data.
> 
> I was concerned the small number of PGs created by default by mkcephfs would
> be an issue, so I was trying to up it a bit. I'm not going to have 100+ OSDs
> or petabytes of data. I just want a relatively safe place to store my files
> that I can easily extend as needed.

mkcephfs picks the pg_num by taking the initial osd count and shifting it 
left by 'osd pg bits' bits.  Raising that option (the default is 6) in 
ceph.conf should give you larger initial pools.
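
As a rough sketch (assuming 'osd pg bits' is set in the [global] section; 
the arithmetic is just initial_osds << osd_pg_bits):

  # ceph.conf, set before running mkcephfs
  [global]
          osd pg bits = 8    # default is 6

  # with 4 initial OSDs:
  #   default:     4 << 6 = 256 PGs per initial pool
  #   with 8 bits: 4 << 8 = 1024 PGs per initial pool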

> So far, I'm 0 and 5... I keep blowing up the filesystem, one way or another.

Sorry to hear that!  The pg splitting (i.e., online pg_num adjustment) is 
still the next major osd project on the roadmap, but we've been a bit 
sidetracked with performance these past few weeks.
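
Once that lands, the hope is that growing an existing pool will just be a 
matter of something along these lines (not supported yet at the time of 
writing, so treat this as a sketch of the eventual interface rather than a 
command that works today):

  # raise the placement group count on an existing pool
  ceph osd pool set data pg_num 256
  # and let data placement follow the new pg count
  ceph osd pool set data pgp_num 256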

sage


> 
> -- 
> Andrew Thompson
> http://aktzero.com/

