Re: v0.38 released

On Thu, Nov 10, 21:14, Sage Weil wrote:
>  * osd: some peering refactoring
>  * osd: 'replay' period is per-pool (now only affects fs data pool)
>  * osd: clean up old osdmaps
>  * osd: allow admin to revert lost objects to prior versions (or delete)
>  * mkcephfs: generate reasonable crush map based on 'host' and 'rack' 
>    fields in [osd.NN] sections of ceph.conf
>  * radosgw: bucket index improvements
>  * radosgw: improved swift support
>  * rbd: misc command line tool fixes
>  * debian: misc packaging fixes (including dependency breakage on upgrades)
>  * ceph: query daemon perfcounters via command line tool
> 
> The big upcoming items for v0.39 are RBD layering (image cloning), further 
> improvements to radosgw's Swift support, and some monitor failure recovery 
> and bootstrapping improvements.  We're also continuing work on the 
> automation bits that the Chef cookbooks and Juju charms will use, and a 
> Crowbar barclamp was also just posted on github.  Several patches are 
> still working their way into libvirt and qemu to improve support for RBD 
> authentication.

Any plans to address the ENOSPC issue? I gave v0.38 a try, and when it
fills up the file system behaves like the older versions (<= 0.36) I've
tried before: the ceph mounts hang on all clients.

But there is progress: sync is now interruptible (it used to block
in D state, so it could not be killed even with SIGKILL), and
umount works even if the file system is full. However, subsequent
mount attempts then fail with "mount error 5 = Input/output error".

Our test setup consists of one mds, one monitor, and 8 osds. The mds
and monitor are on the same node, and this node is not an osd. All
nodes are running Linux 3.0.9 at the moment, but I would be willing
to upgrade to 3.1.1 if that is expected to make a difference.
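In case it helps reproduce this, a minimal ceph.conf sketch matching
that topology would look roughly like the following. The hostnames are
hypothetical; only the mon address is taken from the "ceph -w" output
below, and the exact option names may differ in your version.

```ini
[global]
        ; hypothetical: auth disabled for a throwaway test cluster
        auth supported = none

[mon.0]
        host = node0                      ; hypothetical hostname
        mon addr = 192.168.3.34:6789

[mds.0]
        host = node0                      ; mds shares the monitor node

[osd.0]
        host = node1                      ; hypothetical hostname
        ; repeat for osd.1 through osd.7 on the remaining nodes
```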

Here's some output of "ceph -w". Oddly enough, it reports 770 GB of
free disk space although the writing process terminated with ENOSPC.

2011-11-15 12:12:45.388535    pg v38805: 65940 pgs: 1956 creating, 63984 active+clean; 1856 GB data, 3730 GB used, 770 GB / 4600 GB avail
2011-11-15 12:12:45.589228   mds e4: 1/1/1 up {0=0=up:active}
2011-11-15 12:12:45.589326   osd e11: 8 osds: 8 up, 8 in full
2011-11-15 12:12:45.589908   log 2011-11-15 12:12:19.599894 osd.326 192.168.3.26:6800/1673 168 : [INF] 0.593 scrub ok
2011-11-15 12:12:45.590000   mon e1: 1 mons at {0=192.168.3.34:6789/0}
2011-11-15 12:12:49.554163    pg v38806: 65940 pgs: 1956 creating, 63984 active+clean; 1856 GB data, 3730 GB used, 770 GB / 4600 GB avail
2011-11-15 12:12:54.526661    pg v38807: 65940 pgs: 1956 creating, 63984 active+clean; 1856 GB data, 3730 GB used, 770 GB / 4600 GB avail
2011-11-15 12:12:56.309292    pg v38808: 65940 pgs: 1956 creating, 63984 active+clean; 1856 GB data, 3730 GB used, 770 GB / 4600 GB avail
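For what it's worth, free aggregate space and a "full" cluster are not
necessarily contradictory: as I understand it, the cluster is flagged
full as soon as any single osd crosses its full ratio (0.95 by default,
if I recall correctly), so with uneven data distribution ENOSPC can hit
while the pool as a whole still shows free space. A quick sanity check
on the figures quoted above:

```shell
#!/bin/sh
# Aggregate usage from the "ceph -w" line above: 3730 GB used of
# 4600 GB total. This is well under a 0.95 full ratio, so if the
# per-osd full flag is what tripped, at least one osd must be
# loaded far above the average.
used=3730
total=4600
echo "aggregate usage: $((used * 100 / total))%"
```

This prints "aggregate usage: 81%", which would be consistent with one
or more osds being much fuller than the rest.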

Thanks
Andre
-- 
The only person who always got his work done by Friday was Robinson Crusoe


