Re: [PATCH v3] ceph: defer flushing the capsnap if the Fb is used

On Wed, 2021-01-20 at 08:56 +0800, Xiubo Li wrote:
> On 2021/1/18 19:08, Jeff Layton wrote:
> > On Mon, 2021-01-18 at 17:10 +0800, Xiubo Li wrote:
> > > On 2021/1/13 5:48, Jeff Layton wrote:
> > > > On Sun, 2021-01-10 at 10:01 +0800, xiubli@xxxxxxxxxx wrote:
> > > > > From: Xiubo Li <xiubli@xxxxxxxxxx>
> > > > > 
> > > > > If the Fb cap is used it means the current inode is flushing the
> > > > > dirty data to OSD, just defer flushing the capsnap.
> > > > > 
> > > > > URL: https://tracker.ceph.com/issues/48679
> > > > > URL: https://tracker.ceph.com/issues/48640
> > > > > Signed-off-by: Xiubo Li <xiubli@xxxxxxxxxx>
> > > > > ---
> > > > > 
> > > > > V3:
> > > > > - Add more comments about putting the inode ref
> > > > > - A small change about the code style
> > > > > 
> > > > > V2:
> > > > > - Fix inode reference leak bug
> > > > > 
> > > > >   fs/ceph/caps.c | 32 +++++++++++++++++++-------------
> > > > >   fs/ceph/snap.c |  6 +++---
> > > > >   2 files changed, 22 insertions(+), 16 deletions(-)
> > > > > 
> > > > Hi Xiubo,
> > > > 
> > > > This patch seems to cause hangs in some xfstests (generic/013, in
> > > > particular). I'll take a closer look when I have a chance, but I'm
> > > > dropping this for now.
> > > Okay.
> > > 
> > > BTW, what commands do you use to reproduce it? I will take a look
> > > when I am free in the coming days.
> > > 
> > > BRs
> > > 
> > I set up xfstests to run on cephfs, and then just run:
> > 
> >      $ sudo ./check generic/013
> > 
> > It wouldn't reliably complete with this patch in place. Setting up
> > xfstests is the "hard part". I'll plan to roll up a wiki page on how to
> > do that soon (that's good info to have out there anyway).
> 
> Okay, sure.
> 

I'm not sure where this should be documented, but here's the
local.config that I'm using now (with comments). I'm happy to merge it
somewhere for posterity.

-- 
Jeff Layton <jlayton@xxxxxxxxxx>
#
# For running xfstests on kcephfs
#
# In this example, we've created two different named filesystems: "test"
# and "scratch". They must be pre-created on the ceph cluster before the
# test is run.
#
# Standard mountpoint locations are fine
#
export TEST_DIR=/mnt/test
export SCRATCH_MNT=/mnt/scratch

#
# "check" can't automatically detect ceph device strings, so we must
# explicitly declare that we want to use "-t ceph".
#
export FSTYP=ceph

#
# The check script gets very confused when two different mounts use
# the same device string. Eventually we may fix this in ceph so we can
# get monaddrs from the config, but for now, we must declare the location
# of the mons explicitly. Note that we're using two different monaddrs
# here, though these are using the same cluster. The monaddrs must also
# match the type of ms_mode option that is in effect (i.e.
# ms_mode=legacy requires v1 monaddrs).
#
export TEST_DEV=10.10.10.1:3300:/
export SCRATCH_DEV=10.10.10.2:3300:/

#
# TEST_FS_MOUNT_OPTS is for /mnt/test, and MOUNT_OPTIONS is for /mnt/scratch
#
# Here, we're using the admin account for both mounts. The credentials
# should be in a standard keyring location somewhere. See:
#
# https://docs.ceph.com/en/latest/rados/operations/user-management/#keyring-management
#
COMMON_OPTIONS="name=admin,ms_mode=crc"

#
# asynchronous directory ops
#
COMMON_OPTIONS+=",nowsync"

#
# swizzle in the COMMON_OPTIONS
#
TEST_FS_MOUNT_OPTS="-o ${COMMON_OPTIONS},mds_namespace=test"
MOUNT_OPTIONS="-o ${COMMON_OPTIONS},mds_namespace=scratch"

#
# fscache -- this needs a different option for each
#
# TEST_FS_MOUNT_OPTS+=",fsc=test"
# MOUNT_OPTIONS+=",fsc=scratch"

export TEST_FS_MOUNT_OPTS
export MOUNT_OPTIONS
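For reference, here's a small sketch of what the option strings above expand to once the config is sourced (the account name, ms_mode, and filesystem names are just the example values used in this config):

```shell
#!/bin/bash
# Reproduce the string assembly from the local.config above.
COMMON_OPTIONS="name=admin,ms_mode=crc"
COMMON_OPTIONS+=",nowsync"

TEST_FS_MOUNT_OPTS="-o ${COMMON_OPTIONS},mds_namespace=test"
MOUNT_OPTIONS="-o ${COMMON_OPTIONS},mds_namespace=scratch"

# Both mounts share the common options; only mds_namespace differs.
echo "$TEST_FS_MOUNT_OPTS"
echo "$MOUNT_OPTIONS"
```

With this file dropped in as local.config in the xfstests source directory, a single test is then run as shown earlier in the thread, e.g. `sudo ./check generic/013`.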
