Re: [4/9] pohmelfs: directory operations.

Hi Evgeniy,
  I've played a bit with pohmelfs and it looks very nice. In particular,
I was very happy with how easy the configuration is on both the server
and the client.

I understand that you manage the metadata operations asynchronously.
How do you handle multiple clients in pohmelfs? Any specific locking?

I did have some problems: dbench failed when I tried to run it.
Also, I understand that you haven't implemented the whole VFS interface
yet (chmod, chown, etc.). I was looking at the following code (more at
the comments, really), and it explained some strange behaviour I had seen:

> +/*
> + * Create new inode for given parameters (name, inode info, parent).
> + * This does not create object on the server, it will be synced there
> + * during writeback.
> + */
> +struct pohmelfs_inode *pohmelfs_new_inode(struct pohmelfs_sb *psb,
> +               struct pohmelfs_inode *parent, struct qstr *str,
> +               struct netfs_inode_info *info, int link)

...
> +/*
> + * Create new object in local cache. Object will be synced to server
> + * during writeback for given inode.
> + */
> +struct pohmelfs_inode *pohmelfs_create_entry_local(struct pohmelfs_sb *psb,
> +       struct pohmelfs_inode *parent, struct qstr *str, u64 start, int mode)
> +{
...
> +/*
> + * Create local object and bind it to dentry.
> + */
> +static int pohmelfs_create_entry(struct inode *dir,
> +               struct dentry *dentry, u64 start, int mode)
> +{
> +       struct pohmelfs_sb *psb = POHMELFS_SB(dir->i_sb);
> +       struct pohmelfs_inode *npi, *parent;
> +       struct qstr str = dentry->d_name;
> +       int err;
> +
> +       parent = POHMELFS_I(dir);
> +
> +       err = pohmelfs_data_lock(parent, 0, ~0, POHMELFS_WRITE_LOCK);
> +       if (err)
> +               return err;
> +
> +       str.hash = jhash(dentry->d_name.name, dentry->d_name.len, 0);
> +
> +       npi = pohmelfs_create_entry_local(psb, parent, &str, start, mode);
> +       if (IS_ERR(npi))
> +               return PTR_ERR(npi);
> +
> +       d_instantiate(dentry, &npi->vfs_inode);
> +
> +       dprintk("%s: parent: %llu, inode: %llu, name: '%s', parent_nlink: %d, nlink: %d.\n",
> +                       __func__, parent->ino, npi->ino, dentry->d_name.name,
> +                       (signed)dir->i_nlink, (signed)npi->vfs_inode.i_nlink);
> +
> +       return 0;
> +}
> +
> +/*
> + * VFS create and mkdir callbacks.
> + */
> +static int pohmelfs_create(struct inode *dir, struct dentry *dentry, int mode,
> +               struct nameidata *nd)
> +{
> +       return pohmelfs_create_entry(dir, dentry, 0, mode);
> +}
> +
> +static int pohmelfs_mkdir(struct inode *dir, struct dentry *dentry, int mode)
> +{
> +       int err;
> +
> +       inode_inc_link_count(dir);
> +       err = pohmelfs_create_entry(dir, dentry, 0, mode | S_IFDIR);
> +       if (err)
> +               inode_dec_link_count(dir);
> +
> +       return err;
> +}

I understand that you handle metadata operations locally: for example,
when doing mkdir or creating a file, the file/directory is created
locally, the inode is marked dirty, and when writeback occurs everything
is synchronized to the server. Isn't there a problem when more than one
client operates on the same directory with the same filename? Moreover,
it is possible that client A creates a file and client B creates a
directory with the same name. Am I missing something, or is that the
intended design?

Thanks,
Yehuda
