Hi Sage, Adam, Matt and David,

we have resolved a couple of compilation issues in the 'wip-libcephfs' branch and have created a corresponding pull request, see https://github.com/ceph/ceph/pull/424

Regards,
Ilya

P.S. I'm going on vacation for the next week, so please stay in touch with Andrey Kuznetsov (Andrey_Kuznetsov@xxxxxxxx) during this period.

________________________________________
From: Sage Weil [sage@xxxxxxxxxxx]
Sent: 11 July 2013 23:01
To: Ilya Storozhilov
Cc: Adam C. Emerson; Matt W. Benjamin; David Zafman; ceph-devel@xxxxxxxxxxxxxxx
Subject: Re: wip-libcephfs rebased and pulled up to v.65

Please check out the current wip-libcephfs branch and let me know how it looks/works for you guys. I cleaned up your patches a bit and fixed the root cause of the xattr issue you were seeing.

Thanks!
sage

On Thu, 11 Jul 2013, Ilya Storozhilov wrote:
> Hi Adam (CC: Sage, Matt and David),
>
> it seems to me we have chosen a bad commit description: instead of "FIX: readlink not copy link path to user's buffer" there should be something like "FIX: ceph_ll_readlink() now fills the user-provided buffer with the link data instead of returning a pointer to a libcephfs-internal memory location". Take a look at the source code - it actually does what you are asking for (see https://github.com/ferustigris/ceph/blob/wip-libcephfs-rebased-v65/src/libcephfs.cc):
>
> -----------------------------------------------------------------------
> extern "C" int ceph_ll_readlink(struct ceph_mount_info *cmount, Inode *in, char *buf, size_t bufsiz, int uid, int gid)
> {
>   const char *value = NULL;
>   int res = (cmount->get_client()->ll_readlink(in, &value, uid, gid));
>   if (res < 0)
>     return res;
>   if (bufsiz < (size_t)res)
>     return ENAMETOOLONG;
>   memcpy(buf, value, res); // <-- Here we copy the link data into the user-provided buffer. This is what you want us to do.
>   return res;
> }
> -----------------------------------------------------------------------
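>
> To make the intended usage concrete, a caller of this copying variant could look like the following minimal sketch (it assumes the signature above; read_link_target() is an illustrative helper only, and the mount and inode are assumed to have been set up elsewhere, e.g. via ceph_ll_lookup()):
>
> -----------------------------------------------------------------------
> #include <stddef.h>
> #include <cephfs/libcephfs.h>
>
> /* Illustrative helper: read the target of the symlink 'in' into a
>  * caller-owned buffer, which stays valid no matter what libcephfs
>  * later does with its internal memory. 'outlen' must be >= 1. */
> static int read_link_target(struct ceph_mount_info *cmount, struct Inode *in,
>                             int uid, int gid, char *out, size_t outlen)
> {
>   int len = ceph_ll_readlink(cmount, in, out, outlen - 1, uid, gid);
>   if (len < 0)
>     return len;      /* error propagated from libcephfs */
>   out[len] = '\0';   /* readlink-style calls do not NUL-terminate */
>   return 0;
> }
> -----------------------------------------------------------------------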
>
> In your branch (see https://github.com/linuxbox2/linuxbox-ceph/blob/wip-libcephfs-rebased-v65/src/libcephfs.cc) this function does not copy the link data into a user-provided buffer but instead passes back a pointer to an internal libcephfs structure, which is not a good solution, as you mentioned below:
>
> -----------------------------------------------------------------------
> extern "C" int ceph_ll_readlink(struct ceph_mount_info *cmount, Inode *in, char **value, int uid, int gid)
> {
>   return (cmount->get_client()->ll_readlink(in, (const char**) value, uid, gid));
> }
> -----------------------------------------------------------------------
>
> Regards,
> Ilya
>
> ________________________________________
> From: Adam C. Emerson [aemerson@xxxxxxxxxxxx]
> Sent: 10 July 2013 20:41
> To: Ilya Storozhilov
> Cc: Sage Weil; Matt W. Benjamin; David Zafman
> Subject: Re: wip-libcephfs rebased and pulled up to v.65
>
> At Wed, 10 Jul 2013 12:17:24 +0000, Ilya Storozhilov wrote:
> [snip]
> > The 'wip-libcephfs-rebased-v65' branch of the https://github.com/linuxbox2/linuxbox-ceph repository was not branched from the 'wip-libcephfs' branch of https://github.com/ceph/ceph, as was done with our 'open_by_handle_api' branch of the https://github.com/ferustigris/ceph repo. That is why we were not able to cherry-pick our changes automatically with the respective git command. So we have manually applied our changes to the 'wip-libcephfs-rebased-v65' branch in the https://github.com/ferustigris/ceph repo as one commit - you can check it out here: https://github.com/ferustigris/ceph/commit/c3f4940b2cfcfd3ea9a004e6f07f1aa3c0b6c419
> [snip]
>
> Good afternoon, sir.
>
> I was looking at your patch, and Matt and I have concerns about the change you made to readlink. By passing back a pointer to a buffer rather than copying the link into a supplied buffer, we're opening ourselves up to the content changing, being deallocated, or otherwise having something bad happen to it.
>
> Thanks.
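>
> P.S. To make the concern concrete: with the pointer-returning variant, a caller can end up in a pattern like this (a hypothetical sketch; do_other_work() and use_path() are illustrative placeholders for whatever the caller does next):
>
> -----------------------------------------------------------------------
> static int show_dangling_pattern(struct ceph_mount_info *cmount,
>                                  struct Inode *in, int uid, int gid)
> {
>   char *target = NULL;
>   int res = ceph_ll_readlink(cmount, in, &target, uid, gid);
>   if (res < 0)
>     return res;
>   /* 'target' points at memory owned by libcephfs. If the client
>    * updates or releases the symlink's metadata in the meantime, that
>    * memory can change or be freed behind the caller's back... */
>   do_other_work();
>   use_path(target); /* ...so this may read modified or freed memory */
>   return 0;
> }
> -----------------------------------------------------------------------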