I'm trying to use glusterfs-api, but ran into some questions on usage
(currently targeting Fedora 19's glusterfs-api-devel-3.4.1-1.fc19.x86_64).

Why is there no glfs_readdir()?

POSIX readdir_r() is broken by design: it assumes that the caller is
providing a large enough buffer, but does not take a buffer size as a
parameter, so it is very easy to trigger buffer overflow bugs, since
struct dirent ends in a variable-length d_name member.

Read this for more details about how hard it is to use readdir_r()
properly:
http://womble.decadent.org.uk/readdir_r-advisory.html

Read this for proof that POSIX is considering ditching readdir_r()
altogether and merely imposing sane thread-safety requirements on
readdir() (since at least that way there are no buffer overflow
attacks):
http://austingroupbugs.net/view.php?id=696

The only safe way to use readdir_r() is to know the maximum d_name that
can possibly be returned, but there is no glfs_fpathconf() for
determining that information. Your example usage of glfs_readdir_r()
suggests that 512 bytes is large enough:
https://forge.gluster.org/glusterfs-core/glusterfs/blobs/f44ada6cd9bcc5ab98ca66bedde4fe23dd1c3f05/api/examples/glfsxmp.c
but I don't know whether that is true. You _do_ have the advantage that
since every brick backing a glusterfs volume is using an xfs file
system, you only have to worry about the NAME_MAX of xfs - but I don't
know that value off the top of my head.

Can you please let me know how big I should make my struct dirent
buffer to avoid buffer overflow, and properly document that in
<glusterfs/api/glfs.h>? Furthermore, can you please provide a much
saner glfs_readdir(), so I don't have to worry about the contortions of
using a broken-by-design function?
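For reference, here is a sketch of the sort of contortion I mean. The
buffer size is a guess built from the local <limits.h> NAME_MAX, which
may or may not match what the xfs bricks can actually hand back - that
unknown is exactly my complaint:

#include <dirent.h>
#include <limits.h>
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <glusterfs/api/glfs.h>

/* List one directory via glfs_readdir_r(), guessing at a buffer size. */
static int
list_dir (glfs_t *fs, const char *path)
{
    glfs_fd_t *fd = glfs_opendir (fs, path);
    struct dirent *entry, *result;
    int ret = -1;

    if (!fd)
        return -1;

    /* Guess: room for the fixed members plus a NAME_MAX-byte name and
       its NUL.  NAME_MAX here is the *local* limit, not necessarily
       what the bricks' file system allows.  */
    entry = malloc (offsetof (struct dirent, d_name) + NAME_MAX + 1);
    if (!entry)
        goto out;

    /* glfs_readdir_r() mirrors readdir_r(): returns 0 on success and
       sets *result to NULL once the stream is exhausted.  */
    while (glfs_readdir_r (fd, entry, &result) == 0 && result)
        printf ("%s\n", result->d_name);

    ret = 0;
    free (entry);
out:
    glfs_closedir (fd);
    return ret;
}

A saner interface would own its storage the way glibc's readdir() does,
something like:

    struct dirent *glfs_readdir (glfs_fd_t *fd);

returning NULL at end-of-stream (and on error, with errno set), so the
caller never has to guess at a buffer size in the first place.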
--
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org