On 08/12/2014 03:21 PM, Christoph Hellwig wrote:
> [Can you please trim your quotes? Quoting 700+ lines of a patch for
> a 30 line reply is completely unreasonable, and placing the reply in
> the middle of it is even worse. I will ignore mails ignoring the
> netiquette this blatantly in the future]
>

Yes, sorry, you are absolutely right. My bad; I usually do trim quotes,
a moment of spaciness.

> On Tue, Aug 12, 2014 at 02:36:55PM +0300, Boaz Harrosh wrote:
>>> +	/*
>>> +	 * Use the session max response size as the basis for setting
>>> +	 * GETDEVICEINFO's maxcount
>>> +	 */
>>> +	max_resp_sz = server->nfs_client->cl_session->fc_attrs.max_resp_sz;
>>> +	max_pages = nfs_page_array_len(0, max_resp_sz);
>>> +	dprintk("%s: server %p max_resp_sz %u max_pages %d\n",
>>> +		__func__, server, max_resp_sz, max_pages);
>>> +
>>
>> This is far too big an allocation for obj-lo (which only has a couple
>> of embedded strings here). The whole RPC can fit in a single page.
>>
>> Should we put a flag in struct pnfs_layoutdriver_type, something like:
>>
>> 	if (server->pnfs_curr_ld->flags & PNFS_DEVINFO_SINGLE_PAGE) {
>> 		max_pages = 1;
>> 		max_resp_sz = PAGE_SIZE;
>> 	}
>>
>> This still costs us an extra allocation just to store one page
>> pointer, but for the simplicity of the cleanup we can live with it.
>
> Sounds fine to me, but do you really have that many GETDEVICEINFO calls
> in object layout setups that it's worth the effort?
>

Panasas's biggest installation is something like 1200 OSDs. With exofs I
tested with 300. They come in groups of about 9; each group of 9 devices
is good for 2G of data before you move on to the next set. Each file has
a randomized set of devices, so a git clone would easily load all 300
devices.

> Another slightly cleaner option would be to have a max_deviceinfo_size
> field in the layout driver and cap the size by it.
>

Sure! max_deviceinfo_size would be even better. Let's say that if it is
zero then max_resp_sz is used. (Easier for you.)

Thanks
Boaz
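
For illustration, a minimal sketch of how that cap could look at the
point in the patch where max_resp_sz is computed. The max_deviceinfo_size
field does not exist yet; its name and placement in struct
pnfs_layoutdriver_type are assumed from the suggestion above:

	/*
	 * Sketch only: max_deviceinfo_size is the proposed (not yet
	 * existing) per-layout-driver cap on GETDEVICEINFO's maxcount;
	 * zero means "no cap, use the session maximum" as discussed.
	 */
	max_resp_sz = server->nfs_client->cl_session->fc_attrs.max_resp_sz;
	if (server->pnfs_curr_ld->max_deviceinfo_size &&
	    server->pnfs_curr_ld->max_deviceinfo_size < max_resp_sz)
		max_resp_sz = server->pnfs_curr_ld->max_deviceinfo_size;
	max_pages = nfs_page_array_len(0, max_resp_sz);

That way the objects layout could set something like PAGE_SIZE, while
drivers that leave the field zero keep the current session-based sizing.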