Daniel P. Berrange wrote:
> On Mon, Aug 20, 2007 at 04:31:26PM +0100, Richard W.M. Jones wrote:
> > This patch adds block device and network stats. The main changes over
> > the previous version are:
> >
> >  * Remote support.
> >  * Change Xen network interface names to have the form "vif<domid>.<n>" [1].
> >
> > Discussions about the previous version may be found starting here:
> > https://www.redhat.com/archives/libvir-list/2007-August/thread.html#00064
> >
> > I have left the use of stdint.h / int64_t as-is, since it wasn't clear
> > to me what conclusion people had arrived at.
>
> Personally I'm for using long long, since it's consistent with the other
> existing APIs using 64-bit quantities. They're both standards, so there's
> not much of a reason to favour one over the other.
I favour int64_t, because I want 64-bit integers (signed in this case, because I want the special -1 value). C doesn't specify a maximum width for "long long".
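For illustration, with int64_t the block stats struct would read something along these lines (struct name invented here for the sketch; the four fields are the ones Dan mentions below, with -1 meaning "no data"):

  #include <stdint.h>

  /* Illustrative only - the real struct name and full field list are
   * whatever the patch defines.  -1 means "not available". */
  struct block_stats {
      int64_t rd_req;     /* number of read requests */
      int64_t rd_bytes;   /* number of bytes read */
      int64_t wr_req;     /* number of write requests */
      int64_t wr_bytes;   /* number of bytes written */
  };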
> I notice the Xen impl of the block stats only fills in the rd_req and
> wr_req fields, not the rd_bytes and wr_bytes fields. Are requests always
> fixed at 512 bytes in size? If so, we should just junk those fields and
> only return data in terms of bytes (other units can be calculated as
> needed). As a point of reference, libstatgrab only returns bytes
> read/written for disks.
I assumed that Xen requests were variable in size ... That is based on my reading of this code from blkif.h:
#define BLKIF_MAX_SEGMENTS_PER_REQUEST 11

struct blkif_request_segment {
    grant_ref_t gref;        /* reference to I/O buffer frame */
    /* @first_sect: first sector in frame to transfer (inclusive).   */
    /* @last_sect: last sector in frame to transfer (inclusive).     */
    uint8_t first_sect, last_sect;
};

struct blkif_request {
    uint8_t        operation;     /* BLKIF_OP_???                        */
    uint8_t        nr_segments;   /* number of segments                  */
    blkif_vdev_t   handle;        /* only for read/write requests        */
    uint64_t       id;            /* private guest value, echoed in resp */
    blkif_sector_t sector_number; /* start sector idx on disk (r/w only) */
    struct blkif_request_segment seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
};
typedef struct blkif_request blkif_request_t;

But to be honest I don't know (the source to Xen I have here still uses the sparse tree, making it impossible to find the actual source for the Linux drivers ...).
If it is true that requests have a fixed size, then returning bytes _also_ would be good, but I suspect we should still return requests as well, since (a) it costs virtually nothing to do so, and (b) it might be interesting to find the average size of each request, since lots of small requests are likely to be less efficient than a small number of large requests.
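(e.g. given both counters, a client could compute the mean transfer size per request; trivial sketch, reusing the -1 "unavailable" convention from above:)

  /* Sketch: mean bytes per request, guarding against the -1
   * "unavailable" sentinel and division by zero. */
  static double
  avg_request_size (int64_t bytes, int64_t reqs)
  {
      if (bytes < 0 || reqs <= 0)
          return 0.0;
      return (double) bytes / (double) reqs;
  }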
> The Xen impl as coded only works for disks named xvdN, because the code
> for calculating the device ID assumes the xvdN device numbering scheme:
>
>     device = 202 * 256 + minor;
>
> I know this is all utterly horrific, but we need to apply the same logic
> as used in XenD for sdNNN and hdNNN :-(
>
> For sdNNN based disks it seems to be
>
>     8 * 256 + 16 * (ordinal value of disk letter) + partition number
>
> For hdNNN based disks it seems to be
>
>     (IDE major number corresponding to disk letter) * 256
>         + minor number as calculated from partition number
OK ... I've never seen Xen guests with sdX/hdX devices. Are they common?
> The interface stats look OK to me. The impl which parses /proc/net/dev
> could be shared with the QEMU driver though - only the device name needs
> to be different between them. QEMU's will be vnetXXX - we have the actual
> dev name when we create the TAP device, but don't bother to save it
> anywhere from the looks of things.
>
> > I left the explicit structure size parameter to allow for future
> > extensibility.
>
> Good plan.
>
> Regards,
> Dan.
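For reference, the shared parser would amount to something like this (rough sketch only; the function name is made up, and the field order is per the standard /proc/net/dev format):

  #include <stdio.h>
  #include <string.h>

  /* Pull rx/tx byte and packet counters for one interface out of
   * /proc/net/dev.  Returns 0 on success, -1 if not found.
   * (Fields: rx bytes packets errs drop fifo frame compressed multicast,
   *  then tx bytes packets errs drop fifo colls carrier compressed.) */
  static int
  read_if_stats (const char *ifname,
                 long long *rx_bytes, long long *rx_packets,
                 long long *tx_bytes, long long *tx_packets)
  {
      char line[256];
      FILE *fp = fopen ("/proc/net/dev", "r");
      if (!fp)
          return -1;

      while (fgets (line, sizeof line, fp)) {
          char *colon = strchr (line, ':');
          char *name = line;
          if (!colon)
              continue;             /* skips the two header lines */
          *colon = '\0';
          while (*name == ' ')
              name++;               /* device name is right-justified */
          if (strcmp (name, ifname) != 0)
              continue;
          if (sscanf (colon + 1,
                      "%lld %lld %*s %*s %*s %*s %*s %*s %lld %lld",
                      rx_bytes, rx_packets, tx_bytes, tx_packets) == 4) {
              fclose (fp);
              return 0;
          }
      }
      fclose (fp);
      return -1;
  }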
Rich.

--
Emerging Technologies, Red Hat - http://et.redhat.com/~rjones/
Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street,
Windsor, Berkshire, SL4 1TE, United Kingdom.
Registered in England and Wales under Company Registration No. 03798903