On Tue, Nov 14, 2017 at 06:52:48PM -0500, John Ferlan wrote:
On 11/13/2017 03:50 AM, Martin Kletzander wrote:

With this commit we finally have a way to read and manipulate basic resctrl settings. Locking is done only on exposed functions that read/write from/to resctrlfs, not in functions that are exposed in virresctrlpriv.h, as those are functions
Fixed
only supposed to be used from tests.

Signed-off-by: Martin Kletzander <mkletzan@xxxxxxxxxx>
---
 src/Makefile.am           |    2 +-
 src/libvirt_private.syms  |   12 +
 src/util/virresctrl.c     | 1012 ++++++++++++++++++++++++++++++++++++++++++++-
 src/util/virresctrl.h     |   59 +++
 src/util/virresctrlpriv.h |   32 ++
 5 files changed, 1115 insertions(+), 2 deletions(-)
 create mode 100644 src/util/virresctrlpriv.h

This is a *lot* of code! I wasn't able to run it through Coverity, mainly because I have some stuff in a local branch that conflicts with earlier patches. If you push those, then I can apply these later patches and let Coverity have a peek at memory leaks or other strangeness that I could have missed below. I'll reserve the right to come back here again ;-) I think there are only a few "missed things".
I had this as one big gross ball of pus that I had to split into at least slightly smaller chunks. I'm not sure I could've split this one even more. Maybe I could, but it didn't occur to me, mainly after renaming, rebasing and splitting it for a non-trivial amount of time already. I'll push parts of it later, but I'm keeping this as a branch 'catwip' [1] on my GitHub [2] where you can always get the code from in case it has conflicts with the newest master.

[1] https://github.com/nertpinx/libvirt/tree/catwip
[2] https://github.com/nertpinx/libvirt.git
diff --git a/src/Makefile.am b/src/Makefile.am
index 1d24231249de..ad113262fbb0 100644
--- a/src/Makefile.am
+++ b/src/Makefile.am
@@ -167,7 +167,7 @@ UTIL_SOURCES = \
 		util/virprocess.c util/virprocess.h \
 		util/virqemu.c util/virqemu.h \
 		util/virrandom.h util/virrandom.c \
-		util/virresctrl.h util/virresctrl.c \
+		util/virresctrl.h util/virresctrl.c util/virresctrlpriv.h \
 		util/virrotatingfile.h util/virrotatingfile.c \
 		util/virscsi.c util/virscsi.h \
 		util/virscsihost.c util/virscsihost.h \
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index b24728ce4a1d..37bac41e618b 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -2532,6 +2532,18 @@ virRandomInt;
 # util/virresctrl.h
 virCacheTypeFromString;
 virCacheTypeToString;
+virResctrlAllocAddPID;
+virResctrlAllocCreate;
+virResctrlAllocForeachSize;
+virResctrlAllocFormat;
+virResctrlAllocGetFree;
+virResctrlAllocMasksAssign;
+virResctrlAllocNewFromInfo;
+virResctrlAllocRemove;
+virResctrlAllocSetID;
+virResctrlAllocSetSize;
+virResctrlAllocUpdateMask;
+virResctrlAllocUpdateSize;
 virResctrlGetInfo;
 virResctrlInfoGetCache;
diff --git a/src/util/virresctrl.c b/src/util/virresctrl.c
index 6c6692e78f42..ac1b38436bb2 100644
--- a/src/util/virresctrl.c
+++ b/src/util/virresctrl.c
@@ -23,7 +23,7 @@
 #include <sys/stat.h>
 #include <fcntl.h>
 
-#include "virresctrl.h"
+#include "virresctrlpriv.h"
 #include "c-ctype.h"
 #include "count-one-bits.h"
@@ -151,6 +151,153 @@ virResctrlInfoNew(void)
 }
 
 
+/* Alloc-related definitions and AllocClass-related functions */
+typedef struct _virResctrlAllocPerType virResctrlAllocPerType;
+typedef virResctrlAllocPerType *virResctrlAllocPerTypePtr;
+struct _virResctrlAllocPerType {
+    /* There could be bool saying whether this is set or not, but since everything
+     * in virResctrlAlloc (and most of libvirt) goes with pointer arrays we would
+     * have to have one more level of allocation anyway, so this stays faithful to
+     * the concept */
+    unsigned long long **sizes;
+    size_t nsizes;
+
+    /* Mask for each cache */
+    virBitmapPtr *masks;
+    size_t nmasks;
+};
+
+typedef struct _virResctrlAllocPerLevel virResctrlAllocPerLevel;
+typedef virResctrlAllocPerLevel *virResctrlAllocPerLevelPtr;
+struct _virResctrlAllocPerLevel {
+    virResctrlAllocPerTypePtr *types; /* Indexed with enum virCacheType */
+};
+
+struct _virResctrlAlloc {
+    virObject parent;
+
+    virResctrlAllocPerLevelPtr *levels;
+    size_t nlevels;
+
+    char *id; /* The identifier (any unique string for now) */
+    char *path;
+};
+
+static virClassPtr virResctrlAllocClass;
+
+static void
+virResctrlAllocDispose(void *obj)
+{
+    size_t i = 0;
+    size_t j = 0;
+    size_t k = 0;
+
+    virResctrlAllocPtr resctrl = obj;
+
+    for (i = 0; i < resctrl->nlevels; i++) {
+        virResctrlAllocPerLevelPtr level = resctrl->levels[--resctrl->nlevels];

Again the odd (to me at least) looking loop control that's reducing the for loop end condition.
Yeah, fixed now
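For illustration, the effect of that loop control can be reproduced in isolation. This is a hedged sketch with hypothetical `visit_buggy`/`visit_fixed` helpers (not libvirt code): shrinking the bound inside the body means only half of the entries are ever visited, so half of the levels would leak.

```c
#include <stddef.h>

/* Hypothetical reproduction of the flagged pattern: the body shrinks
 * the loop bound while the index grows, so the loop terminates after
 * visiting only half of the entries. */
static size_t
visit_buggy(size_t nlevels)
{
    size_t visited = 0;
    size_t i;

    for (i = 0; i < nlevels; i++) {
        --nlevels;              /* mimics levels[--resctrl->nlevels] */
        visited++;
    }
    return visited;
}

/* The fixed variant keeps the bound constant and visits everything. */
static size_t
visit_fixed(size_t nlevels)
{
    size_t visited = 0;
    size_t i;

    for (i = 0; i < nlevels; i++)
        visited++;
    return visited;
}
```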
+
+        if (!level)
+            continue;
+
+        for (j = 0; j < VIR_CACHE_TYPE_LAST; j++) {
+            virResctrlAllocPerTypePtr type = level->types[j];
+
+            if (!type)
+                continue;
+
+            for (k = 0; k < type->nsizes; k++)
+                VIR_FREE(type->sizes[k]);
+
+            VIR_FREE(type->sizes);

What about type->masks[k]?
good point
You could create a Free function for each entry too.
could, maybe I will, but it should not be freed separately anyway...
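A hedged sketch of what such a per-entry Free helper could look like, with simplified stand-in types (plain `free()` instead of `VIR_FREE`/`virBitmapFree`, a byte array standing in for `virBitmapPtr`); note it also releases the `masks` entries and array that the dispose path above missed.

```c
#include <stdlib.h>

/* Simplified stand-in for virResctrlAllocPerType; names and types are
 * illustrative, not the actual libvirt declarations. */
typedef struct {
    unsigned long long **sizes;
    size_t nsizes;
    unsigned char **masks;   /* stand-in for virBitmapPtr */
    size_t nmasks;
} AllocPerType;

static void
allocPerTypeFree(AllocPerType *type)
{
    size_t i;

    if (!type)
        return;

    for (i = 0; i < type->nsizes; i++)
        free(type->sizes[i]);
    free(type->sizes);

    for (i = 0; i < type->nmasks; i++)
        free(type->masks[i]);   /* the masks the review found missing */
    free(type->masks);

    free(type);
}
```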
+            VIR_FREE(type);
+        }
+        VIR_FREE(level->types);
+        VIR_FREE(level);
+    }
+
+    VIR_FREE(resctrl->id);

resctrl->path?
Yes! =)
+    VIR_FREE(resctrl->levels);
+}
+

Two blank lines (multiple instances in new functions).
I tried keeping this one more organized: two lines between groups of functions, one line between functions in the same group. But I can do two everywhere; the fact that I don't fully agree is irrelevant (unfortunately), and I'd rather get this in than argue over the amount of whitespace =D
+static int virResctrlAllocOnceInit(void)

static int
virResctrlAllocOnceInit(void)
Oh, missed this one.
+{
+    if (!(virResctrlAllocClass = virClassNew(virClassForObject(),
+                                             "virResctrlAlloc",
+                                             sizeof(virResctrlAlloc),
+                                             virResctrlAllocDispose)))
+        return -1;
+
+    return 0;
+}
+
+VIR_ONCE_GLOBAL_INIT(virResctrlAlloc)
+
+static virResctrlAllocPtr
+virResctrlAllocNew(void)
+{
+    if (virResctrlAllocInitialize() < 0)
+        return NULL;
+
+    return virObjectNew(virResctrlAllocClass);
+}
+
+
+/* Common functions */
+static int
+virResctrlLockInternal(int op)
+{
+    int fd = open(SYSFS_RESCTRL_PATH, O_DIRECTORY | O_CLOEXEC);
+
+    if (fd < 0) {
+        virReportSystemError(errno, "%s", _("Cannot open resctrlfs"));
+        return -1;
+    }
+
+    if (flock(fd, op) < 0) {

So only ever used on a local file system right? Linux file locking functions are confounding...
Yes, only local filesystem.
Why not use virFile{Lock|Unlock}?
Because that uses fcntl(2), which is a different lock type that might not interact with flock(2), so all programs working with resctrlfs must use the same type of lock. And resctrlfs should specifically use flock(2) according to the docs:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/x86/intel_rdt_ui.txt
+        virReportSystemError(errno, "%s", _("Cannot lock resctrlfs"));
+        VIR_FORCE_CLOSE(fd);
+        return -1;
+    }
+
+    return fd;
+}
+
+static inline int
+virResctrlLockRead(void)

Not used in this series...
Yeah, this can be removed. Actually clang will complain about it. I'll remove it, it's not that difficult to add it later on :D
+{
+    return virResctrlLockInternal(LOCK_SH);
+}
+
+static inline int
+virResctrlLockWrite(void)
+{
+    return virResctrlLockInternal(LOCK_EX);
+}
+
+static int
+virResctrlUnlock(int fd)
+{
+    int ret = -1;
+
ret is pointless here.
+    if (fd == -1)
+        return 0;
+
+    /* The lock gets unlocked by closing the fd, which we need to do anyway in
+     * order to clean up properly */
+    if (VIR_CLOSE(fd) < 0) {
+        virReportSystemError(errno, "%s", _("Cannot close resctrlfs"));
+
+        /* Trying to save the already broken */

So if close unlocks too, then why the unlock?
Only if the close failed; I figured we might as well try to save the already broken, right? I can remove it if you want.
+        if (flock(fd, LOCK_UN) < 0)
+            virReportSystemError(errno, "%s", _("Cannot unlock resctrlfs"));
+        return -1;
+    }
+
+    return ret;
+}
+
+
 /* Info-related functions */
 int
 virResctrlGetInfo(virResctrlInfoPtr *resctrl)
@@ -318,3 +465,866 @@ virResctrlInfoGetCache(virResctrlInfoPtr resctrl,
     VIR_FREE(*controls);
     goto cleanup;
 }
+
+
+/* Alloc-related functions */

A few notes about the arguments could be beneficial... Or at least the algorithm that allows partial allocations to work for future consumers.

+static virResctrlAllocPerTypePtr
+virResctrlAllocFindType(virResctrlAllocPtr *resctrl,
+                        unsigned int level,
+                        virCacheType type,
+                        bool alloc)
+{
+    virResctrlAllocPerLevelPtr a_level = NULL;
+    virResctrlAllocPtr tmp = NULL;
+
+    if (!*resctrl) {
+        if (!alloc || !(*resctrl = virResctrlAllocNew()))
+            return NULL;
+    }
+
+    tmp = *resctrl;
+
+    /* Per-level array */
+    if (tmp->nlevels <= level) {
+        if (!alloc || VIR_EXPAND_N(tmp->levels, tmp->nlevels,
+                                   level - tmp->nlevels + 1) < 0)
+            return NULL;
+    }
+
+    if (!tmp->levels[level]) {
+        if (!alloc ||
+            VIR_ALLOC(tmp->levels[level]) < 0 ||
+            VIR_ALLOC_N(tmp->levels[level]->types, VIR_CACHE_TYPE_LAST) < 0)
+            return NULL;
+    }
+    a_level = tmp->levels[level];
+
+    if (!a_level->types[type]) {
+        if (!alloc || VIR_ALLOC(a_level->types[type]) < 0)
+            return NULL;
+    }
+
+    return a_level->types[type];
+}
+
+static virBitmapPtr *
+virResctrlAllocFindMask(virResctrlAllocPtr *resctrl,
+                        unsigned int level,
+                        virCacheType type,
+                        unsigned int cache,
+                        bool alloc)
+{
+    virResctrlAllocPerTypePtr a_type = virResctrlAllocFindType(resctrl, level,
+                                                               type, alloc);
+
+    if (!a_type)
+        return NULL;
+
+    if (!a_type->masks) {
+        if (!alloc || VIR_ALLOC_N(a_type->masks, cache + 1) < 0)
+            return NULL;
+        a_type->nmasks = cache + 1;
+    } else if (a_type->nmasks <= cache) {
+        if (!alloc || VIR_EXPAND_N(a_type->masks, a_type->nmasks,
+                                   cache - a_type->nmasks + 1) < 0)
+            return NULL;
+    }
+
+    return a_type->masks + cache;
+}
+
+static unsigned long long *
+virResctrlAllocFindSize(virResctrlAllocPtr *resctrl,
+                        unsigned int level,
+                        virCacheType type,
+                        unsigned int cache,
+                        bool alloc)
+{
+    virResctrlAllocPerTypePtr a_type = virResctrlAllocFindType(resctrl, level,
+                                                               type, alloc);
+
+    if (!a_type)
+        return NULL;
+
+    if (!a_type->sizes) {
+        if (!alloc || VIR_ALLOC_N(a_type->sizes, cache + 1) < 0)
+            return NULL;
+        a_type->nsizes = cache + 1;
+    } else if (a_type->nsizes <= cache) {
+        if (!alloc || VIR_EXPAND_N(a_type->sizes, a_type->nsizes,
+                                   cache - a_type->nsizes + 1) < 0)
+            return NULL;
+    }
+
+    if (!a_type->sizes[cache]) {
+        if (!alloc || VIR_ALLOC(a_type->sizes[cache]) < 0)
+            return NULL;
+    }
+
+    return a_type->sizes[cache];
+}
+

This could really use a functional description, especially since it's external... Interesting way to code this though - took a few attempts to stare at it before it finally started sinking in ;-)

+int
+virResctrlAllocUpdateMask(virResctrlAllocPtr *resctrl,
+                          unsigned int level,
+                          virCacheType type,
+                          unsigned int cache,
+                          virBitmapPtr mask)
+{
+    virBitmapPtr *found = virResctrlAllocFindMask(resctrl, level, type, cache,
+                                                  true);
+
+    if (!found)
+        return -1;
+
+    virBitmapFree(*found);
+
+    *found = virBitmapNew(virBitmapSize(mask));
+    if (!*found)
+        return -1;
+
+    return virBitmapCopy(*found, mask);
+}
+

Similarly here too... I think "external" functions should be commented.
I'll only mention again here though...

+int
+virResctrlAllocUpdateSize(virResctrlAllocPtr *resctrl,
+                          unsigned int level,
+                          virCacheType type,
+                          unsigned int cache,
+                          unsigned long long size)
+{
+    unsigned long long *found = virResctrlAllocFindSize(resctrl, level, type,
+                                                        cache, true);
+
+    if (!found)
+        return -1;
+
+    *found = size;
+    return 0;
+}
+
+static bool
+virResctrlAllocCheckCollision(virResctrlAllocPtr a,
+                              unsigned int level,
+                              virCacheType type,
+                              unsigned int cache)
+{
+    /* If there is an allocation for type 'both', there can be no other
+     * allocation for the same cache */
+    if (virResctrlAllocFindSize(&a, level, VIR_CACHE_TYPE_BOTH, cache, false))
+        return true;
+
+    if (type == VIR_CACHE_TYPE_BOTH) {
+        if (virResctrlAllocFindSize(&a, level, VIR_CACHE_TYPE_CODE, cache, false))
+            return true;
+        if (virResctrlAllocFindSize(&a, level, VIR_CACHE_TYPE_DATA, cache, false))
+            return true;
+    }
+
+    /* And never two allocations for the same type */
+    if (virResctrlAllocFindSize(&a, level, type, cache, false))
+        return true;
+
+    return false;
+}
+
+int
+virResctrlAllocSetSize(virResctrlAllocPtr *resctrl,
+                       unsigned int level,
+                       virCacheType type,
+                       unsigned int cache,
+                       unsigned long long size)
+{
+    if (virResctrlAllocCheckCollision(*resctrl, level, type, cache)) {
+        virReportError(VIR_ERR_XML_ERROR,
+                       _("Colliding cache allocations for cache "
+                         "level '%u' id '%u', type '%s'"),
+                       level, cache, virCacheTypeToString(type));
+        return -1;
+    }
+
+    return virResctrlAllocUpdateSize(resctrl, level, type, cache, size);
+}
+
+int
+virResctrlAllocForeachSize(virResctrlAllocPtr resctrl,
+                           virResctrlAllocForeachSizeCallback cb,
+                           void *opaque)
+{
+    int ret = 0;
+    unsigned int level = 0;
+    unsigned int type = 0;
+    unsigned int cache = 0;
+
+    if (!resctrl)
+        return 0;
+
+    for (level = 0; level < resctrl->nlevels; level++) {
+        virResctrlAllocPerLevelPtr a_level = resctrl->levels[level];
+
+        if (!a_level)
+            continue;
+
+        for (type = 0; type < VIR_CACHE_TYPE_LAST; type++) {
+            virResctrlAllocPerTypePtr a_type = a_level->types[type];
+
+            if (!a_type)
+                continue;
+
+            for (cache = 0; cache < a_type->nsizes; cache++) {
+                unsigned long long *size = a_type->sizes[cache];
+
+                if (!size)
+                    continue;
+
+                ret = cb(level, type, cache, *size, opaque);
+                if (ret < 0)
+                    return ret;
+            }
+        }
+    }
+
+    return 0;
+}
+
+int
+virResctrlAllocSetID(virResctrlAllocPtr alloc,
+                     const char *id)
+{
+    if (!id) {
+        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                       _("Resctrl allocation id cannot be NULL"));
+        return -1;
+    }
+
+    return VIR_STRDUP(alloc->id, id);
+}
+
+char *
+virResctrlAllocFormat(virResctrlAllocPtr resctrl)

No external consumer yet.

+{
+    virBuffer buf = VIR_BUFFER_INITIALIZER;
+    unsigned int level = 0;
+    unsigned int type = 0;
+    unsigned int cache = 0;
+
+    if (!resctrl)
+        return NULL;
+
+    for (level = 0; level < resctrl->nlevels; level++) {
+        virResctrlAllocPerLevelPtr a_level = resctrl->levels[level];
+
+        if (!a_level)
+            continue;
+
+        for (type = 0; type < VIR_CACHE_TYPE_LAST; type++) {
+            virResctrlAllocPerTypePtr a_type = a_level->types[type];
+
+            if (!a_type)
+                continue;
+
+            virBufferAsprintf(&buf, "L%u%s:", level, virResctrlTypeToString(type));
+
+            for (cache = 0; cache < a_type->nmasks; cache++) {
+                virBitmapPtr mask = a_type->masks[cache];
+                char *mask_str = NULL;
+
+                if (!mask)
+                    continue;
+
+                mask_str = virBitmapToString(mask, false, true);
+                if (!mask_str) {
+                    virBufferFreeAndReset(&buf);
+                    return NULL;
+                }
+
+                virBufferAsprintf(&buf, "%u=%s;", cache, mask_str);
+            }
+
+            virBufferTrim(&buf, ";", 1);
+            virBufferAddChar(&buf, '\n');
+        }
+    }
+
+    virBufferCheckError(&buf);
+    return virBufferContentAndReset(&buf);
+}
+
+static int
+virResctrlAllocParseProcessCache(virResctrlAllocPtr *resctrl,
+                                 unsigned int level,
+                                 virCacheType type,
+                                 char *cache)
+{
+    char *tmp = strchr(cache, '=');
+    unsigned int cache_id = 0;
+    virBitmapPtr mask = NULL;
+    int ret = -1;
+
+    if (!tmp)
+        return 0;
+
+    *tmp = '\0';
+    tmp++;
+
+    if (virStrToLong_uip(cache, NULL, 10, &cache_id) < 0) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       _("Invalid cache id '%s'"), cache);
+        return -1;
+    }
+
+    mask = virBitmapNewString(tmp);
+    if (!mask)
+        return -1;
+
+    if (virResctrlAllocUpdateMask(resctrl, level, type, cache_id, mask) < 0)
+        goto cleanup;
+
+    ret = 0;
+ cleanup:
+    virBitmapFree(mask);
+    return ret;
+}
+
+static int
+virResctrlAllocParseProcessLine(virResctrlAllocPtr *resctrl,
+                                char *line)
+{
+    char **caches = NULL;
+    char *tmp = NULL;
+    unsigned int level = 0;
+    int type = -1;
+    size_t ncaches = 0;
+    size_t i = 0;
+    int ret = -1;
+
+    /* Skip lines that don't concern caches, e.g. MB: etc. */
+    if (line[0] != 'L')
+        return 0;
+
+    /* And lines that we can't parse too */
+    tmp = strchr(line, ':');
+    if (!tmp)
+        return 0;
+
+    *tmp = '\0';
+    tmp++;
+
+    if (virStrToLong_uip(line + 1, &line, 10, &level) < 0) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       _("Cannot parse resctrl schema level '%s'"),
+                       line + 1);
+        return -1;
+    }
+
+    type = virResctrlTypeFromString(line);
+    if (type < 0) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       _("Cannot parse resctrl schema level '%s'"),
+                       line + 1);
+        return -1;
+    }
+
+    caches = virStringSplitCount(tmp, ";", 0, &ncaches);
+    if (!caches)
+        return 0;
+
+    for (i = 0; i < ncaches; i++) {
+        if (virResctrlAllocParseProcessCache(resctrl, level, type, caches[i]) < 0)
+            goto cleanup;
+    }
+
+    ret = 0;
+ cleanup:
+    virStringListFree(caches);
+    return ret;
+}
+
+static int
+virResctrlAllocParse(virResctrlAllocPtr *alloc,
+                     const char *schemata)
+{
+    virResctrlAllocPtr tmp = NULL;
+    char **lines = NULL;
+    size_t nlines = 0;
+    size_t i = 0;
+    int ret = -1;
+
+    lines = virStringSplitCount(schemata, "\n", 0, &nlines);
+    for (i = 0; i < nlines; i++) {
+        if (virResctrlAllocParseProcessLine(&tmp, lines[i]) < 0)
+            goto cleanup;
+    }
+
+    *alloc = tmp;
+    tmp = NULL;
+    ret = 0;
+ cleanup:
+    virStringListFree(lines);
+    virObjectUnref(tmp);
+    return ret;
+}
+
+static void
+virResctrlAllocSubtractPerType(virResctrlAllocPerTypePtr a,
+                               virResctrlAllocPerTypePtr b)
+{
+    size_t i = 0;
+
+    if (!a || !b)
+        return;
+
+    for (i = 0; i < a->nmasks && i < b->nmasks; ++i) {
+        if (a->masks[i] && b->masks[i])
+            virBitmapSubtract(a->masks[i], b->masks[i]);
+    }
+}
+
+static void
+virResctrlAllocSubtract(virResctrlAllocPtr a,
+                        virResctrlAllocPtr b)
+{
+    size_t i = 0;
+    size_t j = 0;
+
+    if (!b)
+        return;
+
+    for (i = 0; i < a->nlevels && b->nlevels; ++i) {
+        if (a->levels[i] && b->levels[i]) {
+            /* Here we rely on all the system allocations to use the same types.
+             * We kind of _hope_ it's the case. If this is left here until the
+             * review and someone finds it, then suggest only removing this last
+             * sentence. */

Should we use 'sa_assert' instead of just hoping? It's not the verboten assert, but at least Coverity or Clang would catch it, right?
It would not make any sense for it not to be true. I guess I was just angry with the Linux kernel interface for CAT when writing this. That was the case for most of the time I spent on this series. And some of the time I was not working at all. And sometimes when sleeping, too. Anyway, nothing bad happens if it's not true, maybe an allocation that is not purely exclusive will be created.
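For reference, `sa_assert` compiles to nothing at runtime but lets static analyzers treat the condition as a known fact. A hedged sketch with a stand-in macro and a hypothetical `subtract_levels` helper (simplified int arrays instead of the real per-type structures):

```c
/* Stand-in for libvirt's sa_assert: empty at runtime, but under static
 * analysis it would be defined to a real assert so Coverity/clang can
 * rely on the stated condition. */
#ifndef sa_assert
# define sa_assert(expr) /* no-op outside static analysis */
#endif

static int
subtract_levels(int *a_types, int *b_types, int n)
{
    int i;

    /* Document the "both sides use the same types" assumption without
     * aborting in production if it ever turns out false. */
    sa_assert(a_types && b_types);

    for (i = 0; i < n; i++)
        a_types[i] -= b_types[i];
    return 0;
}
```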
+            for (j = 0; j < VIR_CACHE_TYPE_LAST; j++) {
+                virResctrlAllocSubtractPerType(a->levels[i]->types[j],
+                                               b->levels[i]->types[j]);
+            }
+        }
+    }
+}
+
+virResctrlAllocPtr
+virResctrlAllocNewFromInfo(virResctrlInfoPtr info)

No external consumer.
Good point, making it static.
+{
+    size_t i = 0;
+    size_t j = 0;
+    size_t k = 0;
+    virResctrlAllocPtr ret = NULL;
+    virBitmapPtr mask = NULL;
+
+    for (i = 0; i < info->nlevels; i++) {
+        virResctrlInfoPerLevelPtr i_level = info->levels[i];
+
+        if (!i_level)
+            continue;
+
+        for (j = 0; j < VIR_CACHE_TYPE_LAST; j++) {
+            virResctrlInfoPerTypePtr i_type = i_level->types[j];
+
+            if (!i_type)
+                continue;
+
+            virBitmapFree(mask);
+            mask = virBitmapNew(i_type->bits);
+            if (!mask)
+                goto error;
+            virBitmapSetAll(mask);
+
+            for (k = 0; k <= i_type->max_cache_id; k++) {
+                if (virResctrlAllocUpdateMask(&ret, i, j, k, mask) < 0)
+                    goto error;
+            }
+        }
+    }
+
+ cleanup:
+    virBitmapFree(mask);
+    return ret;
+ error:
+    virObjectUnref(ret);
+    ret = NULL;
+    goto cleanup;
+}
+
+virResctrlAllocPtr
+virResctrlAllocGetFree(virResctrlInfoPtr resctrl)

No external consumer...
Actually, there will be one, in tests, but there is a circular dependency with the other function (AllocMasksAssign). The tests need both XML support and this file; XML support needs this file as well, but if we say this file needs tests then it's circular. And I wanted to split it a little bit so we don't get those huge files.
+{
+    virResctrlAllocPtr ret = NULL;
+    virResctrlAllocPtr alloc = NULL;
+    virBitmapPtr mask = NULL;
+    struct dirent *ent = NULL;
+    DIR *dirp = NULL;
+    char *schemata = NULL;
+    int rv = -1;
+
+    ret = virResctrlAllocNewFromInfo(resctrl);
+    if (!ret)
+        return NULL;
+
+    if (virFileReadValueString(&schemata,
+                               SYSFS_RESCTRL_PATH
+                               "/schemata") < 0) {
+        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                       _("Could not read schemata file for the default group"));
+        goto error;
+    }
+
+    if (virResctrlAllocParse(&alloc, schemata) < 0)
+        goto error;
+    if (!alloc) {
+        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+                       _("No schemata for default resctrlfs group"));
+        goto error;
+    }
+    virResctrlAllocSubtract(ret, alloc);
+
+    if (virDirOpen(&dirp, SYSFS_RESCTRL_PATH) < 0)
+        goto error;
+
+    while ((rv = virDirRead(dirp, &ent, SYSFS_RESCTRL_PATH)) > 0) {
+        if (ent->d_type != DT_DIR)
+            continue;
+
+        if (STREQ(ent->d_name, "info"))
+            continue;
+
+        VIR_FREE(schemata);
+        if (virFileReadValueString(&schemata,
+                                   SYSFS_RESCTRL_PATH
+                                   "/%s/schemata",
+                                   ent->d_name) < 0) {
+            virReportError(VIR_ERR_INTERNAL_ERROR,
+                           _("Could not read schemata file for group %s"),
+                           ent->d_name);
+            goto error;
+        }
+
+        virObjectUnref(alloc);
+        alloc = NULL;
+        if (virResctrlAllocParse(&alloc, schemata) < 0)
+            goto error;
+
+        virResctrlAllocSubtract(ret, alloc);
+
+        VIR_FREE(schemata);
+    }
+    if (rv < 0)
+        goto error;
+
+ cleanup:
+    virObjectUnref(alloc);
+    VIR_DIR_CLOSE(dirp);
+    VIR_FREE(schemata);
+    virBitmapFree(mask);
+    return ret;
+
+ error:
+    virObjectUnref(ret);
+    ret = NULL;
+    goto cleanup;
+}
+
+int
+virResctrlAllocMasksAssign(virResctrlInfoPtr r_info,
+                           virResctrlAllocPtr alloc,
+                           void **save_ptr)

No external consumer...
see above
+{
+    int ret = -1;
+    unsigned int level = 0;
+    virResctrlAllocPtr alloc_free = NULL;
+
+    if (!r_info) {
+        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+                       _("Resource control is not supported on this host"));
+        return -1;
+    }
+
+    if (!save_ptr) {
+        alloc_free = virResctrlAllocGetFree(r_info);
+    } else {
+        if (!*save_ptr)
+            *save_ptr = virResctrlAllocGetFree(r_info);
+
+        alloc_free = *save_ptr;
+    }
+
+    if (!alloc_free)
+        return -1;
+
+    for (level = 0; level < alloc->nlevels; level++) {
+        virResctrlAllocPerLevelPtr a_level = alloc->levels[level];
+        virResctrlAllocPerLevelPtr f_level = NULL;
+        unsigned int type = 0;
+
+        if (!a_level)
+            continue;
+
+        if (level < alloc_free->nlevels)
+            f_level = alloc_free->levels[level];
+
+        if (!f_level) {
+            virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+                           _("Cache level %d does not support tuning"),
+                           level);
+            goto cleanup;
+        }
+
+        for (type = 0; type < VIR_CACHE_TYPE_LAST; type++) {

OMG, my eyes need a beer!
did someone say "beer"?
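To make the innermost loop easier on the eyes: it scans the free mask for contiguous runs of set bits and picks the smallest run that still holds `need_bits`, preferring an exact fit. A condensed, hedged sketch over a plain 64-bit word (a hypothetical `best_fit` helper, not the virBitmap-based code):

```c
#include <stdint.h>

/* Best-fit search over a free-bit mask: return the start position of
 * the smallest contiguous run of set bits that is >= need_bits, with
 * an exact-length run winning immediately; -1 if nothing fits. */
static long
best_fit(uint64_t free_mask, unsigned int nbits, unsigned int need_bits)
{
    long last_pos = -1;          /* start of best run found so far */
    unsigned int last_len = 0;   /* its length */
    unsigned int pos = 0;

    while (pos < nbits) {
        unsigned int start, len;

        while (pos < nbits && !(free_mask & (UINT64_C(1) << pos)))
            pos++;                          /* skip clear bits */
        start = pos;
        while (pos < nbits && (free_mask & (UINT64_C(1) << pos)))
            pos++;                          /* measure the run of set bits */
        len = pos - start;

        if (len < need_bits)
            continue;                       /* run too small, keep scanning */
        if (len == need_bits)
            return start;                   /* exact fit wins immediately */
        if (last_pos < 0 || len < last_len) {
            last_pos = start;               /* remember the smallest run */
            last_len = len;
        }
    }
    return last_pos;
}
```

With a mask holding runs of length 2, 4 and 3, a request for 3 bits lands in the length-3 run even though the length-4 run comes first.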
+            virResctrlAllocPerTypePtr a_type = a_level->types[type];
+            virResctrlAllocPerTypePtr f_type = f_level->types[type];
+            unsigned int cache = 0;
+
+            if (!a_type)
+                continue;
+
+            if (!f_type) {
+                virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+                               _("Cache level %d does not support tuning for "
+                                 "scope type '%s'"),
+                               level, virCacheTypeToString(type));
+                goto cleanup;
+            }
+
+            for (cache = 0; cache < a_type->nsizes; cache++) {
+                unsigned long long *size = a_type->sizes[cache];
+                virBitmapPtr a_mask = NULL;
+                virBitmapPtr f_mask = f_type->masks[cache];
+                virResctrlInfoPerLevelPtr i_level = r_info->levels[level];
+                virResctrlInfoPerTypePtr i_type = i_level->types[type];
+                unsigned long long granularity;
+                unsigned long long need_bits;
+                size_t i = 0;
+                ssize_t pos = -1;
+                ssize_t last_bits = 0;
+                ssize_t last_pos = -1;
+
+                if (!size)
+                    continue;
+
+                if (cache >= f_type->nmasks) {
+                    virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+                                   _("Cache with id %u does not exists for level %d"),
+                                   cache, level);
+                    goto cleanup;
+                }
+
+                f_mask = f_type->masks[cache];
+                if (!f_mask) {
+                    virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+                                   _("Cache level %d id %u does not support tuning for "
+                                     "scope type '%s'"),
+                                   level, cache, virCacheTypeToString(type));
+                    goto cleanup;
+                }
+
+                granularity = i_type->size / i_type->bits;
+                need_bits = *size / granularity;
+
+                if (*size % granularity) {
+                    virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+                                   _("Cache allocation of size %llu is not "
+                                     "divisible by granularity %llu"),
+                                   *size, granularity);
+                    goto cleanup;
+                }
+
+                if (need_bits < i_type->min_cbm_bits) {
+                    virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+                                   _("Cache allocation of size %llu is smaller "
+                                     "than the minimum allowed allocation %llu"),
+                                   *size, granularity * i_type->min_cbm_bits);
+                    goto cleanup;
+                }
+
+                while ((pos = virBitmapNextSetBit(f_mask, pos)) >= 0) {
+                    ssize_t pos_clear = virBitmapNextClearBit(f_mask, pos);
+                    ssize_t bits;
+
+                    if (pos_clear < 0)
+                        pos_clear = virBitmapSize(f_mask);
+
+                    bits = pos_clear - pos;
+
+                    /* Not enough bits, move on and skip all of them */
+                    if (bits < need_bits) {
+                        pos = pos_clear;
+                        continue;
+                    }
+
+                    /* This fits perfectly */
+                    if (bits == need_bits) {
+                        last_pos = pos;
+                        break;
+                    }
+
+                    /* Remember the smaller region if we already found one before */
+                    if (last_pos < 0 || (last_bits && bits < last_bits)) {
+                        last_bits = bits;
+                        last_pos = pos;
+                    }
+
+                    pos = pos_clear;
+                }
+
+                if (last_pos < 0) {
+                    virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
+                                   _("Not enough room for allocation of "
+                                     "%llu bytes for level %u cache %u "
+                                     "scope type '%s'"),
+                                   *size, level, cache,
+                                   virCacheTypeToString(type));
+                    goto cleanup;
+                }
+
+                a_mask = virBitmapNew(i_type->bits);
+                for (i = last_pos; i < last_pos + need_bits; i++) {
+                    ignore_value(virBitmapSetBit(a_mask, i));
+                    ignore_value(virBitmapClearBit(f_mask, i));
+                }
+
+                if (a_type->nmasks <= cache) {
+                    if (VIR_EXPAND_N(a_type->masks, a_type->nmasks,
+                                     cache - a_type->nmasks + 1) < 0) {
+                        virBitmapFree(a_mask);
+                        goto cleanup;
+                    }
+                }
+                a_type->masks[cache] = a_mask;
+            }
+        }
+    }
+
+    ret = 0;
+ cleanup:
+    if (!save_ptr)
+        virObjectUnref(alloc_free);
+    return ret;
+}
+
+/* This checks if the directory for the alloc exists. If not it tries to create
+ * it and apply appropriate alloc settings. */
+int
+virResctrlAllocCreate(virResctrlInfoPtr r_info,
+                      virResctrlAllocPtr alloc,
+                      const char *drivername,
+                      const char *machinename)
+{
+    char *alloc_path = NULL;
+    char *schemata_path = NULL;
+    bool dir_created = false;
+    char *alloc_str = NULL;
+    int ret = -1;
+    int lockfd = -1;
+
+    if (!alloc)
+        return 0;
+
+    if (!alloc->path &&
+        virAsprintf(&alloc->path, "%s/%s-%s-%s",
+                    SYSFS_RESCTRL_PATH, drivername, machinename, alloc->id) < 0)

This is being created in /sys/fs... and theoretically nothing will change for @drivername and @machinename

+        return -1;
+
+    /* Check if this allocation was already created */
+    if (virFileIsDir(alloc->path)) {
+        VIR_FREE(alloc_path);

Dead code ;-) "alloc_path" is never allocated...
s/alloc_path/alloc->path/ =)
Any concern over the guest being killed without running through virResctrlAllocRemove and the rmdir?
Yes, where do you see that? The remove will be done in qemuProcessStop(). I can remove this one puny directory here, but proper cleanup needs to be done for all anyway.
+        return 0;
+    }
+
+    if (virFileExists(alloc->path)) {
+        virReportError(VIR_ERR_INTERNAL_ERROR,
+                       _("Path '%s' for resctrl allocation exists and is not a "
+                         "directory"), alloc->path);
+        goto cleanup;
+    }
+
+    lockfd = virResctrlLockWrite();
+    if (lockfd < 0)
+        goto cleanup;
+
+    if (virResctrlAllocMasksAssign(r_info, alloc, NULL) < 0)
+        goto cleanup;
+
+    alloc_str = virResctrlAllocFormat(alloc);
+    if (!alloc_str)
+        return -1;

Leaking... and leaving 'em locked on the way out.
good point. Peter-style bug, I'll fix this. I'll rebase all of these with the fixes and push it on the github so that it's easier to work with. Thanks for the review.
--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list