Re: [PATCH v9 22/27] virt: gunyah: Add resource tickets

On 2/6/2023 1:50 AM, Srivatsa Vaddagiri wrote:
* Elliot Berman <quic_eberman@xxxxxxxxxxx> [2023-01-20 14:46:21]:

+int ghvm_add_resource_ticket(struct gunyah_vm *ghvm, struct gunyah_vm_resource_ticket *ticket)
+{
+	struct gunyah_vm_resource_ticket *iter;
+	struct gunyah_resource *ghrsc;
+	int ret = 0;
+
+	mutex_lock(&ghvm->resources_lock);
+	list_for_each_entry(iter, &ghvm->resource_tickets, list) {
+		if (iter->resource_type == ticket->resource_type && iter->label == ticket->label) {
+			ret = -EEXIST;
+			goto out;
+		}
+	}
+
+	if (!try_module_get(ticket->owner)) {
+		ret = -ENODEV;
+		goto out;
+	}
+
+	list_add(&ticket->list, &ghvm->resource_tickets);
+	INIT_LIST_HEAD(&ticket->resources);
+
+	list_for_each_entry(ghrsc, &ghvm->resources, list) {
+		if (ghrsc->type == ticket->resource_type && ghrsc->rm_label == ticket->label) {
+			if (!ticket->populate(ticket, ghrsc))
+				list_move(&ghrsc->list, &ticket->resources);

Do we need the search to continue in case of a hit? 'gh_vm_add_resource' seems to
break the loop on the first occurrence.

Also, do we have examples of more than one 'gunyah_resource' being associated
with the same 'gunyah_vm_resource_ticket'?  Both the vcpu and irqfd tickets seem
to deal with just one resource.


I'll mention this in the commit text as well.

Resources are created by Gunyah as configured in the VM's devicetree configuration. Gunyah doesn't process the label, which makes it possible for userspace to create multiple resources with the same label. The kernel needs to be prepared for that to happen.

IMO, this isn't a framework issue, so I've chosen the policy to be "many-to-one": resource tickets can bind to many resources, while each resource is bound to only one ticket. If the resource ticket handler isn't designed to accept multiple resources, it can skip/ignore any further populate callbacks.
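
As an example, here's a minimal sketch of a populate callback that follows this
policy, binding only the first matching resource and declining later ones (the
gh_vcpu struct and field names below are just illustrative, not the actual vcpu
driver code):

struct gh_vcpu {
	struct gunyah_vm_resource_ticket ticket;
	struct gunyah_resource *rsc;
};

static bool gh_vcpu_populate(struct gunyah_vm_resource_ticket *ticket,
			     struct gunyah_resource *ghrsc)
{
	struct gh_vcpu *vcpu = container_of(ticket, struct gh_vcpu, ticket);

	/* Bind only the first resource with this label; decline the rest. */
	if (vcpu->rsc)
		return false;

	vcpu->rsc = ghrsc;
	return true;
}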

  static int gh_vm_start(struct gunyah_vm *ghvm)
  {
  	struct gunyah_vm_memory_mapping *mapping;
  	u64 dtb_offset;
  	u32 mem_handle;
-	int ret;
+	struct gunyah_resource *ghrsc;
+	struct gh_rm_hyp_resources *resources;
+	int ret, i;
  	down_write(&ghvm->status_lock);
  	if (ghvm->vm_status != GH_RM_VM_STATUS_NO_STATE) {
@@ -241,6 +314,22 @@ static int gh_vm_start(struct gunyah_vm *ghvm)
  		goto err;
  	}
+	ret = gh_rm_get_hyp_resources(ghvm->rm, ghvm->vmid, &resources);
+	if (ret) {
+		pr_warn("Failed to get hypervisor resources for VM: %d\n", ret);
+		goto err;
+	}
+
+	for (i = 0; i < le32_to_cpu(resources->n_entries); i++) {

Minor nit: not sure if we can rely on the compiler to optimize this, but it would
be better to run le32_to_cpu once and use the result in the loop.


Done.
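
The loop now reads roughly like this (n_entries is a new local; the final code
may differ slightly):

	u32 n_entries = le32_to_cpu(resources->n_entries);

	for (i = 0; i < n_entries; i++) {
		ghrsc = gh_rm_alloc_resource(ghvm->rm, &resources->entries[i]);
		if (!ghrsc) {
			ret = -ENOMEM;
			goto err;
		}

		gh_vm_add_resource(ghvm, ghrsc);
	}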

+		ghrsc = gh_rm_alloc_resource(ghvm->rm, &resources->entries[i]);
+		if (!ghrsc) {
+			ret = -ENOMEM;
+			goto err;
+		}
+
+		gh_vm_add_resource(ghvm, ghrsc);

Shouldn't we have gh_vm_add_resource() -> ticket->populate() return a result, and
in case of failure bail out of this loop?


I'm hesitant to treat the resource ticket rejecting the resource as a bail condition.

Userspace is able to detect when functions didn't get set up, and I wanted to avoid adding further complexity to kernel drivers.
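
To illustrate what happens instead, here's a rough sketch of the behaviour (not
the exact code from the patch): a resource that no ticket claims, or whose
populate() declines it, simply stays on ghvm->resources and VM start continues.

static void gh_vm_add_resource(struct gunyah_vm *ghvm, struct gunyah_resource *ghrsc)
{
	struct gunyah_vm_resource_ticket *ticket;

	mutex_lock(&ghvm->resources_lock);
	list_for_each_entry(ticket, &ghvm->resource_tickets, list) {
		if (ticket->resource_type == ghrsc->type &&
		    ticket->label == ghrsc->rm_label) {
			if (ticket->populate(ticket, ghrsc)) {
				/* Claimed: the ticket now owns this resource. */
				list_add(&ghrsc->list, &ticket->resources);
				goto out;
			}
			/* Declined: keep the resource with the VM. */
			break;
		}
	}
	list_add(&ghrsc->list, &ghvm->resources);
out:
	mutex_unlock(&ghvm->resources_lock);
}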

+	}
+
  	ret = gh_rm_vm_start(ghvm->rm, ghvm->vmid);
  	if (ret) {
  		pr_warn("Failed to start VM: %d\n", ret);


