Updated: October 28, 2024
Most qvm trace events are Class 10 events. However, some qvm threads may also make kernel calls to the hypervisor microkernel; these will have corresponding kernel call events.
The following should help you interpret the data provided with Class 10 events from a hypervisor host.
The guest is executing code at the guest_ip address.
The guest has stopped executing code.
For both ARM and x86 platforms, to determine whether the guest actually ran before it exited, check for ID 7 events between the ID 0 and ID 1 events (see ID 7 — guest clock cycle values at guest entry and exit below). If no ID 7 events are present, the guest didn't run.
The timestamp for the ID 1 event should not be used to calculate the time spent in the guest during an entry/exit cycle. For an explanation, see the note in the ID 7 event description.
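The check described above can be sketched in a few lines of post-processing. This sketch assumes the trace log has already been parsed into (event_id, timestamp) tuples in log order for a single vCPU thread; the parsing step and the tuple layout are assumptions, not part of the trace format itself.

```python
# Class 10 event IDs referenced in the text: 0 = Entry, 1 = Exit, 7 = Cycles
ENTRY, EXIT, CYCLES = 0, 1, 7

def guest_ran(events):
    """Return True if an ID 7 (Cycles) event appears between ID 0 and ID 1,
    i.e., the guest actually ran during that entry/exit cycle."""
    in_cycle = False
    saw_cycles = False
    for event_id, _timestamp in events:
        if event_id == ENTRY:
            in_cycle, saw_cycles = True, False
        elif event_id == CYCLES and in_cycle:
            saw_cycles = True
        elif event_id == EXIT and in_cycle:
            return saw_cycles
    return False

# Entry, Cycles, Exit: the guest ran
print(guest_ran([(0, 100), (7, 110), (1, 120)]))  # True
# Entry immediately followed by Exit, no Cycles event: the guest never ran
print(guest_ran([(0, 100), (1, 105)]))  # False
```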
A virtual CPU (vCPU) thread has been created to schedule the vCPU.
Occurs only once per vCPU in the lifetime of a qvm process instance. If you start your trace after the qvm process has started, you won't see these events.
An interrupt has been asserted on a vCPU (seen by the guest as a CPU). The VM (qvm process instance) in which the guest is running must be configured to deliver the interrupt to the guest. This configuration is specified in the *.qvmconf file for the qvm process instance.
An interrupt is being de-asserted on a vCPU (seen by the guest as a CPU).
A timer has been created for a vCPU.
Occurs only once per timer in the lifetime of a qvm process instance. If you start your trace after the qvm process has started, you may not see these events.
A virtual timer has been triggered.
Guest clock cycle values at guest entry and exit.
The Cycles trace event records the amount of time spent in the guest between Entry/Exit trace events.
You can't draw meaningful conclusions from the difference between the timestamps of the Entry (ID 0) and Exit (ID 1) events. These events are emitted at the user level, and thus the vCPU thread can be preempted, migrated, interrupted, etc. between the time of the guest exit and re-entry.
This is especially true on a system where the host OS is heavily loaded by threads that are unrelated to the hypervisor's vCPU threads. Also, work is often done in a hypervisor-privileged context when the hypervisor prepares to enter a guest and when it handles a guest exit.
The Cycles trace event should be used instead because it includes only the time spent in the guest: it excludes both any time during which the vCPU thread wasn't running and the time taken to perform the guest exit/entry processing.
For an example of how you can interpret the trace events emitted by a hypervisor to measure its overhead, see Measuring the hypervisor overhead below.
The kernel call events you will see in trace logs for a hypervisor host also include events that are not specific to the hypervisor, such as MMap and MMunmap. See the System Analysis Toolkit User's Guide for more information about kernel call trace events.
If you want to measure the hypervisor overhead, you must have an accurate indication of the time spent in the guest versus the time spent in the host. Here, we list some of the trace events emitted by the hypervisor during a guest entry/exit cycle and show how to calculate the percentage of time spent in the guest.
begin
  HYP EL 0 - Guest Entry Trace Event
  HYP EL 0 - prepare to enter guest
  HYP EL 2 - enter guest
  GST EL 0/1 - running in guest
  HYP EL 2 - guest exited because the CPU needs the hypervisor to intervene
  HYP EL 0 - Cycles + Guest Exit Trace Events
  HYP EL 0 - handle guest exit (may involve a vdev or emulation of PSCI calls, etc.)
end
While the vCPU thread is running hypervisor code at EL 0, the host QNX Neutrino sees it as an ordinary application and will handle interrupts and context switch between host threads. Some systems run other software on the host, unrelated to what the hypervisor and guest are doing, at thread priorities equal to or higher than those of the vCPU threads. If you take the difference between the guest exit and guest entry timestamps, the result will include time that depends entirely on what this other software running concurrently on the host is doing. That time is not hypervisor overhead.
The Cycles trace event provides the start and end times between which the guest ran during the entry/exit cycle. Subtracting the start time from the end time provides the actual time spent in the guest. If you want to know when those times were from the host's perspective, then examine the offset value in the Exit trace event; this lets you convert the start or end time from the Cycles event into a timestamp that is meaningful for the host.
% time in guest = (time spent in guest) / (total thread RUNNING time) × 100
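The calculation above can be sketched as follows. It assumes you have already summed the per-cycle guest times (from the Cycles events, as shown earlier) and extracted the vCPU thread's total RUNNING time from the host's thread-state trace events, with both values in the same time unit.

```python
def percent_time_in_guest(guest_times, total_running_time):
    """Apply: % time in guest = (time spent in guest) / (total RUNNING time),
    expressed as a percentage."""
    return 100.0 * sum(guest_times) / total_running_time

# e.g. 700 + 200 = 900 time units in the guest out of 1,000 units RUNNING
print(percent_time_in_guest([700, 200], 1000))  # 90.0
```

The remaining 10% in this example is the hypervisor overhead for that thread: time the vCPU thread was RUNNING on the host but not executing guest code.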