This document shares some of the performance benchmarks observed as part of the v1.3.0 release and discusses what they mean for an end user's performance and scalability expectations.
KubeVirt is an extension for Kubernetes that includes a collection of custom resource definitions (CRDs) served by kubevirt-apiserver. These CRDs are managed by controllers. Due to the distributed nature of the system, understanding performance and scalability data becomes challenging without taking specific assumptions into account. This section aims to provide clarity on those assumptions.
- The data presented in this document was collected from periodic-kubevirt-e2e-k8s-1.25-sig-performance; after Sept 06, 2023, data was collected from periodic-kubevirt-e2e-k8s-1.27-sig-performance, and after March 25, 2024, from periodic-kubevirt-e2e-k8s-1.29-sig-performance.
- The test suite includes three tests:
  - It creates 100 minimal VMIs, with a small pause of 100 ms between the creation of 2 VMIs. The definition of minimal VMIs can be found here. This is represented in the graphs as for VMI, for example vmiCreationToRunningSecondsP50 for VMI. (An illustrative sketch of this creation loop is shown after this list.)
  - It creates 100 minimal VMs, with a small pause of 100 ms between the creation of 2 VMs. The definition of minimal VMs created can be found here. This is represented in the graphs as for VM, for example vmiCreationToRunningSecondsP50 for VM.
  - It creates VMs with instancetype and preference; the definition can be found here. The benchmarks for this will be added in future releases.
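For illustration only, the sketch below shows what such a creation loop could look like with the Python Kubernetes client, driving the kubevirt.io/v1 VirtualMachineInstance CRD directly. The namespace, the VMI names, and the simplified VMI body are assumptions made for this example; the exact minimal VMI definition and the actual test code are linked above.

```python
# Illustrative sketch: create 100 minimal-style VMIs with a 100 ms pause
# between consecutive creations. The VMI body is a simplified approximation
# of a minimal VMI, not the exact definition used by the test suite.
import time

from kubernetes import client, config

config.load_kube_config()
custom_api = client.CustomObjectsApi()

NAMESPACE = "kubevirt-perf-test"  # hypothetical namespace


def minimal_vmi(name: str) -> dict:
    """Return an approximation of a minimal VirtualMachineInstance manifest."""
    return {
        "apiVersion": "kubevirt.io/v1",
        "kind": "VirtualMachineInstance",
        "metadata": {"name": name},
        "spec": {
            "domain": {
                "devices": {},
                "resources": {"requests": {"memory": "90Mi"}},
            },
            "terminationGracePeriodSeconds": 0,
        },
    }


for i in range(100):
    custom_api.create_namespaced_custom_object(
        group="kubevirt.io",
        version="v1",
        namespace=NAMESPACE,
        plural="virtualmachineinstances",
        body=minimal_vmi(f"perf-vmi-{i}"),
    )
    time.sleep(0.1)  # 100 ms pause between creations
```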
- The test waits for the VMIs to go into the Running state and collects a set of metrics. (A minimal polling sketch is shown below.)
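The sketch below is a minimal way to wait for the created VMIs to reach the Running phase by polling their status; it reuses the hypothetical namespace from the previous example. The actual test suite records phase transitions through its own instrumentation, so treat this purely as an illustration.

```python
# Illustrative sketch: poll until all 100 VMIs report status.phase == "Running",
# recording the wall-clock time at which each one was first seen Running.
import time

from kubernetes import client, config

config.load_kube_config()
custom_api = client.CustomObjectsApi()

NAMESPACE = "kubevirt-perf-test"  # hypothetical namespace, as in the sketch above

running_at = {}  # VMI name -> timestamp when first observed Running
while len(running_at) < 100:
    vmis = custom_api.list_namespaced_custom_object(
        group="kubevirt.io",
        version="v1",
        namespace=NAMESPACE,
        plural="virtualmachineinstances",
    )
    for vmi in vmis.get("items", []):
        name = vmi["metadata"]["name"]
        phase = vmi.get("status", {}).get("phase")
        if phase == "Running" and name not in running_at:
            running_at[name] = time.time()
    time.sleep(1)  # re-check once per second
```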
- The collected metrics are categorized into two buckets, performance and scale
  - Performance Metrics: This tells users how the KubeVirt stack is performing. Examples include vmiCreationToRunningSecondsP50 and vmiCreationToRunningSecondsP95. This helps users understand how KubeVirt performance has evolved over the releases; depending on the user deployment, the numbers will vary, because a real production workload could use other KubeVirt extension points like device plugins, a custom scheduler, a different version of kubelet, etc. These numbers are just guidance for how the KubeVirt codebase performs with minimal VMIs, provided all other variables (hardware, Kubernetes version, cluster size, etc.) remain the same. (A worked percentile example follows this list.)
  - Scalability Metrics: This helps users understand KubeVirt's scaling behavior. Examples include PATCH-pods-count for VMI, PATCH-virtualmachineinstances-count for VMI, and UPDATE-virtualmachineinstances-count for VMI. These metrics are measured on the client side to understand the load generated on the apiserver by the KubeVirt stack. This helps users and developers understand the cost of new features going into KubeVirt, and makes end users aware of the most expensive calls coming from KubeVirt in their deployment so they can potentially act on them. (A sketch of deriving such counts from an apiserver audit log also follows this list.)
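To make the percentile metrics concrete, the sketch below computes P50 and P95 values from a list of per-VMI creation-to-Running durations. The duration values are made up for illustration; the real vmiCreationToRunningSecondsP50/P95 numbers are produced by the test suite itself.

```python
# Illustrative sketch: derive P50/P95 values from per-VMI
# creation-to-Running durations (seconds).
import statistics

# Hypothetical sample durations, one entry per VMI.
durations = [4.2, 5.1, 4.8, 6.3, 5.0, 4.9, 7.4, 5.2, 4.7, 5.5]

# statistics.quantiles with n=100 returns 99 cut points;
# index 49 is the 50th percentile, index 94 the 95th.
p50 = statistics.quantiles(durations, n=100)[49]
p95 = statistics.quantiles(durations, n=100)[94]
print(f"creationToRunning P50={p50:.2f}s P95={p95:.2f}s")
```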
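As a rough illustration of how such client-side call counts could be derived, the sketch below tallies apiserver requests per verb and resource from a Kubernetes audit log, filtered to requests whose user agent looks like a KubeVirt component. The log path, the user-agent filter, and the output format are assumptions for this example; the benchmark job may gather these numbers differently.

```python
# Illustrative sketch: count apiserver requests per (verb, resource) pair
# from a Kubernetes audit log (one JSON audit event per line), keeping only
# requests issued by KubeVirt components.
import json
from collections import Counter

AUDIT_LOG = "/var/log/kubernetes/audit.log"  # hypothetical path
KUBEVIRT_AGENTS = ("virt-api", "virt-controller", "virt-handler")

counts = Counter()
with open(AUDIT_LOG) as f:
    for line in f:
        event = json.loads(line)
        if not any(agent in event.get("userAgent", "") for agent in KUBEVIRT_AGENTS):
            continue
        verb = event.get("verb", "").upper()
        resource = (event.get("objectRef") or {}).get("resource", "")
        counts[(verb, resource)] += 1

# Print in a form similar to the metric names above, e.g. "PATCH-pods-count: 123".
for (verb, resource), n in counts.most_common():
    print(f"{verb}-{resource}-count: {n}")
```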
- The performance job is run 3 times a day and metrics are collected.
- The blue dots on the graphs are individual measurements, and the orange line is the weekly average.
- The gray dotted line in the graph is July 6, 2023, denoting the release of v1.0.0.
- The blue dotted line in the graph is September 6, 2023, denoting the change in the k8s provider from v1.25 to v1.27.
- The red dotted line in the graph is November 6, 2023, denoting the release of v1.1.0.
- The green dotted line in the graph is March 5, 2024, denoting the release of v1.2.0.
- The violet dotted line in the graph is March 25, 2024, denoting the change in the k8s provider from v1.27 to v1.29.
NOTE: The scalability metrics were affected by a bug that was fixed before the release here. Hence, the scalability metrics need to be re-evaluated by users with the bug fix in place to get more accurate benchmarks.