Live Migration Support for VMs
Live migration is a process during which a running virtual machine instance (VMI) moves to another compute node while the guest workload continues to run and remains accessible.
Enable Live Migration Support
Live migration is enabled by default in recent versions of KubeVirt. In versions prior to v0.56, it must be enabled through the feature gates: the featureGates field in the KubeVirt custom resource must be expanded by adding LiveMigration to it.
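For example, on clusters running a KubeVirt release older than v0.56, the KubeVirt custom resource could look similar to the following sketch (the kubevirt name and namespace are the upstream defaults and may differ in a given deployment; on very old releases the feature gates may instead live in the kubevirt-config ConfigMap):

apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      featureGates:
        # Only required on KubeVirt releases prior to v0.56
        - LiveMigration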
Limitations for live migration:
- VMs using a PVC (PersistentVolumeClaim) must have a shared ReadWriteMany access mode to be live migrated, as illustrated in the sketch after this list.
- Live migration is not allowed with a pod network binding of bridge interface type.
- Live migration requires ports 49152 and 49153 to be available in the virt-launcher pod. If these ports are explicitly specified in a masquerade interface, live migration will not function.
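As an illustration of the shared access mode requirement, a disk intended for a migratable VM could be backed by a PVC similar to the following sketch (the claim name, size, and storage class are hypothetical and depend on the cluster's storage backend):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk-rwx
spec:
  accessModes:
    # ReadWriteMany lets the source and destination virt-launcher pods attach the same volume
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: shared-rwx-storage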
Initiate Live Migration
Live migration is initiated by posting a VirtualMachineInstanceMigration object to the cluster. The example below starts a migration process for a virtual machine instance vmi-fedora.
cat <<EOF > migration.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migration-job
spec:
  vmiName: vmi-fedora
EOF
kubectl apply -f migration.yaml
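Once the object has been created, the migration can be followed; a minimal sketch, assuming the vmim short name that KubeVirt registers for virtualmachineinstancemigrations:

# Watch the migration object until its phase reaches Succeeded
kubectl get vmim migration-job
# The VMI also reports details in status.migrationState
kubectl get vmi vmi-fedora -o yaml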
Use virtctl to initiate Live Migration
Live migration can also be initiated using virtctl.
virtctl migrate vmi-fedora
Live Migration for SR-IOV based VMs
It is possible to live migrate SR-IOV based VMs, but there are some limitations:
- Specify the MAC address statically, and the name of the interface on the host must be the same on the source and target hosts.
- Specify a static IP address.
Below is an example manifest for using a static MAC address:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    special: vmi-sriov-network
  name: vmi-sriov-test-vm
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: fedora
    spec:
      domain:
        cpu:
          cores: 4
        devices:
          disks:
          - name: containerdisk
            disk:
              bus: virtio
          - name: cloudinitdisk
            disk:
              bus: virtio
          interfaces:
          - macAddress: "02:00:00:00:00:01"
            name: sriov-net1
            sriov: {}
          rng: {}
        resources:
          requests:
            memory: 1024M
      networks:
      - multus:
          networkName: sriov-net1
        name: sriov-net1
      volumes:
      - containerDisk:
          image: docker.io/kubevirt/fedora-cloud-container-disk-demo:devel
        name: containerdisk
      - cloudInitNoCloud:
          networkData: |
            ethernets:
              sriov-net1:
                addresses:
                - 10.10.10.12/24
                gateway: 10.10.10.1
                match:
                  macAddress: "02:00:00:00:00:01"
                nameservers:
                  addresses:
                  - 10.96.0.10
                  search:
                  - default.svc.cluster.local
                  - svc.cluster.local
                  - cluster.local
                set-name: sriov-link-enabled
            version: 2
          userData: |-
            #!/bin/bash
            echo "fedora" | passwd fedora --stdin
        name: cloudinitdisk
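Assuming the manifest above is saved as vmi-sriov-test-vm.yaml (a hypothetical file name) and that the sriov-net1 network attachment definition already exists on the cluster, the VM could be created and checked before migration with:

# Create the virtual machine
kubectl apply -f vmi-sriov-test-vm.yaml
# Confirm the VMI is running before starting a migration
kubectl get vmi vmi-sriov-test-vm
# Optionally log in through the serial console (password is set by the cloud-init userData)
virtctl console vmi-sriov-test-vm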
Below is an example manifest to initiate the live migration:
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migration-job-1
spec:
  vmiName: vmi-sriov-test-vm
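As with the earlier example, this manifest can be applied with kubectl, or the same migration can be started directly with virtctl; the file name below is hypothetical:

# Start the migration from the manifest
kubectl apply -f migration-sriov.yaml
# Equivalent one-line alternative
virtctl migrate vmi-sriov-test-vm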