VM Snapshot and Restore

A VM snapshot allows you to snapshot a running VM with its existing configuration and restore the VM back to that configuration point.

Snapshot a VM

Snapshotting a VM is supported for both online and offline VMs.

When snapshotting a running VM, the controller checks for the QEMU guest agent in the VM. If the agent is present, it freezes the VM filesystems before taking the snapshot and unfreezes them afterwards. It is recommended to take online snapshots with the guest agent installed for a more consistent snapshot; if the agent is not present, a best-effort snapshot is taken.
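
To confirm that the guest agent is connected before taking an online snapshot, you can check the VMI conditions. For example (assuming a VMI named pvc-test-vm, the VM used in the steps below):

kubectl get vmi pvc-test-vm -o jsonpath='{.status.conditions[?(@.type=="AgentConnected")].status}'

The command prints True when the qemu-guest-agent is running inside the guest.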

To enable snapshot functionality, the system requires the volume snapshot CRDs and the snapshot controller to be created on the system. Follow the steps below:

  1. Run the following to install the snapshot CRDs and the snapshot controller on Kubernetes.
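
    One way to do this is with the upstream kubernetes-csi external-snapshotter manifests; the release tag (v6.2.1 below) is illustrative and should be chosen to match your cluster:

    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v6.2.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v6.2.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v6.2.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v6.2.1/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v6.2.1/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml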

  2. Create a VolumeSnapshotClass for CephFS and RBD:

    cat <<EOF > cephfs-storageclass.yaml
    ---
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: csi-cephfsplugin-snapclass
    driver: cephfs.csi.ceph.com
    parameters:
      clusterID: 60ee9439-6204-4b11-9b02-3f2c2f0a4344
      csi.storage.k8s.io/snapshotter-secret-name: ceph-pool-kube-cephfs-data
      csi.storage.k8s.io/snapshotter-secret-namespace: default
    deletionPolicy: Delete
    EOF
    cat <<EOF > rbd-storageclass.yaml
    ---
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: csi-rbdplugin-snapclass
    driver: rbd.csi.ceph.com
    parameters:
      clusterID: 60ee9439-6204-4b11-9b02-3f2c2f0a4344
      csi.storage.k8s.io/snapshotter-secret-name: ceph-pool-kube-rbd
      csi.storage.k8s.io/snapshotter-secret-namespace: default
    deletionPolicy: Delete
    EOF

    Note

    Get the cluster ID from the output of kubectl describe sc cephfs and kubectl describe sc rbd.
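
    The heredocs above only write the manifest files. Apply them and confirm that both snapshot classes are registered:

    kubectl apply -f cephfs-storageclass.yaml -f rbd-storageclass.yaml
    kubectl get volumesnapshotclass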

  3. Create a snapshot manifest for the running VM using the example YAML below:

    cat <<EOF > cirros-snapshot.yaml
    apiVersion: snapshot.kubevirt.io/v1alpha1
    kind: VirtualMachineSnapshot
    metadata:
      name: snap-cirros
    spec:
      source:
        apiGroup: kubevirt.io
        kind: VirtualMachine
        name: pvc-test-vm
      failureDeadline: 3m
    EOF
  4. Apply the snapshot manifest and verify that the snapshot is created successfully:

    kubectl apply -f cirros-snapshot.yaml
    [sysadmin@controller-0 kubevirt-GA-testing(keystone_admin)]$ kubectl get VirtualMachineSnapshot
    NAME          SOURCEKIND       SOURCENAME    PHASE       READYTOUSE   CREATIONTIME   ERROR
    snap-cirros   VirtualMachine   pvc-test-vm   Succeeded   true         28m
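
A VirtualMachineSnapshot creates a VolumeSnapshot for each persistent volume attached to the VM. If READYTOUSE does not become true, inspecting those objects is a useful first check (the snapshot name below is a placeholder):

kubectl get volumesnapshot
kubectl describe volumesnapshot <volumesnapshot-name>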

Restore a VM Snapshot

Example manifest to restore the snapshot:

cat <<EOF > cirros-restore.yaml
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineRestore
metadata:
  name: restore-cirros
spec:
  target:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: pvc-test-vm
  virtualMachineSnapshotName: snap-cirros
EOF
kubectl apply -f cirros-restore.yaml

Verify the snapshot restore:

[sysadmin@controller-0 kubevirt-GA-testing(keystone_admin)]$ kubectl get VirtualMachineRestore
NAME               TARGETKIND       TARGETNAME    COMPLETE   RESTORETIME   ERROR
restore-cirros     VirtualMachine   pvc-test-vm   true       34m
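
A restore generally requires the target VM to be stopped. Once the VirtualMachineRestore reports COMPLETE true, the VM can be started again, for example:

virtctl start pvc-test-vm
kubectl get vmi pvc-test-vm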