VM Snapshot and Restore
VM snapshot and restore allows you to snapshot a running VM with its existing configuration and restore the VM back to that configuration point.
Snapshot a VM
Snapshotting a VM is supported for online and offline VMs.
When snapshotting a running VM, the controller checks for the QEMU guest agent in the VM. If the agent exists, it freezes the VM filesystems before taking the snapshot and unfreezes them after the snapshot. It is recommended to take online snapshots with the guest agent present for a more consistent snapshot; if the agent is not present, a best-effort snapshot is taken.
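To check whether the guest agent is connected before taking an online snapshot, the VMI conditions can be inspected. The command below is a sketch assuming a VM instance named pvc-test-vm in the current namespace:
kubectl get vmi pvc-test-vm -o jsonpath='{.status.conditions[?(@.type=="AgentConnected")].status}'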
To enable snapshot functionality, the system requires the volume snapshot CRDs and the snapshot controller to be created on the system. Follow the steps below:
Run the following commands to install the volume snapshot CRDs and the snapshot controller on Kubernetes:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml
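To confirm that the CRDs and the snapshot controller were created, check for the VolumeSnapshot CRDs and the controller pod. The commands below are a sketch and assume the controller was deployed into the kube-system namespace by the upstream manifest:
kubectl get crd | grep volumesnapshot
kubectl get pods -n kube-system | grep snapshot-controller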
Create VolumeSnapshotClass for cephfs and rbd:
cat <<EOF > cephfs-storageclass.yaml
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-cephfsplugin-snapclass
driver: cephfs.csi.ceph.com
parameters:
  clusterID: 60ee9439-6204-4b11-9b02-3f2c2f0a4344
  csi.storage.k8s.io/snapshotter-secret-name: ceph-pool-kube-cephfs-data
  csi.storage.k8s.io/snapshotter-secret-namespace: default
deletionPolicy: Delete
EOF

cat <<EOF > rbd-storageclass.yaml
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-rbdplugin-snapclass
driver: rbd.csi.ceph.com
parameters:
  clusterID: 60ee9439-6204-4b11-9b02-3f2c2f0a4344
  csi.storage.k8s.io/snapshotter-secret-name: ceph-pool-kube-rbd
  csi.storage.k8s.io/snapshotter-secret-namespace: default
deletionPolicy: Delete
EOF
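Apply both manifests and confirm that the snapshot classes exist, for example:
kubectl apply -f cephfs-storageclass.yaml -f rbd-storageclass.yaml
kubectl get volumesnapshotclass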
Note
Get the cluster ID from:
kubectl describe sc cephfs rbd
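To extract only the clusterID value, a jsonpath query can be used instead; a sketch assuming the storage class is named cephfs:
kubectl get sc cephfs -o jsonpath='{.parameters.clusterID}'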
Create a snapshot manifest of the running VM using the example YAML below:
cat <<EOF > cirros-snapshot.yaml
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
  name: snap-cirros
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: pvc-test-vm
  failureDeadline: 3m
EOF
Apply the snapshot manifest and verify that the snapshot is created successfully:
kubectl apply -f cirros-snapshot.yaml

[sysadmin@controller-0 kubevirt-GA-testing(keystone_admin)]$ kubectl get VirtualMachineSnapshot
NAME          SOURCEKIND       SOURCENAME    PHASE       READYTOUSE   CREATIONTIME   ERROR
snap-cirros   VirtualMachine   pvc-test-vm   Succeeded   true         28m
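For more detail on the snapshot contents and any error conditions, the snapshot object can be described; a sketch using the snapshot name from the example above:
kubectl describe virtualmachinesnapshot snap-cirros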
Example manifest to restore the snapshot:
cat <<EOF > cirros-restore.yaml
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineRestore
metadata:
  name: restore-cirros
spec:
  target:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: pvc-test-vm
  virtualMachineSnapshotName: snap-cirros
EOF
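A restore generally requires the target VM to be stopped first. Assuming the virtctl client is installed, the VM can be stopped with:
virtctl stop pvc-test-vm
Then apply the restore manifest: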
kubectl apply -f cirros-restore.yaml
Verify the snapshot restore:
[sysadmin@controller-0 kubevirt-GA-testing(keystone_admin)]$ kubectl get VirtualMachineRestore
NAME             TARGETKIND       TARGETNAME    COMPLETE   RESTORETIME   ERROR
restore-cirros   VirtualMachine   pvc-test-vm   true       34m
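After the restore completes, the VM can be started again and checked. A minimal sketch, assuming the virtctl client is installed:
virtctl start pvc-test-vm
kubectl get vmi pvc-test-vm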