Interfaces and Networks
Connecting a VM to a network consists of two parts. First, networks are
specified in spec.networks. Then, interfaces backed by the networks are
added to the VM by specifying them in spec.domain.devices.interfaces.
Each interface must have a corresponding network with the same name.
An interface defines a virtual network interface of a virtual machine (also called the frontend). A network specifies the backend of an interface and declares which logical or physical device it is connected to (also called the backend).
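For example, an interface and its backing network are paired by name. The following minimal fragment (an illustration only, using the default pod network with the masquerade binding) shows matching spec.domain.devices.interfaces and spec.networks entries:
spec:
  domain:
    devices:
      interfaces:
      - name: default        # frontend: the virtual NIC seen by the guest
        masquerade: {}
  networks:
  - name: default            # backend: must match the interface name
    pod: {}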
MacVTap
In MacVTap mode, virtual machines are directly exposed to the Kubernetes node's L2 network. This is achieved by 'extending' an existing network interface with a virtual device that has its own MAC address.
MacVTap interfaces are feature gated; to enable them, follow the instructions below to activate the Macvtap feature gate (case sensitive).
How to Activate a Feature Gate
cat << END > enable-feature-gate.yaml
---
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      featureGates:
      - LiveMigration
      - Macvtap
END
kubectl apply -f enable-feature-gate.yaml
Note
Make sure to add all existing feature gates to the overrides file.
Alternatively, the existing kubevirt custom resource can be edited directly:
kubectl edit kubevirt kubevirt -n kubevirt
...
spec:
  configuration:
    developerConfiguration:
      featureGates:
      - DataVolumes
      - LiveMigration
      - Macvtap
Note
The names of the feature gates are case sensitive.
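To confirm which feature gates are active, the KubeVirt custom resource can be queried, for example:
kubectl get kubevirt kubevirt -n kubevirt \
  -o jsonpath='{.spec.configuration.developerConfiguration.featureGates}'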
Below is a usage example of a MacVTap interface used by a VM:
Create the network attachment for the MacVTap network:
kind: NetworkAttachmentDefinition
apiVersion: k8s.cni.cncf.io/v1
metadata:
  name: macvtapnetwork
  annotations:
    k8s.v1.cni.cncf.io/resourceName: macvtap.network.kubevirt.io/dataplane1
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "macvtapnetwork",
    "type": "macvtap",
    "mtu": 1500
  }'
Note
By applying this YAML, the system will create 10 macvtap.network.kubevirt.io/dataplane1 network attachments or interfaces.
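To verify that the network attachment definition exists and that the macvtap resources are advertised by the node, you can run, for example:
kubectl get network-attachment-definitions macvtapnetwork
kubectl describe node | grep -i macvtap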
Now you can create the VM using the MacVTap network, for example:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  labels:
    special: vmi-host-network
  name: vmi-host-network
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: fedora
    spec:
      domain:
        cpu:
          cores: 1
        devices:
          disks:
          - name: containerdisk
            disk:
              bus: virtio
          - name: cloudinitdisk
            disk:
              bus: virtio
          interfaces:
          - name: hostnetwork
            macvtap: {}
        resources:
          requests:
            memory: 1024M
      networks:
      - name: hostnetwork
        multus:
          networkName: macvtapnetwork
      volumes:
      - name: containerdisk
        containerDisk:
          image: docker.io/kubevirt/fedora-cloud-container-disk-demo:devel
      - name: cloudinitdisk
        cloudInitNoCloud:
          userData: |-
            #!/bin/bash
            echo "fedora" |passwd fedora --stdin
Multus
It is also possible to connect VMIs to secondary networks using Multus. This assumes that Multus is installed across your cluster and a corresponding NetworkAttachmentDefinition was created.
Example:
Note
First create the respective network attachment, for example if you want to use MacVTap or SR-IOV for a secondary interface.
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  labels:
    special: vmi-host-network
  name: vmi-host-network-2
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: fedora
    spec:
      domain:
        cpu:
          cores: 2
        devices:
          disks:
          - name: containerdisk
            disk:
              bus: virtio
          - name: cloudinitdisk
            disk:
              bus: virtio
          interfaces:
          - name: default
            masquerade: {}
          - name: hostnetwork
            macvtap: {}
        resources:
          requests:
            memory: 1024M
      networks:
      - name: default
        pod: {}
      - name: hostnetwork
        multus:
          networkName: macvtapnetwork
      volumes:
      - name: containerdisk
        containerDisk:
          image: docker.io/kubevirt/fedora-cloud-container-disk-demo:devel
      - name: cloudinitdisk
        cloudInitNoCloud:
          userData: |-
            #!/bin/bash
            echo "fedora" |passwd fedora --stdin
In the example manifest above, the VM uses the first interface as the default pod network, and the second interface is a MacVTap interface mapped using Multus.
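Once the VMI is running, the attached interfaces can be inspected from its status, for example:
kubectl get vmi vmi-host-network-2 -o jsonpath='{.status.interfaces}'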
SRIOV
In SR-IOV mode, virtual machines are directly exposed to an SR-IOV PCI device, usually allocated by the Intel SR-IOV device plugin. The device is passed through into the guest operating system as a host device, using the vfio userspace interface, to maintain high networking performance.
Note
The SR-IOV device plugin is part of the default platform functionality.
Note
KubeVirt relies on the vfio userspace driver to pass SR-IOV devices into the guest. As a result, when configuring SR-IOV, define a pool of VF resources that uses driver: vfio.
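To check that a vfio-backed VF resource pool is advertised before creating the network attachment, the node's allocatable resources can be inspected, for example (the resource name matches the example below):
kubectl describe node | grep -i pci_sriov_net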
Example:
Note
Make sure an SR-IOV interface is configured on a data network (sriovnet0 in the example below) on the host. For more details, see provisioning-sr-iov-interfaces-using-the-cli.
Create the network attachment:
apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: sriov-net1 annotations: k8s.v1.cni.cncf.io/resourceName: intel.com/pci_sriov_net_sriovnet0 spec: config: '{ "type": "sriov", "vlan": 5, "cniVersion": "0.3.1", "name": "sriov-net1" }'
Launch the VM:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  labels:
    special: vmi-sriov-network
  name: vmi-sriov-network
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: fedora
    spec:
      domain:
        cpu:
          cores: 1
        devices:
          disks:
          - name: containerdisk
            disk:
              bus: virtio
          - name: cloudinitdisk
            disk:
              bus: virtio
          interfaces:
          - masquerade: {}
            name: default
          - name: sriov-net1
            sriov: {}
        resources:
          requests:
            memory: 1024M
      networks:
      - name: default
        pod: {}
      - multus:
          networkName: sriov-net1
        name: sriov-net1
      volumes:
      - name: containerdisk
        containerDisk:
          image: docker.io/kubevirt/fedora-cloud-container-disk-demo:devel
      - name: cloudinitdisk
        cloudInitNoCloud:
          userData: |-
            #!/bin/bash
            echo "fedora" |passwd fedora --stdin
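Once the VMI is running, you can verify both interfaces, for example by listing the VMI and, assuming virtctl is installed, opening the console to look for the passed-through VF inside the guest (it appears as a PCI Ethernet device, e.g. via lspci):
kubectl get vmi vmi-sriov-network
virtctl console vmi-sriov-network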