.. WARNING: Add no lines of text between the label immediately following .. and the title.

.. _interface-and-networks-7cadb7bdb80b:

======================
Interface and Networks
======================

Connecting a |VM| to a network consists of two parts. First, networks are
specified in ``spec.networks``. Then, interfaces backed by the networks are
added to the |VM| by specifying them in ``spec.domain.devices.interfaces``.
Each interface must have a corresponding network with the same name.

An interface defines a virtual network interface of a virtual machine (also
called a frontend). A network specifies the backend of an interface and
declares which logical or physical device it is connected to (also called a
backend).

MacVTap
-------

In MacVTap mode, virtual machines are directly exposed to the Kubernetes
nodes' L2 network. This is achieved by 'extending' an existing network
interface with a virtual device that has its own MAC address.

MacVTap interfaces are feature gated. To enable them, follow the instructions
below to activate the MacVTap feature gate (the name is case sensitive).

How to Activate a Feature Gate
------------------------------

.. code-block:: none

    cat << END > enable-feature-gate.yaml
    ---
    apiVersion: kubevirt.io/v1
    kind: KubeVirt
    metadata:
      name: kubevirt
      namespace: kubevirt
    spec:
      configuration:
        developerConfiguration:
          featureGates:
          - LiveMigration
          - Macvtap
    END

    kubectl apply -f enable-feature-gate.yaml

.. note::

    Make sure to add all existing feature gates in the overrides file.

Alternatively, the existing KubeVirt custom resource can be altered:

.. code-block:: none

    kubectl edit kubevirt kubevirt -n kubevirt

    ...
    spec:
      configuration:
        developerConfiguration:
          featureGates:
          - DataVolumes
          - LiveMigration
          - Macvtap

.. note::

    The names of the feature gates are case sensitive.

Below is a usage example of a MacVTap interface used by a |VM|.

Create the network attachment for the MacVTap network:

.. code-block:: none

    kind: NetworkAttachmentDefinition
    apiVersion: k8s.cni.cncf.io/v1
    metadata:
      name: macvtapnetwork
      annotations:
        k8s.v1.cni.cncf.io/resourceName: macvtap.network.kubevirt.io/dataplane1
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "name": "macvtapnetwork",
        "type": "macvtap",
        "mtu": 1500
      }'

.. note::

    By applying this yaml, the system will create 10
    ``macvtap.network.kubevirt.io/dataplane1`` network attachments or
    interfaces.

Now you can create the |VM| using the MacVTap network, for example:

.. code-block:: none

    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachine
    metadata:
      labels:
        special: vmi-host-network
      name: vmi-host-network
    spec:
      running: true
      template:
        metadata:
          labels:
            kubevirt.io/size: small
            kubevirt.io/domain: fedora
        spec:
          domain:
            cpu:
              cores: 1
            devices:
              disks:
              - name: containerdisk
                disk:
                  bus: virtio
              - name: cloudinitdisk
                disk:
                  bus: virtio
              interfaces:
              - name: hostnetwork
                macvtap: {}
            resources:
              requests:
                memory: 1024M
          networks:
          - name: hostnetwork
            multus:
              networkName: macvtapnetwork
          volumes:
          - name: containerdisk
            containerDisk:
              image: docker.io/kubevirt/fedora-cloud-container-disk-demo:devel
          - name: cloudinitdisk
            cloudInitNoCloud:
              userData: |-
                #!/bin/bash
                echo "fedora" | passwd fedora --stdin
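To confirm that the MacVTap-backed interface is attached, a quick check along
the following lines can be used. This is a minimal sketch; it assumes the two
manifests above were saved locally as ``macvtapnetwork.yaml`` and
``vmi-host-network.yaml`` (hypothetical file names):

.. code-block:: none

    # Apply the network attachment and the virtual machine manifests
    kubectl apply -f macvtapnetwork.yaml
    kubectl apply -f vmi-host-network.yaml

    # Confirm the NetworkAttachmentDefinition exists
    kubectl get network-attachment-definitions macvtapnetwork

    # Once the VMI is running, its status lists the interfaces,
    # including the MacVTap-backed 'hostnetwork' interface
    kubectl get vmi vmi-host-network -o jsonpath='{.status.interfaces}'

    # Log in to the guest console (password 'fedora' is set by the
    # cloud-init script above)
    virtctl console vmi-host-network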
Multus
------

It is also possible to connect VMIs to secondary networks using Multus. This
assumes that Multus is installed across your cluster and that a corresponding
``NetworkAttachmentDefinition`` |CRD| was created.

Example:

.. note::

    First create the respective network attachment, for example if you want
    to use MacVTap or |SRIOV| for a secondary interface.

.. code-block:: none

    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachine
    metadata:
      labels:
        special: vmi-host-network
      name: vmi-host-network-2
    spec:
      running: true
      template:
        metadata:
          labels:
            kubevirt.io/size: small
            kubevirt.io/domain: fedora
        spec:
          domain:
            cpu:
              cores: 2
            devices:
              disks:
              - name: containerdisk
                disk:
                  bus: virtio
              - name: cloudinitdisk
                disk:
                  bus: virtio
              interfaces:
              - name: default
                masquerade: {}
              - name: hostnetwork
                macvtap: {}
            resources:
              requests:
                memory: 1024M
          networks:
          - name: default
            pod: {}
          - name: hostnetwork
            multus:
              networkName: macvtapnetwork
          volumes:
          - name: containerdisk
            containerDisk:
              image: docker.io/kubevirt/fedora-cloud-container-disk-demo:devel
          - name: cloudinitdisk
            cloudInitNoCloud:
              userData: |-
                #!/bin/bash
                echo "fedora" | passwd fedora --stdin

In the example manifest above, the |VM| uses the first interface as the
default pod network, and the second interface is a MacVTap interface mapped
using Multus.

SRIOV
-----

In |SRIOV| mode, virtual machines are directly exposed to an |SRIOV| |PCI|
device, usually allocated by the Intel |SRIOV| device plugin. The device is
passed through into the guest operating system as a host device, using the
|VFIO| userspace interface, to maintain high networking performance.

.. note::

    In |prod|, the |SRIOV| device plugin is part of the default platform
    functionality.

.. note::

    KubeVirt relies on the |VFIO| userspace driver to pass |PCI| devices into
    the |VMI| guest. As a result, when configuring |SRIOV|, define a pool of
    VF resources that uses ``driver: vfio``.

Example:

.. note::

    Make sure an |SRIOV| interface is configured on a ``DATANETWORK``
    (``sriovnet0`` in the example below) on the |prod| host. For more details,
    see :ref:`provisioning-sr-iov-interfaces-using-the-cli`.

#.  Create the network attachment.

    .. code-block:: none

        apiVersion: "k8s.cni.cncf.io/v1"
        kind: NetworkAttachmentDefinition
        metadata:
          name: sriov-net1
          annotations:
            k8s.v1.cni.cncf.io/resourceName: intel.com/pci_sriov_net_sriovnet0
        spec:
          config: '{
            "type": "sriov",
            "vlan": 5,
            "cniVersion": "0.3.1",
            "name": "sriov-net1"
          }'

#.  Launch the |VM|.

    .. code-block:: none

        apiVersion: kubevirt.io/v1alpha3
        kind: VirtualMachine
        metadata:
          labels:
            special: vmi-sriov-network
          name: vmi-sriov-network
        spec:
          running: true
          template:
            metadata:
              labels:
                kubevirt.io/size: small
                kubevirt.io/domain: fedora
            spec:
              domain:
                cpu:
                  cores: 1
                devices:
                  disks:
                  - name: containerdisk
                    disk:
                      bus: virtio
                  - name: cloudinitdisk
                    disk:
                      bus: virtio
                  interfaces:
                  - masquerade: {}
                    name: default
                  - name: sriov-net1
                    sriov: {}
                resources:
                  requests:
                    memory: 1024M
              networks:
              - name: default
                pod: {}
              - multus:
                  networkName: sriov-net1
                name: sriov-net1
              volumes:
              - name: containerdisk
                containerDisk:
                  image: docker.io/kubevirt/fedora-cloud-container-disk-demo:devel
              - name: cloudinitdisk
                cloudInitNoCloud:
                  userData: |-
                    #!/bin/bash
                    echo "fedora" | passwd fedora --stdin
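After the |VM| is launched, the |SRIOV| attachment can be checked with
commands along the following lines. This is a rough sketch, assuming the two
manifests above were saved as ``sriov-net1.yaml`` and
``vmi-sriov-network.yaml`` (hypothetical file names):

.. code-block:: none

    # Create the network attachment and launch the virtual machine
    kubectl apply -f sriov-net1.yaml
    kubectl apply -f vmi-sriov-network.yaml

    # Verify that VF resources are advertised on the worker node
    # (replace <node-name>), for example intel.com/pci_sriov_net_sriovnet0
    kubectl get node <node-name> -o jsonpath='{.status.allocatable}'

    # Check that the VMI is running and reports both interfaces
    kubectl get vmi vmi-sriov-network
    kubectl get vmi vmi-sriov-network -o jsonpath='{.status.interfaces}'

    # Inside the guest, the SR-IOV VF appears as an additional NIC
    virtctl console vmi-sriov-network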