Ports and Port Groups
A vSwitch allows several different types of communication, including communication to and from the VMkernel and between VMs. To help distinguish between these different types of communication, ESXi uses ports and port groups. A vSwitch without any ports or port groups is like a physical switch that has no physical RJ45 ports: there is no way to connect anything to it, and it is therefore useless.
Port groups differentiate between the types of traffic passing through a vSwitch, and they also operate as a boundary for communication and/or security policy configuration. There are two different types of ports and port groups that you can configure in a vSwitch:
* VMkernel port
* VM port group
Because a vSwitch cannot be used without at least one port or port group, you will see that the vSphere Web Client combines the creation of new vSwitches with the creation of new ports or port groups.
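The same pairing is visible from the ESXi Shell: esxcli will create a bare vSwitch, but it is not usable until a port group is added. The following is a minimal sketch; the names vSwitch1 and "VM Network 2" are illustrative, not defaults.

    # Create a new standard vSwitch (name is illustrative)
    esxcli network vswitch standard add --vswitch-name=vSwitch1

    # The bare vSwitch is useless until it has somewhere to connect things,
    # so add a VM port group for virtual machine traffic
    esxcli network vswitch standard portgroup add --portgroup-name="VM Network 2" --vswitch-name=vSwitch1

    # Confirm the vSwitch and its port group
    esxcli network vswitch standard list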
Uplinks
Uplinks provide external connectivity to the vSwitches. Although a vSwitch allows communication between VMs connected to the vSwitch, it cannot communicate with the physical network without uplinks. Just as a physical switch must be connected to other switches to communicate over the network, vSwitches must be connected to the ESXi host's physical NICs as uplinks to communicate with the rest of the network.
Unlike ports and port groups, uplinks are not required for a vSwitch to work. Just as computers connected to an isolated physical switch, one with no uplinks to other switches, can still communicate with each other (but not with any system outside that switch), VMs connected to a vSwitch without any uplinks can still communicate with each other but not with VMs on other vSwitches or with physical systems.
This sort of configuration is known as an internal-only vSwitch. It can be useful for allowing VMs to communicate only with each other. VMs that communicate through an internal-only vSwitch do not pass any traffic through a physical adapter on the ESXi host. Communication between VMs connected to an internal-only vSwitch takes place entirely in software and happens at whatever speed the VMkernel can perform the task, often referred to as system bus speed.
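To make the internal-only idea concrete, here is a sketch using esxcli: creating an internal-only vSwitch is simply a matter of never attaching an uplink. The names below are illustrative.

    # A vSwitch with a port group but no uplinks: VMs placed on
    # the Isolated port group can reach only each other
    esxcli network vswitch standard add --vswitch-name=vSwitchInternal
    esxcli network vswitch standard portgroup add --portgroup-name=Isolated --vswitch-name=vSwitchInternal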
For VMs to communicate with resources beyond the VMs hosted on the local ESXi host, a vSwitch must be configured to use at least one physical network adapter, or uplink. A vSwitch can be bound to two or more physical network adapters.
A vSwitch bound to at least one physical network adapter allows VMs to establish communication with physical servers on the network or with VMs on other ESXi hosts. That assumes, of course, that the VMs on the other ESXi hosts are connected to a vSwitch that is bound to at least one physical network adapter. Just as in a physical network, a virtual network requires connectivity from one end to the other.
The vSwitch associated with a physical network adapter provides VMs with the amount of bandwidth the physical adapter is configured to support. All the VMs will share this bandwidth when communicating with physical machines or with VMs on other ESXi hosts. In this way, a vSwitch is once again similar to a physical switch. For example, a vSwitch bound to a network adapter with a 1 Gbps maximum speed provides up to 1 Gbps of bandwidth for the VMs connected to it; similarly, a physical switch with a 1 Gbps uplink to another physical switch provides up to 1 Gbps of bandwidth between the two switches for the systems attached to them.
A vSwitch can also be bound to multiple physical adapters. In this configuration, the vSwitch is sometimes referred to as a NIC team, but usually NIC teaming refers specifically to the grouping of network connections, not to a vSwitch with multiple uplinks.
A limitation worth noting: although a single vSwitch can be associated with multiple physical adapters, a single physical adapter cannot be connected to multiple vSwitches. ESXi hosts support up to 32 e1000 network adapters, and up to eight 10 Gbps Ethernet adapters.
A vSwitch can have a maximum of 32 uplinks. In other words, a single vSwitch can use up to 32 physical network adapters to send and receive traffic from the physical switches. Binding multiple physical NICs to a vSwitch offers the advantage of redundancy and load distribution.
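As a sketch of what this looks like from the ESXi Shell, the following binds two physical NICs to a vSwitch; vmnic1 and vmnic2 are placeholders, so list the host's actual adapters first.

    # Show the host's physical adapters and their link speeds
    esxcli network nic list

    # Bind two physical NICs to the vSwitch as uplinks for
    # redundancy and load distribution (adapter names are placeholders)
    esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1
    esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch1

How traffic is actually spread across the uplinks depends on the NIC teaming policy configured on the vSwitch or port group.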
Configuring Management Networking
Management traffic is a special type of network traffic that runs across a VMkernel port. VMkernel ports provide network access for the VMkernel's TCP/IP stack, which is separate and independent from the network traffic generated by VMs. The ESXi management network, however, is treated a bit differently than "regular" VMkernel traffic in two ways:
* First, the ESXi management network is created automatically when you install ESXi. For the host to be reachable across the network at all, a management network must be configured and working, so the ESXi installer sets one up automatically.
* Second, the Direct Console User Interface (DCUI), the user interface available when you are working at the physical console of a server running ESXi, provides a mechanism for configuring or reconfiguring the management network, but not any other form of networking on that host, apart from a few options for resetting the network configuration.
Although the vSphere Web Client offers an option to enable management traffic when configuring networking, it's unlikely that you will use this option very often. After all, for you to configure management networking from within the vSphere Web Client, the ESXi host must already have functional management networking in place (vCenter Server communicates with ESXi over the management network). You might use this option to create additional management interfaces.
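If you do decide to create an additional management interface, one way to do it is from the ESXi Shell. This is a sketch only; the port group name Mgmt2, the interface name vmk1, and the IP addressing are illustrative values you would replace with your own.

    # Create a port group for the second management interface
    esxcli network vswitch standard portgroup add --portgroup-name=Mgmt2 --vswitch-name=vSwitch0

    # Create a VMkernel interface on that port group
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=Mgmt2

    # Assign a static IPv4 address (illustrative values)
    esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.1.51 --netmask=255.255.255.0 --type=static

    # Tag the interface to carry management traffic
    esxcli network ip interface tag add --interface-name=vmk1 --tagname=Management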