Virtualization: Fundamentals and Concepts

Virtualization allows multiple operating system instances to run simultaneously on a single computer by decoupling the hardware from any single operating system.  Each operating system instance runs inside a VM (virtual machine) and is controlled by the Virtual Machine Monitor (VMM), also referred to as the hypervisor.  Because the hypervisor sits between the VMs and the bare-metal hardware, it controls how the VMs use CPU, memory, and storage, and can even migrate a running VM from one server to another.

Virtualization abstracts the hardware of a device from the instances that use it.  Each instance runs independently of the others on the abstraction layer beneath it, and it is this layer that is virtualized.  The goal is to allow multiple virtual instances (virtual machines, i.e. VMs) to run on a single piece of hardware, with each virtual instance appearing as a discrete piece of hardware to the software that runs on top of it.


Virtualization is about consolidating traditional IT resources into more easily managed, centralized solutions.  This consolidation typically increases scalability, improves resource utilization, and reduces administrative overhead.

Different forms of virtualization commonly used in enterprise environments are:

  • Hardware Virtualization:  Uses software called a hypervisor (VMware ESXi, Microsoft Hyper-V, or Citrix XenServer) to abstract the physical characteristics of a server.  This permits multiple virtual instances of operating systems to run on a single physical server.  The virtual machines are unaware that they are sharing physical hardware, and the resources of the physical server are better utilized.
  • Software Virtualization:  Streams a remotely installed application from a server to a client (Citrix XenApp or Microsoft App-V) or packages up an application to run in a standalone sandbox without requiring local installation (VMware ThinApp).  Because the applications are no longer installed on client desktops, administrators can more easily administer and distribute applications and their patches from a single networked location.
  • Desktop Virtualization:  Similar to hardware virtualization in that it separates a personal computer desktop environment from the physical machine, typically by remotely streaming the desktop (VMware View or Citrix XenDesktop).  In some cases the entire desktop may be cached locally, but most solutions simply provide a remote keyboard, video and mouse (KVM) interface through a locally installed application (Citrix Receiver or Microsoft Remote Desktop Connection).  The desktops run on high-performance servers that are centrally managed and easily deployed by IT.
  • Storage Virtualization:  Abstracts logical storage from physical storage.  Large pools of disks are divided into smaller logical units that are presented as a single volume but may actually span across many physical disks.  This improves performance, increases drive space utilization, and provides redundancy.
  • Network Virtualization:  Either separates physically attached networks into different virtual networks or combines many separate virtual networks so they share the segments of a large physical network.  By creating virtual networks, administrators can logically group machines and their traffic while making better use of the physical networking infrastructure.









Configuring the primary Management network interface vmk0

Verify active links:

esxcli network nic list


Remove the disconnected uplinks identified in the previous step (VMware adds them all):

esxcli network vswitch standard uplink remove --uplink-name=vmnic0 --vswitch-name=vSwitch0

esxcli network vswitch standard uplink remove --uplink-name=vmnic1 --vswitch-name=vSwitch0


Verify standard vSwitch list:

esxcli network vswitch standard list


Set iphash Load balance and MTU:

esxcli network vswitch standard set --mtu 9000 -v vSwitch0

esxcli network vswitch standard policy failover set -l iphash -v vSwitch0

esxcli network vswitch standard policy failover set --active-uplinks=vmnic2,vmnic3 -v vSwitch0

esxcli network vswitch standard portgroup policy failover set -p "Management Network" -l iphash

esxcli network vswitch standard portgroup policy failover set -p "Management Network" --active-uplinks=vmnic2,vmnic3


Verify iphash load balancing and MTU settings:

esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
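
The failover policy output shows the load-balancing policy but not the MTU; one quick way to confirm the MTU change is simply to grep the vSwitch listing:

esxcli network vswitch standard list | grep -i MTU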


Set DNS server search:

esxcli network ip dns server add --server 10.96.5.43

esxcli network ip dns server add --server 10.96.5.27

esxcli network ip dns search add --domain cscinfo.com

esxcli network ip dns server remove --server 192.168.43.77


Verify DNS server and DNS search:

esxcli network ip dns server list

esxcli network ip dns search list


Configure vmk0 for primary IP:

esxcli network ip interface ipv4 set --interface-name vmk0 --ipv4 10.133.72.93 --netmask 255.255.254.0 --type static
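
Verify the new vmk0 address (the IPv4 listing shows the address and netmask per VMkernel interface):

esxcli network ip interface ipv4 get | grep vmk0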


Create VLANs in ESXi:


Verify portgroup list:

esxcli network vswitch standard portgroup list


Add portgroup:

esxcli network vswitch standard portgroup add --portgroup-name=<portgroup name> --vswitch-name=<vswitch name>


Tag portgroup:

esxcli network vswitch standard portgroup set --portgroup-name=<portgroup name> --vlan-id=<vlan id>


Example:

esxcli network vswitch standard portgroup add --portgroup-name="BK-APIPortal-Preproduction" --vswitch-name="vSwitch0"

esxcli network vswitch standard portgroup set --portgroup-name="BK-APIPortal-Preproduction" --vlan-id=233
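
Verify that the new portgroup exists and carries the expected VLAN ID (the portgroup list includes a VLAN ID column):

esxcli network vswitch standard portgroup list | grep "BK-APIPortal-Preproduction"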


















VMware Virtualization products: an overview

VMware ESXi
The Type 1 hypervisor installed directly on bare-metal hardware.  No service console.  Installation footprint of about 130 MB.  The core of the virtualization functionality is in the VMkernel, which provides CPU scheduling, memory management, and virtual switch data processing for all the VMs.

ESXi 6.0 maximums: 4,096 vCPUs, 320 logical CPUs, 32 vCPUs per core, 6 TB RAM.

VMware vCenter Server
Centralized management platform and framework for all ESXi hosts and their VMs: deploy, manage, automate, and secure a virtual infrastructure.  Uses a backend database (Microsoft SQL Server or Oracle) to store its data.  Also provides advanced features such as vSphere vMotion, vSphere DRS, HA, FT, EVC (Enhanced vMotion Compatibility), Storage I/O Control, vDS, Network I/O Control, and Storage DRS.

vSphere Update Manager
Add-on package for vCenter Server to keep ESXi hosts and VMs patched with the latest updates.  Automated installation of patches for ESXi hosts.  Full integration with vSphere features like DRS.

VMware vSphere Desktop Client
Windows-based.  Can also be used to manage individual ESXi hosts.  Rich GUI to manage day-to-day tasks.

VMware vSphere Web Client
Came with vSphere 6.0.  Dynamic, web-based UI for managing a virtual infrastructure.

VMware vCenter Orchestrator
A workflow automation engine that is automatically installed with every instance of vCenter Server.  The vRealize Orchestrator plug-ins extend the functionality to work with Microsoft AD, Cisco UCS, and VMware vRealize Automation.  Used for building automated workflows in the virtualized datacenter.

vSphere Virtual Symmetric Multi-Processing
The vSMP product allows you to construct VMs with multiple virtual processor cores and/or sockets.  It is not a licensing product that permits ESXi to be installed on servers with multiple processors; it is the technology that allows the use of multiple processors inside a VM.

vSphere vMotion and Storage vMotion
Live migration.  vMotion moves a running VM from one physical host to another without powering off the VM; the VM's storage is untouched.  It can reduce resource contention, for example by moving the VM to a host with more available CPU resources.  Storage vMotion similarly migrates a running VM's virtual disks from one datastore to another without downtime.  Shortcoming: both are manual operations.

vSphere Distributed Resources Scheduler
DRS leverages vMotion to provide automatic distribution of resource utilization across multiple ESXi hosts that are configured in a cluster.  At startup, DRS attempts to place each VM on the host that is best suited to run that VM at that time.  It then uses an internal algorithm to monitor utilization and recommend (or automatically perform) vMotion migrations to keep the cluster balanced.

vSphere Storage DRS
Helps balance storage capacity and storage performance across a cluster of datastores using mechanisms similar to vSphere DRS.  Provides intelligent initial placement of virtual disks on the most appropriate datastore within the datastore cluster.

Storage I/O Control and Network I/O Control
SIOC allows you to assign relative priority to storage I/O as well as assign storage I/O limits to VMs.  These settings are enforced cluster-wide.  NIOC provides you with more granular control over how VMs use network bandwidth provided by the physical NICs.

Storage Policy-Based Management (SPBM)
Lets administrators define storage policies (capacity, performance, availability) and have vSphere place and monitor VM storage against those policies, so each VM's virtual disks land on datastores that meet its requirements.

vSphere High Availability (HA)
Automatically restarts the VMs that were running on a failed ESXi host on the surviving hosts in the cluster.  Recovery involves a brief outage while the VMs restart; it is not continuous availability.

vSphere Symmetric Multi-Processing Fault Tolerance (SMP-FT)
Provides continuous availability for a VM by maintaining a live secondary copy on another host that takes over immediately if the primary host fails.  With vSphere 6.0, FT supports VMs with up to 4 vCPUs.

vSphere Storage APIs
A family of APIs, including the vSphere Storage APIs for Array Integration (VAAI), which offload operations such as copying and zeroing to the storage array, and the vSphere Storage APIs for Storage Awareness (VASA), which surface array capabilities to vCenter Server.

vSphere Virtual SAN (VSAN)
Software-defined storage that aggregates the local disks and flash devices of the ESXi hosts in a cluster into a single shared datastore for the VMs running on that cluster.

vSphere Replication
Hypervisor-based, asynchronous replication of VMs to another host, cluster, or site, independent of the underlying storage array.

vSphere Flash Read Cache
Uses local flash (SSD) devices in the ESXi host as a read cache for VM I/O to reduce read latency.

vSphere Content Library
A centralized repository for VM templates, ISO images, and scripts that can be shared and synchronized across vCenter Server instances.

VMware Horizon View
VMware's desktop virtualization (VDI) product.  Desktops run as VMs on ESXi hosts and are delivered remotely to end users over a display protocol.

VMware vRealize Automation
Provides self-service, policy-based provisioning of applications and infrastructure across virtual, physical, and public-cloud resources.

VMware vCenter Site Recovery Manager
Disaster recovery orchestration.  Automates failover and failback of VMs to a recovery site using vSphere Replication or array-based replication.






VM Performance enhancement through Memory Management

Compared to a bare-metal physical server, which is often over-provisioned with CPU, memory, and disk storage and therefore wastes resources, a VM can be right-sized to fit the application running on it, freeing up resources for other VMs and applications.  However, the ESXi host will still run out of resources at some point if the VMs request more resources than the host can provide.

ESXi provides controls that ensure a VM's guest OS and its applications get the resources they need without being starved by other guest OSs and their applications.  These controls are referred to as Reservations, Limits, and Shares.

Reservations:  A reservation guarantees a minimum amount of a resource to a VM, irrespective of what other VMs and applications are running on the ESXi host.

Limits:  A limit places an upper bound on the amount of a given resource a VM can use, even when more is available.  How the ESXi host behaves when a limit is reached depends on the resource to which the limit is applied.

Shares:  Shares prioritize access to resources during periods of contention.  When VMs compete for scarce resources, the ESXi host decides which VM gets them in proportion to the configured share values; VMs with more shares allocated get higher priority to the ESXi host's resources.
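
These settings are normally configured per VM in the vSphere Client, but they can also be inspected from the ESXi shell.  A minimal sketch, assuming the VM ID comes from vim-cmd vmsvc/getallvms and that the VM's configuration dump includes the cpuAllocation and memoryAllocation blocks (reservation, limit, and shares):

vim-cmd vmsvc/getallvms
vim-cmd vmsvc/get.config <VM-ID> | grep -A6 -i "memoryAllocation"
vim-cmd vmsvc/get.config <VM-ID> | grep -A6 -i "cpuAllocation"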

Say you keep creating VMs on the ESXi host and allocating memory to them.  At some point you will have allocated more memory to the VMs than is physically available on the host.  VMware ESXi supports a number of advanced memory-management technologies to cope with this overcommitment:
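
A rough way to compare the host's physical memory with what the VMs have been configured with (a sketch; memorySizeMB in the VM summary is assumed to reflect each VM's configured memory):

esxcli hardware memory get
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/get.summary <VM-ID> | grep -i memorySizeMB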

Idle Memory Tax
Even before VMware ESXi starts aggressively reclaiming memory, it ensures that VMs do not hoard memory unnecessarily by "charging" more for idle memory than for memory that is actively used.  By default, up to 75% of a VM's idle memory can be reclaimed to service other VMs; this Idle Memory Tax rate is a configurable parameter, although changing it is usually unnecessary and not recommended.  Inside the guest OS, VMware Tools uses its balloon driver to determine which memory blocks are allocated but idle and therefore available to be used somewhere else.
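
The tax rate is exposed as an ESXi advanced setting; a quick way to check its current value (assuming the usual option path /Mem/IdleTax):

esxcli system settings advanced list -o /Mem/IdleTax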

Transparent Page Sharing (TPS)
Transparent Page Sharing (TPS) is a mechanism used by ESXi in which identical memory pages are shared among VMs to reduce the total number of memory pages consumed.  The ESXi hypervisor computes hashes of the contents of memory pages to identify pages that may contain identical data.  When it finds a hash match, it performs a full comparison of the matching memory pages to rule out a false positive.  Once the pages are verified to be identical, the hypervisor transparently remaps the VMs' memory pages so that they share the same physical memory page, reducing overall memory consumption.  Some advanced parameters are available to fine-tune the behavior of the page-sharing mechanism.

Note that inter-VM TPS is no longer enabled by default.  A research paper demonstrated that, under specific conditions, TPS could be used to gain access to the AES encryption key of a machine sharing pages.  Because of this security risk, VMware disabled inter-VM page sharing by default (controlled by the Mem.ShareForceSalting setting) and leaves the onus on customers to evaluate the risk of re-enabling it in their environment if they so desire.
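
To check how page sharing is restricted on a host, look at the salting setting (0 allows cross-VM sharing; a non-zero value restricts sharing to VMs with the same salt value).  This is the same option that the ShareForceSalting command in the esxcli section below manipulates:

esxcli system settings advanced list -o /Mem/ShareForceSalting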

Ballooning
Ballooning involves the use of a driver installed into the VM guest OS.  This driver gets installed when you install VMware Tools.  This balloon driver can respond to commands from the ESXi hypervisor to reclaim memory from that particular guest OS.  The balloon driver accomplishes this by requesting memory from the guest OS, a process called inflating, and then passing that memory back to the hypervisor to be used by other VMs.

Usually, when ESXi reclaims memory from a VM through the balloon driver, there is no performance impact on the guest OS, since that memory was not being used anyway.  However, if the amount of memory configured for the VM is already insufficient for the guest OS and its applications, inflating the balloon driver may force the guest OS to page or swap, degrading VM performance.

The balloon driver is OS-specific (different for Linux and Windows) and ships as part of VMware Tools.  When the ESXi host runs low on memory, the hypervisor signals the balloon driver to grow.  The balloon driver requests memory from the guest OS, causing its memory footprint to grow, or inflate; the memory handed to the balloon driver is then given back to the hypervisor, which grants it to other VMs on the host, reducing swap activity and minimizing the performance impact of memory constraints.  When the memory demand on the host decreases, the balloon driver deflates and returns memory to the VM.

In some cases, inflating the balloon driver can release memory back to the hypervisor without diminishing VM performance, because the guest OS can hand the balloon driver unused or idle pages.
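
Balloon activity per VM can be observed from the host.  A minimal check, assuming the VM summary's quickStats include a balloonedMemory field (the equivalent counter in esxtop's memory view is MCTLSZ):

vim-cmd vmsvc/getallvms
vim-cmd vmsvc/get.summary <VM-ID> | grep -i ballooned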

Memory Compression
When an ESXi host gets to the point where hypervisor swapping becomes necessary, the VMkernel first attempts to compress memory pages and keep them in RAM in a compressed memory cache.  Pages that can be compressed by at least 50% are placed in the compressed memory cache instead of being written to disk, and can then be recovered much more quickly if the guest OS needs them, since accessing RAM is orders of magnitude faster than accessing disk.  Memory compression can drastically reduce the number of pages that must be swapped to disk and can thus dramatically improve the performance of an ESXi host under heavy memory pressure.  A configurable share of VM memory, 10% by default, can be used for the compression cache; the cache starts at zero and grows as needed when VM memory starts being swapped out.  Compression is invoked only when the ESXi host reaches the point where VMkernel swapping is needed.
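
The maximum size of the compression cache is controlled by an advanced setting; to check the current percentage (assuming the usual option path /Mem/MemZipMaxPct):

esxcli system settings advanced list -o /Mem/MemZipMaxPct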

Swapping
There are two types of swapping when memory is managed by VMware ESXi.  The first is guest OS swapping, in which the guest OS inside the VM swaps pages out to its virtual disk according to its own memory-management algorithms, generally because its memory requirements exceed the memory available to it.  This is the scenario where a VM is configured with less memory than the guest OS and its applications require.  Guest OS swapping is controlled entirely by the guest OS; the hypervisor has no influence over it.

The second type of swapping is hypervisor swapping.  If none of the previously described techniques trims guest OS memory usage sufficiently, the ESXi host is forced to resort to hypervisor swapping, in which ESXi swaps memory pages out to disk in order to reclaim memory that is needed elsewhere.  Hypervisor swapping happens without regard to whether the pages are being actively used by the VM's guest OS.  As a consequence, and because disk response times are orders of magnitude slower than memory response times, guest OS performance is severely impacted when hypervisor swapping is invoked.  For this reason, ESXi will not invoke hypervisor swapping unless it is absolutely necessary, after all other memory-management techniques have been exhausted.

Another vital point: avoid hypervisor swapping under all but the most extreme circumstances, because the performance impact is significant and noticeable.  Even swapping to SSDs is considerably slower than accessing RAM directly.
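
Hypervisor swap activity per VM can be checked from the host in much the same way as ballooning.  A sketch, assuming the VM summary's quickStats include a swappedMemory field (esxtop's memory view shows the equivalent SWCUR, SWR/s, and SWW/s counters):

vim-cmd vmsvc/get.summary <VM-ID> | grep -i swapped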






ESXi host: esxcli and other useful commands

System related commands
uname -a
vmware -v
vmware -vl
ps -tgs | grep -i <VM-Name>
esxcli vm process list
esxcli system boot device get
esxcli system coredump partition list
esxcli system hostname get
esxcli system account list
esxcli system stats uptime get
esxcli system stats installtime get
echo $(($(esxcli system stats uptime get)/86400000000))
esxcli system hostname set --host=<Hostname without FQDN>
esxcli system hostname set --fqdn=<Rest of the FQDN>
esxcli network ip dns search add --domain=corp.company.com
esxcli system module list
esxcli system process list
esxcli system process stats load get
esxcli system process stats running get
esxcli system secpolicy domain list
esxcli system settings advanced list
esxcli system settings kernel list
esxcli system syslog config get
esxcli system syslog config logger list
esxcli system uuid get
esxcli system version get
esxcli system visorfs get
esxcli system visorfs ramdisk list
esxcli system visorfs tardisk list
esxcli device driver list
esxcli software vib list
esxcli software vib get -n <Product name>
esxcli software vib remove -n  esx-dvfilter-arpspy -n esx-dvfilter-ipv6spy -n esx-dvfilter-maclearn
esxcli software vib install --no-sig-check -v ....... 
esxcli software vib install --depot=/vmfs/volumes/datastore-whatever/patchwhatever
esxcli software vib install -d /tmp/<Product-file>
esxcfg-info
esxcfg-module --list
esxcfg-volume --list
esxcfg-advcfg --list
esxcfg-dumppart --list
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.getstate <VM-UUID>
vim-cmd vmsvc/power.on <VM-UUID>
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh
vim-cmd hostsvc/enable_esx_shell
vim-cmd hostsvc/start_esx_shell
vsish -e get /hardware/cpu/cpuModelName
vim-cmd hostsvc/hosthardware 
vim-cmd hostsvc/hosthardware | grep CPU
esxcli system settings advanced set -i 1 -o "/UserVars/SuppressShellWarning"
esxcli system settings advanced set -o /Mem/ShareForceSalting -i 0
esxcli system maintenanceMode get
esxcli system maintenanceMode set --enable yes
esxcli system maintenanceMode set --enable no
vim-cmd hostsvc/maintenance_mode_enter
vim-cmd hostsvc/maintenance_mode_exit
vim-cmd vmsvc/get.capability <VM-ID>
vim-cmd proxysvc/port_info
vim-cmd solo/querycfgoptdesc
vim-cmd hostsvc/queryconnectioninfo | grep <VM-Name> -A40 -B4
esxcli --debug --formatter=table system version get
vim-cmd vmsvc/device.getdevices <VM-ID>
vim-cmd vmsvc/get.config <VM-ID>
vim-cmd vmsvc/get.environment <VM-ID>
vim-cmd vmsvc/get.runtime <VM-ID>
vim-cmd vmsvc/get.summary <VM-ID>
esxcli system settings advanced list -d
esxcli system settings kernel list -d
esxcli system snmp [get | hash | set | test]
esxcli vm process kill -t soft -w <WorldID>
esxcli system shutdown reboot --delay=60 --reason="<Whatever>"
esxcli system shutdown reboot -d 60 -r "<Whatever be the reason>"
esxcli system shutdown poweroff --delay=60 --reason="<Whatever>"

/opt/hp/hpssacli/bin/hpssacli controller all show config detail

Network related commands
esxcli network nic list
esxcli network vm list
esxcli network ip interface list
esxcli network ip interface ipv4 get
esxcli network ip route ipv4 list
esxcli network ip neighbor list
esxcli network ip netstack list
esxcli network ip netstack get -N defaultTcpipStack
esxcfg-route
esxcfg-route -l
esxcfg-vswitch -l
esxcfg-vswitch -U vmnic5 vSwitch0   (Removes vmnic5)
esxcfg-vswitch -L vmnic3 vSwitch0   (Adds vmnic3)
esxcfg-vmknic -l
esxcli network ip interface set -e true -i vmk1
esxcli network ip interface set -e true -i vmk2
esxcfg-nics -l
esxcfg-nas --list
esxcli storage nfs list
esxcli network ip dns search list
esxcli network ip dns server list
esxcli network ip dns server add -s <New-DNS-server>
esxcli network firewall ruleset list
esxcli network firewall ruleset rule list
esxcli network ip connection list
esxcli network ip connection list | grep 5671      (useful for checking the TCP connection between the ESXi host and the RabbitMQ broker)
esxcli network ip interface ipv4 set -i vmk0/1 -I <IP-address> -N <Netmask> -P false -t static
esxcli network ip interface ipv4 set --interface-name vmk0/1 --ipv4 <IP-address> --netmask <Netmask> --type static 
esxcli network ip interface set --interface-name=vmk0/1 --mtu=9000
esxcli network vswitch standard list
esxcli network vswitch dvs vmware list
esxcli network nic get -n vmnic0
esxcli network nic stats get --nic-name=vmnic0
esxcli network nic vlan stats get --nic-name=vmnic0
/usr/lib/vmware/vm-support/bin/nicinfo.sh
esxcli network ip dns search add --domain=<dept.>.<company>.com
esxcli network vswitch standard portgroup list
esxcli network vswitch standard portgroup set -p "VM Network" --vlan-id 997
esxcli network ip interface set --mtu=9000 --interface-name=vmk1/2
esxcli network ip interface tag add -i vmk1/2 -t VMotion
esxcli network vswitch standard portgroup policy failover get --portgroup-name=<Portgroup>
esxcli network vswitch standard portgroup policy failover set -p <IP-vmotion1/2> -a vmnic0/1 -s vmnic1/0
esxcli network vswitch standard policy failover get -v vSwitch0
esxcli network vswitch standard portgroup policy failover get -p "Management Network"
esxcli network vswitch standard policy failover set -l iphash -v vSwitch0
esxcli network vswitch standard portgroup policy failover set -p "Management Network" -l iphash
esxcli network vswitch standard policy failover set -l explicit -v vSwitch0
esxcli network vswitch standard policy failover set -l portid -v vSwitch0
esxcli network vswitch standard policy failover set -l mac -v vSwitch0
esxcli network vswitch standard policy failover set -v vSwitch0 --active-uplinks vmnic0,vmnic3
esxcli network vswitch standard policy failover set -v vSwitch0 --active-uplinks vmnic0 --standby-uplinks vmnic3
esxcli network vm list
esxcli network ip ipsec sa list
esxcli network ip set --ipv6-enabled=false
esxcli network ip interface ipv6 set -i vmk0 -d false -r false
esxcli network nic stats get -n vmnic0
vsish -e get /net/pNics/vmnic0/stats                  or vmnic1, vmnic2 or vmnic3
esxcli network nic list | grep Up | cut -d " " -f 1 | xargs -n 1 esxcli network nic stats get -n
tcpdump-uw -i vmk0
tcpdump-uw -i vmk0 icmp
tcpdump-uw -i vmk0 -s 1514 tcp
tcpdump-uw -i vmk0 -s 1514 port not 22 and port not 53
tcpdump-uw -i vmk0 icmp -XX
tcpdump-uw -i vmk0 not port 22 -vvv
pktcap-uw --vmk vmk0
pktcap-uw --uplink vmnic0
pktcap-uw --vmk vmk0 --proto 0x01
pktcap-uw --vmk vmk0 --proto 0x01 --capture PortOutput
pktcap-uw --vmk vmk0 --ip 216.239.35.4
pktcap-uw --vmk vmk0 --ip 216.239.35.4 --capture PortOutput
pktcap-uw --vmk vmk0 --tcpport 443
pktcap-uw --switchport <SwitchPortNumber>
pktcap-uw --switchport <SwitchPortNumber> --capture PortOutput
pktcap-uw --capture Drop
net-stats -l
esxcli network vswitch standard policy failover get --vswitch-name=<vSwitch>
ethtool -S vmnic2 | egrep -i 'error|drop'                    (check for RX errors and drops)
ethtool -S vmnic0 | grep rx_crc_error
esxcli network nic tso get
esxcli network nic coalesce get
vsish -e get /net/tcpip/instances/defaultTcpipStack/sysctl/_net_inet_tcp_delayed_ack
vmdumper -l
esxcli network ip interface ipv4 set -i vmk0 -I 10.133.72.43 -N 255.255.254.0 --type static
esxcfg-route
esxcfg-route -l
esxcfg-route -a 10.133.72.0/23 10.133.72.1
esxcfg-route -d 10.133.96.0/23 10.133.96.1
esxcli network ip route ipv4 add --gateway 10.133.72.1 --network 10.133.72.0/23
esxcli network ip route ipv4 list
esxcli network ip route ipv4 remove --gateway 10.133.96.1 --network 10.133.96.0/23
esxcli network ip route ipv4 remove -n 10.133.96.0/23 -g 10.133.96.1
esxcfg-route -l
esxcli software vib get -n net-e1000e
esxcli software vib get -n net-ixgbe
esxcli network vm list
esxcli network vm port list --world-id=<World-ID>
esxcli network port filter stats get --portid=<Port-ID> 
esxcli network diag ping --ipv4 --host=<Host-IP> --size=9000
vim-cmd hostsvc/net/query_networkhint --pnic-name=vmnic2
vim-cmd hostsvc/net/query_networkhint --pnic-name=vmnic2 |  egrep "location|mgmtAddr|softwareVersion|systemName|hardware|vlan|portId|port|ipSubnet"
vim-cmd vmsvc/getallvms    (for VM IDs)
vim-cmd vmsvc/get.networks <VM-ID>
esxcli network switch standard policy shaping get -v vSwitch0
esxcli network switch standard policy security get -v vSwitch0
esxcli network switch standard policy failover get -v vSwitch0
esxcli network vswitch dvs vmware lacp config get
esxcli network vswitch dvs vmware lacp status get
esxcli network vswitch dvs vmware lacp stats get
esxcli network nic ring preset get -n vmnic0/1/2/3/4/5
esxcli network nic down -n vmnic2
esxcli network nic up -n vmnic2
esxcli system module get --module=ixgbe
esxcli system module get --module=ixgben

Storage related commands
esxcli storage nmp device list
esxcli storage vmfs snapshot list
esxcli storage nmp satp list
esxcli storage nmp psp list
esxcli storage core device list
esxcli storage core path list
esxcli storage nfs list
esxcli storage nfs add -H <NFS-Storage-IP> | <NFS-hostname> -s <Share-mount-point-on-the-NFS-storage> -v <Datastore-name>
esxcli storage nfs remove -v <Datastore-name>
esxcfg-nas --list
esxcli storage core adapter list | grep fc | awk '{print $4}'
esxcli storage core device partition list
esxcli storage core device stats get
esxcli storage core path stats get
esxcli storage vmfs extent list
esxcli storage vmfs unmap -l <Datastore-LUN>
esxcli storage filesystem list
esxcli storage san fc list
esxcli storage san fc events get
esxcli storage san iscsi list
esxcli storage san sas list
esxcli storage core device world list
esxcfg-scsidevs -l
esxcfg-scsidevs -m
ls -altr /vmfs/devices/disks
esxcfg-mpath -b
esxcfg-mpath --list
esxcli storage core device vaai status get
esxcli storage nmp satp set --default-psp VMW_PSP_RR --satp <your_satp_policy>
esxcli storage core claimrule list -c all
esxcli storage core adapter rescan --all
esxcli storage core adapter rescan --adapter vmhba4
vim-cmd vmsvc/get.datastores <VM-ID>
vdq -iH
vdq -qH
esxcli vsan cluster unicastagent list
esxcli vsan cluster get
vim-cmd hostsvc/storage/diagnostic/query_available
vim-cmd hostsvc/queryconnectioninfo | grep DataStoreInfo -A20
vim-cmd hostsvc/datastore/listsummary
esxcli iscsi adapter list
esxcli iscsi adapter auth chap get --adapter=vmhba64
esxcli iscsi adapter capabilities get --adapter=vmhba64
esxcli iscsi adapter discovery sendtarget list
esxcli iscsi adapter param get --adapter=vmhba64
esxcli iscsi logicalnetworkportal list
esxcli iscsi physicalnetworkportal list
esxcli iscsi plugin list 
voma -m vmfs -f check -d /vmfs/devices/disks

Hardware related commands
lspci
lspci | grep Network
esxcli network nic list
esxcfg-nics -l
esxcli hardware cpu list
esxcli hardware memory get
esxcli hardware pci list
esxcli hardware platform get
esxcfg-dumppart --list
vsish -e get /hardware/cpu/cpuModelName
vim-cmd hostsvc/hosthardware 
vim-cmd hostsvc/queryconnectioninfo | grep memorySize -A8
vim-cmd hostsvc/hosthardware | grep CPU
esxcli iscsi adapter list
esxcli storage core adapter list
esxcli storage san fc list
esxcli storage san fc events get
esxcfg-scsidevs --list
esxcfg-scsidevs --vmfs
esxcfg-mpath --list-map
esxcfg-mpath --list-paths
esxcli fcoe adapter list
esxcli fcoe nic list
esxcli hardware usb passthrough device list
esxcli hardware ipmi sel list
esxcli hardware ipmi sdr list
esxcli hardware ipmi fru list
esxcli hardware smartcard info get
esxcli hardware smartcard slot list
smbiosDump | grep -B 13 "Memory" | egrep "Location|Manufacturer|Serial|Part|Size|Speed"

VAAI related commands
To check on the VAAI Plugin install:
esxcli software vib list | grep -i vaai
esxcli software vib get -n <Product name>

To install:
esxcli software vib install -d /tmp/<Product name>

To view whether VAAI is enabled or disabled, run the following commands.  If the value that is returned is 0, VAAI is disabled.   If the value returned is 1, VAAI is enabled.

esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking

To enable VAAI for a specific primitive, use the option -s 1:

esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove
esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedInit
esxcfg-advcfg -s 1 /VMFS3/HardwareAcceleratedLocking

To disable VAAI for a specific primitive, use the option -s 0:

esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove
esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedInit
esxcfg-advcfg -s 0 /VMFS3/HardwareAcceleratedLocking

Use this command to check on the VAAI status:
esxcli storage core device vaai status get

Troubleshooting possibilities
esxcli network ip connection list
esxcli network ip neighbor list
tcpdump-uw -c 5 -n -i vmk0 host <IP-address> and port 443
esxcli network diag ping -s 9000 -H <IP-address>
traceroute <IP-address>


Commonly used Networking commands
vmware -vl
esxcli network ip interface list
esxcli network nic list
esxcli network vm list
esxcli network ip interface ipv4 get
esxcli network ip route ipv4 list
esxcli network ip neighbor list
esxcli network ip neighbor list -i vmk3
esxcli network ip connection list
esxcli network vswitch standard list
esxcli network vswitch dvs vmware list
esxcli network nic get -n vmnic0
esxcli network nic get -n vmnic1
esxcli network nic get -n vmnic2
esxcli network nic get -n vmnic3
esxcli network nic stats get --nic-name=vmnic0
esxcli network nic vlan stats get --nic-name=vmnic0
esxcli network vswitch standard portgroup list
esxcli network ip ipsec sa list
net-stats -l
tcpdump-uw -i vmk0
pktcap-uw --vmk vmk0
pktcap-uw --uplink vmnic0
cat /var/log/shell.log          (shows the history of ESXi Shell commands)
ethtool -S vmnic0
lspci | grep -i network