Copyright (c) 2013-2018 Intel Corporation

This release includes the native i40en VMware ESXi driver for the Intel(R) Ethernet Controller X710, XL710, XXV710, and X722 family.

Driver version: 1.7.11

Supported ESXi release: 6.0
Compatible ESXi versions: 6.5, 6.7

=================================================================================

Contents
--------

- Important Notes
- Supported Features
- New Features
- New Hardware Supported
- Physical Hardware Configuration Maximums
- Bug Fixes
- Known Issues and Workarounds
- Command Line Parameters
- Previously Released Versions

=================================================================================

Important Notes:
----------------

- Recovery Mode
	A device will enter recovery mode if its NVM becomes corrupted.
	If a device enters recovery mode because of an interrupted NVM update, attempt to finish the update.
	If the device is in recovery mode because of a corrupted NVM, use the nvmupdate utility to reset
	the NVM back to factory defaults.

	NOTE: You must power cycle your system after using Recovery Mode to completely reset the firmware and hardware.

- Backplane devices
	Backplane devices operate in auto-negotiation mode only; speed settings cannot be manually overridden.

- VLAN Tag Stripping Control for VF drivers
	The VLAN Tag Stripping Control feature is enabled by default but can be disabled by the VF driver.
	On a Linux VM with the i40evf SR-IOV (VF) driver, use the following command to control the feature:
	ethtool --offload <IF> rxvlan on|off

	NOTE: Disabling VLAN Tag Stripping is only applicable to Virtual Guest Tagging (VGT) configurations.
	NOTE: VLAN Tag Stripping Control feature is currently not available on Windows VF drivers.

- Malicious Driver Detection (MDD)
	The Malicious Driver Detection feature protects the NIC from malformed packets or other hostile actions
	that may be performed (accidentally or deliberately) by drivers operating with the NIC.
	When a malicious driver event is detected, the driver reacts as follows:
	  - if the source of the MDD event was the i40en driver (Physical Function [PF] driver), the hardware is reset;
	  - if the source of the MDD event was a Virtual Machine's SR-IOV driver (Virtual Function [VF] driver),
	    the suspected VF is disabled after the fourth such event and the malicious VM's SR-IOV adapter becomes unavailable.
	    To bring it back, reboot the VM or reload the VF driver.

- LLDP Agent
	Link Layer Discovery Protocol (LLDP) is supported on Intel X710 and XL710 adapters with FW 6.0 and later,
	as well as X722 adapters with FW 3.10 and later.
	Set the LLDP driver load parameter to allow or disallow forwarding of LLDP frames to the network stack.

	  The LLDP agent is enabled in firmware by default (default FW setting).
	  Set LLDP=0 to disable the LLDP agent in firmware.
	  Set LLDP=1 to enable the LLDP agent in firmware.
	  Setting LLDP to anything other than 0 or 1 falls back to the default setting (LLDP enabled in firmware).
	  The LLDP agent is always enabled in firmware when MFP (Multi-Function Port, i.e. NPAR) is enabled,
	  regardless of the LLDP driver parameter setting.

	When the LLDP agent is enabled in firmware, the ESXi OS will not receive LLDP frames and Link Layer
	Discovery Protocol information will not be available on the physical adapter inside ESXi.

	Please note that the LLDP driver module parameter is an array of values. Each value represents the LLDP
	agent setting for one physical port.
	Refer to the "Command Line Parameters" section for suggestions on how to set driver module parameters.
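	The per-port array form can be illustrated with esxcli. This is a sketch only: the port count and
	values below are hypothetical, and the driver must be reloaded (or the host rebooted) before module
	parameter changes take effect.

```shell
# Hypothetical four-port adapter: keep the firmware LLDP agent enabled on
# ports 1-2 (LLDP=1) and disable it on ports 3-4 (LLDP=0).
# -a appends to existing parameter settings instead of replacing them.
esxcli system module parameters set -m i40en -a -p LLDP=1,1,0,0

# Confirm the current parameter settings.
esxcli system module parameters list -m i40en
```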


Supported Features:
-------------------

- Rx, Tx, TSO checksum offload
- Netqueue (VMDQ)
- VxLAN Offload
- Hardware VLAN filtering
- Rx Hardware VLAN stripping
- Tx Hardware VLAN inserting
- Interrupt moderation
- SR-IOV (supports four queues per VF, VF MTU, and VF VLAN)
        Valid range for max_vfs
        1-32 (4 port devices)
        1-64 (2 port devices)
        1-128 (1 port devices)
- Link Auto-negotiation
- Flow Control
- Management APIs for CIM Provider, OCSD/OCBB
- Firmware Recovery Mode
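Like LLDP, the max_vfs module parameter takes one value per physical port. A hedged sketch, assuming a
2-port device (valid range 1-64 per the table above) and a hypothetical count of 8 VFs per port:

```shell
# Hypothetical: enable 8 VFs on each port of a 2-port device.
# -a preserves any other module parameter settings.
esxcli system module parameters set -m i40en -a -p max_vfs=8,8

# After a reboot (or driver reload), list the SR-IOV capable NICs.
esxcli network sriovnic list
```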


New Features:
-------------

- Implemented VLAN Tag Stripping Control for VF drivers
- Implemented LLDP Agent control for X722 adapters (supported in FW 3.10 and later)


New Hardware Supported:
-----------------------

- Added new devices support for specific OEMs


Physical Hardware Configuration Maximums:
-----------------------------------------
40Gb Ethernet Ports (Intel) = 4
25Gb Ethernet Ports (Intel) = 4
10Gb Ethernet Ports (Intel) = 16


Bug Fixes:
----------

- Fixed Malicious Driver Detection (MDD) event handling. Previous drivers detected MDD events but did not properly reset
  the adapter. The PF driver also now properly disables an offending VF after it detects 4 MDD events on the same VF.
- Fixed an issue where SR-IOV could not be enabled via the Web Client when the i40en driver failed to load all PFs.
- Fixed a PSOD when booting a Supermicro X10DAi with X722 adapters.
- Fixed link not being detected while toggling Promiscuous Mode on a VF interface, which could lead to VM instability and spontaneous rebooting.


Known Issues and Workarounds:
-----------------------------

- Unable to reload the VF driver on SLES 12 SP2 and ESXi 6.0 Update 3
	Workaround: Upgrade to ESXi 6.0 Update 3a or ESXi 6.5. See VMware Knowledge Base article 2149955.
- Very low throughput when sending IPv6 to a Linux VM that uses a VMXNET3 adapter
	Workaround: See VMware Knowledge Base article 2057874.
- The driver is unable to configure the maximum of 128 Virtual Functions per adapter due to a kernel limitation
	Workaround:
	- ESXi 6.0: reduce the number of VFs
	- ESXi 6.5 and 6.7: see VMware Knowledge Base article 2147604
- Cannot set maximum values for VMDQ and SR-IOV VFs on a port at the same time
	Workaround: Reduce the VMDQ or max_vfs value for the port.
- Unable to unload the driver when a VM with a VF adapter is powered on
	Workaround: Shut down all VMs with VF adapters and try unloading the driver again.
- In MFP adapter mode, multicast traffic does not work on emulated adapters when a VM with an SR-IOV VF adapter is powered on
	Workaround: Do not mix SR-IOV and emulated traffic in MFP mode.
- In RHEL 7.2, an IPv6 connection persists between VF adapters after changing the port group VLAN mode from trunk (VGT) to port VLAN (VST)
	Workaround: Upgrade to RHEL 7.3 or newer. This is a Linux kernel bug that causes packets to arrive at the wrong virtual interface.
- HW reset on an MDD event caused by the ESXi kernel segmenting packets into more than 8 descriptors on ESXi 6.0 Update 3 and older
	Workaround: Upgrade to ESXi 6.5 or newer.
- VFs disabled due to MDD events caused by configuring VF adapters as 'PCI Device' instead of 'SR-IOV Passthru Device'
	Workaround: Configure VMs with 'SR-IOV Passthru Device'.
- Switching the port (vmnic) of the management uplink may lead to connectivity issues
	Workaround: Switch the management uplink back to the original port.


Command Line Parameters:
------------------------

ethtool is not supported for native drivers.
Use esxcli, vsish, or esxcfg-* commands to set or get driver information, for example:

- Get the driver supported module parameters
  esxcli system module parameters list -m i40en

- Set a driver module parameter (clearing other parameter settings)
  esxcli system module parameters set -m i40en -p LLDP=0

- Set a driver module parameter (other parameter settings left unchanged)
  esxcli system module parameters set -m i40en -a -p LLDP=0

- Get the driver info
  esxcli network nic get -n vmnic1

- Get uplink statistics
  esxcli network nic stats -n vmnic1

- Get the private stats
  vsish -e get /net/pNics/vmnic1/stats


=================================================================================

Previously Released Versions:
-----------------------------
- Driver Version: 1.7.5
	Hardware Supported: Intel(R) Ethernet Controllers X710, XL710, XXV710, and X722 family
	Supported ESXi releases: 6.0, 6.5 and 6.7
	New Features Supported:
		- Introduced support for firmware recovery mode
	New Hardware Supported:
		- Added new devices support for specific OEMs
	Bug Fixes:
		- Reduced the driver's memory footprint
		- Prevented VxLAN port reprogramming failures after changing the VxLAN port more than 16 times
		- Prevented dropped packets during link speed changes
		- No longer shows a link down message on SFP+ module removal
		- Fixed dropped emulated adapter traffic between MFP mode master and slave partitions
		- Fixed the NIC down procedure hanging when heavy traffic is running
		- Fixed intermittent link flap after running NVM Update
		- Fixed multicast traffic not being received on emulated adapters when a VM with an SR-IOV VF adapter is powered on
		- Fixed SR-IOV VF adapters hanging when the PF is brought down
		- Now shows correct cable types for AUI, MII, and 1000BaseT-Optical link types
		- Fixed intermittent PSOD during NVM Update
		- Fixed an MDD event and TX hang caused by a TSO MSS option smaller than 64 bytes
		- Fixed an issue where the adapter could end up in a reset loop after a TX hang event
		- Now shows an error message when trying to set invalid pause frame parameters
		- Fixed a VF driver hang when the GOS requested VF promiscuous mode
		- Fixed intermittent packet loss when the link is brought down and up
	Known Issues:
		- Unable to reload VF driver on SLES 12SP2 and ESXi 6.0 update 3
			Workaround: Upgrade to ESXi 6.0 Update 3a or ESXi 6.5. Please look at the VMware Knowledge Base 2149955.
		- Very low throughput when sending IPv6 to a Linux VM that uses a VMXNET3 adapter
			Workaround: Please look at the VMware Knowledge Base 2057874
		- Driver is unable to configure the maximum 128 Virtual Functions per adapter due to the kernel limitation
			Workaround:
			- ESXi 6.0: reduce the number of VFs
			- ESXi 6.5 and 6.7: Please look at the VMware Knowledge Base 2147604
		- Cannot set maximum values for VMDQ and SR-IOV VFs on a port at the same time
			Workaround: Reduce the VMDQ or max_vfs value for the port
		- Unable to unload the driver when a VM with a VF adapter is powered on
			Workaround: Shut down all VMs with VF adapters and try unloading the driver again.
		- SR-IOV settings not taking effect in the vSphere Web Client when a FVL mezzanine / daughterboard adapter is present
			Workaround: Configure SR-IOV manually using the max_vfs module parameter or remove the mezzanine / daughterboard adapter.
		- In MFP adapter mode multicast traffic does not work on emulated adapters when a VM with an SR-IOV VF adapter is powered on
			Workaround: Do not mix SR-IOV and emulated traffic in MFP mode
		- In RHEL 7.2 an IPv6 connection persists between VF adapters after changing port group VLAN mode from trunk (VGT) to port VLAN (VST)
			Workaround: Upgrade to RHEL 7.3 or newer. This is a Linux kernel bug that causes packets to arrive at the wrong virtual interface.
		- X722 adapter causes PSOD on Supermicro X10DAi
			Workaround: None
		- HW reset on MDD event caused by ESXi kernel segmenting packets into more than 8 descriptors on ESX 6.0 Update 3 and older
			Workaround: Upgrade to ESXi 6.5 or newer.


- Driver Version: 1.5.8
	Hardware Supported: Intel(R) Ethernet Controllers X710, XL710, XXV710, and X722 family
	Supported ESXi releases: 6.0 and 6.5
	Compatible ESXi version: 6.7
	New Features Supported:
		- None
	Bug Fixes:
		- Fixed duplicated packets under heavy traffic when the VMkernel adapter's MAC address is the same as the PF's MAC address
		- Fixed the NIC occasionally stopping working right after updating the firmware
	Known Issues:
		- Unable to reload VF driver on SLES 12SP2 and ESXi 6.0 update 3
			Workaround: Upgrade to ESXi 6.0 Update 3a or ESXi 6.5. Please look at the VMware Knowledge Base 2149955.
		- Very low throughput when sending IPv6 to a Linux VM that uses a VMXNET3 adapter
			Workaround: Please look at the VMware Knowledge Base 2057874
		- Driver is unable to configure the maximum 128 Virtual Functions per adapter due to the kernel limitation
			Workaround:
			- ESXi 6.0: reduce the number of VFs
			- ESXi 6.5 and 6.7: Please look at the VMware Knowledge Base 2147604


- Driver Version: 1.5.6
	Hardware Supported: Intel(R) Ethernet Controllers X710, XL710, XXV710, and X722 family
	Supported ESXi releases: 6.0 and 6.5
	Compatible ESXi version: 6.7
	New Features Supported:
		- Log an error message if an SFP+ module does not meet the thermal requirements
		- Added the LLDP driver load parameter to allow or disallow forwarding of LLDP frames to the network stack.
		  This feature supports only Intel X710 and XL710 adapters with FW 6.0.x and later
	Bug Fixes:
		- Fixed an ESXi crash when NPAR-EP is enabled with 2 or more devices
		- Fixed incorrect PHY type (0x20) detection for XXV710 adapters
		- Fixed a VF guest VLAN tagging issue for Windows GOS
		- Fixed VF link status remaining up after pulling the cable while the PF is down
		- Fixed Windows GOS VF connectivity issues
		- Fixed VF traffic not resuming after a PF reset
		- Fixed inability to set auto-negotiation when the physical link is removed on X710 10GBASE-T adapters
		- Fixed a network hang when disabling the uplink during heavy traffic
		- Fixed a possible TX queue hang during heavy VMDq traffic
		- Fixed no connectivity between NPAR master / slave ports from the same PF
		- Fixed the driver not reporting pause frame statistics
	Known Issues:
		- Unable to reload VF driver on SLES 12SP2 and ESXi 6.0 update 3
			Workaround: Upgrade to ESXi 6.0 Update 3a or ESXi 6.5. Please look at the VMware Knowledge Base 2149955.
		- Very low throughput when sending IPv6 to a Linux VM that uses a VMXNET3 adapter
			Workaround: Please look at the VMware Knowledge Base 2057874
		- Driver is unable to configure the maximum 128 Virtual Functions per adapter due to the kernel limitation
			Workaround:
			- ESXi 6.0: reduce the number of VFs
			- ESXi 6.5: Please look at the VMware Knowledge Base 2147604


- Driver Version: 1.4.3
	Hardware Supported: Intel(R) Ethernet Controllers X710, XL710, XXV710, and X722 family
	Supported ESXi releases: 6.0 and 6.5
	Compatible ESXi version: 6.7
	New Features Supported:
		- None
	Bug Fixes:
		- Fixed link speed change handling
		- Fixed no traffic when a vmknic and a VF are configured using the same PF port
		- Fixed inability to set pause parameters
		- Fixed duplicate packets across queues when SR-IOV is enabled
		- Fixed link down after an SFP+ module swap
	Known Issues:
		- ESXi crashes when NPAR-EP is enabled
		  Workaround: Use only one i40en adapter in the system when NPAR-EP is enabled
		- Very low throughput when sending IPv6 to a Linux VM that uses a VMXNET3 adapter
		  Workaround: Please look at the VMware Knowledge Base 2057874
		- Driver is unable to configure the maximum 128 Virtual Functions per adapter due to the kernel limitation
		  Workaround:
			- ESXi 6.0: reduce the number of VFs
			- ESXi 6.5: Please look at the VMware Knowledge Base 2147604


- Driver Version: 1.3.1
	Hardware Supported: Intel(R) Ethernet Controllers X710, XL710, XXV710, and X722 family
	Supported ESXi release: 6.0
	Compatible ESXi versions: 6.5 and 6.7
	Features Supported:
		- Rx, Tx, TSO checksum offload
		- Netqueue (VMDQ)
		- VxLAN Offload
		- Hardware VLAN filtering
		- Rx Hardware VLAN stripping
		- Tx Hardware VLAN inserting
		- Interrupt moderation
		- SR-IOV (supports four queues per VF, VF MTU, and VF VLAN)
		        Valid range for max_vfs
		        1-32 (X710 based devices)
		        1-64 (XL710 based devices)
		- Link Auto-negotiation
		- Flow Control
		- Management APIs for CIM Provider, OCSD/OCBB
	Bug Fixes:
		- None
	Known Issues:
		- There is no traffic when vmknic and VF are configured using the same PF port
		  Workaround: none
		- ESXi crashes when NPAR-EP is enabled
		  Workaround: Use only one i40en adapter in the system when NPAR-EP is enabled

