vSphere 6.7, released today, includes updates to both the hypervisor (ESXi 6.7) and the management console (vCenter Server 6.7). This release shows that VMware Inc. is not content to let its hypervisor become a commodity: the company is still making substantial investments in the product, and it demonstrates that incremental, evolutionary changes can be made to a proven platform. The vSphere 6.7 beta, though NDA-constrained, has been open to the public since October 2017. Although many new features were baked into the 6.5 release, this release makes some nice incremental changes of its own. Following are some of the most important changes included with vSphere 6.7.
An important hardware caveat to be aware of: the HCL that VMware has released for vSphere 6.7 excludes some older, yet still popular, CPUs. If you're thinking about running this release on an older system for development or testing before placing it into production on your newer servers, check the HCL first to ensure compatibility.
Single Reboot Upgrade
vSphere upgrades can now be completed with a single reboot. Prior to vSphere 6.7, major version upgrades required multiple reboots and took quite a while (although they could be done without disruption by transferring workloads using the Distributed Resource Scheduler [DRS]). vSphere 6.7 also allows a "quick boot," in which ESXi is reloaded without restarting the hardware, because only the hypervisor is restarted rather than the server firmware. This feature is only available with platforms and drivers that are on the Quick Boot whitelist, which is currently quite limited.
VMware Configuration Maximum Tool
The most visible configuration maximum change in vSphere 6.7 is the increase in the number of devices that can be attached to a host, and VMware has raised some of the other maximums as well. Configuration maximums are now published through the online VMware Configuration Maximums tool, which lets you look up and compare limits across vSphere versions, rather than through a static PDF.
The vSphere Client
vSphere 6.5 eliminated the vSphere Client that ran natively on Windows (also known as the C# Client) in favor of the Flash-based vSphere Web Client. Also introduced in version 6.5 was a new vSphere Client, which replaced Flash with HTML5. vSphere 6.7 further extends the capabilities of the vSphere Client, which will eventually replace the vSphere Web Client entirely. It looks like the vSphere Client can do about 90 percent of what the vSphere Web Client can do. For vSphere 6.5, VMware published a list of the functionality not yet supported in the vSphere Client; hopefully the company will do the same for vSphere 6.7.
Figure 1 shows the main menu of the vSphere Web Client, and Figure 2 shows the main vSphere Client menu. Although the new client looks cleaner, and does seem more responsive than the vSphere Web Client, the location of some items has changed, so some workflows will have to be adjusted accordingly. I wrote an article on the vSphere Client when it first came out that explains why VMware is switching to an HTML5-based client.
vCenter Server Appliance
Now that the vCenter Server Appliance (VCSA) is functionally equivalent to the Windows-based vCenter Server, it would take a lot to convince me to use the Windows-based one instead of the VCSA. Overall, I have found that the VCSA embedded database (PostgreSQL) performs great. Furthermore, the VCSA is very easy to update, and its Linux OS (Photon OS) is rock solid. As a side note, the VCSA can easily be monitored using vimtop (be sure to read my recent articles on using vimtop). You can also read my article about migrating from a Windows-based vCenter Server to a VCSA, as well as another article on using the built-in VCSA backup tool. In vSphere 6.7, the built-in VCSA backup tool offers more scheduling options than it did in vSphere 6.5. The Backup Scheduler tool (Figure 3) can be accessed from the vCenter Server Appliance Management Interface (VAMI). VMware also reports "phenomenal" performance improvements in vCenter operations per second, reduced memory usage and faster DRS-related operations.
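For those who prefer to script the backup schedule rather than use the VAMI UI, the same functionality is exposed through the appliance REST API. The sketch below is based on the vSphere 6.7 appliance API and should be verified against your build; the hostname, credentials and backup target are all placeholders.

```shell
VCSA=vcsa.example.com   # hypothetical VCSA hostname

# Authenticate against the vSphere 6.7 REST session endpoint and capture a token
TOKEN=$(curl -sk -u 'administrator@vsphere.local:VMware1!' \
  -X POST "https://$VCSA/rest/com/vmware/cis/session" \
  | python -c 'import sys,json; print(json.load(sys.stdin)["value"])')

# Create a recurring backup schedule (endpoint and body fields assumed
# from the 6.7 appliance recovery API; adjust to match your environment)
curl -sk -X POST "https://$VCSA/rest/appliance/recovery/backup/schedules/default" \
  -H "vmware-api-session-id: $TOKEN" -H "Content-Type: application/json" \
  -d '{"spec": {"location": "sftp://backups.example.com/vcsa",
                "location_user": "backup", "location_password": "secret",
                "recurrence_info": {"days": ["MONDAY"], "hour": 23, "minute": 30}}}'
```

Because these calls depend on a live VCSA, treat them as a starting point and confirm the schedule afterward in the VAMI.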
Suspend and Resume of vGPU Workloads
vGPU allows you to carve up a physical GPU into multiple virtual GPUs that can be used by VMs. vGPUs were introduced with vSphere 6.0, but you were limited in what you could do with a VM that used one. vSphere 6.7 removes some of these barriers: you can now suspend and resume a vGPU-enabled VM.
Per-VM EVC
For quite some time, vSphere has been able to mask off CPU features so that VMs running on systems with newer CPUs can be vMotioned to servers with older CPUs. This is called Enhanced vMotion Compatibility (EVC). In vSphere 6.7, VMware has extended this capability to work on a per-VM, rather than a per-ESXi-host, basis. This means that if you want a VM to take advantage of CPU-specific features, and are willing to limit that VM to the hosts in your cluster whose CPUs have those features, you can configure it to do so.
Per-VM EVC is set from the vSphere Client by selecting a VM, going to the Configure tab and selecting Edit (Figure 4).
Instant Clones
I’ve been a fan of using instant clones with virtual desktops: they’ve proven to be a big space saver, using only a fraction of the disk resources of a full clone, and they allow VMs to be provisioned in seconds from a parent image. With vSphere 6.7, VMware has exposed the APIs that can be used to create instant clones. It looks like a straightforward process, and I suspect that many people will figure out some very interesting ways to use the instant clone API.
ESXi Quick Boot
vSphere 6.7 introduces the Quick Boot feature, which allows a system to reboot in less than two minutes as it does not re-initialize the physical server BIOS. This can speed up operations that require an ESXi system to be rebooted; however, Quick Boot is only supported on certain systems and does not work with systems that have ESXi Secure Boot enabled.
Figure 5 shows two hosts, one with Quick Boot enabled and another without it enabled. By default, Quick Boot is enabled if the system supports it.
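If you want to check a host yourself rather than rely on what vCenter displays, ESXi 6.7 ships a compatibility-check script on the host itself. A minimal check over SSH, using the script path as shipped with ESXi 6.7:

```shell
# Run on the ESXi 6.7 host (SSH or ESXi Shell); reports whether the
# platform, drivers and current configuration are Quick Boot compatible
/usr/lib/vmware/loadesx/bin/loadESXCheckCompat.py
```

If the script reports an incompatibility (for example, an unsupported platform or a passthrough device), the host falls back to a normal full reboot.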
Persistent Memory (PMem) Devices
vSphere 6.7 now supports the next generation of storage devices that use persistent memory, known as non-volatile dual in-line memory module (NVDIMM) devices. This technology is still in its infancy, but applications that require the lowest possible latency regardless of cost will find this feature invaluable. PMem is presented to vSphere either as vPMemDisk, which is treated somewhat like a datastore, or as a virtual NVDIMM (vNVDIMM), which is presented directly to guest OSes that can use NVDIMM devices.
Virtual Hardware Version 14
Virtual hardware is the abstract version of physical hardware presented to a virtual machine or, in essence, a virtual motherboard. As physical hardware supports more features, VMware builds new virtual hardware accordingly to emulate the physical version. vSphere 6.7 comes with a new virtual hardware version, 14. Version 14 adds support for NVDIMM devices, as well as a virtual Trusted Platform Module (TPM), Microsoft Virtualization-based Security (VBS) and a virtual I/O memory management unit (IOMMU).
VMFS Datastores
VMFS3 datastores have been around for a long time, but VMware is now phasing them out. To assist with this transition, vSphere 6.7 automatically upgrades VMFS3 datastores to VMFS5 when they’re mounted. An in-place upgrade from VMFS5 to VMFS6 isn’t possible, however; to move to VMFS6, you’ll need to create a new VMFS6 datastore and migrate your VMs to it with vSphere Storage vMotion.
As a side note, vSphere 6.7 supports both VMFS5 and VMFS6; however, vSphere 6.0 and earlier systems support only VMFS5 datastores. As such, if your environment contains vSphere 6.0 or earlier hosts, you’ll want to use VMFS6 only on datastores that those older hosts won’t need to access.
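A quick way to see which VMFS version each datastore is running, and to create a fresh VMFS6 datastore to migrate onto, is from the ESXi command line. The device identifier below is a placeholder; `vmkfstools -C vmfs6` is the standard way to format a partition as VMFS6:

```shell
# Show mounted datastores with their filesystem type and VMFS version
esxcli storage filesystem list

# Format an empty partition as VMFS6 (the naa device ID here is hypothetical;
# find yours with "esxcli storage core device list")
vmkfstools -C vmfs6 -S NewVMFS6Datastore \
  /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1
```

From there, Storage vMotion (or a cold migration) moves the VMs from the old VMFS5 datastore onto the new VMFS6 one, after which the old datastore can be unmounted and deleted.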
Upgrading to vCenter Server 6.7
A specific order must be used when upgrading to vSphere 6.7. Check the documentation for the latest order and caveats, but the basic procedure can be carried out by first upgrading the Platform Service Controller (PSC), then upgrading vCenter Server and, last, updating the ESXi hosts.
Because upgrading directly from vSphere 5.5 to 6.7 isn’t supported, you’ll need to first upgrade from vSphere 5.5 to vSphere 6.5, and then to vSphere 6.7. Note that an ESXi 5.5 host cannot be managed by VCSA 6.7. Upgrading from vSphere 6.0 to 6.7, by contrast, is supported. If you’re still running a Windows-based vCenter Server rather than a VCSA, VMware offers a tool to assist with the migration; be sure to read my article on using it.
Upgrading to ESXi 6.7
As mentioned earlier, ESXi 6.7 doesn’t support all the CPUs that ESXi 6.0 does, so be sure to check the HCL to ensure that your system is supported. Roughly speaking, the minimum you’ll typically find supported is a dual-core CPU released after September 2006 with NX/XD enabled. You can use VMware Update Manager (VUM) to perform an orchestrated, automated upgrade. Alternatively, you can manually update ESXi systems using an ISO image or esxcli commands or, if you use stateless hosts, you can use vSphere Auto Deploy to update your servers. To see how to update an ESXi system using esxcli commands, be sure to read my article.
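As one possible sketch of the esxcli route, assuming the host can reach VMware's public online depot (the profile name below corresponds to the 6.7 GA image, but verify the exact name with the profile list command first):

```shell
# Allow the host to reach the online depot over HTTP/HTTPS
esxcli network firewall ruleset set -e true -r httpClient

# List the ESXi 6.7 image profiles available in VMware's public depot
esxcli software sources profile list \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml \
  | grep ESXi-6.7

# Enter maintenance mode, then upgrade to the chosen profile and reboot
esxcli system maintenanceMode set --enable true
esxcli software profile update \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml \
  -p ESXi-6.7.0-8169922-standard
reboot
```

Use `profile update` rather than `profile install` here; update preserves VIBs (such as third-party drivers) that aren't part of the new image profile.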
If I’m forced to pick one standout feature in vSphere 6.7, it would have to be the instant clone API. I see this feature as a great enabler for the VMware ecosystem and VMware developers, because the ability to spawn hundreds of identical VMs that use only a small amount of space in minutes has some mind-boggling use cases. However, with great power comes great responsibility, and it will be interesting to watch the development of tools to manage and orchestrate these VMs over time.
Yes, instant clones are the gee-whiz feature of this release, but the rest of the improvements prove that the hypervisor still has room for evolutionary growth, and that VMware is serious about maintaining its leadership position in this regard.
Related Article: Here’s What’s New in vSphere 6.7