Here’s What’s New in VMware vSphere and vCenter 6.7
You can expect some incremental changes to the VMware hypervisor (ESXi) and management platform (vCenter Server).

Widenet is VCP Certified for VMware Products.
In many articles we have shared tips and tricks or solved problems, like here and here. Now let's talk about some big news from VMware.

vSphere 6.7, released today, includes updates to both the hypervisor (ESXi 6.7) and the management console (vCenter Server 6.7). This release shows that VMware Inc. is not content to let its hypervisor become a commodity: it's possible to make incremental, evolutionary changes to a proven product, and VMware is still making substantial investments in it. The vSphere 6.7 beta, though NDA-constrained, has been available to the public since October 2017. Although a lot of new features were baked into the 6.5 release, this release makes some nice incremental changes of its own. Following are some of the most important changes included with vSphere 6.7.

Hardware Caveat
An important hardware caveat to be aware of is that VMware has published a hardware compatibility list (HCL) for vSphere 6.7 that excludes some older, yet popular, CPUs. If you're thinking about running this release on an older system for development or testing before placing it into production on your newer servers, make sure to check the HCL to ensure compatibility.

Single Reboot Upgrade
vSphere upgrades can now be completed with a single reboot. Prior to vSphere 6.7, major version upgrades took quite a while (although they could be done without disruption by transferring workloads using the Distributed Resource Scheduler [DRS]). vSphere 6.7, on the other hand, lets you do a "quick boot" that reloads ESXi without restarting the hardware, because only the kernel is restarted. This feature is available only on platforms and drivers that are on the Quick Boot whitelist, which is currently quite limited.

VMware Configuration Maximum Tool
The most visible configuration-maximum change in vSphere 6.7 is the increased number of devices that can be attached to a host, and VMware has raised some of the other maximums as well; the full list is published in VMware's online Configuration Maximums tool.

vSphere Client
vSphere 6.5 eliminated the vSphere Client that ran natively on Windows (also known as the C# Client or thick client) in favor of the vSphere Web Client, which was Flash-based. Also introduced in version 6.5 was the new vSphere Client, which replaced Flash with HTML5. vSphere 6.7 further extends the capabilities of the vSphere Client, which will eventually replace the vSphere Web Client. It looks like the vSphere Client can do about 90 percent of what the vSphere Web Client can do. For vSphere 6.5, VMware published a list of the functionality not yet supported in the vSphere Client; hopefully the company will do the same for vSphere 6.7.
Figure 1 shows the main menu of the vSphere Web Client, and Figure 2 shows the main vSphere Client menu. Although the new client looks cleaner, and does seem more responsive than the vSphere Web Client, the location of some items has changed, and some workflows will have to be adjusted accordingly. I wrote an article on the vSphere Client when it first came out that explains why VMware is switching to an HTML5-based client.

Figure 1. The vSphere Web Client main menu.
Figure 2. The vSphere Client main menu.

vCenter Server Appliance

Now that the vCenter Server Appliance (VCSA) is functionally equivalent to the Windows-based vCenter Server, it would take a lot to convince me to use the Windows-based version instead of the VCSA. Overall, I have found that the VCSA's embedded database (PostgreSQL) performs great. Furthermore, the VCSA is very easy to update, and its Linux OS (Photon OS) is rock solid. As a side note, the VCSA can easily be monitored using vimtop (be sure to read my recent articles on using vimtop). You can also read my article about migrating from a Windows-based vCenter Server to a VCSA, as well as another article on using the built-in VCSA backup tool.

The built-in backup tool in vSphere 6.7 offers more scheduling options than it did in vSphere 6.5. The Backup Scheduler tool (Figure 3) can be accessed from the vCenter Server Appliance Management Interface (VAMI). VMware also states that there are "phenomenal" performance improvements in vCenter operations per second, along with reduced memory usage and faster DRS-related operations.

Figure 3. The Backup Scheduler tool.

Suspend and Resume of vGPU Workloads
vGPU allows you to carve up a physical GPU into multiple virtual GPUs that can be used by VMs. Although vGPUs were introduced with vSphere 6.0, you were limited in what you could do with a VM that was using one. vSphere 6.7 removes some of these barriers: you can now suspend and resume a vGPU-enabled VM.

Per-VM EVC
For quite some time vSphere has had the ability to mask off CPU features so that VMs running on systems with newer CPUs could be vMotioned to servers with older CPUs. This is called Enhanced vMotion Compatibility, or EVC. In vSphere 6.7, VMware has extended this capability to work on a per-VM, rather than a per-ESXi-host, basis. This means that if you have VMs that you want to take advantage of CPU-specific features, and you're willing to limit those VMs to the hosts in your cluster whose CPUs have those features, you can configure them to do so.

Per-VM EVC is set from the vSphere Client by selecting a VM, going to the Configure tab and selecting Edit (Figure 4).

Figure 4. Setting up per-VM EVC.

Instant Clone
I've been a fan of using instant clones with virtual desktops: they're a big space saver, using only a fraction of the disk resources of a full clone, and they allow VMs to be provisioned in seconds from a parent image. With vSphere 6.7, VMware has exposed the APIs that can be used to create instant clones. It looks like a straightforward process, and I suspect that many people will figure out some very interesting ways to use the instant clone API.

ESXi Quick Boot
vSphere 6.7 introduces the Quick Boot feature, which allows a system to reboot in less than two minutes as it does not re-initialize the physical server BIOS. This can speed up operations that require an ESXi system to be rebooted; however, Quick Boot is only supported on certain systems and does not work with systems that have ESXi Secure Boot enabled.

Figure 5 shows two hosts, one with Quick Boot enabled and another without it enabled. By default, Quick Boot is enabled if the system supports it.

Figure 5. The new ESXi Quick Boot feature is enabled by default if the system supports it.

Persistent Memory (PMem) Devices
vSphere 6.7 now supports the next generation of storage devices that use persistent memory, known as non-volatile dual in-line memory module (NVDIMM) devices. This technology is still in its infancy, but applications that require the lowest possible latency, regardless of cost, will find this feature invaluable. PMem is presented to vSphere either as a vPMemDisk, which is treated somewhat like a datastore, or as a virtual NVDIMM (vNVDIMM), which is presented directly to guest OSes that can use NVDIMM devices.

Virtual Hardware Version 14
Virtual hardware is the abstract version of physical hardware presented to a virtual machine; in essence, a virtual motherboard. As physical hardware supports more features, VMware builds new virtual hardware accordingly to emulate the physical version. vSphere 6.7 comes with a new virtual hardware, version 14. Version 14 adds support for NVDIMM devices, as well as a virtual Trusted Platform Module (TPM), Microsoft Virtualization-based Security (VBS) and I/O memory management unit (IOMMU) virtualization.

VMFS Datastores
VMFS3 datastores have been around for a long time, but VMware is now phasing them out. To assist with this transition, vSphere 6.7 automatically upgrades VMFS3 datastores to VMFS5 when they’re mounted. If you want to upgrade VMFS5 datastores to VMFS6 datastores, you’ll need to upgrade the datastore with vSphere Storage vMotion because an in-place upgrade of a VMFS5 to VMFS6 datastore isn’t possible.

As a side note, vSphere 6.7 supports both VMFS5 and VMFS6; however, vSphere 6.0 and earlier systems support only VMFS5 datastores. As such, if your environment contains vSphere 6.0 or earlier systems, use VMFS6 datastores only on systems that those older hosts won't need to access.

Upgrading to vCenter Server 6.7
A specific order must be used when upgrading to vSphere 6.7. Check the documentation for the latest order and caveats, but the basic procedure can be carried out by first upgrading the Platform Service Controller (PSC), then upgrading vCenter Server and, last, updating the ESXi hosts.

Because upgrading directly from vSphere 5.5 to 6.7 isn't supported, you'll need to upgrade from vSphere 5.5 to vSphere 6.5 first, and then to vSphere 6.7. Note that an ESXi 5.5 host cannot be managed by VCSA 6.7. In contrast, upgrading from vSphere 6.0 to 6.7 is supported. If you're still running a Windows-based vCenter Server rather than a VCSA, VMware does offer a tool to assist you in the migration; be sure to read my article on using this tool.

Upgrading to ESXi 6.7
As mentioned earlier, ESXi 6.7 doesn't support all the CPUs that earlier releases do, so be sure to check the HCL to ensure that your system is supported. Roughly speaking, the minimum you'll typically find supported is a dual-core CPU released after September 2006 with NX/XD enabled. You can use VMware Update Manager (VUM) to do an orchestrated, automated upgrade. Alternatively, you can manually update ESXi systems using an ISO image or esxcli commands or, if you use stateless hosts, you can use vSphere Auto Deploy to update your servers. To see how to update an ESXi system using esxcli commands, be sure to read my article.

Wrapping Up
If I were forced to pick one standout feature in vSphere 6.7, it would have to be the instant clone API. I see this feature as a great enabler for the VMware ecosystem and VMware developers, because the ability to spawn, in minutes, hundreds of identical VMs that use only a small amount of space has some mind-boggling use cases. However, with great power comes great responsibility, and it will be interesting to watch the development of tools to manage and orchestrate these VMs over time.

Yes, instant clones are the gee-whiz feature in this release, but the rest of the improvements prove that the hypervisor still has room for evolutionary growth, and that VMware is serious about maintaining its leadership position in this regard.

Related Article: Here’s What’s New in vSphere 6.7

Troubleshooting Importing an OVF Template into VMware ESXi
Error while importing OVA files: Unsupported hardware family, Unsupported devices.

This post shows how to adapt a VMware OVA exported from VirtualBox into a virtual machine compatible with ESXi.

When you try to import such an OVA into ESXi, you get one of the following errors:

 “The OVF package requires unsupported hardware.
Details: Line 25: Unsupported hardware family ‘virtualbox2.2’.”
or:
 “Details: Line 25: Unsupported hardware family ‘_unsupported_version’.”


Uncompress the OVA Archive:

First of all, uncompress the OVA archive (an OVA is a tar archive, so an extractor like 7-Zip can open it).

You will get a directory with three files in it, like this:

VMFile.mf
VMFile.ovf
VMFile.vmdk
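Because an OVA is simply a tar archive, the extraction can also be scripted instead of using 7-Zip. A minimal Python sketch, assuming the file is named VMFile.ova as in this example (extract_ova is a hypothetical helper, not part of any VMware tool):

```python
import tarfile

def extract_ova(ova_path, dest_dir="."):
    """Extract an OVA (a plain tar archive) and return the member names.

    A typical OVA contains the .ovf descriptor, the .mf manifest
    and one or more .vmdk disk files.
    """
    with tarfile.open(ova_path) as archive:
        names = archive.getnames()
        archive.extractall(dest_dir)
    return names
```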

Modify the OVF File:

Open the *.ovf file (with a text editor such as Notepad++).

Change the following line:

 <vssd:VirtualSystemType>_unsupported_version</vssd:VirtualSystemType>

to this one:

 <vssd:VirtualSystemType>vmx-07</vssd:VirtualSystemType>
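The same substitution can be scripted if you repair these files often. A minimal Python sketch, assuming the vmx-07 target type from the example above (fix_virtual_system_type is a hypothetical helper):

```python
import re

def fix_virtual_system_type(ovf_text, new_type="vmx-07"):
    """Swap the VirtualBox hardware family for one that ESXi accepts."""
    return re.sub(
        r"(<vssd:VirtualSystemType>)[^<]*(</vssd:VirtualSystemType>)",
        lambda m: m.group(1) + new_type + m.group(2),
        ovf_text,
    )
```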

Modify the *.mf File and Calculate the SHA1 Hash of the Modified OVF File:

Open the *.mf file, which contains the SHA1 hashes of the package files. Because you changed the .ovf file, you need to replace its value with the new SHA1 hash of the modified file.

SHA1(VMFile.ovf)= 48432f9cb8b0bfa97098006abb390805449303be
SHA1(VMFile.vmdk)= ffa3500bc379a2e040badce315d6b3b06876d5a9

To calculate this hash you can use a tool like FCIV from Microsoft. You can download it here: http://support.microsoft.com/kb/841290

>D:\FCIV\fciv.exe -sha1 "VMFile.ovf"
//
// File Checksum Integrity Verifier version 2.05.
//
da39a3ee5e6b4b0d3255bfef95601890afd80709 VMFile.ovf

Put the new hash in the *.mf file and save it:

SHA1(VMFile.ovf)= da39a3ee5e6b4b0d3255bfef95601890afd80709
SHA1(VMFile.vmdk)= ffa3500bc379a2e040badce315d6b3b06876d5a9
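If FCIV isn't handy, Python's hashlib computes the same SHA1 and can rewrite the manifest in one go. A sketch, assuming the manifest uses the SHA1(name)= hash line format shown above (sha1_of and update_manifest are hypothetical helpers):

```python
import hashlib
import re

def sha1_of(path):
    """Return the hex SHA1 digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()

def update_manifest(mf_path, file_name, new_hash):
    """Rewrite the 'SHA1(<file_name>)= <hex>' line in the .mf manifest."""
    with open(mf_path) as f:
        text = f.read()
    pattern = r"SHA1\(%s\)= [0-9a-fA-F]+" % re.escape(file_name)
    text = re.sub(pattern, "SHA1(%s)= %s" % (file_name, new_hash), text)
    with open(mf_path, "w") as f:
        f.write(text)
```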

You may encounter other errors while importing the OVF template, such as:

Line XX: OVF hardware element ‘ResourceType’ with instance ID ‘5’: No support for the virtual hardware device type ’20’

This is a problem with the SATA controller; change the settings in the .ovf to replace it with an LSI Logic SCSI controller, like this:

From this:

<Item>
<rasd:Address>0</rasd:Address>
<rasd:Caption>sataController0</rasd:Caption>
<rasd:Description>SATA Controller</rasd:Description>
<rasd:ElementName>sataController0</rasd:ElementName>
<rasd:InstanceID>5</rasd:InstanceID>
<rasd:ResourceSubType>AHCI</rasd:ResourceSubType>
<rasd:ResourceType>20</rasd:ResourceType>
</Item>

To this:
<Item>
<rasd:Address>0</rasd:Address>
<rasd:Caption>SCSIController</rasd:Caption>
<rasd:Description>SCSI Controller</rasd:Description>
<rasd:ElementName>SCSIController</rasd:ElementName>
<rasd:InstanceID>5</rasd:InstanceID>
<rasd:ResourceSubType>lsilogic</rasd:ResourceSubType>
<rasd:ResourceType>6</rasd:ResourceType>
</Item>

Other problems can occur if one of the <Item> entries describes an audio card; in that case, delete the whole <Item> block:

<Item>
sound-card-settings
</Item>

Then save the OVF file again.
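Stripping the unsupported device entries can also be done with a quick script. A regex-based Python sketch, assuming the device you need to drop mentions "sound" somewhere inside its <Item> block (remove_sound_items is a hypothetical helper):

```python
import re

def remove_sound_items(ovf_text):
    """Delete any <Item>...</Item> block that mentions a sound card.

    The tempered pattern (?:(?!</Item>).)*? keeps each match inside
    a single <Item> block, so neighboring devices are untouched.
    """
    pattern = re.compile(
        r"<Item>(?:(?!</Item>).)*?sound(?:(?!</Item>).)*?</Item>\s*",
        re.DOTALL | re.IGNORECASE,
    )
    return pattern.sub("", ovf_text)
```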

Remember that every time you save the .ovf file and want to try importing it into ESXi again, you must regenerate the hash and replace it in the .mf file, following the steps above.

Deploy the New OVF:

Now you can deploy the new OVF file directly on ESXi.
In the vSphere Client, select:

File > Deploy OVF Template

Then select your OVF file.

Some warnings are raised, but you can move forward.


So now you can deploy and start your VM.

Related Article: VIRTUALBOX OVA TO VSPHERE OVF | Uncompress a VMWARE OVA and modify its VM version | File Checksum Integrity Verifier

Understanding the Power of VMware vApps
The power of VMware vApps is something that I think most VMware Admins still overlook, simply because they haven't taken the time to learn more.

Introduction

The power of VMware vApps is something that I think most VMware Admins still overlook, simply because they haven't taken the time to learn more about them. I believe that once you do, you'll see that vApps offer amazing portability and power that you'll want to use in your VMware infrastructure.

In the past, I have created a couple of videos on vApps: Great New vApp / OVF 1.0 Features in vSphere 4 and What are VMware vApps?. These videos offer good information on the concept of a vApp, but they are based on vSphere 4, and there have been a number of improvements since then. So let's start from the beginning: what a vApp is, and how its latest features in vSphere 5 can help you.

What is a VMware vApp?

A vApp is a container for virtual machines that offers resource controls and management for the virtual machines that are inside. Think of a vApp as a portable, self-contained box that holds multiple virtual machines that make up a multi-tiered application (like a web server, database, and security server), including all custom network configurations.

vApps offer:

  • A container for multiple virtual machines
  • Resource controls for the VMs inside the container
  • Network configurations contained inside
  • Portability of the vApp, such that everything can be contained and transferred to another virtual infrastructure
  • The ability to power on, power off, suspend, or shut down the entire vApp
  • The ability to clone the entire vApp

Probably the best way to understand vApps is to create one, so let's learn how.

Creating a vApp

Creating a vApp is easy. In your vSphere Client (connected to vCenter), click File, go to New, and click vApp, as you see in Figure 1. Alternatively, you can press Ctrl-A.


Figure 1

This will bring up the New vApp Wizard. The first thing you need to do in this wizard is give the vApp a name. In my case, I simply called it "Client-Server-App" and clicked Next.


Figure 2

Next, you need to configure the resource allocation for the vApp. At this point, the only resource allocations available are CPU and memory. The resource configuration is just like a resource pool's, as a vApp really contains a resource pool; vApp resources use the same shares, reservations, and limits that regular resource pools use. Notice how I went ahead and reserved 4,000MHz of vCPU and 6,000MB (6GB) of vRAM for the VMs that will be inside the vApp resource pool.


Figure 3

Finally, review your settings before creating the vApp, as shown in Figure 4. After reviewing, click Finish.