vSphere 6.7 – Let’s break it down!


As you've likely already seen, vSphere 6.7, vCenter 6.7 and vSAN 6.7 were announced and released last week.  This was a little surprising since VMworld 2018 is just around the corner and VMware usually reserves big releases like this for closer to the big show.  Does that mean we're getting a full point release announced soon?  Only time will tell.

Speculation aside, this vSphere release is definitely worth checking out.  There are a ton of enhancements and new features that will certainly help any move towards a hybrid cloud infrastructure.  It's not without limitations, of course, which I'll detail below; chief among them are processor support and compatibility with other vSphere products.

In this article I’ll be going over a few of the most intriguing features and enhancements as well as those limitations.  Let’s break it down!

vSphere 6.7 Configuration Maximums

As usual with big releases, the configuration maximums tend to go up.  This release shows only a few increases, but they land in some key areas that I'll detail below.  VMware also has a great site at https://configmax.vmware.com where you can pull all of the configuration maximums for vSphere 6.0, 6.5, 6.5 U1 and 6.7.  Here are the most interesting changes I noticed:

Maximum                           | vSphere 6.5 U1 | vSphere 6.7
RAM per FT VM                     | 64 GB          | 128 GB
Logical CPUs per host             | 576            | 768
Maximum RAM per host              | 12 TB          | 16 TB
Number of total paths on a server | 2048           | 4096
Volumes per host                  | 512            | 1024
LUNs per server                   | 512            | 1024
Number of PEs per host            | 256            | 512

You can have 16 TB of RAM on a host now!  That's crazy to even think about for most people.  Surprisingly, or maybe not surprisingly to some, there are hardware options that let you put even more RAM than that in a single physical server (depending on how you configure it, of course).  The other changes center around storage, allowing more volumes and more paths per host.  In the VVols arena, up to 512 Protocol Endpoints per host are now allowed.  I haven't seen a ton of VVols adoption myself yet, so I don't know how big of a difference that will make, but VMware is certainly pushing VVols hard.

vCenter 6.7 Updates

vCenter 6.7 has a number of changes and updates in this release.  A few notes about vCenter and the VCSA/PSC appliance before we get into the deep end:

  • This is the last version that will contain a Windows-based version of vCenter.
  • There is no upgrade path from vSphere/vCenter 5.5.  You will have to upgrade to 6.0 first.
  • /psc is now part of the vSphere Client under the Administration section divided between the Certificate Management and Configuration tabs.

With that out of the way let’s move onto the VCSA updates!

vCenter Server Appliance (VCSA)

vCenter with an Embedded Platform Services Controller can now take advantage of Enhanced Linked Mode.  This was announced at VMworld 2017 last year, and it's finally baked into the VCSA.  You no longer need External Platform Services Controllers to enable Enhanced Linked Mode, and you don't need load balancers for high availability either.  This change supports all the vSphere scale maximums, reduces the number of infrastructure components to manage, and makes backups easier with the addition of File-Based Backup options.

There were significant improvements made to the vSphere Appliance Management Interface (VAMI) and, as noted above, the consolidation of the /psc functionality.  There are also performance improvements to vCenter as follows (all metrics measured at cluster scale limits, versus vSphere 6.5):

  • 2X faster performance in vCenter operations per second
  • 3X reduction in memory usage
  • 3X faster DRS-related operations (e.g. power-on virtual machine)

vSphere Client

Another huge step forward for the HTML5-based vSphere Client, which is reported to be 95% feature complete compared to the vSphere Web Client (the Flash client).  Below are the additional workflows added to the vSphere Client, although a few specific options within them aren't available yet.  (The announcement says NSX is one of the workflows, but it's not on the compatibility list and the release notes flag it as not compatible.)

  • vSphere Update Manager
  • Content Library
  • vSAN
  • Storage Policies
  • Host Profiles
  • vDS Topology Diagram
  • Licensing

Management, Migration and Provisioning

vCenter Server Hybrid Linked Mode is now available in vSphere 6.7.  It simplifies manageability and unifies visibility across an on-premises vSphere infrastructure and a vSphere-based public cloud infrastructure running a different version of vSphere, with VMware Cloud on AWS being a good example.  This new set of features also allows for Cross-Cloud Cold and Hot Migration.  Let that soak in for a minute!

A related feature is Cross-vCenter Mixed Version Provisioning, which covers vMotion, Full Clone and cold migration operations.  To clarify: you can now vMotion or create clones across vCenter Servers of different versions.  I can see so many use cases for this, including new infrastructure deployments where I don't want or need to upgrade the old infrastructure but do need to move the workloads to the new environment.

vRealize Operations Manager

The last thing I'll mention on the vCenter enhancements is the addition of vRealize Operations Manager dashboards right inside vCenter.  This feature requires vRealize Operations Manager 6.7, of course.  VMware is slowly unifying the management components with reporting and analytics, and it's a very welcome thing.  Being able to see vROps information without having to open both interfaces is definitely a time saver.

vSphere 6.7 Updates

Along with all the updates to vCenter, there are a number of feature changes and updates to ESXi and vSphere generally that we'll talk about here.  Some of these could be lumped into the vCenter section, but I think they relate more to vSphere and storage.  Let's start with the ESXi 6.7 updates!

ESXi 6.7 Updates

The Single Reboot feature eliminates one of the two reboots previously required for major version upgrades.  Before this, the hardware would reboot into the installer, install the update and then reboot again into the upgraded ESXi version.  Now, with Single Reboot, the update is applied and the hardware reboots directly into the upgraded version.  That should save administrators quite a bit of time.

The next cool time-saving feature is ESXi Quick Boot.  This feature skips the hardware reboot (the BIOS or UEFI firmware reboot) that normally reinitializes all the hardware on a host before booting into ESXi, a process that takes a lot of time on most hosts.  ESXi Quick Boot skips it entirely and just restarts ESXi in place, saving all of that hardware reboot time.  Quick Boot is enabled through vSphere Update Manager's remediation settings.  The only problem is that it's currently supported on just a small list of hardware, detailed below, although you can also check whether your host supports it by running a script that ships with ESXi 6.7 (see the example after the hardware list).

Supported platforms for ESXi Quick Boot:

  • Dell PowerEdge R630
  • Dell PowerEdge R640
  • Dell PowerEdge R730
  • Dell PowerEdge R740
  • HPE ProLiant DL360 Gen9
  • HPE ProLiant DL360 Gen10
  • HPE ProLiant DL380 Gen9
  • HPE ProLiant DL380 Gen10
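
If your server isn't on that list, ESXi 6.7 ships with a compatibility-check script (documented in VMware KB 52477) that reports whether your host can use Quick Boot.  Run it from the ESXi Shell or over SSH:

    # Reports whether the platform, drivers and host configuration
    # are compatible with Quick Boot
    /usr/lib/vmware/loadesx/bin/loadESXCheckCompat.py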

vSphere 6.7

One of the swankier new features is vSphere Persistent Memory (PMEM) support.  PMEM is essentially DRAM with non-volatile memory on board that can store data like an SSD.  Imagine a DIMM that's half memory and half non-volatile storage, presented to the host as a datastore, and you have Persistent Memory.  HPE and Dell both have supported options for this out now.  This puts your storage at the DRAM layer and, as you can imagine, greatly increases the speed at which you can access your data.  You can even attach virtual NVDIMMs to compatible guest OSes, taking a piece of the PMEM datastore and attaching it directly to your guest.  Here's a table Dell put together comparing the access times of standard storage types against Persistent Memory, in nanoseconds (a quick way to spot the PMem datastore follows the table).

Storage Technology                  | Data Access Time
15K SAS Disk                        | ~ 6,000,000 ns
SATA SSD                            | ~ 120,000 ns
NVMe SSD                            | ~ 60,000 ns
Persistent Memory (NVDIMM/NVDIMM-N) | ~ 150 ns
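
As a quick sanity check (and this is an assumption on my part, since I don't have NVDIMM hardware to test with), a 6.7 host with supported NVDIMMs should expose a PMem datastore automatically, which you should be able to spot from the ESXi shell:

    # List mounted filesystems; with NVDIMMs present, a datastore with a
    # PMEM filesystem type should appear alongside the VMFS volumes
    esxcli storage filesystem list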

This release also adds protocol support for Remote Direct Memory Access over Converged Ethernet (RoCE v2), along with Paravirtualized RDMA (PVRDMA) support.  It also introduces a new software Fibre Channel over Ethernet (FCoE) adapter and support for iSCSI Extensions for RDMA (iSER).  Together these offer many new options for integrating with high-performance storage platforms, as well as the ability to bypass the normal connectivity methods and present even more types of storage directly to guest operating systems.
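
Both the RDMA devices and the new iSER adapter are managed from the command line.  Here's a minimal sketch, assuming a host that already has a supported RDMA-capable NIC installed:

    # List the RDMA-capable devices the host can see
    esxcli rdma device list

    # Enable the iSER adapter; after a storage rescan it appears as a
    # vmhba that you configure like a regular iSCSI adapter
    esxcli rdma iser add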

Lastly on the ESXi side, you can now add multiple Syslog targets.  I've seen customers use vRealize Log Insight for their virtual environment while another team uses a different product to correlate Syslog data.  Now they don't have to choose where the logs go, since you can add up to three Syslog targets in the VAMI.
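
For the ESXi hosts themselves, multiple remote Syslog targets can also be configured with esxcli by passing a comma-separated list.  A minimal sketch, with hypothetical hostnames:

    # Point the host at two remote Syslog targets (hostnames are placeholders)
    esxcli system syslog config set --loghost="udp://loginsight.lab.local:514,udp://siem.lab.local:514"

    # Reload the Syslog service so the new targets take effect
    esxcli system syslog reload

    # Make sure outbound syslog traffic is allowed through the ESXi firewall
    esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true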

General vSphere 6.7 Updates

One feature that is sure to be a game changer is Per-VM EVC (Enhanced vMotion Compatibility) mode.  Per-VM EVC lets you set the processor generation on each individual VM as needed, instead of on the entire cluster.  This allows you to migrate VMs between clusters of hosts running different generations of CPUs while the VMs are powered on; there's no need to power them off or to set the EVC level on the cluster.  Just set it on the VM itself and vMotion.  This will make it significantly easier to migrate seamlessly between clusters with different hardware.

At VMworld 2017 last year I attended a really interesting session that covered many of the features I've discussed today, and this next one was mentioned there as well.  vGPU is getting some love here: you can now suspend and resume vGPU-based VMs to allow for migration between hosts.  Full vMotion compatibility was said to be in the works, but it's not here yet.  Suspend and Resume is a step in the right direction and will make it a little easier to maintain vGPU-based clusters.

VMware has also taken the initiative to help protect data in motion by enabling encrypted vMotion across different vCenters as well as different vSphere versions.  In the same security vein, vSphere 6.7 adds support for Trusted Platform Module 2.0 (TPM 2.0), which works with Secure Boot to validate that you're running only secure, signed code, protecting your environment from certain types of attacks.  So that protects the physical hardware, but "what about my guest OSes?" you may be asking.  vSphere 6.7 adds a feature called Virtual TPM 2.0, which presents a virtual TPM device to the guest and cryptographically protects the VM by storing the TPM data in the VM's NVRAM file and securing that file with VM Encryption.  That data travels with the VM during migrations, so each VM's protection is encapsulated with the VM itself rather than tied to a host or physical hardware.  Of note: VM Encryption requires a third-party key management server (KMS) infrastructure.

vSphere 6.7 Storage Updates

In vSphere 6.5, VMware reintroduced Automatic UNMAP.  This feature works with storage that supports the vSphere Storage APIs for Array Integration (VAAI) primitives, which allow certain storage tasks to be offloaded to the array hardware; UNMAP is one of those tasks, and running it reclaims deleted VMFS blocks on thin-provisioned LUNs.  vSphere 6.5 enabled the feature to run automatically, and vSphere 6.7 now lets you configure the UNMAP rate to better control the throughput the feature uses.  Previously it ran at a static 25 MB/s; it's now configurable between 100 MB/s and 2000 MB/s.  The UNMAP feature also extends to SESparse disks on VMFS-6, though this only works while the VM is powered on and only affects the highest-level snapshot.
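
The reclamation settings are exposed through esxcli if you want to tune them.  A minimal sketch from the ESXi shell, using a placeholder datastore name:

    # Show the current space-reclamation settings for a datastore
    # ("MyDatastore" is a placeholder for your VMFS-6 volume label)
    esxcli storage vmfs reclaim config get -l MyDatastore

    # Switch from priority-based reclamation to a fixed rate of 500 MB/s
    # (fixed rates are configurable between 100 and 2000 MB/s)
    esxcli storage vmfs reclaim config set -l MyDatastore -m fixed -b 500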


Finally, vSphere 6.7 adds support for 4Kn HDDs as local storage, though 4Kn SSDs and NVMe drives are currently not supported for local storage.  VMware provides a software read-modify-write layer that emulates 512B sector drives, so guests still see 512-byte sectors.
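
If you're not sure which sector format your local devices use, the device capacity listing reports a format type per device.  A quick check from the ESXi shell:

    # The Format Type column reports 512n, 512e or 4Kn SWE
    # (SWE = software emulated) for each attached device
    esxcli storage core device capacity list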

vSphere 6.7 Final Thoughts and Support Issues

Ok, so there's a whole lot of good in this release and so many cool new features.  Let's not forget, though, that sometimes good things come with consequences.  If your company is on a 3-5 year refresh cycle, you're probably not going to be affected.  Those of us running homelabs on slightly older gear, however, may well run into issues.  First out of the gate: CPU support for vSphere 6.7 has been cut back significantly.

vSphere 6.7 no longer supports the following processors (a quick way to check what your hosts are running follows the list):

  • AMD Opteron 13xx Series
  • AMD Opteron 23xx Series
  • AMD Opteron 24xx Series
  • AMD Opteron 41xx Series
  • AMD Opteron 61xx Series
  • AMD Opteron 83xx Series
  • AMD Opteron 84xx Series
  • Intel Core i7-620LE Processor
  • Intel i3/i5 Clarkdale Series
  • Intel Xeon 31xx Series
  • Intel Xeon 33xx Series
  • Intel Xeon 34xx Clarkdale Series
  • Intel Xeon 34xx Lynnfield Series
  • Intel Xeon 35xx Series
  • Intel Xeon 36xx Series
  • Intel Xeon 52xx Series
  • Intel Xeon 54xx Series
  • Intel Xeon 55xx Series
  • Intel Xeon 56xx Series
  • Intel Xeon 65xx Series
  • Intel Xeon 74xx Series
  • Intel Xeon 75xx Series
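
Before planning an upgrade, it's worth confirming exactly which CPUs your hosts are running.  One quick way from the ESXi shell:

    # Print the host summary and filter for the CPU model string
    vim-cmd hostsvc/hostsummary | grep cpuModel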

Of course, I'm running Intel Xeon 5540s in the Dell R710 hosts in my lab.  I've seen some posts from people who were able to work around the issue on certain processor types, which may mean there's some light at the end of the tunnel for me.  Interestingly, the release notes indicate you will get a purple screen of death (PSOD) on unsupported CPUs, but I got a black screen with an error message instead.

Undeterred, I installed nested ESXi 6.7 in my VMware Workstation instance and then deployed the vCenter 6.7 VCSA on top of it.  Either way, if your CPUs are on this list you'll need to consider a hardware refresh before you upgrade to vSphere 6.7.

The vSphere 6.7 announcement is also slightly misleading in that it states several times that the release adds functionality and workflows for NSX, yet both the end of the announcement and the release notes indicate that there is currently no supported or compatible version of NSX that works with vSphere 6.7.  I get that you can add features for something that will be supported later, but it's troublesome when companies talk about them before they're a reality.  Just a pet peeve, I guess.  Either way, it looks like a new version of NSX that supports vSphere 6.7 is likely coming soon.

You'll notice I didn't talk about vSAN 6.7 here.  There really didn't seem to be any major changes beyond a few under-the-hood improvements and Windows Server Failover Cluster (WSFC) support.

Above I mentioned that I installed ESXi 6.7 in VMware Workstation and deployed vCenter 6.7 onto that nested host.  The ESXi 6.7 install itself is unchanged, but the vCenter deployment got a new interface, and it's much cleaner, with a really streamlined look and feel.


That's all for now!  vSphere 6.7 is well worth checking out as far as I can tell.  I'm not sure why VMware made this an incremental release instead of a full version release, and I'm also wondering why they announced it just a few months before VMworld 2018.  Either way, as usual, VMware is on the right track: this release is packed full of features and updates that continue the push towards the hybrid cloud datacenter and the software-defined future.  Thanks for reading!
