
Bits and Bytes of Virtualization

October 19, 2018
by zach
0 comments

vRealize Automation Deployment Failed

I recently deployed a new vRealize Automation 7.5 environment. The deployment went without any issues, and the configuration also went well. A week later, I launched the console of the vRA appliance and found an error displayed, indicating that the vRealize Automation deployment had failed.

ERROR: DEPLOYMENT FAILED, YOU WILL NEED TO REDEPLOY

This was an odd error to see, as the environment had been up and running for well over a week with no indication anything was wrong. I searched the web and found one reference to it on the VMTN communities forum. A VMware employee had responded, advising to reboot the virtual appliance and ensure all of the services registered after the reboot. If all was well, the next step was to edit a welcome text file to remove the error. The error in the boot.msg file was "Failed services in runlevel 3: network vcac-server", slightly different from the service named in the VMTN post.

I rebooted the appliance and confirmed the services registered correctly.

The welcome text file to edit is located at: /opt/vmware/etc/isv/welcometext
Replace the error content with the following: ${app.name} - ${app.version}
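If you would rather script the cleanup than edit the file by hand, a minimal sketch could look like this (the helper below is my own, not a VMware tool; run it as root on the appliance, and pass a different path when testing elsewhere):

```python
# Minimal sketch: restore the default vRA welcome text banner.
DEFAULT_PATH = '/opt/vmware/etc/isv/welcometext'
DEFAULT_TEXT = '${app.name} - ${app.version}'

def reset_welcome_text(path=DEFAULT_PATH, text=DEFAULT_TEXT):
    """Overwrite the welcome text file, clearing any stale error banner."""
    with open(path, 'w') as f:
        f.write(text + '\n')
```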

The VMware employee indicates that a knowledge base article is being created for this issue. I will edit this post with an update to the KB when available.

October 5, 2018
by zach
0 comments

My Ignite Experience and Highlights

I was fortunate enough to attend Microsoft’s Ignite conference in Orlando last week. Normally I attend VMworld, as I have been to that conference six times. Earlier this year, I requested to attend Ignite in priority over VMworld because of my shifting focus. Luckily, I was given the go-ahead to book Ignite a couple of weeks before it sold out. I have only been to one Microsoft conference before, a TechEd over five years ago. I’ll describe my Ignite experience and highlights.

Azure and Automation Sessions

Most of the sessions I added to my schedule were focused on automation, serverless, and HashiCorp’s Terraform integration with Azure. Early on, many of the sessions I had scheduled were on the expo floor. This was a new concept to me, as these presentations were sprinkled across the expo floor but weren’t necessarily presentations by vendors. The expo theater sessions were 20 minutes long, whereas the breakout sessions in their own rooms were 75 minutes.

On Monday, I attended a session (BRK2213) led by Donovan Brown called “Getting Started with Azure DevOps”. I have seen Donovan on Channel 9 in the past and liked him in those videos, and his session did not disappoint. He provided a good overview of Azure DevOps, previously Visual Studio Team Services, and then dove deeper. He talked about how many Microsoft teams use Azure DevOps to build and maintain their respective products, including the Windows team. I wanted to go back and review this session video. Unfortunately, it may not become available, as it included a few items that weren’t to be made public. I’ve reached out to determine whether the video will be released at a later date.

Session BRK3266, “Automation Tools for Azure Infrastructure,” described a wide range of products that can be used to automate Azure: PowerShell, Azure CLI, Azure Building Blocks, Terraform, and Ansible. All of these products have their strengths and weaknesses, and use cases surrounding them were mentioned to give a better idea of when to use each.

One of the first sessions with a large focus on Terraform was BRK3194, “Deploying containerized and serverless apps using Terraform with Kubernetes (AKS) and Azure Functions”. It was led by Christie Koehler (HashiCorp) and Zachary Deptawa (Azure). Zachary has been on a few HashiCorp webinars I have attended in the past. Between the two presenters, there is a lot of knowledge around Terraform and HashiCorp in general. The session was jam-packed with information and excellent demos. I will be rewatching it to catch anything I missed, as well as running through the same steps they showed in their demos to learn more.

Kicking off Thursday morning was a deep session (BRK4020) about Azure Functions internals. About half of the session was over my head, as it got deep into the weeds of how Functions works, but it was definitely worth attending or watching. They showed many of the differences and advancements made from version 1.0 to 2.0. Azure Functions 2.0 is a big step in the right direction, but before moving from 1.0 to 2.0, users need to check whether their function app will port directly over. Also, Functions on Linux with the consumption model is now in preview!

I ended the conference with a session (BRK3043) purely about Terraform in a multi-cloud environment led by Mark Gray. The session started off with the basics about Terraform but quickly gained steam and dove into some more advanced features. His demos were packed full of very good information and tips if you are learning Terraform. Mark also posted his demos to GitHub for anyone wanting to look closer at his code.

Other Interesting Ignite Sessions

On Tuesday morning, I attended session BRK2041, “A deeper look at Azure Storage with a special focus on new capabilities”. This session, led by Tad Brockway and Jeffrey Snover, had a lot of content. Multiple topics under the Azure Storage service were covered, with many demos showing improvements within the service. There was also a history of the storage platform and its progress across generations. One staggering fact: they decreased their storage costs by 98% through those generations.

Later on Tuesday, I attended another session (BRK3062) on architecting security and governance across Azure subscriptions. Before this session, I had not had much exposure to the security and governance side of things, yet it is a very important aspect of architecting Azure solutions in my current position. The new Azure Blueprints feature was discussed briefly; Blueprints looks like it will be a very powerful tool for numerous use cases. This session encouraged me to dive deeper into the subject.

The biggest session on Thursday was Inside Azure Datacenter Architecture with Mark Russinovich (BRK3347). When I got in line twenty minutes before the session started, I was easily over 500 people back. Luckily the auditorium was very large and held everyone. This session is a must see if you are interested in the back end Azure infrastructure as well as its history. It was packed full of demos ranging across storage performance, service fabrics, and IoT sensor redundancy. It was my favorite session out of the entire conference. 

Interesting Announcements

Is it the year of VDI? Probably not, but Microsoft has a new service for Windows 7 and 10 desktops available. Virtual desktops are hard. If Microsoft can’t virtualize their own desktops effectively, who can? Pair up all of the services that Azure and Office 365 can easily tie in and this becomes a very attractive offering for companies. Check out more here.

As mentioned previously, Azure Blueprints will be a new focus for me going forward. The labs available to be taken during Ignite were locked down and deployed using Azure Blueprints. Knowing how to use the blueprints feature will be a differentiator for companies trying to ensure security and to control costs. More can be found here.

A friend of mine at a Fortune 500 company mentioned the new Azure ExpressRoute Global Reach announcement. This is interesting as he mentioned that this could be used to connect their datacenters over Microsoft’s backbone instead of paying for their current provider. Depending on the cost of everything, it may be a big cost savings. Keep an eye on the pricing as it may become very attractive for companies. Not much was posted about it but the announcement can be found here.

Overall Conference Experience

Overall, I enjoyed the Ignite conference, and it ranks at the top of conferences I have attended. The production value of Ignite felt like a step above VMworld. The video production in the community and expo areas, as well as the streaming of multiple sessions on the huge board in the hang area, was impressive. I watched two sessions from the hang area that I had originally planned to attend in person. One was because the session room was full; the other was out of convenience, as I was already watching a session and saw my next one was slated to be shown on the screen directly in front of me. The turnaround time for getting session videos online was impressive as well.

The demos and hands-on experiences across Microsoft technologies were great. I took advantage of the opportunity to try the HoloLens and discover its augmented reality capabilities.

Transportation between the hotels and the conference center was well done. On Monday, the initial bus seemed to be late, but the rest of the days had minimal wait times. My biggest gripe was on Thursday afternoon, when the transportation window was cut down in preparation for the party. The buses were scheduled to leave at 4:30; however, hundreds if not a thousand people were ready to head back at 4:10 and were waiting in line. The buses were waiting at the curb, but we were not allowed to board to escape the heat. Instead, we all stood next to the buses waiting to get into the air conditioning.

The food and refreshments weren’t bad. Breakfast was the same every morning, which felt odd. A morning #BaconReport was provided daily by @Schumatt. Lunches and afternoon snacks did vary and weren’t too bad. Considering the amount of food that had to be made, none of us were expecting an amazing meal. I’ve definitely had worse in the past!

Next Year

I had a great time this year and learned a lot, and I plan to return next year. The 2019 Ignite conference returns to Orlando on November 4th.

April 19, 2018
by zach
0 comments

vRA 7.4 Upgrade Issue

VMware released the latest revision of vRealize Automation last week, and I found some time to upgrade my homelab environment. At the time, 7.3.0 was the running version. I planned to skip past 7.3.1 and go directly to 7.4. I downloaded the vRA 7.4 ISO file, attached it to the appliance’s CD-ROM drive, and clicked check updates from the CD-ROM. Unfortunately, the error “No update found on 1 CD drive(s)” was given. I decided to let the appliance upgrade to 7.3.1 first; that upgrade went smoothly without any issues.

The Issue

Next up was the vRA 7.4 upgrade. I took another round of snapshots, went back into the appliance management interface, and initiated the 7.4 install. The vRA appliance upgraded to 7.4 and asked for a reboot. The appliance rebooted and came back online. After waiting a very long time for the IaaS components to begin their upgrade, I noticed an issue with some appliance services. The vCO service did not have any status, while the following services were “UNAVAILABLE”:

advanced-designer-service
o11n-gateway-service
shell-ui-app

Services Unavailable

I dug into some logs and found WARN events surrounding the unavailable services. In those events, I noticed the following error: “Unable to establish a connection to vCenter Orchestrator server.” Therefore, I needed to figure out why the vCO service was not starting. Once I could get it to start, the others would register successfully. I checked the logs for the vCO services and found the following error:

 2018-04-14 18:39:16.702+0000 [serverHealthMonitorScheduler-1] WARN {} [LdapCenterImpl] Unable to fetch element "vsphere.local\vcoadmins" from ldap : Error...:[404 ][javax.naming.NamingException]
2018-04-14 18:39:16.702+0000 [serverHealthMonitorScheduler-1] ERROR {} [AuthenticationHealth] Unable to find the server administrative group: vsphere.local\vcoadmins in the authentication provider.

The Resolution

This was an immediate smoking gun given my configuration. I had set up the vRO admin group to use a group within my Active Directory. Therefore, the local group, vcoadmins, was not present, which prevented the vCO service from registering with vRA. I changed the vRO admin group back to my AD group and rebooted the appliance.

vRO Admin Group

All of the services registered successfully and the IaaS upgrade process began. The vRA 7.4 upgrade completed shortly after that without any further issues.

Upgrade Complete

However, I don’t know why the vRO admin group was changed to vsphere.local\vcoadmins during the 7.3.1 to 7.4 upgrade. Luckily, it wasn’t too big of an issue to fix, but it was annoying to say the least.

April 11, 2018
by zach
0 comments

Import Python Modules for use in an Azure Function

Azure Functions is a “serverless” compute service that supports multiple programming languages. Some languages are officially supported, while others are in preview. I have numerous Python scripts that I could push into the cloud to help me learn how to use Azure Functions. Unfortunately, the preview languages do not have much documentation out there. The biggest hurdle was importing Python modules for use in an Azure Function.

Azure Functions uses App Service on the back-end, which allows you to customize your application environment with the help of Kudu. I found some documentation across multiple sites, but it had aged a bit, and not a single how-to post or guide had all of the answers. The inaccuracies in the guides I found are not surprising given the preview nature of the language support. After lots of trial and error, I found a method that worked for me.

Create a Function App

First, create a new Function App. 

Create a New Function App

Confirm the function app is up and running.  Then click the + sign next to functions to add a function to the app. 

Create a New Function

The center pane will ask for a scenario and language to assist with a premade function. Since we are using Python for our language, a custom function must be selected to proceed.

Create Custom Function

The next screen provides templates to use to get started. However, to use python, the “Experimental Language Support” switch needs to be enabled.

Enable Experimental Languages

After selecting Python, only two options (HTTP trigger and Queue trigger) can be selected. For this demo, I will select HTTP trigger. I left the defaults for this example. 

HTTP Trigger

Update Python Version

Now that we have a function in the app, the Python version needs to be updated. The version of Python installed by default is old and conflicted with my scripts. This may not be the case for your scripts, but if you need a specific version of Python, this process will assist. My scripts were written for Python 2.7; I need to update them to support Python 3.6, but that will come at a later time. To get started, we need to access the Kudu tool. Click the Function App name on the left, then Platform features at the top, and then “Advanced tools (Kudu)” near the bottom of the center pane.

Azure App's Kudu

To update the Python version, click the Site extensions at the top.

Click the Gallery tab, then type Python into the search box. The results will provide multiple versions of Python available to install. Pick your desired version.

I need Python version 2.7.14 x64. Click the + sign to install the extension into your environment. The install icon shows an animated loading indicator while it is installing; once it is finished, an X icon will be present in the upper right of the tile. Take note of the path where this version of Python is installed, as it will be needed later.

Now that our desired version of Python has been installed, the Handler Mappings need to be updated. Go back to the Function App’s Platform Features page. Then select “Application settings.”

Application Settings

A new tab is shown in the center pane. Scroll to the bottom to the Handler Mappings section. A new mapping needs to be added: click “Add new handler mapping” and enter the relevant “fastCgi” handler mapping settings for the version of Python you installed. The path is shown on the tile of the version you installed. My handler settings were as follows:

fastCgi -> D:\home\python27\python.exe -> D:\home\python27\wfastcgi.py

Python fastCgi Handler Mapping

Scroll to the top of the Application Settings page and click Save.

You can test the version of Python being used by replacing the code in the run.py file with the following code:

import os
import json
import platform

# The experimental Python worker passes the request and response as file
# paths in the 'req' and 'res' environment variables
postreqdata = json.loads(open(os.environ['req']).read())
response = open(os.environ['res'], 'w')
response.write("Python version: {0}".format(platform.python_version()))
response.close()

When the above code is run, the output returns the Python version. My example returns the correct version from the site extension I installed.

pyVersionRun

Create Virtual Environment

Next, a virtual environment needs to be created. This is where the Python modules will be installed. Head back to the Kudu tool and click the “Debug console” dropdown and click CMD.

Kudu Powershell

At the top, you will see a directory structure that can be used for navigation. First, the virtual environment module needs to be installed, as it does not come bundled with the version of Python installed by the site extension. Run the following command: “python -m pip install virtualenv”.

Install Virtual Env

Now that the virtualenv module is installed, it is time to create a new virtual environment. Navigate to the following directory: “D:\home\site\wwwroot\{yourFunctionName}”. Then in the console type the following: “python -m virtualenv yourenv” where ‘yourenv’ will be the name of the virtual environment that you create.

Create Virtual Env

Once the virtual environment has been created, navigate to “yourenv\scripts” and run activate.bat. This activates your virtual environment and places your active console in it. You can tell it is active because the environment name precedes the path, as shown below.

Enter Virtual Env

You now have access to run python commands that allow you to install modules and configure your Python environment to your needs. 

Install Python Modules

Installing modules through pip is recommended. However, I ran into an issue where pip would not install a couple of modules I needed. I recommend attempting to install using pip first, as I did with ‘lxml’ below.

lxml Install

I have received an error while installing modules indicating the need for the vcvarsall.bat file, which is included in the Microsoft Visual C++ 9.0 package. If you get this error, you can manually download the “wheels” that contain the module you need to install. The best site I found for locating the official wheel files is www.pythonwheels.com. From there, find the module you need and select the wheel that matches your environment (2.7, 3.6, x86, x64, etc.). You also need to install the wheel module before you import these wheel files (python -m pip install wheel).

Now that wheel is installed and you have downloaded the correct .whl file for your module, you can simply drag and drop the .whl file from your desktop into the following folder: “D:\home\site\wwwroot\{yourFunctionName}\{yourenv}\Lib\site-packages.” It will unpack the .whl file automatically and make it available. 

Once you have installed all of your modules, run “pip freeze” to list the modules that are installed. I installed bs4, lxml, and requests; they naturally pulled in a few other modules as dependencies.

List Installed Python Modules

Import Modules Within Your Script

I know this has been long, but you’re almost done! The last thing to do is let Python know where your modules reside so it can correctly import them into your scripts. At the top of your script(s), enter the following code:

import sys, os.path
sys.path.append(os.path.abspath(os.path.join(os.path.dirname( __file__ ), 'yourenv/Lib/site-packages')))

Ensure you replace “yourenv” with whatever you chose to name your virtual environment. 

After that, your script will be able to import any Python module it needs and complete successfully.
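To see why the sys.path line works, here is a self-contained sketch (the module and folder names are hypothetical) that recreates the function-folder layout with a stand-in module and imports it the same way:

```python
import os
import sys
import tempfile

# Recreate the layout: {functionFolder}/yourenv/Lib/site-packages
func_dir = tempfile.mkdtemp()
pkg_dir = os.path.join(func_dir, 'yourenv', 'Lib', 'site-packages')
os.makedirs(pkg_dir)

# Stand-in for a module that pip (or a dropped-in wheel) would place there
with open(os.path.join(pkg_dir, 'mymodule.py'), 'w') as f:
    f.write("def greet():\n    return 'imported from site-packages'\n")

# The same append pattern used at the top of run.py
sys.path.append(os.path.abspath(os.path.join(func_dir, 'yourenv/Lib/site-packages')))

import mymodule
print(mymodule.greet())  # imported from site-packages
```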

June 1, 2017
by zach
2 Comments

Invalid Username or Password When Logging Into Embedded vRO

Granting user authentication from vRealize Automation (vRA) 7.2 to vRealize Orchestrator (vRO) is not as easy as it should be. I received an “Invalid Username or Password” error when logging into vRO, as shown below. My vRA environment was configured to use my home lab Active Directory (AD) domain without any issue. Next, I wanted to get my vRO appliance configured, so I logged into the vRO Control Center to configure authentication and other items. Since I am using the embedded vRO, the Authentication Provider is automatically set to vRealize Automation. The custom tenant was set, and I was able to populate the AD groups from the dropdown without any issues. Since I could see my AD groups, I didn’t think vRO would have any issue authenticating users within my selected AD group. I was mistaken.

A quick search across blogs and forums did not provide much help. I went to the vExpert Slack channel and hit another roadblock. A couple members told me to follow a blog post from vCOTeam to correctly configure the domain login. I had already tried this without any success. The channel said that is how it works now and could not see much else from the logs I provided. With a bit more searching, I found a blog post on Spas Kaloferov’s blog that was my key to finding the solution to this problem, twice.

Solution 1

The solution that worked in my home lab was referenced in his section on misconfiguration of the Identity Provider in vRA. He mentions changing the IdP Hostname to the vRA load balancer address. My vRA environment, however, contains no load balancers. I did notice that my IdP Hostname was not the vRA FQDN; it was set to the hostname with no domain suffix. After changing the IdP Hostname to the correct vRA FQDN, I was able to log in with my AD user account.

An IdP Hostname without the correct FQDN causes vRO to not authenticate.

Solution 2

While working with a client, I ran into this issue again. Immediately, I checked the IdP Hostname. This time, the IdP Hostname had the correct FQDN configured. Later, we accidentally discovered that the certificate that was generated by one of their team members had a misspelled FQDN for the vRA appliance and lacked another Subject Alternative Name (SAN). A new certificate was generated with all of the correct FQDNs and SANs required for our deployment. This proved to be the solution for their version of this issue.

In Conclusion

VMware needs to address this finicky configuration between vRA and vRO. There are too many variables that may cause this issue.

With the release of vRA/vRO 7.3, they changed the back-end authentication again and will probably eliminate this issue. However, they will create a new issue. They always do.

 

May 31, 2017
by zach
2 Comments

Errors Deploying Infoblox Appliances to vCenter 6.5

I want to learn more about integrating with Infoblox’s IPAM solution, as it is one of the leading IPAM solutions for medium to large companies. Getting experience should be as simple as deploying their DDI appliance into my home lab, but I discovered a couple of issues while trying to deploy their OVA into my environment. First, I am running vCenter and vSphere 6.5. When I requested an evaluation of their product, I was provided a link to an older vNIOS version (7.3.9). The following error was presented when I tried to deploy it:

Infoblox OVA failed checksum verification

If you look closely at the error, you will see it is failing a checksum for the Xen Server .ovf file. I can confirm I was downloading the VMware version, and the MD5 hash matched Infoblox’s published MD5 hash. I tried deploying through the vCenter Web Client from a local source first, and I also tried allowing vCenter to reach out to Infoblox and download the package by URL. After trying too many methods, I reached out to support and was told to download a newer version, specifically 8.0.6. No reason was provided for why the original package failed.
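For reference, the local checksum comparison itself is easy to script; this is a generic sketch (the filename and vendor hash in the usage comment are placeholders, not Infoblox’s actual values):

```python
import hashlib

def md5sum(path, chunk_size=65536):
    """Compute a file's MD5 in chunks so a multi-GB OVA never loads fully into memory."""
    digest = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()

# Usage (placeholder filename and hash):
# assert md5sum('nios-vmware.ova') == '<md5 published by the vendor>'
```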

Let’s Try a Newer Infoblox Version!

With a newer version in hand, a new deployment began. This time, a new error presented itself.

Infoblox-Known Issue

Issues detected with selected template. Details: – 17:3:SECTION_RESTRICTION: Section Product Section (Information about the installed software) not allowed on envelope.

But there is a workaround…

Infoblox’s support confirmed that this is a known issue that will be fixed in version 8.2.0. The workaround is to deploy the appliance directly to a host rather than through vCenter. I can confirm this works, and I now have Infoblox up and running in my environment. Even though this is a known issue, a quick search turned up no results for this error, and Infoblox’s Knowledge Base is also missing it. Therefore, I’m putting it out there in case anyone else comes across this issue.

April 24, 2017
by zach
0 comments

vRA 7 Server Deployment Fails After VM is Deployed From Template

Recently, I purchased enough equipment to complete a homelab environment. Everything went well until the last step of deploying a new VM through vRA 7.2. I asked a couple of colleagues what they thought, and they hadn’t seen it before. I searched VMTN and Google but didn’t find the exact cause of the issue, so I decided to get this out there in case someone else runs into it.

Issue

To set the stage, I have a small deployment of vRA 7.2 running in a nested environment. My first catalog item is a Windows Server 2012 R2 VM. The template was prepped and a customization specification was ready to be applied. Using just vCenter, I could deploy a VM from the template and use the customization specification to customize the guest successfully. However, when I attempted this process through vRA, I received the following error right after the clone completed.

The following component requests failed: vSphere_Machine_1. Request failed: Machine "servername":
CustomizeVM: Error getting property 'info' from managed object CustomizationSpecManager.

vra-cSpec-DeployError

I also received the following error in vCenter.

Set virtual machine custom value: A specified parameter was not correct: key

vcenter-cSpec-DeployError

I tried a few different things to resolve the issue, like creating a new customization spec, but everything I did pointed back to vRA trying to initiate the next step after the VM was deployed from the template.

Resolution

As I searched blogs and VMTN for answers, I discovered the following thread. It wasn’t the smoking gun, but it did point me in the right direction. It describes a permissions issue causing the error Danny saw, which happens to be the same error I experienced. Next, I took a look at the permissions granted to my svc_vRA account. It had full admin privileges at the data center level. Since this is my homelab, there’s no reason I can’t grant it more access, so I granted it admin privileges at the vCenter level. This change gives it access to the customization specifications, which live above the data center level. I kicked off a new deployment and received a successful deployment of a base Windows Server 2012 R2 VM.

Make sure the account you are using within vRA has sufficient permissions, and ensure they are granted in the correct location!

April 21, 2017
by zach
0 comments

Moving My Career AHEAD

As many of you already know, I joined AHEAD a couple of months ago. I started as a Senior Technical Architect on February 7th. This is my jump out of the customer space into consulting. I felt this was the best time in my career and in my personal life to make this move. I had become bored with the day-to-day activities within a customer environment. My last company had plenty of technology to work with, but it was just advancements of the same old stuff I had been using for years. A change was needed before I was completely burnt out on IT in general.

I had been in contact with AHEAD for some time, but the timing was not quite right. They reached out to me in January, and the ball was soon in full motion to get me on board. They wasted zero time getting me engaged with clients; I was on-site with a client in my first two weeks. In the two months since joining, I have been busy the entire time, not only learning how the consulting side works but also learning new methods and new technologies.

I am excited about my future career at AHEAD. When initially searching for companies I was willing to work for, AHEAD stood out because of the talent on staff. Two months in, the talent at AHEAD has surprised me even more. The best part is that everyone is willing to assist in whatever way they can; it is definitely a team atmosphere. I’m glad to be here and ready for the challenges ahead!

I will also be paying more attention to this blog. I already have four posts in queue resulting from issues or experiences that I have come across in the past two months.

 

April 12, 2017
by zach
3 Comments

vRA 7.2 Active Directory Policy Failing to Create New Computer Object

I love the new Active Directory Policy feature within vRealize Automation (vRA) 7.2. It allows easy management of Active Directory (AD) objects, like computer objects when a new VM is provisioned. I like this integration much better than the CCC plugin that was created for vRA 6.x a couple years ago. The flexibility of Active Directory Policies within vRA is highly desirable for most admins. It can also be fairly dynamic when paired with its custom property.

The Issue

Without much work, the Active Directory Policy configuration is quick and simple. However, I encountered a problem where the workflow within vRealize Orchestrator (vRO) could not create a new computer object during an event subscription lifecycle state. Unfortunately, the error isn’t very descriptive.

AD Object Creation Failure

With not much to go on, I decided to perform the same operation but with the regular AD workflows within the AD plugin in vRO’s library. I received the same error when using those workflows. Choosing a different OU to deploy to also resulted in an error.

The Solution

I changed the service account to a domain admin account and was met with a successful creation of an AD computer object. At that moment, I realized the original service account did not have proper rights to the OU in which I was trying to create/delete computer objects. It is an easy fix, but with so little detail in the error, it can be frustrating to troubleshoot.

Other than this user error, the Active Directory Policy integration works very well and is a must have for environments with Active Directory.

 

July 13, 2016
by zach
0 comments

vRA Could Not Create a SSL/TLS Secure Channel

Problem!

At the end of Monday, I noticed our vRA implementation was not provisioning new servers. A failure of a new machine request was reported two minutes after the submission/approval. I looked through the logs and found an error stating that the request could not create an SSL/TLS secure channel. Therefore, I performed my proper engineer duties and hit the interwebs for a solution.

vRA Couldn't Create SSL/TLS Secure Channel

Solution? Not So Fast.

Great! I found a VMware KB article (2123455) that describes my error verbatim. Scrolling down to the resolution, I found it is a communication issue between the DEM-Worker servers and vRO. VMware references a specific Microsoft patch (3061518) that, if installed on the DEM-W servers, needs to be removed. Therefore, I logged onto our DEM-W servers and found the patch was indeed installed. Unfortunately, I noticed it had been installed since August 9, 2015, which happened to be the day the servers were initially stood up. I was not sold on the idea that they had worked for 11 months and then all of a sudden quit working because of this patch.

Fixed!

I opened a case with VMware to look into it. A vRA support log bundle was generated and sent off for review. The support engineer asked me to remove the patch even though it had been working properly for 11 months. I found I could not directly remove it, as it wasn’t shown in the list of Windows updates that could be uninstalled. So I waited for another solution…

The next day, I was provided an update showing there was a roll-up update from Microsoft that may be the culprit: KB3161606. Sure enough, this patch had been installed on both DEM-W servers over the weekend. I uninstalled it and rebooted both servers. Success! IaaS server provisioning now completes without issues. The patch was pushed out by Microsoft in June. Hopefully, VMware updates their KB article to include KB3161606 alongside KB3061518.