Manually generating a certificate request for vRealize Automation in VMware Cloud Foundation

VMware Cloud Foundation has a really neat certificate authority integration feature that makes the management of certificates much easier than it is in a normal VMware environment. However, this integration has its limitations:

  • Only Microsoft Certificate Authority is supported
  • Only basic authentication with Microsoft Certificate Authority is supported – if you require Kerberos authentication then that doesn't work

Therefore, if you need to use certificates from a public certificate authority or require Kerberos authentication, generating the certificates is a more manual process. Thankfully it is not as manual as it would normally be, as there is no need to use OpenSSL.

SSH to SDDC Manager and log in as the vcf user

Run su and enter the root password when prompted

Navigate to the following directory (yes, 'operationsmanager' is the correct spelling):

/opt/vmware/vcf/operationsmanager/scripts/cli

Run the following command:

./generate_certificate.sh

Enter 1 to generate the CSR

Press Enter to accept the default resource type (vra)

Enter the information for the certificate request: country, state, etc.

Enter the VIP FQDN for vRA

Take care in the next section to add all of the subject alternative names correctly; it is very frustrating to reach the validation of the vRA install in SDDC Manager only to find that one of the FQDNs doesn't match an entry in the certificate

Once all of the names have been added, type 'done' as the final subject alternative name entry

Enter the file path to create the private key file (defaults to /tmp/private_key.pem)

Enter the file path to create the CSR file (defaults to /tmp/csr.pem)

Change the permissions on the files so that the vcf user can download them (the owner will be root). The commands below will grant (more than) the necessary permissions:

chmod 777 /tmp/private_key.pem
chmod 777 /tmp/csr.pem
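If you prefer not to open the files up to everyone, a more restrictive alternative is sketched below (assuming, as above, that the files will be downloaded as the vcf user; run these as root):

# Hand ownership of the files to the vcf user and restrict access to that user
chown vcf /tmp/private_key.pem /tmp/csr.pem
chmod 600 /tmp/private_key.pem /tmp/csr.pem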

Download the files from the SDDC Manager via the scp client of your choice

Use the CSR file to create the certificate using the ‘vmware’ template as detailed in the VMware Cloud Foundation documentation. The certificate should be downloaded in Base64 format as should the certificate chain.
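Once the certificate has been issued, it is worth checking that all of the expected subject alternative names are present before going any further. A minimal sketch using openssl is shown below, assuming the issued certificate has been saved locally as vra.cer (a hypothetical file name):

# Print the certificate details and show the subject alternative names
openssl x509 -in vra.cer -noout -text | grep -A1 "Subject Alternative Name"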

Using the text editor of your choice create a new file. This file should contain the certificate, any intermediate certificates and the root certificate in the following order:

  • vRA certificate
  • Intermediate certificate
  • Root certificate

The easiest way to tell which is which is by size: the vRA certificate will be the largest, the intermediate the next largest and the root certificate the smallest
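If you have each certificate saved as a separate Base64 (PEM) file, the combined file can also be built from the command line. The sketch below assumes hypothetical file names; substitute your own:

# Concatenate the certificates in the order vRA > intermediate > root
cat vra.cer intermediate.cer root.cer > vra-chain.pem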

Use the contents of this file during the deployment of vRA within SDDC Manager

Manually uploading product bundles to vRealize Lifecycle Manager

Under most circumstances product binaries for vRealize Lifecycle Manager can be downloaded directly (or via a proxy) from my.vmware.com. However, where internet connectivity is not available there is a method by which a local repository can be used. This post outlines the steps required to achieve this.

Firstly, download the relevant OVA file (one that is supported by vRLCM) from my.vmware.com

SSH to the vRLCM server and create a new directory in which the OVA files will be stored, for example:

mkdir /data/binaries/OVA

Upload the previously downloaded OVA files to this directory. The example below uses pscp to upload vRealize Log Insight 4.8 but other methods such as WinSCP are absolutely fine.

pscp c:\temp\ovafilename.ova root@FQDN:/data/binaries/OVA

Wait until the OVA file has uploaded successfully
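It is also worth confirming that the upload was not corrupted in transit. A hedged sketch is shown below, assuming sha256sum is available on the appliance and ovafilename.ova is the file uploaded above; compare the output against the checksum published on my.vmware.com:

# Calculate the checksum of the uploaded OVA
sha256sum /data/binaries/OVA/ovafilename.ova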

Log in to the UI of vRealize Lifecycle Manager, navigate to Settings > Product Support and click Add Binaries.

Ensure that 'local' is selected as the location type and then enter the path to the OVA directory that was created previously.

Click the Discover button, select the products that you want to add to vRLCM and then click Add.

The mapping of product binaries then takes place

Once complete, the product will be shown under Product Binaries and can then be used when deploying into an environment

Executing a vRO action via REST API

The ability to execute a vRO workflow or action via the API is useful if you are using vRA or vRO as part of an overall CI/CD pipeline. If a vRA catalog item uses vRO actions to filter or set values on a custom form, how can those values be resolved when the catalog item is requested via REST?

In the example below I have a simple action that takes two inputs:

  • firstName – Type:String
  • secondName – Type:String

In order to be able to execute a vRO action you need to submit basic auth credentials. This will require the user account to be a member of the vco-admins group in vRO. This can either be a local user or an Active Directory user if vRA has been configured to integrate with Active Directory

Executing the action via REST is a three-part process. Part 1 is to determine the ID of the action that you wish to call. Execute the following operation:

GET - https://vro-fqdn/vco/api/actions
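As a hedged example, the same call can be made with curl using basic authentication (the -k flag skips certificate validation for lab environments with self-signed certificates; replace vro-fqdn and the credentials with your own):

# List all vRO actions, requesting a JSON response
curl -k -u 'vro-user:password' -H "Accept: application/json" https://vro-fqdn/vco/api/actions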

The result will be a large array of actions that looks something like this:

{
     "link": [
         {
             "attributes": [
                 {
                     "value": "d7e461f6-b88f-4f63-a2cc-7f3c76d1bd19",
                     "name": "id"
                 },
                 {
                     "value": "isBlocking",
                     "name": "name"
                 },
                 {
                     "name": "description"
                 },
                 {
                     "value": "com.vmware.library.vcaccafe.workflow.subscription/isBlocking",
                     "name": "fqn"
                 },
                 {
                     "value": "0.0.0",
                     "name": "version"
                 }
             ],
             .......

You’ll need to process this array to search for the name of the action that you wish to execute.
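If you have jq available, the search can be scripted rather than done by eye. The sketch below is one way of doing it, using the executedByApi action from later in this post as the example action name:

# Find the id attribute of the action whose name attribute is 'executedByApi'
curl -sk -u 'vro-user:password' -H "Accept: application/json" https://vro-fqdn/vco/api/actions \
  | jq -r '.link[]
      | select(.attributes[] | select(.name == "name" and .value == "executedByApi"))
      | .attributes[] | select(.name == "id") | .value'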

Part 2 is to retrieve the inputs for the action which is achieved via the following REST operation:

GET - https://vro-fqdn/vco/api/actions/{actionid}

The result will look something like this:

{

    "href": "https://192.168.1.26:443/vco/api/actions/278a1622-1924-4f37-bb24-0c5d36346ae9/",
    "relations": {
        "link": [
            {
                "href": "https://192.168.1.26:443/vco/api/actions/com.cloudkindergarten.examples/executedByApi/",
                "rel": "alternate"
            },
            {
                "href": "https://192.168.1.26:443/vco/api/actions/278a1622-1924-4f37-bb24-0c5d36346ae9/permissions/",
                "rel": "permissions"
            },
            {
                "href": "https://192.168.1.26:443/vco/api/actions/278a1622-1924-4f37-bb24-0c5d36346ae9/executions/",
                "rel": "executions"
            }
        ]
    },
    "id": "278a1622-1924-4f37-bb24-0c5d36346ae9",
    "output-type": "string",
    "name": "executedByApi",
    "description": "",
    "version": "0.0.0",
    "fqn": "com.cloudkindergarten.examples/executedByApi",
    "script": "var fullName = (firstName + \" \" + secondName);\nreturn fullName;",
    "input-parameters": [
        {
            "description": "",
            "type": "string",
            "name": "firstName"
        },
        {
            "description": "",
            "type": "string",
            "name": "secondName"
        }
    ]
}

From this we can see that the action is expecting two input parameters, firstName and secondName, both of type string.

Part 3 is to execute this action. We do this by using the following body content

{"parameters":
 [
  {
   "value": {"string":{ "value": "Andy"}},
   "type": "string",
   "name": "firstName",
   "scope": "local"
  },
  {
   "value":{"string":{"value": "Davies"}},
   "type": "string",
   "name": "secondName",
   "scope": "local"
  }
 ]
}

This is then executed with the REST Operation:

POST https://{vro-fqdn}/vco/api/actions/{actionid}/executions/
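Putting it all together, a hedged curl example is shown below, assuming the body content above has been saved locally as body.json and {actionid} is the ID retrieved in part 1:

# Execute the action with the parameters defined in body.json
curl -k -u 'vro-user:password' -X POST \
  -H "Content-Type: application/json" -H "Accept: application/json" \
  -d @body.json \
  "https://vro-fqdn/vco/api/actions/{actionid}/executions/"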

The response will hopefully be the returned value from the action. In this case the full name of the person:

{
    "value": {
        "string": {
            "value": "Andy Davies"
        }
    },
    "type": "string"
}

The New VMware Workspace ONE and VMware Horizon…

The New VMware Workspace ONE and VMware Horizon Reference Architecture Incorporates Cloud, On-Prem, and Multi-site Design


The VMware Workspace ONE and VMware Horizon Reference Architecture is now available and is a must read for anyone considering, designing, or undertaking a digital workspace project using VMware Workspace ONE, VMware Horizon 7, or VMware Horizon Cloud Service on Microsoft Azure. The VMware Workspace ONE and VMware Horizon Reference Architecture guide provides a framework and guidance for architecting using Workspace ONE and Horizon, whether using […]


VMware Social Media Advocacy

Integrating vROps with vRO using webhook shim

Introduction

In an ideal world, running custom vRO workflows as a result of a triggered alert in vROps would be either an out of the box feature or (easily) possible via a vROps management pack.

There have been two incarnations of a vRO management pack for vROps and neither provides this out of the box. The first incarnation did allow it, but only with a lot of manual changes to the vROps adapter file, and from an initial inspection of the latest vRO management pack it would also be possible there – perhaps the subject of another blog post!

So, in the absence of such a (perfect) solution, the good news is that there is still a way to meet the challenge, and a number of blog posts have already been written on this subject, both by VMware staff and externally. However, whilst working with a customer I found that none of these resources captured all of the steps in a single, succinct post that didn't assume any previous knowledge.

The solution

A couple of years back, John Dias and Steve Flanders wrote a web hook shim that enables vROps and vRLI to interact not just with vRO but with a number of different endpoints. The web hook shim is a small Python application that listens for requests from vROps or vRLI and translates those requests into something that the intended endpoint can understand.

Install Photon OS

As VMware has its own Linux distribution, that is what I have used for this blog post, especially as it works really well as a container host. There are lots of articles on how to get an instance of Photon OS up and running quickly:

https://vmware.github.io/photon/

The easiest method is to use the OVA image, although you should note that there has been an issue with the hardware version 13 image, as outlined on William Lam's blog:

https://www.virtuallyghetto.com/2017/11/workarounds-for-deploying-photonos-2-0-on-vsphere-fusion-workstation.html

Configure Network Settings

Whilst you could leave the VM to use DHCP for network addressing, you'll most likely want to configure a static IP address. Details on how to do this can be found in the Photon OS administration guide:

https://github.com/vmware/photon/blob/master/docs/photon-admin-guide.md#managing-the-network-configuration

I ended up with a network configuration file as shown below:

Screen Shot 2018-07-28 at 13.02.23

Once the networking configuration has been set, run the following command to restart the network daemon:

systemctl restart systemd-networkd

Run the following command to validate that the IP address information has been set correctly

ip a

The result should be something similar to that shown below

Screen Shot 2018-07-28 at 13.07.13

Configure Hostname

To set the hostname of the appliance run the following command:

hostnamectl set-hostname abc

To check that is has been set correctly run the following command:

hostnamectl

The result should be something similar to that shown below:

Screen Shot 2018-07-28 at 13.13.01

Configure Docker

We will be running the web hook shim as a Docker container. Docker is already installed within the Photon OS image, but it is not configured to run by default. Docker also needs to be configured to start automatically.

Start Docker

Start Docker by running the following command:

systemctl start docker

Set Docker to start automatically

Set Docker to start automatically during system start by running the following command:

systemctl enable docker

Start the web hook shim container

The web hook shim is available on Docker Hub, so as long as the appliance can access the internet, getting it running is a simple process. Before we can get the link between vROps and vRO working we need to do some configuration of the web hook shim. Run the following command to start the Docker container:

docker run -it -p 5001:5001 --name {yourchoice} vmware/webhook-shims bash

This command runs the container interactively, sets the ports that the container will respond on, gives it a memorable name, pulls the container image from Docker hub and provides a bash shell to allow shell access within the container
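Note that because the container was started interactively, typing exit in its bash shell will stop it. If that happens, a hedged way to start it again and re-attach in one step is shown below ({yourchoice} is the container name given above):

# Restart the stopped container and attach to it interactively
docker start -ai {yourchoice}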

To check the status of the container run the following command:

docker ps -a

This will have an output similar to the following:

Screen Shot 2018-07-28 at 13.23.04

Configure the web hook shim

Attach to the container by running the following command:

docker attach vmwareshim

The various translation files for the different endpoints are in a directory called ‘loginsightwebhookdemo’:

cd loginsightwebhookdemo
vi vrealizeorchestrator.py

The first element to change is the hostname of the vRO endpoint to which alert notifications will be sent:

VROHOSTNAME = 'vrohostname'

If running an instance of vRO from a vRA appliance there is no need to include port information. If running the standalone version of vRO then you will need to include port 8281 in this section for example:

VROHOSTNAME = 'myvroserver.mydomain.local:8281'

Information around the various means of authentication can be found within the vrealizeorchestrator.py file. Amend the various components as necessary to achieve the desired authentication mechanism:

USENETRC = True
VROUSER = ''
VROPASS = ''
VROTOKEN = ''
VROHOK = ''

If running in an environment without trusted certificates, the following line needs to be set so that certificate checks are disabled:

VERIFY = False

The shim currently has an issue whereby two instances of the workflow are executed for each alert that is triggered. To fix this issue edit the vrealizeorchestrator.py file and alter the following line:

@app.route("/endpoint/vro/", methods=['PUT', 'POST'])

To:

@app.route("/endpoint/vro/", methods=['POST'])

Save the file and then exit vi.

Using .netrc credentials

If .netrc credentials are going to be used, it is easier to create them now, before starting the web hook shim, rather than having to stop the shim, create the credentials and then re-attach

To use netrc credentials, run the following commands:

cd ~
echo 'machine {vro-host-or-ip} login {vro-login-name} password {vro-password}' > .netrc
chmod 600 .netrc

Start the web hook shim

Once all of the configuration has been done it is time to start the web hook shim:

cd ..
./runserver.py

A couple of information messages will be displayed. To validate that the web hook shim is operating, open a web browser and enter the address as:

http://webhook-shim-ip-or-hostname:5001

You should see the following returned

Screen Shot 2018-07-28 at 13.47.11
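If a browser is not convenient, the same check can be made from any machine that can reach the appliance, for example (a hedged sketch; substitute the real IP or hostname):

# Request the web hook shim's index page
curl http://webhook-shim-ip-or-hostname:5001/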

Useful commands

To stop the web hook shim, use the following key combination:

CTRL-C

You can restart the web hook shim later via this command:

./runserver.py

To detach from the Docker container, whilst leaving the web hook shim running, use the following key sequence:

CTRL-P, CTRL-Q

Setup the master workflow in vRO

Before vROps can be configured we need a workflow that will receive the alerts generated. In John Dias’s series of posts he discusses a vRO workflow package that can be imported and used to test the functionality. That package can be found at the following location:

https://github.com/vmw-loginsight/webhook-shims/tree/master/samples

When configuring vROps we are going to specify the ID of a workflow that will be run. This workflow needs to have a single input called 'alertId' with a type of 'string'. For the purposes of this blog post I'm going to keep things really simple and merely prove the ability to run a workflow based on a vROps alert. As a result my workflow is really basic: it has a single input and just writes that alertId to the system log.

The ID of the workflow can be found on its general tab, under ID. Copy this and save it for later.

Screen Shot 2018-07-28 at 14.05.03

vROps configuration

At a high level the configuration steps within vROps consist of the following:

  1. Setup an outbound instance
  2. Configure notification settings
  3. Setup a test alert

Setup an outbound instance

The first step is to create the outbound instance using the OOTB Rest Notification Plugin. You get to the notification settings via the following sequence:

Administration > Management > Notification Settings

Configure the outbound plugin with the following parameters:

  • Plugin Type: Rest Notification Plugin
  • Instance Name: vmwareshim
  • Endpoint: http://webhookshimip:5001/endpoint/vro/workflowId
  • Username: none
  • Password: none (but you must put something in this box)
  • Content Type: application/json
  • Certificate Thumbprint: none
  • Connection Timeout: 20

To validate the settings, press the test button. The test will fail within vROps with the following error displayed:

Screen Shot 2018-07-28 at 14.19.50

However, you will be able to see that the integration is working by looking within the Photon OS appliance. If you have an SSH session open and you're attached to the web hook shim container then you should see the following log output:

screen-shot-2018-07-28-at-14-22-00.png

You can also check within vRO, where there should be an execution of the specified workflow

Screen Shot 2018-07-28 at 14.40.53

Configure notification settings

vROps can be configured to send notifications to multiple targets and use advanced filtering to determine where different alerts should be sent. For the purposes of this blog post I am going to configure a notification rule to send all alerts to vRO via the web hook shim. Configure the rule by navigating to the following page:

Alerts > Alert Settings > Notification Settings

The rule should look like the following image:

Screen Shot 2018-07-28 at 14.50.29

Setup a test alert

The final step is to have an alert triggered. If you’re performing this in an environment with active ESXi hosts / Virtual Machines, then the likelihood is that you will get alerts triggered anyway. If you are just testing this in a lab, configure an alert with symptoms that have such low thresholds that they will inevitably get triggered.

Screen Shot 2018-07-28 at 14.57.10

Once the alert has been triggered you should be able to see successful executions of your workflow in vRO. In my case I’m just logging the alertId but that is all that you need to be able to extract a lot of information – although there will be multiple REST calls required to vROps in order to get that information.

John Dias included some initial steps that could be used to retrieve information about the affected object. The basic flow is as follows:

  • alertId – retrieving the alert from vROps will also return the alertDefinitionId and the resourceId of the affected object
  • resourceId – can be used to retrieve the name of the object, its properties, related objects and triggered symptoms
  • alertDefinitionId – can be used to find out more prescriptive information about the alert definition itself and can be used to match the symptoms of the alert definition to the triggered symptoms on the resource

Additional Resources

There are a large number of blog posts around this subject, which may provide additional information and/or context:

https://blogs.vmware.com/management/2017/03/webhook-shims-now-available-on-docker-hub.html

https://blogs.vmware.com/management/2017/02/self-healing-datacenter.html

https://blogs.vmware.com/management/2017/01/vrealize-webhooks-infinite-integrations.html?src=so_5703fb3d92c20&cid=70134000001M5td

Configuring vRO to send emails for gmail

Overview

Google allows you to relay up to 2,000 emails per rolling 24-hour period, which can be useful when developing workflows in vRO. The steps below are written for standard Gmail accounts. If you have a G Suite account then the steps are subtly different in that the SMTP host name is 'smtp-relay.gmail.com' rather than 'smtp.gmail.com'. The per-day message limit is also higher with a G Suite account: up to 10,000 messages per rolling 24-hour period. Lastly, when using vRO with a G Suite account, security needs to be relaxed within G Suite to permit sending emails from less secure apps.

Required Steps

  1. Start the vRO Control Center service (it is disabled by default): /etc/init.d/vco-configurator start
  2. Browse to the vRO Control Center: https://vro-server-hostname:8283/vco-controlcenter/#/
  3. Go to Manage > Certificates
  4. Click Import > Import from URL
  5. Enter the URL as 'smtp.gmail.com:587' or 'smtp-relay.gmail.com:587'
  6. Review the certificate and then click Import
  7. Once the certificate is imported you can use the OOTB workflow called 'Send notification'. The settings used will be as follows:
  • smtpHost: smtp.gmail.com
  • smtpPort: 587
  • username: your Gmail email address
  • password: your Gmail password
  • useSsl: No
  • useStartTls: Yes
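If the certificate import or the subsequent connection gives any trouble, the STARTTLS handshake and the certificate presented by Google can be checked from any machine with openssl installed and outbound access on port 587 (a hedged sketch):

# Open a STARTTLS SMTP session to Gmail and display the certificate chain
openssl s_client -connect smtp.gmail.com:587 -starttls smtp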


vRealize Orchestrator: Get VM by Name in Large environments

The Challenge

In very large environments, the standard (out of the box) methods of retrieving a specific virtual machine object by name in vRealize Orchestrator are not particularly efficient and can take a long time to return a particular object. This is of particular concern when using a vRO action to retrieve an external value in a vRA XaaS form. The default timeout for value retrieval is 30 seconds, and although this can be extended (see https://kb.vmware.com/s/article/2144872), the aim should be to retrieve all values in the fastest time possible as opposed to increasing the timeout to a large value.

So, the challenge was set: design a method of retrieving any VM object within the vRA timeout.

The Options

Out of the box, vRO provides a method of retrieving all virtual machines:

allVMs = VcPlugin.getAllVirtualMachines()

This can be looped through until a specific virtual machine name is found. In small environments this method is absolutely fine as the response time will not be an issue

This can be extended (and made faster) either by including a name:

allVMs = VcPlugin.getAllVirtualMachines(null, vmname)

or by using xpath:

allVMs = VcPlugin.getAllVirtualMachines(null, "xpath:name=\'" + vmname + "\'")

The latter method has issues with case sensitive queries, and using an xpath translation is really slow!

In the environment I was working in, the first query took many minutes to return data; the latter two took between 45 and 60 seconds, which was much better, but still not quick enough to return the VM object to vRA in less than 30 seconds, which was the ultimate goal.

So an alternative solution had to be found…

The Solution

The aim was to get to a situation whereby the smallest possible number of VMs could be retrieved by vRO, which could then be used to search for the exact VM object. In this way the query made to vCenter is much smaller, resulting in a significantly faster response time. Although this information can itself be queried from vCenter, in environments with more than one vCenter this would still be an iterative process which may result in lengthy delays in returning information.

GetVMbyNameviavROps

Fortunately there is a database in many vSphere environments that contains a combined view of the world: vRealize Operations Manager. Each virtual machine object has a number of properties that allow the parent objects of the virtual machine to be readily identified:

  • summary|parentCluster
  • summary|parentHost

vROps also has an API that can be used to retrieve these properties once the internal vROps ID of the object has been determined. The API calls used are:

  • https://hostname/suite-api/api/resources?name={vmName}&resourceKind=VirtualMachine – the object identifier is included in the response
  • https://hostname/suite-api/api/resources/{objectId}/properties – the response is queried for the value of the specified property (a hedged curl sketch follows below)
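As a hedged illustration of those two calls, using curl with basic authentication against a vROps host with a self-signed certificate (replace the hostname, credentials and placeholders with your own):

# Look up the vROps object identifier for a virtual machine by name
curl -sk -u 'vrops-user:password' -H "Accept: application/json" \
  "https://hostname/suite-api/api/resources?name={vmName}&resourceKind=VirtualMachine"

# Retrieve the properties (including summary|parentCluster and summary|parentHost) for that object
curl -sk -u 'vrops-user:password' -H "Accept: application/json" \
  "https://hostname/suite-api/api/resources/{objectId}/properties"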

Unfortunately, this only gets us halfway. Although we now have the cluster and the host that the virtual machine is running on, we need to convert those names into objects that vRO can use.

To speed things up, a cache was created that contains all of the vSphere Clusters in the various vCenter environments. This cache is queried to match the name of the cluster to the cluster object. The cache is updated via a scheduled workflow.

The host object is now retrieved by querying the hosts in the cluster looking for a match for the host retrieved from vROps. This part could be skipped altogether by using a cache containing all ESXi hosts. This was not done in the environment I was working in as the number of hosts would have been very large (> 5000) and the impact of having that many attributes in a configuration element was felt to be excessive.

The final stage is to match the virtual machines of the identified host with the name originally entered. The result is the virtual machine object.

The vRO Package

The package below includes all of the elements needed. Once the package has been imported there is a little bit of configuration that needs to be done:

  1. Add vROps host(s) as REST endpoints (Library > Configuration > Add a REST host). The name given to the REST host will be the one used in the ‘Get VM by Name via vROps’ workflow that can be used to test the action. Use basic authentication and configure certificate handling as appropriate
  2. Edit the resource configuration 'GetVMbyName/vROpsHosts'. Edit the 'vROpsHosts' attribute and add each configured vROps host into the array
  3. Edit the resource configuration 'GetVMbyName/allVCSDKConnections'. Edit the 'vcSDKConnections' attribute and add each vCenter SDK connection as appropriate
  4. Run the 'Create Cluster Cache' workflow and then validate that the 'GetVMbyName/ClusterCache' resource configuration contains all expected clusters

The ‘Get VM by Name via vROps’ workflow can now be run. Enter the name of the virtual machine to search for and the name of the vROps host to use

Screen Shot 2018-05-01 at 20.55.15

The output can be viewed either via the log:

Screen Shot 2018-05-01 at 20.56.21

Or by viewing the variables. A positive result will include a virtual machine object in the ‘vm’ variable

Screen Shot 2018-05-01 at 20.58.30

The test workflow requires a REST host name to be entered, although this could be set via a field on a vRA form, or additional code could be written to choose the REST host based on some condition.

Download Package:

GetVMbyNameviavROps