Exploring the vRealize Automation 7.0 Event Broker – Part 1

Abstract

One of the most prominent additions to vRealize Automation 7.0, aside from the converged blueprints, is a feature called the Event Broker. This service provides a uniform and robust way of subscribing to and responding to various events originating from vRA. It allows you to plug in actions and checks throughout a machine life cycle, but it goes further than that. With the Event Broker, you can subscribe to other events as well, such as blueprint lifecycle events, approval events and system log events.

This article aims to explore the Event Broker and to show a few examples of how it may be used.

Event Broker architecture

The design of the Event Broker is, in essence, a classic publish/subscribe architecture. In other words, the service acts as an event distribution point where producers of events can publish them and consumers can subscribe to specific event streams. Clients typically subscribe to a category of events known as a "topic". Events are often filtered even further using conditions, e.g. subscribing to events only from a certain part of the life cycle or only for a certain type of object. Unlike a traditional publish/subscribe architecture, subscribers of the Event Service can also control the underlying workflow using "blocking events", which we will discuss in more depth in upcoming articles.

Under the hood, the Event Broker is implemented using a persistent message queue, but most users are probably going to interact with it using vRealize Orchestrator (vRO). When an event that matches the topic and condition for a subscription is triggered, a corresponding vRO workflow is started.

Each topic has a schema that describes the structure of the data carried by the event. For example, a machine provisioning event carries data about the request, such as the name of the user making the request, the type of machine, amount of CPU and RAM, etc.

It’s important to remember that event subscriptions will trigger on any object that’s applicable to the topic. In vRA 6.x, you would typically tie a subscription to a specific blueprint, but with the Event Broker, you would get any event for any machine from any blueprint. In order to restrict that to specific blueprints or machine types, you will have to use a conditional subscription. While there is a slight learning curve involved, the end result is a much more flexible and efficient event mechanism.

Taking the Event Broker for a spin

Enough of theory and architecture for now. Let’s take it for a spin and see if we can get it to work!

A Hello World example

First, let’s just see if we can get the Event Broker to trigger a workflow on any event during a VM provisioning. Let’s create a simple vRO workflow that looks like this:

Screen Shot 2016-01-07 at 10.27.27 AM
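
In case the screenshot is hard to make out, the body of the scriptable task is nothing more than a single log statement along these lines (a sketch; the exact message is up to you):

// Write a fixed message to the vRO system log whenever the workflow runs
System.log("Hello World! The Event Broker triggered this workflow.");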

Our workflow takes no parameters and just prints a string to the vRO system log. Let's save it and go back to vRA. In vRA, we go to Administration->Events->Subscriptions and click the "New" button (the plus). This is what we'll see:

Screen Shot 2016-01-07 at 10.34.49 AM

As we discussed earlier, an event topic is just a category of events you can subscribe to. Since we want to trigger our vRO workflow on provisioning events, we select "Machine provisioning" from the list of topics. Once we select the topic, we'll see its schema, i.e. a description of the data carried by the event. As you can see from the screenshot, we now have access to all the data about the request and the machine being requested. Let's click the Next button and go to the next screen!

Screen Shot 2016-01-07 at 10.37.22 AM

Here we can choose between subscribing to all events or doing it based on a condition. A machine provisioning triggers multiple events: when the machine is built, when software is provisioned, when it's turned on, and so on. Normally we'd add some conditions to pick exactly the events we want, but in this case, let's just go with everything. Let's go to the next screen!

Screen Shot 2016-01-07 at 10.40.17 AM

This dialog lets us pick the vRO workflow to run. We're going to pick the "Hello World" workflow we just created. Let's just click "Next" and "Finish" from here and we'll have a subscription called "Hello World", since it gets the workflow name by default.

Screen Shot 2016-01-07 at 10.43.01 AM

The last thing we need to do is to select the subscription and click Publish. Like most objects in vRA, subscriptions are initially saved in a draft state and need to be activated before they kick in. Now we're ready to test our workflow! Let's go request a machine and wait for it to complete! Once the provisioning has completed, we can go to vRO to check on our workflow. It should now have a bunch of tokens (executions) and look something like this:

Screen Shot 2016-01-07 at 10.52.25 AM

Why so many executions? Remember that the provisioning process generates an event for each state it goes through, and there are a lot of states, so you'll see a lot of events. In fact, there are two events generated per state: one just before it goes into the state and one just after it leaves it. We call these pre-events and post-events.

Congratulations! We’ve created our first event subscription!

Where are the parameters?

So far, our event subscription is pretty useless, since all it can do is print a static string to the log. We don't know anything about what kind of event we received or which object it was triggered for. To build something more interesting than just a "Hello World", we'll need access to the event parameters. But where are they?

Let’s click on one of the workflow tokens in vRO and select the variables tab!

Screen Shot 2016-01-07 at 10.58.28 AM

Looks interesting! So it appears that all the variables from the event schema were actually sent to the vRO workflow. But how can we access them? That turns out to be very simple! Just create an input parameter to your vRO workflow with the same name and type as the event variable.

Let’s add the input parameters “lifecycleState” and “machine” to our workflow! This should give us information on what state we’re in and the machine that triggered the event. Let’s also change the code of our scriptable task a bit to look like this:

Screen Shot 2016-01-07 at 11.04.59 AM
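
If you can't read the screenshot, the scriptable task now does something roughly like this (a sketch; the exact log wording in the original may differ):

// Print the lifecycle state and the machine data we received with the event
System.log("Lifecycle state: " + lifecycleState);
System.log("Machine: " + machine);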

Each time we get an event, we’re going to print information on what lifecycle state we’re in and what machine we’re provisioning. Provisioning a machine should get you a log record that looks something like this:

Screen Shot 2016-01-07 at 11.09.49 AM

The state information deserves a closer look. It’s made up of two variables: A phase and a state. The state simply corresponds to the machine states you may be familiar with from IaaS. “VMPSMasterWorkflow32.BuildingMachine” means that the machine is in the initial building stage. The “phase” variable says “PRE” in this example, which means that we caught this event just before the initial build happened. Had it said “POST”, we would have gotten it right after the initial build was done.

And, by the way, those lifecycleState and machine variables are encoded as JSON strings. If you want to manipulate them as associative arrays/dictionaries instead, just call e.g. JSON.parse(lifecycleState). We’ll be doing that in some of the more advanced examples.
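
As a quick sketch (the field names below follow the machine provisioning schema shown earlier; check your own payload if they differ):

// Turn the JSON-encoded strings into regular JavaScript objects
var state = JSON.parse(lifecycleState);
var vm = JSON.parse(machine);

// "phase" is PRE or POST, "state" is e.g. VMPSMasterWorkflow32.BuildingMachine
System.log("Phase: " + state.phase + ", state: " + state.state);
System.log("Machine name: " + vm.name);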

Adding a condition

At this point, we have a semi-interesting example of an event subscription. But it’s triggering on every single event for every single VM being provisioned. In a real life scenario, this wouldn’t be very nice to your vRO instance. So let’s dial things down a bit by introducing a condition to our subscription.

To do this, we need to go back to Administration->Events->Subscriptions and click on our "Hello World" subscription to edit it. Then we go to the "Conditions" tab and select "Run based on conditions".

Screen Shot 2016-01-07 at 11.22.21 AM

The condition can be any test on any field, or a combination of several fields. In our case, we’re just going to select the lifecycle state name.

Screen Shot 2016-01-07 at 11.22.49 AM

Let's do a simple "equals" operation and test against a static value. The drop-down allows us to select from a list of predefined values. In this case, we want to trigger only if we're in the "BuildingMachine" state.

We now save the subscription by clicking "Finish" and test it by provisioning another VM. This time, we should only see two executions of our vRO workflow: the PRE and POST events for "BuildingMachine".

Conclusion

This is the first installment in a series of articles on the Event Broker in vRA 7.0 and is intended to get you familiarized with the new functionality. While the examples given aren’t useful for real-life scenarios, they might serve as templates for more realistic workflows.

In the next installment (coming soon) of this series we are going to look at more useful examples, such as an auditing workflow imposing hard limits on resource usage. If that sounds interesting, I recommend you add this blog to the list of blogs you follow!

Day two operations on multi machine blueprints in vRealize Automation 6.2

Introduction

If you've ever tried to add a custom day two operation to a multi-machine blueprint, you must have hit the same dead end as I did. There's simply no object type to tie the action to. As I'm sure you know if you're into day two operations, the object type of a typical day two operation is VC:VirtualMachine. But that clearly wouldn't work for a multi-machine deployment, since it acts on a set of machines rather than a single one.

In this article, I’m describing a simple technique to solve this problem and provide a framework for day two operations on multi-machine deployments.

If you are just interested in getting it to work and don’t have time to read about the design, just skip ahead to “Putting it all together” below!

A quick recap of custom day two operations

A day two operation is simply an action that you can take on an existing resource, such as a running virtual machine. For a VM, they typically include power on, power off, reconfigure, destroy, etc. All of the basic and essential day two operations for standard objects, such as VMs and multi-machine deployments, come out of the box with vRealize Automation. But sometimes you want to add your own operations. Suppose, for example, that your organization uses some proprietary backup solution and you want to allow users to request a backup from the vRealize Automation UI. You could do this by adding a custom day two operation.

The core of a custom day two operation is just a vRealize Orchestrator workflow. As a workflow designer, you have a lot of freedom when it comes to building a day two operation. There's really just one rule you have to follow: the workflow needs to accept a parameter that represents the object it was executed on. So if you're building a workflow that's used as a day two operation on virtual machines, your workflow must have a parameter of the type VC:VirtualMachine.
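
For example, a hypothetical "Request backup" day two workflow would declare an input parameter, say vm, of type VC:VirtualMachine, and a scriptable task inside it could then refer to the machine through that parameter (a minimal sketch):

// 'vm' is the workflow input parameter of type VC:VirtualMachine.
// A real workflow would call out to the backup solution here; we just log the target.
System.log("Backup requested for virtual machine: " + vm.name);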

The problem with multi-machine deployments

The problem is simply this: There is no type in vRealize Orchestrator that represents multi-machine deployments. So it’s impossible to build a workflow that takes the right kind of input parameter, and since this is a strict requirement for day two operation workflows, you’re going to hit a dead end at that point.

Why that is has to do with the design of vRealize Automation. The concept of a multi-machine deployment lives in the IaaS portion of the code, whereas the custom day two operations live in the core vRealize Automation code, and the mapping between them isn't exposed to the user. The good news is that this issue is fixed with the blueprint redesign in vRealize Automation 7.0. But what about those of us who still live in the 6.x world?

The (surprisingly simple) solution

As of vRealize Automation 6.1, something called resource mappings were introduced. These are simply vRealize Orchestrator workflows or actions that convert from a native vRealize Automation type to a type that vRealize Orchestrator can handle. As you may expect, there are a handful of these included by default, and not surprisingly, the one that converts from the vRealize Automation representation of a virtual machine to a VC:VirtualMachine is one of the more prominent ones.

When you write a resource conversion workflow or action, you’re getting all the properties of the resource in a Properties object from vRealize Automation. It is then your responsibility to write the code that builds an inventory object from these values.

But what about a multi-machine deployment? Which inventory object type do we map that one to? It turns out that the IaaS representation of a machine is a perfect candidate for this. This object type, called vCAC:VirtualMachine, happens to be able to represent multi-machine deployments as well. What's even better is that there's already an API function for getting hold of a vCAC:VirtualMachine, and we have all the data we need to do it.

Here’s the code needed to do this:


// Look up the IaaS host from the configuration bundle
var host = System.getModule("net.virtualviking.mmday2").getvCACHost();
// Build the identifier used to find the virtual machine entity in IaaS
var identifiers = { VirtualMachineID: properties["machineId"] };
// Query the IaaS entity model for the machine...
var machine = vCACEntityManager.readModelEntity(host.id, "ManagementModelEntities.svc", "VirtualMachines", identifiers, null);
// ...and return the corresponding inventory object
return machine.getInventoryObject();

What’s going on here? Well, first we get hold of our IaaS host. Next we build an identifier we’re going to use to search for a virtual machine in IaaS. Then we run the query and finally we return the corresponding inventory object. That’s really all there’s to it in terms of code.

You can implement this either as a workflow or an action in vRealize Orchestrator. I chose to do it as an action, since it runs slightly faster.

Putting it all together

  1. Download and install the package. See the end of this article for a download link!
  2. Select the “Design” perspective and go to the configurations tab.
  3. Edit the “net.virtualviking.mmday2” configuration bundle.
  4. Set the vCACHost property to point to your IaaS service (must be registered first).
    Screen Shot 2015-11-13 at 3.51.11 PM
  5. In vRealize Automation, log in as a tenant admin and go to Advanced Services->Custom Resources.
  6. Create a new custom resource.
    1. Enter orchestrator type vCACCAFE:Catalog
    2. Enter name “Catalog Resource”.
    3. (Optional) enter a description and version.
    4. Click Next.
      Screen Shot 2015-11-13 at 3.39.31 PM
    5. Leave the default values on the Details Form page.
    6. Click Next.
    7. Leave the defaults on the Where Used form.
    8. Click Finish.
  7. Go to Advanced Services->Resource Mappings.
  8. Create a new Resource Mapping.
    1. Name it “Multi-Machine Service”.
    2. Enter the Catalog Resource Type "Multi-Machine Service".
    3. Enter the Orchestrator Type "vCAC:VirtualMachine".
    4. Select "Mapping Script action" and navigate to "net.virtualviking.mmday2" and "convertMultimachine".
      Screen Shot 2015-11-13 at 3.44.03 PM
    5. Select “Add”.

You should now be able to create day two operations on multi-machine deployments. To test it, you can use the “Print Multimachine Details” workflow provided with the package.

Please reach out to me in the comments if you have questions!

Downloadable components

The vRealize Orchestrator package can be downloaded from here.

Using ServiceNow as a front end for vRealize Automation

Background

Today, ServiceNow has become something of a de-facto standard for self-service portals and ticket tracking systems. It is being used in virtually every industry and by all sizes of organizations. While vRealize Automation (vRA) offers its own self-service portal with a wide range of features, it is sometimes desirable to allow users to request virtual machines and other resources from a common ServiceNow portal, while carrying out the actual provisioning tasks using vRA. This combines the benefits of a single request portal with the power and governance of vRA.

This article describes how ServiceNow can be tied into vRA using a couple of simple workflows.

Direct vRealize Automation API vs. vRealize Orchestrator

One of the first design decisions I had to make was how to expose vRA to ServiceNow. I could, of course, call the vRA API directly from ServiceNow. Another alternative would be to encapsulate the vRA API calls in a vRealize Orchestrator (vRO) workflow. I opted for the latter, since it makes the ServiceNow code a bit simpler and leverages the vRA plugin in vRO.

Implementing the vRO workflow

To make the ServiceNow workflow simpler, I wanted to make a single vRO workflow that handled the entire provisioning process. This way, I only have to make one API call from ServiceNow. These are the basic steps the workflow needs to accomplish:

  1. Check parameters against blueprint, reject invalid parameters and fill in defaults for ones that are missing.
  2. Look up blueprint and request provisioning.
  3. Wait for completion and check outcome (success or failure).

To avoid having the code break due to small differences in the API, I used the vRA plugin to make all calls, rather than calling the APIs directly.

Here is what the workflow looks like:

Screen Shot 2015-07-09 at 12.41.15 PM

The workflow is pretty straightforward and uses mostly pre-existing components. The "Prepare Properties" script is really the only code in the workflow. All it does is examine which parameters were supplied and provide the blueprint defaults for the rest.
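
As a rough sketch of that idea (assuming the supplied values and the blueprint defaults are available as Properties objects named requestProperties and blueprintDefaults; the actual script in the workflow differs in its details):

// Hypothetical sketch: start from the blueprint defaults...
var merged = new Properties();
for each (var key in blueprintDefaults.keys) {
    merged.put(key, blueprintDefaults.get(key));
}
// ...then overlay whatever the caller actually supplied
for each (var key in requestProperties.keys) {
    if (requestProperties.get(key) != null) {
        merged.put(key, requestProperties.get(key));
    }
}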

Calling vRO via REST

In order for ServiceNow to be able to make a call to vRO, we need to register a REST call. In vRO, the https://server:8281/vco/api/workflows endpoint gives you a list of workflows. If you know the UUID of a workflow, you can address it directly by doing an HTTP GET against the URL https://server:8281/vco/api/workflows/{UUID}. Since the UUID of a workflow is permanent and doesn't change under any circumstances, we can safely just copy it from vRO and use it directly in the URL.

Under the workflow, there are several subsections of linked information, such as the executions of each workflow. We can check on previously started workflows by issuing a GET against the executions URL. Consequently, issuing a POST against the URL https://server:8281/vco/api/workflows/{UUID}/executions creates a new execution by running the specified workflow. Any parameters to the workflow can be sent as a JSON document.
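
The body follows the vRO REST API's parameter format. A request passing a single string parameter might look roughly like this (the parameter name and value here are just placeholders):

{
  "parameters": [
    {
      "name": "image",
      "type": "string",
      "value": { "string": { "value": "centos-7-small" } }
    }
  ]
}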

Successfully starting a workflow immediately returns the HTTP status code 202 (Accepted) and sets the Location header to a URL where we can check on the completion of the workflow. Note that the workflow has not finished running at this point. All we got was an acknowledgement that vRO has successfully started the workflow.

To check for the status of the workflow, we need to issue a GET against the URL returned in the “Location” header. That URL looks something like this: https://server:8281/vco/api/workflows/{UUID}/executions/{token} where “token” is a unique identifier for the workflow execution.
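
The JSON returned from that URL includes, among other things, a state field that tells you whether the run is still in progress or has finished ("running", "completed", "failed", and so on); on a failure it also carries the exception details that the ServiceNow script further down extracts. Stripped down to the field we care about, a response looks something like:

{
  "state": "completed"
}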

Defining the REST message in ServiceNow

Let’s put it all together!

First we need to specify the REST message. We're including the workflow UUID in the URL. You can obtain the workflow UUID from the workflow properties page in vRO. Note that the ID is permanent and will remain the same even if the workflow is imported, exported and renamed, so it is safe to hard-code it into the URL. You will probably have to specify a MID server, since your vRO instance is likely to be behind a firewall.

rest

Next, we are going to focus on the POST message. Remember, if you POST to workflow executions, you start a new execution of the workflow. Note that we accept four parameters: the name of the OS image (blueprint name), the number of CPUs, memory and storage. We use these to construct a JSON message in the body. This will pass the parameters from ServiceNow to the vRO workflow.

rest-post

Finally, we need to specify the GET message. This is fairly straightforward. All we need to do is to make sure we pass the execution ID at the end of the URL. This is done using the “token” variable.

rest-get

Implementing the ServiceNow workflow

On the ServiceNow side, we need to create a workflow that carries out the provisioning action, i.e. calls vRO to kick off the provisioning workflow and checks on its outcome. Normally, you would most likely have approvals and other actions tied to the workflow, but in this example, we're simply focusing on the provisioning part.

Here is our ServiceNow workflow.

workflow

Let’s walk through the steps:

  1. Run a script to send a request to vRO for starting the provisioning workflow. We're pulling the parameters from the current item request and passing them in a REST call.
    gs.log("image: " + current.variables.vm_os_image);
    gs.log("CPUs: " + current.variables.vm_numCPUs);
    gs.log("Memory: " + current.variables.vm_memoryMB);
    gs.log("Storage: " + current.variables.vm_storageGB);
    var r = new RESTMessage('Create VM in vRA', 'post');
    r.setStringParameter('image', current.variables.vm_os_image);
    r.setStringParameter("numCPUs", current.variables.vm_numCPUs);
    r.setStringParameter("memoryMB", current.variables.vm_memoryMB);
    r.setStringParameter("storageGB", current.variables.vm_storageGB);
    
    // Send the request
    //
    var response = r.execute();
    if(response == null) 
        response = r.getResponse(30000);
    gs.log("Status: " + response.getStatusCode());
    gs.log("Workflow token: " + response.getHeader("Location"));
    gs.log("Response: " + response);
    var location = response.getHeader("Location");
    var arr = /.*executions\/(.*)\//.exec(location);
    workflow.scratchpad.vroToken = arr[1];
    gs.log("Set vRO token to: " + workflow.scratchpad.vroToken);
    activity.result = response.getStatusCode();
  2. Send a HTTP GET to obtain the status of the workflow execution. Due to some quirks in the MID server, we need to check for the result twice. We’re parsing the return body and extracting the result code from vRO.
    gs.log("vRO token is: " + workflow.scratchpad.vroToken);
    var r = new RESTMessage('Create VM in vRA', 'get');
    r.setStringParameter('token', workflow.scratchpad.vroToken);
    var response = r.execute();
    if(response == null) 
        response = r.getResponse(30000);
    var body = response.getBody();
    gs.log("Response code: " + response.getStatusCode());
    gs.log("Body: " + body);
    
    // Parse return value
    //
    var parser = new JSONParser();
    var jsonResult = parser.parse(body);
    var status = jsonResult.state;
    if(status == "failed")
        workflow.scratchpad.vraError = jsonResult.content_exception;
    gs.log("Workflow status: " + status);
    workflow.scratchpad.vraWorkflowStatus = status;
    activity.result = status;

The final step: Creating a Catalog Item

Once we have the workflows created, it’s time to tie them into a Catalog Item. This is pretty straightforward. We have to make sure we call the workflow we just created and we need to define the variables used by the workflow. You’ll recall we’re using the variables vm_os_image, vm_numCPUs, vm_memoryMB and vm_storageGB.

This is what the catalog item looks like:

Screen Shot 2015-06-18 at 5.48.57 PM

In this case, I chose to create a specific service for this OS image, but you could just as well create a generic one and have a drop-down for selecting the image. There’s a variety of input options you can choose between for the variables. I picked a numeric scale for the CPUs and multiple choices for the other parameters.

Screen Shot 2015-06-18 at 6.00.58 PM

When specifying the variables, you should make sure they have the same name as what you're using in your workflow. This is what my definition of vm_numCPUs looks like, for example:

Screen Shot 2015-06-18 at 6.04.14 PM

Finally, when we have everything defined, we should get a Catalog Item that looks something like this.

Screen Shot 2015-06-18 at 6.13.29 PM

Downloads

Complete vCO Workflows: Click here!