Exploring the vRealize Automation 7.0 Event Broker – Part 3

Abstract

In this article, we will explore the approval topic of the Event Broker. This allows you to plug in custom approval policies and even delegate approvals to external systems.

Approval policies in vRealize Automation allow administrators to insert gating rules into a provisioning workflow. Typically, approval policies work by notifying an approver through email, allowing them to approve or reject a provisioning request by clicking on a link or by interacting with the vRealize Automation console. In this article, we will show you how to extend an approval policy by plugging in a custom workflow. Specifically, we will explore how to leverage ServiceNow to approve or reject a request.

Approval Policies in vRealize Automation

Approval policies offer a highly flexible method of adding gating rules to a provisioning workflow. Out of the box, approval policies can be conditional to force approvals only when a user exceeds a certain memory limit, for example. In fact, conditional approvals can be controlled by any aspect of a request, making them extremely flexible.

Also, the fact that approval policies are tied to catalog items through entitlements makes it very easy to have different policies depending on where you sit in an organization.

However, the most powerful feature around approvals that was added in vRealize Automation 7.0 is the ability to use the Event Broker to plug in external approval mechanisms. In previous versions, you were limited to manual approvals through emails. With the new Event Broker architecture, you can implement very advanced automatic approvals as well as delegate approvals to an external incident management system. The latter is what we will focus on in this article.

Integrating with ServiceNow

Why did we pick ServiceNow for this example? For a number of reasons, actually. First, it has a developer program which allows you full access to test instances free of charge. Second, it’s very widely used. And third, it has a very straightforward API.

The goal

Our goal is very simple: Whenever a provisioning workflow in vRealize Automation needs an approval, we will call out to ServiceNow to make a request for approval, then block the provisioning workflow until the request is approved in ServiceNow. Also, we are going to supply some basic information about the request to ServiceNow that the approver can base their decision on.

Using a Service Catalog Request

There are several ways we could do this. One is to create an incident in ServiceNow and wait for it to be resolved. In this article, however, we are going to use the approval mechanism built into ServiceNow, and a Service Catalog Request is better suited for that.

A Service Catalog Request is what ServiceNow uses to handle a user request for some service, be it a cellphone, a laptop or some service from human resources. These requests have an underlying workflow and an approval policy attached to them, making them suitable vehicles for approvals coming from vRealize Automation. In a real-life scenario, we would have designed a form and a workflow in ServiceNow for this specific type of request, but in this example, we are just going to use the generic constructs.

Screen Shot 2016-03-15 at 12.51.06 PM

Implementation

We implement this in three steps: First, we build a workflow that can create a Service Catalog Request in ServiceNow and wait for it to be approved. Next, we create an event subscription based on that workflow, and finally we link it to an approval policy. Let’s walk through the steps.

vRealize Orchestrator workflow

Let me start by pointing out that this is NOT the way you should be doing it in production. We create a Service Catalog Request in ServiceNow and then check it once every minute to see if it has been approved. Since there may be hundreds of outstanding approvals in a production system, this approach is very inefficient and could generate massive amounts of calls to ServiceNow. A better solution would be to have ServiceNow push an event back to vRealize Orchestrator. This is doable, but beyond the scope of this article. We may address how to receive events from ServiceNow in an upcoming article.

This is what our vRealize Orchestrator workflow looks like:

Screen Shot 2016-03-15 at 1.43.44 PM

The workflow is relatively straightforward. First, we extract the parameters and do some other basic initializations. Next, we call ServiceNow to create a Service Catalog Request. After that, we enter a loop that checks the approval status of the request and exits with the appropriate return code once we’ve determined whether the request was approved or rejected. Keep in mind that we may spend days in this loop, which is why we need to find a better way of doing this in production (see upcoming article).
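
To make the loop concrete, here is a rough sketch of its logic as a single script. This is a simplification; the real workflow builds this from separate workflow elements, and checkApprovalStatus is a hypothetical helper standing in for the ServiceNow status check shown later:

// incidentId holds the ID of the Service Catalog Request we created
var maxChecks = 24 * 60; // assumption: one check per minute for up to 24 hours
for (var i = 0; i < maxChecks; i++) {
    var approval = checkApprovalStatus(incidentId); // hypothetical helper
    if (approval == "approved" || approval == "rejected") {
        break; // outcome determined; exit and return the result to vRA
    }
    System.sleep(60000); // sleep one minute between checks
}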

Inputs and outputs

An approval workflow takes four input parameters that are all of type Properties (a name-value pair map).

  • fieldNames – The keys represent the names of the fields defined for this approval. The values represent the field types.
  • fieldValues – The values of the fields specified in fieldNames.
  • requestInfo – A map of request properties.
  • sourceInfo – A map of properties from the source of the request (e.g. internal references to the request).

The contents of fieldNames and fieldValues are determined by the fields that are exposed by the approval policy. More about this later.

The fields map will always contain a field called businessJustification which can be changed by an approval workflow to send comments back to vRealize Automation. This is typically used to specify a reason for an approval or rejection.
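
As a hedged illustration, reading and writing these maps from a scriptable task might look like this (the field key below is a placeholder; the actual keys depend on the fields your approval policy exposes):

// fieldValues is the Properties input described above
var requestedMemory = fieldValues.get("Machine.Memory.Size"); // placeholder key
// Update businessJustification to send a comment back to vRealize Automation
fieldValues.put("businessJustification", "Approved automatically by external system");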

Creating the Catalog Item Request in ServiceNow

Working with objects in ServiceNow is very straightforward. Every table in their data model has a REST interface for performing basic CRUD (Create/Read/Update/Delete) operations. All we have to do is post a new request to the table. In a production implementation, we’d probably provide a bit more data, such as actual links to the requester and a link to the catalog item we’re requesting. But for a simple demo, this will do just fine.

// Build the payload for the new Service Catalog Request
var payload = {};
payload["short_description"] = shortDescription;
payload["description"] = longDescription;
payload["active"] = "true";
// Bogus price to bypass the default auto-approval of items under $1000
payload["price"] = "1001";
payload["special_instructions"] = "vRA-Request";
// POST to the sc_request table to create the record
var rq = snowHost.createRequest("POST", "/api/now/table/sc_request",
     JSON.stringify(payload));
rq.contentType = "application/json";
rq.setHeader("accept", "application/json");
var response = rq.execute();
// The Table API returns 201 (Created) on success
if(response.statusCode != 201)
    throw "ServiceNow error. Status code: " + response.statusCode + " details: "
        + response.contentAsString;
var json = JSON.parse(response.contentAsString);
// Remember the sys_id so we can check on the request's approval status later
incidentId = json.result.sys_id;

Let’s examine the code! First, we build a payload based on the description and short description. We’re also setting a bogus price for the item, which is there to bypass one of the default rules in ServiceNow development instances that auto-approves anything that costs below $1000. We could (and should) have disabled that rule on the ServiceNow side, but the idea is that this should be able to run against an unmodified ServiceNow development instance. Again, this code is purely for demo purposes.

Next, we make the request itself, a POST to the sc_request table, which creates a new request for us. ServiceNow will return the created record, and the very last line of the script picks up the ID of the request. We will use this in the next step as we check on the status of the request.
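
For reference, the Table API wraps the created record in a result object, so the 201 response body looks roughly like this (trimmed, with illustrative values):

{
  "result": {
    "sys_id": "9d385017c611228701d22104cc95c371",
    "number": "REQ0010001",
    "approval": "requested"
  }
}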

Checking the approval status in ServiceNow

Once we have a Service Catalog Request, we periodically check on its status. Again, this needs to be replaced by a push-based solution in a production scenario, but the current approach will work fine for demo purposes.


// Look up the Service Catalog Request we created earlier by its sys_id
var rq = snowHost.createRequest("GET", "/api/now/table/sc_request/" + incidentId,
     null);
rq.setHeader("accept", "application/json");
var response = rq.execute();
if(response.statusCode != 200)
     throw "ServiceNow error. Status code: " + response.statusCode +
     " details: " + response.contentAsString;
var json = JSON.parse(response.contentAsString);
// The approval field holds the approval state, e.g. "requested", "approved" or "rejected"
approval = json.result.approval.toString();

All this script does is check on the Service Catalog Request we just created in ServiceNow to see if it has been approved. The rest of the workflow just deals with sleeping between checks and keeping track of how many times we should do this before timing out.

Creating the subscription

Once we have the workflow in place, it’s time to register a subscription through the Event Broker. Doing so is pretty straightforward. As an administrator, we go to Administration->Events->Subscriptions and click on “New”.

Screen Shot 2016-03-24 at 12.23.59 PM

Select “Pre-Approval” and click “Next”. A pre-approval subscription is called before any provisioning activities take place. If we wanted to require an approval after provisioning is done, but before the provisioned asset is released to its user, we could use a “Post-Approval” subscription instead.

Screen Shot 2016-03-24 at 12.24.14 PM

Let’s run this for all events. If necessary, you can create filters here, for example applying the policy only to certain types of assets.

Click “Next”.

Screen Shot 2016-03-24 at 12.24.44 PM

Select the “Delegate approval to ServiceNow” workflow. This is the vRO workflow we discussed earlier in this article.

Screen Shot 2016-03-24 at 12.25.48 PM

Optionally, change the name and give it a description. You can also assign a timeout to the subscription. In this case, we’re stating that if the request hasn’t been approved within 24 hours, we automatically cancel it.

Click “Finish”.

We now have a subscription, but if you were to start deploying VMs you’d notice that our workflow is never called. That’s because we’re missing an Approval Policy, which is the final piece of this puzzle.

Creating an approval policy

An approval policy in vRealize Automation simply states when an approval is needed and how to handle it. It can also deal with approvals that require multiple steps and multiple roles. In this example, we’re going to walk through a simple approval policy that requires a single approval done through a call to ServiceNow.

To add an approval policy, log in as a tenant administrator and select Administration->Approval Policies.

Screen Shot 2016-03-24 at 3.14.44 PM

Select “Service Catalog – Catalog Item Request”. This makes the approval policy applicable to any request. If you want to narrow it down to a specific catalog item type, you can select that here.

Screen Shot 2016-03-24 at 3.15.52 PM

Fill out the name and description. Leave the policy as a draft for now.

Screen Shot 2016-03-24 at 3.16.12 PM

Add a new approval level by clicking the plus sign next to “Levels”. Notice how you can have multiple levels that are executed in series or parallel.

Create a new level and give it a name. Select “Use event subscription” to make it run the pre-approval event subscription we created earlier. You may also enter conditions here if you don’t want the approval to be required for every request. In this example, we make the approval policy “Always required”.

Verify that everything looks OK, change the status of the policy to “Active” and click OK to save it.

Attaching the policy to an entitlement

Approvals are controlled by approval policies and entitlements. For an approval policy to be considered, it has to be attached to an entitlement. This gives you great flexibility when designing approvals. Not only can you define how the approval will behave, but you can also define within what scope of catalog items and users it will apply. For example, you can set up a rule saying “This service is available to all developers, but junior developers will have to obtain an approval from their manager first”.

In our case, we will just set it up to apply to all developers trying to deploy a certain type of VM.

To edit entitlements, log in as a tenant administrator and select Administration->Catalog Management->Entitlements. In our case, we will use an existing entitlement named “Developers” and add our approval policy to a certain service.

Screen Shot 2016-03-24 at 3.32.35 PM

In this case, we are applying the approval policy to an entire service, which will cause it to be applied to every catalog item in that service. You may also attach the approval policy to an individual catalog item. Click the drop-down next to the service and select “Modify Policy”.

Screen Shot 2016-03-24 at 3.35.59 PM

Select the approval policy we just created. Click OK on the dialog box. Click OK on the entitlement form to save the entitlement.

Congratulations! You’ve just configured a custom approval policy that calls out to ServiceNow!

Testing the ServiceNow-based approval

Now that we’ve created and configured our custom approval, it’s time to test it. To do that, let’s just request one of the catalog items from the service we attached the approval policy to.

Screen Shot 2016-03-24 at 3.40.03 PM

Request the catalog item and fill out any information needed. Add something for the description and reason for request, as these will be transferred to ServiceNow.

Screen Shot 2016-03-24 at 3.41.23 PM

If we go to the “Requests” tab, we should now see the request stuck in the “Pending Approval” state. It will not be released until we’ve approved it in ServiceNow (or the 24-hour timeout we specified has elapsed).

Screen Shot 2016-03-24 at 3.45.13 PM

Let’s approve it by clicking the Approve button and then wait a minute or two for the vRealize Automation workflow to catch up.

Screen Shot 2016-03-24 at 3.47.24 PM

As you can see, the provisioning process has resumed and our VM is well on its way to being created.

Conclusion

Although this example is not intended for production use, it serves to illustrate the flexibility of the Event Broker and approval engine, as well as the ease with which external components can be integrated into the lifecycle of a catalog item in vRealize Automation.

As an engineer, I am personally extremely excited about the possibilities the Event Broker opens up, and I will continue to come up with interesting ways it can be used.

Downloadable content

vRealize Orchestrator Workflow

Exploring the vRealize Automation 7.0 Event Broker – Part 1

Abstract

One of the most prominent additions to vRealize Automation 7.0, aside from the converged blueprints, is a feature called the Event Broker. This service provides a uniform and robust way of subscribing to and responding to various events originating from vRA. It allows you to plug in actions and checks throughout a machine life cycle, but it goes further than that. With the Event Broker, you can subscribe to other events as well, such as blueprint lifecycle events, approval events and system log events.

This article aims to explore the Event Broker and to show a few examples of how it may be used.

Event Broker architecture

The design of the Event Broker is, in essence, a classic publish/subscribe architecture. In other words, the service acts as an event distribution point where producers of events can publish them and consumers can subscribe to specific event streams. Clients subscribe to a category of events, known as a “topic”. Events are often filtered even further using conditions, e.g. subscribing to events only from a certain part of the life cycle or only for certain types of objects. Unlike a traditional publish/subscribe architecture, subscribers of the Event Service can also control the underlying workflow using “blocking events”, which we will discuss in more depth in upcoming articles.

Under the hood, the Event Broker is implemented using a persistent message queue, but most users are probably going to interact with it using vRealize Orchestrator (vRO). When an event that matches the topic and condition for a subscription is triggered, a corresponding vRO workflow is started.

Each topic has a schema that describes the structure of the data carried by the event. For example, a machine provisioning event carries data about the request, such as the name of the user making the request, the type of machine, amount of CPU and RAM, etc.

It’s important to remember that event subscriptions will trigger on any object that’s applicable to the topic. In vRA 6.x, you would typically tie a subscription to a specific blueprint, but with the Event Broker, you would get any event for any machine from any blueprint. In order to restrict that to specific blueprints or machine types, you will have to use a conditional subscription. While there is a slight learning curve involved, the end result is a much more flexible and efficient event mechanism.

Taking the Event Broker for a spin

Enough theory and architecture for now. Let’s take it for a spin and see if we can get it to work!

A Hello World example

First, let’s just see if we can get the Event Broker to trigger a workflow on any event during a VM provisioning. Let’s create a simple vRO workflow that looks like this:

Screen Shot 2016-01-07 at 10.27.27 AM

Our workflow takes no parameters and just prints a string to the vRO system log. Let’s save it and go back to vRA. In vRA, we go to Administration->Events->Subscriptions and click the “New” button (the plus). This is what we’ll see:

Screen Shot 2016-01-07 at 10.34.49 AM

As we discussed earlier, an event topic is just a category of events you can subscribe to. Since we want to trigger our vRO workflow on provisioning events, we select “Machine provisioning” from the list of topics. Once we select the topic, we’ll see the schema for it, i.e. a description of the data carried by the event. As you can see from the screen shot, we now have access to all the data about the request and the machine being requested. Let’s click the Next button and go to the next screen!

Screen Shot 2016-01-07 at 10.37.22 AM

Here we can choose between subscribing to all events or subscribing based on a condition. A machine provisioning triggers multiple events: when the machine is built, when software is provisioned, when it’s turned on, and so on. Normally we’d add some conditions to pick exactly the events we want, but in this case, let’s just go with everything. On to the next screen!

Screen Shot 2016-01-07 at 10.40.17 AM

This dialog lets us pick the vRO workflow to run. We’re going to pick the “Hello World” workflow we just created. Let’s just click “Next” and “Finish” from here, and we’ll have a subscription called “Hello World”, since it gets the workflow name by default.

Screen Shot 2016-01-07 at 10.43.01 AM

The last thing we need to do is to select the subscription and click publish. Like most objects in vRA, subscriptions are initially saved in a draft state and need to be activated before they kick in. Now we’re ready to test our workflow! Let’s go request a machine and wait for it to complete! Once the provisioning has completed, we can go to vRO to check on our workflow. It should now have a bunch of tokens (executions) and look something like this:

Screen Shot 2016-01-07 at 10.52.25 AM

Why so many executions? Remember that the provisioning process generates an event per state it goes through, so you’ll see a lot of events, since there are a lot of states to go through. In fact, there are two events generated per state: one just before the machine enters the state and one just after it leaves it. We call these pre-events and post-events.

Congratulations! We’ve created our first event subscription!

Where are the parameters?

So far, our event subscription is pretty useless, since all it can do is print a static string to the log. We don’t know anything about what kind of event we received or which object it was triggered for. To build something more interesting than a “Hello World”, we’ll need access to the event parameters. But where are they?

Let’s click on one of the workflow tokens in vRO and select the variables tab!

Screen Shot 2016-01-07 at 10.58.28 AM

Looks interesting! So it appears that all the variables from the event schema were actually sent to the vRO workflow. But how can we access them? That turns out to be very simple! Just create an input parameter to your vRO workflow with the same name and type as the event variable.

Let’s add the input parameters “lifecycleState” and “machine” to our workflow! This should give us information on what state we’re in and the machine that triggered the event. Let’s also change the code of our scriptable task a bit to look like this:

Screen Shot 2016-01-07 at 11.04.59 AM
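
In essence, the task just logs the two new input parameters; a minimal approximation of the code in the screenshot would be something like:

// Log the raw event data we received from the Event Broker
System.log("Lifecycle state: " + lifecycleState);
System.log("Machine: " + machine);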

Each time we get an event, we’re going to print information on what lifecycle state we’re in and what machine we’re provisioning. Provisioning a machine should get you a log record that looks something like this:

Screen Shot 2016-01-07 at 11.09.49 AM

The state information deserves a closer look. It’s made up of two variables: A phase and a state. The state simply corresponds to the machine states you may be familiar with from IaaS. “VMPSMasterWorkflow32.BuildingMachine” means that the machine is in the initial building stage. The “phase” variable says “PRE” in this example, which means that we caught this event just before the initial build happened. Had it said “POST”, we would have gotten it right after the initial build was done.

And, by the way, those lifecycleState and machine variables are encoded as JSON strings. If you want to manipulate them as associative arrays/dictionaries instead, just call e.g. JSON.parse(lifecycleState). We’ll be doing that in some of the more advanced examples.
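
For example, a small sketch (the exact field names follow the event schema shown when you created the subscription):

// Parse the JSON strings into objects we can work with directly
var state = JSON.parse(lifecycleState);
var vm = JSON.parse(machine);
System.log("Phase: " + state.phase + ", state: " + state.state);
System.log("Machine name: " + vm.name);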

Adding a condition

At this point, we have a semi-interesting example of an event subscription. But it’s triggering on every single event for every single VM being provisioned. In a real life scenario, this wouldn’t be very nice to your vRO instance. So let’s dial things down a bit by introducing a condition to our subscription.

To do this, we need to go back to Administration->Events->Subscriptions and click on our “Hello World” subscription to edit it. Then we go to the “Conditions” tab and select “Run based on conditions”.

Screen Shot 2016-01-07 at 11.22.21 AM

The condition can be any test on any field, or a combination of several fields. In our case, we’re just going to select the lifecycle state name.

Screen Shot 2016-01-07 at 11.22.49 AM

Let’s do a simple “equals” operation and test against a static value. The drop-down allows us to select from a list of predefined values. In this case, we want to trigger only if we’re in the “BuildingMachine” state.

Now we save the subscription by clicking “Finish” and test it by provisioning another VM. This time, we should only see two executions of our vRO workflow: the PRE and POST events for “BuildingMachine”.

Conclusion

This is the first installment in a series of articles on the Event Broker in vRA 7.0 and is intended to get you familiarized with the new functionality. While the examples given aren’t useful for real-life scenarios, they might serve as templates for more realistic workflows.

In the next installment (coming soon) of this series we are going to look at more useful examples, such as an auditing workflow imposing hard limits on resource usage. If that sounds interesting, I recommend you add this blog to the list of blogs you follow!

Day two operations on multi machine blueprints in vRealize Automation 6.2

Introduction

If you’ve ever tried to add a custom day two operation to a multi-machine blueprint, you must have hit the same dead end as I did. There’s simply no object type to tie the action to. As I’m sure you know if you’re into day two operations, the object type of a typical day two operation is VC:VirtualMachine. But that clearly wouldn’t work for a multi-machine deployment, since those act on a set of machines rather than a single one.

In this article, I’m describing a simple technique to solve this problem and provide a framework for day two operations on multi-machine deployments.

If you are just interested in getting it to work and don’t have time to read about the design, just skip ahead to “Putting it all together” below!

A quick recap of custom day two operations

A day two operation is simply an action that you can take on an existing resource, such as a running virtual machine. For a VM, they typically include power on, power off, reconfigure, destroy, etc. All of the basic and essential day two operations for standard objects, such as VMs and multi-machine deployments, come out of the box with vRealize Automation. But sometimes you want to add your own operations. Suppose, for example, that your organization uses some proprietary backup solution and you want to allow users to request a backup from the vRealize Automation UI. You could do this by adding a custom day two operation.

The core of a custom day two operation is just a vRealize Orchestrator workflow. As a workflow designer, you have a lot of freedom when it comes to building a day two operation. There’s really just one rule you have to follow: the workflow needs to accept a parameter that represents the object it was executed on. So if you’re building a workflow that’s used as a day two operation on virtual machines, your workflow must have a parameter of the type VC:VirtualMachine.
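
As a tiny illustration, a scriptable task inside such a workflow can then use that parameter directly (a sketch, assuming a workflow input named vm of type VC:VirtualMachine):

// 'vm' is the VC:VirtualMachine this day two operation was invoked on
System.log("Running custom backup for VM: " + vm.name);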

The problem with multi-machine deployments

The problem is simply this: There is no type in vRealize Orchestrator that represents multi-machine deployments. So it’s impossible to build a workflow that takes the right kind of input parameter, and since this is a strict requirement for day two operation workflows, you’re going to hit a dead end at that point.

Why that is has to do with the design of vRealize Automation. The concept of a multi-machine deployment lives in the IaaS portion of the code, whereas custom day two operations live in the core vRealize Automation code, and the mapping between them isn’t exposed to the user. The good news is that this issue is fixed with the blueprint redesign in vRealize Automation 7.0. But what about those of us who still live in the 6.x world?

The (surprisingly simple) solution

As of vRealize Automation 6.1, something called resource mappings was introduced. These are simply vRealize Orchestrator workflows or actions that convert from a native vRealize Automation type to a type that vRealize Orchestrator can handle. As you may expect, there are a handful of these included by default, and not surprisingly, the one that converts from the vRealize Automation representation of a virtual machine to a VC:VirtualMachine is one of the more prominent ones.

When you write a resource conversion workflow or action, you’re getting all the properties of the resource in a Properties object from vRealize Automation. It is then your responsibility to write the code that builds an inventory object from these values.

But what about a multi-machine deployment? Which inventory object type do we map that one to? It turns out that the IaaS representation of a machine is a perfect candidate for this. This object type, called vCAC:VirtualMachine, happens to be able to represent multi-machine deployments as well. What’s even better is that there’s already an API function for getting hold of a vCAC:VirtualMachine, and we have all the data we need to do it.

Here’s the code needed to do this:


// Get the IaaS host configured for this package
var host = System.getModule("net.virtualviking.mmday2").getvCACHost();
// Build the identifier used to look up the machine in the IaaS model
var identifiers = { VirtualMachineID:properties["machineId"] };
// Query the IaaS entity manager for the matching VirtualMachines entity
machine = vCACEntityManager.readModelEntity(host.id, "ManagementModelEntities.svc", "VirtualMachines", identifiers, null);
// Return the corresponding vCAC:VirtualMachine inventory object
return machine.getInventoryObject();


What’s going on here? Well, first we get hold of our IaaS host. Next, we build an identifier we’re going to use to search for a virtual machine in IaaS. Then we run the query and finally return the corresponding inventory object. That’s really all there is to it in terms of code.

You can implement this either as a workflow or an action in vRealize Orchestrator. I chose to do it as an action, since it runs slightly faster.

Putting it all together

  1. Download and install the package. See the end of this article for a download link!
  2. Select the “Design” perspective and go to the configurations tab.
  3. Edit the “net.virtualviking.mmday2” configuration bundle.
  4. Set the vCACHost property to point to your IaaS service (must be registered first).
    Screen Shot 2015-11-13 at 3.51.11 PM
  5. In vRealize Automation, log in as a tenant admin and go to Advanced Services->Custom Resources.
  6. Create a new custom resource.
    1. Enter orchestrator type vCACCAFE:Catalog
    2. Enter name “Catalog Resource”.
    3. (Optional) enter a description and version.
    4. Click next
      Screen Shot 2015-11-13 at 3.39.31 PM
    5. Leave the default values on the Details Form page.
    6. Click next.
    7. Leave the defaults on the Where Used form.
    8. Click Finish.
  7. Go to Advanced Services->Resource Mappings.
  8. Create a new Resource Mapping.
    1. Name it “Multi-Machine Service”.
    2. Enter Catalog Resource Type “Multi-Machine Service”.
    3. Enter Orchestrator Type “vCAC:VirtualMachine”.
    4. Select “Mapping Script action” and navigate to “net.virtualviking.mmday2” and “convertMultimachine”.
      Screen Shot 2015-11-13 at 3.44.03 PM
    5. Select “Add”.

You should now be able to create day two operations on multi-machine deployments. To test it, you can use the “Print Multimachine Details” workflow provided with the package.

Please reach out to me in the comments if you have questions!

Downloadable components

The vRealize Orchestrator package can be downloaded from here.

Using ServiceNow as a front end for vRealize Automation

Background

Today, ServiceNow has become something of a de-facto standard for self-service portals and ticket tracking systems. It is being used in virtually every industry and by all sizes of organizations. While vRealize Automation (vRA) offers its own self-service portal with a wide range of features, it is sometimes desirable to allow users to request virtual machines and other resources from a common ServiceNow portal, while carrying out the actual provisioning tasks using vRA. This combines the benefits of a single request portal with the power and governance of vRA.

This article describes how ServiceNow can be tied into vRA using a couple of simple workflows.

Direct vRealize Automation API vs. vRealize Orchestrator

One of the first design decisions I had to make was how to expose vRA to ServiceNow. I could, of course, call the vRA API directly from ServiceNow. Another alternative would be to encapsulate the vRA API calls in a vRealize Orchestrator (vRO) workflow. I opted for the latter, since it makes the ServiceNow code a bit simpler and leverages the vRA plugin in vRO.

Implementing the vRO workflow

To make the ServiceNow workflow simpler, I wanted to make a single vRO workflow that handled the entire provisioning process. This way, I only have to make one API call from ServiceNow. These are the basic steps the workflow needs to accomplish:

  1. Check the parameters against the blueprint, reject invalid parameters and fill in defaults for any that are missing.
  2. Look up the blueprint and request provisioning.
  3. Wait for completion and check the outcome (success or failure).

To avoid having the code break due to small differences in the API, I used the vRA plugin to make all calls, rather than calling the APIs directly.

Here is what the workflow looks like:

Screen Shot 2015-07-09 at 12.41.15 PM

The workflow is pretty straightforward and uses mostly pre-existing components. The “Prepare Properties” script is really the only code in the workflow. All it does is examine which parameters were supplied and provide the blueprint defaults for the rest.

Calling vRO via REST

In order for ServiceNow to be able to make a call to vRO, we need to register a REST call. In vRO, the https://server:8281/vco/api/workflows endpoint gives you a list of workflows. If you know the UUID of a workflow, you can address it directly by doing an HTTP GET against the URL https://server:8281/vco/api/workflows/{UUID}. Since the UUID of a workflow is permanent and doesn’t change under any circumstances, we can safely copy it from vRO and use it directly in the URL.

Under the workflow, there are several subsections of linked information, such as the executions of each workflow. We can check on previously started workflows by issuing a GET against the executions URL. Consequently, issuing a POST against the URL https://server:8281/vco/api/workflows/{UUID}/executions creates a new execution by running the specified workflow. Any parameters to the workflow can be sent as a JSON document.
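
For reference, the parameters are wrapped in the JSON structure the vRO API expects; here is a sketch with a single illustrative string parameter, built as a JavaScript object and serialized before sending (the parameter name must match a workflow input):

// Body for POST .../executions; send JSON.stringify(body)
var body = {
    parameters: [
        { name: "image",   // must match a workflow input name
          type: "string",
          value: { string: { value: "centos-7" } } }
    ]
};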

Successfully starting a workflow immediately returns the HTTP status code 202 (Accepted) and sets the Location header to a URL where we can check on the completion of the workflow. Note that the workflow has not finished running at this point. All we got was an acknowledgement that vRO has successfully started the workflow.

To check for the status of the workflow, we need to issue a GET against the URL returned in the “Location” header. That URL looks something like this: https://server:8281/vco/api/workflows/{UUID}/executions/{token} where “token” is a unique identifier for the workflow execution.

Defining the REST message in ServiceNow

Let’s put it all together!

First we need to specify the REST message. We’re including the workflow UUID in the URL. You can obtain the workflow UUID from the workflow properties page in vRO. Note that the ID is permanent and will remain the same even if the workflow is imported, exported or renamed, so it is safe to hard-code it into the URL. You will probably have to specify a MID server, since your vRO instance is likely to be behind a firewall.

rest

Next, we are going to focus on the POST message. Remember, if you POST to workflow executions, you start a new execution of the workflow. Note that we accept four parameters: the name of the OS image (blueprint name), the number of CPUs, memory and storage. We use these to construct a JSON message in the body. This will pass the parameters from ServiceNow to the vRO workflow.

rest-post

Finally, we need to specify the GET message. This is fairly straightforward. All we need to do is make sure we pass the execution ID at the end of the URL. This is done using the “token” variable.

rest-get

Implementing the ServiceNow workflow

On the ServiceNow side, we need to create a workflow that carries out the provisioning action, i.e. calls vRO to start the provisioning workflow. Normally, you would most likely have approvals and other actions tied to the workflow, but in this example, we’re simply focusing on the provisioning part.

Here is our ServiceNow workflow.

workflow

Let’s walk through the steps:

  1. Run a script to send a request to vRO for starting the provisioning workflow. We’re pulling the parameters from the current item request and passing them in a REST call.
    gs.log("image: " + current.variables.vm_os_image);
    gs.log("CPUs: " + current.variables.vm_numCPUs);
    gs.log("Memory: " + current.variables.vm_memoryMB);
    gs.log("Storage: " + current.variables.vm_storageGB);
    var r = new RESTMessage('Create VM in vRA', 'post');
    r.setStringParameter('image', current.variables.vm_os_image);
    r.setStringParameter("numCPUs", current.variables.vm_numCPUs);
    r.setStringParameter("memoryMB", current.variables.vm_memoryMB);
    r.setStringParameter("storageGB", current.variables.vm_storageGB);
    
    // Send the request
    //
    var response = r.execute();
    if(response == null) 
        response = r.getResponse(30000);
    gs.log("Status: " + response.getStatusCode());
    gs.log("Workflow token: " + response.getHeader("Location"));
    gs.log("Response: " + response);
    var location = response.getHeader("Location");
    var arr = /.*executions\/(.*)\//.exec(location);
    workflow.scratchpad.vroToken = arr[1];
    gs.log("Set vRO token to: " + workflow.scratchpad.vroToken);
    activity.result = response.getStatusCode();
  2. Send an HTTP GET to obtain the status of the workflow execution. Due to some quirks in the MID server, we need to check for the result twice. We’re parsing the return body and extracting the result code from vRO.
    gs.log("vRO token is: " + workflow.scratchpad.vroToken);
    var r = new RESTMessage('Create VM in vRA', 'get');
    r.setStringParameter('token', workflow.scratchpad.vroToken);
    var response = r.execute();
    if(response == null) 
        response = r.getResponse(30000);
    var body = response.getBody();
    gs.log("Response code: " + response.getStatusCode());
    gs.log("Body: " + body);
    
    // Parse return value
    //
    var parser = new JSONParser();
    var jsonResult = parser.parse(body);
    var status = jsonResult.state;
    if(status == "failed")
        workflow.scratchpad.vraError = jsonResult.content_exception;
    gs.log("Workflow status: " + status);
    workflow.scratchpad.vraWorkflorStatus = status;
    activity.result = status;

The final step: Creating a Catalog Item

Once we have the workflows created, it’s time to tie them into a Catalog Item. This is pretty straightforward. We have to make sure we call the workflow we just created and we need to define the variables used by the workflow. You’ll recall we’re using the variables vm_os_image, vm_numCPUs, vm_memoryMB and vm_storageGB.

This is what the catalog item looks like:

Screen Shot 2015-06-18 at 5.48.57 PM

In this case, I chose to create a specific service for this OS image, but you could just as well create a generic one and have a drop-down for selecting the image. There’s a variety of input options you can choose between for the variables. I picked a numeric scale for the CPUs and multiple choices for the other parameters.

Screen Shot 2015-06-18 at 6.00.58 PM

When specifying the variables, you should make sure they have the same names as the ones you’re using in your workflow. This is what my definition of vm_numCPUs looks like, for example:

Screen Shot 2015-06-18 at 6.04.14 PM

Finally, when we have everything defined, we should get a Catalog Item that looks something like this:

Screen Shot 2015-06-18 at 6.13.29 PM

Downloads

Complete vCO Workflows: Click here!