Exploring the vRealize Automation 7.0 Event Broker – Part 3

Abstract

In this article, we will explore the approval topic of the Event Broker. This allows you to plug in custom approval policies and even delegate approvals to external systems.

Approval policies in vRealize Automation allow administrators to insert gating rules into a provisioning workflow. Typically, approval policies work by notifying an approver through email, allowing them to approve or reject a provisioning request by clicking a link or by interacting with the vRealize Automation console. In this article, we will show you how to extend an approval policy by plugging in a custom workflow. Specifically, we will explore how to leverage ServiceNow to approve or reject a request.

Approval Policies in vRealize Automation

Approval policies offer a highly flexible method of adding gating rules to a provisioning workflow. Out of the box, approval policies can be conditional to force approvals only when a user exceeds a certain memory limit, for example. In fact, conditional approvals can be controlled by any aspect of a request, making them extremely flexible.

Also, the fact that approval policies are tied to catalog items through entitlements makes it very easy to have different policies depending on where you sit in an organization.

However, the most powerful approval feature added in vRealize Automation 7.0 is the ability to use the Event Broker to plug in external approval mechanisms. In previous versions, you were limited to manual approvals through emails. With the new Event Broker architecture, you can implement very advanced automatic approvals as well as delegate approvals to an external incident management system. The latter is what we will focus on in this article.

Integrating with ServiceNow

Why did we pick ServiceNow for this example? For a number of reasons, actually. First, it has a developer program which allows you full access to test instances free of charge. Second, it’s very widely used. And third, it has a very straightforward API.

The goal

Our goal is very simple: Whenever a provisioning workflow in vRealize Automation needs an approval, we will call out to ServiceNow to make a request for approval, then block the provisioning workflow until the request is approved in ServiceNow. We are also going to supply some basic information about the request to ServiceNow that the approver can base their decision on.

Using a Service Catalog Request

There are several ways we could do this. One is to create an incident in ServiceNow and wait for it to be resolved. In this article, however, we are going to use the approval mechanism built into ServiceNow, and a Service Catalog Request is better suited for that.

A Service Catalog Request is what ServiceNow uses to handle a user request for some service, be it a cellphone, a laptop or some service from human resources. These requests have an underlying workflow and an approval policy attached to them, making them suitable vehicles for approvals coming from vRealize Automation. In a real life scenario, we would have designed a form and a workflow in ServiceNow for this specific type of request, but in this example, we are just going to use the generic constructs.


 

Screen Shot 2016-03-15 at 12.51.06 PM


 

Implementation

We implement this in three steps: First we build a workflow that can create a Service Catalog Request in ServiceNow and wait for it to be approved. Next, we create an event subscription based on that workflow and finally we link it to an approval policy. Let’s walk through the steps.

vRealize Orchestrator workflow

Let me start by pointing out that this is NOT the way you should be doing it in production. We create a Service Catalog Request in ServiceNow and then we check it once every minute to see if it has been approved. Since there may be hundreds of outstanding approvals in a production system, this approach is very inefficient and could generate a massive number of calls to ServiceNow. A better solution would be to have ServiceNow push an event back to vRealize Orchestrator. This is doable, but beyond the scope of this article. We may address how to receive events from ServiceNow in an upcoming article.

This is what our vRealize Orchestrator workflow looks like:


 

Screen Shot 2016-03-15 at 1.43.44 PM


 

The workflow is relatively straightforward. First, we extract the parameters and do some other basic initializations. Next, we call ServiceNow to create a Service Catalog Request. After that, we enter a loop that checks the approval status of the request and exits with the appropriate return code once we’ve determined whether the request was approved or rejected. Keep in mind that we may spend days in this loop, which is why we need to find a better way of doing this in production (see upcoming article).

Inputs and outputs

An approval workflow takes four input parameters that are all of type Properties (a name-value pair map).

  • fieldNames – The keys represent the name of the fields defined for this approval. The values represent the field types.
  • fieldValues – The values of the fields specified in fieldNames.
  • requestInfo – A map of request properties.
  • sourceInfo – A map of properties from the source of the request (e.g. internal references to the request)

The contents of fieldNames and fieldValues are determined by the fields that are exposed by the approval policy. More about this later.

The field maps (fieldNames/fieldValues) will always contain a field called businessJustification, which an approval workflow can change to send comments back to vRealize Automation. This is typically used to specify a reason for an approval or rejection.
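
To make this a bit more concrete, here is a minimal scriptable-task sketch of how these parameters could be consumed. It only assumes what is described above (Properties maps and the businessJustification field); incidentId is the attribute our workflow uses to track the ServiceNow request.

// Sketch only: log the approval fields and send a comment back to vRA
var keys = fieldNames.keys;
for (var i = 0; i < keys.length; i++) {
    System.log("Field '" + keys[i] + "' (" + fieldNames.get(keys[i]) + ") = "
        + fieldValues.get(keys[i]));
}
// businessJustification travels back to vRealize Automation as the approval comment
fieldValues.put("businessJustification", "Delegated to ServiceNow request " + incidentId);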

Creating the Catalog Item Request in ServiceNow

Working with objects in ServiceNow is very straightforward. Every table in their data model has a REST interface for performing basic CRUD (Create/Read/Update/Delete) operations. All we have to do is to post a new request to the table. In a production implementation of this, we’d probably provide a bit more data, such as actual links to the requester and a link to the catalog item we’re requesting. But for a simple demo, this will do just fine.


 

var payload = {};
payload["short_description"] = shortDescription;
payload["description"] = longDescription;
payload["active"] = "true";
payload["price"] = "1001";
payload["special_instructions"] = "vRA-Request";
var rq = snowHost.createRequest("POST", "/api/now/table/sc_request", 
     JSON.stringify(payload));
rq.contentType = "application/json";
rq.setHeader("accept", "application/json");
var response = rq.execute();
if(response.statusCode != 201)
    throw "ServiceNow error. Status code: " + response.statusCode + " details: " 
        + response.contentAsString;
var json = JSON.parse(response.contentAsString);
incidentId = json.result.sys_id;

Let’s examine the code! First, we build a payload based on the description and short description. We’re also setting a bogus price for the item, which is there to bypass one of the default rules in the ServiceNow development instances that auto-approves anything costing less than $1000. We could (and should) have disabled that rule on the ServiceNow side, but the idea is that this should be able to run against an unmodified ServiceNow development instance. Again, this code is purely for demo purposes.

Next, we make the request itself: a POST to the sc_request table, which creates a new request for us. ServiceNow returns the created record, and the very last line of the script picks up the ID of the request. We will use this in the next step as we check on the status of the request.

Checking the approval status in ServiceNow

Once we have a Service Catalog Request, we periodically check its status. Again, this needs to be replaced by a push-based solution in a production scenario, but the current approach will work fine for demo purposes.


var rq = snowHost.createRequest("GET", "/api/now/table/sc_request/" + incidentId,
     null);
rq.setHeader("accept", "application/json");
var response = rq.execute();
if(response.statusCode != 200)
     throw "ServiceNow error. Status code: " + response.statusCode + 
     " details: " + response.contentAsString;
var json = JSON.parse(response.contentAsString);
approval = json.result.approval.toString();

All this script does is check the Service Catalog Request we just created in ServiceNow to see if it has been approved. The rest of the workflow just deals with sleeping between checks and keeping track of how many times we should do this before timing out.
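
That part could be sketched roughly as follows. The attribute names (approval, retries, maxRetries, done) are assumptions for illustration; the actual workflow may well use decision and timer elements instead of a scriptable task.

// Sketch of the polling logic around the status check above
if (approval == "approved" || approval == "rejected") {
    done = true;                  // exit the loop with the appropriate return code
} else if (++retries > maxRetries) {
    throw "Timed out waiting for approval in ServiceNow";
} else {
    System.sleep(60 * 1000);      // wait a minute before asking ServiceNow again
}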

Creating the subscription

Once we have the workflow in place, it’s time to register a subscription through the Event Broker. Doing so is pretty straightforward. As an administrator, we go to Administration->Events->Subscriptions and click on “New”.


 

Screen Shot 2016-03-24 at 12.23.59 PM


 

Select “Pre-Approval” and click “Next”. A pre-approval subscription is called before any provisioning activities take place. If we wanted to require an approval after provisioning is done, but before the provisioned asset is released to its user, we could use a “Post-Approval” subscription instead.


 

Screen Shot 2016-03-24 at 12.24.14 PM


 

Let’s run this for all events. If necessary, you can create filters here, for example applying the policy only to certain types of assets.

Click “Next”.


 

Screen Shot 2016-03-24 at 12.24.44 PM


 

Select the “Delegate approval to ServiceNow” workflow. This is the vRO workflow we discussed earlier in this article.


 

Screen Shot 2016-03-24 at 12.25.48 PM


 

Optionally, change the name and give it a description. You can also assign a timeout on the subscription. In this case, we’re stating that if the request hasn’t been approved within 24 hours, we automatically cancel the request.

Click “Finish”.

We now have a subscription, but if you were to start deploying VMs you’d notice that our workflow is never called. That’s because we’re missing an Approval Policy, which is the final piece of this puzzle.

Creating an approval policy

An approval policy in vRealize Automation simply states when an approval is needed and how to handle it. It can also deal with approvals that require multiple steps and multiple roles. In this example, we’re going to walk through a simple approval policy that requires a single approval done through a call to ServiceNow.

To add an approval policy, log in as a tenant administrator and select Administration->Approval Policies.


 

Screen Shot 2016-03-24 at 3.14.44 PM


 

Select “Service Catalog – Catalog Item Request”. This makes the approval policy applicable to any request. If you want to narrow it down to a specific catalog item type, you can select that here.

Screen Shot 2016-03-24 at 3.15.52 PM


 

Fill out name and description. Leave the policy as a draft for now.


 

Screen Shot 2016-03-24 at 3.16.12 PM


 

Add a new approval level by clicking the plus sign next to “Levels”. Notice how you can have multiple levels that are executed in series or parallel.

Create a new level and give it a name. Select “Use event subscription” to make it run the pre-approval event subscription we created earlier. You may also enter conditions here if you don’t want the approval to be required for every request. In this example, we make the approval policy “Always required”.

Verify that everything looks OK, change the status of the policy to “Active” and click OK to save it.

Attaching the policy to an entitlement

Approvals are controlled by approval policies and entitlements. For an approval policy to be considered, it has to be attached to an entitlement. This gives you great flexibility when designing approvals. Not only can you define how the approval will behave, but you can also define within what scope of catalog items and users it will apply. For example, you can set up a rule saying “This service is available to all developers, but junior developers will have to obtain an approval by their manager first”.

In our case, we will just set it up to apply to all developers trying to deploy a certain type of VM.

To edit entitlements, log in as a tenant administrator and select Administration->Catalog Management->Entitlements. In our case, we will use an existing entitlement named “Developers” and add our approval policy to a certain service.


 

Screen Shot 2016-03-24 at 3.32.35 PM


In this case, we are applying the approval policy to an entire service, which will cause it to be applied to every catalog item in that service. You may also attach the approval policy to an individual catalog item. Click the drop down next to the service and select “Modify Policy”.


 

Screen Shot 2016-03-24 at 3.35.59 PM


Select the approval policy we just created. Click OK on the dialog box. Click OK on the entitlement form to save the entitlement.

Congratulations! You’ve just configured a custom approval policy that calls out to ServiceNow!

Testing the ServiceNow-based approval

Now that we’ve created and configured our custom approval, it’s time to test it. To do that, let’s just request one of the catalog items from the service we attached the approval policy to.


 

Screen Shot 2016-03-24 at 3.40.03 PM


Request the catalog item and fill out any information needed. Add something for the description and reason for request, as these will be transferred to ServiceNow.


 

Screen Shot 2016-03-24 at 3.41.23 PM


If we go to the “Requests” tab, we should now see the request stuck in the “Pending Approval” state. It will not be released until we’ve approved it in ServiceNow (or the 24-hour timeout we specified has elapsed).


 

Screen Shot 2016-03-24 at 3.45.13 PM


Let’s approve it by clicking the Approve button and wait for a minute or two for the vRealize Automation workflow to catch up.


 

Screen Shot 2016-03-24 at 3.47.24 PM


As you can see, the provisioning process has resumed and our VM is well on its way to being created.

Conclusion

Although this example is not intended for production use, it illustrates the flexibility of the Event Broker and approval engine, as well as the ease with which external components can be integrated into the lifecycle of a catalog item in vRealize Automation.

As an engineer, I am extremely excited about the possibilities the Event Broker opens up, and I will keep coming up with interesting ways to use it.

Downloadable content

vRealize Orchestrator Workflow

 

 


Exploring the vRealize Automation 7.0 Event Broker – Part 2

Abstract

In a previous article, we introduced the Event Broker in vRealize Automation 7.0 and walked through some simple examples of how to subscribe to events in a provisioning workflow.

In this article, we are continuing our exploration by using the Event Broker to implement a more realistic example. We will walk through how to create a simple validation workflow for virtual machines and how to use the Event Broker to drive it.

The problem

In our imaginary organization, we have a policy that no development VM should ever be larger than 2 CPUs/4GB RAM. If we violate that restriction, the VM provisioning process should stop with an error message. Also, if a VM is larger than 1 CPU/2GB RAM, we want a notification email to be sent to the infrastructure manager.

Currently, we implement this restriction through blueprints and business groups, but we have noticed that some groups in our organization don’t seem to enforce these rules and allow creation of blueprints that allow bigger machines. Instead of manually policing this, we want to build a mechanism that enforces these rules, no matter which business group or blueprint they come from.

The solution

Blocking subscriptions

Rather than just listening to events, we need to influence the provisioning process if a user tries to create a machine that’s too large. To do this, we need to introduce the concept of blocking subscriptions.

In a typical publish/subscribe scenario, events are delivered asynchronously to subscribers. Sometimes we call this a “fire and forget” mode of delivery. Once a running workflow has fired an event, it keeps on running and doesn’t keep track of whether the event was picked up or if the subscriber needed something to happen to the workflow.

Blocking subscriptions are different in that they actually pause the workflow until they have been fully processed by the subscribers. Furthermore, a subscriber can send feedback to the workflow by modifying variables or throwing exceptions. Since our validation workflow is going to cause a provisioning workflow to fail if the sizing of a VM violates our limits, blocking subscriptions seem perfect for us!

Defining the subscription

We’re setting up the first part of the event subscription the same way we did in the previous article. Since we want to check things before we provision a machine, we subscribe to the PRE-condition of the BuildingMachine event.

Screen Shot 2016-01-07 at 11.22.49 AM

Configuring a blocking subscription

Whether a subscription is blocking must be determined when you create the subscription and can’t be subsequently changed. We specify this in the last screen of the subscription definition.

Screen Shot 2016-01-08 at 3.50.26 PM

Notice the checkbox marked “Blocking”!

When we make a subscription blocking, we also need to specify its priority and timeout. The priority helps us run the subscriptions in a deterministic order: they run in ascending order of priority (lowest number first). Since blocking subscriptions pause the workflow while they’re executing, we need to prevent workflows from “hanging” indefinitely if some code called by a subscription is faulty and doesn’t return within a reasonable amount of time. The default timeout is 30 minutes, but since the checks we’re going to perform are very simple, 5 minutes should be more than enough. If our subscription hasn’t finished after 5 minutes, something is definitely wrong!

Enabling properties passing

By default, the virtual machine properties aren’t passed to subscription workflows. To do this, we need to set the property Extensibility.Lifecycle.Properties.VMPSMasterWorkflow32.BuildingMachine.

You can set the property at any level, including the blueprint, business group, reservation, etc. In our case, we’re defining it on the vCenter endpoint. This way, anything that comes from that endpoint will pass the properties to any subscribers.

The value of the property should be the names of the properties you want to pass. Wildcards are accepted, so we’re just setting it to “*”, meaning all properties are passed.

Screen Shot 2016-01-21 at 12.22.39 PM

Designing the vRO workflow

Our vRO workflow will simply dig out the memory and CPU settings from the provisioning data and compare them to the limits we specified. If we hit the “hard” limit, we should halt the provisioning with an error message. If we only hit the “soft” limit, we let the provisioning continue, but send a notification email.

Screen Shot 2016-01-08 at 4.02.49 PM

Parsing the parameters

The checks should be somewhat self-explanatory, but let’s have a look what goes on in the first scriptable task (Extract parameters).

Screen Shot 2016-01-08 at 4.04.53 PM

The first line parses the “machine” parameter passed by the Event Broker. If you recall from our previous article, this variable contains all the information about the machine being provisioned encoded as JSON. Once we’ve parsed it, we can treat it as an associative array and pull values out of it.

The last two lines simply pull out the CPU count and memory size and set them as attributes on the workflow so that they can be picked up by the subsequent checks.
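
In code form, the scriptable task boils down to something like the sketch below. The exact property keys for CPU and memory depend on the event payload in your environment, so treat the names here as placeholders.

// Parse the JSON-encoded "machine" parameter and pull out the sizing information.
// The property keys are assumptions; check the actual payload before relying on them.
var parsedMachine = JSON.parse(machine);
cpuCount = parseInt(parsedMachine.properties["VirtualMachine.CPU.Count"], 10);
memoryMB = parseInt(parsedMachine.properties["VirtualMachine.Memory.Size"], 10);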

Checking the resources

Next, we have two sets of checks. The first one checks CPU and memory allocation against our “hard” limit and throws an exception if we are in violation. Throwing an exception in response to a “BuildingMachine” event will stop the entire provisioning workflow with an error status.
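
As a sketch, the hard-limit branch is little more than a comparison and a throw (using the cpuCount and memoryMB attributes set earlier):

// Hard limit from our policy: no more than 2 CPUs / 4GB RAM
if (cpuCount > 2 || memoryMB > 4096) {
    // Throwing here stops the entire provisioning workflow with an error status
    throw "Requested machine (" + cpuCount + " CPUs, " + memoryMB
        + "MB RAM) exceeds the hard limit of 2 CPUs/4GB RAM";
}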

The second set of checks test against the soft limit. If we violate that limit, the provisioning workflow will continue, but an email will be sent to an administrator advising them that a soft limit was violated. The only thing of interest in this branch is the building of the email message and subject line from the machine data.

Screen Shot 2016-01-19 at 2.43.51 PM

Here we’re reusing the “parsedMachine” structure we gained from parsing the input parameters. All we have to do is to reach into it for the machine name and the owner and merge that into a subject line and message body.
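
A sketch of that step might look as follows; the field names (name, owner) are assumptions based on the parsed payload described earlier.

// Build the notification for the soft-limit violation
emailSubject = "Soft limit exceeded for VM " + parsedMachine.name;
emailBody = "The machine " + parsedMachine.name + ", requested by " + parsedMachine.owner
    + ", exceeds the soft limit of 1 CPU/2GB RAM ("
    + cpuCount + " CPUs, " + memoryMB + "MB RAM requested).";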

Testing it

To test the event subscription, we provision three machines: One that doesn’t violate any limits, one that violates a soft limit and one that violates a hard limit.

If things work correctly, two out of the three deployments should work and you should see an email looking similar to this:

 

Behavior and limitations

There are some important rules and limitations you should be aware of when it comes to blocking event subscriptions. These are the most important ones:

  1. You may only stop a vRA workflow from a subscription in the PRE-state of an event. In other words, you have to stop it before any default processing related to that event takes place.
  2. Currently, stopping a vRA workflow by throwing an exception is only supported in the following states: BuildingMachine, MachineProvisioned, MachineRegistered. Throwing an exception in any other state does NOT stop the provisioning workflow. If you want to stop the provisioning workflow from any other state, see below!
  3. As mentioned above, the subscription must be marked as “blocking” when it’s created.

Changing the progression of a workflow

As mentioned above, provisioning workflows are only stopped on error if they are in any of three distinct states (BuildingMachine, MachineProvisioned, MachineRegistered). If you want to stop a workflow or change its progression from another state, you still can but the mechanism is different.

Remember the workflow parameters we discussed in part 1? It turns out that if your vRO workflow responds to a blocking subscription, you can declare some of the parameters as outputs and change their value from within the workflow. This can be very useful for workflows that update the machine name, for example. But there is also a special parameter called “virtualMachineEvent” we can use to control the progression of a workflow.

All we have to do is to assign the name of an event to the “virtualMachineEvent” variable and that event will be processed by the vRA provisioning workflow. There are many different events a provisioning workflow can accept (see the full list here), but the ones we’re interested in here are “BuildFailure” and “BuildSuccess”. As you may have noticed, when you send events, you skip the leading “On” in the event name, so “OnBuildFailure” becomes “BuildFailure”.

Let’s look closer at these two events:

  • BuildFailure – Stops the provisioning workflow in a failed state.
  • BuildSuccess – Stops the provisioning workflow in a success state.

Assume you want to stop the workflow with a failure in a state that doesn’t allow you to stop it by throwing an exception. All you would have to do is declare “virtualMachineEvent” as an output parameter of your workflow and set it to “BuildFailure” like this:

Screen Shot 2016-01-20 at 2.08.35 PM
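
In script form, the assignment is as simple as the sketch below (cpuCount and memoryMB being the attributes from our earlier checks):

// virtualMachineEvent must be declared as an OUTPUT parameter of the workflow.
// Assigning it tells the vRA provisioning workflow which event to process next.
if (cpuCount > 2 || memoryMB > 4096) {
    virtualMachineEvent = "BuildFailure";   // vRA interprets this as OnBuildFailure
}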

For this to work, two requirements have to be met:

  • The subscription must be blocking.
  • The subscription must be in response to the PRE-condition of the event.

Conclusion

This installment in the series about the Event Broker has been focusing on blocking subscriptions, i.e. subscriptions where you can change the outcome of a provisioning workflow. We started by showing how you can stop a workflow in certain states by throwing an exception and went on to a more generic version of workflow control using the virtualMachineEvent variable.

In the next installment, we will discuss more “exotic” events, such as log events and blueprint editing events.

Downloads

The sample workflow discussed in this article can be found here.

Exploring the vRealize Automation 7.0 Event Broker – Part 1

Abstract

One of the most prominent additions to vRealize Automation 7.0, aside from the converged blueprints, is a feature called the Event Broker. This service provides a uniform and robust way of subscribing to and responding to various events originating from vRA. It allows you to plug in actions and checks throughout a machine life cycle, but it goes further than that. With the Event Broker, you can subscribe to other events as well, such as blueprint lifecycle events, approval events and system log events.

This article aims to explore the Event Broker and to show a few examples of how it may be used.

Event Broker architecture

The design of the Event Broker is, in essence, a classic publish/subscribe architecture. In other words, the service acts as an event distribution point where producers of events can publish them and consumers can subscribe to specific event streams. Clients typically subscribe to a category of events known as a “topic”. Events are often filtered even further using conditions, e.g. subscribing to events only from a certain part of the life cycle or only for a certain type of object. Unlike a traditional publish/subscribe architecture, subscribers of the Event Broker can also control the underlying workflow using “blocking events”, which we will discuss in more depth in upcoming articles.

Under the hood, the Event Broker is implemented using a persistent message queue, but most users are probably going to interact with it using vRealize Orchestrator (vRO). When an event that matches the topic and condition for a subscription is triggered, a corresponding vRO workflow is started.

Each topic has a schema that describes the structure of the data carried by the event. For example, a machine provisioning event carries data about the request, such as the name of the user making the request, the type of machine, amount of CPU and RAM, etc.

It’s important to remember that event subscriptions will trigger on any object that’s applicable to the topic. In vRA 6.x, you would typically tie a subscription to a specific blueprint, but with the Event Broker, you would get any event for any machine from any blueprint. In order to restrict that to specific blueprints or machine types, you will have to use a conditional subscription. While there is a slight learning curve involved, the end result is a much more flexible and efficient event mechanism.

Taking the Event Broker for a spin

Enough theory and architecture for now. Let’s take it for a spin and see if we can get it to work!

A Hello World example

First, let’s just see if we can get the Event Broker to trigger a workflow on any event during a VM provisioning. Let’s create a simple vRO workflow that looks like this:

Screen Shot 2016-01-07 at 10.27.27 AM
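
If the screenshot is hard to read: the scriptable task contains nothing more than a single log statement along these lines (the exact message doesn’t matter).

// Hello World: prove that the Event Broker actually calls our workflow
System.log("Hello World! The Event Broker triggered this workflow.");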

Our workflow takes no parameters and just prints a string to the vRO system log. Let’s save it and go back to vRA. In vRA, we go to Administration->Events->Subscriptions and click the “New” button (the plus). This is what we’ll see:

Screen Shot 2016-01-07 at 10.34.49 AM

As we discussed earlier, an event topic is just a category of events you can subscribe to. Since we want to trigger our vRO workflow on provisioning events, we select “Machine provisioning” from the list of topics. Once we select the topic, we’ll see the schema for it, i.e. a description of the data carried by the event. As you can see from the screen shot, we now have access to all the data about the request and the machine being requested. Let’s click the Next button and go to the next screen!

Screen Shot 2016-01-07 at 10.37.22 AM

Here we can choose between subscribing to all events or subscribing based on a condition. A machine provisioning triggers multiple events: when the machine is built, when software is provisioned, when it’s turned on, and so on. Normally we’d add some conditions to pick exactly the events we want, but in this case, let’s just go with everything. Let’s go to the next screen!

Screen Shot 2016-01-07 at 10.40.17 AM

This dialog lets us pick the vRO workflow to run. We’re going to pick the “Hello World” workflow we just created. Let’s just click “Next” and “Finish” from here, and we’ll have a subscription called “Hello World”, since it gets the workflow name by default.

Screen Shot 2016-01-07 at 10.43.01 AM

The last thing we need to do is to select the subscription and click “Publish”. Like most objects in vRA, subscriptions are initially saved in a draft state and need to be activated before they kick in. Now we’re ready to test our workflow! Let’s go request a machine and wait for it to complete! Once the provisioning has completed, we can go to vRO to check on our workflow. It should now have a bunch of tokens (executions) and look something like this:

Screen Shot 2016-01-07 at 10.52.25 AM

Why so many executions? Remember that the provisioning process generates an event per state it goes through, so you’ll see a lot of events, since there are a lot of states to go through. In fact, there are two events generated per state: one just before the workflow enters the state and one just after it leaves it. We call these pre-events and post-events.

Congratulations! We’ve created our first event subscription!

Where are the parameters?

So far, our event subscription is pretty useless, since all it can do is to print a static string to the log. We don’t know anything about what kind of event we received or which object it was triggered for. To build something more interesting than just a “Hello World”, we’ll need access to the event parameters. But where are they?

Let’s click on one of the workflow tokens in vRO and select the variables tab!

Screen Shot 2016-01-07 at 10.58.28 AM

Looks interesting! So it appears that all the variables from the event schema were actually sent to the vRO workflow. But how can we access them? That turns out to be very simple! Just create an input parameter to your vRO workflow with the same name and type as the event variable.

Let’s add the input parameters “lifecycleState” and “machine” to our workflow! This should give us information on what state we’re in and the machine that triggered the event. Let’s also change the code of our scriptable task a bit to look like this:

Screen Shot 2016-01-07 at 11.04.59 AM

Each time we get an event, we’re going to print information on what lifecycle state we’re in and what machine we’re provisioning. Provisioning a machine should get you a log record that looks something like this:

Screen Shot 2016-01-07 at 11.09.49 AM

The state information deserves a closer look. It’s made up of two variables: A phase and a state. The state simply corresponds to the machine states you may be familiar with from IaaS. “VMPSMasterWorkflow32.BuildingMachine” means that the machine is in the initial building stage. The “phase” variable says “PRE” in this example, which means that we caught this event just before the initial build happened. Had it said “POST”, we would have gotten it right after the initial build was done.

And, by the way, those lifecycleState and machine variables are encoded as JSON strings. If you want to manipulate them as associative arrays/dictionaries instead, just call e.g. JSON.parse(lifecycleState). We’ll be doing that in some of the more advanced examples.
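
Putting those two ideas together, the scriptable task from the screenshots could be sketched like this (the field names name, state and phase are taken from the log output described above; verify them against your own payload):

// Print the raw JSON strings we received from the Event Broker
System.log("lifecycleState: " + lifecycleState);
System.log("machine: " + machine);

// Parse them if you prefer working with objects
var state = JSON.parse(lifecycleState);
var vm = JSON.parse(machine);
System.log("Machine " + vm.name + " is in state " + state.state
    + " (phase: " + state.phase + ")");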

Adding a condition

At this point, we have a semi-interesting example of an event subscription. But it’s triggering on every single event for every single VM being provisioned. In a real life scenario, this wouldn’t be very nice to your vRO instance. So let’s dial things down a bit by introducing a condition to our subscription.

To do this, we need to go back to Administration->Events->Subscriptions and click on our “Hello World” subscription to edit it. Then we go to the “Conditions” tab and select “Run based on conditions”.

Screen Shot 2016-01-07 at 11.22.21 AM

The condition can be any test on any field, or a combination of several fields. In our case, we’re just going to select the lifecycle state name.

Screen Shot 2016-01-07 at 11.22.49 AM

Let’s do a simple “equals” operation and test against a static value. The drop-down allows us to select from a list of predefined values. In this case, we want to trigger only if we’re in the “BuildingMachine” state.

We’re now saving the subscription by clicking “Finish” and testing it by provisioning another VM. This time, we should only see two executions of our vRO workflow: The PRE and POST events for “BuildingMachine”.

Conclusion

This is the first installment in a series of articles on the Event Broker in vRA 7.0 and is intended to get you familiarized with the new functionality. While the examples given aren’t useful for real-life scenarios, they might serve as templates for more realistic workflows.

In the next installment (coming soon) of this series we are going to look at more useful examples, such as an auditing workflow imposing hard limits on resource usage. If that sounds interesting, I recommend you add this blog to the list of blogs you follow!

A Simple Template for vR Ops Python Agents

Background

Since I wrote the post about vraidmon, I’ve gotten some requests for a more generic framework. Although what I’m describing here isn’t so much a framework as a simple template, some may find it useful. You can find it here.

What does it do?

In a nutshell, this template helps you with three things:

  1. Parsing a config file
  2. Simplifying metric and properties posting
  3. Simplifying management of parent-child relationships.

The config file

The template takes the name of a configuration file as its single argument. The file contains a list of simple name/value pairs. Here is an example:

vropshost=vrops-01.rydin.nu
vropsuser=admin
vropspassword=secret
adapterSpecific1=bla
adapterSpecific2=blabla

The vropshost, vropsuser and vropspassword are mandatory and specify the host address, username and password for connecting to vR Ops. In addition to this, adapter specific configuration items can be specified.
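
The format is simple enough that a few lines of Python can parse it. The snippet below is just an illustration of the idea, not necessarily how the template implements it.

def read_config(path):
    """Parse a simple name=value config file into a dict."""
    config = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, value = line.split("=", 1)
                config[key.strip()] = value.strip()
    return config

# Example: cfg = read_config("adapter.cfg"); host = cfg["vropshost"]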

Writing the adapter

Screen Shot 2015-12-22 at 3.51.03 PM

Using this template is very easy. Just locate the “YOUR CODE GOES HERE” marker and start coding! The only requirement is that you have to set the adapterKind and adapterName variables.

Setting the adapter type and name

The adapterKind is the “adapter kind key” for the adapter you’re building. It basically tells the system what kind of adapter you are. You can really put anything in here, but it should be descriptive to the adapter type, such as “SAPAdapter” or “CISCORouterAdapter”.

The “adapterName” is a name of this particular instance of the adapter. There could very well be multiple adapters of the same type, so a unique name is needed to tell them apart.

Posting data

Once we’ve got the adapter naming out of the way, we can start posting data. This is done through the “sendData” function. It takes 5 parameters:

  1. The vR Ops connection we’re using
  2. The resource type we’re posting for. This should uniquely specify what kind of object the data is for. E.g. “CISCORouterPort” if we’re posting data about a port.
  3. Resource name. This uniquely identifies the object we’re posting data for, e.g. “port-4711”.
  4. A name-value pair list of metrics we’re posting for the resource, e.g. { “throughput”: 1234, “errors”: 55 }
  5. A name-value pair list of properties we’re posting e.g. { “state”: “functional”, “vendor”: “VMware” }

The sendData function returns the resource we posted data for. Note that if the resource didn’t already exist, it will be created for you.
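
A hypothetical call, following the parameter order above and reusing the router example, might look like this. The metric and property names are made up for illustration, and the connection object is assumed to be called vrops.

adapterKind = "CISCORouterAdapter"   # what kind of adapter this is
adapterName = "router-adapter-lab"   # this particular adapter instance

# sendData returns the resource it posted data for, creating it if necessary
resource = sendData(
    vrops,                                        # the vR Ops connection
    "CISCORouterPort",                            # resource kind
    "port-4711",                                  # resource name
    {"throughput": 1234, "errors": 55},           # metrics
    {"state": "functional", "vendor": "VMware"})  # properties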

Building relationships

Sometimes you want your monitored resources to be connected to some parent object. For example, if you are monitoring tables in a database, you may want them to be children of a database object. This way, they will show up correctly in dependency graphs in vR Ops and your users will find it easier to navigate to them.

To form a parent-child relationship, use the addChild function. It takes the following parameters:

  1. The vR Ops connection to use
  2. The adapter kind of the parent object
  3. The adapter name of the parent object
  4. The resource kind of the parent object
  5. The resource name of the parent object
  6. The child resource to add
  7. (Optional) A flag determining whether parent objects should be automatically created if they don’t exist. By default, they are not automatically created, and adding a child to a nonexistent parent results in a no-op.

Here’s an example of an agent using this template that monitors Linux processes. As you can see, we’ve used the “addChild” function to add the Linux process objects to a Linux host.

Screen Shot 2015-12-22 at 4.10.51 PM.png
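
In code, such a call might be sketched as follows; the adapter and resource names are invented for illustration and simply follow the parameter list above.

# Attach a monitored process (the resource returned by sendData) to its host
addChild(
    vrops,               # the vR Ops connection
    "LinuxProcAdapter",  # adapter kind of the parent
    "linuxproc-lab-01",  # adapter name of the parent
    "LinuxHost",         # resource kind of the parent
    "webserver-01",      # resource name of the parent
    resource,            # the child resource to add
    True)                # create the parent automatically if it doesn't exist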

Downloadable content

Github repository: https://github.com/prydin/vrops-py-template

vraidmon – A vRealize Operations Adapter Written in Python

Background

Two events coincided today: realizing that I’ve been running without redundancy on a Linux software RAID, and having some time to spare for a change. So I decided to write a simple vR Ops agent to monitor a software RAID, and, since I had some time on my hands, I wanted to do it in Python instead of my go-to language Java. This is my very first Python program, so bear with me.

Learning Python proved to be very easy. It’s a very intuitive language. I’ve got some issues with loosely typed languages, but for a quick hack I’m willing to put that aside. What turned out to be a bit more difficult was to find any good examples of agents for vR Ops written in Python. The blog vRandom had a getting started guide that allowed me to understand how to install and get the thing running, but how to code the actual agent was harder to find. But after some trial and error I got the hang of it and I decided to share my experience here.

A quick look at the code

Initializing

The client library is called “nagini”, so before we can do anything, we need to import it:

import nagini

Next, we need to obtain a connection to vR Ops. This is done like this:

vrops = nagini.Nagini(host=vropshost, user_pass=(vropsuser, 
vropspassword))

It’s worth noting that this doesn’t actually make the connection. It just sets up the connection parameters. So if you do something wrong, like mistyping the password, this code will gladly execute, but you’ll get an error later on when you try to use the connection.

Sending the data

Here is where things get quite interesting and the Python client libraries show their strength. While all the REST methods are exposed to the programmer, the libraries also add some nice convenience methods.

Normally, when you post metrics for a resource, you need to do it in three steps:

  1. Look up the resource to obtain its ID.
  2. If the resource doesn’t exist, create it.
  3. Send the metrics using the resource ID you just obtained by looking it up or creating it.

The vR Ops Python client can do this in a single call! All we have to do is put together a somewhat elaborate data structure and send it off. Let’s have a look!

Screen Shot 2015-12-15 at 9.21.09 AM

Wow, that’s a lot of code! But don’t despair! Let’s break it down. In this example, I’m sending both stats and properties. If you recall, stats are numeric values, such as memory utilization and remaining disk space, whereas properties are strings and typically more static in nature. Let’s examine the stats structure.

The REST API states that stats should be sent as a single member called “stat-content”. That member contains an array of sub-structures, each representing a sample. Each sample has a “statKey” holding the name of the metric, a “timestamp” member and a “data” member holding the actual data value. As you can see above, I’m sending three values: “totalMembers”, “healthyMembers” and “percentHealthy”.
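
As a rough sketch, that structure looks something like this in Python. The member names follow the description above; double-check the exact shape against the screenshot and the API docs bundled with the client libraries.

import time

timestamp = int(time.time() * 1000)      # epoch milliseconds, see the next section
total_members, healthy_members = 2, 2    # in vraidmon these come from parsing /proc/mdstat

stats = {"stat-content": [
    {"statKey": "totalMembers",   "timestamp": timestamp, "data": total_members},
    {"statKey": "healthyMembers", "timestamp": timestamp, "data": healthy_members},
    {"statKey": "percentHealthy", "timestamp": timestamp,
     "data": 100.0 * healthy_members / total_members},
]}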

Now that we have our fancy structure, we can send it off using the find_create_resource_push_data method. It does exactly what the name implies: it tries to find the resource, creates it if it doesn’t exist, and then pushes the data. To make it work, we need to send the name of the resource, the name of the resource kind as well as the adapter kind, and the stats and/or properties. And that’s it! We’ve now built an adapter with Python using a single call to the vR Ops API!

What about the timestamp?

But wait, there’s one more thing we need to know. The vR Ops API expects us to send a timestamp as a Java-style epoch milliseconds long integer. How do we obtain one of those in Python?

Easy! We just use the time.time() method (you have to import “time” first). This gives us the epoch seconds (as opposed to milliseconds) as a float, so we have to multiply it by 1000 and convert it to a long. It will look something like this:

timestamp = long(time.time() * 1000)

Installing and running

Prerequisites

First of all, you need to install python (obviously). Most likely it’s already installed, but if it isn’t, just go to the package manager on your system and install it. Next, you need the client libraries for vR Ops. They are located on your vR Ops server and can be downloaded and installed like this:

mkdir tmp
cd tmp
wget --no-check-certificate https://<your vrops server>/suite-api/docs/bindings/python/vcops-python.zip
unzip vcops-python.zip
python setup.py install

Installing the agent

Download the agent from here. Unzip it in the directory where you want to run it, e.g. /opt/vraidmon

mkdir /opt/vraidmon
unzip <path to my zip>

Next, we need to edit the configuration file vraidmon.cfg. You need to provide hostname and login details for your vR Ops. The name of the raid status file should remain /proc/mdstat unless you have some very exotic version of Linux. Here is an example:

vropshost=myvrops.mydomain.com
vropsuser=admin
vropspassword=topsecret!
mdstat=/proc/mdstat

Now you can run the agent using this command:

python vraidmon.py vraidmon.cfg

You’ll probably get some warning messages about certificates not being trusted, but we can safely ignore those for now.

Making it automatic

Running the agent manually just sends a single sample. We need some way of making this run periodically and the simplest way of doing that is through a cron job. I simply added this to my crontab (ignore any line breaks and paste it in as a single line):

*/5 * * * * /usr/bin/python /opt/vraidmon/vraidmon.py /opt/vraidmon/vraidmon.cfg 2> /dev/null

Adding alerts

In order for this adapter to be useful, I added a simple alert that fires when the percentage of healthy members drops below 100%. This is what a simulated failure looks like:

Screen Shot 2015-12-15 at 11.30.49 AM.png

Downloadable material

Python source code and sample config file.

Alert and symptom definition.

 

$500 Poor Man’s vCloud Suite Lab

Yes, I’m cheap!

I guess I’m a bit cheap when it comes to buying equipment for my lab. To be honest, I’m more interested in funneling money to my hobbies and my wife’s hobbies than spending money on an elaborate lab. However, since I felt the need of a true sandbox where I could experiment freely, I arrived at a point when a home lab was more or less necessary.

So I set out to create the most inexpensive lab that would still get the job done. It became almost an obsession. Was it possible to build a viable lab for less than $500? The result was pretty astonishing, so I decided I wanted to share it.

Requirements

I needed a demo lab where I could run most or all of the vCloud Suite products as well as NSX. It didn’t have to be extremely beefy, but fast enough to run demos without making the products look slow. Here’s a condensed list of requirements:

  • Run the basic components of the vCloud Suite: vRealize Automation, vRealize Orchestrator, vRealize Operations, LogInsight.
  • Run some simulated workloads, including databases and appservers.
  • Run a full installation of NSX
  • Provide a VPN and some simplistic virtual desktops.
  • Have at least 1Gb/s Ethernet connections between the hardware components.

Deciding on hardware

I did a lot of research on various hardware alternatives. Let’s face it: what it comes down to is RAM. In a lab, the CPUs are waiting for work most of the time, but the vCloud Suite software components eat up a fair amount of memory. I calculated that I needed about 100GB of RAM for it to run smoothly.

At first, I was looking at older HP Proliant servers. I could easily get a big DL580 with 128GB RAM for about $200-300. But those babies are HUGE, make tons of noise and suck a lot of power. Living in a normal house without a huge air conditioned server room and without a $100/month allowance for powering a single machine, this wasn’t really an alternative.

My next idea was to buy components for bare-bones systems with a lot of memory. I gave up that idea once the value of my shopping cart topped $3000.

I was about to give up when I found some old Dell Precision 490 desktops on eBay. These used to be super-high-end machines back in 2006-2007. Now, of course, they are hopelessly obsolete and are being dumped on eBay for close to nothing. However, they have two interesting properties: they run dual quad-core Xeon CPUs (albeit ancient) and you can stuff them with 32GB of RAM. And, best of all, they can be yours for about $160 from PC refurbishing firms on eBay. On top of that, memory upgrades to 32GB can be found for around $40.

Waking up the old dinosaurs

So thinking “what do I have to lose?” I ordered two Dell Precision 490 along with a 32GB memory upgrade for each one. That all cost me around $400.

I fired everything up, installed ESXi 6 and started spinning up VMs with software. To my surprise and delight, my old dinosaurs of ESXi servers handled it beautifully.

vms

Here’s what vRealize Operations had to say about it.

vrops lab

As you can see, my environment is barely breaking a sweat. And that’s running vRealize Operations, vRealize Log Insight, vRealize Automation 6.2, vRealize Operations 7.0 Beta, NSX controllers and edges along with some demo workloads including a SQL Server instance and an Oracle instance. Page load times are surprisingly fast too!

A Jumbo Sized Problem!

After a while I noticed that some NSX workloads didn’t want to talk to each other. After some troubleshooting, I realized that there was an MTU issue between the two ESXi servers. Although the Distributed vSwitch accepted an MTU of 6000 bytes without complaining, the physical NICs were still pinned at an MTU of 1500 bytes. It turns out the on-board NIC in the Precision 490 doesn’t support jumbo frames.

The only way to fix this seems to be to use another NIC, so I went back and got myself two Intel PRO/1000 GT Desktop Adapters. They normally run about $30-35, but I was lucky enough to find some Amazon vendor who had them on sale for $20 a piece. If you didn’t already know it, the Intel PRO/1000 GT has one of the most widely supported chipsets, so it is guaranteed to run well with virtually any system. And it’s cheap. And supports jumbo frames.

How much did I spend?

I already owned some of the necessary components (more about that later), so I cheated a little when meeting my $500 goal. Here’s a breakdown of what I spent:


 

$320 – 2x  Dell Precision 490 2×2.66GHz Quad Core X5355 16GB 1TB

$80 – 2x 32GB Memory Upgrade

$40 – 2x Intel PRO/1000 GT


 

This adds up to $440. Add some shipping to it, and we just about reach $500.

Vendors

I got the servers from the SaveMyServer eBay store. Here is their lineup of Dell Precision 490.

The memory is from MemoryMasters.

The network adapters are from Amazon. There’s a dozen of vendors that carry it. I just picked the cheapest one.

What else do you need?

In my basement I already had the following components:


 

1 Dell PowerEdge T110 with 16GB memory

1 old PC running Linux with 2x 2GB disks in it.

1 16-port Netgear 1Gb/s Ethernet switch.


 

Assuming you already own a switch and some network cables, you’ll need at least some kind of NAS. I just took an old PC, loaded CentOS on it and set up an NFS share. That worked fine for me. It’s not the fastest when you’re seeing IO peaks, but for normal operations it runs just fine. You can also get e.g. a Synology NAS. With some basic disks, that’s going to set you back about $300.

I already had an old ESXi server (the PowerEdge T110), and I decided to run my AD, vCenter and NSX Manager on it. I could probably have squeezed them onto the two Precision 490s, but it’s a good idea to run the underpinnings, such as vCenter and AD, on a separate machine. If I hadn’t already had that machine, I would probably have ordered three of the Precision 490s and used one for vCenter and AD.

Room for improvement

Backup

The most dire need right now is backup. Although my NAS runs a RAID1 and gives me some protection, I’m still at risk of losing a few weeks of work in setting up vCloud Suite the way I wanted it. I’ll probably buy a Synology NAS at some point and set it up to replicate to or from my old NAS.

Network

Currently, everything runs on the same physical network. This is obviously not a good design, since you want to at least separate application traffic, management traffic, storage traffic and vMotion into separate physical networks. I might not get to four separate networks, but I would like to separate storage traffic from the rest. Given that I have a free NIC on the motherboard, it’s just a matter of cabling and a small switch to get to that point.

Backup power

I’ve already had a power outage in my house since I built this lab. And booting up all my VMs on fairly slow hardware wasn’t the greatest thing in the world. I’ll probably get a small UPS that can last me through a few minutes of power outage, at least until I can get my generator running.

A final word

If you have a few hundred dollars and a few days to spend, I highly recommend building a simple lab. Not only is it great to have an environment that works just the way you want it, you’ll also learn a lot from setting it up from scratch. In my line of work, the infrastructure is already there when I get on the scene, so it was extremely valuable to immerse myself into everything that’s needed to get to that point.

I’d also like to thank my friend Sid over at DailyHypervisor for his excellent tutorials on how to set up NSX. I would probably still have been scratching my head if it wasn’t for his excellent blog. Check it out here on DailyHypervisor.

Day two operations on multi machine blueprints in vRealize Automation 6.2

Introduction

If you’ve ever tried to add a custom day two operation to a multi-machine blueprint, you must have hit the same dead end as I did. There’s simply no object type to tie the action to. As I’m sure you know if you’re into day two operations, the object type of a typical day two operation is VC:VirtualMachine. But that clearly wouldn’t work for a multi-machine deployment, since those act on a set of machines rather than a single one.

In this article, I’m describing a simple technique to solve this problem and provide a framework for day two operations on multi-machine deployments.

If you are just interested in getting it to work and don’t have time to read about the design, just skip ahead to “Putting it all together” below!

A quick recap of custom day two operations

A day two operation is simply an action that you can take on an existing resource, such as a running virtual machine. For a VM, they typically include power on, power off, reconfigure, destroy, etc. All of the basic and essential day two operations for standard objects, such as VMs and multi-machine deployments come out of the box with vRealize Automation. But sometimes you want to add your own operations. Suppose, for example, that your organization uses some proprietary backup solution and you want to allow users to request a backup from the vRealize Automation UI. You could do this by adding a custom day two operation.

The core of a custom day two operation is just a vRealize Orchestrator workflow. As a workflow designer, you have a lot of freedom when it comes to building a day two operation. There’s really just one rule you have to follow: the workflow needs to accept a parameter that represents the object it was executed on. So if you’re building a workflow that’s used as a day two operation on virtual machines, your workflow must have a parameter of the type VC:VirtualMachine.

The problem with multi-machine deployments

The problem is simply this: There is no type in vRealize Orchestrator that represents multi-machine deployments. So it’s impossible to build a workflow that takes the right kind of input parameter, and since this is a strict requirement for day two operation workflows, you’re going to hit a dead end at that point.

Why that is has to do with the design of vRealize Automation. The concept of a multi-machine deployment lives in the IaaS portion of the code, whereas the custom day two operations live in the core vRealize Automation code, and the mapping between them isn’t exposed to the user. The good news is that this issue is fixed with the blueprint redesign in vRealize Automation 7.0. But what about those of us who still live in the 6.x world?

The (surprisingly simple) solution

As of vRealize Automation 6.1, something called resource mappings was introduced. These are simply vRealize Orchestrator workflows or actions that convert from a native vRealize Automation type to a type that vRealize Orchestrator can handle. As you may expect, there are a handful of these included by default, and not surprisingly, the one that converts from the vRealize Automation representation of a virtual machine to a VC:VirtualMachine is one of the more prominent ones.

When you write a resource conversion workflow or action, you’re getting all the properties of the resource in a Properties object from vRealize Automation. It is then your responsibility to write the code that builds an inventory object from these values.

But what about a multi-machine deployment? Which inventory object type do we map that one to? It turns out that the IaaS representation of a machine is a perfect candidate for this. This object type, called VCAC:VirtualMachine happens to be able to represent multi-machine deployments as well. What’s even better is that there’s already an API-function for getting hold of VCAC:VirtualMachine and we have all the data we need to do it.

Here’s the code needed to do this:


var host = System.getModule("net.virtualviking.mmday2").getvCACHost();
var identifiers = { VirtualMachineID:properties["machineId"] };
machine = vCACEntityManager.readModelEntity(host.id, "ManagementModelEntities.svc", "VirtualMachines", identifiers, null);
return machine.getInventoryObject();


What’s going on here? Well, first we get hold of our IaaS host. Next, we build an identifier we’re going to use to search for a virtual machine in IaaS. Then we run the query and finally return the corresponding inventory object. That’s really all there is to it in terms of code.

You can implement this either as a workflow or an action in vRealize Orchestrator. I chose to do it as an action, since it runs slightly faster.

Putting it all together

  1. Download and install the package. See the end of this article for a download link!
  2. Select the “Design” perspective and go to the configurations tab.
  3. Edit the “net.virtualviking.mmday2” configuration bundle.
  4. Set the vCACHost property to point to your IaaS service (must be registered first).
    Screen Shot 2015-11-13 at 3.51.11 PM
  5. In vRealize Automation, log in as a tenant admin and go to Advanced Services->Custom Resources.
  6. Create a new custom resource.
    1. Enter orchestrator type vCACCAFE:Catalog
    2. Enter name “Catalog Resource”.
    3. (Optional) enter a description and version.
    4. Click next
      Screen Shot 2015-11-13 at 3.39.31 PM
    5. Leave the defaults values on the Details Form page.
    6. Click next.
    7. Leave the defaults on the Where Used form.
    8. Click Finish.
  7. Go to Advanced Services->Resource  Mappings.
  8. Create a new Resource Mapping.
    1. Name it “Multi-Machine Service”.
    2. Enter Catalog Resource Type “Multi-Machine Service”
    3. Enter Orchestrator Type “vCAC:Virtual Machine”
    4. Select “Mapping Script action” and navigate to “net.virtualviking.mmday2” and “convertMultimachine”.Screen Shot 2015-11-13 at 3.44.03 PM
    5. Select “Add”.

You should now be able to create day two operations on multi-machine deployments. To test it, you can use the “Print Multimachine Details” workflow provided with the package.

Please reach out to me in the comments if you have questions!

Downloadable components

The vRealize Orchestrator package can be downloaded from here.

Glucose Monitoring – A vRealize Operations Adapter using the SDK

Background

I have a close friend who is unfortunate enough to suffer from Type 1 Diabetes. As you may know, this deadly disease can only be managed by constantly monitoring your blood glucose and injecting insulin. Fortunately, there’s a silver lining, and it’s called “cool gadgets”. Many diabetes sufferers now wear continuous glucose monitors, which electronically track the patient’s glucose level and display the numbers and trends.

But it doesn’t stop there. The Dexcom glucose monitor allows the metrics to be sent to the cloud in real time for remote monitoring and analysis. Of course, it didn’t take long before some geeky diabetics figured out how to harvest the data and build open-source tools around it. They founded the Nightscout Foundation and continue to build useful and life-saving tools for managing diabetes on an open-source basis. You should check them out! They do amazing work!

When I heard about this, I realized that this is just time series data with trends and patterns, accessible through a simple REST API. Then I thought of vR Ops, a tool built to analyze exactly that kind of data, and I figured it would be fun to write an adapter that pulls blood glucose data into vR Ops. So the next time I had a break between two meetings, I went to work!

The most interesting thing about this project is, in my opinion, how it leverages the dynamic thresholds to determine the normal range for blood glucose levels over the course of a day. It serves as a great demo of how the algorithms in vRealize Operations can adapt to virtually anything!

graph

vRealize Operations Java API

Since I have a background as a Java developer, I realized that using the Java API for vRealize Operations would be the easiest way to go. The Java API is essentially a wrapper around the REST API that allows you to interact with vRealize Operations through regular Java classes. If you are a Java programmer and want to interact with vRealize Operations, this is the way to go!

The Nightscout API

This API is dead simple. All you have to do is issue a GET against http://{host}/api/v1/entries/current to get the latest sample. The sample is returned as a tab-separated string containing a timestamp, a glucose reading and a trend indicator. For this project, we’re just using the timestamp and the glucose reading.
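
For reference, a call and response might look like the example below. The host name and the sample values are placeholders, and the exact set of fields can vary between Nightscout versions; as noted above, we only rely on the timestamp and the glucose reading.

# Fetch the latest entry (the host name is a placeholder)
curl http://my-nightscout-host/api/v1/entries/current

# Hypothetical tab-separated response: timestamp (epoch millis), glucose reading, trend indicator
1436557200000	123	"Flat"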

Running the code

We could have created some kind of intelligent scheduler that synchronizes data collection with the timestamps of the returned data to minimize lag, but in this case, I just went with a simple Java program that’s kicked off by a cron job set to execute every five minutes.

The Java program takes the following parameters: a username and password for vRealize Operations, a URL for Nightscout and the name of the patient we’re collecting data for.
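
As a rough illustration of how this is wired up (the jar name and the argument order are made up for this example; the real program may take its parameters differently), the crontab entry could look something like this:

# Run the collector every five minutes (jar name and arguments are placeholders)
*/5 * * * * java -jar glucose-collector.jar vropsUser vropsPassword https://my-nightscout-host "John Doe"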

A walkthrough of the code

The run() method

The code is fairly simple, but it illustrates a few key concepts of the Java API for vRealize Operations. The bulk of the work happens in the run() method.

Screen Shot 2015-07-10 at 3.41.18 PM

This function is called once the parameters are extracted and parsed. First, it opens an HTTP connection to Nightscout and issues a GET. The result is then parsed simply by splitting up the tab-separated string.

Once that’s done, we call vRealize Operations to look up an object representing the patient we’re monitoring. We do that by looking up an object of the type “Human” with the patient name supplied on the command line. If we don’t find the patient record, we create it.

Finally, we pack the data samples into an array and call “addStats” to send the new sample to vR Ops.

The findResourceByName() method

This method is really just a wrapper around the API call for looking up an object by name.

Screen Shot 2015-07-10 at 3.48.38 PM

The first three lines deal with the resource type. When this is first executed, the resource type “Human” isn’t going to exist, so we have to create it.

Once we have the resource kind identifier, we can go ahead and call the “getResources” API method. This will return a list of resource identifiers, but since we know there should only be one, we can return the first (and only) object we find.

The createHuman() method

If the object representing the patient doesn’t exist, we need to create it.

Screen Shot 2015-07-10 at 3.52.33 PM

Again, this is just a prettier version of the API method for creating a resource. All it does is fill out a structure with the resource kind identifiers, adapter kind and adapter name. The “resourceKey” is the unique identifier for this object and, in our case, consists of the patient name.

The addStats() method

This method performs the final task of actually sending the metrics to vRealize Operations.

Screen Shot 2015-07-10 at 3.56.06 PM

 

As usual, this method does little more than assemble a structure containing the necessary values for the API call. Notice that we could send multiple timestamps and values if we wanted to. In this case, however, we’re just dealing with the latest value from Nightscout, so we’re only supplying a single timestamp and value.

The end result

Screen Shot 2015-07-10 at 4.02.39 PM

The end result is actually quite fascinating and a great testament to the dynamic thresholds in vRealize Operations. Most people experience a spike in blood glucose after a meal and a lower blood glucose level during the night. For a person with diabetes, these swings are typically more pronounced. If you look at the dynamic thresholds, you’ll see a lower and narrower range during the night. After lunch and dinner, the range is wider and spikes are more common. This is clearly visible in the dynamic thresholds (the gray area) behind the graph.

This particular person usually has a low-carb breakfast, so you don’t normally see a spike. This day, however, he had pancakes for breakfast, and his blood glucose spiked significantly outside of the normal range. How is that for a demonstration of the dynamic threshold feature in vRealize Operations?

Dashboards

No metric collector is complete without some great dashboards. Here are some examples of dashboards we built for the glucose data.

real-time

Real time view

long-term

Long term view

Conclusion

Although this is a somewhat exotic example of API use, it clearly illustrates both the simplicity of the API and the power of the automatically calculated dynamic thresholds.

Please consider supporting the research for a cure to Type 1 diabetes. JDRF is an excellent organization to support!

http://jdrf.org/get-involved/ways-to-donate/

The Nightscout Foundation could also use your donation to further promote and develop the open-source technology for helping diabetics.

http://www.nightscoutfoundation.org/product/donate-2/

Downloads

The complete source code can be downloaded here.

Source code

Disclaimer

This project is intended as an example of what is possible using glucose monitoring technology and vRealize Operations. It is NOT intended to diagnose or cure any disease or condition. Type 1 Diabetes is a serious condition and should ONLY be managed using methods approved by the appropriate government bodies and recommended by your healthcare professional!

Using ServiceNow as a front end for vRealize Automation

Background

Today, ServiceNow has become something of a de-facto standard for self-service portals and ticket-tracking systems. It is used in virtually every industry and by organizations of all sizes. While vRealize Automation (vRA) offers its own self-service portal with a wide range of features, it is sometimes desirable to allow users to request virtual machines and other resources from a common ServiceNow portal, while carrying out the actual provisioning tasks using vRA. This combines the benefits of a single request portal with the power and governance of vRA.

This article describes how ServiceNow can be tied into vRA using a couple of simple workflows.

Direct vRealize Automation API vs. vRealize Orchestrator

One of the first design decisions I had to make was how to expose vRA to ServiceNow. I could, of course, call the vRA API directly from ServiceNow. Another alternative would be to encapsulate the vRA API calls in a vRealize Orchestrator (vRO) workflow. I opted for the latter, since it makes the ServiceNow code a bit simpler and leverages the vRA plugin in vRO.

Implementing the vRO workflow

To make the ServiceNow workflow simpler, I wanted to make a single vRO workflow that handled the entire provisioning process. This way, I only have to make one API call from ServiceNow. These are the basic steps the workflow needs to accomplish:

  1. Check parameters against blueprint, reject invalid parameters and fill in defaults for ones that are missing.
  2. Look up blueprint and request provisioning.
  3. Wait for completion and check outcome (success or failure).

To avoid having the code break due to small differences in the API, I used the vRA plugin to make all calls, rather than calling the APIs directly.

Here is what the workflow looks like:

Screen Shot 2015-07-09 at 12.41.15 PM

The workflow is pretty straightforward and uses mostly pre-existing components. The “Prepare Properties” script is really the only code in the workflow. All it does is examine which parameters were supplied and provide the blueprint defaults for the rest.
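
As a minimal sketch of that merging logic (the variable names and default values below are assumptions; the real script obtains the defaults from the blueprint through the vRA plugin), it could look something like this:

// Illustrative only: merge the parameters supplied by ServiceNow with defaults.
// In the real workflow, "defaults" is derived from the blueprint via the vRA plugin.
var defaults = { numCPUs: 1, memoryMB: 1024, storageGB: 20 };  // placeholder values
var merged = new Properties();
for (var key in defaults) {
    var supplied = inputProperties.get(key);   // "inputProperties" is a hypothetical workflow input
    merged.put(key, supplied != null ? supplied : defaults[key]);
}
// "merged" is then handed to the provisioning request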

Calling vRO via REST

In order for ServiceNow to be able to make a call to vRO, we need to register a REST call. In vRO, the https://server:8281/vco/api/workflows endpoint gives you a list of workflows. If you know the UUID of a workflow, you can address it directly by doing an HTTP GET against the URL https://server:8281/vco/api/workflows/{UUID}. Since the UUID of a workflow is permanent and doesn’t change under any circumstances, we can safely just copy it from vRO and use it directly in the URL.

Under each workflow, there are several subsections of linked information, such as the executions of that workflow. We can check on previously started workflows by issuing a GET against the executions URL. Consequently, issuing a POST against the URL https://server:8281/vco/api/workflows/{UUID}/executions creates a new execution, i.e. runs the specified workflow. Any parameters to the workflow can be sent as a JSON document in the request body.
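
The exact envelope depends on your vRO version, but a request body sent with Content-Type: application/json typically looks something like the sketch below. The parameter names match the workflow inputs used later in this article; the types and values here are just examples.

{
  "parameters": [
    { "name": "image",     "type": "string", "value": { "string": { "value": "centos7" } } },
    { "name": "numCPUs",   "type": "string", "value": { "string": { "value": "2" } } },
    { "name": "memoryMB",  "type": "string", "value": { "string": { "value": "4096" } } },
    { "name": "storageGB", "type": "string", "value": { "string": { "value": "40" } } }
  ]
}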

Successfully starting a workflow immediately returns the HTTP status code 202 (Accepted) and sets the Location header to a URL where we can check on the completion of the workflow. Note that the workflow has not finished running at this point. All we got was an acknowledgement that vRO has successfully started the workflow.

To check for the status of the workflow, we need to issue a GET against the URL returned in the “Location” header. That URL looks something like this: https://server:8281/vco/api/workflows/{UUID}/executions/{token} where “token” is a unique identifier for the workflow execution.
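
The execution resource returned from that URL is a JSON document whose state field reports how the run is doing; the ServiceNow script later in this article reads exactly that field. A trimmed-down, partly hypothetical example of such a response (the real document contains many more fields):

{
  "state": "completed",
  "start-date": "2015-07-09T12:41:15.000Z"
}

Typical values for state include running, completed and failed.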

Defining the REST message in ServiceNow

Let’s put it all together!

First, we need to specify the REST message. We’re including the workflow UUID in the URL. You can obtain the workflow UUID from the workflow properties page in vRO. Note that the ID is permanent and will remain the same even if the workflow is imported, exported or renamed, so it is safe to hard-code it into the URL. You will probably have to specify a MID server, since your vRO instance is likely to be behind a firewall.

rest

Next, we are going to focus on the POST message. Remember, if you POST to the workflow’s executions URL, you start a new execution of the workflow. Note that we accept four parameters: the name of the OS image (the blueprint name), the number of CPUs, memory and storage. We use these to construct a JSON message in the body. This passes the parameters from ServiceNow to the vRO workflow.

rest-post

Finally, we need to specify the GET message. This is fairly straightforward. All we need to do is make sure we pass the execution ID at the end of the URL. This is done using the “token” variable.

rest-get

Implementing the ServiceNow workflow

On the ServiceNow side, we need to create a workflow that carries out the provisioning action, i.e. calls vRO to run the provisioning workflow we built above. Normally, you would most likely have approvals and other actions tied to the workflow, but in this example, we’re simply focusing on the provisioning part.

Here is our ServiceNow workflow.

workflow

Let’s walk through the steps:

  1. Run a script to send a request to vRO to start the provisioning workflow. We’re pulling the parameters from the current item request and passing them in a REST call.
    gs.log("image: " + current.variables.vm_os_image);
    gs.log("CPUs: " + current.variables.vm_numCPUs);
    gs.log("Memory: " + current.variables.vm_memoryMB);
    gs.log("Storage: " + current.variables.vm_storageGB);
    var r = new RESTMessage('Create VM in vRA', 'post');
    r.setStringParameter('image', current.variables.vm_os_image);
    r.setStringParameter("numCPUs", current.variables.vm_numCPUs);
    r.setStringParameter("memoryMB", current.variables.vm_memoryMB);
    r.setStringParameter("storageGB", current.variables.vm_storageGB);
    
    // Send the request
    //
    var response = r.execute();
    if(response == null) 
        response = r.getResponse(30000);
    gs.log("Status: " + response.getStatusCode());
    gs.log("Workflow token: " + response.getHeader("Location"));
    gs.log("Response: " + response);
    var location = response.getHeader("Location");
    var arr = /.*executions\/(.*)\//.exec(location);
    workflow.scratchpad.vroToken = arr[1];
    gs.log("Set vRO token to: " + workflow.scratchpad.vroToken);
    activity.result = response.getStatusCode();
  2. Send an HTTP GET to obtain the status of the workflow execution. Due to some quirks in the MID server, we need to check for the result twice. We’re parsing the return body and extracting the result code from vRO.
    gs.log("vRO token is: " + workflow.scratchpad.vroToken);
    var r = new RESTMessage('Create VM in vRA', 'get');
    r.setStringParameter('token', workflow.scratchpad.vroToken);
    var response = r.execute();
    if(response == null) 
        response = r.getResponse(30000);
    var body = response.getBody();
    gs.log("Response code: " + response.getStatusCode());
    gs.log("Body: " + body);
    
    // Parse return value
    //
    var parser = new JSONParser();
    var jsonResult = parser.parse(body);
    var status = jsonResult.state;
    if(status == "failed")
        workflow.scratchpad.vraError = jsonResult.content_exception;
    gs.log("Workflow status: " + status);
    workflow.scratchpad.vraWorkflorStatus = status;
    activity.result = status;

The final step: Creating a Catalog Item

Once we have the workflows created, it’s time to tie them into a Catalog Item. This is pretty straightforward. We have to make sure we call the workflow we just created and we need to define the variables used by the workflow. You’ll recall we’re using the variables vm_os_image, vm_numCPUs, vm_memoryMB and vm_storageGB.

This is what the catalog item looks like:

Screen Shot 2015-06-18 at 5.48.57 PM

In this case, I chose to create a specific service for this OS image, but you could just as well create a generic one and have a drop-down for selecting the image. There’s a variety of input options you can choose between for the variables. I picked a numeric scale for the CPUs and multiple choices for the other parameters.

Screen Shot 2015-06-18 at 6.00.58 PM

When specifying the variables, you should make sure they have the same names as the ones you’re using in your workflow. This is what my definition of vm_numCPUs looks like, for example:

Screen Shot 2015-06-18 at 6.04.14 PM

Finally, when we have everything defined, we should get a Catalog Item that looks something like this.

Screen Shot 2015-06-18 at 6.13.29 PM

Downloads

Complete vCO Workflows: Click here!