Exploring the vRealize Automation 7.0 Event Broker – Part 1

Abstract

One of the most prominent additions to vRealize Automation 7.0, aside from the converged blueprints, is a feature called the Event Broker. This service provides a uniform and robust way of subscribing to and responding to various events originating from vRA. It allows you to plug in actions and checks throughout a machine life cycle, but it goes further than that. With the Event Broker, you can subscribe to other events as well, such as blueprint lifecycle events, approval events and system log events.

This article aims to explore the Event Broker and to show a few examples of how it may be used.

Event Broker architecture

The design of the Event Broker is, in essence, a classic publish/subscribe architecture. In other words, the service acts as an event distribution point where producers publish events and consumers subscribe to specific event streams. Clients typically subscribe to a category of events, known as a “topic”. Events are often filtered further using conditions, e.g. subscribing only to events from a certain part of the life cycle or only for a certain type of object. Unlike a traditional publish/subscribe architecture, subscribers to the Event Broker can also control the underlying workflow using “blocking events”, which we will discuss in more depth in upcoming articles.

Under the hood, the Event Broker is implemented using a persistent message queue, but most users are probably going to interact with it using vRealize Orchestrator (vRO). When an event that matches the topic and condition for a subscription is triggered, a corresponding vRO workflow is started.

Each topic has a schema that describes the structure of the data carried by the event. For example, a machine provisioning event carries data about the request, such as the name of the user making the request, the type of machine, amount of CPU and RAM, etc.

It’s important to remember that event subscriptions will trigger on any object that’s applicable to the topic. In vRA 6.x, you would typically tie a subscription to a specific blueprint, but with the Event Broker, you would get any event for any machine from any blueprint. In order to restrict that to specific blueprints or machine types, you will have to use a conditional subscription. While there is a slight learning curve involved, the end result is a much more flexible and efficient event mechanism.

Taking the Event Broker for a spin

Enough of theory and architecture for now. Let’s take it for a spin and see if we can get it to work!

A Hello World example

First, let’s just see if we can get the Event Broker to trigger a workflow on any event during a VM provisioning. Let’s create a simple vRO workflow that looks like this:

Screen Shot 2016-01-07 at 10.27.27 AM
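
The scriptable task inside it can be as simple as a single log statement (a minimal sketch; any message will do):

System.log("Hello World from the Event Broker!");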

Our workflow takes no parameters and just prints a string to the vRO system log. Let’s save it and go back to vRA. In vRA, we go to Administration->Events->Subscriptions and click the “New” button (the plus). This is what we’ll see:

Screen Shot 2016-01-07 at 10.34.49 AM

As we discussed earlier, an event topic is just a category of events you can subscribe to. Since we want to trigger our vRO workflow on provisioning events, we select “Machine provisioning” from the list of topics. Once we select the topic, we’ll see its schema, i.e. a description of the data carried by the event. As you can see from the screen shot, we now have access to all the data about the request and the machine being requested. Let’s click the Next button and go to the next screen!

Screen Shot 2016-01-07 at 10.37.22 AM

Here we can choose between subscribing to all events or subscribing based on a condition. A machine provisioning triggers multiple events: when the machine is built, when software is provisioned, when it’s turned on and so on. Normally we’d add some conditions to pick exactly the events we want, but in this case, let’s just go with everything. Let’s go to the next screen!

Screen Shot 2016-01-07 at 10.40.17 AM

This dialog lets us pick the vRO workflow to run. We’re going to pick the “Hello World” workflow we just created. Let’s just click “Next” and “Finish” from here, and we’ll end up with a subscription called “Hello World”, since it takes the workflow name by default.

Screen Shot 2016-01-07 at 10.43.01 AM

The last thing we need to do is to select the subscription and click “Publish”. Like most objects in vRA, subscriptions are initially saved in a draft state and need to be activated before they kick in. Now we’re ready to test our workflow! Let’s go request a machine and wait for it to complete! Once the provisioning has completed, we can go to vRO to check on our workflow. It should now have a bunch of tokens (executions) and look something like this:

Screen Shot 2016-01-07 at 10.52.25 AM

Why so many executions? Remember that the provisioning process generates an event for every state it goes through, and there are a lot of states. In fact, there are two events generated per state: one just before it enters the state and one just after it leaves it. We call these pre-events and post-events.

Congratulations! We’ve created our first event subscription!

Where are the parameters?

So far, our event subscription is pretty useless, since all it can do is print a static string to the log. We don’t know anything about what kind of event we received or which object it was triggered for. To build something more interesting than a “Hello World”, we’ll need access to the event parameters. But where are they?

Let’s click on one of the workflow tokens in vRO and select the variables tab!

Screen Shot 2016-01-07 at 10.58.28 AM

Looks interesting! So it appears that all the variables from the event schema were actually sent to the vRO workflow. But how can we access them? That turns out to be very simple: just create an input parameter on your vRO workflow with the same name and type as the event variable.

Let’s add the input parameters “lifecycleState” and “machine” to our workflow! This should give us information on what state we’re in and the machine that triggered the event. Let’s also change the code of our scriptable task a bit to look like this:

Screen Shot 2016-01-07 at 11.04.59 AM
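
The scriptable task now boils down to something like this (a sketch; we simply reference the two inputs by the names we just gave them):

// Print the raw event payload we received from the Event Broker
System.log("Lifecycle state: " + lifecycleState);
System.log("Machine: " + machine);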

Each time we get an event, we’re going to print information on what lifecycle state we’re in and what machine we’re provisioning. Provisioning a machine should get you a log record that looks something like this:

Screen Shot 2016-01-07 at 11.09.49 AM

The state information deserves a closer look. It’s made up of two variables: A phase and a state. The state simply corresponds to the machine states you may be familiar with from IaaS. “VMPSMasterWorkflow32.BuildingMachine” means that the machine is in the initial building stage. The “phase” variable says “PRE” in this example, which means that we caught this event just before the initial build happened. Had it said “POST”, we would have gotten it right after the initial build was done.

And, by the way, those lifecycleState and machine variables are encoded as JSON strings. If you want to manipulate them as associative arrays/dictionaries instead, just call e.g. JSON.parse(lifecycleState). We’ll be doing that in some of the more advanced examples.
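
For example, something like this gives us easy access to the individual fields (a minimal sketch; the exact field names are assumptions based on the schema described above, so check the payload in your own environment):

// Hypothetical example of unpacking the JSON payloads
var state = JSON.parse(lifecycleState);
var vm = JSON.parse(machine);
System.log("Phase: " + state.phase + ", state: " + state.state);
System.log("Machine name: " + vm.name);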

Adding a condition

At this point, we have a semi-interesting example of an event subscription. But it’s triggering on every single event for every single VM being provisioned. In a real-life scenario, this wouldn’t be very nice to your vRO instance. So let’s dial things down a bit by introducing a condition to our subscription.

To do this, we go back to Administration->Events->Subscriptions and click on our “Hello World” subscription to edit it. Then we go to the “Conditions” tab and select “Run based on conditions”.

Screen Shot 2016-01-07 at 11.22.21 AM

The condition can be any test on any field, or a combination of several fields. In our case, we’re just going to select the lifecycle state name.

Screen Shot 2016-01-07 at 11.22.49 AM

Let’s do a simple “equals” operation and test against a static value. The drop-down allows us to select from a list of predefined values. In this case, we want to trigger only if we’re in the “BuildingMachine” state.

We save the subscription by clicking “Finish” and test it by provisioning another VM. This time, we should only see two executions of our vRO workflow: the PRE and POST events for “BuildingMachine”.

Conclusion

This is the first installment in a series of articles on the Event Broker in vRA 7.0 and is intended to get you familiarized with the new functionality. While the examples given aren’t useful for real-life scenarios, they might serve as templates for more realistic workflows.

In the next installment (coming soon) of this series we are going to look at more useful examples, such as an auditing workflow imposing hard limits on resource usage. If that sounds interesting, I recommend you add this blog to the list of blogs you follow!


A Simple Template for vR Ops Python Agents

Background

Since I wrote the post about vraidmon, I’ve gotten some requests for a more generic framework. Although what I’m describing here isn’t so much a framework as a simple template, some may find it useful. You can find it here.

What does it do?

In a nutshell, this template helps you with three things:

  1. Parsing a config file
  2. Simplifying metric and properties posting
  3. Simplifying management of parent-child relationships.

The config file

The template takes the name of a configuration file as its single argument. The file contains a list of simple name/value pairs. Here is an example:

vropshost=vrops-01.rydin.nu
vropsuser=admin
vropspassword=secret
adapterSpecific1=bla
adapterSpecific2=blabla

The vropshost, vropsuser and vropspassword entries are mandatory and specify the host address, username and password for connecting to vR Ops. In addition to these, adapter-specific configuration items can be specified.

Writing the adapter

Screen Shot 2015-12-22 at 3.51.03 PM

Using this template is very easy. Just locate the “YOUR CODE GOES HERE” marker and start coding! The only requirement is that you have to set the adapterKind and adapterName variables.

Setting the adapter type and name

The adapterKind is the “adapter kind key” for the adapter you’re building. It basically tells the system what kind of adapter you are. You can really put anything in here, but it should be descriptive of the adapter type, such as “SAPAdapter” or “CISCORouterAdapter”.

The “adapterName” is the name of this particular instance of the adapter. There could very well be multiple adapters of the same type, so a unique name is needed to tell them apart.

Posting data

Once we’ve got the adapter naming out of the way, we can start posting data. This is done through the “sendData” function. It takes five parameters:

  1. The vR Ops connection we’re using
  2. The resource type we’re posting for. This should uniquely specify what kind of object the data is for. E.g. “CISCORouterPort” if we’re posting data about a port.
  3. Resource name. This uniquely identifies the object we’re posting data for, e.g. “port-4711”.
  4. A name-value pair list of metrics we’re posting for the resource, e.g. { “throughput”: 1234, “errors”: 55 }
  5. A name-value pair list of properties we’re posting, e.g. { “state”: “functional”, “vendor”: “VMware” }

The sendData function returns the resource we posted data for. Note that if the resource didn’t already exist, it will be created for you.

Building relationships

Sometimes you want your monitored resources to be connected to some parent object. For example, if you are monitoring tables in a database, you may want them to be children of a database object. This way, they will show up correctly in dependency graphs in vR Ops and your users will find it easier to navigate to them.

To form a parent-child relationship, use the addChild function. It takes the following parameters:

  1. The vR Ops connection to use
  2. The adapter kind of the parent object
  3. The adapter name of the parent object
  4. The resource kind of the parent object
  5. The resource name of the parent object
  6. The child resource to add
  7. (Optional) A flag determining whether the parent object should be automatically created if it doesn’t exist. By default, parents are not automatically created, and adding a child to a nonexistent parent results in a no-op.

Here’s an example of an agent using this template that monitors Linux processes. As you can see, we’ve used the “addChild” function to add the Linux process objects to a Linux host.

Screen Shot 2015-12-22 at 4.10.51 PM.png

Downloadable content

Python vR Ops agent template.

vraidmon – A vRealize Operations Adapter Written in Python

Background

Two things coincided today: realizing that I’ve been running without redundancy on a Linux software RAID, and having some time to spare for a change. So I decided to write a simple vR Ops agent to monitor a software RAID, and, since I had some time on my hands, I wanted to write it in Python instead of my go-to language Java. This is my very first Python program, so bear with me.

Learning Python proved to be very easy. It’s a very intuitive language. I’ve got some issues with loosely typed languages, but for a quick hack I’m willing to put that aside. What turned out to be a bit more difficult was to find any good examples of agents for vR Ops written in Python. The blog vRandom had a getting started guide that allowed me to understand how to install and get the thing running, but how to code the actual agent was harder to find. But after some trial and error I got the hang of it and I decided to share my experience here.

A quick look at the code

Initializing

The client library is called “nagini”, so before we can do anything, we need to import it:

import nagini

Next, we need to obtain a connection to vR Ops. This is done like this:

vrops = nagini.Nagini(host=vropshost, user_pass=(vropsuser, vropspassword))

It’s worth noting that this doesn’t actually make the connection. It just sets up the connection parameters. So if you do something wrong, like mistyping the password, this code will gladly execute, but you’ll get an error later on when you try to use the connection.

Sending the data

Here is where things get quite interesting and the Python client libraries show their strength. While all the REST methods are exposed to the programmer, the libraries also add some nice convenience methods.

Normally, when you post metrics for a resource, you need to do it in three steps:

  1. Look up the resource to obtain its ID.
  2. If the resource doesn’t exist, create it.
  3. Send the metrics using the resource ID you just obtained by looking it up or creating it.

The vR Ops Python client can do this in a single call! All we have to do is put together a somewhat elaborate data structure and send it off. Let’s have a look!

Screen Shot 2015-12-15 at 9.21.09 AM

Wow, that’s a lot of code! But don’t despair! Let’s break it down. In this example, I’m sending both stats and properties. If you recall, stats are numeric values, such as memory utilization and remaining disk space, whereas properties are strings and typically more static in nature. Let’s examine the stats structure.

The REST API states that stats should be sent as a single member called “stat-content”. That member contains an array of sub-structures, each representing a sample. Each sample has a “statKey” holding the name of the metric, a “timestamp” member and a “data” member holding the actual data value. As you can see above, I’m sending three values: “totalMembers”, “healthyMembers” and “percentHealthy”.

Now that we have our fancy structure, we can send it off using the find_create_resource_push_data method. It does exactly what the name implies: it tries to find the resource, creates it if it doesn’t exist, and then pushes the data. To make it work, we need to send the name of the resource, the name of the resource kind, the adapter kind, and the stats and/or properties. And that’s it! We’ve now built an adapter in Python using a single call to the vR Ops API!

What about the timestamp?

But wait, there’s one more thing we need to know. The vR Ops API expects us to send a timestamp as a Java-style epoch milliseconds long integer. How do we obtain one of those in Python?

Easy! We just call the time.time() method (you have to import “time” first). This gives us the epoch time in seconds (as opposed to milliseconds) as a float, so we have to multiply it by 1000 and convert it to a long. It will look something like this:

timestamp = long(time.time() * 1000)

Installing and running

Prerequisites

First of all, you need to install python (obviously). Most likely it’s already installed, but if it isn’t, just go to the package manager on your system and install it. Next, you need the client libraries for vR Ops. They are located on your vR Ops server and can be downloaded and installed like this:

mkdir tmp
cd tmp
wget --no-check-certificate https://<your vrops server>/suite-api/docs/bindings/python/vcops-python.zip
unzip vcops-python.zip
python setup.py install

Installing the agent

Download the agent from here. Unzip it in the directory where you want to run it, e.g. /opt/vraidmon

mkdir /opt/vraidmon
unzip <path to my zip>

Next, we need to edit the configuration file vraidmon.cfg. You need to provide the hostname and login details for your vR Ops instance. The name of the RAID status file should remain /proc/mdstat unless you have some very exotic version of Linux. Here is an example:

vropshost=myvrops.mydomain.com
vropsuser=admin
vropspassword=topsecret!
mdstat=/proc/mdstat

Now you can run the agent using this command:

python vraidmon.py vraidmon.cfg

You’ll probably get some warning messages about certificates not being trusted, but we can safely ignore those for now.

Making it automatic

Running the agent manually just sends a single sample. We need some way of making this run periodically and the simplest way of doing that is through a cron job. I simply added this to my crontab (ignore any line breaks and paste it in as a single line):

*/5 * * * * /usr/bin/python /opt/vraidmon/vraidmon.py /opt/vraidmon/vraidmon.cfg 2> /dev/null

Adding alerts

In order for this adapter to be useful, I added a simple alert that fires when the percentage of healthy members drops below 100%. This is what a simulated failure looks like:

Screen Shot 2015-12-15 at 11.30.49 AM.png

Downloadable material

Python source code and sample config file.

Alert and symptom definition.

 

$500 Poor Man’s vCloud Suite Lab

Yes, I’m cheap!

I guess I’m a bit cheap when it comes to buying equipment for my lab. To be honest, I’m more interested in funneling money into my hobbies and my wife’s hobbies than into an elaborate lab. However, since I felt the need for a true sandbox where I could experiment freely, I arrived at a point where a home lab was more or less necessary.

So I set out to create the most inexpensive lab that would still get the job done. It became almost an obsession. Was it possible to build a viable lab for less than $500? The result was pretty astonishing, so I decided I wanted to share it.

Requirements

I needed a demo lab where I could run most or all of the vCloud Suite products as well as NSX. It didn’t have to be extremely beefy, but fast enough to run demos without making the products look slow. Here’s a condensed list of requirements:

  • Run the basic components of the vCloud Suite: vRealize Automation, vRealize Orchestrator, vRealize Operations, LogInsight.
  • Run some simulated workloads, including databases and appservers.
  • Run a full installation of NSX
  • Provide a VPN and some simplistic virtual desktops.
  • Have at least 1Gb/s Ethernet connections between the hardware components.

Deciding on hardware

I did a lot of research on various hardware alternatives. Let’s face it: what it comes down to is RAM. In a lab, the CPUs are waiting for work most of the time, but the vCloud Suite software components eat up a fair amount of memory. I calculated that I needed about 100GB of RAM for it to run smoothly.

At first, I was looking at older HP Proliant servers. I could easily get a big DL580 with 128GB RAM for about $200-300. But those babies are HUGE, make tons of noise and suck a lot of power. Living in a normal house without a huge air conditioned server room and without a $100/month allowance for powering a single machine, this wasn’t really an alternative.

My next idea was to buy components for bare-bones systems with a lot of memory. I gave up that idea once the value of my shopping cart topped $3000.

I was about to give up when I found some old Dell Precision 490 desktops on eBay. These used to be super-high-end machines back in 2006-2007. Now, of course, they are hopelessly obsolete and are being dumped on eBay for close to nothing. However, they have two interesting properties: they run dual quad-core Xeon CPUs (albeit ancient) and you can stuff them with 32GB of RAM. And, best of all, they can be yours for about $160 from PC refurbishing firms on eBay. On top of that, memory upgrades to 32GB can be found for around $40.

Waking up the old dinosaurs

So, thinking “what do I have to lose?”, I ordered two Dell Precision 490s along with a 32GB memory upgrade for each one. All of that cost me around $400.

I fired everything up, installed ESXi 6 and started spinning up VMs with software. To my surprise and delight, my old dinosaurs of ESXi servers handled it beautifully.

vms

Here’s what vRealize Operations had to say about it.

vrops lab

As you can see, my environment is barely breaking a sweat. And that’s running vRealize Operations, vRealize Log Insight, vRealize Automation 6.2, the vRealize Automation 7.0 beta, and NSX controllers and edges, along with some demo workloads including a SQL Server instance and an Oracle instance. Page load times are surprisingly fast too!

A Jumbo Sized Problem!

After a while I noticed that some NSX workloads didn’t want to talk to each other. After some troubleshooting I realized that there was an MTU issue between the two ESXi servers. Although the Distributed vSwitch accepted an MTU of 6000 bytes without complaining, the physical NICs were still pinned at a 1500-byte MTU. It turns out the on-board NIC in the Precision 490 doesn’t support jumbo frames.

The only way to fix this seems to be to use another NIC, so I went back and got myself two Intel PRO/1000 GT Desktop Adapters. They normally run about $30-35, but I was lucky enough to find an Amazon vendor who had them on sale for $20 apiece. If you didn’t already know it, the Intel PRO/1000 GT has one of the most widely supported chipsets, so it is guaranteed to run well with virtually any system. And it’s cheap. And it supports jumbo frames.

How much did I spend?

I already owned some of the necessary components (more about that later), so I cheated a little when meeting my $500 goal. Here’s a breakdown of what I spent:


 

  • $320 – 2x Dell Precision 490 (2x 2.66GHz quad-core X5355, 16GB RAM, 1TB disk)
  • $80 – 2x 32GB memory upgrade
  • $40 – 2x Intel PRO/1000 GT

This adds up to $440. Add some shipping to it, and we just about reach $500.

Vendors

I got the servers from the SaveMyServer eBay store. Here is their lineup of Dell Precision 490.

The memory is from MemoryMasters.

The network adapters are from Amazon. There are a dozen vendors that carry them. I just picked the cheapest one.

What else do you need?

In my basement I already had the following components:


 

  • 1x Dell PowerEdge T110 with 16GB memory
  • 1x old PC running Linux with 2x 2GB disks in it
  • 1x 16-port Netgear 1Gb/s Ethernet switch

Assuming you already own a switch and some network cables, you’ll need at least some kind of NAS. I just took an old PC, loaded CentOS on it and set up an NFS share. That worked fine for me. It’s not the fastest when you’re seeing IO peaks, but for normal operations it runs just fine. You can also get e.g. a Synology NAS. With some basic disks, that’s going to set you back about $300.

I already had an old ESXi server (the PowerEdge T110) and I decided to run my AD, vCenter and NSX Manager on it. I could probably have squeezed them onto the two Precision 490s, but it’s a good idea to run the underpinnings, such as vCenter and AD, on a separate machine. If I hadn’t already had that machine, I would probably have ordered three Precision 490s and used one for vCenter and AD.

Room for improvement

Backup

The most dire need right now is backup. Although my NAS runs a RAID1 and gives me some protection, I’m still at risk of losing a few weeks of work in setting up vCloud Suite the way I wanted it. I’ll probably buy a Synology NAS at some point and set it up to replicate to or from my old NAS.

Network

Currently, everything runs on the same physical network. This is obviously not a good design, since you want to at least separate application traffic, management traffic, storage traffic and vMotion onto separate physical networks. I might not get to four separate networks, but I would like to separate storage traffic from the rest. Given that I have a free NIC on the motherboard, it’s just a matter of cabling and a small switch to get to that point.

Backup power

I’ve already had a power outage in my house since I built this lab, and booting up all my VMs on fairly slow hardware wasn’t the greatest thing in the world. I’ll probably get a small UPS that can last me through a few minutes of power outage, at least until I can get my generator running.

A final word

If you have a few hundred dollars and a few days to spend, I highly recommend building a simple lab. Not only is it great to have an environment that works just the way you want it, you’ll also learn a lot from setting it up from scratch. In my line of work, the infrastructure is already there when I get on the scene, so it was extremely valuable to immerse myself into everything that’s needed to get to that point.

I’d also like to thank my friend Sid over at DailyHypervisor for his excellent tutorials on how to set up NSX. I would probably still have been scratching my head if it wasn’t for his excellent blog. Check it out here on DailyHypervisor.

Day two operations on multi machine blueprints in vRealize Automation 6.2

Introduction

If you’ve ever tried to add a custom day two operation to a multi-machine blueprint, you must have hit the same dead end as I did. There’s simply no object type to tie the action to. As I’m sure you know if you’re into day two operations, the object type a typical day two operation acts on is VC:VirtualMachine. But that clearly wouldn’t work for a multi-machine deployment, since it acts on a set of machines rather than a single one.

In this article, I’m describing a simple technique to solve this problem and provide a framework for day two operations on multi-machine deployments.

If you are just interested in getting it to work and don’t have time to read about the design, just skip ahead to “Putting it all together” below!

A quick recap of custom day two operations

A day two operation is simply an action that you can take on an existing resource, such as a running virtual machine. For a VM, they typically include power on, power off, reconfigure, destroy, etc. All of the basic and essential day two operations for standard objects, such as VMs and multi-machine deployments come out of the box with vRealize Automation. But sometimes you want to add your own operations. Suppose, for example, that your organization uses some proprietary backup solution and you want to allow users to request a backup from the vRealize Automation UI. You could do this by adding a custom day two operation.

The core of a custom day two operation is just a vRealize Orchestrator workflow. As a workflow designer, you have a lot of freedom when it comes to building a day two operation. There’s really just one rule you have to follow: the workflow needs to accept a parameter that represents the object it was executed on. So if you’re building a workflow that’s used as a day two operation on virtual machines, your workflow must have a parameter of the type VC:VirtualMachine.

The problem with multi-machine deployments

The problem is simply this: There is no type in vRealize Orchestrator that represents multi-machine deployments. So it’s impossible to build a workflow that takes the right kind of input parameter, and since this is a strict requirement for day two operation workflows, you’re going to hit a dead end at that point.

Why that is has to do with the design of vRealize Automation. The concept of a multi-machine deployment lives in the IaaS portion of the code, whereas custom day two operations live in the core vRealize Automation code, and the mapping between them isn’t exposed to the user. The good news is that this issue is fixed with the blueprint redesign in vRealize Automation 7.0. But what about those of us who still live in the 6.x world?

The (surprisingly simple) solution

As of vRealize Automation 6.1, something called resource mappings was introduced. These are simply vRealize Orchestrator workflows or actions that convert from a native vRealize Automation type to a type that vRealize Orchestrator can handle. As you may expect, there are a handful of these included by default, and not surprisingly, the one that converts from the vRealize Automation representation of a virtual machine to a VC:VirtualMachine is one of the more prominent ones.

When you write a resource conversion workflow or action, you’re getting all the properties of the resource in a Properties object from vRealize Automation. It is then your responsibility to write the code that builds an inventory object from these values.

But what about a multi-machine deployment? Which inventory object type do we map that one to? It turns out that the IaaS representation of a machine is a perfect candidate. This object type, called VCAC:VirtualMachine, happens to be able to represent multi-machine deployments as well. What’s even better is that there’s already an API function for getting hold of a VCAC:VirtualMachine, and we have all the data we need to do it.

Here’s the code needed to do this:


// Look up our IaaS host from the configuration bundle
var host = System.getModule("net.virtualviking.mmday2").getvCACHost();
// Build the identifier used to search for the machine in IaaS
var identifiers = { VirtualMachineID:properties["machineId"] };
// Query the IaaS model for the matching VirtualMachine entity
var machine = vCACEntityManager.readModelEntity(host.id, "ManagementModelEntities.svc", "VirtualMachines", identifiers, null);
// Return the corresponding VCAC:VirtualMachine inventory object
return machine.getInventoryObject();


What’s going on here? Well, first we get hold of our IaaS host. Next we build an identifier we’re going to use to search for a virtual machine in IaaS. Then we run the query and finally we return the corresponding inventory object. That’s really all there’s to it in terms of code.

You can implement this either as a workflow or an action in vRealize Orchestrator. I chose to do it as an action, since it runs slightly faster.

Putting it all together

  1. Download and install the package. See the end of this article for a download link!
  2. Select the “Design” perspective and go to the configurations tab.
  3. Edit the “net.virtualviking.mmday2” configuration bundle.
  4. Set the vCACHost property to point to your IaaS service (must be registered first).
    Screen Shot 2015-11-13 at 3.51.11 PM
  5. In vRealize Automation, log in as a tenant admin and go to Advanced Services->Custom Resources.
  6. Create a new custom resource.
    1. Enter orchestrator type vCACCAFE:Catalog
    2. Enter name “Catalog Resource”.
    3. (Optional) enter a description and version.
    4. Click next
      Screen Shot 2015-11-13 at 3.39.31 PM
    5. Leave the defaults values on the Details Form page.
    6. Click next.
    7. Leave the defaults on the Where Used form.
    8. Click Finish.
  7. Go to Advanced Services->Resource Mappings.
  8. Create a new Resource Mapping.
    1. Name it “Multi-Machine Service”.
    2. Enter Catalog Resource Type “Multi-Machine Service”
    3. Enter Orchestrator Type “vCAC:Virtual Machine”
    4. Select “Mapping Script action” and navigate to “net.virtualviking.mmday2” and “convertMultimachine”.
      Screen Shot 2015-11-13 at 3.44.03 PM
    5. Select “Add”.

You should now be able to create day two operations on multi-machine deployments. To test it, you can use the “Print Multimachine Details” workflow provided with the package.

Please reach out to me in the comments if you have questions!

Downloadable components

The vRealize Orchestrator package can be downloaded from here.

Glucose Monitoring – A vRealize Operations Adapter using the SDK

Background

I have a close friend who is unfortunate enough to suffer from Type 1 Diabetes. As you may know, this deadly disease can only be managed by constantly monitoring your blood glucose and injecting insulin. Fortunately, there’s a silver lining, and it’s called “cool gadgets”. Many diabetes sufferers now wear continuous glucose monitors, which electronically track the patient’s glucose level and display the numbers and trends.

But it doesn’t stop there. The Dexcom glucose monitor allows the metrics to be sent to the cloud in real time for remote monitoring and analysis. Of course, it didn’t take long before some geeky diabetics figured out how to harvest the data and build open-source tools around it. They founded the Nightscout Foundation and continue to build useful and life-saving tools for managing diabetes on an open-source basis. You should check them out! They do amazing work!

When I heard about this, I realized that this is just time series data with trends and patterns, accessible through a simple REST API. Then I thought of vR Ops, which is a tool that can analyze exactly that kind of data, and I figured it would be fun to write an adapter that pulls blood glucose data into vR Ops. So the next time I had a break between two meetings, I went to work!

The most interesting thing about this project is, in my opinion, how it leverages the dynamic thresholds to determine the normal range for blood glucose levels over the course of a day. It serves as a great demo of how the algorithms in vRealize Operations can adapt to virtually anything!

graph

vRealize Operations Java API

Since I have a background as a Java developer, I realized that using the Java API for vRealize Operations would be the easiest way to go. The Java API is essentially a wrapper around the REST API that allows you to interact with vRealize Operations through regular Java classes. If you are a Java programmer and want to interact with vRealize Operations, this is the way to go!

The Nightscout API

This API is dead simple. All you have to do is issue a GET against http://{host}/api/v1/entries/current to get the latest sample. The sample is returned as a tab-separated string containing a timestamp, a glucose reading and a trend indicator. For this project, we’re just using the timestamp and the glucose reading.

Running the code

We could have created some kind of intelligent scheduler trying to synchronize data collection with the timestamps of returned data to minimize lag, but in this case, I just went with a simple Java program that’s kicked off by a cron-job set to execute every five minutes.

The Java program takes the following parameters: A username and password for vRealize Operations, a URL for Nightscout and the name of the patient we’re collecting data for.

A walkthrough of the code

The run() method

The code is fairly simple, but it illustrates a few key concepts of the Java API for vRealize Operations. The bulk of the work happens in the run() method.

Screen Shot 2015-07-10 at 3.41.18 PM

This function is called once the parameters are extracted and parsed. First, it opens an HTTP connection to Nightscout and issues a GET. The result is then parsed, simply by splitting up the tab-separated string.

Once that’s done, we call vRealize Operations to look up an object representing the patient we’re monitoring. We do that by looking up an object of the type “Human” with the patient name supplied on the command line. If we don’t find the patient record, we create it.

Finally, we pack the data samples into an array and call “addStats” to send the new sample to vR Ops.

The findResourceByName() method

This method is really just a wrapper around the API call for looking up an object by name.

Screen Shot 2015-07-10 at 3.48.38 PM

The first three lines deal with the resource type. When this is first executed, the resource type “Human” isn’t going to exist, so we have to create it.

Once we have the resource kind identifier, we can go ahead and call the “getResources” API method. This will return a list of resource identifiers, but since we know there should only be one, we can return the first (and only) object we find.

The createHuman() method

If the object representing the patient doesn’t exist, we need to create it.

Screen Shot 2015-07-10 at 3.52.33 PM

Again, this is just a prettier version of the API method for creating a resource. All it does is to fill out a structure with the resource kind identifiers, adapter kind and adapter name. The “resourceKey” is the unique identifier for this object and in our case consists of the patient name.

The addStats() method

This method performs the final task of actually sending the metrics to vRealize Operations.

Screen Shot 2015-07-10 at 3.56.06 PM

 

As usual, this method does little more than just assembling a structure containing the necessary values for the API call. Notice that we could send multiple timestamps and values if we wanted. In this case, however, we’re just dealing with the latest value from Nightscout, so we’re only supplying a single timestamp and value.

The end result

Screen Shot 2015-07-10 at 4.02.39 PM

The end result is actually quite fascinating and a great testament to the dynamic thresholds in vRealize Operations. Most people experience a spike in blood glucose after a meal and a lower blood glucose level during the night. For a person with diabetes, these swings are typically more pronounced. If you look at the dynamic thresholds, you’ll see a lower and narrower range during the night. After lunch and dinner, the range is wider and spikes are more common. This is clearly visible from the dynamic thresholds (the gray area) behind the graph.

This particular person usually has a low-carb breakfast, so you don’t normally see a spike. However, on this day he had pancakes for breakfast, and his blood glucose spiked significantly outside of the normal range. How is that for a demonstration of the dynamic threshold feature in vRealize Operations?

Dashboards

No metric collector is complete without some great dashboards. Here are some examples of dashboards we built for the glucose data.

real-time

Real time view

long-term

Long term view

Conclusion

Although this is a somewhat exotic example of API use, it clearly illustrates the ease of use of the API and the power of the automatically calculated dynamic thresholds.

Please consider supporting the research for a cure to Type 1 diabetes. JDRF is an excellent organization to support!

http://jdrf.org/get-involved/ways-to-donate/

The Nightscout Foundation also could use your donation to further promote and develop the open source technology for helping diabetics.

http://www.nightscoutfoundation.org/product/donate-2/

Downloads

The complete source code can be downloaded here.

Source code

Disclaimer

This project is intended as an example of what is possible using glucose monitoring technology and vRealize Operations. It is NOT intended to diagnose or cure any disease or condition. Type 1 Diabetes is a serious condition and should ONLY be managed using methods approved by the appropriate government bodies and recommended by your healthcare professional!

Using ServiceNow as a front end for vRealize Automation

Background

Today, ServiceNow has become something of a de-facto standard for self-service portals and ticket tracking systems. It is being used in virtually every industry and by all sizes of organizations. While vRealize Automation (vRA) offers its own self-service portal with a wide range of features, it is sometimes desirable to allow users to request virtual machines and other resources from a common ServiceNow portal, while carrying out the actual provisioning tasks using vRA. This combines the benefits of a single request portal with the power and governance of vRA.

This article describes how ServiceNow can be tied into vRA using a couple of simple workflows.

Direct vRealize Automation API vs. vRealize Orchestrator

One of the first design decisions I had to make was how to expose vRA to ServiceNow. I could, of course, call the vRA API directly from ServiceNow. Another alternative would be to encapsulate the vRA API calls in a vRealize Orchestrator (vRO) workflow. I opted for the latter, since it makes the ServiceNow code a bit simpler and leverages the vRA plugin in vRO.

Implementing the vRO workflow

To make the ServiceNow workflow simpler, I wanted to make a single vRO workflow that handled the entire provisioning process. This way, I only have to make one API call from ServiceNow. These are the basic steps the workflow needs to accomplish:

  1. Check parameters against blueprint, reject invalid parameters and fill in defaults for ones that are missing.
  2. Look up blueprint and request provisioning.
  3. Wait for completion and check outcome (success or failure).

To avoid having the code break due to small differences in the API, I used the vRA plugin to make all calls, rather than calling the APIs directly.

Here is what the workflow looks like:

Screen Shot 2015-07-09 at 12.41.15 PM

The workflow is pretty straightforward and uses mostly pre-existing components. The “Prepare Properties” script is really the only code in the workflow. All it does is examine which parameters were supplied and provide the blueprint defaults for the rest.

Calling vRO via REST

In order for ServiceNow to be able to make a call to vRO, we need to register a REST call. In vRO, the https://server:8281/vco/api/workflows endpoint gives you a list of workflows. If you know the UUID of a workflow, you can address it directly by doing an HTTP GET against the URL https://server:8281/vco/api/workflows/{UUID}. Since the UUID of a workflow is permanent and doesn’t change under any circumstances, we can safely copy it from vRO and use it directly in the URL.

Under the workflow, there are several subsections of linked information, such as the executions of the workflow. We can check on previously started workflows by issuing a GET against the executions URL. Consequently, issuing a POST against the URL https://server:8281/vco/api/workflows/{UUID}/executions creates a new execution by running the specified workflow. Any parameters to the workflow can be sent as a JSON document.
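
For reference, vRO expects the parameters to be wrapped in its own JSON format. A hypothetical body with a single string parameter named “image” (the blueprint name, with a made-up value) could look roughly like this; consult the vRO REST API documentation for the exact schema:

{
  "parameters": [
    {
      "name": "image",
      "type": "string",
      "value": { "string": { "value": "CentOS 6 - Small" } }
    }
  ]
}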

Successfully starting a workflow immediately returns the HTTP status code 202 (Accepted) and sets the Location header to an URL where we can check on the completion of the workflow. Note that the workflow has not finished running at this point. All we got was an acknowledgement that vRO has successfully started the workflow.

To check for the status of the workflow, we need to issue a GET against the URL returned in the “Location” header. That URL looks something like this: https://server:8281/vco/api/workflows/{UUID}/executions/{token} where “token” is a unique identifier for the workflow execution.
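
The body of that response is a JSON document describing the execution, and the field we care about is “state”, which moves from running to completed or failed. A heavily trimmed, hypothetical example (the real document contains many more fields):

{
  "state": "completed"
}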

Defining the REST message in ServiceNow

Let’s put it all together!

First we need to specify the REST message. We’re including the workflow UUID in the URL. You can obtain the workflow UUID from the workflow properties page in vRO. Note that the ID is permanent and will remain the same even if the workflow is imported, exported or renamed, so it is safe to hard code it into the URL. You will probably have to specify a MID server, since your vRO instance is likely to be behind a firewall.

rest

Next, we are going to focus on the POST message. Remember, if you POST to the workflow’s executions, you start a new execution of the workflow. Note that we accept four parameters: the name of the OS image (blueprint name), the number of CPUs, memory and storage. We use these to construct a JSON message in the body. This will pass the parameters from ServiceNow to the vRO workflow.

rest-post

Finally, we need to specify the GET message. This is fairly straightforward. All we need to do is to make sure we pass the execution ID at the end of the URL. This is done using the “token” variable.

rest-get

Implementing the ServiceNow workflow

On the ServiceNow side, we need to create a workflow that carries out the provisioning action, i.e. calls vRO to activate the vRO workflow and the provisioning. Normally, you would most likely have approvals and other actions tied to the workflow, but in this example, we’re simply focusing on the provisioning part.

Here is our ServiceNow workflow.

workflow

Let’s walk through the steps:

  1. Run a script to send a request to vRO for starting the provisioning workflow. We’re pulling the parameters from the current item request and passing it as a REST call.
    gs.log("image: " + current.variables.vm_os_image);
    gs.log("CPUs: " + current.variables.vm_numCPUs);
    gs.log("Memory: " + current.variables.vm_memoryMB);
    gs.log("Storage: " + current.variables.vm_storageGB);
    var r = new RESTMessage('Create VM in vRA', 'post');
    r.setStringParameter('image', current.variables.vm_os_image);
    r.setStringParameter("numCPUs", current.variables.vm_numCPUs);
    r.setStringParameter("memoryMB", current.variables.vm_memoryMB);
    r.setStringParameter("storageGB", current.variables.vm_storageGB);
    
    // Send the request
    //
    var response = r.execute();
    if(response == null) 
        response = r.getResponse(30000);
    gs.log("Status: " + response.getStatusCode());
    gs.log("Workflow token: " + response.getHeader("Location"));
    gs.log("Response: " + response);
    var location = response.getHeader("Location");
    var arr = /.*executions\/(.*)\//.exec(location);
    workflow.scratchpad.vroToken = arr[1];
    gs.log("Set vRO token to: " + workflow.scratchpad.vroToken);
    activity.result = response.getStatusCode();
  2. Send an HTTP GET to obtain the status of the workflow execution. Due to some quirks in the MID server, we need to check for the result twice. We’re parsing the return body and extracting the result code from vRO.
    gs.log("vRO token is: " + workflow.scratchpad.vroToken);
    var r = new RESTMessage('Create VM in vRA', 'get');
    r.setStringParameter('token', workflow.scratchpad.vroToken);
    var response = r.execute();
    if(response == null) 
        response = r.getResponse(30000);
    var body = response.getBody();
    gs.log("Response code: " + response.getStatusCode());
    gs.log("Body: " + body);
    
    // Parse return value
    //
    var parser = new JSONParser();
    var jsonResult = parser.parse(body);
    var status = jsonResult.state;
    if(status == "failed")
        workflow.scratchpad.vraError = jsonResult.content_exception;
    gs.log("Workflow status: " + status);
    workflow.scratchpad.vraWorkflorStatus = status;
    activity.result = status;

The final step: Creating a Catalog Item

Once we have the workflows created, it’s time to tie them into a Catalog Item. This is pretty straightforward. We have to make sure we call the workflow we just created and we need to define the variables used by the workflow. You’ll recall we’re using the variables vm_os_image, vm_numCPUs, vm_memoryMB and vm_storageGB.

This is what the catalog item looks like:

Screen Shot 2015-06-18 at 5.48.57 PM

In this case, I chose to create a specific service for this OS image, but you could just as well create a generic one and have a drop-down for selecting the image. There’s a variety of input options you can choose between for the variables. I picked a numeric scale for the CPUs and multiple choices for the other parameters.

Screen Shot 2015-06-18 at 6.00.58 PM

When specifying the variables, you should make sure they have the same names as the ones you’re using in your workflow. This is what my definition of vm_numCPUs looks like, for example:

Screen Shot 2015-06-18 at 6.04.14 PM

Finally, when we have everything defined, we should get a Catalog Item that looks something like this.

Screen Shot 2015-06-18 at 6.13.29 PM

Downloads

Complete vCO Workflows: Click here!