Geek corner: Finding Patterns in Wavefront Time Series Data using Python and SciPy

Introduction

Today we’re going to geek out even more than usual. Follow me down memory lane! My first job was to write code that processed data from testing integrated circuits. That was an eternity ago, but what I learned about signal processing turned out to be applicable to pretty much any kind of time series data.

So I thought I should play around with some of the time series data I have stored in Wavefront.

Of course, it deserves to be said that Wavefront can do almost any math natively in its web user interface. So I really had to think hard to come up with something that would give me an excuse to play around with the Wavefront API and some Python code.

Here’s the example I came up with: Find out which objects exhibit the highest degree of periodic self-similarity on some metric. For example, tell me which VMs have a CPU utilization with the strongest tendency to repeat itself over some period (e.g. daily, hourly, every 23.5 minutes, etc.).

A possible application for this may be around capacity planning. By understanding which workloads have a tendency to vary strongly and predictably over some period of time, we may be able to place workloads in such a way that their peak demands don’t coincide.

The first few paragraphs of this post should be interesting to anyone wanting to do advanced data processing of Wavefront data using Python. The final paragraphs about the math are mostly me geeking out for the fun of it. They can be safely ignored if that doesn’t interest you.

Problem Statement

The purpose of the tool we’re building in this post is to answer the question: “Which objects have the strongest periodic variation (seasonality) on some metric?”.

For example, which VMs have a CPU load that tends to have a strong periodic variation? And furthermore, what is the period of that variation?

Usage

It should be pointed out that this tool would need some work before it’s useful in production. It is intended as an example of what can be done.

To obtain the code, clone the repository from here or download this single Python file.

To run the code, you have to set the following two environment variables:

  • WF_URL – The URL to your Wavefront instance, e.g. https://try.wavefront.com
  • WF_TOKEN – Your API token. Refer to the Wavefront documentation for how to obtain it.
Then run the tool, passing the name of the metric you want to analyze:

python fft.py name.of.metric

You can add the --plot flag at the end of the command to get a visual representation of the top periodic time series.

The output will be a list of object names, period length in minutes and peak value.

Example:

Screen Shot 2017-09-25 at 8.55.09 AM

Let’s pick an example and examine it!

nsx-mgr-west,31.45390070921986,0.17699407037085668

The first column is just the name of the object. The next column says that the pattern seems to repeat every 31.5 minutes. The third column is the relative spectral peak energy. The best way of interpreting that is by saying that 18% of the total “energy” of the signal can be attributed to a component that repeats itself every 31.5 minutes.

Let’s have a look at the original time series in Wavefront and see if we were right!

Screen Shot 2017-09-25 at 9.48.11 AM

Yes indeed! It sure looks like this signal is repeating itself about every 30 minutes.

A Note on Accuracy

Why did we get a result of 31.5 minutes? Isn’t it a little strange that we didn’t get an even 30 minutes? Yes, it is, and there’s a reason for it: the algorithm introduces false accuracy. In our example, we’re processing 5-minute samples, so the resolution will never get better than 5 minutes. Therefore, you should round the results to the nearest 5 minutes.
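If you want the tool to do that rounding for you, a one-liner is enough. Here is a minimal sketch (the helper name and the 5-minute default are my own additions, not part of the tool):

def round_period(period_minutes, granularity=5):
    # Round a detected period to the nearest sample interval.
    return round(period_minutes / granularity) * granularity

print(round_period(31.45390070921986))  # -> 30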

Data Source

The data for this example comes from one of our demo vRealize Operations instances and was exported to Wavefront using the vRealize Operations Export tool that can be found here.

Design

The design is very simple: We use Python to pull down time series data for some set of objects. Then we use the SciPy and NumPy libraries to perform the mathematical analysis.

SciPy

SciPy (which is based on Numpy) is an extensive library for scientific mathematics. It offers a wide range of statistical functions and signal processing tools, making it very useful for advanced processing of time series data.

High Level Summary of the Algorithm

There are many ways of finding self similarity in time series data. The most common ones are probably spectral analysis and autocorrelation. Here is a quick comparison of the two:

  • Spectral Analysis: Transform the time series to the Frequency Domain. This rearranges the data into a set of frequencies and amplitudes. By finding the frequency with the highest amplitude, we can determine the most prevailing periodicity in the time series data.
  • Autocorrelation: Suppose a time series repeats itself every 1 hour. If you correlate the time series with a time-shifted version of the same series, you should get a very good correlation when the time shift is 1 hour in our example. Autocorrelation works by performing correlations with increasing time shifts and picking the time shift that gave the best correlation. That should correspond to the length of the periodicity of the data.

Autocorrelation seems to be the most popular algorithm. However, for very noisy data, it can be hard to find the best correlation among all the noise. In our tests, we found that spectral analysis (or spectral power density analysis, to be exact) gave the best results.
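To make the comparison concrete, here is a small self-contained NumPy sketch (not part of the tool, and the synthetic signal is made up) that estimates the period of a noisy signal both ways:

import numpy as np

# A synthetic signal that repeats every 60 samples, plus some noise.
n = 1440
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 60) + 0.5 * np.random.randn(n)

# Spectral analysis: the strongest frequency bin tells us the period.
spectrum = np.abs(np.fft.fft(signal)) ** 2
peak = np.argmax(spectrum[1:n // 2]) + 1            # skip the DC component
print("spectral estimate:", n / peak, "samples")    # ~60

# Autocorrelation: the best non-zero lag tells us the period.
ac = np.correlate(signal, signal, mode="full")[n - 1:]
lag = np.argmax(ac[10:n // 2]) + 10                 # skip very short lags
print("autocorrelation estimate:", lag, "samples")  # ~60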

To find the highest peak in the frequency spectrum, we use the “power spectrum”, which simply means that we square the result of the Fourier transform. This exaggerates any peaks and makes it easier to find the most prevalent frequencies. It also creates an interesting relationship to the autocorrelation function. If you’re interested, we’ll geek out over that towards the end of this article.

Once we’ve done that for all our objects, we sort them based on the highest peak and print them. To make the scoring fair, we calculate the total “energy” (sum of “power” over time) and divide the peak amplitudes by that. This way, we measure the percentage that a peak contributes to the total “energy” of the signal.

Code Highlights

Pulling the data

Before we do anything, we need to pull the data from Wavefront. We do this using the Wavefront REST API.

import time
import requests

query = 'align({0}m, interpolate(ts("{1}")))'.format(granularity, metric)
start = time.time() - 86400 * 5  # Look at 5 days of data
result = requests.get('{0}/api/v2/chart/api?q={1}&s={2}&g=m'.format(base_url, query, start),
                      headers={"Authorization": "Bearer " + api_key, "Accept": "application/json"})

Next, we rearrange the data so that for each object we have just a list of samples that we assume are spaced 5 minutes apart.

import json

candidates = []
data = json.loads(result.content)
for object in data["timeseries"]:
    samples = []
    for point in object["data"]:
        samples.append(point[1])

Normalizing the data

Now we can start doing the math. First we normalize the data: we subtract the average (so the mean becomes 0) and divide by the range, which scales the values to lie roughly between -1 and 1. It’s important to do this to remove any constant bias (sometimes referred to as a “DC component”) from the signal.

import numpy as np

samples = np.array(samples)  # convert the list to a NumPy array for element-wise math
top = np.max(samples)
bottom = np.min(samples)
mid = np.average(samples)
normSamples = samples - mid
normSamples /= top - bottom

Turn into a power spectrum using the fft function from SciPy

Now that we have the data in a form we like, we use a Fast Fourier Transform (FFT) to go from the time domain to the frequency domain. The output of an FFT is a series of complex numbers, so we take the absolute value (magnitude) of them to turn them into real numbers. Finally, we square every value. This exaggerates any frequency spikes and also has a cool relationship to autocorrelation that we’ll discuss later.

from scipy.fftpack import fft

spectrum = np.abs(fft(normSamples))
spectrum *= spectrum

At this point, we should have a spectrum that, for a good match, looks something like this. This corresponds to a signal repeating at a 60 minute interval.

Screen Shot 2017-09-25 at 10.38.50 AM

The algorithm we use has very low accuracy at low frequencies (long cycles), so we drop the first few samples. It’s also important to sample enough of the signal: if we want to detect behaviors that repeat over, say, 24 hours, we should sample at least 5 days of data. This way, we can discard the lowest frequencies and still find what we’re looking for.

offset = 5
spectrum = spectrum[offset:]

Now we can find the peak by simply looking for the highest value in the array. Notice that we’re only looking in the lower half of the array. This is because the FFT of a real-valued signal is symmetric around its midpoint (the Nyquist frequency): the latter n/2 samples are just a mirror image of the first n/2.
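If you want to convince yourself of that symmetry, here is a tiny standalone check (not part of the tool):

import numpy as np

x = np.random.randn(16)                # any real-valued signal
s = np.abs(np.fft.fft(x)) ** 2         # its power spectrum
print(np.allclose(s[1:8], s[:8:-1]))   # True: bins 1..7 mirror bins 15..9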

Finding the peak and normalizing

n = len(normSamples)  # total number of samples (the length of the untrimmed spectrum)
maxInd = np.argmax(spectrum[:int(n/2)+1])

Next we scale the peak we found. In order to compare this peak with peaks we found for other objects, we need to somehow normalize it. A straightforward way of doing this is to estimate how much of the total energy this peak contributed. Energy is the integral of power over time, but since we’re operating in a discrete-time world, we can calculate it as a simple sum over the spectrum. The result is the relative contribution this frequency makes to the total energy.

energy = np.sum(spectrum)
top = spectrum[maxInd] / energy

The final step of the math is to convert the raw frequency (as expressed by an index in the spectrum array) to a period length.

lag = 5 * n / (maxInd + offset)
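As a quick sanity check of that formula with made-up numbers: five days of 5-minute samples gives n = 1440, so if the strongest peak lands at index 235 of the trimmed spectrum, then maxInd + offset = 240 and lag = 5 * 1440 / 240 = 30 minutes.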

Filtering the best matches

We consider a “good” match a peak where the local spectral energy is at least 10% of the total energy. This is an arbitrary value, but it seems to work quite well.

if top > 0.1:
    entry = [ top, lag, object["host"] ]
    candidates.append(entry)

Sorting and outputting

Finally, once we’ve gone through every object, we sort the list based on relative peak strength and output as CSV.

best_matches = sorted(candidates, key=lambda match: match[0], reverse=True)
for m in best_matches:
   print("{0},{1},{2}".format(m[2], m[1], m[0]))

Extra Geek Credits: Autocorrelation vs. Power Spectrum

If you’ve made it here, I applaud you. You should have a pretty good understanding of the code by now.  It’s about to get a lot geekier…

Autocorrelation

Mathematically, autocorrelation is a very simple operation. For a real-valued signal, you simply sum up every sample of the signal multiplied by a sample from the same signal, but with a time shift. It can be written like this:

a(τ)=Σx(t)x(t+τ)

Let’s say a signal is repeating itself every 60 minutes. If we set the lag, τ, to 60 minutes, we should get a better correlation than for any other value of τ, since the signal should be very similar between now and 60 minute ago.

So one way to find periodic behavior is to repeatedly calculate the autocorrelation and sweep the value of τ from the shortest to the longest period you’re interested in. When you find the highest value of your autocorrelation, you have also found the period of the signal.
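In code, that sweep is only a few lines. Here is a standalone NumPy sketch (again, not the tool’s code; the synthetic signal is just for illustration) that implements the sum above directly and picks the best lag:

import numpy as np

def autocorrelation(x, max_lag):
    # Directly implements a(tau) = sum over t of x(t) * x(t + tau)
    x = np.asarray(x, dtype=float)
    return np.array([np.sum(x[:len(x) - tau] * x[tau:])
                     for tau in range(1, max_lag)])

# A signal that repeats every 12 samples (i.e. every hour at 5-minute sampling).
t = np.arange(1440)
samples = np.sin(2 * np.pi * t / 12)

ac = autocorrelation(samples, len(samples) // 2)
best_lag = np.argmax(ac) + 1       # lags start at 1
print(best_lag * 5, "minutes")     # -> 60 minutes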

Well, at least in theory.

What ends up happening is that a lot of times, it isn’t obvious where the highest peak is and it could be affected by noise, causing a lot of random errors. Consider this autocorrelation plot, for example.

Screen Shot 2017-09-25 at 11.02.06 AM

One thing we notice, though, is that autocorrelations of signals that have meaningful periodic behaviors are themselves periodic. Here’s a good example:

Screen Shot 2017-09-25 at 11.05.33 AM

So maybe looking for the period of the autocorrelation would be more fruitful than hunting for a peak? And finding the period implies finding a frequency, which sounds almost like a Fourier Transform would come in handy, right? Yes it does. And there’s a beautiful relationship between the autocorrelation and the Fourier Transform that will take us there.

The final piece of the puzzle

If you’ve read this far, I’m really impressed. But here’s the big reveal:

It can be proven that autocorrelation and FFT are related as follows:

F_R(f) = FFT[X(t)]
S(f) = F_R(f) · F_R*(f)
R(τ) = IFFT[S(f)]

Where R(τ) is the autocorrelation function.

In other words: The autocorrelation is the Inverse FFT of the FFT of the signal, multiplied by the complex conjugate of the FFT of the signal.

But a complex number multiplied by its own conjugate is equal to the square of its absolute value. So S(f) is really the same as |FFT[X(t)]|². And since S(f) is the step right before the final Inverse FFT, we can conclude that it must be the FFT of the autocorrelation. So we can comfortably state:

The power spectrum of a signal is the exact same thing as the FFT of its autocorrelation!
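If you don’t want to take that on faith, it’s easy to verify numerically. The following standalone snippet compares a directly computed circular autocorrelation (which is what the FFT version gives you) with the inverse FFT of the power spectrum:

import numpy as np

x = np.random.randn(256)
power = np.abs(np.fft.fft(x)) ** 2

# Circular autocorrelation computed directly...
direct = np.array([np.sum(x * np.roll(x, -tau)) for tau in range(256)])
# ...and via the inverse FFT of the power spectrum.
via_fft = np.real(np.fft.ifft(power))

print(np.allclose(direct, via_fft))   # True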

Why it’s cool

Not only is the power spectrum a nice way of exaggerating peaks to make them easier to find; its peak can also be interpreted as a measure of the prevailing period of the autocorrelation function, which, in turn, gives a good indication of the prevailing period of the data we’re analyzing.

Caveats

As I said in the beginning of this post, this code isn’t production quality. There are a few things that should be addressed before it’s useful to a larger audience.

  • All data is loaded into memory. That obviously won’t scale. Instead, we should use a streaming JSon parser and maybe split the query into smaller chunks.
  • The calculation of the local spectral energy is very naive. It’s assuming that all the energy is concentrated in a single spike of unity length. It should be adjusted to account for spikes that are spread out across the spectrum.
  • The accuracy, especially at low frequencies, is rather poor. One possible improvement would be to use FFT to find the candidates and autocorrelation to fine tune the result.

Conclusion

The idea behind this post was to discuss how Python and a math library like SciPy can be used to analyze Wavefront data. Feel free to comment and to fork the code and make improvements!

 


Exporting metrics from vRealize Operations to Wavefront

Introduction

You may have heard that VMware recently acquired a company (and product) called Wavefront. This is essentially an incredibly scalable SaaS-solution for analyzing time-series data with an emphasis on monitoring. With Wavefront, you can ingest north of a million data points per second and visualize them in a matter of seconds. If you haven’t already done so, hop on over to http://wavefront.com/sign-up and set up a trial account! It’s a pretty cool product.

But for a monitoring and analytics tool to be interesting, you need to feed it data. And what better data to feed it than metrics from vRealize Operations? As you may know, I recently released a Fling for bulk exporting vRealize Operations data that you can find here. So why not modify that tool a bit to make it export data to Wavefront?

A quick primer on Wavefront

From a very high level point of view, Wavefront consists of three main components:

  • The application itself, hosted in the public cloud.
  • A proxy you run locally that’s responsible for sending data to the Wavefront application.
  • One or more agents that produce the data and send it to Wavefront.

You may also use an agent framework such as Telegraf to feed data to the proxy. This is very useful when you’re instrumenting your code, for example. Instead of tying yourself directly to the Wavefront proxy, you use the Telegraf API, which has plugins for a wide range of data receivers, among them Wavefront. This allows you to decouple your instrumentation from the monitoring solution.

In our example, we will communicate directly with the Wavefront proxy. As you will see, this is a very straightforward way of quickly feeding data into Wavefront.

Metric format

To send data to the Wavefront Proxy, we need to adhere to a specific data format. Luckily for us, the format is really simple. Each metric value is represented by a single line in a textual data stream. For example, sending a sample for CPU data would look something like this:

vrops.cpu.demand 2.929333448410034 1505122200 source=vm01.pontus.lab

Let’s walk through the fields!

  1. The name of the metric (vrops.cpu.demand in our case)
  2. The value of the metric
  3. A timestamp in “epoch seconds” (seconds elapsed since 1970-01-01 00:00 UTC)
  4. A unique, human readable name of the source (a VM name in our case)

That’s really it! (OK, you can specify properties as well, but more about that later)

Pushing the data to Wavefront

Now that we understand the format, let’s discuss how to push the data to Wavefront. If you’re on a UNIX-like system, the easiest way, by far, is to use the netcat (nc) tool to pipe the data to the Wavefront proxy. This command would generate a data point in Wavefront:

echo "vrops.cpu.demand 2.929333448410034 1505122200 source=vm01.pontus.lab" | nc localhost 2878

This assumes that there’s a Wavefront proxy running on port 2878 on localhost.
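If you’d rather push data from a script than from a shell pipeline, the same thing can be done with a plain TCP socket. Here is a minimal Python sketch (the host, port and sample values are just placeholders for illustration):

import socket
import time

# Assumes a Wavefront proxy listening on localhost:2878; adjust as needed.
line = "vrops.cpu.demand {0} {1} source={2}\n".format(
    2.929333448410034, int(time.time()), "vm01.pontus.lab")

sock = socket.create_connection(("localhost", 2878))
sock.sendall(line.encode("ascii"))
sock.close()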

Putting it together

So now that we know how to get data into Wavefront, let’s discuss how we can use the vRealize Operations Export Tool to take metrics from vRealize Operations to Wavefront. The trick is the newly added output format called – you guessed it – wavefront. We’re assuming you’re already familiar with the vRealize Operations Export Tool definition file (if you’re not, look here). To make it work, we need a definition file with an output type of “wavefront”. This one would export basic VM metrics, for example:

resourceType: VirtualMachine
rollupType: AVG
rollupMinutes: 5
align: 300
outputFormat: wavefront
dateFormat: "yyyy-MM-dd HH:mm:ss"
fields:
# CPU fields
  - alias: vrops.cpu.demand
    metric: cpu|demandPct
  - alias: vrops.cpu.ready
    metric: cpu|readyPct
  - alias: vrops.cpu.costop
    metric: cpu|costopPct
# Memory fields
  - alias: vrops.mem.demand
    metric: mem|guest_demand
  - alias: vrops.mem.swapOut
    metric: mem|swapoutRate_average
  - alias: vrops.mem.swapIn
    metric: mem|swapinRate_average
 # Storage fields
  - alias: vrops.storage.demandKbps
    metric: storage|demandKBps
 # Network fields
  - alias: vrops.net.bytesRx
    metric: net|bytesRx_average
  - alias: vrops.net.bytesTx
    metric: net|bytesTx_average

If we were to run an export using this configuration file, we’d get an output that looks like this:

$ exporttool.sh -H https://host -u user -p secret -d wavefront.yaml -l 24h 
vrops.cpu.demand 1.6353332996368408 1505122200 source=ESG-NAT-2-0 
vrops.cpu.costop 0.0 1505122200 source=ESG-NAT-2-0 
vrops.mem.swapOut 0.0 1505122200 source=ESG-NAT-2-0 
vrops.net.bytesTx 0.0 1505122200 source=ESG-NAT-2-0 
vrops.host.cpu.demand 15575.7890625 1505122200 source=ESG-NAT-2-0 
vrops.mem.swapIn 0.0 1505122200 source=ESG-NAT-2-0
...

The final touch

Being able to print metrics in the Wavefront format to the terminal is fun and all, but we need to get it into Wavefront somehow. This is where netcat (nc) comes in handy again! All we have to do is to pipe the output of the export command to netcat and let the metrics flow into the Wavefront proxy. Assuming we have a proxy listening on port 2878 on localhost, we can just type this:

$ exporttool.sh -H https://host -u user -p secret -d wavefront.yaml -l 24h | nc localhost 2878

Wait for the command to finish, then go to your Wavefront portal and go to “Browse” and “Metrics”. You should see a group of metrics starting with “vrops.” If you don’t see it, give it a few moments and try again. Depending on your load and the capacity you’re entitled to, it may take a while for Wavefront to empty the queue.

Here’s what a day of data looks like from one of my labs. Now we can start analyzing it! And that’s the topic for another post… Stay tuned!

The viking is back!

I’m back!!!

Hi folks!

I’m sorry I’ve been absent for such a long time, but I’ve been busy working on a cool project I unfortunately can’t talk about publicly… yet.

But now that I’ve surfaced again, check out my blog at VMware’s corporate Cloud Management blog!

https://blogs.vmware.com/management/2017/05/exporting-metrics-vrealize-operations.html

Expect more activity here in the very near future. And see you at DellEMC World next week! I’ll be in the Cloud Native Apps section of the VMware booth.

Generating Reports from vRealize Operations with Pentaho Reports – Part 1

Background

Although vRealize Operations has a really solid report generator built in, I often get the question how to hook up an external report generator to vRealize Operations. Sometimes users need some piece of functionality not found in the standard report generator and other times they want to bring data from vRealize Operations into an existing corporate reporting framework.

This led me to spend some time experimenting with Pentaho Report Designer and Pentaho Data Integration (also known as Kettle).

This discussion is going to get pretty technical. But don’t worry if you can’t follow along with every step! The transformation and report are available for download at the end of the article!

What is Pentaho and why do I use it?

Pentaho is an analytics and business intelligence suite made up of several products. It was acquired by Hitachi, which sells a commercial version of it but also maintains a free community version. In this post, we’re using the community version. Although I haven’t tested it with the commercial version, I’m assuming it would work about the same.

For this project, we’re using two components from Pentaho: Kettle (or data integrations) and Pentaho Reports. Kettle is an ETL tool that lets us take data from an arbitrary source and normalize it so it can be consumed by Pentaho Reports.

Getting the data

Typically, report generators are used with data from a SQL database, but since metrics in vRealize Operations reside in a distributed proprietary datastore, this is not a feasible solution. Also, database-oriented solutions are always very sensitive to changes between versions of the database schema and are therefore typically discouraged.

Instead, we are going to use the vRealize Operations REST API to gain access to the data. Unfortunately, Pentaho Reports doesn’t have a native REST client, and that’s where Kettle comes in. As you will see, the report works by using a Kettle transformation as a data source.

Goals for this project

The goal for this project is very simple. We are just going to create a report that shows our VMs and a graph of the CPU utilization for each one of them. In subsequent posts, we are going to explore some more complex reports, but for now, let’s keep it simple. The report is going to look something like this:

screen-shot-2016-10-06-at-4-33-20-pm

Solution overview

DISCLAIMER: I am not a Pentaho expert by any means. In fact, I’ve been learning as I’ve been working on this project. If you know of a better way of doing something, please drop me a note in the comments!

This overview is intended to explain the overall design of the solution and doesn’t go into detail on how to install and run it. See the section “Installing and running” below for a discussion on how to actually make it work.

Kettle Transformation

To produce the data for the report shown above, we need to perform two major tasks: we need to ask vRealize Operations for a list of all eligible VMs, and we need to ask it for the CPU demand metric for each one of them.

Kettle principles

Before we get into the design, it’s necessary to understand how Kettle operates. The main unit of work for Kettle is a rowset. This is essentially a grid-like data structure resembling a table in a SQL database. As the rowset travels through a transformation pipeline, we can add and remove columns and rows using various transformation steps.

Our Kettle Transformation

Transformations in Kettle are built using a graphical tool called “spoon” (yes, there’s a kitchen utensil theme going on here). Transformations are depicted as pipelines with each step performing some kind of operation on the rowset. This is what our transformation pipeline looks like. Let’s break it down step by step!

screen-shot-2016-10-06-at-4-51-07-pm

  1. Generate seed rows. Kettle needs a rowset to be able to perform any work, so to get the process started, we generate a “seed” rowset. In this case, we simply generate a single row containing the REST URL we need to hit to get a list of virtual machines.
  2. Get VMs. This step carries out the actual REST call to vRealize Operations and returns a single string containing the JSon payload returned from the rest call.
  3. Parse VM details. Here we parse the JSon and pull out information such as the name and ID of each VM.
  4. Select VM identifiers. This is just a pruning step. Instead of keeping the JSon payload and all the surrounding data around, we select only the name and the ID of each VM.
  5. Generate URLs. Now we generate a list of REST URLs we need to hit to get the metrics.
  6. Lookup Metrics. This is the second REST call to vRealize Operations. This is executed for each VM identified and will look up the last 24 hours of CPU demand metrics and return it as a JSon string.
  7. Parse Metrics. Again, we need to parse the JSon and pull out the actual metric values and their timestamps. This is actually akin to a cartesian product join in a database. The rowset is extended by adding one row per VM and metric sample. Thus, the number of rows is greatly increased in this step.
  8. Remove empties. Some VMs may not have any valid metrics for the last 24 hours. This step removes all rows without valid metrics.
  9. Sort rows. Sorts the rows based on VM name and timestamp for the metric.
  10. Convert date. Dates are returned from vRealize Operations as milliseconds since 01/01/1970. This step converts them to Date objects.
  11. Final select. Final pruning of the dataset, removing the raw timestamp and some other fields that are not needed by the report.
  12. Copy rows to result. Finally, we tell the transformation to return the rowset to the caller.

Lots of steps, but they should be fairly easy to follow. Although Kettle carries a bit of a learning curve, it’s a really powerful tool for data integration and transformation.
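For reference, the two REST calls made in steps 2 and 6 boil down to something like the Python sketch below. The endpoint paths and parameter names here are written from memory and may differ between vRealize Operations versions, so treat this purely as an illustration of what Kettle is doing, not as production code:

import requests

base = "https://vrops.example.com/suite-api/api"
auth = ("user", "secret")
headers = {"Accept": "application/json"}

# Step 2: ask for the list of virtual machines (name and id are parsed in step 3).
vms = requests.get(base + "/resources",
                   params={"resourceKind": "VirtualMachine"},
                   auth=auth, headers=headers, verify=False).json()

# Step 6: ask for the CPU demand metric for one VM id (parsed in step 7).
stats = requests.get(base + "/resources/{0}/stats".format("some-vm-id"),
                     params={"statKey": "cpu|demandPct"},
                     auth=auth, headers=headers, verify=False).json()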

The result of our transformation will be a set of rows containing the name of the VM, a timestamp and a metric reading. Each VM will have 24 hours worth of metric readings and timestamps similar to this example:

Screen Shot 2016-10-06 at 5.00.51 PM.png

Our Pentaho Report

At this point, we have a nice data stream with one entry per sample, along with a resource name and a timestamp. Time to build a report! To do that, we use the graphical Pentaho Report Designer tool. Our report will look something like this in the designer:

screen-shot-2016-10-06-at-5-07-05-pm

Grouping the data

We could, of course, just put the data stream in the details section of a report and be done with it, but it wouldn’t make a very interesting report. All it would do is to list a long litany of samples for all the resources we’ve selected. So obviously, some kind of grouping is needed.

We solve this by inserting a, you guessed it, Group object into the report. A group object allows us to group the data based on some field or condition. In our case, we’ll group it by the “resourceName” (the name of the VM). Thus, we’ll have one group per VM.

Adding the graph

Now we can add the graph. But where do we add it? Let’s examine the sections in the screen shot above. There are a couple of things to take notice of. First, anything we put in the Details section gets repeated for every row in the dataset. So if we put the graph there, we’d get thousands of graphs, each showing a single datapoint. Clearly not what we want.

So what about the group? How can we put our graph at a group level? That’s where the group header and footer come into play. These are intended for summaries of a group. And since we want to summarize all the samples for a VM (which is the grouping object), that seems like a good place. Most tutorials seem to recommend using the group footer.

Configuring the graph

Once we’ve added the graph, we need to configure it. This is done simply by double clicking on it. First we select the graph type. For a time series this has to be set to “XY line graph”.

Once we’ve done that, we can start configuring the details. Here’s what it looks like:

screen-shot-2016-10-06-at-5-07-26-pm

First, we need to select the TimeSeries Collector. This causes the data to be handled as a time series, rather than generic X/Y coordinates. Then we pick the “category-time-column” and set it to our timestamp field and set the “value-column” to “cpuDemand”.

Next, we edit the “series-by-value” to provide a graph legend.

Finally, we set the “reset-group” field to “::group-0”. This resets the data collection after every group, preventing data from being accumulated.

Parameters

Finally, we need some way of keeping track of variables like the hostname, username and password for the vRealize Operations instance we’re working with. This is done through parameters of the transformation that are exposed as report parameters. We can provide default values and even hide them from the user if we are always using the same instance and user. See below for a discussion on the parameters!

Installing and running

So far, it’s been a lot of talk about the theory of operation and the design. Let’s discuss how to install and run the report!

Installing Pentaho Reporting

  1. Download the Pentaho Reporting Designer from here.
  2. Unzip the file and go to the directory where you installed the software.
  3. Run it with report-designer.sh on Linux/OSX or report-designer.bat on Windows.

Allowing use of self-signed certs

If your certificate for vRealize Operations is signed by a well-known authority, you should be good to go. If not, you need to perform these two steps.

On Windows

Edit the file report-designer.bat. Insert the java parameter “-Djsse.enableSNIExtension=false” on the last line. The line should look like this:

start "Pentaho Report Designer" "%_PENTAHO_JAVA%" -Djsse.enableSNIExtension=false -XX:MaxPermSize=256m -Xmx512M -jar "%~dp0launcher.jar" %*

On Linux/OSX

Edit the file report-designer.sh. Insert the java parameter “-Djsse.enableSNIExtension=false” on the last line. The line should look like this:

"$_PENTAHO_JAVA" -Djsse.enableSNIExtension=false -XX:MaxPermSize=512m -jar "$DIR/launcher.jar" $@

Finally, you will have to add the certificate from vRealize Operations to your java certificate store. This can be done by downloading your cert from your browser (typically by clicking the lock symbol next to the URL field) and following these instructions.
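On most systems, the import itself boils down to a single keytool command along these lines (the alias, file name and keystore path are examples; the keystore location depends on your Java installation, and the default keystore password is usually “changeit”):

keytool -importcert -alias vrops -file vrops.cer -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit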

Installing Pentaho Data Integration (Kettle)

This step is optional and needed only if you want to view/modify the data transformation pipeline.

  1. Download Pentaho Data Integration from here.
  2. Unzip the file and go to the directory where you installed the software.
  3. Run it with spoon.sh on Linux/OSX or spoon.bat on Windows.

Allowing use of self-signed certs

If your certificate for vRealize Operations is signed by a well-known authority, you should be good to go. If not, you need to perform these two steps.

On Windows

Edit the file spoon.bat and insert the following line right before the section that says “Run” inside a box of stars:

set OPT=%OPT% -Djsse.enableSNIExtension=false
REM ***************
REM ** Run...    **
REM ***************

On Linux/OSX

Edit the file spoon.sh and insert the following line right before the section that says “Run” inside a box of stars:

OPT="$OPT -Djsse.enableSNIExtension=false"
# ***************
# ** Run...    **
# ***************

Finally, you will have to add the certificate from vRealize Operations to your java certificate store. This can be done by downloading your cert from your browser (typically by clicking the lock symbol next to the URL field) and following these instructions.

Running the report

  1. Start the report designer.
  2. Select File->Open and load the CPUDemand.prpt file.
  3. Click the “Run” icon.
  4. Select PDF.
  5. Fill in the host, username and password for vR Ops.
  6. Click OK.

Next steps

The purpose of this project was to show some of the techniques used for running a report generating tool against vRealize Operations. Arguably, the report we created isn’t of much value. For example, it will only pick the first 100 VMs in alphabetical order and it’s rather slow. For this to be useful, you would want to pick VMs based on some better criteria, such as group membership or VMs belonging to a host or cluster.

We will explore this and other techniques in an upcoming installment of this series when we are building a more useful “VM dashboard” report. Stay tuned!

Downloadable content

The data transformations and report definitions for this project can be downloaded here.

Bosphorus now available as a Docker image!

Introduction

Bosphorus is a simple custom portal framework for vRealize Automation written in Java with Spring Boot. Read all about it here.

It’s already pretty easy to install, but to remove any issues with Java versions etc., I’m now providing it as a Docker image. This will be one of my shortest blog posts ever, because it’s so darn easy to use!

How to use it

Assuming you have docker installed, just type this:

docker run -d -p 8080:8080 -e VRAURL=https://yourhost vviking/bosphorus:latest

Replace “yourhost” with the FQDN of your vRealize Automation host. Wait about 30 seconds and point a web browser to the server running the Docker image. Don’t forget it’s running on port 8080. Of course, you can very easily change that by modifying the port mapping (-p parameter) in the docker command.
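For example, to expose the portal on the standard HTTP port 80 instead, you could map the ports like this (same image, just a different host-side port):

docker run -d -p 80:8080 -e VRAURL=https://yourhost vviking/bosphorus:latest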

Bosphorus: A portal framework for vRealize Automation

I’m back!

I know it’s been a while since I posted anything here. I’ve been pretty busy helping some large financial customers, being a part of the CTO Ambassador team at VMware and preparing for VMworld. But now I’m back, and boy do I have some exciting things to show you!

One of the things I’ve been working on is a project that provides a framework for those who want to build their own portal in front of vRealize Automation. The framework also comes with a mobile-friendly reference implementation using jQuery Mobile.

This article is just a teaser. I’m planning to talk a lot more in detail about this and how to use the vRA API for building custom portals and other cool things. Below is an excerpt from the description on github. For a full description, along with code and installation instructions, check out my github page here: https://github.com/njswede/bosphorus

Background

This project is aimed at providing a custom portal framework for vRealize Automation (vRA) along with a reference implementation. It is intended for advanced users/developers of vRealize Automation who need to provide an alternate User Interface or who need to integrate vRA into a custom portal.

Bosphorus is written in Java using Spring MVC, Spring Boot, Thymeleaf and jQuery. The reference implementation uses jQuery Mobile as a UI framework. The UI framework can very easily be swapped out for another jQuery-based framework.

The choice of Java/Spring/Thymeleaf/jQuery was deliberate, as it seems to be a combination that’s very commonly used for Enterprise portals at the time of writing.

Why the name?

I wanted a name that was related to the concept of a portal. If you paid attention during geography class, you know that the Bosphorus Strait, located in Turkey, is the portal between the Mediterranean Sea and the Black Sea. Plus, it sounds cool.

Design goals

  • Allow web coders to develop portals with no or little knowledge of vRA
  • Implement on a robust platform that’s likely to be used in an enterprise setting.
  • Easy to install.
  • Extremely small footprint.
  • Extremely fast startup time.
  • Avoid cross-platform AJAX issues.

Features

Bosphorus was designed to have a very small footprint and to start and run very fast. At the same time, Bosphorus offers many advanced features such as live updates using long-polling and lazy-loading UI snippets using AJAX.

Known bugs and limitations

  • Currently only works for the default tenant.
  • Only supports day 2 operations for which there is no form.
  • Displays some day 2 operations that won’t work outside the native vRA portal (such as Connect via RDP).
  • Only allows you to edit basic machine parameters when requesting catalog items. Networks, software components, etc. will be created using their default values.
  • In the Requests section, live update doesn’t work for some day 2 operations.

Future updates

I’m currently running Bosphorus as a side project, so updates may be sporadic. However, here is a list of updates I’m likely to post in the somewhat near future:

  • Support for tenants other than the default one.
  • More robust live update code.
  • Support for “skins” and “themes”.
  • Basic support for approvals (e.g. an “inbox” where you can do approve/reject)

Screenshots

screen-shot-2016-09-09-at-4-52-22-pm
screen-shot-2016-09-09-at-4-53-21-pm
screen-shot-2016-09-09-at-4-57-02-pm

screenshot_20160812-125150

Exploring the vRealize Automation 7.0 Event Broker – Part 3

Abstract

In this article, we will explore the approval topic of the Event Broker. This allows you to plug in custom approval policies and even delegate approvals to external systems.

Approval policies in vRealize Automation allow administrators to insert gating rules into a provisioning workflow. Typically, approval policies work by notifying an approver through email, allowing them to approve or reject a provisioning request by clicking on a link or by interacting with the vRealize Automation console. In this article, we will show you how to extend an approval policy by plugging in a custom workflow. Specifically, we will explore how to leverage ServiceNow to approve or reject a request.

Approval Policies in vRealize Automation

Approval policies offer a highly flexible method of adding gating rules to a provisioning workflow. Out of the box, approval policies can be conditional to force approvals only when a user exceeds a certain memory limit, for example. In fact, conditional approvals can be controlled by any aspect of a request, making them extremely flexible.

Also, the fact that approval policies are tied to catalog items through entitlements makes it very easy to have different policies depending on where you sit in an organization.

However, the most powerful feature around approvals that was added in vRealize Automation 7.0 is the ability to use the Event Broker to plug in external approval mechanisms. In previous versions, you were limited to manual approvals through emails. With the new Event Broker architecture, you can implement very advanced automatic approvals as well as delegate approvals to an external incident management system. The latter is what we will focus on in this article.

Integrating with ServiceNow

Why did we pick ServiceNow for this example? For a number of reasons, actually. First, it has a developer program which allows you full access to test instances free of charge. Second, it’s very widely used. And third, it has a very straightforward API.

The goal

Our goal is very simple: Whenever a provisioning workflow in vRealize Automation needs an approval, we will call out to ServiceNow to make a request for approval, then block the provisioning workflow until the request is approved in ServiceNow. Also, we are going to supply some basic information about the request to ServiceNow that the approver can base their decision on.

Using a Service Catalog Request

There are several ways we could do this. One is to create an incident in ServiceNow and wait for it to be resolved. In this article, however, we are going to use the approval mechanism built into ServiceNow and it seems a Service Catalog Request is better suited for that.

A Service Catalog Request is what ServiceNow uses to handle a user request for some service, be it a cellphone, a laptop or some service from human resources. These requests have an underlying workflow and an approval policy attached to them, making them suitable vehicles for approvals coming from vRealize Automation. In a real life scenario, we would have designed a form and a workflow in ServiceNow for this specific type of request, but in this example, we are just going to use the generic constructs.


 

Screen Shot 2016-03-15 at 12.51.06 PM


 

Implementation

We implement this in three steps: First we build a workflow that can create a Service Catalog Request in ServiceNow and wait for it to be approved. Next, we create an event subscription based on that workflow and finally we link it to an approval policy. Let’s walk through the steps.

vRealize Orchestrator workflow

Let me start by pointing out that this is NOT the way you should be doing it in production. We create a Service Catalog Request in ServiceNow and then we check it once every minute to see if it has been approved. Since there may be hundreds of outstanding approvals in a production system, this approach is very inefficient and could generate massive amounts of calls to ServiceNow. A better solution would be to have ServiceNow push back an event to vRealize Orchestrator. This is doable, but beyond the scope for this article. We may address how to receive events from ServiceNow in an upcoming article.

This is what our vRealize Orchestrator workflow looks like:


 

Screen Shot 2016-03-15 at 1.43.44 PM


 

The workflow is relatively straightforward. First, we extract the parameters and do some other basic initializations. Next, we call ServiceNow to create a Service Catalog Request. After that, we enter a loop that checks the approval status of the request and exits with the appropriate return code once we’ve determined whether the request was approved or rejected. Keep in mind that we may spend days in this loop, which is why we need to find a better way of doing this in production (see upcoming article).

Inputs and outputs

An approval workflow takes four input parameters that are all of type Properties (a name-value pair map).

  • fieldNames – The keys represent the name of the fields defined for this approval. The values represent the field types.
  • fieldValues – The values of the fields specified in fieldNames.
  • requestInfo – A map of request properties.
  • sourceInfo – A map of properties from the source of the request (e.g. internal references to the request)

The contents of fieldNames and fieldValues are determined by the fields that are exposed by the approval policy. More about this later.

The fields map will always contain a field called businessJustification which can be changed by an approval workflow to send comments back to vRealize Automation. This is typically used to specify a reason for an approval or rejection.

Creating the Catalog Item Request in ServiceNow

Working with objects in ServiceNow is very straightforward. Every table in their data model has a REST interface for performing basic CRUD (Create/Read/Update/Delete) operations. All we have to do is to post a new request to the table. In a production implementation of this, we’d probably provide a bit more data, such as actual links to the requester and a link to the catalog item we’re requesting. But for a simple demo, this will do just fine.


 

var payload = {};
payload["short_description"] = shortDescription;
payload["description"] = longDescription;
payload["active"] = "true";
payload["price"] = "1001";
payload["special_instructions"] = "vRA-Request";
var rq = snowHost.createRequest("POST", "/api/now/table/sc_request", 
     JSON.stringify(payload));
rq.contentType = "application/json";
rq.setHeader("accept", "application/json");
var response = rq.execute();
if(response.statusCode != 201)
    throw "ServiceNow error. Status code: " + response.statusCode + " details: " 
        + response.contentAsString;
var json = JSON.parse(response.contentAsString);
incidentId = json.result.sys_id;

Let’s examine the code! First, we build a payload based on the description and short description. We’re also setting a bogus price for the item, which is there to bypass one of the default rules in the ServiceNow development instances that auto-approves anything that costs below $1000. We could (and should) have disabled that rule on the ServiceNow side, but the idea is that this should be able to run against an unmodified ServiceNow development instance. Again, this code is purely for  demo purposes.

Next, we make the request itself, a POST to the sc_request table, which creates a new request for us. ServiceNow will return the created record and the very last line of the script picks up the ID of the request. We will use this in the next step as we check on the status for the request.

Checking the approval status in ServiceNow

Once we have a Service Catalog Request, we periodically check on the status of it. Again, this needs to be replaced by a push-based solution in a production scenario, but the current approach will work fine for demo purposes.


var rq = snowHost.createRequest("GET", "/api/now/table/sc_request/" + incidentId,
     null);
rq.setHeader("accept", "application/json");
var response = rq.execute();
if(response.statusCode != 200)
     throw "ServiceNow error. Status code: " + response.statusCode + 
     " details: " + response.contentAsString;
var json = JSON.parse(response.contentAsString);
approval = json.result.approval.toString();

All this script does is to check on the Service Catalog Request we just created in ServiceNow to see if it’s approved. The rest of the workflow just deals with sleeping between checks and keeping track of how many times we should do this before timing out.

Creating the subscription

Once we have the workflow in place, it’s time to register a subscription through the Event Broker. Doing so is pretty straightforward. As an administrator, we go to Administration->Events->Subscriptions and click on “New”.


 

Screen Shot 2016-03-24 at 12.23.59 PM


 

Select “Pre-Approval” and click “Next”. A pre-approval subscription is called before any provisioning activities take place. If we wanted to register an approval after provisioning is made, but before the provisioned asset is released to its user, we could use a “Post-Approval”.


 

Screen Shot 2016-03-24 at 12.24.14 PM


 

Let’s run this for all events. If necessary, you can create filters here, for example applying the policy only to certain types of assets.

Click “Next”.


 

Screen Shot 2016-03-24 at 12.24.44 PM


 

Select the “Delegate approval to ServiceNow” workflow. This is the vRO workflow we discussed earlier in this article.


 

Screen Shot 2016-03-24 at 12.25.48 PM


 

Optionally, change the name and give it a description. You can also assign a timeout on the subscription. In this case, we’re stating that if the request hasn’t been approved within 24 hours, we automatically cancel the request.

Click “Finish”.

We now have a subscription, but if you were to start deploying VMs you’d notice that our workflow is never called. That’s because we’re missing an Approval Policy, which is the final piece of this puzzle.

Creating an approval policy

An approval policy in vRealize Automation simply states when an approval is needed and how to handle it. It can also deal with approvals that require multiple steps and multiple roles. In this example, we’re going to walk through a simple approval policy that requires a single approval done through a call to ServiceNow.

To add an approval policy, log in as a tenant administrator and select Administration->Approval Policies.


 

Screen Shot 2016-03-24 at 3.14.44 PM


 

Select “Service Catalog – Catalog Item Request”. This makes the approval policy applicable to any request. If you want to narrow it down to a specific catalog item type, you can select that here.

Screen Shot 2016-03-24 at 3.15.52 PM


 

Fill out name and description. Leave the policy as a draft for now.


 

Screen Shot 2016-03-24 at 3.16.12 PM


 

Add a new approval level by clicking the plus sign next to “Levels”. Notice how you can have multiple levels that are executed in series or parallel.

Create a new level and give it a name. Select “Use event subscription” to make it run the pre-approval event subscription we created earlier. You may also enter conditions here if you don’t want the approval to be required for every request. In this example, we make the approval policy “Always required”.

Verify that everything looks OK, change the status of the policy to “Active” and click OK to save it.

Attaching the policy to an entitlement

Approvals are controlled by approval policies and entitlements. For an approval policy to be considered, it has to be attached to an entitlement. This gives you great flexibility when designing approvals. Not only can you define how the approval will behave, but you can also define within what scope of catalog items and users it will apply. For example, you can set up a rule saying “This service is available to all developers, but junior developers will have to obtain an approval by their manager first”.

In our case, we will just set it up to apply to all developers trying to deploy a certain type of VM.

To edit entitlements, log in as a tenant administrator and select Administration->Catalog Management->Entitlements. In our case, we will use an existing entitlement named “Developers” and add our approval policy to a certain service.


 

Screen Shot 2016-03-24 at 3.32.35 PM


In this case, we are applying the approval policy to an entire service, which will cause it to be applied to every catalog item in that service. You may also attach the approval policy to an individual catalog item. Click the drop down next to the service and select “Modify Policy”.


 

Screen Shot 2016-03-24 at 3.35.59 PM


Select the approval policy we just created. Click OK on the dialog box. Click OK on the entitlement form to save the entitlement.

Congratulations! You’ve just configured a custom approval policy that calls out to ServiceNow!

Testing the ServiceNow-based approval

Now that we’ve created and configured our custom approval, it’s time to test it. To do that, let’s just request one of the catalog items from the service we attached the approval policy to.


 

Screen Shot 2016-03-24 at 3.40.03 PM


Request the catalog item and fill out any information needed. Add something for the description and reason for request, as these will be transferred to ServiceNow.


 

Screen Shot 2016-03-24 at 3.41.23 PM


If we go to the “Requests” tab, we should now see the request getting stuck on the “Pending Approval” state. It will not be released until we’ve approved it in ServiceNow (or the 24 hour timeout we specified has elapsed).


 

Screen Shot 2016-03-24 at 3.45.13 PM


Let’s approve it by clicking the Approve button and wait for a minute or two for the vRealize Automation workflow to catch up.


 

Screen Shot 2016-03-24 at 3.47.24 PM


As you can see, the provisioning process has resumed and our VM is well on its way to being created.

Conclusion

Although this example is not intended for production use, it serves to illustrate the flexibility of the Event Broker and approval engine, as well as the ease at which external components can be integrated into the lifecycle of a catalog item in vRealize Automation.

I am personally, as an engineer, extremely excited about the possibilities the Event Broker opens up and will continue to come up with interesting ways it can be used.

Downloadable content

vRealize Orchestrator Workflow