Your Use of the Double Check Pattern May Not Be That Great!

What is the Double Check pattern?

Synchronization in any programming language is considered to be expensive. The Double Check pattern is simply a way to try to eliminate locks by first testing for the existence of a resource without holding a lock and returning the resource directly if it exists. If it doesn’t exist, a lock is obtained and the check is done again, and if the resource still doesn’t exist, it’s created and returned. In Java it would look something like this:
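A minimal sketch (the class, field and type names here are just for illustration):

public class ResourceHolder {
    static class Resource { }               // placeholder for whatever expensive object we're creating

    private final Object lock = new Object();
    private volatile Resource resource;     // volatile matters here; more on that later

    public Resource getResource() {
        Resource r = resource;              // First check, no lock held
        if (r != null) {
            return r;
        }
        synchronized (lock) {               // Slow path: take the lock...
            if (resource == null) {         // ...and check again
                resource = new Resource();
            }
            return resource;
        }
    }
}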

Why are two checks cheaper than one?

Locks can be expensive to manage. First of all, there’s the obvious situation where the lock is held by someone else and your code has to wait. But even if no one is holding the lock, there can be performance implications. Locks can be implemented in many different ways, but they all need some kind of so-called atomic instruction. These are machine code instructions that guarantee thread safety at the hardware level. What this means is that they may briefly stall other cores and hardware threads. There are also implications for hardware caches that can further slow down execution. Experiments have shown that an atomic instruction can be up to 30 times slower than its non-atomic counterpart.

So checking something without holding a lock seems like a much quicker way. Now the lock only needs to be held while a new resource is created. Most of the time, you’d just get the resource back without acquiring a lock! Great, isn’t it?

A Blatantly Broken Example

Recently I saw some code in a widely used package that prompted me to alert the maintainer. Someone was trying to save a few nanoseconds using a Double Check pattern in a place where it definitely didn’t belong. Here’s the essence of that code:
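It boiled down to something like this (the names are mine, not the original author’s):

import java.util.HashMap;
import java.util.Map;

public class ConnectionCache {
    static class Connection {                      // placeholder for the real class in that package
        Connection(String key) { }
    }

    private final Map<String, Connection> connections = new HashMap<>();

    public Connection getConnection(String key) {
        Connection conn = connections.get(key);    // Unsynchronized read of a plain HashMap!
        if (conn != null) {
            return conn;
        }
        synchronized (this) {
            conn = connections.get(key);           // Second check, this time under the lock
            if (conn == null) {
                conn = new Connection(key);
                connections.put(key, conn);        // May rearrange the map while another thread is reading it
            }
            return conn;
        }
    }
}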

It should be pretty obvious why this is broken. A HashMap isn’t thread safe, and a put operation may completely rearrange its internal structures. If a get happens to execute at the same time, chances are very high that you’ll end up with strange results and very hard-to-find bugs.

In this case, you can still use the Double Check pattern. In fact, the code above only needs to change a single line to be safe. Instead of instantiating the Map as a HashMap, you could use a ConcurrentHashMap. This variant of a Map uses some clever tactics internally to make sure most accesses can take place without locks being acquired, while still being guaranteed to be thread safe.
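In the sketch above, that single line becomes:

private final Map<String, Connection> connections = new ConcurrentHashMap<>();   // from java.util.concurrent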

This ensures that we won’t run into strange bugs stemming from race conditions in the HashMap. But is this code really thread safe? Well, it depends…

Why Double Check May Be a Bad Idea

Even if you avoid the obvious HashMap problem, there are still some potential issues here. Consider this code, where we want to record when something was accessed for the first time. We could do that using a Double Check.
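Something along these lines (again, the names are just for illustration):

public class AccessTracker {
    private long firstAccess = 0;           // a plain, non-volatile 64-bit long

    public long recordFirstAccess() {
        long t = firstAccess;               // Unsynchronized read of a 64-bit value
        if (t != 0) {
            return t;
        }
        synchronized (this) {
            if (firstAccess == 0) {
                firstAccess = System.currentTimeMillis();
            }
            return firstAccess;
        }
    }
}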

Looks pretty safe, doesn’t it? Well, on most modern processors this code IS safe. Storing and reading a 64-bit long should require a single instruction and a single access to memory across its 64-bit wide bus. But there’s no guarantee that’s the case. In fact, the Java spec explicitly states that 64-bit assignments are not guaranteed to be atomic. So what if someone tried to run your code on a 32-bit machine, like a Raspberry Pi? There’s a chance you’d see a timestamp where only half of the 64 bits had been updated!

Luckily, in Java there’s the volatile keyword. By declaring “firstAccess” volatile, you would guarantee that accesses to it are atomic. But guess what? Depending on your platform, you may now have introduced the need for an atomic instruction, which is what we tried to avoid in the first place!
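For reference, the fix in the example above is a one-word change:

private volatile long firstAccess = 0;      // reads and writes of this long are now atomic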

Your JVM is Probably Smarter than You!

As we have seen, there’s really no safe way of avoiding synchronization or atomic accesses. And when it comes to synchronization, you should understand that in most cases, it’s pretty fast. Most languages implement synchronization something like this (pseudocode):
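Something along these lines, heavily simplified (the names are made up, but the idea is real):

lock():
    # Fast path: a single atomic instruction, no call into the OS
    if atomicIncrement(waiters) == 0:
        return                      # the lock was free and is now ours
    # Slow path: someone else holds the lock, ask the OS to put us to sleep
    waitUntilReleased()

unlock():
    # If anyone is waiting, wake one of them up so they can take the lock
    if atomicDecrement(waiters) > 0:
        wakeOneWaiter()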

Do you see what’s going on here? It’s pretty close to a Double Check pattern, isn’t it? It tries the quick and easy way first, then takes the more arduous route if needed. The “atomicIncrement” pseudo function deserves some explanation. Most modern CPUs have an instruction for atomically incrementing a value and returning what it was just before (or in some cases just after) it was incremented. The “waiters” variable holds the number of waiting threads. If I increment it atomically and the number before it was incremented is zero, I can be sure of two things: no one was holding the lock when I tried to take it, and I now own it, since all other threads will see waiters > 0.

Yes, there’s still an atomic instruction here, but as we have shown above, you would need them anyway to implement a Double Check that’s truly safe.

Empirical Testing

So how much does synchronization really affect performance? The answer is, as usual, “it depends”.

On my MacBook Pro with an i7 processor, a loop incrementing a single integer 100,000,000 times took 112ms without a synch inside the loop. With a synch inside the loop, it took 221ms. So a 100% performance degradation. That seems bad. Yes, but this isn’t a very realistic use case. How often do you write code like this? Rarely. Also, if we look at the cost of each synchronization, it’s around 2 nanoseconds! Yes, the impact could be higher on a very busy, massively parallel machine, but it’s still fairly low for most operations.

Here’s the code:
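Something along these lines (a simplified version, so your numbers will differ):

public class SyncBenchmark {
    private static final Object lock = new Object();
    private static long counter = 0;

    public static void main(String[] args) {
        // Unsynchronized loop
        long start = System.currentTimeMillis();
        for (int i = 0; i < 100_000_000; i++) {
            counter++;
        }
        System.out.println("Unsynchronized: " + (System.currentTimeMillis() - start) + " ms");

        // Same loop, but with a synchronized block around the increment
        counter = 0;
        start = System.currentTimeMillis();
        for (int i = 0; i < 100_000_000; i++) {
            synchronized (lock) {
                counter++;
            }
        }
        System.out.println("Synchronized: " + (System.currentTimeMillis() - start) + " ms");
    }
}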

A more realistic example may be to access a HashMap 10,000,000 times. The unsynched version takes 40ms and the synched version takes 50ms. Still a difference, but there’s a very limited number of applications where such a difference would have any meaningful impact.

Again, here’s my example code in Java:
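Again, something along these lines (simplified):

import java.util.HashMap;
import java.util.Map;

public class MapSyncBenchmark {
    public static void main(String[] args) {
        Map<Integer, String> map = new HashMap<>();
        for (int i = 0; i < 1000; i++) {
            map.put(i, "value-" + i);
        }

        int hits = 0;

        // Unsynchronized lookups
        long start = System.currentTimeMillis();
        for (int i = 0; i < 10_000_000; i++) {
            if (map.get(i % 1000) != null) {
                hits++;
            }
        }
        System.out.println("Unsynchronized: " + (System.currentTimeMillis() - start) + " ms");

        // Same lookups, wrapped in a synchronized block
        start = System.currentTimeMillis();
        for (int i = 0; i < 10_000_000; i++) {
            synchronized (map) {
                if (map.get(i % 1000) != null) {
                    hits++;
                }
            }
        }
        System.out.println("Synchronized: " + (System.currentTimeMillis() - start) + " ms (" + hits + " hits)");
    }
}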

When Does Double Check Make Sense?

So far, this article reads like I’m bashing the Double Check pattern. In fact, that’s not at all what I’m trying to do. What I’m worried about are all the improper uses of Double Check that I’ve seen and how they can introduce some very subtle and hard-to-find bugs. It also makes the code more complex and a bit harder to maintain. But the pattern does have its virtues.

So by all means, use Double Check, but use it with caution and only when it makes sense!

Here are some basic rules.

Use Double Check when lock contention in the fast path is likely

If your code is called millions of times per second, there’s a high likelihood that threads will be stuck waiting on a lock for no reason. If performance is an issue, you may consider implementing a “fast path” that doesn’t require locking.

Your fast path MUST use atomic accesses only!

In Java, with the exception of object reference assignments and assignments of 32-bit (and smaller) datatypes, nothing is guaranteed to be atomic. So you need to take care that your fast path takes the appropriate precautions to make sure all accesses are atomic. The volatile keyword or the java.util.concurrent.atomic package are very useful here.

Also keep in mind that even if you stick to atomic accesses, you typically only get one such access in your fast path. If you check more than one value, your code is no longer atomic and may very well end up in a race condition with the slow path.

Consider using a read/write lock!

Sometimes it’s not possible to make the fast path fully atomic. Does that mean that all hope is lost for Double Check? Not necessarily! You can use something like a java.util.concurrent.locks.ReadWriteLock. These are what’s known as asymmetric locks, which allow multiple readers but only one writer. Once the writer acquires the lock, it blocks all readers. I’m planning to write an article about this in the near future, but in basic terms, you would essentially wrap a read lock around your fast path and a write lock around the slow path.
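A sketch of what that might look like (note that a ReentrantReadWriteLock can’t upgrade a read lock to a write lock, so the read lock has to be released first):

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteLockedCache {
    static class Resource { }               // placeholder for whatever we're caching

    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
    private Resource resource;

    public Resource getResource() {
        // Fast path: any number of threads can hold the read lock at the same time
        rwLock.readLock().lock();
        try {
            if (resource != null) {
                return resource;
            }
        } finally {
            rwLock.readLock().unlock();
        }
        // Slow path: exclusive write lock, check again, then create
        rwLock.writeLock().lock();
        try {
            if (resource == null) {
                resource = new Resource();
            }
            return resource;
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}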

Document, document and document! Did I mention “document”?

When you’re implementing a Double Check pattern, add code comments clearly stating what you’re doing. Otherwise, some maintainer may poke around in the code without understanding the requirement for the fast path to be atomic, and someone may end up spending days or weeks chasing strange bugs!

The catch-all: Use only when needed!

I recently reviewed some code where a Double Check pattern was used in a function that was called maybe ten times during application startup. To make matters worse, the code had a subtle bug in it. So the developer shaved maybe a couple of microseconds off the application startup time at the expense of code complexity that caused a bug. So don’t bother using this pattern unless you expect your code to be called very frequently and where lock contention could have a meaningful impact on performance!

Conclusion

The Double Check pattern can be a life saver when you are under heavy performance requirements with code that’s called millions or billions of times. But there are many pitfalls, and I’ve seen a fair amount of bugs caused by programmers who don’t fully understand the semantics of the pattern. So use it with care and only when needed!

Follow me on Twitter @prydin!

 

 

 

Happy Pi Day with Wavefront!

Celebrating Pi Day in style

Today is Pi Day! OK, it only works in the US date format, but still. Pi is so awesome that it should be celebrated around the world, regardless of date format.

And what better way to celebrate than some Wavefront geekness? Let’s dive right into it.

Ah, what a beauty! So first of all, we honor the mother of Pi: the circle. Of course, I could have written a script that fed the results of z = e^(it) into Wavefront, but there’s an easier way. Thanks to the math functions in Wavefront, all I have to do is this:

All we have to do is to use the time parameter of the chart as our input and do some scaling. The rest is just the equation of the circle in its parametric form.
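In other words, the two series driving the chart are just x(t) = r·cos(t) and y(t) = r·sin(t), with t derived from the chart’s time axis.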

Scatter plot abuse

The next idea I had was to use a scatter plot to render a nice pi symbol. I wrote a small Go program that reads a JPG, scans it for black areas and sends the points where it found black. Then I put it on a Scatter Plot. OK, it took some tweaking, but it works. Here’s the code TODO.

Pi Digits that never go away

Next we need some pi digits. But digits are discrete things that should never be averaged or rolled up. But fret not, Wavefront is unique in that it NEVER rolls up any metrics, so our pi digits will be there for our grandchildren to see. Almost, at least.

Some more math geekness

The Machin formula is an astonishingly inefficient way to compute Pi. But it makes for a cool time series graph where we can follow the progression of the series. Lots of bouncing around!
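(For reference, Machin’s formula computes π as π/4 = 4·arctan(1/5) − arctan(1/239), with each arctangent expanded as its Taylor series.)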

So there you have it! My two favorite topics, Wavefront and math, all in one place. Happy Pi day!

vRealize Operations reporting on host profiles

Introduction

After I published my post about importing advanced host properties into vRealize Operations, a colleague told me that it was very nice, but the real kicker would be to somehow import the host profile compliance status into vRealize Operations. Again, true to his motto “How hard can it be?”, the Viking set out to explore this challenge.

And, yet again, it turned out to be not hard at all. A few lines of Python code did the trick.

What’s a host profile?

Maybe a quick recap of host profiles could be useful. Host profiles were introduced several years ago in vSphere 4.0 and roughly serve three purposes:

  1. Work in conjunction with Auto Deploy to allow “stateless” hosts. These hosts don’t need any configuration at the host level, but read their configuration from a host profile when they boot and apply it on the fly.
  2. Report compliance against a host profile, i.e. list all deviations of host settings from the host profile.
  3. Proactively apply a host profile to an existing host to force it into compliance.

Under the hood, a host profile is just a long list of rules describing what the desired settings of a host should be. Host profiles are typically generated from a reference host. You simply select a host you consider correctly configured and vCenter will create all the rules to check and enforce compliance based on that host.

Host profiles and vRealize Operations

I often hear how nice it would be if vRealize Operations could check compliance with host profiles. So often that I decided to give it a try. As you may know, adding new functionality to vRealize Operations is really simple with e.g. Python. Building the host profile functionality required about 100 lines of code.

Here’s how my script works:

  1. List all the host profiles known to a vCenter.
  2. Trigger a host profile compliance check. This will check the profile against all the hosts it’s attached to.
  3. Scan through the result and send notification events to vRealize Operations for each violation found.
  4. Define a symptom that triggers based on the event and a compliance alert based on the symptom.

Running the script

For complete information about installing and running the script, refer to the Github site: https://github.com/prydin/vrops-import-hostprops

However, running the script is simple. Create a config file as described on Github and run this command:

python import-host-profiles.py --config path/to/config/file

If you want the check to run periodically, you should put the script in your crontab on Linux or in a scheduled job on Windows. Checking compliance of a large number of hosts takes a while, so you should only run the script once or a couple of times per day.

The result

Once you have imported the alert definition and captured some host profile violations, you should see them under the Analysis->Compliance sub tab. It should look something like below. Of course, you can trigger email notifications or webhook notifications based on the compliance alert.

(Screenshot: compliance alerts shown under the Analysis -> Compliance tab in vRealize Operations)

Useful links

Code, installation instructions and manual:

https://github.com/prydin/vrops-import-hostprops

 

Importing Host Advanced Settings as Properties in vRealize Operations

How hard can it be?

Today I had a discussion with a colleague who wanted to build alerts against some advanced host settings that were not collected by default. I argued that one could build a script that imports the settings.

That’s when I had one of those “How hard can it be?” moments.

It turned out, it’s not that hard at all.

What’s an Advanced Setting?

Objects in vCenter have a set of properties defining anything from licensing to file system buffers. Some of them have dedicated editors in the UI, but the bulk of them are just key/value-pairs accessible from the “Advanced Settings” menu option in vCenter.

Here’s what they look like in vCenter:

(Screenshot: the Advanced Settings view in vCenter)

Alerting on Advanced Settings

Alerting on settings and properties can be useful when trying to detect configuration drift. For example, some storage arrays work best with certain settings on NFS.MaxQueueDepth. By importing this setting, we can build symptoms and compliance alerts on this property so that we get alerted anytime this setting is changed from its optimal value.

Some host properties and settings are already captured by vRealize Operations, but a lot aren’t. Using the simple script I created, we can import any host settings and make them reportable and alertable.

How it’s Made

The script is very simple and uses plain old REST calls (no SDK library) to vRealize Operations and pyVmomi to communicate with vCenter. It reads its settings from a YAML-file. That’s pretty much it.

The script only runs one import. If you want it to import periodically, you should install it as a cron-job (or scheduled task in Windows). Since host settings aren’t likely to change very often, it’s probably enough to run it once every hour. The code isn’t exactly optimized, so you may run into problems if you try to run it too often.

Since there are hundreds of settings and most aren’t needed, the script uses filesystem-like wildcards to select the settings. For example, “NFS.*” gets you all the NFS settings.

Configuration File

The script uses a configuration file to determine hostnames, usernames, passwords etc. Here’s a self-explanatory example:

# vCenter details
#
vcHost: "my-vcenter.corp.local"
vcUser: "pontus@corp.local"
vcPassword: "secret"

# vR Ops details
#
vropsUrl: "https://my-vrops.corp.local"
vropsUser: "demouser"
vropsPassword: "secret"

# Property pattern
#
pattern: "NFS.*"

What to do with it

There are many reasons one may need to import host settings. In my case, I needed to check the NFS.MaxQueueDepth setting. In some environments, things tend to work poorly if it’s not set to exactly 64. So I collected that setting (and all the other NFS settings) by specifying the pattern “NFS.*”.

Here’s what I captured:

(Screenshot: the imported NFS settings shown as properties in vRealize Operations)

Then I created a couple of symptoms and an alert with subtype “Compliance”. The subtype is important, since it makes it show up under the Analysis -> Compliance tab.

Here’s what the end result looks like:

(Screenshot: the resulting compliance alert in vRealize Operations)

Where to Find it

You like it? Go get it!

The script can be found on my Github page:

https://github.com/prydin/vrops-import-hostprops

 

Geek corner: Finding Patterns in Wavefront Time Series Data using Python and SciPy

Introduction

Today we’re going to geek out even more than usual. Follow me down memory lane! My first job was to write code that processed data from testing integrated circuits. That was an eternity ago, but what I learned about signal processing turned out to be applicable to pretty much any kind of time series data.

So I thought I should play around with some of the time series data I have stored in Wavefront.

Of course, it deserves to be said that Wavefront can do almost any math natively in its web user interface. So I really had to think hard to come up with something that would give me an excuse to play around with the Wavefront API and some Python code.

Here’s the example I came up with: Find out which objects exhibit the highest degree of periodic self-similarity on some metric. For example, tell me which VMs have a CPU utilization that has the strongest tendency to repeat itself over some period (e.g. daily, hourly, every 23.5 minutes, etc.).

A possible application for this may be capacity planning. By understanding which workloads have a tendency to vary strongly and predictably over some period of time, we may be able to place workloads in such a way that their peak demands don’t coincide.

The first few paragraphs of this post should be interesting to anyone wanting to do advanced data processing of Wavefront data using Python. The final paragraphs about the math are mostly me geeking out for the fun of it. It can be safely ignored if it doesn’t interest you.

Problem Statement

The purpose of the tool we’re building in this post is to answer the question: “Which objects have the strongest periodic variation (seasonality) on some metric?”.

For example, which VMs have a CPU load that tends to have a strong periodic variation? And furthermore, what is the period of that variation?

Usage

It should be pointed out that this tool would need some work before it’s useful in production. It is intended as an example of what can be done.

To obtain the code, clone the repository from here or download this single Python file.

To run the code, you have to set the following two environment variables:

  • WF_URL – The URL to your Wavefront instance, e.g. https://try.wavefront.com
  • WF_TOKEN – Your API token. Refer to the Wavefront documentation for how to obtain it.
python fft.py name.of.metric

You can add the --plot flag at the end of the command to get a visual representation of the top periodic time series.

The output will be a list of object names, period length in minutes and peak value.

Example:

(Screenshot: example CSV output from the script)

Let’s pick an example and examine it!

nsx-mgr-west,31.45390070921986,0.17699407037085668

The first column is just the name of the object. The next column says that the pattern seems to repeat every 31.5 minutes. The third column is the relative spectral peak energy. The best way of interpreting that is by saying that 18% of the total “energy” of the signal can be attributed to a component that repeats itself every 31.5 minutes.

Let’s have a look at the original time series in Wavefront and see if we were right!

(Screenshot: the original CPU time series for nsx-mgr-west in Wavefront)

Yes indeed! It sure looks like this signal is repeating itself about every 30 minutes.

A Note on Accuracy

Why did we get a result of 31.5 minutes? Isn’t it a little strange that we didn’t get an even 30 minutes? Yes it is, and there’s a reason for it. The algorithm introduces false accuracy. In our example, we’re processing 5-minute samples, so the resolution will never get better than 5 minutes. Therefore, you should round the results to the nearest 5 minutes.

Data Source

The data for this example comes from one of our demo vRealize Operations instances and was exported to Wavefront using the vRealize Operations Export tool that can be found here.

Design

The design is very simple: We use Python to pull down time series data for some set of objects. Then we’re using the SciPy and NumPy libraries to perform the mathematical analysis.

SciPy

SciPy (which is based on Numpy) is an extensive library for scientific mathematics. It offers a wide range of statistical functions and signal processing tools, making it very useful for advanced processing of time series data.

High Level Summary of the Algorithm

There are many ways of finding self similarity in time series data. The most common ones are probably spectral analysis and autocorrelation. Here is a quick comparison of the two:

  • Spectral Analysis: Transform the time series to the Frequency Domain. This rearranges the data into a set of frequencies and amplitudes. By finding the frequency with the highest amplitude, we can determine the most prevailing periodicity in the time series data.
  • Autocorrelation: Suppose a time series repeats itself every 1 hour. If you correlate the time series with a time-shifted version of the same series, you should get a very good correlation when the time shift is 1 hour in our example. Autocorrelation works by performing correlations with increasing time shifts and picking the time shift that gave the best correlation. That should correspond to the length of the periodicity of the data.

Autocorrelation seems to be the most popular algorithm. However, for very noisy data, it can be hard to find the best correlation among all the noise. In our tests, we found that spectral analysis (or spectral power density analysis, to be exact) gave the best results.

To find the highest peak in the frequency spectrum, we use the “power spectrum”, which simply means that we square the result of the Fourier transform. This will exaggerate any peaks and make it easier to find the most prevalent frequencies. It also creates an interesting relationship to the autocorrelation function. If you’re interested, we’ll geek out over that towards the end of this article.

Once we’ve done that for all our objects, we sort them based on the highest peak and print them. To make the scoring fair, we calculate the total “energy” (sum of “power” over time) and divide the peak amplitudes by that. This way, we measure the percentage that a peak contributes to the total “energy” of the signal.

Code Highlights

Pulling the data

Before we do anything, we need to pull the data from Wavefront. We do this using the Wavefront REST API.

import time
import requests

# base_url, api_key, granularity and metric are defined elsewhere in the script
query = 'align({0}m, interpolate(ts("{1}")))'.format(granularity, metric)
start = time.time() - 86400 * 5 # Look at 5 days of data
result = requests.get('{0}/api/v2/chart/api?q={1}&s={2}&g=m'.format(base_url, query, start),
                      headers={"Authorization": "Bearer " + api_key, "Accept": "application/json"})

Next, we rearrange the data so that for each object we have just a list of samples that we assume are spaced 5 minutes apart.

import json

candidates = []
data = json.loads(result.content)
# Each entry in "timeseries" is one object (e.g. a VM) with a list of [timestamp, value] points
for object in data["timeseries"]:
    samples = []
    for point in object["data"]:
        samples.append(point[1])

Normalizing the data

Now we can start doing the math. First we normalize the data to give it an average of 0 and an amplitude of at most 1. It’s important to do this to remove any constant bias (sometimes referred to as a “DC component”) from the signal.

import numpy as np

top = np.max(samples)
bottom = np.min(samples)
mid = np.average(samples)
normSamples = np.array(samples) - mid   # convert the list to a NumPy array and center it around 0
normSamples /= top - bottom             # scale by the range

Turn into a power spectrum using the fft function from SciPy

Now that we have the data in a form we like, we use a Fast Fourier Transform (FFT) to go from the time domain to the frequency domain. The output of an FFT is a series of complex numbers, so we take the absolute value (magnitude) of each to turn them into real numbers. Finally, we square every value. This will exaggerate any frequency spikes and also has a cool relationship to autocorrelation that we’ll discuss later.

from scipy.fftpack import fft

spectrum = np.abs(fft(normSamples))
spectrum *= spectrum

At this point, we should have a spectrum that, for a good match, looks something like this. This corresponds to a signal repeating at a 60 minute interval.

(Screenshot: a power spectrum with a clear peak corresponding to a 60-minute period)

The algorithm we use has very low accuracy at low frequencies (long cycles), so we dump the first few samples. It’s important to sample enough of the signal. If we want to detect behaviors that repeat over, say, 24 hours, we should sample at least 5 days of data. This way, we can discard the lowest frequencies and still find what we’re looking for.

offset = 5
spectrum = spectrum[offset:]

Now we can find the peak by simply looking for the highest value in the array. Notice that we’re only looking in the lower half of the array. This is due to a symmetry property of the FFT of real-valued signals: the latter n/2 samples are just a mirror image of the first n/2 ones.

Finding the peak and normalizing

n = len(samples)                            # number of samples in the series
maxInd = np.argmax(spectrum[:int(n/2)+1])

Next we scale the peak we found. In order to compare this peak with peaks we found for other objects, we need to somehow normalize it. A straightforward way of doing this is to estimate how much of the total energy this peak contributed. Energy is the integral of power over time, but since we’re operating in a discrete world, we can calculate it as a simple sum over the spectrum. The result will be the relative contribution this frequency makes to the total energy.

energy = np.sum(spectrum)
top = spectrum[maxInd] / energy

The final step of the math is to convert the raw frequency (as expressed by an index in the spectrum array) to a period length.

lag = 5 * n / (maxInd + offset)

Filtering the best matches

We consider a peak a “good” match when the local spectral energy is at least 10% of the total energy. This is an arbitrary value, but it seems to work quite well.

if top > 0.1:
    entry = [ top, lag, object["host"] ]
    candidates.append(entry)

Sorting and outputting

Finally, once we’ve gone through every object, we sort the list based on relative peak strength and output as CSV.

best_matches = sorted(candidates, key=lambda match: match[0], reverse=True)
for m in best_matches:
   print("{0},{1},{2}".format(m[2], m[1], m[0]))

Extra Geek Credits: Autocorrelation vs. Power Spectrum

If you’ve made it here, I applaud you. You should have a pretty good understanding of the code by now.  It’s about to get a lot geekier…

Autocorrelation

Mathematically, autocorrelation is a very simple operation. For a real-valued signal, you simply sum up every sample of the signal multiplied by a sample from the same signal, but with a time shift. It can be written like this:

a(τ) = Σ x(t) · x(t + τ)

Let’s say a signal is repeating itself every 60 minutes. If we set the lag, τ, to 60 minutes, we should get a better correlation than for any other value of τ, since the signal should be very similar between now and 60 minutes ago.

So one way to find periodic behavior is to repeatedly calculate the autocorrelation and sweep the value of τ from the shortest to the longest period you’re interested in. When you find the highest value of your autocorrelation, you have also found the period of the signal.

Well, at least in theory.

What ends up happening is that a lot of times, it isn’t obvious where the highest peak is and it could be affected by noise, causing a lot of random errors. Consider this autocorrelation plot, for example.

(Screenshot: a noisy autocorrelation plot where the highest peak is hard to pick out)

One thing we notice, though, is that autocorrelations of signals that have meaningful periodic behaviors are themselves periodic. Here’s a good example:

(Screenshot: an autocorrelation plot that is itself clearly periodic)

So maybe looking for the period of the autocorrelation would be more fruitful than hunting for a peak? And finding the period implies finding a frequency, which sounds almost like a Fourier Transform would come in handy, right? Yes it does. And there’s a beautiful relationship between the autocorrelation and the Fourier Transform that will take us there.

The final piece of the puzzle

If you’ve read this far, I’m really impressed. But here’s the big reveal:

It can be proven that autocorrelation and FFT are related as follows:

F_R(f) = FFT[X(t)]
S(f) = F_R(f) · F_R*(f)
R(τ) = IFFT[S(f)]

Where R(τ) is the autocorrelation function.

In other words: The autocorrelation is the Inverse FFT of the FFT of the signal, multiplied by the complex conjugate of the FFT of the signal.

But a complex number multiplied by its own conjugate is equal to the square of its absolute value. So S(f) is really the same as |FFT[X(t)]|². And since S(f) is the step right before the final Inverse FFT, we can conclude that it must be the FFT of the autocorrelation. So we can comfortably state:

The power spectrum of a signal is the exact same thing as the FFT of its autocorrelation!

Why it’s cool

Not only is the power spectrum a nice way of exaggerating peaks to make them easier to find. The peak of the power spectrum can also be interpreted as a measure of the prevailing period of the autocorrelation function, which, in turn, gives a good indication of the prevailing period of the data we’re analyzing.

Caveats

As I said in the beginning of this post, this code isn’t production quality. There are a few things that should be addressed before it’s useful to a larger audience.

  • All data is loaded into memory. That obviously won’t scale. Instead, we should use a streaming JSON parser and maybe split the query up into smaller chunks.
  • The calculation of the local spectral energy is very naive. It’s assuming that all the energy is concentrated in a single spike of unity length. It should be adjusted to account for spikes that are spread out across the spectrum.
  • The accuracy, especially at low frequencies, is rather poor. One possible improvement would be to use FFT to find the candidates and autocorrelation to fine tune the result.

Conclusion

The idea behind this post was to discuss how Python and a math library like SciPy can be used to analyze Wavefront data. Feel free to comment and to fork the code and make improvements!

 

Exporting metrics from vRealize Operations to Wavefront

Introduction

You may have heard that VMware recently acquired a company (and product) called Wavefront. This is essentially an incredibly scalable SaaS solution for analyzing time-series data with an emphasis on monitoring. With Wavefront, you can ingest north of a million data points per second and visualize them in a matter of seconds. If you haven’t already done so, hop on over to http://wavefront.com/sign-up and set up a trial account! It’s a pretty cool product.

But for a monitoring and analytics tool to be interesting, you need to feed it data. And what better data to feed it than metrics from vRealize Operations? As you may know, I recently released a Fling for bulk exporting vRealize Operations data that you can find here. So why not modify that tool a bit to make it export data to Wavefront?

A quick primer on Wavefront

From a very high level point of view, Wavefront consists of three main components:

  • The application itself, hosted in the public cloud.
  • A proxy you run locally that’s responsible for sending data to the Wavefront application.
  • One or more agents that produce the data and send it to Wavefront.

You may also use an agent framework such as Telegraf to feed data to the proxy. This is very useful when you’re instrumenting your code, for example. Instead of tying yourself directly to the Wavefront proxy, you use the Telegraf API, which has plugins for a wide range of data receivers, among them Wavefront. This allows you to de-couple your instrumentation from the monitoring solution.

In our example, we will communicate directly with the Wavefront proxy. As you will see, this is a very straightforward way of quickly feeding data into Wavefront.

Metric format

To send data to the Wavefront Proxy, we need to adhere to a specific data format. Luckily for us, the format is really simple. Each metric value is represented by a single line in a textual data stream. For example, sending a sample for CPU data would look something like this:

vrops.cpu.demand 2.929333448410034 1505122200 source=vm01.pontus.lab

Let’s walk through the fields!

  1. The name of the metric (vrops.cpu.demand in our case)
  2. The value of the metric
  3. A timestamp in “epoch seconds” (seconds elapsed since 1970-01-01 00:00 UTC)
  4. A unique, human readable name of the source (a VM name in our case)

That’s really it! (OK, you can specify properties as well, but more about that later)

Pushing the data to Wavefront

Now that we understand the format, let’s discuss how to push the data to Wavefront. If you’re on a UNIX-like system, the easiest way, by far, is to use the netcat (nc) tool to pipe the data to the Wavefront proxy. This command would generate a data point in Wavefront:

echo "vrops.cpu.demand 2.929333448410034 1505122200 source=vm01.pontus.lab" | nc localhost 2878

This assumes that there’s a Wavefront proxy running on port 2878 on localhost.

Putting it together

So now that we know how to get data into Wavefront, let’s discuss how we can use the vRealize Operations Export Tool to take metrics from vRealize Operations to Wavefront. The trick is the newly added output format called – you guessed it – wavefront. We’re assuming you’re already familiar with the vRealize Operations Export Tool definition file (if you’re not, look here). To make it work, we need a definition file with an output type of “wavefront”. This one would export basic VM metrics, for example:

resourceType: VirtualMachine
rollupType: AVG
rollupMinutes: 5
align: 300
outputFormat: wavefront
dateFormat: "yyyy-MM-dd HH:mm:ss"
fields:
# CPU fields
  - alias: vrops.cpu.demand
    metric: cpu|demandPct
  - alias: vrops.cpu.ready
    metric: cpu|readyPct
  - alias: vrops.cpu.costop
    metric: cpu|costopPct
# Memory fields
  - alias: vrops.mem.demand
    metric: mem|guest_demand
  - alias: vrops.mem.swapOut
    metric: mem|swapoutRate_average
  - alias: vrops.mem.swapIn
    metric: mem|swapinRate_average
 # Storage fields
  - alias: vrops.storage.demandKbps
    metric: storage|demandKBps
 # Network fields
  - alias: vrops.net.bytesRx
    metric: net|bytesRx_average
  - alias: vrops.net.bytesTx
    metric: net|bytesTx_average

If we were to run an export using this configuration file, we’d get an output that looks like this:

$ exporttool.sh -H https://host -u user -p secret -d wavefront.yaml -l 24h 
vrops.cpu.demand 1.6353332996368408 1505122200 source=ESG-NAT-2-0 
vrops.cpu.costop 0.0 1505122200 source=ESG-NAT-2-0 
vrops.mem.swapOut 0.0 1505122200 source=ESG-NAT-2-0 
vrops.net.bytesTx 0.0 1505122200 source=ESG-NAT-2-0 
vrops.host.cpu.demand 15575.7890625 1505122200 source=ESG-NAT-2-0 
vrops.mem.swapIn 0.0 1505122200 source=ESG-NAT-2-0
...

The final touch

Being able to print metrics in the Wavefront format to the terminal is fun and all, but we need to get it into Wavefront somehow. This is where netcat (nc) comes in handy again! All we have to do is to pipe the output of the export command to netcat and let the metrics flow into the Wavefront proxy. Assuming we have a proxy listening on port 2878 on localhost, we can just type this:

$ exporttool.sh -H https://host -u user -p secret -d wavefront.yaml -l 24h | nc localhost 2878

Wait for the command to finish, then go to your Wavefront portal and go to “Browse” and “Metrics”. You should see a group of metrics starting with “vrops.” If you don’t see it, give it a few moments and try again. Depending on your load and the capacity you’re entitled to, it may take a while for Wavefront to empty the queue.

Here’s what a day of data looks like from one of my labs. Now we can start analyzing it! And that’s the topic for another post… Stay tuned!

The viking is back!

I’m back!!!

Hi folks!

I’m sorry I’ve been absent for such a long time, but I’ve been busy working on a cool project I unfortunately can’t talk about publicly… yet.

But now that I’ve surfaced again, check out my blog at VMware’s corporate Cloud Management blog!

https://blogs.vmware.com/management/2017/05/exporting-metrics-vrealize-operations.html

Expect more activity here in the very near future. And see you at DellEMC World next week! I’ll be in the Cloud Native Apps section of the VMware booth.