Creating a Third Axis In Excel

One of the challenges that came up when I was creating the time series graph of a 9,000 ton chiller plant load profile that I showed in my previous post was that I wanted to plot data series whose values differed by several orders of magnitude.


In other words, to get something visually meaningful1, I needed to plot:

  • Temperatures, which would all fall into the 0-100°F range, against
  • Tonnages and flow rates, which would fall into the 0 – 15,000 gpm/ton range, against
  • The number of chillers running, which would fall into the 1-10 range

The Issue

As you probably know, Excel lets you add a secondary axis to your charts, but, as far as I know, that is where it stops, at least in terms of being able to do it with the chart design tools.  Prior to the insight that led to the technique I will show in this post, the way I dealt with the need to plot more than two data series with wildly different orders of magnitude was to scale one or more of them so they would be visually meaningful on one of the two axes I had available, and then include the scaling factor in the name of the series.

For instance, to plot the number of chillers running on the same axis as temperature, I might have multiplied the number of chillers running by 10 and then plotted it as Number of Chillers Running x 10.  That worked, but it was kind of confusing.  Having a third axis to dedicate to a third order of magnitude range (or a 4th or 5th or 6th if you needed them) makes it easier for me (and I think others) to intuitively read the chart. 

The “Trick”

My trick for adding an additional axis (or more) to an Excel chart is to create a data series, plotted vertically against the X axis, that is scaled to reflect the range I need so that it spans the entire height of the chart. 

Then, I scale my data so it is visually meaningful on one of the two real axes that are available.  By providing the additional axis, tied to the data series via its name, a person using the chart can read the scaled data against the extra axis.
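If it helps to see the idea in code form, here is a little Python sketch of it (the numbers and names here are mine, made up for illustration, not anything from the example workbook): the third-axis data never really leaves the host axis; it just gets stretched and shifted to fit.

```python
def scale_for_host_axis(values, scale, offset=0.0):
    """Return the values as they will actually be plotted on the host axis.

    The 'third axis' is only a picture; the data really lives on one of
    Excel's two built-in axes, stretched by `scale` and shifted by `offset`.
    """
    return [v * scale + offset for v in values]

# A series in the 0-0.0125 range, stretched by 100 so it is visible
# on a host axis that runs roughly 0-10 (hypothetical numbers):
raw = [0.0, 0.005, 0.0125]
print(scale_for_host_axis(raw, scale=100))  # [0.0, 0.5, 1.25]
```

The extra axis we will build later simply labels those plotted positions with the original, unscaled numbers.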

That is the trick in a “nutshell”.  But I thought it might be useful to walk you through the steps in the process.   Moving forward, I will discuss this in the context of having three data series, each of which will be associated with a different axis.  But:

  • Multiple data series with similar ranges and magnitudes can share an axis, including the third axis we will discuss creating, and
  • If you have four (or five or six, etc.) radically different data sets, you can use the third axis technique I will discuss to create additional supplementary axes.

The Data Set

To demonstrate the concept, I created a little data set with three very different orders of magnitude associated with the three data series.


Incidentally, if you want to “play along”, I put the basic data set along with  my example spreadsheet in a zip file that you can download from the Excel Third Axis tool page on our Commissioning Resources web site.

Overview of the Problem

If you plot Value 3 (orange) in the data set above by itself against an appropriately scaled axis, it has an obvious wave form associated with it.


But, if you plot the Value 3 (orange) data set against an axis ranged appropriately for Value 1 (red), along with Value 2 (green) plotted against a secondary axis that is appropriately scaled for the Value 2 range, Value 3 looks like a flat line.


Obviously, if you tried plotting Value 3 against the Value 2 axis, it would be an even flatter line given the magnitude of the range on the green axis relative to both the Value 1 axis and the Value 3 data set range.

Adding a third axis dedicated to Value 3 solves the problem, as can be seen below.  Note that I plotted the Value 3 axis and the associated line in blue and the actual data behind the line as orange markers on the blue line.


As you can see, you can now read the values for the points in the Value 3 data series directly off of the blue axis.  For instance, as illustrated above, Value 3 = .0075 when X = 7.

Sometimes, I want to make the series as visually large as possible compared to the other series, which is what I did for the previous image.  In other words, I wanted to provide the broadest visualization of the data relative to the other series that the dimensions of my chart would allow.

But, other times, I might want to scale things so that the data series are visually meaningful, but also so they do not lie on top of each other, like this.


That means that the decisions you will make to set up the third axis are a function of how you want to display your data.  You can make a case for either way depending on the scenario. 

For this example, I will set the axis up to plot the curves so they are not on top of each other, as illustrated above.  

Step 1 – Decide Which Data Series will use the “For Real” Excel Axes and Which will use the Third Axis

In the bigger picture, the first step in my process is to pick a scaling factor for your axes.  But to do this, there are actually a number of things you need to think about. 

  1. Which data sets will use the “for real” Excel axes and which data sets will use the third  axis you are going to create?
  2. How should the two Excel axes be scaled?
  3. How should your third axis be scaled?

The third axis is actually a fake axis in the context of plotting data.  In other words, while it will allow you to read an associated data series as if it were plotted against it, you are actually plotting a scaled version of your third axis data set against one of the other two axes provided by Excel.  That means that when you are thinking about the different scaling factors, you kind of have to think about them interactively, since the third axis will be influenced by the scaling factor associated with whatever axis you use to actually plot the data.

I typically start by looking at the maximum and minimum values for all of the data series and seeing if any two of them have a numerical range that is similar in terms of the values but different in terms of order of magnitude.  For example, a data set with a range of 0 to 2 is numerically similar to a data set with a range of 0 to 2,000, but different by a factor of 1,000.   Neither of those data sets is similar to a data set with a range of –0.175 to 0.25, numerically or in terms of order of magnitude.
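That eyeball test is easy to automate if you are so inclined.  Here is a minimal Python sketch of it (the two series are made-up stand-ins for the ranges mentioned above, not the actual example data):

```python
import math

def axis_range_profile(values):
    """Summarize a series the way I eyeball it: min, max, and the
    order of magnitude of its span (assumes the span is nonzero)."""
    lo, hi = min(values), max(values)
    return lo, hi, math.floor(math.log10(hi - lo))

# Hypothetical series echoing the ranges in the text; the spans differ
# by a factor of 1,000, so a scaled copy of one can share the other's axis:
a = [0, 1.3, 2]        # span ~2     -> order of magnitude 0
b = [0, 800, 2000]     # span ~2,000 -> order of magnitude 3
print(axis_range_profile(a))  # (0, 2, 0)
print(axis_range_profile(b))  # (0, 2000, 3)
```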

When I looked at the example data set in this manner …


… it struck me that the Value 1 and Value 3 data sets met this criterion.  So, I decided that I would use the “for real” Excel axes for Value 1 and Value 2 and create my third axis for the Value 3 data set, which I would plot as scaled data against the Value 1 axis.

Step 2 – Select Scaling Factors for the “For Real” Excel Axes

Now that you have settled on which data will use the built-in Excel axes, you need to come up with a scale for those axes.  What you pick will depend on how you want to present the data (overlapping or not), as I discussed previously. 

Since, for this example, I targeted non-overlapping data series, I needed to select scaling factors that would do two things.

  1. Allow the data, when plotted, to be visually meaningful.
  2. Create a “window” for the third axis data series to reside in so that it would not overlap the other data series.

This is somewhat arbitrary and I usually make the decision by starting my chart, adding a data series for each of the Excel axes, and then playing with things.   Here is what I ended up with, which basically created a “window” for my third axis data series between the Value 1 and Value 2 data.

I could have also set it up so that the “window” was above the Value 1 data series or below the Value 2 data series;  like I said, it is kind of arbitrary.

One other point I should make before moving on to the next step is with regard to the large gap between the end of the data on the right side of the chart and the right (green) secondary axis.   That gap is not there because my bifocals created a distortion that kept me from noticing it.

Rather, I created it intentionally because that is the space where we will place our third axis when we get to that step.  I could have put it on the left side of the chart, made it narrower, made it wider, etc.  Those are all arbitrary decisions.  But my point is that before the we finish, we will need a space for the third axis, so I went ahead and created it at this point in the process.

Step 3 – Select a Scaling Factor for the Third Axis

At this point in my process, my focus is on selecting a scaling factor for the third axis that will make it visually meaningful when I plot it.   In some instances, such a scaling factor will also place that data on top of the other data on that axis if they are numerically very similar.  That was the case for this data set as you can see from this table comparing the scaled Value 3 data with the Value 1 data and the associated graphs.




That would not necessarily be true if the Value 1 and Value 3 data were numerically similar-ish instead of numerically similar, as can be seen from this table and its associated chart.



When I first started messing around with this, after going through a seemingly infinite number of variations on charts similar to the first two in this section and still ending up with the lines on top of each other, I realized that what I needed was a scaling factor along with an offset. 

Kind of a “Well Duh” moment, but as Jay Santos often says:

Engineers are empirical learners

In any case, by applying that principle,  I was able to achieve the following “look” using a scaling factor of 100 with an offset of –2.
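Expressed as code (a Python sketch of the arithmetic only, not anything that lives in the workbook), that means the plotted position of a Value 3 reading is just the value times 100, minus 2:

```python
def plot_value(v, scale=100, offset=-2):
    """Position on the host (Value 1) axis for a Value 3 reading,
    using the scale factor of 100 and offset of -2 from the text."""
    return v * scale + offset

# The reading discussed earlier (Value 3 = .0075 at X = 7) would plot
# at 0.75 - 2 = -1.25 on the host axis:
print(plot_value(0.0075))
```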


Step 4 – Create the Third Axis Line

At this point, if you are “playing along”, you would not have a chart like the ones I have been using to illustrate the impact of various scaling factors and offsets because we have not discussed how to create that axis yet.  But you probably have a table that looks something like this …


… and maybe even a chart that looks something like this if you have plotted the data.


Our next step will be to start to build our third axis.  

At a fundamental level, the third axis is just a plot of a data set where:

  • The X values are all the same and correspond to the value on the X axis where we want the third axis to appear, and
  • The Y values are selected so the data points are evenly spaced on the line and fall on the chart’s major grid lines.

For me, the easiest way to do that was to build a little tool that would create the table I needed for plotting the line based on the properties I wanted the axis to have, which are:

  • The minimum value on the Y axis that corresponds to the minimum value on my third axis, and
  • The maximum value on the Y axis that corresponds to the maximum value on my third axis, and
  • The number of even increments I want to have on my third axis, and
  • The X value associated with the location of the third axis.
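Those four properties are all the tool really needs.  As a sketch of what its formulas compute (in Python rather than Excel, and with hypothetical numbers), the axis line is just a run of evenly spaced points at a fixed X:

```python
def third_axis_line(y_min, y_max, increments, x_location):
    """Build the (X, Y) pairs for the fake axis line: a vertical run of
    points at a fixed X, evenly spaced so each lands on a major grid line."""
    step = (y_max - y_min) / increments
    return [(x_location, y_min + i * step) for i in range(increments + 1)]

# A hypothetical axis spanning -2 to 2 in 8 increments, drawn at X = 11:
for x, y in third_axis_line(-2, 2, 8, 11):
    print(x, y)
```

Plotting those pairs as an X-Y series against one of the real axes draws the vertical line the rest of the steps dress up.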

Here is what my little tool looks like.


Here is the same thing with the formulas made visible.


Note that in my tool, if I change the Value 3 Axis Number of Major Increments (cell F27), I need to manually insert some rows in the dark red area and copy and re-paste the formulas in cells I31 and J31 into the same columns in the rows below.   I suspect you could make that happen automatically with some VBA code if you wanted to.

But having set up my table, I now can create my 3rd axis line by plotting the Third Axis X and Third Axis Y Values (Columns I and J, rows 30 – 38) on my chart against the primary Y axis (the red axis on the left).


Step 5 – Adding the Tick Marks

Next, we need to add the “tick” marks (the short lines next to the axis that fall on the grid lines and reference the number associated with that point).  We do that by adding markers to the vertical line we just plotted for the third axis. 

More specifically, we select the line we just created and then pick the “Format Data Series” option.  I do that by clicking on the line, right clicking, and selecting it from the little window that opens up.  But you can also do it using the “Format” menu associated with the Chart Tools at the top of the page. (Heck, for all I know, there could be a shortcut key that lets you do it with one click in combination with holding down Ctrl, Alt, and 8 other keys concurrently with Revolution Number 9 playing in the background.)

But no matter how you get there, once the Format Data Series window opens, the data points will be highlighted (the red arrow points to what I mean).  From there, by selecting “Marker Options” (the blue arrow is pointing to it), you can select the marker type you want (where the orange arrow is pointing) and set all of the properties, like the line weight, the color, etc.  For my chart, I selected the marker that is a short, horizontal bar and made it the same color and width as the line I plotted for the axis. 


In fact, if you wanted to, once you were at this point, you could click on an individual marker and make it different from all of the rest.  I’ll give an example of why you might want to do that in the next section. 

For the time being, here is my result after adding the markers.


Step 6 – Adding the Axis Tick Mark Labels

Next, we need to put numbers beside the tick marks on the third axis we created.  Excel allows you to put a label with each data point in a data series, and we will use that feature to do it.  You can get to it by hovering over the data series, right clicking, and selecting the “Format Data Labels …” option.


But first, we need to set up the number that will appear in the label.  This is fairly straightforward. 

Specifically, if you were not offsetting the data, you would simply scale the Y axis labels using the same scaling factor you developed for plotting the data.  But if you are offsetting the data in addition to scaling it, then you also need to adjust for the scaled offset.  I do this by adding a column to the table in my little tool.  Here is what that looks like, along with a view of it with the formulas exposed.
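In other words, the label column just inverts the plotting transform.  Here is that inversion as a Python sketch (using the scale of 100 and offset of -2 from this example; the tick positions are hypothetical):

```python
def tick_label(y_plotted, scale=100, offset=-2):
    """Recover the third-axis value a tick mark represents.

    The data was plotted as: plotted = value * scale + offset,
    so the label for a tick drawn at `y_plotted` on the host axis is:
    value = (plotted - offset) / scale.
    """
    return (y_plotted - offset) / scale

# A tick drawn at -1.0 on the host axis represents a third-axis
# value of (-1.0 - (-2)) / 100 = 0.01:
print(tick_label(-1.0))
```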



Now, the trick becomes getting the third axis labels to show up next to the tick marks on the blue line we created to represent the third axis.   That is where the “Format Data Labels …” option I mentioned above comes in.

If I hover over my blue third axis line, right click, and select that option, I end up with a formatting window on the side of my screen very similar to any of the other formatting windows I get when I have selected an object and picked the formatting option.


Notice how the locations where the labels will appear are now highlighted on the graph (the red arrow points to what I mean), and also that when I selected “Label Options” by clicking on the three little green bars (where the orange arrow is pointing), I have a place where I can select where the label contents come from (where the blue arrow is pointing).

By checking the “Value From Cells” check box, I can select my scaled values and have them appear next to the axis.  Note that when you do this, you may need to uncheck the “Y Value” check box, which is checked by default, because you can use multiple sources for the labels and we probably don’t want that in this case.

You can also format things like the font, the font color, the text box fill, etc. to get the look you want, just like you would for any other Excel object you were formatting.  Here is where I ended up after going through those steps.


You may notice that the most negative value on the third axis is hard to read because it lies on the X axis line.   You can fix that by selecting that particular label and adding fill to the text box (at least that is how I solve that problem).


I also inserted a rectangular shape, formatted it to match the chart background, and located and sized it to cover up the 11 and 12 on the X axis since they are superfluous for the purposes of my chart.

And because the values on my axis transition from positive to negative, but for the scaling factor and number of increments I selected, 0 (zero) is not specifically called out, I may want to change the font, tick, and line color for the negative part of the axis to highlight the transition.


Of course, it would also be possible to play with your scaling factors, offsets, and increments until you ended up with a scale that had zero on one of the major grid lines.  But given all the variables in play, which for this example included:

  • Separating the data series, and
  • Having a set of grid lines that hit even, meaningful divisions on all three axes, and
  • Having all of the data be visually meaningful

that can actually be tricky.

Bear in mind that how far you go with that type of stuff is a matter of personal choice. Just because you can do it doesn’t mean you should  do it.  And what makes sense to you may not make sense to others.  For instance, to a color blind person, the little nuance I just illustrated is not very helpful.

Adding a Few Whistles and Bells

Pointer Lines

At this point, we have successfully (I hope) created our third axis.  If you needed even more axes, you could go through these same steps and add them.  

But no matter how many axes you add, it is good to take a reading or two using them to make sure you didn’t zig when you should have zagged somewhere in the development process.  

To facilitate that, I often add a little feature to my charts that makes it easy to precisely read data from the chart and also to highlight something for someone else viewing the chart.   More specifically, I add little arrows that point to the value on the third axis (or any axis I want) associated with a given value on the Y axis.  You may have already noticed them in some of the charts I was using earlier on in the post to illustrate the effect of different scaling factors.


The lines are generated in a manner very similar to the third axis.  In other words, the vertical line is just a series that is plotted with two points:

  1. One point has an X coordinate equal to the X value of interest and a Y value equal to the minimum value on the Y axis that you are plotting the series against.
  2. The other point also has an X coordinate equal to the X value of interest, but the Y value is the Y value of the data point you want to read.

The horizontal line is generated in a similar manner, but the Y values are constant.  I do all of this by building a little table that lets me enter the X and Y values of choice, and then sets up the other points in the table to generate the lines. 
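The geometry in the steps above translates to code pretty directly.  Here is a Python sketch of it (the axis limits and the point of interest are hypothetical numbers, not values from my spreadsheet):

```python
def pointer_lines(x, y, y_axis_min, x_axis_min):
    """Two 2-point series that form the crosshair: a vertical line from
    the bottom of the chart up to (x, y), and a horizontal line from
    the left edge of the chart over to (x, y)."""
    vertical = [(x, y_axis_min), (x, y)]
    horizontal = [(x_axis_min, y), (x, y)]
    return vertical, horizontal

# Pointer lines aimed at the point (7, -1.25) on a chart whose host
# Y axis starts at -2 and whose X axis starts at 0:
v, h = pointer_lines(x=7, y=-1.25, y_axis_min=-2, x_axis_min=0)
print(v)  # [(7, -2), (7, -1.25)]
print(h)  # [(0, -1.25), (7, -1.25)]
```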


In this particular example, I use VLOOKUP to come up with the Y value associated with an X value.  But, I can also manually enter the point, or I could even develop the equation for the line I am looking at using Excel’s trendline feature and then have it calculate the Y value for the X value I entered.
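For what it is worth, VLOOKUP in approximate-match mode snaps to the nearest tabulated row at or below the X value; if you want to read between the rows, a small linear interpolation routine does the trick.  Here is a Python sketch of that idea (the table values are hypothetical, not the ones in my spreadsheet):

```python
def lookup_y(series, x):
    """Linearly interpolate a Y value from a list of (x, y) pairs
    sorted by x, like VLOOKUP but reading between the rows."""
    for (x0, y0), (x1, y1) in zip(series, series[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside the tabulated range")

# Hypothetical table; reading Y at X = 4.86 lands 86% of the way
# between the rows at X = 4 and X = 5:
table = [(4, 0.0), (5, 0.0025), (6, 0.0050)]
print(lookup_y(table, 4.86))
```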

In terms of checking things, the easiest points to check, of course, are ones that by chance happen to lie on or near a grid line.   You really don’t need my little lines to do that, but for illustration purposes, if I have done things correctly, then the value on the third axis associated with X=5 should be just slightly above the .0025 tick mark/grid line on the third axis …


… which seems to check out in the graph above.  And the values associated with X=3 and X=9 should be just slightly above the –.0100 tick mark/grid line.


So things seem to check out.

Using Pointer Lines to Precisely Read the Data from Your Third Axis Series

Obviously, you could also do what I just suggested by simply inserting shapes on the chart where you wanted them.   But if I make them plotted lines, the other thing you can do with them is use them as a tool to precisely read the chart.  

In other words, by setting up your table so that you can pick an X value and then “play” with the Y value until the two lines meet at exactly the point of interest, you can “read” the Y value because it is the number you have in the cell that you were playing with. 

In this example, I have my little table set up to adjust all of the other values based on what I enter in the yellow X cell and the orange Y cell.  The vertical orange line is the series in the red box plotted against the left axis and the horizontal orange line is the series in the blue boxes plotted against the left axis. 


To use the tool in this manner, I set X to the value of interest.  In this case, I picked an arbitrary value of 4.86 and entered it in the yellow cell. 

I will arrive at the Y value by “tweaking” (which is the technical term for “playing with”) the Y value in the orange cell and zooming in to look closely at where the two orange lines meet on the blue waveform.  I will continue the highly technical process of “tweaking” until the two lines meet exactly in the middle of the blue line.

To start that process for this example, I made an educated guess at about what the Y value should be when X was 4.86 by considering the fact that the major division for the blue third axis was .0125 and then “eyeballing” (another technical term) that the point where the vertical line intercepted the blue wave form was going to fall at least 4/5ths of the way between –0.0100 and +.0025.  I used 4/5ths because .0125 is pretty easy to divide by 5 in your head, even at my age. 


  • One fifth of .0125 is .0025, meaning
  • Four fifths of .0125 would be .01, and
  • -0.0100 plus .01 will be about 0.

So I started there. 


Obviously, I didn’t even need to zoom in to see that I needed to do some additional “tweaking”.  But I had kind of expected that;  my first estimate was just to get me in the “ball park” of where I needed to be.  As a result, I now knew that the number I was looking for was somewhere between 0.0000 and 0.0025. 

Visually, it looked like it would be about half way between the two. So I tried that.


(Since my cell was formatted for 4 decimal places, 0.00125 got rounded to 0.0013)

That looked pretty good, and probably was “good enough for government work”.   But when I zoomed in to see exactly how good it was, I could see that I was just a bit low.


If you look closely, you can see that the two orange lines intersect towards the bottom of the blue line vs. exactly in the middle.  So, I arbitrarily made another “tweak” and adjusted Y to .0015, and as scientific people often say, “Ta-Da”.



How many “tweaks” you make is a function of how anal you are (I can be pretty anal).  My real point is that this approach lets you come up with a pretty exact value by reading your chart.

Obviously, if I knew the equation of the line, I could have just calculated what Y was given a value of X.  But a lot of times, when I am using this approach to read my chart, I am looking at some wild and crazy trend data that I pulled from a logger or control system or utility meter and there is no equation.  In fact, one of the reasons for plotting the data was so I could pick Y values from the data series based on an X value I selected.

Focusing Attention

One last “trick” before I stop. 

When you start trying to show a lot of data on one graph, doing things to help people correlate the information can be helpful.  For instance, I color coded my axes to match  my data series in an effort to help a (non-color blind) person’s “mind’s eye” make the connection.

I potentially could further enhance that by restricting the range of the axes as shown below.


For the third, blue axis, I accomplished this by formatting the text in the data labels I wanted to hide so that it was transparent.   For the red and green axes, I did it by inserting a rectangle shape, formatting it to match the chart background, and then sizing and positioning it so that it covered the part of the axis and related tick marks that were outside the range of the data I was presenting on the axis.


David Sellers
Senior Engineer – Facility Dynamics Engineering
Visit Our Commissioning Resources Website

1.  What I mean by the term “visually meaningful” is that the scale of the axis associated with a data series allows its wave form to be seen.  If the peak to peak value of a wave form is small relative to the axis it is plotted against, it will come out looking like a flat line, even though it actually is not flat at all.

Posted in Data Logging, Excel Techniques, HVAC Calculations

A New Application for Plot Digitizer (Plus a Quick Look at Hydraulic Variable Speed Drives and Chiller Free Cooling Cycles)

Those of you who know me know I am quite enamored with a little freeware application called Plot Digitizer.  If you are unfamiliar with it, the application allows you to create CSV (Comma Separated Value) files from the lines in an image.  Meaning that if you have, for instance, a pump curve as a .pdf or .jpg file, then you can pretty quickly capture the curve shapes and load them into a spreadsheet to create a chart that is an electronic version of the curve. 


Once the curve is in the form of a spreadsheet, you can do math on the lines in it.  For instance, you can use the affinity laws to project a new impeller size from a known impeller size.  Or you can plot a system curve from the data you collect in a pump test and the square law.
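As a quick Python sketch of that kind of math (the curve point here is hypothetical, not from a real pump): the affinity laws say flow scales with speed and head scales with the square of speed, and the square law gives the system curve shape.

```python
def affinity_scale(flow_gpm, head_ft, speed_ratio):
    """Pump affinity laws at constant impeller diameter:
    flow scales with speed, head with speed squared."""
    return flow_gpm * speed_ratio, head_ft * speed_ratio ** 2

def system_head(flow_gpm, design_flow_gpm, design_head_ft):
    """Square-law system curve through a known design point
    (ignores any fixed static head, as a simplification)."""
    return design_head_ft * (flow_gpm / design_flow_gpm) ** 2

# A hypothetical digitized curve point, rescaled to half speed:
q, h = affinity_scale(1000, 100, 0.5)
print(q, h)  # 500.0 25.0
```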


This link will take you to a page on our commissioning resources web site where I provide more information, and this link will take you to a page where I provide a spreadsheet template that will let you create a formatted pump curve pretty easily from the CSV files you capture with plot digitizer.

My goal in this post is to show you an idea that came to me one day out in the field that saved the day for me and involved using Plot Digitizer in a way I had not thought of before.  Since it happened while I was working at a really interesting chilled water plant that had some unique features, I thought I would give you a peek at those things also.


There were two cool features in the plant I will be discussing that I wanted to highlight in addition to illustrating my new Plot Digitizer application.  But if you want to jump straight to that, the links below will let you do that (or jump to any of the other topics for that matter).  Each section has a Back to Contents link that brings you back here.

Setting the Scene

Last October, I had the opportunity to support a field class that used the central chilled water plant at the Gaylord Grand Ole’ Opry as a part of the Building Commissioning Association Annual Conference (formerly called the National Conference on Building Commissioning or NCBC).  (And still called that by older folks like me who forget they changed the name).  It was a 9,000 ton plant;  one of the largest I have been around for a while. 


The plant was a variable flow primary/secondary plant.  Unlike current technology chillers, back when this plant was designed, the chiller technology would not deal well with flow variation in the evaporator.  In fact, in the olden days, before we had realized that we needed to pay attention to energy efficiency, the most common chilled water plant design configuration was a constant flow arrangement and a big driver for that was protecting the chiller tube bundles from frosting up and freezing.  But you ended up with large pumps moving the design flow rate at the design head for all of the operating hours.

The variable flow primary/secondary design evolved as a way to allow the flow rate to the loads to vary with the load profile, saving a significant amount of pump energy, while maintaining a steady flow rate through the chillers, which protected them.  My point in bringing this up here is simply to let you know about the configuration of the plant I am about to discuss, not to explain variable flow primary/secondary plant theory.  But, if you want to know more about that, you will find a couple of resources at this link.

Return to Contents

The Quick Look

Aside from the size and quality of the plant, there were two technologies they had in place that provided for some added interest.  So, I wanted to briefly highlight them here so you recognize them if you run into them.

Return to Contents

Hydraulic Variable Speed Couplings

One unique feature was that the distribution pumps had hydraulic variable speed couplings on them.


A hydraulic coupling is very similar to the torque converter in an automatic transmission and is the blue piece of machinery between the motor (the dark gray thing on the left) and the pump on the right (the black thing with the silver pipes attached to it) in the picture above. 

Here is a picture of the drive itself, including the heat exchanger that rejects the heat associated with the efficiency losses.


While fairly efficient at full speed and full load, these drives are much less efficient than a current technology variable frequency drive at part load.  That means they represent a good retrofit target and the plant operating team has that on their list of improvements for this year. 

From a retrocommissioning standpoint, for a project where you might need to document the inefficiency of the drive to support your case for replacing it, it’s kind of cool that you could pretty easily document the efficiency of the drive by logging flow and temperature rise across the heat exchanger, because that is where the losses show up.

This is the actuator that controls the output speed of the drive by varying how much of the oil that is moved by the impeller in the drive reaches the turbine.


Another interesting thing about this technology, when you contrast it with the variable frequency drive most of us are more accustomed to, is that the motor is ahead of the drive.  That means the motor needs to be sized for the brake horsepower the pump needs at its input shaft, plus the losses in the drive.  In contrast, a variable frequency drive serving a motor serving a pump supplies the pump energy plus the motor efficiency losses.

In this case, the pump bhp requirement is probably in the range of 400 – 425 bhp and the motor is a 450 hp motor.  Thus, having the drive losses be part of the load that the motor had to serve probably did not affect the motor selection.  But I suspect there are instances where the hydraulic drive would have kicked up the motor size by one increment due to its location in the “food chain”.
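As a hypothetical illustration of that sizing arithmetic (the 95% full-speed coupling efficiency here is a number I made up for the example, not a figure from this plant or drive):

```python
def required_motor_hp(pump_bhp, drive_efficiency):
    """With the motor ahead of a hydraulic coupling, the motor carries
    the pump brake horsepower plus the coupling's losses."""
    return pump_bhp / drive_efficiency

# Hypothetical: a 425 bhp pump behind a coupling that is 95% efficient
# would need roughly 447 hp at the motor, still inside a 450 hp frame:
print(round(required_motor_hp(425, 0.95)))
```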

You may wonder why anyone would even use a hydraulic drive.  My guess is that at the time they were installed, a variable frequency drive for a 450 hp, 4,160 volt motor would have been pretty spendy.  

For example, in 1980, when I specified my first variable speed drive for a 40 hp air handling unit motor, my choices were a variable frequency drive that cost about $50,000 and was the size of two motor control center sections.  Or, I could use an eddy current clutch which cost about $20,000.  The eddy current clutch was significantly less efficient at part load, but given the price difference, it was the better choice at the time.

So my guess is that a similar economic assessment prevailed when they built this plant and these drives probably made sense back then.  Plus, they are basically mechanical devices so a mechanically inclined person can probably fix one in a pinch.

Given that the point of this post is something other than hydraulic couplings, I’m not going to go any further into them for now.  But TMEIC’s brochure titled Selecting Variable Speed Drives for Flow Control provides a lot of good information regarding how they work along with comparing them to variable frequency drives.

Return to Contents

Chiller Based Free Cooling Cycle

Most people in the industry are familiar with water side free cooling cycles that leverage the capacity of cooling towers at low wet bulb temperatures to create chilled water directly without the need to operate a chiller.  Typically, this involves operating the cooling towers to produce water colder than is required by the chilled water system and using a plate and frame heat exchanger between the condenser water system and the chilled water system to transfer the energy. 

A picture of a typical plate and frame heat exchanger is shown below along with a little model I have that shows all of the parts.


Here are a few pictures of some actual plates.


My point here is that there is another way to accomplish the cycle without the cost of the heat exchanger and the pumps it requires, and several of the chillers in the Gaylord plant were equipped with this feature.

More specifically, some chillers can be configured in a way that allows an operating mode to occur where control valves bypass the compressors and expansion device.  The compressor bypass allows refrigerant vapor to migrate from the evaporator to the condenser due to the vapor pressure difference created if the temperature in the condenser is lower than the temperature in the evaporator.  The expansion device bypass allows liquid refrigerant to circulate by gravity from the condenser back to the evaporator.

That means that if you run the condenser water temperature down below the desired chilled water supply temperature (just like you would if you were going to use a plate and frame heat exchanger for a free cooling cycle), then there will be a natural circulation pattern set up inside the chiller that transfers heat from the (relatively) warm evaporator to the (relatively) cool condenser.

Here are a few slides I use when I teach about this feature, including one that shows the parts and pieces on the chillers in the Grand Ole’ Opry plant.  This first slide shows a schematic of a centrifugal chiller with the two control valves added but with the chiller in the normal operating cycle (warmer colors = warmer temperatures).


This next one shows the free cooling cycle triggered with the valves open and the compressors shut down.

This slide highlights the compressor bypass valve on an actual chiller.


Note that it is in a similar location to where the hot gas bypass connection might be for a chiller of this type.  But, since this cycle needs to work at very small pressure differences, the pipe is much bigger than it would be for a chiller where hot gas bypass was installed. 

Here is a schematic showing the hot gas bypass connection along with a picture of a similar chiller (same manufacturer and product line but about half the tonnage) with a hot gas bypass connection to give you a sense of what that would look like.



The other difference between what a chiller with a free cooling cycle would look like compared to one with hot gas bypass is that the free cooling cycle requires a second valve that bypasses the expansion device.  Hot gas bypass does not require this second valve.  Here is the Gaylord chiller with the second control valve highlighted.


I will probably do a more detailed blog post about this at some point, but for now, that should give you a sense of what the free cooling option looks like on a chiller.

All of my exposure to the free cooling feature on a chiller has been on Trane chillers.  But I suspect other vendors can offer it, assuming their condenser is higher than their evaporator so you can get the gravity flow back, along with some other technical details.  There is a section in this Trane manual that provides a description of the cycle working on one of their machines.  And this page on their website will give you a few more images to look at.

Granted, this adds to the cost of the chiller.  But assuming you can get the capacity you need from the feature, it means you can provide free cooling without buying a plate and frame heat exchanger and piping it into the system.  In other words, the pumps and connections serving the chiller also serve the free cooling cycle on the chiller.  If you had to do it with a plate and frame heat exchanger, you would need to provide all of that for the heat exchanger in addition to the heat exchanger itself, which is a pretty expensive piece of hardware.

Return to Contents

Using Plot Digitizer to Generate Trend Data from a Graphic

O.K.; I will now return to the main reason I put up this post.

One of the reasons I got to spend time in the Gaylord plant is that the operating team had graciously agreed to allow the commissioning conference to use it for a field exercise in the training class I was supporting for the conference.   Originally, I had hoped to use trend data from the plant to illustrate a few techniques I use.  But unfortunately, they were in the middle of a control upgrade and there was no trend data readily available. 

That changed my plans for the class a bit, and time was of the essence since the plan was that I would arrive on site the Friday morning before the class, spend Friday exploring the plant, and then develop the class over the weekend in my hotel room.

Back in my hotel room Friday evening, I found myself studying the pictures I had taken of the chiller graphic displays, longingly wishing the control upgrade was to the point where I could pull the data they contained out of the system.

The pictures below are of the motor data and evaporator data for one of the chillers and will give you a sense of what I was staring at (note that the time scales are slightly different, which is why things don’t line up exactly).



I was actually contemplating doing a manual transcription of the data.  In the olden days, before we had trend data at our fingertips and all we had were log sheets with, if we were lucky, three readings a day (one set taken on each shift), manual transcription and plotting was the approach we were stuck with.

The process was tedious but possible and provided meaningful insights in terms of general trends as long as the data was not highly variable (like a hunting control loop for instance).  And, it is the underlying concept behind the process I use now to leverage trend data to start to assess a plant.

In any case, as I was about to make a pass at manual transcription, it hit me; I realized it would be a lot faster to use Plot Digitizer to trace the lines out and create Comma Separated Value files (CSV files) that I could then import into Excel and manipulate to my heart’s content.

So, I loaded one of the images into the tool and tried it out.   In hindsight, had I thought about doing it at the time I took the pictures, I would have tried to line myself up more directly with the graphic screen to eliminate the impact of parallax.  But I concluded that:

  • Since the angle I used was about the same on all of the shots, and
  • Since I was not as concerned with absolute values as patterns, and
  • Since this was a preliminary analysis and not an exact science

the data I pulled from the photos of the graphics would be good enough for my purposes. 

Return to Contents

Using the Trend Data

The purpose of this post is not to go into the details of my analysis technique (but hopefully in a future post I will).  Rather, it is to illustrate how I got the data into a form where I could use it for analysis.  In general terms, the steps behind what I show towards the end of the post are as follows.

Step 1

For each chiller, I digitized the motor current data as described above using Plot Digitizer.  Specifically, in the first image in the preceding section, I digitized the brownish colored line.

As I clicked across the image in Plot Digitizer to pick up the line, I tried to pick points that would capture the general curve shape, meaning I tried to click on the line anyplace the slope of the line changed significantly, but I didn’t worry too much about ripples that were small relative to the major changes.  The assumptions I would be making in my analysis would make them kind of meaningless. 

This gave me a little table in the form of a CSV file with a date and time in one column and a percent run loaded amps value for that point in time in another column. 


Once I loaded the CSV file into my spreadsheet, I plotted it. 
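Incidentally, the CSV file from Plot Digitizer can also be pulled straight into a script instead of a spreadsheet.  Here is a minimal sketch in Python; the column order, the date format, and the sample points are all assumptions for illustration, not the actual plant data.

```python
import csv
import io
from datetime import datetime

# Made-up sample of what a digitized "time, percent RLA" CSV might look like.
SAMPLE_CSV = """\
2018-03-10 21:21,62.5
2018-03-10 22:45,64.1
2018-03-11 01:21,63.0
"""

def load_digitized_points(text):
    """Return a list of (datetime, percent run loaded amps) tuples."""
    points = []
    for row in csv.reader(io.StringIO(text)):
        when = datetime.strptime(row[0], "%Y-%m-%d %H:%M")
        points.append((when, float(row[1])))
    return points

points = load_digitized_points(SAMPLE_CSV)
print(len(points))     # 3
print(points[0][1])    # 62.5
```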


Step 2

The reason I plotted the chart is that (for me at least), it is a quick way to do data validation.  Things like the negative values and points being “backwards” in time jump out at me in the graph faster than they do in the table, although you can see them in both places if you look closely.  And for me, as I tweak the data to correct for those issues, the chart gives me a quick visual on the validity (or not) of the adjustment I made.

Obviously, the chiller could not pull negative amps, and something in the future could not have happened before something in the past; the reason for the discrepancies was that I was slightly off with some of my mouse clicks when I did the digitization.  In other words, when I am using Plot Digitizer, I am trying to click on a pixel with my mouse that the program then correlates with the pixels I told it represented the X and Y axes for my chart.  If I am off by one or two pixels, I get a value that doesn’t really exist.

So, before using the data, I did a bit of data validation.  Specifically, using the actual image as a reference:

  1. I filtered the data to replace negative values with zero.
  2. I eliminated points that were double clicks on the same point in the line on the image.
  3. I arbitrarily adjusted the data points a second or two either way where there was a sharp drop in the data so that the line was vertical and/or a data point further down the table (which correlates to a point further to the right in the graph image) was slightly later in time than the value ahead of it.

This only took a few minutes and made my data more representative of the actual data in the image I was trying to capture. And it was much faster and more accurate than manually trying to read points from the graph and enter them into a table.  The result of validating the raw data above came out looking like this.
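Those three clean-up rules can be sketched in Python applied to (time, value) pairs.  The one-second nudge mirrors the manual adjustment described above, and the sample data is made up so each rule fires once.

```python
from datetime import datetime, timedelta

def validate(points):
    """Apply the three clean-up rules to a list of (datetime, value) pairs."""
    cleaned = []
    for when, value in points:
        value = max(value, 0.0)                    # rule 1: no negative amps
        if cleaned and (when, value) == cleaned[-1]:
            continue                               # rule 2: drop double clicks
        if cleaned and when <= cleaned[-1][0]:
            # rule 3: nudge a "backwards" point so time keeps increasing
            when = cleaned[-1][0] + timedelta(seconds=1)
        cleaned.append((when, value))
    return cleaned

t = datetime(2018, 3, 10, 21, 21)
raw = [
    (t, -1.2),                          # stray click below the X axis
    (t + timedelta(minutes=5), 50.0),
    (t + timedelta(minutes=5), 50.0),   # accidental double click
    (t + timedelta(minutes=4), 52.0),   # slightly "backwards" in time
]
clean = validate(raw)
print([v for _, v in clean])   # [0.0, 50.0, 52.0]
```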



Step 3

Once I had completed the first two steps for each chiller, I needed a way to combine the data.  Given the technique I used to capture the data in the first place, there was not the proverbial “snowball’s chance in hell” that my time stamps were consistent from chart to chart to chart.

So, I started a new table with the first column being a date and time that incremented by one minute per row.  The first date and time value was manually set and simply corresponded to the earliest time I had in my data set.  All of the other rows were created using an Excel formula that added 1 minute to the value in the cell directly above it in the date and time column.1

In the image below, the table to the left (orange outline) is what the cells looked like for the first few rows of the spreadsheet.  The table to the right illustrates those same cells with the raw values and formulas made visible.


Since the data set I was working with covered about two days, I ended up creating about 2,880 rows with time stamps (2 days times 24 hours per day times 60 minutes per hour).
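In Python terms, that one-minute time stamp column works out to something like this; the start time here is a placeholder, not the actual earliest time stamp in my data set.

```python
from datetime import datetime, timedelta

# Placeholder start time; the real value was the earliest time stamp
# in the digitized data.
start = datetime(2018, 3, 10, 0, 0)

# 2 days x 24 hours/day x 60 minutes/hour = 2,880 one-minute rows
time_column = [start + timedelta(minutes=i) for i in range(2 * 24 * 60)]

print(len(time_column))                 # 2880
print(time_column[1] - time_column[0])  # 0:01:00
```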

Next, I added columns for each chiller and used the VLOOKUP function to go to the table with the digitized data I had created for each chiller in it and fill in the percent run loaded amps value associated with the time stamp in the first column of the row.  Here is what those cells and their formulas looked like for the first few rows.


Certainly, this introduced a bit of an error into my analysis.  For instance, let’s say I selected a point at 12:51 PM when I was digitizing my data and then the next value I picked was at 1:51 PM on the same day because the machine was off or drew the same amount of current for that entire period.  Because of how the VLOOKUP function works, when it scanned my data table, for all of the times between 12:51 PM and 1:50 PM, it would have reported back the percent run loaded amps value that existed at 12:51 PM.

That means that if I was not fairly meticulous in making sure I captured any significant change as I did my digitizing, I would have introduced some potential issues.  But, since I was careful to pick up any meaningful change, this approach would provide a reasonable value for the gap in time. 
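The approximate-match behavior of VLOOKUP described above (return the value for the latest entry at or before the lookup time, so a gap between clicks reads as a flat line) can be sketched in Python with a binary search.  The times and values here are illustrative, not plant data.

```python
from bisect import bisect_right
from datetime import datetime

# Two digitized points an hour apart, mimicking the 12:51/1:51 example.
times = [datetime(2018, 3, 10, 12, 51), datetime(2018, 3, 10, 13, 51)]
values = [48.0, 55.0]

def vlookup_approx(when):
    """Emulate VLOOKUP's approximate match on a sorted time column."""
    i = bisect_right(times, when) - 1
    return values[i] if i >= 0 else None

print(vlookup_approx(datetime(2018, 3, 10, 13, 30)))  # 48.0 (holds 12:51 value)
print(vlookup_approx(datetime(2018, 3, 10, 13, 51)))  # 55.0
```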

In other words, if I could visually see a change in the value of the line I was digitizing relative to the previous point I had clicked on, I clicked again.  When I reviewed my images, I could visually pick up changes in the range of 2% (basically, the “ripple” you can see in the motor data image earlier in the post between about 9:21 PM on the 10th and 1:21 AM on the 11th).  So, as long as I was rigorous and methodical in my digitizing, I probably had not skewed the results for any given data interval by more than 2 or 3%.

And again, I want to emphasize that I was just doing this as a first pass to get me pointed in the right general direction.  In other words, to quote Pat Murphy (the lead estimator at Murphy Company, where I worked for a while)

It’s an estimate, not an exatamate

Step 4

When I am out in the field, I have gotten into the habit of taking pictures of the nameplates of the machinery I am looking at.  In this case, that included the chillers because the chiller nameplates included a lot of very useful metrics, including the nominal chiller tonnage and the nominal kW at full load. 


As you can see, this can be a bit cryptic;  i.e. it does not say “nominal chiller tons”; rather there is a code with a number beside it;  in this case, NTON. 

I happen to be somewhat familiar with the Trane codes so I could read the information from the pictures I took of the nameplates.  But lacking that, usually, if you search around a bit on the internet, you can find something that explains manufacturer nameplate codes, either as a separate document  or in the form of their installation and operation manual for the equipment in question.  Here is an example for Trane centrifugal chillers.


The reason I needed this information was that I wanted to turn the run loaded amps into a tonnage, which I describe next. 

Step 5

To convert the percent run loaded amps to tons, I assumed the relationship between percent run loaded amps and % full load on the chiller was approximately linear.  In other words, if the chiller is at 50% run loaded amps, then it is at 50% of its nominal full load tonnage.

This is far from a perfect assumption, especially at low load conditions and especially if the chiller has hot gas bypass.  But for the equipment I was looking at, there was no hot gas bypass, and the chillers, when running, were typically at 50% load or more.  And, I will remind you again of Pat Murphy’s quote.

In any case, using this assumption, I added another column for each chiller and then for each minute in the data set, I estimated the load on the chiller in tons.  Finally, for each minute in the data set, I added up the tonnages of all of the chillers that were running, which gave me the total tons on the plant for each minute and allowed me to project a load profile and draw some preliminary conclusions.
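Steps 4 and 5 boil down to a couple of lines of arithmetic.  Here is a sketch; the 1,000 ton nominal rating and the percent run loaded amps values are made-up examples, not the actual nameplate or plant data.

```python
# Hypothetical nominal nameplate tonnages for two chillers.
NOMINAL_TONS = {"CH-1": 1000.0, "CH-2": 1000.0}

def chiller_tons(pct_rla, nominal_tons):
    """Linear amps-to-load assumption: 50% RLA means 50% of nominal tons."""
    return (pct_rla / 100.0) * nominal_tons

# Percent RLA for one minute of data; 0 means the chiller is off.
minute_rla = {"CH-1": 62.5, "CH-2": 0.0}

# Plant total for that minute is the sum across the running chillers.
plant_tons = sum(chiller_tons(p, NOMINAL_TONS[ch]) for ch, p in minute_rla.items())
print(plant_tons)   # 625.0
```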

Step 6

Aside from looking at the load profile pattern as a time series, I wanted to look at it in terms of some indicator of a driver behind the load.  For plants serving air handling systems with integrated economizer cycles and/or 100 percent outdoor air systems, up until the point where the economizer high limit kicks in, the load on the cooling coils is a 100% outdoor air load, meaning it is a direct function of the outdoor air enthalpy and a fairly direct function of the outdoor air temperature.

As a result, I wanted to be able to plot tons as a function of outdoor air temperature.  To do that, I needed outdoor air temperature data which I retrieved from a local ASOS site using the Iowa State University website I mention in the blog post titled Hourly Weather Data Website Update.

Once I had the weather data for a period that correlated with my chiller data period, I used the VLOOKUP function to add an outdoor air temperature value for each row (minute) in my chiller data set.

Step 7

I always try to do something to cross-check myself, especially when I am going quickly due to the pressure of time or making some pretty general assumptions, both of which were true in this case. Given the data on the evaporator graphic for the chiller (the second graphic I show above), I could also have digitized the evaporator entering and leaving temperatures and used the water side load equation to calculate the tons on each chiller.


To do that, I would need an evaporator flow rate.  It turns out there is a reasonable assumption I could make to provide that piece of information.

Specifically, since the plant is a variable flow primary/secondary plant, by design, the intent is for the flow through the evaporators to be constant no matter what the flow is in the distribution loop.  So assuming a constant flow rate for the evaporators simply reflects the plant design intent and is reasonable.  The question then becomes

What is the magnitude of the constant flow rate I am assuming?

Here is how I answered that question.

Since the discharge valves on the pumps were throttled, it was reasonable to assume that the flow being delivered by the pump was the nameplate (design) flow.

I say that because the reason balancers throttle pumps is that their testing has demonstrated that the pump is delivering more flow than needed with the discharge valve wide open.  Most balancing specs and good practice require that if the balancer finds this condition, then they should throttle the pump to design flow since most of the time, this will save some energy (although not as much as you would save by some other optimization strategy).2

All of that means that, time permitting, I could have digitized the evaporator temperature data, done the math, and compared the result I got using evaporator temperature drop data with the result I got using the percent run loaded amps data.  But, as I mentioned, I was under a pretty tight schedule and that process would have consumed some of my precious time.
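For reference, the water side load equation works out to tons ≈ gpm × ΔT ÷ 24 for water, which comes from 500 × gpm × ΔT Btu/hr divided by 12,000 Btu/hr per ton.  Here it is in code form; the flow rate and temperatures are illustrative, not the actual plant values.

```python
def waterside_tons(gpm, entering_f, leaving_f):
    """Water side load: gpm x delta-T / 24 gives tons for water."""
    return gpm * (entering_f - leaving_f) / 24.0

# e.g. an assumed 2,400 gpm design evaporator flow with a 10 degree F drop
print(waterside_tons(2400.0, 54.0, 44.0))   # 1000.0
```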

In fact, I initially had considered using the evaporator data but decided on the percent run loaded amps data instead because it would (in theory) get me similar results while only requiring me to digitize one line for each chiller, not two.  And, I would only need to do one VLOOKUP for each chiller, not two.  Given that there were 9 chillers in the plant, saving those steps represented a significant time savings.

However, I did spot check my results by randomly selecting some points in time and then comparing the actual logged evaporator temperature drop with the temperature drop I came up with based on:

  • Tons from my percent run loaded amps analysis, and
  • An assumption of design flow through the evaporator (the valves were in fact throttled as indicated by the white line being approximately in the middle of the window on the valve actuator)


These spot checks tended to validate each other, so I was reasonably comfortable moving forward with the data set.

Return to Contents

The Bottom Line

The bottom line was that my brainstorm about using Plot Digitizer to turn a photo of data into actual data paid off, allowing me to generate both a time series view of the plant load profile and related parameters …


… and scatter plots and regressions of the load profile patterns relative to outdoor air temperature.


I’m just about done with a post that takes a look at the clues that were revealed by the charts above.  But before I do that, I wanted to show you one other trick that came in handy for my effort working with the data I generated; specifically, how I created the third axis for the number of chillers running in the time series chart.  That will be the topic of my next post.

Return to Contents


David Sellers
Senior Engineer – Facility Dynamics Engineering
Visit Our Commissioning Resources Website at

1.     For more on how Excel represents date and time, which you need to understand to be able to do this, see the blog post titled Good News about NWS Weather Data, Plus Working with Date and Time in Excel.

2.    For more about optimizing pumps, including case studies illustrating the steps and techniques, you may want to download and read two design briefs that you will find on the Energy Design Resources web site;  Centrifugal Pump Application and Optimization, and Pumping System Troubleshooting.

Posted in Boilers, Hot Water Systems, and Steam Systems, Chillers and Chilled Water Systems, Data Logging, Excel Techniques, HVAC Calculations, Motors, Operations and Maintenance

A Control Logic Exercise and a Way to Get Comfortable With Navigating SketchUp Models


As many of you know, I have been experimenting with using SketchUp models as a way to teach EBCx techniques.  I frequently use them in my classes and also have started to post self-study exercises that are based on the models on our commissioning resources website.

The purpose of this post is two-fold.  One is to let you know that I just put up a new model and exercise that ties into the Control System Fundamentals slides I posted a while back.  In the exercise, you modify the control logic for the hot water system illustrated below to add a reset schedule, which will solve some comfort problems and also save some energy.

HW System Diagram

The model is also a good starter model for learning how to navigate in SketchUp since it is not as complex as some of the other models I use for existing building commissioning training.  Click here to jump to that part of the post.

Control Logic Exercise

If you are interested in giving that a try, I think you will find everything you need on the Bureaucratic Affairs Building Heating Hot Water System Logic Modification Exercise page, including the model, a description of the problem and the theory behind solving it, a description of the building, and other pertinent information.

SketchUp Scavenger Hunt

Once I posted the model, I realized that it is also a good one for you to use to get comfortable navigating around in SketchUp in preparation for attending a class where I will be using models, or if you want to try one of the self-study exercises.  The reason this might be a good model to learn basic navigation in is that it is relatively simple.

If you use one of the scene tabs that turns off the walls of the building and/or other structural elements, there are not many things for you to collide with, as you can see from this scene, where the 2nd floor, as well as the walls, columns, and beams on the first floor, have been turned off.


In SketchUp, you have superhuman powers and can pass right through a wall.  Once you are inside one, it can be very disorienting because there is nothing to focus your zooming and panning efforts on.

When that happens, remembering “Control” plus “Shift” plus “E”, which is the keyboard shortcut for “Zoom Extents”, is handy.  But you can totally avoid the problem if there is not much to run into in the first place, which is my point and why I say getting the hang of SketchUp navigation with this model might be a good way to go.

Once you get accustomed to things, you can work with more layers turned on to create a more realistic view of things, like this scene for instance, which is what it might look like if you had gone out to do some construction observation in the facility after the pipe and terminal units were hung but before the ductwork went in.

Construction Observation Scene

To make this all a bit more interesting, at the suggestion of Barry Estes, a friend of mine whom I work with at Marriott to provide technical training, I made an “Easter Egg” hunt out of it.  Meaning, I came up with a string of questions that require you to use basic SketchUp navigation skills like zooming, panning, orbiting and scene tabs to find the answers, along with some outside the box thinking and the ruler tool.

For this exercise, you won’t need all of the information provided on the web page to support the logic diagram exercise, just the model itself and the building description and history.  You can download those files individually at the link provided above, or you can just follow this link to get what you need in one zip file.

If you want to give it a try, here are the questions.

  1. What is the size of the inlet duct on a typical terminal unit?
  2. How many steps are there from the first to the second floor?
  3. What are the hours of operation for the Department of Bureaucratic Affairs?
  4. Who manufactured the ladders that are being used on the project?
  5. Will the ladders float?
  6. Does the hot water system have any balance valves in it, and if it does, who is the manufacturer?
  7. For the finned tube radiation serving the West perimeter zone (Scene 12), what would you estimate that the pressure drop was through the balance valve if the flow is 7.4 gpm?
  8. Can you propose a reason for the issue noted in the header picture on the web page?  In other words, can you find a problem in the piping network that could be causing people at the East end of the 1st floor to complain of being cold when the rest of the building is comfortable?
  9. Are all of the terminal units the same and if not, why do you think there is a difference between them?
  10. Did you find any other “Easter Eggs”?  If so, what did you find?

I will answer the questions for the first time in a class this week, so once I have done that, I will publish them here too so you can see how you did.


David Sellers
Senior Engineer – Facility Dynamics Engineering
Visit Our Commissioning Resources Website at

Posted in Controls, Retrocommissioning Findings, SketchUp Model Based Self Study

A New Resource for Looking at Climate Data

So, some of you are probably thinking Thank God he has finally put something up that gets rid of that picture of him and Kathy on Christmas.  That did stay up for a bit longer than I had planned at the time.  But now that Valentine’s Day is past, I figured I really did need to move on from the romantic thing.

Frequently, when I am going someplace I have never been before, I like to get a sense of what the climate might be like.  There are a bunch of ways I do that, including the City Data site I mentioned in a previous post and the bin plot feature that you get with the Professional version of electronic psych chart tools like the one Ryan Stroupe has made available via the Pacific Energy Center.

For commissioning projects where I am trying to decide what trend data I want to have the operators pull for me to look at, I really like the nomographs that the National Weather Service has.  You can look at various locations on a month-by-month basis …


… or an annual basis.


For the temperature chart, the red band represents the extreme high on record, the blue band represents the extreme low on record, and the green band represents the normal range.  The dark blue line is what happened for each day of the month, meaning it spans from the low for the day to the high for the day.

Once I find the chart for the area of interest, I look for:

  1. The month with the highest maximum temperature on record and then a day (the dark blue bar) when conditions approached that. 
  2. The month with the lowest minimum temperature on record and then a day when conditions approached that. 
  3. A couple of days during the swing seasons when the dark blue band spans a wide range, meaning the building and its systems saw a huge range of operating conditions.

I then ask for the trend data for those days because those days were probably the ones that challenged the systems in the facility the most, especially the day with the huge swing in temperatures.  Here is an example of that which I use when I teach.









Basically the systems in that building saw every conceivable operating mode in the course of 24 hours.  And they worked!  We went home feeling pretty good that day.   Had they not worked, the day would have been a nightmare.

For a number of the NWS regions, including the Western Region, you can find these charts pretty easily on the Regional forecast office home page.


One way to find the regional office home page for a given location is to start at the NWS home page and put in a location.


From there you can get to the regional forecast office (it will be linked at the upper left corner of the page under the search box).


But for me at least, the problem has been that not every region seems to have a link to the nomographs off of their home page, and it would take me a while doing random searches to find them once I was there.  And sometimes, I simply could not find them.

But, after going through that recently for a project in Atlanta, Georgia, I discovered a location that seems to be common across the regions and has a very similar product with some cool features.  So I thought I would take a minute to share that with you.

To find it, you need to get to the local regional climate office home page, just like I illustrated above.  But from there, you use the “Climate” tab, which so far seems to be consistent across regions.


Once you are on that page, you pick the NOWData tab.


NOWData is an acronym for NOAA Online Weather Data  and is the result of a joint project between the National Weather Service and the National and Regional Climate Centers.   Anyway, once you are on that page, you pick a location and the “Temperature Graph” feature and a year.


When you hit “GO” it will take you to the graph for the location you selected.


If you mouse over the graph, you can look at the data for individual days.


You can also drag a window to zoom in on a certain period.


And you can use a drop-down to save the image in a bunch of different formats.


The next few images illustrate how a team I am working with used the data for Atlanta, Georgia in a presentation to an Owner to help them understand the load profile we measured relative to what might happen in other years.  This, in turn, helped them understand our recommendation regarding a new cooling tower they are contemplating purchasing.




We made a two year graph by saving the charts for the two years we wanted to look at and then cropping the left edge of the right image (2018 data) and lining it up with the left image (2017 data) so that it appeared to be one continuous data stream.  We blocked the heading of the second image by simply placing a white rectangle over it.



We then cropped the image and used the Morph transition to focus in on the area we wanted to discuss.  We made the band outside our focus area look faded by putting white rectangles with the transparency set to 25% over those parts of the graph.




So a pretty cool and useful resource, at least for me.  Hopefully it will be for you too.


David Sellers
Senior Engineer – Facility Dynamics Engineering

Visit Our Commissioning Resources Website at

Posted in PowerPoint Techniques, Weather and Climate Resources

Happy Holidays

Hoping you are having as much fun this holiday season as Kathy and I are having.


Meanwhile, thanks for supporting the blog and Happy Holidays.


David Sellers
Senior Engineer – Facility Dynamics Engineering

Posted in Uncategorized

Exploring Evaporative Cooling–Part 1

If you look at a psych chart closely, you will notice that the constant wet bulb lines are not exactly parallel to the constant enthalpy lines.


Note that to make things more visually apparent in this blog post, for most of the psych chart images, I have narrowed down the temperature and humidity scales.  So the chart probably looks a bit different from what you are accustomed to seeing.


In any case, it’s tempting to just ignore the fact that the enthalpy and wet bulb lines are not exactly parallel.  But in the context of an evaporative cooling process, the non-parallel nature of the lines is an important distinction if you are trying to understand the physics behind the process.  The purpose of this series of posts is to explore that distinction a bit and look at what it means practically in the context of air handling systems that use an evaporative cooling process.

The context for all of this was that we were lucky enough to have a field day in a facility that was served by both direct and indirect/direct evaporative cooling air handling systems (with “we” being myself and the folks participating in the current round of the Existing Building Commissioning Workshop at the Pacific Energy Center).  Here is the Google Earth view of the facility we were at, and you can see the systems we were working with sitting in the equipment area on the right half of the roof. (The round structure is a planetarium, so, pretty cool to be working on a building with a planetarium.)


If you know how the direct and indirect/direct evaporative cooling processes work, you can actually tell which unit is which by studying the appearance of the equipment in the Google Earth image.  So, I will let you check out what you learn from reading this by coming back and identifying which unit is which after you finish the series.

From my perspective as an instructor, the evaporative cooling systems represented a unique opportunity to connect the psych chart with reality.  For the class participants, it was a chance to see something different and learn how to understand it by thinking about it in terms of fundamental principles (I hope).

To me, this is an important thing.   If I and others like me were to endeavor to spend the days that remain to us in instruction targeted at describing every conceivable type of HVAC system that might exist, we would simply run out of time, as can be seen from the following relationship, which calculates the maximum possible number of HVAC system configurations that could exist in our little corner of the universe.


On the other hand, at the end of the day, the phenomenon going on in most HVAC systems can generally be described by a few fundamental relationships and tools including the steady flow energy equation …


… which I realize is a bit scary until you think of it in the terms that Dr. Albert Black, one of my mentors, put it to me in;  those terms being that …

                 The Goes Inta’s Gotta Equal the Goes Outa’s.

My recollection is that Al (modestly) told me that he could not claim total credit for that phrase in that it was passed to him by one of his mentors.  But it resonated with me and has been a guiding principle and foundation for me when all else seemed to fail.  That includes something that happened in the context of my developing this blog post; i.e. being confronted, as I frequently am, by the realization that understanding something and being able to explain it are two different things.

I actually learned that lesson very early on in my technical training career as a flight line lab instructor when, in my first lab session, an aspiring Airframe and Power Plant Mechanic (A&P) asked me (a freshly minted A&P) a question about a concept that I understood, but found that I could not explain in a way that made sense to him.  So, I told him I didn’t know how to provide a clearer answer to the question right then (harder than it sounds, at least for some of us) but that I would fix that and get back to him, which I did and have been doing ever since.  It’s one of the things I really love about teaching;  to teach, at least in my experience, you have to be in a constant state of learning.

Al also pointed out to me at one point that all of this math is just a reasonable model for us to use to predict what might be going on in an HVAC system and building.  In reality, we probably don’t really have a clue. 

Just saying.

Beyond conservation of mass and energy, one of the most important tools for understanding what is going on in an HVAC system is the psychrometric chart.  This is something that Bill Coad initially inspired me about via his very cool engineering trick of creating one by hand via the application of basic principles.  Replicating that trick is the subject of  yet-to-be completed string of blog posts starting with this one.  Eventually, I will get all of the way through showing you the trick, so stay tuned.


The specific driver behind my developing this post was a field question that came up several times in the field class regarding the evaporative cooling process.  If you are looking at the depiction of the process on a psych chart and don’t fully appreciate that the constant wet bulb and constant enthalpy lines are not totally parallel, then you might ask:

How can water be evaporated by a process that occurs at constant enthalpy, which implies there is no energy change?

The answer is …

Actually, it is a constant wet bulb process not a constant enthalpy process, and the enthalpy of the air increases. 

The amount of latent energy associated with adding water to the air stream by evaporation is in fact exactly equal to the reduction in sensible energy in the air that entered the process.  But the water represents mass being added to the air stream, and that mass had some energy associated with it before it entered the process, just like the air did.   So the enthalpy at the end of the process, with the added water vapor mixed in with the original air sample, has been increased by the amount associated with the added water.
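To put hedged numbers on that, here is a minimal Python sketch of the balance.  The leaving state (about 54.8°F and 0.0091 lb of water per lb of dry air, for totally dry air entering at 95°F), the enthalpy curve fit, and the function name are illustrative textbook-style approximations of mine, not values from the post:

```python
def moist_air_enthalpy(t_db_f, w):
    # Common inch-pound curve fit: sensible heat of the dry air plus the
    # latent and sensible heat carried by w lb of water vapor
    return 0.240 * t_db_f + w * (1061.0 + 0.444 * t_db_f)

# Hypothetical states: totally dry air at 95 F entering, leaving
# saturated at about 54.8 F carrying about 0.0091 lb water / lb dry air
h_in = moist_air_enthalpy(95.0, 0.0)
h_out = moist_air_enthalpy(54.8, 0.0091)

# Enthalpy the make-up liquid water brought with it: h_f referenced to
# 32 F, with a specific heat for liquid water near 1 Btu/lb-F
h_water_added = 0.0091 * 1.0 * (54.8 - 32.0)

# The gain through the process is (nearly) just the added water's enthalpy
gain = h_out - h_in
```

The increase in enthalpy through the process is small, and it matches the enthalpy of the make-up water, which is the point:  the sensible drop and latent gain cancel, and only the added mass changes the total.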

It turns out that to really explain this, at least to explain it in the way I thought I needed to, things got long (surprise).  So what started out as a blog post has evolved into a series.

The  remainder of this post will be dedicated to explaining some basics behind the evaporative cooling, primarily adiabatic saturation and wet bulb temperature.   I will follow this post with a post that looks at practical adiabatic saturation a.k.a. evaporative cooling.  Finally, I will do a post sharing what we saw in our recent field experience, along with some of the insights that were gleaned from the experience. 

As is my practice for my annoyingly long blog posts, the following links will take you to topics of interest which will include a “Back to Contents” link at the end of the section to bring you back here.

The Energy Content of a Parcel of Air

A parcel of air is a concept used in psychrometrics and meteorology.  It implies a sample that is large enough to contain many molecules, but much smaller than the surrounding volume or environment.   It will have uniform temperature and moisture characteristics but those characteristics may be different from the surrounding environment.

The bubbles of steam that rise through the liquid in a pot of boiling water are an example of a parcel. Both the vapor and liquid are made up of water molecules, but the conditions inside the bubbles are different from the conditions outside the bubble.

To understand evaporative cooling, you need to understand what makes up the energy content of a parcel of air.  Unless a parcel of air is totally devoid of moisture (0% relative humidity), then the energy it contains includes both a sensible energy component and a latent energy component.

Sensible Energy

The sensible component is the easiest to understand because it manifests itself to us as the dry bulb temperature. Most people are very familiar with it and frequently, we simply call it the “temperature” of the air.  Changes in dry bulb temperature are associated with the change in sensible energy.

Another way of thinking of it is to say that sensible energy manifests itself to us as heat.  If I increase the sensible energy of an object, it becomes hotter to the touch with “touch” being one of our senses and thus the name.

Latent Energy

Our comfort is also affected by the amount of moisture in the air because it impacts how efficiently (or not) our body’s  evaporative cooling process works.   If you have traveled around the country a bit, you probably have noticed that a 95°F, sunny day at someplace like the  Grand Canyon feels much more comfortable than a 95°F, sunny  day in the Midwest or Southern states or even Northeastern states like Pennsylvania right after a thunderstorm. That is because the summer time air is much dryer at the Grand Canyon (most of the time), compared to the summer time air in the Midwest, South, and Northeast.  

The moisture in an air parcel has energy associated with it because it takes energy to convert the moisture from a liquid to a vapor (or from a solid to a liquid for that matter).  Going from a liquid to a vapor or a solid to a liquid is called a phase change.

Unlike sensible energy, the energy associated with the phase change that adds moisture to a parcel of air does not show up as a temperature increase.   Rather, we sense it as a change in comfort level that we frequently call feeling “muggy” or “humid”.

So the good news is that we can detect that it is there. But unlike heat, which we can measure with a thermometer, it can be challenging to measure and quantify “mugginess”.  The term applied to this energy is latent energy or sometimes, latent heat. 

Latent is a term that means “hidden” or “concealed”, and it is used to describe the energy associated with a phase change because that energy does not reveal itself as a temperature change.  We really didn’t understand latent heat until about 250 years ago, when Joseph Black, a Scottish scientist, intuitively connected a few dots by pointing out that what people thought should happen, based on the science of the time, did not actually happen.

For instance, the science of the time suggested that it would take only a small amount of heat to melt snow and ice.  Mr. Black pointed out that if that was really true, then the world would be ravaged by floods due to the immediate melting of snow and ice when the temperature increased from just below to just above freezing. In other words, the expected didn’t happen, which implied there must be something else going on.

He reached a similar conclusion about boiling water by observing that while below the boiling temperature, the addition of heat caused the water temperature to increase fairly quickly. But once boiling started, applying the same amount of heat did not cause the temperature to change at all but rather, caused the water to become vapor at the same temperature as the boiling water.

He also noted that it took quite a bit of time and heat to convert all of the water from liquid to vapor relative to the amount of heat it took to simply raise the temperature to the boiling point.  In other words, a significant amount of energy had to exist in the water vapor that was generated by the boiling process, even though its temperature was the same as that of the liquid water it came from.  That energy was invisible in the context of the conventional way of measuring heat (temperature) and he termed it “latent energy”.

Total Energy

Enthalpy is the term we use to refer to the total energy content of a parcel of air.  It will be exactly equal to the sum of the sensible energy and latent energy in the air parcel and is typically expressed in terms of energy per unit mass;  Btu per pound in the system of units we typically use here in the United States.

The symbol h is often used for enthalpy.  There are a couple of conventions that often show up in psychrometric discussions, steam tables, and psychrometric equations.

  • h without any subscript usually stands for the total enthalpy of the parcel of air, both the sensible energy of the dry air plus the sensible energy of the water vapor and the latent energy of the water vapor.
  • hf typically represents the enthalpy of the saturated liquid.
  • hg typically represents the enthalpy of something in its gas/vapor state.
  • hfg typically represents the energy associated with the transformation of something from the liquid to a vapor state;  i.e. the energy associated with a phase change.
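These pieces come together in a common inch-pound curve fit for the total enthalpy of moist air.  The little function below is a sketch using the standard approximation (0.240 Btu/lb·°F for the specific heat of dry air, 0.444 Btu/lb·°F for water vapor, and 1,061 Btu/lb for the latent heat referenced to 0°F);  the function name is mine:

```python
def moist_air_enthalpy(t_db_f, w):
    # h in Btu per lb of dry air, for dry bulb t_db_f (F) and humidity
    # ratio w (lb of water vapor per lb of dry air).  The first term is
    # the sensible energy of the dry air; the second is the latent plus
    # sensible energy carried by the water vapor.
    return 0.240 * t_db_f + w * (1061.0 + 0.444 * t_db_f)

# 75 F air carrying 0.010 lb of water vapor per lb of dry air comes out
# near 29 Btu/lb, which is in the neighborhood of what a psych chart
# shows for that state point
h = moist_air_enthalpy(75.0, 0.010)
```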

Back to Contents

A Sojourn Into the Weeds

You may actually think we already were in the weeds.  And we probably are a little bit.

But the weeds are more like Queen Anne’s Lace …


… and Dandelions, both of which I actually happen to like and thus, don’t consider weeds.  Truth be told, there are very few plants that I don’t like (and there are very few things that don’t fascinate me at some level).  So there are very few things I consider weeds (plant-wise and otherwise). 

But I am pretty sure I am a bit odd that way, so I am just trying to draw a line of distinction to acknowledge that I realize what I am doing here in that context. 

Having said that, there have been occasions where I was totally confused because, for instance:

  • I did not realize that a certain symbol could be used in multiple ways, or
  • I didn’t realize that the baseline for enthalpy on a psych chart is arbitrarily set to 0 Btu/lb at 0°F;

Stuff like that.   So, as I was writing this post, I was including all of that information in the stream of it.    But doing so made it even longer than I had thought it would be.  In addition, at one point, I realized that it could obscure the real information I was trying to convey.

But, since some of the details I am alluding to here are important to be aware of, I decided I would create a “weed patch” at the end of the post and put that information there so you could jump to it if you wanted to or just keep moving forward through the primary content of the post.

So if you are interested, the “weed patch” contains the following  “weeds” along with a “Back to Contents” link so you can get back to where you came from pretty easily if you go there.

Back to Contents

Exiting the Weed Patch

O.K.;  enough of that.  

The reason  that understanding sensible, latent and total energy matters in the context of a discussion about evaporative cooling is that the amount of cooling  – i.e. the energy change – provided by an evaporative cooling process is very much dependent upon the amount of moisture in the air and the latent energy it represents.  That means that if we really want to understand the energy content of an air sample, we need to know more than its temperature.  We also need to have some sense of the amount of water vapor it contains. 

That is basically what the science of psychrometry is about;  it is the study of the physical and thermodynamic properties of gas/vapor mixtures.  The sensible energy is reflected by the psychrometric property called dry bulb temperature.  The latent energy is reflected by a number of psychrometric properties including relative humidity, wet bulb temperature, and dew point temperature.  Both the sensible and latent energy (total energy) are reflected by the property of enthalpy.

Back to Contents

Measuring the Total Energy Content of a Parcel of Air

Quantifying how much moisture is in the air is actually much harder than it sounds, and we have been trying to figure out how to do it for a long time.   While reading The Invention of Clouds, which is about Luke Howard (the person who came up with the system we use to this day to classify and discuss clouds), I learned that in China, over 2,000 years ago, during the Han Dynasty, the scientists of the time used the change in weight of a dry piece of charcoal that was exposed to the atmosphere as a measure of humidity;  pretty clever.

As a somewhat related aside, one of my favorite philosophical quotations comes from a 6th century BC Chinese philosopher named Lao-Tzu who once said:

If lightning is the anger of the gods, then the gods are concerned mostly about trees.

Pretty funny.

Anyway, there are a multitude of approaches that we have used over the years to try to quantify the moisture content in a sample of air.  The Malcolm J. McPherson reference I provide a bit further down has a pretty good discussion about them if you are interested.

But if you are trying to do building science and quantify latent energy in an air sample, you will eventually run across a discussion of the concept of adiabatic saturation.  The concept is important because it is the basis for wet bulb temperature measurements, one of the basic ways we assess moisture content in the air.

Back to Contents

Adiabatic Saturation;  the Principle Behind the Psych Chart

A Scary Sounding Term

The device that is used to define adiabatic saturation is, appropriately enough, an adiabatic saturator.   That name sounds scary and complicated and may cause you to want to run off and pursue something else.

But it’s actually a relatively simple device and (thank goodness) nowhere near as complex as a turboencabulator. Now that is a device where the scariness of the name is warranted due to its reliance on a mixture of high S-value phenyhydrobenzamine and 5 percent reminative tetraiodohexamine for operation rather than a mix of air and water vapor.

In addition, critical to the functionality of a turboencabulator is the alignment between the two spurving bearings and the pentametric fan, which of course, requires that  six hydrocoptic marzelvanes be installed on the ambifacient lunar vaneshaft to prevent side fumbling. 

In contrast, the entry and exit points in the adiabatic saturator can have significant misalignment issues as long as they are far enough apart to allow the adiabatic saturation process to run to completion. 

Incidentally, I am not making this up;  I am simply paraphrasing an expert and citing performance criteria that can be found in the turboencabulator’s data sheet.

Truth be told, if success in building science relied on a deep working familiarity with the principles of turboencabulation, many of us would have fallen by the wayside given the complexity.

But thankfully, to understand evaporative cooling and for that matter, the psychrometrics of moist air, we only need to grasp the operation of an adiabatic saturator.  That’s because in reality, an evaporative cooler is just a practical implementation of  an adiabatic saturator.

Adiabatic Saturation Descriptions

Adiabatic saturation is a kind of thought experiment that involves a device in which a parcel of air is cooled adiabatically (without the addition of heat from an external source) to saturation (100% relative humidity) by evaporating water into it.  All of the energy (latent heat) required by the evaporation process comes from the parcel of air, and as a result, the parcel of air is cooled (its sensible energy is reduced) as its moisture content (latent energy) increases.

Aside from the explanations given to me by my mentors, I have encountered two written explanations of adiabatic saturation that seemed very approachable to me.  One is provided by Willis Carrier in his book Modern Air Conditioning, Heating and Ventilating, where he describes a process that involves a fan blowing air through an insulated box full of wetted excelsior (softwood shavings that were used to package fragile items back in the olden days).

You can still find copies of the book and to me, it is worth having for a number of reasons ranging from sentiment to the fact that one of the stated goals of the book was to present the material in a manner that would not only be useful to the scientifically minded, but also to those who had a technical interest but not an extensive background in engineering and science. 

In other words, they hoped to convey somewhat complex information in a useful manner to people coming into the field from some other industry, like airplane mechanics in my case;  people who have taken an interest in building science but are coming at it from outside of the engineering profession.

And, in my opinion, the authors did a pretty good job of it.  So, I have scanned the pages on adiabatic saturation from my copy of the book and put them on a page on our commissioning resources web site if you are interested.

The other explanation that made sense to me is part of the chapter on psychrometrics (Chapter 14) in a book by Malcolm J. McPherson titled Subsurface Ventilation and Environmental Engineering where he uses the analogy of air flow through a long tunnel with no heat sources in it and a puddle of water on the floor, which I imagine might be what some parts of a mine might be like. 

There seems to be a .pdf copy of  Mr. McPherson’s  psychrometrics chapter out there in the public domain if you want to take a look.  In addition to providing an approachable explanation of adiabatic saturation, it is also an approachable explanation of psychrometrics in general so you might find downloading a copy to be useful.

My Concept of Adiabatic Saturation

For the purposes of this post, I made a little diagram to illustrate the adiabatic saturation concept as I understand it.


You start out with an insulated chamber so that the air and water in it will not experience any heat transfer from external sources, which is what makes the process adiabatic.

The chamber also needs to be very, very long, some say infinitely long (which I guess is why you seldom see one sitting around out there in the field since they would get in the way a lot).   But the length is necessary so that by the time the air parcel exits the chamber, it has come into equilibrium with the liquid water in the pool inside the chamber and is saturated, meaning the relative humidity is 100% . 

In other words, the air is going to exit the process at a point on the saturation curve on the psychrometric chart.

Since water will be evaporated from the pool inside the chamber into the air stream, there needs to be a water make-up connection.   But to ensure that energy is not transferred from the water to the air stream by radiation or convection, the temperature of the water must be controlled to match the saturated leaving air temperature so that by the end of the process, the water temperature has no influence on the energy content of the air.

Bear in mind that the pressure of the parcel of air entering the process is created by the combined action of the constituent air elements as well as the action of the water vapor molecules, each contributing to the total pressure.  The pressure contributed by a constituent element is called its partial pressure.  If you want a more detailed explanation of this, or at least my take on it, you may want to take a look at the blog post titled Build Your Own Psych Chart – A Few Fundamental Principles.

Since the air coming into the process is not saturated, the partial pressure of the water vapor it contains is lower than the vapor pressure of the water in the pool inside the chamber.   Thus, there is a driving potential  causing water to evaporate from the pool and become water vapor in the air parcel.  

Conceptually, this is very similar to sensible heat being transferred from a warm object to a cold object.  The temperature difference is what causes the heat transfer to take place and the bigger the temperature difference, the higher the heat transfer rate will be. 

In the case of water vapor, it is the difference in vapor pressure that causes the water vapor to move around.  It will be inclined to travel from an area with a high vapor pressure – for instance the immediate vicinity of a liquid water surface – to an area of lower vapor pressure – for instance, the dry parcel of air entering and moving through the adiabatic saturator.

Because we have insulated the adiabatic saturation chamber and we are maintaining the make up water temperature at a fixed value that is identical to the leaving air temperature, the only source of energy available to cause the water to evaporate is the sensible energy in the air parcel.  As a result, the air parcel is cooled while its moisture content is increased until it becomes saturated.  At that point, the driving potential (the difference between the partial pressure of the water vapor in the air and the vapor pressure of the water in the pool) is zero and no additional water is evaporated.
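The vapor pressure driving potential can be put into rough numbers.  The sketch below is a minimal illustration, assuming a Magnus-type curve fit for saturation vapor pressure and made-up conditions (a 15°C pool surface under entering air at 35°C and 20% relative humidity);  the function name and the numbers are mine, not values from the post:

```python
import math

def sat_vapor_press_hpa(t_c):
    # Magnus-type curve fit for saturation vapor pressure over liquid
    # water, in hPa; good to a fraction of a percent from about 0-60 C
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

# Illustrative (made-up) conditions: a pool surface at 15 C under
# entering air at 35 C and 20% relative humidity
p_surface = sat_vapor_press_hpa(15.0)     # vapor pressure at the water surface
p_air = 0.20 * sat_vapor_press_hpa(35.0)  # partial pressure of vapor in the air

# A positive difference means vapor migrates from the pool into the air
# parcel, i.e. water evaporates
driving_potential = p_surface - p_air
```

Even though the air in this made-up case is 20°C warmer than the pool, the vapor pressure at the water surface is still higher than the partial pressure of the vapor in the dry air, so evaporation proceeds.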

Since all of the energy required to saturate the air came from the sensible energy in the air when it entered the device, the latent energy added is exactly equal to the sensible energy lost.   The resulting temperature is called the adiabatic saturation temperature or thermodynamic wet bulb temperature, the technical definition of which is:

The temperature a volume of air would have if cooled adiabatically to saturation by evaporation of water into it, all latent heat being supplied by the volume of air.

It is the difference between this parameter and the dry bulb temperature of the air entering the process that sets how much cooling will occur for a given air parcel.

This is a very important thing in the context of evaporative cooling.  For a dry air parcel from, say, the Grand Canyon area in the summer, the difference will be large compared to that of a moist air parcel from say, central Pennsylvania after a summertime thunderstorm.  As a result, more evaporative cooling can be produced by the Grand Canyon air parcel than the central Pennsylvania air parcel.

For a saturated air parcel, there is no difference between the dry bulb and adiabatic saturation temperature and thus, no evaporation (and no cooling) will occur.

Back to Contents

Exploring the Process from a Conservation of Mass and Energy Perspective

Conservation of Mass

From a conservation of mass standpoint, the adiabatic saturation process looks something like this for a parcel of air that enters the process totally devoid of any moisture.


Note that there is a net increase in mass through the process because of the water vapor that is added to the air parcel.  This is important and it is why the enthalpy (total energy content) of the air parcel increases through the process.

Conservation of Energy

From a conservation of energy standpoint, the adiabatic saturation process looks like this for that same parcel of air.


Essentially, the equation says that there is an increase in energy (enthalpy) through the process due to the addition of the water that is evaporated into the air parcel.   

But to fully appreciate what is going on, I am going to expand the terms in the relationship above a bit.  And in doing that, I am going to focus on a special case because (I think) it will allow me to make the point I am trying to make in a bit less confusing manner.  Specifically, I am going to focus on the case where the air entering the process is totally dry (0% RH).


Sensible Energy Lost = Latent Energy Gain

The expanded form of the equation includes terms for the sensible energy that is lost from the air parcel (the green term) and the latent energy that is gained by the air parcel (the purple term) as it moves through the adiabatic saturation process. Since the latent energy increase is exactly equal to the sensible energy decrease (by the definition of the process),  then the combination of the two terms ends up being zero.

That means that the only reason that there is an energy gain in an evaporative cooling process is due to the energy that comes in with the mass of the water that is evaporated.

In some ways, it’s kind of hard to get your head around the energy represented by the purple and green terms in the equation.  It’s there, but it’s not there, kind of like Wile E. Coyote’s ACME Corporation portable hole.  But the reality is that this is a very useful thing to recognize for those of us using psychrometrics to assess HVAC systems.

Stated mathematically, the words the latent energy increase is exactly equal to the sensible energy decrease (i.e. the purple term in the expanded equation is exactly equal to the green term) look like this on a per pound of air basis using psychrometric parameters for our special case (check out the Weed Patch for the more general case where the air coming into the process has some water vapor in it).


In other words, we could figure out how much water the totally dry air could hold if we saturated it by measuring the temperature change through the process and multiplying it by the specific heat of air (the 0.24 value).

The temperature change through the process is the difference between the entering dry bulb temperature and the leaving dry bulb temperature.  The leaving dry bulb temperature is equal to the adiabatic saturation temperature, by the definition of the process.  

If we could come up with those three numbers, then we could figure out how much water a totally dry parcel of air at a specific dry bulb temperature would hold if we saturated it using an adiabatic saturator.   Heck, if we could do it, that would let us draw the saturation curve for a psych chart.  This could be cutting edge!
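As a sketch of that computation, the little script below solves the energy balance for the leaving (adiabatic saturation) temperature of totally dry entering air by bisection.  All of the correlations and constants (a Magnus-type fit for saturation vapor pressure, a linear fit for the latent heat of vaporization, sea level pressure) are textbook approximations I have swapped in for illustration, not anything from the post:

```python
import math

P_ATM_PSIA = 14.696  # sea level atmospheric pressure

def sat_press_psia(t_f):
    # Magnus-type curve fit for saturation vapor pressure (an
    # approximation; not the ASHRAE formulation)
    t_c = (t_f - 32.0) / 1.8
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c)) * 0.0145038

def sat_humidity_ratio(t_f):
    # Humidity ratio of saturated air, lb water vapor per lb dry air
    pws = sat_press_psia(t_f)
    return 0.622 * pws / (P_ATM_PSIA - pws)

def h_fg(t_f):
    # Latent heat of vaporization, Btu/lb; linear fit to steam tables
    return 1094.0 - 0.556 * t_f

def adiabatic_sat_temp(t_entering_f):
    # Find the leaving temperature where the sensible energy given up by
    # the (totally dry) entering air exactly supplies the latent heat of
    # the water evaporated into it:
    #   0.24 * (t_in - t_out) = Ws(t_out) * h_fg(t_out)
    lo, hi = 32.0, t_entering_f
    for _ in range(60):  # bisection; 60 passes is plenty
        mid = 0.5 * (lo + hi)
        sensible = 0.24 * (t_entering_f - mid)
        latent = sat_humidity_ratio(mid) * h_fg(mid)
        if sensible > latent:
            lo = mid  # more sensible loss than the water needs; go warmer
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For totally dry air entering at 95°F, this lands in the mid-50s°F, which is in the neighborhood of what a psych chart shows for the wet bulb of very dry summer air.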

We can easily measure the dry bulb temperature of the entering air parcel.   And the specific heat of air is also a measurable quantity and well documented (check out the weed patch for more on that). 

Gosh, if only there was a real world way to measure the mythical adiabatic saturation temperature of the entering air parcel, we would be able to quantify how much evaporative cooling a given parcel of air could produce.

Back to Contents

Wet Bulb Temperature;  a Very Close Cousin of the Adiabatic Saturation Temperature

Ta Dah!

By virtue of a happy thermodynamic coefficient coincidence, a thermometer that has its bulb covered by a wet wick will nearly (but not exactly, more on that later) measure the adiabatic saturation temperature of air at the conditions commonly encountered in an HVAC system. 

In fact, the value it indicates is close enough to the adiabatic saturation temperature that we can assume that the adiabatic saturation temperature is identical to the temperature measured by a thermometer with a bulb that is wet.

In fact, when we plot constant adiabatic saturation temperature lines on a psych chart, we are actually plotting constant thermodynamic wet bulb temperature lines. And, we call them temperature measured by a thermometer with a bulb that is wet lines.

Well, actually, we generally don’t call them that.  But my point is that constant wet bulb lines on a psych chart specifically represent a value that goes by two different names (adiabatic saturation temperature and thermodynamic wet bulb temperature) neither of which is what we typically measure out in the field.

The word “thermodynamic” ahead of the term “wet bulb” reminds us that what we measure with a thermometer with a bulb that is wet (which we often call a wet bulb thermometer) is not quite the same thing as the adiabatic saturation temperature, a.k.a thermodynamic wet bulb temperature.  

But we sure feel calmer, and thus, continue to breathe normally, by calling what we measure the “wet bulb temperature” instead of “the temperature measured by a thermometer with a bulb that is wet” or “the approximate adiabatic saturation temperature”.

So having beaten that into the ground, moving forward, I will refer to the temperature measured by a thermometer with a bulb that is wet as wet bulb temperature.

The Stationary Wet Bulb Thermometer

The stationary wet bulb thermometer was one of the earliest ways that folks used to try to understand the amount of moisture in a sample of air by measuring the temperature of a thermometer bulb that was wet (the images below are courtesy of


Empirical data (data based on observation or experience vs. theory or logic) derived using an instrument similar to the images above was likely the starting point for the psych chart as we know it today.

The stationary wet bulb thermometer evolved to the sling psychrometer, which I will describe and illustrate later in the post. 

Incidentally, if you are interested in learning a bit more about the history of psychrometrics in our industry, then you might find the ASHRAE Journal article titled Psychrometric Chart Celebrates 100th Anniversary to be of interest.  You can find a copy on the Hands Down Software web site (they are the folks behind the free Pacific Energy Center psych chart that I have written about on the blog).

Back to Contents

Sigma Heat

Returning to our discussion about the perfectly dry air parcel that moves through an adiabatic saturator, recall that we had concluded that we could figure out how much water it would take to saturate the air if we knew the entering dry-bulb temperature (tEntering in the equation below) and the adiabatic saturation temperature (tLeaving in the equation below). 


Now we know that we can do it if we measure the dry bulb temperature of the air parcel and also measure the temperature of the air parcel using a wet bulb thermometer.  There are two interesting things to recognize as you contemplate all of this.

One is that the only reason there is a change in total energy content/enthalpy through the evaporative cooling process is that the water that was evaporated into the process – i.e. the mass that was added – already had energy associated with it;  the enthalpy associated with the saturated liquid for water at the conditions entering the process.  But bottom line, the change in total energy/enthalpy through the process is entirely due to the addition of mass and the energy it brings into the process.

The rest of the process is just trading some of the sensible energy in the entering air parcel for latent energy in the leaving air parcel, which is the second point of interest. 

If you rearrange my expanded form of the conservation of energy equation to show this mathematically, it looks like this.


In other words, the amount of energy that entered the process as sensible energy in the totally dry air does not change;  it stays constant.  

The only thing that changed was how much of it is sensible energy and how much of it is latent energy at the end of the process.  Willis Carrier recognized this, and that is where the term Sigma Heat came from. 
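Since the equations in this post live in images, here is a small numeric sketch of the idea.  This is my own reconstruction using standard IP-unit psychrometric approximations (not the exact formulation from the post): Sigma Heat can be computed as the moist air enthalpy minus the enthalpy the evaporated water carried in as saturated liquid at the wet bulb temperature, and it holds essentially constant as you move along a constant wet bulb line.

```python
# Sketch (my reconstruction, standard IP-unit approximations):
# Carrier's "sigma heat" is the moist air enthalpy minus the enthalpy the
# evaporated water brought in as liquid at the wet bulb temperature:
#   sigma = h - W * hf(t_wb)
# Along a constant wet bulb line, sigma stays fixed while the enthalpy h
# drifts slightly - which is why constant wet bulb lines diverge a bit from
# constant enthalpy lines on the psych chart.
import math

def p_ws(t_f):
    """Saturation pressure of water over liquid (psia), Hyland-Wexler fit, t in degF."""
    t_r = t_f + 459.67  # degrees Rankine
    return math.exp(-1.0440397e4 / t_r - 1.1294650e1 - 2.7022355e-2 * t_r
                    + 1.2890360e-5 * t_r**2 - 2.4780681e-9 * t_r**3
                    + 6.5459673 * math.log(t_r))

def w_sat(t_f, p=14.696):
    """Humidity ratio at saturation, lb water per lb dry air."""
    pw = p_ws(t_f)
    return 0.621945 * pw / (p - pw)

def w_from_wb(t_db, t_wb):
    """Humidity ratio from a dry bulb / wet bulb pair (ASHRAE-style relation)."""
    ws = w_sat(t_wb)
    return ((1093 - 0.556 * t_wb) * ws - 0.240 * (t_db - t_wb)) / \
           (1093 + 0.444 * t_db - t_wb)

def enthalpy(t_db, w):
    """Moist air enthalpy, Btu per lb of dry air."""
    return 0.240 * t_db + w * (1061 + 0.444 * t_db)

def sigma(t_db, t_wb):
    """Sigma heat: enthalpy less the liquid enthalpy of the entering water."""
    w = w_from_wb(t_db, t_wb)
    hf = t_wb - 32.0  # approximate saturated liquid enthalpy (cp of water ~ 1)
    return enthalpy(t_db, w) - w * hf

# Walk down a constant 65 F wet bulb line: sigma barely moves, h drifts.
for t_db in (95, 85, 75, 65):
    w = w_from_wb(t_db, 65)
    print(f"db={t_db}F  W={w:.4f}  h={enthalpy(t_db, w):.2f}  sigma={sigma(t_db, 65):.2f}")
```

For a 65°F wet bulb, the printed sigma values agree to within a few hundredths of a Btu/lb while the enthalpy drifts by a couple of tenths, which is the slight divergence between the constant wet bulb and constant enthalpy lines discussed below.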

In the psych chart below, I started with air at the 0.4% cooling design conditions in a number of climates and calculated Sigma Heat for that air as it moved through an adiabatic saturator to saturation, and also what would happen if that air sample entered the saturator at the same adiabatic saturation temperature but with a lower specific humidity.


Notice how the lines are straight lines that follow the constant wet bulb temperature lines and diverge slightly from the constant enthalpy lines.

Back to Contents

Sigma Heat and the Sensible to Latent Energy Trade-off Are Not Exactly the Same Thing

Just to be clear, Sigma Heat and the amount of sensible energy that is converted to latent energy in the adiabatic saturation process are not exactly the same thing.  For the temperatures and pressures we deal with in HVAC, the entering air parcel will have a lot more sensible energy available than is required to saturate it by evaporating water into it. 

The important point is that the sensible-to-latent energy trade-off is part of Sigma Heat.  And the amount of energy traded off is a function of the difference between the entering dry bulb temperature and the entering wet bulb temperature;  in other words, it is a pure function of the difference between the entering dry bulb temperature and the entering adiabatic saturation temperature.

Back to Contents

Constant Sigma Heat = Constant Adiabatic Saturation Temperature

At this point, I imagine you have realized that the amount of water vapor that exists in a parcel of air is reflected by its wet bulb temperature.   Relatively dry air will have a lower wet bulb temperature than relatively moist air, all other things being equal.

In addition, the amount of water vapor that a parcel of air can hold will be reflected by the difference between its dry bulb temperature and its wet bulb temperature.   The difference between the two represents sensible energy available in the air parcel which can be used to pick up moisture via conversion to latent energy.  The bigger the difference, the more water vapor that can be evaporated into the air parcel. When the two are identical, the air parcel is saturated and can hold no additional water vapor.

Furthermore, that conversion energy is a component of the Sigma Heat of the air parcel, which remains constant in an adiabatic saturation process.  So if you think about it, that implies that for every dry bulb temperature, there is a very specific wet bulb temperature (adiabatic saturation temperature).   And:

  • Because Sigma Heat is a pure function of the wet bulb temperature (adiabatic saturation temperature), and
  • Because Sigma Heat remains constant through the adiabatic saturation process (evaporative cooling process), then

the wet bulb temperature (adiabatic saturation temperature) will remain constant through an adiabatic saturation process (evaporative cooling process).  That means that if you wanted to model an evaporative cooling process on a psych chart, you would do it by moving up a constant wet bulb temperature line.  


I can imagine that even if you were only mildly excited by the content of this post up to this point, the revelations of this section have made you ecstatic, perhaps kindling a desire to go get yourself something that can measure wet bulb temperature.   So, let's take a look at a couple of the options for doing that next.

Back to Contents

Measuring Wet Bulb Temperature the Old Fashioned Way

In the olden days, when I first entered the industry, wet bulb measurement involved using a device called a sling psychrometer (the black gizmo to the right in the picture below), which relied on evaporative cooling to directly generate the wet bulb temperature.


It was an improvement over the stationary wet bulb thermometer for a number of reasons including the variable nature of the velocity of air flow across the stationary device.

Nowadays, we use modern electronics and measure relative humidity and dry bulb temperature (the “space age” light gray gizmo on the left in the picture).  There are some pros and cons to both approaches, which I will get to in a minute.

In this breath-taking close-up of the sling psychrometer, you can see that it actually has two, identical, factory matched thermometers, one of which has a cloth sleeve (called a wick) around its bulb (the upper one in the photo).


The little cap on the left is actually the cover to a water reservoir that the wick threads into.  Once the wick has absorbed some water, it will keep the bulb of the thermometer that it encases wet, thus, the term wet bulb

To take a reading, you use the vertical part that is pointed down and off the picture as a handle and swing the horizontal part as quickly as you can for about 1-2 minutes (while avoiding slamming it into things like walls, ducts, pipes, associates, etc. that are in the vicinity). 

As a result of all of this activity, the temperature of the bulb with the wick will drop due to – you guessed it – evaporative cooling.   At some point, the temperature of the wick, the bulb, and the water will come into equilibrium with the moisture content in the ambient air and the temperature will stop dropping.  That point is what we call the wet bulb temperature.

Slinging the thermometers for 1-2 minutes is harder than it sounds and if you do it a lot, I suspect you get pretty well developed forearm muscles on your “psychrometer arm”.    But the speed and time are important because you want enough air to flow past the bulb with the wick on it so as to keep the little micro climate in the area of the wick at about the same condition as the ambient environment.

If you don’t keep it moving fast enough, the water evaporating from the wick will influence the local vapor pressure in the immediate vicinity of the thermometer bulb, which affects a number of things.  But bottom line, you end up with a high reading.    

When I am using a sling psychrometer, after my first 1-2 minutes of slinging, I stop, take a quick reading, and then sling again for another 30 or so seconds to make sure I have reached the equilibrium state;  i.e. if my second reading is the same as the first one, I figure I have.

But if it has dropped some more, I keep on slinging a bit more (aching forearm aside) until I get two readings in a row that are about the same.   Here is what my psychrometer looked like right after I slung it in my office earlier today;  wet bulb above and dry bulb below.



It’s important to take your reading right away because once the airflow stops, the wet bulb will start to rise pretty quickly.   In fact the wet bulb reading in the picture is a bit higher than it was when I stopped slinging because of that effect. 

Parallax also comes into play in the context of the picture;  you need to read the thermometer “dead-on” and some psychrometers even have mirrored scales to facilitate that.  The content just below where this link takes you talks about mirrored scales and parallax if you want to know a bit more.

You also want to be careful not to touch or breathe on the thermometers since that could also throw your readings off.  But bottom line, once you know a dry bulb temperature and a wet bulb temperature (or any other indication of moisture) you can use a psych chart or psychrometrics calculator to come up with other parameters like relative humidity.
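For the curious, here is roughly what a psychrometrics calculator does with a dry bulb / wet bulb pair.  This sketch uses a standard ASHRAE-style dry bulb/wet bulb relation in IP units at sea level; the 72°F/58°F readings in it are hypothetical for illustration, not the actual readings from my office.

```python
# Sketch of the dry bulb / wet bulb -> RH calculation (IP units, sea level).
# The 72/58 degree readings below are hypothetical, not the post's readings.
import math

def p_ws(t_f):
    """Saturation pressure of water over liquid (psia), Hyland-Wexler fit."""
    t_r = t_f + 459.67
    return math.exp(-1.0440397e4 / t_r - 1.1294650e1 - 2.7022355e-2 * t_r
                    + 1.2890360e-5 * t_r**2 - 2.4780681e-9 * t_r**3
                    + 6.5459673 * math.log(t_r))

def rh_from_db_wb(t_db, t_wb, p=14.696):
    # Humidity ratio at saturation for the wet bulb temperature
    pws_wb = p_ws(t_wb)
    ws = 0.621945 * pws_wb / (p - pws_wb)
    # ASHRAE-style dry bulb / wet bulb relation for the actual humidity ratio
    w = ((1093 - 0.556 * t_wb) * ws - 0.240 * (t_db - t_wb)) / \
        (1093 + 0.444 * t_db - t_wb)
    # Back out the vapor pressure, then RH = pw / pws(t_db)
    pw = p * w / (0.621945 + w)
    return 100.0 * pw / p_ws(t_db)

print(f"RH = {rh_from_db_wb(72.0, 58.0):.0f}%")

# Shifting each thermometer by its +/- 1 F tolerance (in opposite directions)
# moves the answer by several RH points - on the order of the +/- 5% RH
# figure quoted from the spec sheet later in the post.
print(f"low:  {rh_from_db_wb(73.0, 57.0):.0f}%")
print(f"high: {rh_from_db_wb(71.0, 59.0):.0f}%")
```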

Or you can just use the handy slide-rule built into the sling psychrometer.


As you can see from the image above, my little field test said the RH in my office is about 58%. 

What I like about that number is that it was generated directly by fundamental principles (evaporative cooling and the expansion of the thermometer liquid from the bulb up a capillary tube) with no batteries or electronics in-between what I was measuring and the result. 

But it is very much subject to technique and is also limited by the manufacturing tolerances of the thermometers.  If you go to the spec sheet for my little Bacharach tool, you will discover that it is accurate to +/- 1°F dry bulb, which, since two thermometers are involved in generating an RH reading, translates to being accurate to +/- 5% RH. 

Thermodynamic Wet Bulb Temperature vs. the Wet Bulb Temperature Measured with a Sling Psychrometer

As discussed earlier in the post, wet bulb temperature lines on a psych chart are more specifically thermodynamic wet bulb temperature or adiabatic saturation temperature lines.  The wet bulb temperature we measure out in the field is not exactly the same thing and there are a number of reasons for that.  

  1. As the water in the wick starts to cool off, it starts to pick up energy via convection from the air around it and by conduction from the thermometer itself.  As a result, the energy balance is different from what happens in the mythical adiabatic saturator.
  2. There is likely radiant energy transfer from the sling psychrometer itself as well as the surrounding environment, unlike the process in the adiabatic saturator.
  3. If the velocity of the air is too low (i.e. you don’t sling fast enough), then the environment in the immediate vicinity of the wet bulb may be at a different state from the free air stream due to the water that is evaporating into it from the wick.
  4. The temperature of the water to the wick is not controlled to match the thermodynamic wet bulb temperature whereas in the adiabatic saturator, it is.
  5. If the water is not pure or the wick is dirty, then evaporation will be different from what would occur with clean water and a clean wick.

At a fundamental level, the thermodynamics behind what causes a wet bulb thermometer to register a temperature lower than the dry bulb temperature are different from the process occurring in an adiabatic saturator.  

But by a happy coincidence of physics, the coefficients associated with the thermodynamics of the real wet bulb process (there is a convective heat transfer coefficient and a mass transfer coefficient involved) are such that the result is very nearly identical to the thermodynamic wet bulb temperature, at least for a mixture of air and water vapor in the range where we apply the device in building science.

If you want to know a bit more about that, there is a YouTube video by a guy named Mitchell Paulus that will give you a pretty good idea of the mathematics behind what I just typed.  He also has a couple of videos where he goes through the mathematics of adiabatic saturation, which you probably would want to look at first since that math becomes the foundation for the math in the video about the difference between sling psychrometer wet bulb and thermodynamic wet bulb.

There is also a paper out there titled Calculation of the Natural (Unventilated) Wet Bulb Temperature, Psychrometric Dry Bulb Temperature, and Wet Bulb Globe Temperature from Standard Psychrometric Measurements that explores the topic and includes charts showing the amount of deviation and how it varies with different conditions like wind speed and the radiant temperature of the surroundings.

Back to Contents

Measuring Wet Bulb Temperature with New Fangled Technology

As I said previously, in my day, we measured wet bulb temperature by slinging a psychrometer until our arm ached, and we liked it.  But people these days want everything to be easy schmeezey so they buy fancy schmancy electronic gizmos to measure wet bulb temperature with the press of a button.

Truth be told, so do us old timers.  

Contrast the Bacharach result with what my modern electronic gizmo said was going on at the same time (a Vaisala HM 40 series hand held humidity and temperature meter).  At the conditions that existed in my office at the time of the reading, it is accurate to +/- 0.36°F and +/- 1.5% RH. 


Pretty different from what my trusty sling psychrometer told me.

But, when I plotted the points on the psych chart along with a box around them to reflect the accuracy of the instrument associated with them and projected the Vaisala accuracy window across the Bacharach accuracy window, I concluded that given the stated accuracies, both instruments did their job.


Note how the Bacharach RH window overlaps the projected RH window from the Vaisala HM 40 as does the dry-bulb temperature window.   In terms of wet-bulb accuracy, this test result says the Bacharach is probably more like +/- 1.5°F vs. +/- 1°F. 

But, as I discussed above, the wet bulb lines on the chart are thermodynamic wet bulb temperature lines, or more specifically, adiabatic saturation temperature lines, and the number they represent is not exactly the same as the temperature I measured with a thermometer bulb that was wet.

The test also says that the sling reading will tend to be higher than the reading derived from the Vaisala instrument.   I have used the sling a lot longer than I have used an instrument like the Vaisala;  the latter simply was not available for the first part of my career, at least not at an affordable level for me.  But in thinking back through my experience taking readings with both instruments, I believe that most of the time, this tended to be true;  i.e. the Bacharach relative to something like the Vaisala would tend to be high. 

That could be the result of technique, the resolution capabilities of a glass tube with degree marks etched into it, and even the cleanliness of the wick;  mine is a bit dirty right now and I should probably replace it.  A dirty wick can impact the reading because it can affect how well the water is absorbed by it and thus, how “wet” the wet bulb really is along with how easily the water can evaporate from the wick.

Back to Contents

Some Wet Bulb Temperature Measurement Conclusions

A couple of important and interesting things to note about all of this.

Accuracy Comes at a Price

If you wanted to buy a sling psychrometer like my Bacharach tool, it is currently priced at just a bit over $100 on Grainger.  In contrast, the Vaisala HM40 starts at $534 for an instrument like the one in the picture and runs up to $1,168 for one with a longer, separate hand-held probe.

For even tighter accuracies, you might be looking at something like a Vaisala HMT330, which is the standard instrument used by the U.S. Climate Reference Network.  Those can start at about $1,900 and go up from there to as high as $3,000 or $4,000 depending on accessories and the specific application targeted for the device.  For the added dollars, you get +/- 1% RH accuracy, so a 0.5% improvement over the HM40 that I have.  The temperature accuracy is +/- 0.36°F, which is the same as the HM40.

The Air Inside Came from Outside

If you assume an instrument similar to the HMT330 is being used at the Automated Surface Observation System (ASOS) station at the Portland International Airport (PDX), here is what it thought the humidity was outside when I was taking my readings inside the office.


It can be helpful when you are trying to understand what is going on in a building to remember that the air that is inside came from the outside.   That means that the outdoor psychrometric conditions establish the baseline for the conditions inside the building. 

Most of the time, unless you have a bunch of open desiccant containers lying around, the dewpoint/specific humidity inside will be no lower than the dewpoint/specific humidity outside.  In fact, it will likely be a bit higher because there are things going on in most buildings that add moisture to the air in addition to adding heat. 

The ratio of sensible heat or energy added inside the building to the total heat or energy added inside the building (i.e. sensible plus latent energy) is called the Sensible Heat Ratio or SHR.   On a psych chart, if you plot that line using the SHR scale, it gives you a “visual” on how much energy is added to a parcel of air as it goes from one state to a different state. 
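The SHR calculation itself is simple enough to sketch in a few lines.  This is my own illustration using standard IP-unit constants (0.244 Btu/lb·°F for moist air sensible heat and roughly 1076 Btu/lb for the latent heat of the added moisture); the two air states are made up for illustration, not my actual readings.

```python
# Hedged sketch of the Sensible Heat Ratio between two moist air states
# (IP units). The state values are made up for illustration.
def shr(t1, w1, t2, w2):
    """SHR for air moving from state 1 to state 2.

    t in degF, w = humidity ratio in lb water per lb dry air."""
    q_sensible = 0.244 * (t2 - t1)   # Btu/lb dry air (moist air specific heat)
    q_latent = 1076.0 * (w2 - w1)    # Btu/lb dry air (approximate latent heat)
    return q_sensible / (q_sensible + q_latent)

# Outdoor air at 45 F / 0.0050 lb/lb warmed and humidified to 70 F / 0.0080:
print(f"SHR = {shr(45.0, 0.0050, 70.0, 0.0080):.2f}")
```

The example numbers happen to land near 0.65, in the same neighborhood as the implied SHR for my office discussed below.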

It also helps you determine the leaving air temperature required from a cooling coil given a specific sensible and latent load to be addressed in a zone served by the coil.  I discuss that in a bit more detail in the blog post about how to use Ryan’s free psych chart resource if you are interested.

When I plotted the implied SHR line for my office assuming that the air outside my house was about the same as the air at the airport, which is 13 miles East-Northeast of me, it implied that the SHR was about 0.65 (the orange line in the image above). 

Initially, that seemed a bit high;  typically, the SHR for a house or office will be in the 0.75 to 0.95 range.  But my office has a number of moisture sources in it, some of which are a bit out of the ordinary including:

  • A 30 gallon goldfish aquarium that runs with a water temperature of about 75°F (my office runs at about 67-69°F in the winter)
  • A candle burning (combustion processes generate water vapor)
  • A number of plants
  • A hot cup of coffee (not out of the ordinary I suspect)
  • An aging engineer who had just been vigorously slinging a psychrometer (probably not very common)

Plus, Kathy was doing some cooking up-stairs and that was generating enough moisture that the windows on the French Doors to the deck from the kitchen had some dew on them.  So I am not surprised in hindsight by the higher than normal SHR.

My main point in bringing all of this up is to show how challenging it can be to measure relative humidity and how the humidity indoors is going to be related to the humidity outdoors somehow.  For me, these things have been important considerations to keep in mind as I work with existing buildings.

Maintained Accuracy Comes at a Price

Short of breaking a thermometer (what happens if you don’t avoid walls, ducts, pipes, associates, etc. in the vicinity of your slinging), there is nothing much to cause my Bacharach instrument to go out of calibration.   My bifocals are probably the biggest issue along with my age because ….

… what was I saying?

Anyway,  if you did need to “recalibrate” the sling, then it would involve ordering 2 new thermometers for about $31 each.

In contrast, Vaisala recommends recalibration of their instrument once a year.   You can have that done at the factory for $292 per year.  If you want an extended warranty that covers parts, no questions asked, for three years plus calibration plus priority service plus shipping and handling, then it costs $380 per year.  Alternatively, you could purchase a calibration tool  for just under $1,000 and do the calibration on your own.

If you don’t do the calibration, then the industry data out there suggests that at some point, probably sooner rather than later, the Vaisala will have about the same accuracy as the Bacharach.  This link (page down a bit after you go there) takes you to a page where there is a report done by the Iowa Energy Center for the National Building Control Information Program (NBCIP) that looked at out of the box accuracy and maintained accuracy for blind purchased humidity transmitters.  The results were all over the place as you can see from the images below, which were extracted from the report.


Granted, the report is several years old now.  But the technology in the electronic relative humidity instruments we are using currently is the same basic technology that was being used back when the report was developed.

My Sling Psychrometer is Probably as Good or Better than the Average Humidity Sensor Out There in the Field

Given the data in the report, which was specifically targeted at commercial building HVAC sensors, the data I glean from my Bacharach is probably about as good or even better than the average DDC system relative humidity sensor, especially if a high accuracy sensor was not specified and especially if the sensor has not received regular maintenance.

In the experience of FDE as a whole, calibrating a humidity sensor annually would be a minimum requirement.  For critical applications, it is probably desirable to calibrate a humidity sensor every three to four months.  This conclusion is generally consistent with the NBCIP Humidity Transmitter Product Testing Report Supplement (which looks at the long term accuracy of the sensors covered by the report mentioned previously) although:

  • The amount of drift varied from manufacturer to manufacturer, and
  • The point in time when the most drift occurred varied from manufacturer to manufacturer.

For most buildings, where I am just trying to get a general idea of what might be going on, I am pretty comfortable with numbers from my sling if I don’t have anything else with me at the time.  But that is contingent on using good technique and not taking the number I get more seriously than warranted given the stated accuracy of the device.  If the system I am looking at uses a well maintained, high accuracy sensor, then that is a different situation in terms of what I would do with the data I got from my sling and I would likely want to use my HM40.

Back to Contents

The Weed Patch

Just in case you wanted to know …

The Same Symbol can Mean Different Things

With regard to the hf term in the list earlier in the blog post;  if you are doing chemistry, it is used to represent the enthalpy of formation, not the enthalpy of a saturated liquid.  The way I think of enthalpy of formation is that it is the amount of energy it took to create the substance in the first place. 

Enthalpy of formation values are based on molar quantities (the chemistry unit for amount of something usually in terms of number of atoms or molecules or fundamental particles) and are referenced to a specific temperature and pressure condition, typically 1 atmosphere of pressure and 298.15 K as I understand it (about 77°F).  In contrast, the enthalpy of a saturated liquid is typically given on a Btu per pound basis for a specific saturation temperature and pressure. 

The way I think of it is the saturated liquid enthalpy value includes the enthalpy of formation along with the additional energy associated with the difference between the saturation temperature you are working with and the reference temperature for the enthalpy of formation.
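To make the molar-basis vs. per-pound-basis point concrete, here is a quick unit conversion using the commonly tabulated standard enthalpy of formation of liquid water (−285.83 kJ/mol); the conversion factors are standard, but treat the script as an illustration of the bookkeeping rather than anything you would need for day to day psychrometrics.

```python
# Units sketch: chemistry tables give enthalpy of formation per mole, while
# psychrometrics works per pound. Converting the standard enthalpy of
# formation of liquid water (-285.83 kJ/mol at 25 C / 1 atm) to Btu/lb:
KJ_PER_MOL = -285.83            # liquid water, standard reference conditions
G_PER_MOL = 18.015              # molar mass of H2O, grams per mole
KJ_PER_KG_TO_BTU_PER_LB = 0.429923

kj_per_kg = KJ_PER_MOL / G_PER_MOL * 1000.0   # kJ/mol -> kJ/kg
btu_per_lb = kj_per_kg * KJ_PER_KG_TO_BTU_PER_LB
print(f"{kj_per_kg:,.0f} kJ/kg = {btu_per_lb:,.0f} Btu/lb")
```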

In some ways, the industry is not exactly good about using consistent sets of symbols and terms and you will find different symbols used in different discussions about the same concept by different resources, including some of the ones I will mention.  So it is important to make sure you know what a symbol means in the context in which it is being used.  And it’s also important to document your use of a symbol in anything you are working on, just so there is no confusion.

Towards that end, because I am trying to write this so that it is approachable for folks who are wanting to get into working with existing buildings and learn building science, but who don’t necessarily have engineering backgrounds, I am going to use the term “energy” instead of enthalpy most of the time unless I need to explain something very specific in the context of enthalpy.  And I will use the symbol “Q” with subscripts like S for “sensible” or L for “latent”.  So, for instance, I will use QSAirIn instead of hAirIn to represent the sensible energy content of a parcel of air entering a process.

That’s sort of a judgment call on my part. But in my personal career path, I came into this topic from the perspective of a somewhat math-phobic airplane mechanic.  And from that perspective, the term energy was less intimidating than the term enthalpy.  From a technical purity perspective, a few objections are probably justified.   But in the context of trying to promote a broader understanding, I am taking a few liberties and acknowledging that here (and hope it makes it easier instead of harder to understand).

Back to Contents

Enthalpy of Formation is Related to but Not the Same Value as Enthalpy on a Psych Chart

While you probably don’t need to worry about it too much in the real world, day to day, building operations and commissioning environment, out here in the weed patch, it is probably worth noting that:

  1. The enthalpy of formation of a substance is typically referenced to some baseline condition, which typically is a pressure of 1 standard atmosphere and a temperature of 25°C (77°F).
  2. The enthalpy of formation is based on forming one mole of a substance, which is the unit of “amount” used in chemistry and related to the mass of a very specific number of fundamental units, like atoms or molecules . 
  3. The enthalpy of formation of an element is considered to be 0 when the element is in its most stable form at the reference conditions.
  4. That most stable form matters.  For instance, a carbon atom can exist as graphite, diamond, or a gas with enthalpies of formation of 0, 1.9, and 716.67 kilojoules per mole (kJ/mol) respectively.   In contrast, the enthalpy of formation for oxygen is zero for diatomic oxygen, which is the form it has in the air around us.  A single atom of oxygen has an enthalpy of formation of 249 kJ/mol.  For ozone (O3) the enthalpy of formation is 143 kJ/mol.
  5. The enthalpy of formation can be negative or positive.   In other words, sometimes going from the most stable form to a different form involves  a release of energy to the surroundings (exothermic, negative enthalpy of formation). But other times, energy will be absorbed from the surroundings (endothermic, positive enthalpy of formation).
  6. Processes can also be endothermic or exothermic.  For instance, ice melting is generally considered to be a process vs. a chemical reaction.  But from what I can tell, the difference between a reaction and a process is even more into the weeds than I got here so probably not a huge deal in the context of our discussion in this blog post.

Back to Contents

Air is Not a Molecule

It’s important to remember that air is a mixture of elements, primarily Nitrogen and Oxygen but including a number of others.


These elements are not bonded together;  they are all just bouncing around together with Boyle’s Law and Dalton’s Law being good models for how we think they are going about doing it.   In other words, there is no such thing as an air molecule in the technical sense, even though we often talk about air molecules. 

That means that the term “enthalpy of formation” is not really appropriate for air, at least that is how I understand it.   The enthalpy of a parcel of air is the sum of the enthalpies of the mixture of pure substances it contains, each of which, being a pure substance, has an enthalpy of formation. 

That is not totally true in the general case because mixing substances can often release or absorb energy.  But for gases, this effect is generally negligible and we can usually just add the enthalpies of the constituent elements.

Back to Contents

Enthalpy Values Vary from Source to Source;  How Can That Be?

If you go to a chemistry book and add up the enthalpies of the constituents of air, you do not end up with a value of 0 Btu/lb at 0°F, which is what most psych charts show for the enthalpy of totally dry air. 

Furthering the confusion, if you look at a chart in SI units, you find that the enthalpy is 0 kJ/kg at 0°C.   Since 0°F and 0°C are two different temperatures, you might wonder how the enthalpy of air could be 0 at both of them, or at least I did.

The answer to that is that what we typically are concerned about when working with psychrometrics is the change in enthalpy, not the absolute value of it.  At some point, for psych charts, the zero values were arbitrarily referenced as indicated above. 
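A quick script makes the point that the zero reference is arbitrary.  This uses the simple dry-air sensible enthalpy models behind each chart (0.240 Btu/lb·°F referenced to 0°F for IP, 1.006 kJ/kg·K referenced to 0°C for SI), which is my own simplification for illustration:

```python
# The zero point is arbitrary; only enthalpy *differences* matter.
def h_ip(t_f):      # Btu/lb of dry air, referenced to 0 F
    return 0.240 * t_f

def h_si(t_c):      # kJ/kg of dry air, referenced to 0 C
    return 1.006 * t_c

# Warm dry air from 50 F (10 C) to 86 F (30 C) and compare the changes:
dh_ip = h_ip(86) - h_ip(50)          # Btu/lb
dh_si = h_si(30) - h_si(10)          # kJ/kg
print(dh_ip * 2.326, dh_si)          # 2.326 kJ/kg per Btu/lb; both agree
```

The absolute values on the two charts differ because of the different reference states, but the change in enthalpy for the same process comes out the same either way.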

Back to Contents

We Treat Air as an Ideal Gas Even Though the Water Vapor it Contains Does Not Behave That Way

For our purposes in HVAC psychrometrics, we generally consider air to be a superheated gas that behaves as an ideal gas, meaning it follows the ideal gas relationship.


In words, the ideal gas equation says, among other things, that if the temperature changes, the pressure and volume will change in proportion. 
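A quick numeric sketch of that proportionality, using the gas constant for dry air (about 53.35 ft·lbf per lb·°R) at constant atmospheric pressure:

```python
# For a fixed parcel of ideal gas at constant pressure, absolute temperature
# and volume scale together.  R for dry air is about 53.35 ft-lbf/(lb-degR).
R_AIR = 53.35                       # ft-lbf / (lb * degR)
P = 14.696 * 144.0                  # atmospheric pressure, lbf/ft^2

def specific_volume(t_f):
    """ft^3 per lb of dry air from the ideal gas law, v = R*T/P."""
    return R_AIR * (t_f + 459.67) / P

v70 = specific_volume(70.0)
v120 = specific_volume(120.0)
print(f"{v70:.2f} ft3/lb at 70F, {v120:.2f} ft3/lb at 120F")
# The ratio of the volumes equals the ratio of the absolute temperatures:
print(round(v120 / v70, 4), round((120 + 459.67) / (70 + 459.67), 4))
```

The 70°F value lands near the familiar 13.3 ft³/lb figure for standard air, and the volume ratio tracks the absolute temperature ratio exactly, as the ideal gas law says it should.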

But, if you cool air enough (to about –318°F), it will become a liquid, which is a phase change, and a phase change is a deviation from ideal gas behavior.   When something goes through a phase change, the temperature and pressure hold constant while there is a very large change in volume as energy is added to the system.

If you use a steam table to get scientific about this phenomenon for water, like the  Keenan and Keyes table below …


… and compare the specific volume of saturated liquid water at atmospheric pressure with the specific volume of saturated water vapor at atmospheric pressure, you would find that they differ by a factor of about 1,600.  In other words, one cubic inch of liquid water becomes about 1,600 cubic inches of water vapor when you boil it.
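The arithmetic behind that factor is worth seeing once.  Using commonly tabulated steam table values for saturated water at atmospheric pressure (212°F / 14.696 psia):

```python
# Checking the ~1,600x expansion figure with saturated water properties at
# atmospheric pressure, per standard steam table values:
v_f = 0.01672    # ft^3/lb, saturated liquid at 212 F
v_g = 26.80      # ft^3/lb, saturated vapor at 212 F
print(f"expansion factor: {v_g / v_f:,.0f}")
```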

The reason this matters is that unless the air you are working with is absolutely devoid of moisture (RH = 0%), one of the constituents bouncing around in the parcel of air with the other molecules listed above is water in a vapor state.

That means even though the air in our HVAC systems will never become cold enough to change phase (even in places like Minnesota or Siberia or Antarctica), the water vapor in it can and will.

So, confusingly enough, one of the common constituents of air – water vapor – does not behave as an ideal gas some of the time.  But since it is such a small constituent, even if the air is saturated, for our purposes, we can assume ideal gas behavior for air.

But we can’t assume that the water will not change phase in our systems or in our environment, and that is important to us for a whole bunch of reasons.

Back to Contents

Sensible Energy Lost = Latent Energy Gain for a Moist but not Totally Saturated Parcel of Air

The equation I use for the conversion of sensible energy to latent energy in the body of the blog post is associated with a special case;  i.e. the case where the air entering the adiabatic saturator is totally dry.    Most of the time, that is not the case;   the air entering the process will already have some water vapor in it.   And the water vapor brings energy into the process with it, just like the dry air did.  

That means that in the more general case (and the more realistic case in terms of what you will actually run into out in the field), the statement the latent energy increase is exactly equal to the sensible energy decrease would look more like this on a per pound of air basis, using psychrometric parameters.


The darker green term represents the sensible energy content of the water vapor entering the process, and is the difference relative to the special case I used in the body of the blog post.
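For readers who want to experiment with the balance, here is a minimal sketch of it in Python.  The constants and function names are my own assumptions based on common psychrometric practice, not the exact terms in the figure above:

```python
# Assumed psychrometric constants, common rule-of-thumb values:
CP_DRY_AIR = 0.24   # Btu/(lb·°F), specific heat of dry air
CP_VAPOR = 0.45     # Btu/(lb·°F), specific heat of water vapor
H_FG = 1060.0       # Btu/lb, approximate latent heat at HVAC temperatures

def sensible_loss(t_db_in, t_db_out, w_in):
    """Sensible energy given up per lb of dry air, including the term for
    the water vapor that entered with the air (the 'darker green' term)."""
    return (CP_DRY_AIR + w_in * CP_VAPOR) * (t_db_in - t_db_out)

def latent_gain(w_in, w_out):
    """Latent energy picked up per lb of dry air as moisture evaporates.
    w is humidity ratio in lb of water per lb of dry air."""
    return (w_out - w_in) * H_FG
```

For an adiabatic saturation process the two functions must return the same number, so given an entering condition and a leaving humidity ratio you can solve for the leaving dry bulb temperature.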

Back to Contents

Specific Heat – A Measurable Quantity

Specific heat (also called heat capacity) is a measurable quantity that is defined as the amount of energy it takes to raise a unit mass of a substance through a unit temperature change.  Specific heat values for specific substances can be found in tables in thermodynamic and chemistry text books. 
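In equation form, the definition is Q = m × c × ΔT.  A minimal sketch, with a hypothetical helper name:

```python
def heat_required(mass_lb, c_btu_per_lb_f, delta_t_f):
    """Energy (Btu) to raise mass_lb of a substance by delta_t_f °F,
    given its specific heat c in Btu/(lb·°F)."""
    return mass_lb * c_btu_per_lb_f * delta_t_f

# With water's specific heat of about 1.0 Btu/(lb·°F), raising 10 lb
# of water by 20 °F takes about 200 Btu:
print(heat_required(10, 1.0, 20))
```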

In fact, for water, you could figure it out from your copy of Keenan and Keyes (you all have one of those, right?) …


… or by creating your own steam table using REFPROP.


For instance, in the Keenan and Keyes table above, the saturated liquid enthalpy hf (energy content) of saturated water at 209.56°F is 177.61 Btu/lb.  At 212°F, it’s 180.07 Btu/lb. Doing the math:
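In code, the same arithmetic looks like this, using the enthalpy and temperature values quoted from the table above:

```python
# Saturated liquid enthalpies from the Keenan and Keyes table:
h_f_1 = 177.61   # Btu/lb at 209.56 °F
h_f_2 = 180.07   # Btu/lb at 212 °F

# Specific heat = change in energy content per unit temperature change
cp_liquid = (h_f_2 - h_f_1) / (212 - 209.56)
print(round(cp_liquid, 3))  # ≈ 1.008, i.e. about 1 Btu/(lb·°F) for water
```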


If you were to go through a similar process for the saturated water vapor over the temperature range I show in my REFPROP table, which covers temperatures commonly encountered in HVAC systems, you would come up with the 0.45 Btu/lb/°F value that is shown in the sensible equals latent energy equation above.
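The same approach works for the vapor.  Here is a sketch using representative saturated vapor enthalpies from a standard steam table; these specific values are my assumptions, not numbers copied from the REFPROP screenshot in the post:

```python
# Assumed saturated vapor enthalpies from a standard steam table:
h_g_32 = 1075.5   # Btu/lb at 32 °F
h_g_100 = 1105.1  # Btu/lb at 100 °F

# Average specific heat of saturated water vapor over the HVAC range
cp_vapor = (h_g_100 - h_g_32) / (100 - 32)
print(round(cp_vapor, 2))  # close to the 0.45 Btu/(lb·°F) value used above
```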

There are similar resources out there for air.  This graph was generated using values from a thermodynamics text book.


Back to Contents

Hopefully, all of this has given you a sense of the fundamental principles behind an evaporative cooling process.   In the next post, I will build on this and take a look at real world evaporative cooling processes.  So if you thought this was exciting, wait until you see that.


David Sellers
Senior Engineer – Facility Dynamics Engineering

Posted in Air Handling Systems, HVAC Calculations, HVAC Fundamentals, Psychrometrics

Universal Translator Workshop

Sorry to disappoint Star Trek fans, but this is not a workshop about a device used to decipher and interpret alien languages into the native language of the user.  Having said that, if you are into building systems commissioning and the related field work, then you may find this is even more exciting (in a nerdy sort of way).

Specifically, this post is a “heads-up” to let you know that a no-cost training that is focused on using the Pacific Energy Center’s Universal Translator tool will be offered on November 13, 2018;  you can attend in person or via the internet.

The reason this is exciting news is that the Universal Translator is an ever evolving, feature rich tool that supports trend analysis and diagnostics of building system data retrieved from DDC control systems, energy management systems, and data loggers.  The image to the left illustrates some of its capabilities, including:

  • Combining multiple data sets, resampling them, and capturing useful information like the maximum and minimum temperature for an interval (top chart),
  • Regressions, a form of scatter plot that presents your data in a way that lets you look for tell-tale shapes in your data clouds (second chart from the top),
  • Concurrent comparison of data series from multiple similar systems over the same time frame (second chart from the bottom), and
  • Colorful carpet plots that allow you to contrast multiple variables in a three dimensional visualization (bottom chart)

to name just a few of the features illustrated in the current brochure.  You can download the current brochure from the Universal Translator page on our Commissioning Resources web site, where you will also find links to the website that will allow you to access a no-cost copy of the tool and the YouTube video channel that has been created to support it.

So bottom line, in my opinion, it will be well worth your time to visit the UT Online website and obtain your own personal copy of the tool.  And you may want to consider attending at least a portion of the upcoming class, either in person or via the internet.  I have a few other commitments that day but in-between things, I plan to join the internet session and brush up on my UT skills.  So hopefully, I will “see” you then.


David Sellers
Senior Engineer – Facility Dynamics Engineering

Posted in Uncategorized