Control System Fundamentals

This post supports a string of posts I will start next, which in turn support an exercise in one of the classes I am involved with at the Pacific Energy Center.  In those posts, we will look at how you can use Eikon for Educators or WindLGC to develop and verify control logic.


Applying these packages as design tools is very useful since they both allow you to simulate your logic before it goes to the field.  Having that ability allows you to find and fix any bugs that might exist.  And designer or not, using the tools will help you learn about logic and how control systems work.

If you are involved with building systems in any way, be it as a designer, an operator, a commissioning provider, or a field technician, it is very helpful to have a basic understanding of control system fundamentals and the building blocks used to develop control logic. My hope is that the slide modules provided with this post will facilitate that a bit; they are the slides I generally use when I teach a control system basics class.

These slides are not narrated, but I think the graphics and the links to resources in them can convey the basic idea behind each slide.  They certainly are not a comprehensive presentation of the control system topic.  But they do provide information on most of the fundamental principles you might need to design or understand a control system and the logic and programming behind it.

The post is divided into three main parts.  Module Organization provides an overview of the content in each module.  Using the Modules explains how to download and run the modules if you are interested.  Taking Notes on Slides explains how you can use Microsoft OneNote if you want to add your own notes to the slides as you work through them.

Module Organization

The slides are divided into seven modules, as listed below.  If you click on a link, it will take you to a description of the module, and there will be a link from there to the actual module on my Google Drive.  The “Back to Contents” link at the end of each description will bring you back here.

Introduction and Overview

This module starts out by defining the basic, functional goal of a control system.


It then looks at what we have been able to do with control systems historically, from fairly early in the industrial age to something representing the epitome of technology, and compares that to where we seem to be in terms of applying control systems in the HVAC industry currently, which many (including myself) would say is not particularly good.

The module then proceeds to explore why (in my opinion) the current process may be dysfunctional and proposes a process (that I and others I know have used) to address what I believe might be the causes of the dysfunction.  We should and can do better than we are on most projects these days.  A narrative that complements the information in the slides (which is basically what I talk about as I go through them) is included in the second chapter of the Control Design Guide.

Next, the module looks at some of the National Electric Code requirements that impact the installation of a current technology control system, including a case study that illustrates how these requirements might affect an installation in a tight equipment room.  Current technology control systems are basically electrical systems.  Thus we need to make sure we install them in accordance with the requirements of the codes that govern electrical systems.

From there, the module looks at some basic functional testing requirements that will help ensure that your design goals are realized.  As the Designer or Owner of a control system, you should have a fairly good sense of what you expect or need it to do.  It turns out that if you know that, and test for it, you can find out whether the system you have is delivering, even if you are not up to speed on the details, networking, and theory behind a modern DDC system.

The module concludes by introducing the basic building blocks for most control systems:

  • Inputs
  • Outputs
  • Control Processes
  • Set Points

The modules that follow explore these topics in more detail.

Click here if you want to download the Introduction module from my Google Drive.

Click here to go back to contents.


Inputs

This module focuses on the “Inputs” building block and starts out by looking at some of the basic concepts behind sensors and transmitters.  Although we tend to use those terms interchangeably, at a technical level there actually is a difference, and understanding it is important from an application standpoint.


The module then looks at how the dynamics that are going on in an operating system might impact what a sensor and transmitter “see”.  It turns out that for accurate results, you need more than an accurate sensor and transmitter.  The hardware needs to be installed correctly and arranged so that the data it picks up is truly representative of what is going on in the system.  Accuracy can also be impacted by application considerations for the sensor and transmitter itself, including position effects and the effect of mass.

Calibration is an important component of accuracy, but there is often more to it than simply verifying the sensor at one operating condition.  For one thing, the accuracy of many sensors will vary over their range.  And even if that is not the case, a single point calibration may not deliver accuracy over the entire range.  These topics as well as others are focused on in the Calibration and Accuracy section of this module.
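As a simple illustration of why a single-point check may not be enough, a two-point calibration lets you correct both the offset and the span of a transmitter.  The reference-bath and as-found readings below are made-up numbers, and the code is just a sketch of the arithmetic, not any particular vendor’s procedure.

```python
# Sketch of a two-point (zero and span) calibration correction for a
# temperature transmitter. The reference-bath and as-found readings
# are made-up numbers for the example.

def two_point_correction(ref_lo, read_lo, ref_hi, read_hi):
    """Return (slope, offset) so that corrected = slope * reading + offset."""
    slope = (ref_hi - ref_lo) / (read_hi - read_lo)
    offset = ref_lo - slope * read_lo
    return slope, offset

# As-found: the transmitter read 34.2°F against a 33.0°F reference and
# 179.1°F against a 180.0°F reference.
slope, offset = two_point_correction(33.0, 34.2, 180.0, 179.1)

def corrected(reading):
    return slope * reading + offset

print(round(corrected(34.2), 2))   # recovers the low reference
print(round(corrected(100.0), 2))  # a corrected mid-range reading
```

A single-point check at, say, 34.2°F would only have caught the offset; the span error at the top of the range would have gone unnoticed.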

The bottom line is that there are a lot of things between what we see at the workstation and what is actually going on out in the field.  All of them can make a difference, and of particular importance is the sampling frequency.  The Nyquist Theorem allows you to predict an appropriate sampling frequency, and if you don’t pay attention to it, you may be misled by your data.  The final section of this module explores this topic, including examples of what can go wrong.
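To make the sampling frequency issue concrete, here is a small sketch (with made-up numbers) of a 10-minute temperature oscillation logged at two different intervals.  The Nyquist criterion says you need a sample at least every 5 minutes to capture a 10-minute cycle; the 9-minute trend below aliases it into what looks like one slow drift.

```python
import math

# Sketch of aliasing: a 10-minute temperature oscillation (think of a
# hunting control loop) logged at 1-minute and 9-minute intervals.
# Nyquist requires a sample at least every 5 minutes to see the cycle.

PERIOD_MIN = 10.0  # true oscillation period, minutes (assumed)

def signal(t_min):
    """Space temperature with an assumed 5°F swing around 72°F."""
    return 72.0 + 5.0 * math.sin(2.0 * math.pi * t_min / PERIOD_MIN)

# Sample times are offset by half a minute just to avoid landing
# exactly on the peaks and zero crossings.
fast = [signal(k + 0.5) for k in range(60)]      # 1-minute trend
slow = [signal(9 * k + 0.5) for k in range(7)]   # 9-minute trend

def direction_changes(samples):
    """Count how many times the trend reverses direction."""
    diffs = [b - a for a, b in zip(samples, samples[1:])]
    return sum(1 for d1, d2 in zip(diffs, diffs[1:]) if d1 * d2 < 0)

print(direction_changes(fast))  # the oscillation is plainly visible
print(direction_changes(slow))  # aliased into a single slow swing
```

The counts of direction reversals tell the story: the 1-minute trend reverses twice per cycle, while the 9-minute trend shows just one slow reversal over the hour, which is the long apparent period you would predict from the difference between the signal and sampling frequencies.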

Click here if you want to download the Inputs module from my Google Drive.

Click here to go back to contents.


Outputs

Outputs are another important building block in our control systems, and this module starts out by looking at the types of actuators and the signals that are typically used to cause them to move, both pneumatically and electrically.


Just because an actuator has a linear signal applied to it does not mean that the response in the system it is controlling will be linear.   In fact, most of the time it won’t, due to the way system pressures and flows interact with the valves and dampers, the characteristic curves for the valves, dampers, and other final control elements in our systems, and the linkages that connect the actuators with the final control elements.  These topics, including examples, are explored as a part of this module.
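One way to see the distortion is to compare a valve’s inherent characteristic with its installed characteristic at a given authority (the ratio of the pressure drop across the fully open valve to the total circuit pressure drop).  The rangeability and authority values below are assumptions for illustration, and the series-resistance formula is the textbook idealization, not a model of any specific valve.

```python
import math

# Sketch of inherent vs. installed valve characteristics. The
# rangeability (R) and authority values are illustrative assumptions;
# the installed-characteristic formula is the usual idealization for a
# valve in series with a fixed system resistance.

R = 50.0  # assumed valve rangeability

def inherent(x):
    """Equal-percentage inherent characteristic; x = stem position, 0..1."""
    return R ** (x - 1.0)

def installed(x, authority):
    """Fraction of full flow at stem position x for a given authority."""
    f = inherent(x)
    return f / math.sqrt(authority + (1.0 - authority) * f * f)

for x in (0.25, 0.50, 0.75, 1.00):
    print(x, round(inherent(x), 3), round(installed(x, 0.2), 3))
```

With an authority of 1.0 (a valve that takes the entire circuit pressure drop), the installed and inherent curves coincide; as authority drops, the installed curve shifts toward quick-opening and control near the seat gets touchy.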

Click here if you want to download the Outputs module from my Google Drive.

Click here to go back to contents.

Control Processes

Control processes manage our outputs based on our inputs and the set point for the process.  So they are a pretty important part of the control system.  This module starts by looking at where we came from in the HVAC industry, starting with the analog days, as illustrated below in the photo, which is an analog, pneumatic PID (Proportional plus Integral plus Derivative) controller and moving forward through the evolution to DDC.


The content in this module includes a section that takes a detailed look at how pneumatic one pipe and two pipe controllers work and their pros and cons relative to each other, the pros and cons of pneumatic vs. electric actuators (pneumatic control is not the same as pneumatic actuation), and pneumatic control resources before moving on to DDC systems.

The module then explores digital and analog control processes, including floating control, Proportional control (P-only), and Proportional plus Integral plus Derivative (PID) control.  This part of the module includes examples illustrating what happens when you narrow down the throttling range on a P-only control process and what happens when you take a P-only process and add integral gain and derivative gain to it, including the units of measure associated with both types of gain.
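A minimal discrete PID calculation looks something like the sketch below.  The gains, time step, and the toy first-order process standing in for a cooling coil are all illustrative assumptions, not tuning recommendations.

```python
# Minimal discrete PID sketch driving a toy first-order "cooling coil"
# process toward a 55°F discharge set point. Gains, time step, and the
# process constants are illustrative assumptions, not tuning advice.

kp, ki, kd = 2.0, 0.5, 0.0   # proportional, integral, derivative gains
dt = 1.0                     # time step
setpoint = 55.0

temp = 70.0                  # process variable, starts above set point
integral = 0.0
prev_error = temp - setpoint

for _ in range(200):
    error = temp - setpoint              # direct acting: warm -> more cooling
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    output = max(0.0, min(100.0, output))  # clamp to a 0-100% signal
    prev_error = error
    # toy process: output drives the temperature down, and the
    # temperature drifts back toward 70°F when the output is zero
    temp += dt * (-0.05 * (temp - 70.0) - 0.02 * output)

print(round(temp, 2))  # settles near the 55.0°F set point
```

Plotting the temperature at each step would show the classic response curves the slides illustrate: narrow the throttling range (raise kp) and the loop responds faster but can begin to hunt, while the integral term is what eliminates the steady-state offset a P-only loop leaves behind.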

The open and closed loop tuning methods are also explored including illustrating what the response of a well tuned control process should look like.  It turns out that lags in the control process can have a huge impact on your ability to tune it tightly and get it to settle out in a reasonable period of time.  The module illustrates this along with showing how the open loop tuning method can be used to identify the lags in your system, which will provide a lot of insight into where you can expect to end up in terms of response and settling time once you have tuned your process.
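For reference, the classic Ziegler-Nichols closed-loop rules convert the ultimate gain and ultimate period you measure during such a test into starting-point gains.  The Ku and Pu values in the example below are hypothetical.

```python
# Classic Ziegler-Nichols closed-loop tuning rules. Ku is the
# proportional-only gain that just produces sustained oscillation and
# Pu is the resulting period; both come from a test on the real loop.
# The Ku and Pu values in the example are hypothetical.

def ziegler_nichols(ku, pu):
    """Return {mode: (kp, ki, kd)} using the classic table."""
    return {
        "P":   (0.50 * ku, 0.0,            0.0),
        "PI":  (0.45 * ku, 0.54 * ku / pu, 0.0),
        "PID": (0.60 * ku, 1.20 * ku / pu, 0.075 * ku * pu),
    }

# Example: a loop that sustains oscillation at Ku = 8 with a 4-minute period.
for mode, gains in ziegler_nichols(8.0, 4.0).items():
    print(mode, tuple(round(g, 3) for g in gains))
```

Keep in mind that most DDC vendors express the integral and derivative settings differently (reset time, repeats per minute, etc.), so gains computed this way usually need to be converted into the units your system uses.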

The module concludes by providing you with some resources that will give you a deeper understanding of PID control and all of its variations as well as some general rules to apply as you go about tuning control loops and starting up control systems.  The rules are generally the same as the rules you would use in any functional testing process, probably because the control system plays such an important role in virtually all of our HVAC systems.

Click here if you want to download the Control Processes module from my Google Drive.

Click here to go back to contents.

Control Logic

This module supports the previous module by taking a closer look at the logic behind control processes, including how that logic is developed for a current technology system.


Having a sense of the logic required to implement an improvement to a system is very helpful if you are involved with commissioning and operations, whether in new construction, existing buildings, or otherwise, which is one of the points made early on in this module via an EBCx (Existing Building Commissioning) case study.

The module then looks at relay logic, which is how digital logic was accomplished in the “olden days”, and which is the foundation for modern digital logic.   This includes an example that compares a narrative sequence with the relay logic required to accomplish it.  It then contrasts the relay logic with the DDC logic that would be used to accomplish it.

The module concludes by comparing the two primary ways logic is implemented in current technology systems along with guidelines and resources you can use to help you develop the logic for the systems you are working with.

Click here if you want to download the Control Logic module from my Google Drive.

Click here to go back to contents.

Set Points

Set points are critical in terms of coordinating a logic sequence that might apply to a given system type in any location with the specifics of a given location, the loads the system serves, and the nuances of the equipment serving the system.


For example, the perfect set point to return an economizer process to minimum outdoor air on a hot and humid day in San Francisco, California could be an energy efficiency disaster if you used it on a similar air handling system serving a surgical suite in St. Louis, Missouri.

This module uses a number of case studies to illustrate why the set point you select for a given control process needs to consider the operating environment, the nature of the load, and the characteristics of the equipment serving the system if your goal is optimal performance and efficiency.

Click here if you want to download the Set Points module from my Google Drive.

Click here to go back to contents.

An Exercise

This module is the exercise that I will be working with in the blog posts that follow.


The intent of the module is to let you work with Automated Logic’s Eikon for Educators or Idec’s WindLGC to build and test some basic logic and then modify it to achieve energy savings.  Both of these tools are free resources and are fully functional versions of their respective manufacturers’ programming software, including the ability to simulate and test your logic once it is complete.  The only thing you can’t do is download the logic to an actual controller.

The exercise starts by having you put together the logic for a control loop that will modulate the steam valve on the heat exchanger serving the hot water system in the system diagram above so that a constant discharge temperature is maintained.  Once you have that working, you modify the logic to add two energy saving features.

The point of the exercise is to help you become familiar with logic by working with it.  While the building blocks you will be working with are specific to the manufacturer’s software you will be using, I think you will find that once you understand how to develop logic for one vendor, the knowledge is easily transferable to other vendors.  At least, that is how it was for me when I first started doing control system programming.

The other point of the exercise is to illustrate how having the ability to understand, work with, and communicate logic can open the doors to some very low-cost/no-cost savings opportunities.   I plan to illustrate this by developing the cost/benefit metrics for the two improvements you will make to the logic.  So, not only will you learn about logic, you will learn a bit about energy savings calculations and cost estimating by the time you are done with the exercise and related blog posts.

Click here to download the Exercise Module from my Google Drive.

I have provided the system diagram that goes with the exercise as both a PowerPoint file and a .pdf file.   The reason is that if you have PowerPoint and are interested in developing system diagrams, having the system diagram in the PowerPoint format will give you a start on some symbols since you can copy and paste them from my diagram.  If you don’t have PowerPoint, the .pdf version of the diagram is really all you need to do the exercise (along with the programming software mentioned above).

Click here to download the system diagram that goes with the exercise as a PowerPoint file.  Click here to download the system diagram that goes with the exercise as a .pdf file.

Click here to go back to contents.

Using the Modules

The modules are PowerPoint slides that have been converted to html and flash files to allow you to view them even if you do not have PowerPoint.  I believe they should work on PCs and iOS devices, but currently I don’t have a way to verify that, so let me know if you have problems running them on a Mac.

The various modules are saved on my Google Drive (the link takes you to the location) as zip files.   I created the files with WinZip, but they typically will open with the standard Windows file compression utility.   If you have problems getting that to work, you can download a free trial copy of WinZip for a PC, Linux, or a Mac off the internet.

Downloading the Modules

In the paragraphs that follow, I am going to show the download process on a Windows PC using Internet Explorer.  Other operating systems and web browsers should look similar to what I show.  I should mention that I have run into an occasional problem working with Google Drive using Internet Explorer, but I was always able to solve them by using Google Chrome instead.  So if you run into a problem try doing the steps in Google Chrome.

To use the modules, you should download them and then extract the files to the location of your choice, making sure to preserve the file structure.  Specifically, you should navigate to the Google Drive location where the module of interest is located by clicking on the Click here to download the … link at the end of each section describing a module, which should give you a window that looks something like this.


To download the zip file containing the files listed in the window, you click on the download icon at the upper right side of the Google Drive window.  When you select the download option, you should be asked if you want to open or save the file. To get the file onto your computer, you need to save it, and when you do, you are given the option of where to save it; at least, that is how it works on a Windows PC.   Here is what that looks like on my system.


If you select “Save” on most systems, the file will end up in your default “Download” folder.   If you pick “Save As”, then you can select any location on your system’s hard drive (or drives).  Here is what that looked like on my system when I created a folder on my Desktop named “Download Example” and then saved the Introduction and Overview module there.


The next step is to extract the files from the compressed folder that you downloaded.   To do that, you highlight the folder of interest, which in this case, since the folder is a compressed file folder, will open up a set of “Compressed File Tools”. 

I’m sure that what this looks like and the specific steps involved will vary from PC to PC and operating system to operating system, and may even vary with the file compression tool that is set as the default on your system.  But here is what that looks like on my system, which is running Windows 8.1 Pro.


To extract (unzip) the files in the folder, select the “Extract” menu and then the “Extract All” option. 


When you do that, you will be given the option of selecting a location for the extracted files.  The default is typically a normal (non-compressed) folder of the same name in the same folder (location) as the zipped file, but you can put them anyplace you want by using the “Browse” button.


When you select “Extract”, you typically will get a little window showing the progress of the operation …


… and then a window with the folder that you extracted the files to open, showing the files you extracted.


For the “Play” function to work, it is important that the files be in the structure shown above.  For most systems, that will happen automatically if you extract the files using the process I illustrated, but I mention it just in case somehow, in the process of doing this, you don’t end up with something that looks like the preceding.

Click here to go back to contents.

Playing the Modules

If you double click or otherwise open or run the file with the name that starts with Run This …, it should launch your html viewer (Internet Explorer, Firefox, Google Chrome, etc.) and allow you to play the slide show.

Note that, depending on your security settings, you may get a message asking you if you want to allow blocked content to execute.  To get the slides to run, you will need to click the “Allow Blocked Content” button.  Ultimately, you should end up with a window that looks like this.


You can display the slides “Full Screen” by clicking the little square button with the diagonal arrows as shown below.


You can advance the slides in a number of ways including pressing the “Play” button, pressing the “Previous” or “Next” buttons, pressing the “Page Up” or “Page Down” keys on your keyboard, or pressing the “Space Bar” on your keyboard.


Note that there is a subtle but significant difference between using the “Play” button or “Space Bar” to advance the slides vs. the “Previous” or “Next” buttons, or pressing the “Page Up” or “Page Down” keys.  Specifically, the “Previous” or “Next” buttons and the “Page Up” or “Page Down” keys will literally advance to the next slide even if the animation associated with the current slide has not run.

In contrast, using the “Play” button or “Space Bar” will step through each animation (if they are not automatically triggered by the previous one) and only advance to the next slide after you have seen all of the content on the current slide.   So, I would recommend using the “Play” button or “Space Bar” to move through the slide deck.

To facilitate things when I teach using these slides, I have hyperlinks set up to jump between the various slide decks and back to the table of contents.   So far, I have not been able to figure out how to make those links work in the html versions of the slides.  Meaning that to open the next module, you will need to go to the folder where the Run This to Play the …. Module.html file is located and launch the module that way.   To return to a previous module, you will need to manually navigate to it if it is still open or simply launch it again.

Click here to go back to contents.

Taking Notes on Slides

Frequently, when you are working through a presentation or listening to a webinar, you might find it handy to be able to take notes on a particular slide, or anything that you have showing on your monitor for that matter.  OneNote makes this pretty easy (the link takes you to a web page that describes some of the features), so I thought I would close this post by illustrating how you go about using OneNote to make notes on the slides if you want to do that.

What follows is specific to OneNote, which is a free Microsoft utility. I use it a lot for taking notes in meetings, keeping a phone journal, taking notes as I work through projects, taking notes when I am listening to presentations or webinars, etc.  OneNote will not run natively on a Mac, but you could run it in a Windows shell, and I believe there are Mac-specific programs that have the same basic functionality as OneNote.

Let’s say I was working through the Inputs module and got to this slide and wanted to make a note reminding myself that one of the points of the image was to illustrate that, while the two devices appear to be similar, a closer examination of the nameplates reveals that they actually have different ranges.


To do that, select the window you want to capture so that it is on top on your monitor.  Then switch to the OneNote window and select “Insert” and then “Screen Clipping”.


When you do that, OneNote is hidden and your entire screen is covered by an opaque white layer.


You can now use your mouse to drag a window on whatever portion of the screen you would like to capture …


… and when you release the left mouse button, OneNote will re-appear, but now the portion of the screen you selected will be included on the OneNote page, and you can add your notes to it by simply clicking anyplace on the page and typing.


Pretty cool and really fast once you know how to do it.  

Incidentally, when I say “drag a window with your mouse”, what I mean is that you use your mouse to put the little cross-hairs that appear with the opaque screen over one corner of the area you want to capture.   Then, while holding down the left mouse button and moving the mouse to the other corner, you will discover that you “make a hole” in the opaque screen defined by your mouse movement.

Click here to go back to contents.

In closing, I should mention that this is the first time I have tried doing this (putting slide modules up on line in a format that should allow you to view them).   Everything seems to work on the machines I have tried it on, but please let me know via a comment on the blog if you are having problems getting the slides to open, and I will try to figure it out.


David Sellers
Senior Engineer – Facility Dynamics Engineering

Posted in Controls, HVAC Fundamentals, Operations and Maintenance, Pneumatic Controls

Happy Holidays, 2017; Recollections of George Hinke

For the past couple of years, around this time of year, I have tried my hand at writing a non-technical thing that has some measure of seasonal appropriateness, like the post last year about snow crystals.  So, this post is my effort at that this year and will be totally not about engineering.  So, if you are a follower who finds my occasional forays into non-technical topics to be annoying, you probably should stop reading this right about here.  But, if you are starting to feel “the holiday spirit”, you may enjoy what follows.

My parents really enjoyed Christmas and when I was little, they made it pretty magical, especially my Mom who did a lot of decorating and baking around the holiday.   One of the traditions was that Dad would always get Mom a copy of the Christmas Ideals magazine. The magazine was more like a book than not and had color pictures, poetry, stories and a generally nostalgic air to it.  It turns out they are still being published and the link above will give you a sense of what they were like.

As near as I can tell, in 1958 (when I was about 4 years old), Ideals published a book titled Jolly Old Santa Claus.   I found this image of the original title page on an Etsy site called Tinsel and Trinkets.

The book told the story of the days leading up to and through Christmas Eve as seen by Sparkie, one of the elves at the North Pole, and had these really detailed, wonderful pictures by an artist named George Hinke.  When I looked at them, I just knew that is what it must look like up at the North Pole right about this time of year.

So, I have a lot of pleasant memories of being snuggled up with my Mom and brother reading this book (probably a lot) during the weeks and days leading up to Christmas with Christmas music playing in the background and the smell of pine in the air from the garlands my Mom decorated with and the candles she burned at that time of year. 

On the musical side of things, the one album that sticks out was called Twas the Night Before Christmas by Fred Waring and the Pennsylvanians.


It was this eclectic mix of secular and sacred holiday music sung by a choral group.   My favorites, being 4 or 5 at the time, were Twas the Night Before Christmas and Rudolph the Red Nosed Reindeer, which were the perfect songs to accompany the reading of Jolly Old Santa Claus.

I especially liked Rudolph the Red Nosed Reindeer, which, in the second chorus adds all kinds of silly sound effects to the song that still make me smile (as they did my Grandson, when I played it just now) (The links take you to sound clips from the songs to give you a sense of them if you want).

As I grew older, of course, the number of times I found myself snuggled up with Mom reading the book declined.  But frequently, around this time of year, I found myself remembering them and Jolly Old Santa Claus in particular.  So, on trips home, I would often dig around for a bit trying to find the copy of the magazine, and was always disappointed because it seemed to have disappeared.

The good news is that in 1996, someone republished the book and copies of it were all over the place.  So I happily bought one for myself and one for Mom, giving it to her as a Christmas present in honor of the happy Christmas memories she and Dad created for me as a kid.  Now, I have bought copies for all of my grandkids (and maybe in hindsight, twice for Piper and Arabella; what can I say;  I’m getting old).  Hopefully, they are creating a few happy holiday memories for them.

So, I thought for my holiday post this year, it would be fun to scan some of the images and share them here.  The first was one of my favorites because it reminded me of my Mom and all of the Christmas cookies she baked.


We had this tradition of having a “cookie tree”, which was this little artificial tree that we set up someplace and decorated with cookies that Mom had made.  When one of our friends came over to play, they got to take a cookie home with them.

These cookies were no ordinary cookies though.  Each one was a little work of art.  They were all decorated with colored sugar, but when I say that, I don’t mean that the snowman, for instance, had white sugar crystals all over it.  Rather, the snow balls on the snowman had white sugar, the scarf had red or green sugar, the hat had blue sugar, and there were little brown dots – the tip of a clove I think – that made the eyes, nose, smile, and buttons down the front of the snowman.

There were cookies shaped like snowmen, stockings, Santas (with rosy cheeks), Christmas Stars, Trumpets and Drums (after the little drummer boy), and Christmas Trees.  Each one was individually decorated and then wrapped in cellophane with ribbon tying the cellophane in place and making a loop to hang it on the tree. 

After Mom passed, when Kathy and I were helping my brother sort through the stuff in her house, we found a tray of them in the freezer, which made me happy because Kathy, who is an artist, had never seen them and really appreciated what it took to make them.  Mom would have appreciated that because she and Kathy became pretty close.  Like, I learned stuff about Mom from Kathy because Mom had told her about it (vs. Mom telling me; not a problem, just saying she came to love and trust Kathy a lot).

This next picture was another favorite because for our family, heading out to find and cut down our own Christmas tree out in the Pennsylvania woods was another tradition; sort of a sane version of one of the opening scenes in Christmas Vacation.


The tree was set up by the family on Christmas Eve right before we went to church.  Magically, the next morning, we rushed downstairs to find it beautifully decorated by Santa and his elves while we slept.   At the time, I was sure most of our ornaments had originated in a scene like this (and still think that may be the case).


When I was a bit older and snuck down the stairs one Christmas Eve, I even saw “Mommy kissing Santa Claus”, which is one of my more romantic memories of Mom and Dad.

The tree decorations were another one of my Mom’s works of art.   Each year, we had this tradition of buying a special ornament for my brother and me.   That was started by my Grandparents, who typically sent us a special ornament each year.   I still have this little glass clown (I’m not afraid of clowns) that is as old as I am and plastic stars (back when plastic was a scientific breakthrough) that had a little spinning propeller in them that spun in the thermal created by a Christmas Tree light if you hung them over the lights.

Spinny Star Ornament   Spinny Star Ornament 02

So part of the tradition when I was younger was that we would hang our special ornaments after we set up the tree;  the rest of the decorating happened over night, as I mentioned above.

When I was older and saw my Mom decorate the tree, I discovered that each individual icicle was placed by hand.  Back then, icicles were strips of shiny stuff (originally lead, which could explain a few things about me, and later on acetate) that were about 1/16” wide and maybe 18 to 24 inches long.   So hanging each one individually (and retrieving them for use the following year) was a pretty painstaking process.  But the result was a really spectacular looking tree, at least to my childhood memory’s eye.

This next scene was one that tended to be emphasized by Mom when she was reading the story to us in the days leading up to Christmas.


In the original painting, if you look closely at the book where Santa is making his notes, you discover that it is a list of nice girls and boys and naughty girls and boys.   My Mom’s point at the time was that we should err on the side of “Nice” given that my brother and I were probably running around like “wild men” in our excitement (and sugar high).

The picture that follows is an example of one of the things that, to this day, I love about George Hinke’s pictures;  there is an incredible amount of detail to them, as can be seen in this one.

George Hinke 01

Clearly, it’s the day after Christmas, and the reindeer are being groomed and bedded down after their long night’s work, not the day before, as can be seen from the date on the calendar.

And then there is (at least for me) the primary thing you focus on when you look at the picture, which in this one is the sleigh and reindeer.  But if you look closely, you see all sorts of other things going on, like the elves polishing bells and lanterns and feeding the reindeer who are already in their stalls (with their names over them, of course).   I think you will notice the same sort of thing in the other pictures I have shared.

This final picture is the closing picture in the book, and, to my mind, when Mom was finishing the book, was exactly what it would look like on the roof of our house on Christmas Eve.


So, with that, I will leave you to the holiday and wish everyone a happy holiday season.   And thanks, as always, for visiting the blog.


David Sellers
Senior Engineer – Facility Dynamics Engineering

Posted in Uncategorized | Leave a comment

Build Your Own Psych Chart – Creating the Data for the Saturation Curve and a Bit About VLOOKUP

So here I am, over a year after starting this string, getting back to it.  All I can say is life happens and sometimes, because of that, you find that what you set out to do takes a bit longer than you thought it would when you set out to do it.  But thankfully, several folks reminded me about this and I mean to complete it, just like Bill (Coad), the inspiration for this, would have.

Having discussed some of the basic principles behind the psychrometric chart in the previous post, in this post, we will get started on the process of actually drawing the chart.  By the time we are finished (which I am guessing will take a couple of posts to accomplish), you will hopefully have everything you need to know to develop your own version of the graph below.


And you may even have an idea about how to go the next step, which involves adding the relative humidity lines.



This post will focus on doing the calculations you need to do in order to have a data table to plot as your chart and configuring the chart axes.  The links below will take you to the indicated topics.  The (Return to Contents) link at the end of each section will bring you back here.

Setting Up the Axes

At its most fundamental level, a psych chart is a graph.  And in general, a graph is a picture you create by plotting points as a function of the parameters associated with at least two of the axes.  Typically, the axes are set at a 90° angle to each other and frequently, but not always, the points are connected with a line.

So, the first question you will likely ask yourself as you start down the path to making your own psych chart is what two parameters should I use for the axes?  There are a couple of ways to approach answering that question.  One would be to simply look at someone else’s chart and use what they used.

And while there is nothing particularly wrong with that (unless you are taking a test), since one of our goals in doing this is to understand the fundamentals behind the chart, it might be desirable to consider the question from a more fundamental perspective.  So, you might ask yourself, why am I interested in making this chart in the first place (self-education goals aside)?

(Return to Contents)

Charts and Their Axes

In the engineering and scientific world, most charts are built as tools to facilitate some sort of analysis or development process.  For example, when sizing a pipe, rather than having to calculate or otherwise assess a number of parameters and then use that information in a subsequent calculation for each flow rate, piping material and line size encountered, a designer may simply reference a pipe friction chart which presents this information graphically based on some fixed assumptions like the type of pipe.


Since flow and pressure drop due to flow are the key items of interest, those parameters are used for the axes.  Having established that, the relationship between the two can be plotted for different line sizes.  In addition, velocity, another parameter of interest, can also be added by plotting constant velocity lines; i.e., lines that are based on the relationship between pressure and flow for a fixed velocity.

Or, a refrigeration mechanic or engineer may plot out a refrigeration cycle on a p-h (pressure-enthalpy) diagram and gain insight into how well the system they are working on is performing, how much power it will use, etc. even though they may not be familiar with the equations of state behind the chart.  Pressure and enthalpy are the axes of choice because they allow the cycle to be visualized easily.  For instance, in the ideal cycle below, you can quickly see the superheat caused by the compression process.


Another example of a reason for choosing axis parameters is in the context of a diagnostic plot like this one comparing energy consumption before and after a repair to an economizer control system.


By comparing preheat energy consumption before (red) and after (green) repairs were made, you can clearly see that there was an improvement.  Meaning that for any given outdoor air temperature, the system used less heat once the control system had been fixed.  The trend line feature of Excel even allows you to develop the mathematical relationship between consumption and outdoor temperature and use it to extrapolate the results of the improvement beyond the data set.

Folks other than engineers select their axes based on what they are trying to visualize or understand too.  The Skew T/Log P chart meteorologists use to understand the atmosphere is an example I am learning about these days, given my interest in meteorology.  On this chart, the primary axes are temperature and atmospheric pressure/altitude which allow the changes in temperature and humidity with altitude measured by a radiosonde to be compared to the saturation conditions at altitude.

RAOB sounding plot: 72694 – KSLE (Salem McNary, OR) at 2014-10-01 12Z

The chart in the illustration is from a software tool I have called RAOB that allows field data (the red and purple lines) to be overlaid on the basic Skew T/Log P plot (the tan and green lines).  It’s a tool very similar to the electronic psych chart I described in the previous post.  But back in the olden days, the field data was plotted manually on a paper version of the chart, just like we did with paper psych charts.

The bottom line, at least in my perspective, is that the decision regarding what to use for an axis on a chart is driven by what you can readily measure, what you want to know, and the ability to visualize data in a meaningful manner.

(Return to Contents)

The Psych Chart Axes

Two of the key things we try to understand in HVAC processes and systems are often related to the temperature and moisture content of the air they are handling.  Since a psych chart is a tool to facilitate that understanding, we might conclude that using temperature and some parameter that is representative of moisture for our axes would be desirable.

Temperature is something we can measure fairly easily in the field.  And, in the previous post in this string, we touched on the fact that there is a psychrometric parameter termed specific humidity that is the ratio of pounds of water per pound of air in a given sample.

You will also recall that if you knew the temperature of a sample of air and had a copy of Keenan and Keyes or some other version of a steam table, then you could calculate the specific humidity associated with a saturated condition via the following relationship.


Given that our current goal is to plot the saturation curve on a psych chart, it would seem that if we selected temperature and specific humidity as the axes, we might have a way to do that.   Specifically, we could:

  1. Select a number of arbitrary temperatures, maybe 40 – 100°F in 10°F increments since a lot of our HVAC systems operate in that range.
  2. Assume a fixed atmospheric pressure.
  3. Use Keenan and Keyes to determine the saturation pressure for water vapor at those temperatures.
  4. Calculate the partial pressure of the air at saturation by subtracting the vapor pressure at saturation that you looked up in your steam tables from the total pressure you have assumed (atmospheric pressure).
  5. Calculate the specific humidity at saturation for each of the temperatures.
  6. Plot the points and connect them with a line.
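If you would like to check your spreadsheet math, the six steps above can be sketched in a few lines of Python.  This is my own illustration, not part of the spreadsheet: the saturation pressures below are approximate steam table values, and the relationship used is the standard one referenced above, W = 0.622 × pw ÷ (patm − pw).

```python
P_ATM = 14.696  # step 2: assumed standard atmospheric pressure, psia

# Steps 1 and 3: arbitrary temperatures (°F) and the saturation pressure
# (psia) looked up for each one -- approximate steam table values.
SAT_PRESSURE = {
    40: 0.1217, 50: 0.1781, 60: 0.2563, 70: 0.3632,
    80: 0.5073, 90: 0.6988, 100: 0.9503,
}

def specific_humidity_at_saturation(t_db):
    """Steps 4 and 5: partial pressure of the dry air, then specific humidity."""
    p_w = SAT_PRESSURE[t_db]           # vapor pressure at saturation
    p_air = P_ATM - p_w                # step 4: total pressure minus vapor pressure
    return 0.622 * p_w / p_air         # step 5: lb water per lb dry air

# Step 6 is the plot itself; here we just build the (temperature, W) pairs
# that would go into the data table.
points = [(t, specific_humidity_at_saturation(t)) for t in sorted(SAT_PRESSURE)]
for t, w in points:
    print(f"{t}°F  W = {w:.5f} lb/lb")
```

Running this gives a saturation specific humidity of roughly 0.016 lb/lb at 70°F, which is in line with what the psych chart shows at that point on the saturation curve.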

(Return to Contents)

Doing the Math to Create the Data Table

The Data Table

Excel charts are typically plots of data from a table.  So to make our chart, we will need a table that has a pair of values in it for each point we want to plot on the chart.  Usually, at least for me, it is helpful to have the table include any intermediate data used to develop the data points of interest.  This is the table I developed to make the saturation curve on my chart.  (Remember, if you click on an image in the blog, it will open up in a separate window at a larger scale.)


The columns follow the steps I outlined in the previous section.  If you don’t have a copy of Keenan and Keyes or some other version of the steam tables, fear not.  I used my copy of REFPROP to create one that you can download from the public space on my Google Drive.

(Return to Contents)

VLOOKUP;  A Handy Excel Function

Personally, I think it’s worth having a copy of Keenan and Keyes in your engineering library, and I take a certain nostalgic comfort in using mine.  But the truth is that if you are working in a spreadsheet, having the data electronically is handy.  Here’s why.

It turns out that there are a number of functions in Excel that let you look up information in a different table and insert it into your table.  If you only have to look up a couple of things, it’s not much of a time saver.

But, when the number of things you need to look up gets larger, for example:

  • The 10-15 times you have to consult a steam table to get data to draw your saturation line, or
  • The hundreds or thousands of times you might need to look something up based on parameters in a set of logger data,

the Excel lookup functions start to save you time and/or make something that is impractical on a manual basis totally possible.

The function I used to help build the table in the spreadsheet above is VLOOKUP.  Specifically, I use VLOOKUP to go to a different tab in the spreadsheet where there is a steam table that I created with REFPROP and look up a value from the steam table and insert it into my table.  (The steam table is the one I put on my Google Drive so you could download it.)

This slide shows a zoomed-in view of my table so you can see things a little more clearly.  The formula that is enlarged is the formula in cell D243 (the one that has the black outline around it).


Basically, what the formula is doing is looking for the number that is in cell B243 (highlighted in blue; a different cell in the same row as the cell with the formula).  It looks for that number in a table that is on the workbook tab ‘Water at Saturation 35-220F .01’ (highlighted in green).

Once it gets to that tab and table, it looks at the range of cells starting at K8 and going all the way to Q18806 (I’ll explain the dollar signs in a minute).  The first 25 or so rows of the range are highlighted in orange below; I won’t try to illustrate all 18,000-plus rows and will figure that you will get the idea from the illustration.


Note that the first column of the range you specify needs to have the parameter you are using as a reference in it.  Meaning that since I am going to look up a pressure that I don’t know based on a temperature I do know (the temperature is the value in cell B243/the values in column B of the table I am making), the temperature column needs to be the first column in the range I select for the VLOOKUP to use.

Also note that the data in your reference column in the range you specify needs to be sorted in ascending order (smallest at the top to largest at the bottom of the range).

The 2 in the formula (highlighted in purple) tells Excel that the data you are looking for is in the second column of the range, counting the column that has your reference parameter in it as the first.  In other words, in the range I selected, temperature (what I know) is in the first column (I have highlighted the column header in blue below).  I am trying to find out the saturation pressure associated with a given temperature, and that information is in the second column (highlighted in purple below).


If I wanted to know the latent heat associated with the phase change from liquid to vapor (the number in column Q, which is highlighted in yellow), then I would have put a “7” in the formula instead of a “2”, since column Q is the seventh column in the range, counting the temperature column, which is my reference, as the first.

In human terms, what the formula says is to go to the steam table that is contained on the Water at Saturation 35-220F .01 tab in the spreadsheet and go down column K until you run into 40°F (the value in cell B243).  Once you find that, look in column L (the 2nd column in the range of K8 through Q18806 that you defined in the formula) and put the value you find there in the cell with the formula.

One more thing: when you enter a formula in Excel, it is often convenient to use the Function Arguments dialog box, which you can open by clicking on the little fx symbol in the formula bar.


If you do that for VLOOKUP, you will discover there is one other optional parameter that you can enter after telling Excel which column the data you want is located in (the 2 in our formula).  If you omit this parameter or put the word “True” in as the last parameter in the formula, then Excel will start at the top of your reference column and search down it until it comes to the closest match to the number you are looking for.  If you want Excel to only pick up an exact match, then you need to put the word “False” into the formula as the last parameter.

As far as I can tell, the way Excel determines it has the closest match is that it goes down the column with your reference parameter in it.  If it finds an exact match, it looks in the column where you have told it the data you are interested in is and returns your data.  If it doesn’t find an exact match, Excel keeps going until it finds the first instance of something larger, then goes back one row and uses the data associated with that row.
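To make that approximate-match behavior concrete, here is a little Python sketch (my own illustration, not anything Excel actually runs) that mimics it: in a reference column sorted in ascending order, find the last row whose value is less than or equal to the lookup value, and return the entry from the requested column of that row.

```python
from bisect import bisect_right

def vlookup(value, table, col_index, exact=False):
    """Rough Python analogue of Excel's VLOOKUP against a list of rows.

    table     -- rows sorted ascending by their first element
    col_index -- 1-based column number, counting the reference column as 1
    exact     -- False mimics VLOOKUP's default approximate match
    """
    keys = [row[0] for row in table]
    if exact:
        i = keys.index(value)              # raises ValueError, like #N/A
        return table[i][col_index - 1]
    # Approximate match: last row whose key is <= the lookup value.
    i = bisect_right(keys, value) - 1
    if i < 0:
        raise ValueError("value below first key (#N/A)")
    return table[i][col_index - 1]

# Tiny stand-in for the steam table: (temperature °F, saturation pressure psia)
steam = [(40, 0.1217), (50, 0.1781), (60, 0.2563), (70, 0.3632)]

print(vlookup(40, steam, 2))   # exact key present  -> 0.1217
print(vlookup(55, steam, 2))   # no exact match; backs up to the 50°F row -> 0.1781
```

Note that, just like Excel, this only works correctly when the reference column is sorted smallest to largest, which is why the sort-order requirement above matters.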

(Return to Contents)

Absolute vs. Relative References

That all sounds more complicated than it actually is once you try it.  But one thing that can become a “gotcha” is related to the dollar signs ($) in the formula.  They are important and not the default that you will get if you select the range using your mouse.

They make the reference to the array an absolute reference vs. a relative reference.  In other words, if you copy and paste the formula into a bunch of rows below the one where it is entered, the $ signs let Excel know that you always (and absolutely) want it to look at the range K8 through Q18806 in the array on the Water at Saturation 35-220F .01 tab.

If you did not use the $ sign, Excel would consider the reference to be relative to the row and column that the formula was in. Meaning that if you pasted the formula from cell D243 to cell D244, Excel would shift the range that it looked things up in from K8 through Q18806 to K9 through Q18807 (i.e. it would shift it one row down).

That means if you used a relative reference in the formula (no $ signs), if you pasted the formula into 1,000 rows below the one where you entered it, then in the last row, Excel would think the array you wanted it to look in was K1008 through Q19806, not K8 through Q18806.  That could be a problem.
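If it helps to see the shifting behavior spelled out, here is a small Python sketch.  It is my own illustration of what Excel does when you paste a formula into lower rows, not Excel's actual implementation, and it only models row anchors (the $ in front of a row number), since the paste in our example is purely vertical.

```python
import re

def shift_reference(ref, rows_down):
    """Shift the row numbers in an A1-style range the way Excel does when a
    formula is pasted rows_down rows lower.  A $ in front of a row number
    makes that row absolute, so it is left alone."""
    def shift_cell(match):
        col, anchor, row = match.group(1), match.group(2), int(match.group(3))
        if anchor == "$":
            return f"{col}${row}"          # absolute row: unshifted
        return f"{col}{row + rows_down}"   # relative row: shifted
    return re.sub(r"([A-Z]+)(\$?)(\d+)", shift_cell, ref)

# Relative range: pasted 1,000 rows down, it drifts -- the "gotcha" above.
print(shift_reference("K8:Q18806", 1000))     # -> K1008:Q19806
# Absolute rows: the range stays put no matter how far down you paste.
print(shift_reference("K$8:Q$18806", 1000))   # -> K$8:Q$18806
```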

Related to all of this, I have found the books by Mark Moore on Mastering Excel to be pretty helpful.  I have a number of them on my Kindle including the one on VLOOKUP.  For the price, to me, they are really worth it and well written.

(Return to Contents)

The Bottom Line on Doing the Math

I diverged a bit to discuss VLOOKUP, and you certainly don’t need to use it to build your table and draw your chart.  But I tend to learn by doing things.  So my thought was that if the same were true for you, then trying your hand at using VLOOKUP to bring the 10 or so values you need into your table automatically may be a good way to learn the function, and a lot less intimidating than trying it on the hundreds or thousands of rows of data associated with a logger file where you might want to use the function.

But bottom line, you will be fine for making your chart whether you manually enter the data for saturation pressure into your table or use VLOOKUP; the choice is yours.

Going through the steps to create a table similar to the one I show earlier in the post should set you up to actually start the process of drawing your chart, a topic I will take up in the next post.

(Excitement, Excitement, Excitement)

(Return to Contents)


David Sellers
Senior Engineer – Facility Dynamics Engineering

Posted in Excel Techniques, HVAC Calculations, HVAC Fundamentals, Psychrometrics | Leave a comment

A Free Electronic Psych Chart and How to Use It to Plot Basic HVAC Processes

Ryan Stroupe of the Pacific Energy Center recently worked out a deal that allows him to provide a free, limited-capability version of Hands Down Software’s electronic psych chart.  You can get a copy at this link and an overview document that also includes the download link here.

Once it’s up and running, you should have a window that looks like this.


Even though it is a limited version, it is still a very useful tool since it allows you to plot points, read the parameters associated with the location of your cursor, create a .pdf of your chart, and copy the image to your clipboard for use in a report or presentation.

And, if you find yourself using it a lot and wishing you had all of the features associated with the full-blown version of the Hands Down Software chart, you can upgrade it for a one-time licensing fee of $160.  That’s about a 30% discount relative to what the Hands Down chart would cost if you just went and bought it on your own.  So a big thank you to Ryan for making this available to the industry and to Hands Down Software for offering the discount.

I also should mention that while I am writing this and using the PG&E electronic chart to illustrate things, the basic psychrometric concepts and techniques I discuss will work for any psych chart, including a paper one.  (You remember paper, right?)


For this post, I thought I would focus on what the basic version of the chart can do by using an example.  The links below will jump you to topics of interest, since this is another long post.  The Return to Contents link at the end of each section will bring you back here.

Plotting a Point

As a starting point, let’s plot the “generic” ASHRAE standard space condition of 75°F and 50% relative humidity and then read all of the other psychrometric parameters off of the chart.  To start, you need to open the Psychrometric Process window, which is accessible from the Analysis drop down menu.


Once you have the window open, you can adjust the size and column widths if you want to by clicking and dragging the borders of the window or the column dividers.


Even in the basic version of the chart, there are a number of items you can manage in the Psychrometric Process window, including naming a point, controlling where the label sits relative to the point, the air flow you are dealing with for your process and its units of measure, and the type of process.

To place a point on the chart, you click on the Add a Point button in the Psychrometric Process window, which will add a row to the table and open a sub-window in the lower portion of the main window, which is where you will eventually enter your data.


To enter a point, you need to know the dry bulb temperature and some other metric that is an indication of moisture, like dew point temperature, specific humidity, wet bulb temperature, or relative humidity.  We are using 75°F as our dry bulb temperature and 50% RH as our indication of moisture.  So at a minimum, to get a point on the chart, you will need to type in:

  1. The name of the point in the Point column, and
  2. The dry bulb temperature next to DB in the sub-window under Current Point, and
  3. The relative humidity (or other indication of moisture content) in the sub-window below the dry bulb temperature.  You will also need to pick the units of measure for this point (RH, Wet Bulb, etc.) using the drop down menu to the left of the box where you enter the moisture content number.

Then, click on the Apply button at the top of the Psychrometric Process window.  When you are done, things should look something like this.


Notice how there now is a red dot labeled ASHRAE Space on the chart and that as soon as you entered the moisture data, all of the other psychrometric properties associated with the point appear in the sub-window below the boxes where you entered your data.

You can move the location of the label relative to the point by changing the settings in the Label column using the drop down menu that is provided there.  For instance, here is what things look like if I select B for Below instead of AR for Above, Right in the Label column.

Adding first point 03

You could also change the airflow and its units of measure in the appropriate columns, but I generally accept the defaults.

(Return to Contents)

Adding a Second Point

Now, suppose we took the air from our ASHRAE Space and passed it over a cooling coil so that it left the cooling process at 55°F dry bulb (often abbreviated as 55°Ftdb) and 54.6°F wet bulb (often abbreviated as 54.6°Ftwb), and we wanted to show that process on our chart.  Our system would look like this.


We would start by clicking the Add A Point button on the Psychrometric Process window and typing in our data, just like we did to put the ASHRAE Space into the table.   This will add another row to the table and, when we click Apply, will plot the point on the chart.


Here is a screen-shot of just the table so you can see it better.

Point Table 02

And here is a screen shot of just the chart instead of the entire working window so you can see it better.


To get the image above into the blog, I used another handy feature of the electronic psych chart.  By selecting Copy Chart Image to Clipboard from the Edit dropdown menu, I captured the image to my clipboard.  I can then paste it into other applications like Word or PowerPoint or Excel for making a report or for illustrative purposes in this blog post.

If you are wondering how a designer arrives at the coil leaving conditions, that is a topic for a future post.  But we discuss it in the VAV Systems, Design, Performance and Commissioning Issues class at the Energy Center, and this link will take you to those slides, which should give you a sense of how it is done and why different load conditions require different coil leaving conditions.

Now let’s do something different.  You may have noticed that there is a column labeled Process in the Psychrometric Process window.  If you click into that cell, you will discover that there are a number of options aside from Add State Point.  Specifically, you can also pick Connect State Point and Cooling Coil.

Since the point we just entered was the cooling coil discharge temperature for a system handling return air from our ASHRAE Space, let’s select the Cooling Coil option.  When we do that, the Psychrometric Process window changes a bit, as you can see below.

Cooling Coil 01

Notice how you can now pick a start point for the process.   If we had more than one point on our chart, we could pick any of them using the drop down menu.  For our cooling process, the start point was the ASHRAE Space, so we will simply accept that and click Apply.   When we do that, the psych chart adds a line from the ASHRAE Space to the Cooling Coil Discharge that approximates the path the air would take as it passes through the cooling coil.


The reason that the line is straight and then curves downward as it approaches the saturation line is that for most coils, most of the heat transfer is initially sensible heat transfer (heat transfer you and I detect as a change in temperature).  But as the air cools towards saturation, some of the heat transfer is in the form of latent energy (the energy that is keeping the water in the air in a vapor state; you and I detect it as a change in humidity).

(Return to Contents)

Plotting a Process

To plot the entire process for our little system, we need to connect the Cooling Coil Discharge point with the ASHRAE Space point, since the air in our simple little system flows from the space to the cooling coil and back to the space again.  If we change the process for the ASHRAE Space point from Add State Point to Connect State Point, we get this.


Notice how there is now a straight line between the Cooling Coil Discharge point and the ASHRAE Space point.  This line is a bit more theoretical than the cooling coil line and is a Sensible Heat Ratio (SHR) line.  Sensible Heat Ratio is pretty much what the name implies; it’s the ratio of the sensible load in the space to the total load.

Designers come up with it by doing a load calculation that considers how much latent energy, how much sensible energy, and how much total energy will be added to a space by the loads it contains.  The bottom line is that to maintain steady state conditions in the space, the air leaving the cooling coil has to be cool enough so that when the sensible energy in the space is added to it, you end up at the space temperature.  And it has to be dry enough that when the latent energy is added to it, you end up at the humidity level you have targeted.

Generally speaking, that means that with all other things being equal, loads that have a big latent component will require colder air than loads with a small latent component.  That is because we typically dry the air out by cooling it below its dew point.  All of that is fodder for a future blog post and is explored a bit in the slide deck I referenced previously.
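To put some hypothetical numbers on the sensible side of that, here is a quick Python sketch.  The loads and airflow in it are made up for illustration, and the Q = 1.08 × cfm × ΔT relationship is the common rule-of-thumb air-side sensible heat equation (for standard air), not something that comes out of the chart tool.

```python
def sensible_heat_ratio(q_sensible, q_latent):
    """SHR = sensible load / total load."""
    return q_sensible / (q_sensible + q_latent)

def required_supply_temp(q_sensible, cfm, t_space):
    """Supply temperature needed to absorb the sensible load at a given
    airflow, from the rule-of-thumb relationship Q = 1.08 * cfm * dT."""
    return t_space - q_sensible / (1.08 * cfm)

# Hypothetical space: 95,000 Btu/h sensible and 5,000 Btu/h latent gives the
# SHR of 0.95 used in the example; the 4,400 cfm airflow is also made up.
shr = sensible_heat_ratio(95_000, 5_000)
t_supply = required_supply_temp(95_000, cfm=4_400, t_space=75.0)
print(f"SHR = {shr:.2f}, supply air ≈ {t_supply:.1f}°F")
```

With those made-up numbers, the sensible side alone calls for roughly 55°F supply air to hold a 75°F space; the latent side of the story (how dry that air has to be) is what the SHR line on the chart captures.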

What is cool about the SHR line (I say “cool” in a nerdy sort of way) is that when you plot it on the chart, it represents how much the process has to cool the air and dry the air so that when the air is delivered to the space, it will meet the load requirements in terms of sensible and latent energy.

(Return to Contents)

The Impact of Sensible Heat Ratio on Cooling Coil Leaving Conditions

I set up the chart for our little exercise using a SHR of 0.95, which is a bit high for commercial buildings.  I did it so the coil discharge temperature was 55°F, which seems to be the temperature everyone thinks their systems should run at.   If you are following the discussion, I suspect you can see that is not necessarily true and that the actual temperature is very dependent on the amount of humidity you have to handle in the space and the dry bulb and dew point/specific humidity that you need to end up at.

If you consult Arthur Bell’s book HVAC Equations, Data, and Rules of Thumb, you will discover that for commercial buildings, SHRs can run from 0.75 – 0.93.  In contrast, places like theaters can have SHRs in the range of 0.65 – 0.75 because the number of people in them adds considerably to the latent load and thus lowers the SHR.  But places like the clean rooms I was involved with during my tenure at Komatsu Silicon America can have SHRs that approach 1.0.

If we were to plot our little process out for a space with a SHR of 0.85 instead of 0.95, we would end up needing a Cooling Coil Discharge Temperature in the range of 53.0°Ftdb/52.8°Ftwb.  If you needed to meet the 68°Ftdb/45% RH conditions we needed in our Epitaxial cleanroom at KSA, you had to come off the coil at 46°Ftdb/45.8°Ftwb.


So, you probably are thinking that the SHR line is a pretty important and useful line to be able to plot on your chart, and you may be wondering how to do that.  You do it by using the SHR scale, which is the little protractor-shaped thing in the upper left corner of the chart.

(Return to Contents)

Plotting the SHR Line

Plotting the SHR Line in the Pro Version

If you upgrade to the full-featured version of the chart, you can do this very quickly as a part of the tool by simply drawing a SHR line through the space condition using the Constant Line Control tool, which is one of the many tools you get if you upgrade the basic chart.  If we did that for an ASHRAE space with a SHR of 0.8, it looks like this.


Notice that there is now a line through 0.8 on the SHR protractor and a parallel line through the ASHRAE space point.  Note that the line also goes through 0.80 on the SHR scale that is on the right axis of the chart.

Plotting the SHR Line on a Paper Chart and in the Basic Version

At this point, you may be thinking bummer, I sure wish I could plot the SHR line on my basic chart without having to upgrade since my budget is tight right now.  The good news is that you can if you put a copy of the chart into PowerPoint or Word.  Here is how you go about doing that.

Let’s start with a copy of the basic chart and point out a few things.


There are actually two ways to plot the SHR line: the protractor, or the vertical scale and the dot.  Notice that the dot just happens to be at 75°Ftdb/50% RH.  So, if that happens to be the condition of interest, then the vertical axis and dot approach is very convenient.

But if you are looking at other conditions, then you will have to use the protractor or the vertical scale and dot to set the slope of the SHR line and then transfer it to the point of interest.  To illustrate this, let’s use a space condition of 72°Ftdb/40% RH and plot the line for a SHR of 0.8 through that point.  To get started, you would first plot the point of interest in the electronic chart, just like we did in the example above.  Here is what that should look like when you get finished.


Next, copy the image to the clipboard and then paste it into PowerPoint or Word or Excel.  I tend to use PowerPoint because, for me, it is the easiest to draw in.  Here is what that should look like when you are done.


Next, you plot the SHR line using either the protractor or the vertical axis and dot.  If you use the protractor, one end of the line goes through the little vertical hash mark on the top horizontal scale and the other end goes through the SHR of interest; in our case, 0.80.  Here is what that would look like if you did it that way (note that I have zoomed in on the PowerPoint slide to let you see it better).


Alternatively, you could plot the SHR line by placing one end of it at the dot at 75°Ftdb/50% RH and the other end through the SHR of interest on the SHR scale on the right side of the chart.  If you did it that way, it would look like this.


Incidentally, if you are wondering how you draw the actual line, you use the Shapes tool, which is on the Insert menu.


The trick now is to somehow move the line so it goes through the point of interest and crosses the saturation curve.  In the olden days, when we were using paper charts, we would use a drafting trick that used two triangles to shift the line.  We would start by laying one triangle so that it matches the slope of the line we wanted to transfer, like this.


Next, we would put a second triangle up against the first triangle like this.


Then, we would slide the first triangle down the second triangle until the side that matched the SHR line on the SHR scale passed through the point we were interested in, like this.


Finally, we would draw the line, using the triangle as a guide, which would leave us with this.


Like I said, that was how we did it in the olden days, and when I retire, I will no doubt end up in the local HVAC museum showing curious children how to do this.  I can just see the excitement in their little eyes as I explain what paper and pencils were and lay actual triangles on a paper psychrometric chart to transfer the line.  In fact, I was about to show my granddaughter, but she suddenly realized she was coming down with a cold and said she had no choice but to leave the room to minimize my exposure to it.

The reality is that we live in an electronic age, and there are a lot of good things about that, including how easy it is to transfer the SHR line.  Let’s go back to the copy of the chart that we put into PowerPoint and see how you would do that.  We will start with a chart that has the SHR line plotted on the little protractor thingy.


You can very easily shift this line to the space condition point with the arrow keys.  Just click on the line and then use the down arrow and right arrow key or your mouse to move it until it intersects the middle of the space condition.  When you finish, you should end up with something like this.


The problem now is that the line does not extend through the saturation curve because it is not long enough.  You can also solve that problem electronically.  Specifically, if you click on the line and then the format tab that shows up when you do that, you will notice that there is a Size item on the menu.


If you click on the little arrow in the box at the lower right corner of the Size menu item, a dialog box opens up that will allow you to format the shape you have selected, in this case, the SHR line.   It should automatically open up the tab that is associated with size.


Even though we are thinking of our SHR line as a red angled line, the drawing tools in PowerPoint, Word, etc. actually think of it as the diagonal joining the opposite corners of an invisible box.  The length and angle of the line are actually controlled by changing the height and width of this invisible box.

So, if you first lock the aspect ratio of the image (green arrow below), you can then change either the height or width of the object and the dimension you didn’t directly change will be changed in proportion to the one you did change.  That means that by virtue of the locked aspect ratio, if we make our line longer, the slope of the line, which is set by the SHR, will not change.

For example, if I select my SHR line, lock the aspect ratio (the green arrow below) and then make it 3 inches wide (the red arrow below), I get a longer line that has the same slope as the line originally had; in other words, the slope matches the slope of a line through 0.8 on the SHR index (the blue  arrow below).


Now, all you have to do is shift the line back to where it runs through the space condition point and also crosses the saturation curve.  Doing that tells you that if you really wanted a 72°Ftdb/40% RH space and the loads in the space had a sensible heat ratio of 0.8, then you would need an Apparatus Dew Point (ADP) of about 37.5°F, which is pretty cold.


(Return to Contents)

Apparatus Dew Point

The ADP is the point where the SHR line crosses the saturation curve, read from the dry bulb axis at the bottom of the chart.

ADP is another theoretical number that I will go into in a future post, but for now, suffice it to say that if you know the ADP required by a load, then you can make a coil selection using any number of different approaches.  The Bypass Factor method developed by Willis Carrier is one example of a coil selection technique that uses ADP, and if you want to know more, you will find it discussed in the slides I mentioned previously.  This image zooms in on the chart above so you can read the ADP better.


Incidentally, note how the dry bulb temperature lines on the chart are not perfectly vertical, so you need to take that into account when you are plotting points and projecting dry bulb temperatures on the chart.

(Return to Contents)

Plotting a Mixed Air Condition

To finish this post, I am going to make an addition to our little HVAC system that will make it more realistic.  Specifically, I am going to add ventilation air from outside and then use that to demonstrate how you plot a mixed air condition on a psych chart.  I will use the Pro version of the chart to make it a bit faster for me and to illustrate a few other features of that chart.

But by using techniques like I just discussed, you can do the mixed air analysis with the basic version of the chart by plotting your points and then working with a copy of the chart in PowerPoint.

Let’s start by modifying our system diagram to include a number of things that you would likely encounter in a real world application.

For starters, we will add a duct that introduces 10% ventilation air, which then leaves the system via a small exhaust fan.  To make things interesting (in a nerdy sort of way), we will put our system in St. Louis, which has both extremely hot and humid days as well as extremely cold and dry days.  Thus, the outdoor air we will be introducing can have a range of impacts on our system, depending on what is going on outside.

With the basic version of the chart, you would need to look up the outdoor design condition in some other resource, like the ASHRAE Handbook of Fundamentals or an internet search.  But if you have upgraded to the professional version of the chart, that information is built in as a tool you can open up, specifically, the HDClimatic Tool, which shows up in the drop-down menu under Tools.


The data behind the tool is the ASHRAE data, so it covers many, many locations throughout the world.  Here is what that looks like for St. Louis, MO from my professional version of the PG&E psych chart.


If you look at my system diagram, you will discover I have also made the return air warmer than the space is, which is what happens in a real system if the return duct runs through a hot ceiling plenum or across the roof on a hot sunny day.  Since all of the energy gain was in the form of sensible energy, the specific humidity (and dew point temperature) of the return air did not change from the space condition.  The dry bulb temperature changes along with the relative humidity since, unlike specific humidity, relative humidity is a measure of the moisture content of the air relative to what it could hold at the current temperature.

Specific humidity is a measure of the absolute amount of moisture in the air.  Common units include pounds of moisture per pound of air and grains per pound, which is what the professional version of the PG&E chart defaults to.  You can change the units along with many other settings like colors, line weights, etc. using the Chart Profile Control  tool on the Settings drop down menu in the professional version of the chart.


Some of you may be wondering why we would use some weird unit like grains per pound to measure specific humidity.  The reason is that back in the olden days, we did all of this stuff with paper, pencils, and slide rules.


When you were using a slide rule, you were accurate to 1 or 2 decimal places, and numbers with a lot of leading zeros were trickier to deal with than numbers without them.  If you use units of pounds of moisture per pound of dry air for specific humidity, you end up with some very small numbers; for instance, 64.9 grains per pound becomes 0.0093 pounds of moisture per pound of air since there are 7,000 grains in a pound.  So, by using grains of moisture per pound of dry air, it was easier to work with the numbers since they had fewer leading zeros.  Now, with computers and calculators, it's not so much of an issue.
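The conversion itself is simple enough to sketch in a couple of lines of Python; a minimal example using the 7,000 grains per pound factor just mentioned:

```python
# Specific humidity unit conversions based on 7,000 grains per pound.
GRAINS_PER_POUND = 7000.0

def grains_to_lb(w_grains):
    """Grains of moisture per lb of dry air -> lb of moisture per lb of dry air."""
    return w_grains / GRAINS_PER_POUND

def lb_to_grains(w_lb):
    """Lb of moisture per lb of dry air -> grains of moisture per lb of dry air."""
    return w_lb * GRAINS_PER_POUND

# e.g. grains_to_lb(64.9) → 0.00927..., the "lots of leading zeros" form
```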

The bottom line on this is that we can figure out the return condition by assuming a constant specific humidity or a constant dew point and using that as our indication of moisture content along with the dry bulb temperature we anticipate for the return air.

You could come up with the dry bulb temperature in a number of ways, including a somewhat complex heat transfer calculation for the duct, an estimate based on past experience, or, for an existing building, a field measurement.  Here is what our system looks like with the changes we have been considering.


And here are the state points plotted on the chart, along with the process for the air as it moves from the cooling coil to the space (the green line) and the space to the return connection at the AHU (the red line).


To select the cooling coil, we need to know the entering condition, which is a mix of the return air and outdoor air.  It turns out that when you do the analysis based on the steady flow energy equation and conservation of mass and energy, the mix point will lie on a line connecting the two state points of interest, and its position on the line will be proportional to the percentage of total flow represented by each of the state points.

I go into the physics behind what I just said in a previous blog post titled Economizers–The Physics of a Mixed Air Plenum if you want to see the derivation.  But in practical terms, what it means is that you can plot the line on the psych chart and get your answer.  To do that with the basic chart, you need to plot the outdoor air point and return air point using the tools in the chart and then move an image of the chart to PowerPoint and work out the mixed condition by drawing and measuring lines.  You can see an example of that in the economizer post I just mentioned under The Psych Chart, A Graphical Approach to the Same Problem topic, and that is how we did it in the olden days.  It's not particularly complex and doesn't really take that long once you have done it a couple of times.
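If you prefer to do the arithmetic directly instead of measuring lines, the flow-weighted mixing rule just described can be sketched in Python.  This is a simplified illustration, not the chart's internal method; humidity ratio mixes exactly by mass, and dry bulb temperature mixes very nearly linearly, so a weighted average is the usual approximation:

```python
def mix_states(frac_oa, t_oa, w_oa, t_ra, w_ra):
    """Mixed air condition as a flow-weighted average of two air streams.

    frac_oa is the outdoor air fraction of total mass flow (0 to 1).
    Humidity ratio (lb moisture / lb dry air) mixes exactly by mass;
    dry bulb (°F) is treated as mixing linearly, the common approximation.
    """
    t_mix = frac_oa * t_oa + (1.0 - frac_oa) * t_ra
    w_mix = frac_oa * w_oa + (1.0 - frac_oa) * w_ra
    return t_mix, w_mix

# Hypothetical example: 10% outdoor air at 95°F/0.0150 lb/lb mixed with
# 90% return air at 80°F/0.0100 lb/lb → roughly (81.5, 0.0105)
```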

But, the professional version of the PG&E psych chart makes the mixing analysis even easier.  The reason is that the professional version of the chart gives you quite a few more processes to choose from in the Process column of the Psychrometric Process window, including an air mixing process.  This is what that looks like.


Notice that I have selected Air Mixing as my process, picked the two state points that I want to mix and specified the percentage of air from each state point in the fields in the lower left of the Psychrometric Process window.   When I do that, the conditions for the mixed air point are calculated and displayed for me under Current Point.   And when I click on Apply, the point is added to my chart, along with lines connecting it to the two state points associated with it.


By reading the information directly from the chart, or by looking at the data table in the Psychrometric Process window…


… we can see that the entering condition to our cooling coil on a design day will be 94.6°F tdb/75.8°F twb.

I should point out that in both versions of the chart, you can drag the side of the Psychrometric Process window to make it wider and see all of the psychrometric data for each point in the table, which is what I did to make the image above.

You can also copy the data table to your clipboard in both versions of the chart using the Copy Chart Data to Clipboard tool on the Edit drop down menu.


Having done that, you can then paste it into another application like Excel to work with the numbers mathematically or create your own data table or both.


Note that the table also includes other useful information where appropriate, like the SHR or the energy added or removed given the air flow for connected state points.  Bear in mind that the chart is actually calculating all of this information to multiple decimal places.   So, for instance, even though the return air condition is based on the space condition with no moisture added to it (SHR = 1), a very small moisture difference actually shows up because of the accuracy I used to enter relative humidity (100% vs. 99.999999%).

In any case, having determined the mixed air condition, which is the coil entering condition, we can complete our plot of the HVAC process occurring for our system by connecting the ADP to the mixed air point using the Cooling Coil process in the Psychrometric Process window.


Hopefully, at this point, you have seen that either version of the PG&E chart can be quite useful if you are working with HVAC systems and trying to understand what is going on in them.  If you do that sort of thing a lot, then the professional version of the chart will probably pay for itself by streamlining your process.   Plus, the professional version includes a number of very useful tools that I have not used up to this point.   So, I will close by showing you a few of the ones I use the most.

(Return to Contents)

Professional Version Extras

Bin Plots

One of my favorites is to do a bin plot of the climate data for a particular location on the chart.  The professional version of the chart has the Typical Meteorological Year 2 and 3 files (TMY2 and TMY3 files) built into it.  You can access that data from the Analysis drop down menu using the Open a Weather Data File tool.  If we do that for the HVAC process we just analyzed, it looks like this.


The warmer colors are areas on the chart with the most hours and the cooler colors are areas with fewer hours; the little color code key tells you how many hours are in each bin, and you can control the size of the bins and which hour, day, week, month, etc. is shown via the options in the weather data window.

As a result, we can see that even though we modeled the system on the design day, it could actually see conditions that were more extreme and modeling those conditions may be of interest to us.  By changing the values for temperature and RH for the outdoor air condition, we can quickly assess how that impacts our HVAC process.  In the summer …


… we would discover that we need a bigger cooling coil if we absolutely always want to deliver our space cooling design target.

In the winter …


… we would discover that even on an extreme day, the mixed air plenum should be nowhere near freezing if we are only using 10% outdoor air, even if the return air is now coming back colder than the space instead of warmer than the space.  That means that if perfect mixing occurs, our little system would not need a preheat coil and it would not have a freezestat trip unless something went wrong.

This is an important thing to be aware of when you are operating buildings and doing retrocommissioning because it is not uncommon to see active preheat coils, nuisance freezestat trips, and frozen coils under conditions where they should not have occurred if you plot the process out on a psych chart (or do the math).  When those conditions exist, they are likely the result of less than perfect mixing, which is very common in mixed air plenums.  That means that if you can improve the mixing, you can solve problems and save energy.

Our analysis also says we would not need to humidify since the mix point is right on the SHR line, assuming of course we had started out with a specific humidity in the building of about 57 grains per pound and the latent loads in the space were about the same as they were on the design day.

But, unless the building was totally leak free, including free of the leakage of water vapor (which will migrate against air flow and move from the place with the highest vapor pressure to the lowest), it is unlikely that we would maintain our targeted space condition without adding some moisture.  After all, ultimately, the air inside the building came from outside of the building, and on the extreme winter day, that air is very, very dry and it will enter the building via other mechanisms than our HVAC system, including infiltration, traffic through doors, and envelope leaks.

Note also that since our mixed air temperature is above our target, if the space we are serving really requires an ADP of 52.1°F to meet the space condition, then even with it being extremely cold outside, we would need to do some mechanical cooling and the coil would be wet if the indoor humidity levels were really up at the design conditions.  The load on the coil would have dropped due to the introduction of the colder, dryer outdoor air.  And again, the building would have to be perfectly leak free for this condition to exist.

If we added an economizer to our system so we could make our targeted leaving air temperature of 51.2°F by mixing cold outdoor air with warm return air and not running mechanical cooling, we would discover that on the extreme day, it would require about 35% outdoor air to hold our targeted discharge temperature.
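The outdoor air percentage an economizer needs can be estimated by rearranging the same flow-weighted mixing relationship discussed earlier.  Here is a small sketch; the temperatures in the example are hypothetical, not the St. Louis values from the chart:

```python
def economizer_oa_fraction(t_ra, t_oa, t_target):
    """Outdoor air fraction needed to hit a target mixed air temperature
    by blending return air at t_ra with outdoor air at t_oa (all °F).

    Derived by rearranging the mixing lever rule:
        t_target = f * t_oa + (1 - f) * t_ra
        =>  f = (t_ra - t_target) / (t_ra - t_oa)
    """
    return (t_ra - t_target) / (t_ra - t_oa)

# Hypothetical example: 70°F return air, 16.3°F outdoor air, 51.2°F target
# → about 0.35, i.e. roughly 35% outdoor air
```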


And since the mix point has a specific humidity below 57 grains per pound, we would need to humidify a bit to keep the building moisture level where we wanted it.  If we didn’t humidify, the dry mixed air created by the economizer process would eventually drive the indoor relative humidity down.

You can get a sense of where you might end up by assuming that the worst case specific humidity at the mixed air condition is the same as the outdoor air (which is where the air in the building came from), plotting the SHR line through that point, and looking at where it crosses your indoor dry bulb temperature target.  You could then calculate the mixed condition from this point as shown below.


Of course, the moisture added to the air by the load would tend to ratchet the mixed air condition up.


But infiltration and other factors would tend to dry out the air more than the economizer process would acting on its own, so my best guess is that the space would float around somewhere between the lowest possible specific humidity and the best case specific humidity.

The bottom line, for me at least, is that the bin plot feature of the professional chart is a really powerful tool and the one I probably use the most.  That is because it allows me to juxtapose the dynamics of the climate against the dynamics of the system and make appropriate design and operating decisions on that basis.

Bin Data Tables

If you look closely at the bin data dialog box, you will notice that you also have the option of exporting the bin data to a Data Table.  When you select that option, you can then pick a text file or Comma Separated Value (CSV) file, and when you tell the tool to Create a Table, it does just that; the table will open up in Notepad (.txt file option) or Excel (.csv file option).  When I select the CSV option for the St. Louis data we have been looking at, it comes up looking like this.


If you look at the full file, you will discover there is a line in it for each hour of the year with all of the associated weather data.  If you save this file as an Excel workbook, you will then be able to use things like the KW Engineering Get Psyched functions and basic HVAC equations to perform hour by hour energy calculations against a normal climate year.

You could even use Excel’s VLOOKUP function to import data from a logger or trend file for each hour and figure out what it costs to operate a dysfunctional system for a year.   You could then repeat the calculation with more appropriate parameters reflecting repairs to the system, which is what it would cost to run the system properly.  The difference between the two numbers is the savings potential.

You will find a number of examples of techniques like this in my string of blog posts on scatter plots or by selecting the Excel Techniques topic in the dropdown list on the right side of the blog home page.

ASHRAE Design Data

Another feature I frequently use is the ASHRAE design data feature that is included in the Pro version of the chart in the Tools drop-down menu.  This is the feature I used earlier in the post to get the design data for St. Louis, so I won’t say a lot more about it.  I will point out that in addition to the design day data for a number of winter and summer design conditions, you also get the extreme day data, wind data, and location information.

Steam Property Calculator

Another handy tool is the Steam Property Calculator, which also shows up in the Pro version in the Tools drop down menu.


The tool is basically a steam table where you can type in one property and read back all of the other properties at that condition at saturation.    In the example above, I have entered 212°F and the tool returned all of the metrics for steam at atmospheric pressure.

Motor Heat Calculator

There is a motor heat calculator included in the Pro version of the chart that tells you what the temperature rise will be across a fan due to the fan heat and the motor efficiency losses into the air stream.  In this example, I entered the flow rate, fan static pressure, static efficiency, and motor efficiency, and the other metrics were provided by the calculator when I clicked on the Calculate button.


Note that you still need to add the belt losses, which could be another 2-3% for a well-maintained system and maybe as much as 10% for a system where the belts were screaming when the fan ran.  If the motor is not in the air stream, then setting the motor efficiency to 100% will eliminate the motor efficiency losses from the assessment.
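If you want a quick sanity check on numbers like these, the classic rule-of-thumb fan heat calculation can be sketched as follows.  This is a generic approximation for standard air derived from the air horsepower equation, not the chart's actual calculator:

```python
def fan_temp_rise(sp_in_wc, fan_eff, motor_eff=1.0):
    """Approximate air temperature rise (°F) across a fan, standard air.

    Derived from: fan power (hp) = CFM * SP / (6356 * eff), heat of
    2,545 Btu/hr per hp, and sensible heat Q = 1.08 * CFM * dT, so
        dT = 2545 * SP / (6356 * 1.08 * eff) ≈ 0.371 * SP / eff
    Set motor_eff to 1.0 when the motor is outside the air stream,
    since its losses then go to the room rather than the air.
    """
    return 0.371 * sp_in_wc / (fan_eff * motor_eff)

# e.g. 4 in. w.c. fan at 65% static efficiency with a 90% motor in the
# air stream → roughly a 2.5°F rise
```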

If you are not familiar with fan heat, Jerry Williams wrote a really great pair of articles for Heating, Piping and Air Conditioning magazine a while back.  The articles are old enough that they aren’t in the digital archives.  But, as physics tends to be, the concepts are still perfectly valid and I have placed copies of the articles on my Google Drive at this link if you want to reference them.

(Return to Contents)


Hopefully, this gives you a sense of how to go about using the basic version of the PG&E psych chart (or any psych chart for that matter) and some insight into the advanced features you would get by upgrading to the professional version.  I have only listed the professional tools I use the most, meaning I have only scratched the surface in terms of what's in the package, including the ability to use different languages.


These screen shots of the various dropdown menus included with the Pro version should give you a pretty thorough picture of the features I didn’t highlight.

Analysis menu


Notes menu


Tools menu


Toolsv7 Menu


Settings Menu


(Return to Contents)


David Sellers
Senior Engineer – Facility Dynamics Engineering

Posted in Air Handling Systems, Economizers, Excel Techniques, HVAC Calculations, HVAC Fundamentals, Psychrometrics, Resource List and other Resources, Weather and Climate Interactions with Buildings and Systems

Retrocommissioning Findings: Reducing Boiler Purge Cycle Losses–How I Obtained My Results

This post expands on what I did in the final section of the last post to show how I arrived at my answers.

Before proceeding, I want to emphasize that what I am about to show is one approach out of many possible approaches to the problem.  So if you came up with similar answers via a different path, then all that means is that you think differently from me, which could be a good thing.

If your answers are significantly different from mine, it probably means that one of us made a misstep, and the person that did that could be me, so we should talk.  As a frame of reference on “different,” given all of the assumptions involved, I suspect my projections are good to +/- 10-20%.  So if you are in that window, we probably agree.

These links will take you to specific locations in the content if you want to jump around.  At the end of each main heading, there will be a “Back to Contents” link to bring you back here.

Step 1 – Associate an Outdoor Air Temperature with Each Boiler Flue Temperature Data Point

For my analysis, I decided to correlate the logged data from the flue gas logger with data from a different logger that we had deployed in the outdoor air intake of the corridor make up air handling system.   The thought was that since the air handling system was a 100% outdoor air system, the data from the outdoor air intake would be a reasonable representation of the outdoor conditions during the time that boiler flue gas temperatures were being logged.

As was discussed in the previous post and the post on Aliasing, I used a fairly fast logging rate for the flue gas temperature to make sure I picked up the nuances of the cycle.  But I needed to coordinate that with weather data, specifically outdoor air temperature data, which had a much slower sampling rate.

I accomplished this using the VLOOKUP function in Excel as is illustrated below. Note that I have clicked into the formula bar prior to making the screen shot so that the various variables are highlighted in color in the formula and the cells they are associated with are highlighted in the same color in the spreadsheet.


Specifically, the formula in the orange column in cell D908 …


… goes to a table that is on the next tab (the Cor-MAU-1_0_0 tab) and scans vertically (that's the “V” part of VLOOKUP) down the first column of data in the cell range that starts with B4 and ends with F10388.  It scans down the 1st column (column B) until it finds the closest match to the value in cell B908 in the table on the W._Boiler_Flue_Temp tab.  Once it finds the closest match, it returns the value that is in column 2 of that row.

Here is a screen shot of a portion of the Cor-MAU-1_0_0 tab that contains the beginning of the range.  As you can see, it is simply the output from a data logger that was monitoring a number of parameters in a 100% outdoor air system serving the facility.


Note that the first column in the range specified in VLOOKUP is the column with dates and times and that the second column is the column with the data we are looking for.

The dollar signs ahead of the references to the range that the table lies in are there to make the references absolute references, something I will discuss in a bit more detail in Step 2 since it comes up again there.  My point in this section is to illustrate how I picked up data from a logger that had a much slower logging rate and correlated it with data from a logger that was measuring something else during the same time period at a much faster sampling rate.
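For what it's worth, the same slow-to-fast alignment can be done outside of Excel.  Here is a minimal Python sketch that mimics VLOOKUP's approximate match mode, which returns the value for the largest timestamp at or before the lookup time when the lookup column is sorted ascending:

```python
from bisect import bisect_right

def vlookup_approx(lookup_time, times, values):
    """Emulate VLOOKUP's approximate match: return the value paired with
    the largest timestamp <= lookup_time.  `times` must be sorted
    ascending, just as VLOOKUP's approximate mode requires.
    """
    i = bisect_right(times, lookup_time) - 1
    if i < 0:
        raise ValueError("lookup_time precedes the first sample")
    return values[i]

def pair_fast_with_slow(fast_times, slow_times, slow_values):
    """Pair each fast-rate sample time (e.g. flue gas temperature) with
    the most recent slow-rate sample (e.g. outdoor air temperature)."""
    return [vlookup_approx(t, slow_times, slow_values) for t in fast_times]
```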

Incidentally, if you want to see a live demonstration of how to use VLOOKUP, I have one in the video clip that is part of the blog post titled Using Scatter Plots to Assess Building Performance–Part 3.

Back to Contents

Step 2 – Detect the Changes in Temperature that Correlate with the Various Steps in the Boiler Cycle

In the previous post, we discussed how it would be possible to use Excel techniques to create “flags” that detected the changes in the data pattern from our flue gas logger that were indications of different parts of the firing cycle.  Here is a reproduction of the pattern that can be observed in the logged data for your reference.


A Simplifying Observation

One thing that makes this analysis a bit simpler than it might otherwise be is that each cycle consists of a number of steps that are predictable in terms of when they happen and how long they last because they are programmed into the firing controller.  Specifically, we know that each cycle will include:

  1. A purge ramp up that lasts 30 seconds.
  2. A full combustion air flow purge that lasts 60 seconds.
  3. A purge ramp down that lasts 30 seconds.
  4. A pilot ignition cycle that lasts 10 seconds.
  5. A main flame ignition cycle that lasts 15 seconds.
  6. A post-purge with combustion air only that lasts 14 seconds.

The only variable portions of the cycle are the time the burner fires and the time that the boiler stands by without doing anything.  That means that if we pick up an indicator for the variable events, we can figure out where the other events in the sequence need to fall relative to those points in time.

In other words, if I can pick up an indication of when combustion starts, I know that events 1 – 5 on the list above have occurred in sequence immediately ahead of that point in time.  And if I can pick up when combustion ends, I know that event 6 on the list above will happen immediately after that.

Finally, if I know when combustion starts for two sequential cycles, then I know the length of one complete cycle of operation. If I subtract the accumulated time for all of the programmed events from the complete cycle time and also subtract the time that combustion is actually occurring from the complete cycle time, I am left with the standby time; i.e. the time that the boiler sits idle between burner events.
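Since the programmed events always total the same number of seconds, the bookkeeping is just subtraction.  A sketch, using the step durations from the list above:

```python
# Fixed, programmed portions of every firing cycle, in seconds, taken
# from the firing controller sequence described above.
PURGE_RAMP_UP = 30
FULL_PURGE = 60
PURGE_RAMP_DOWN = 30
PILOT_IGNITION = 10
MAIN_IGNITION = 15
POST_PURGE = 14

PROGRAMMED_TOTAL = (PURGE_RAMP_UP + FULL_PURGE + PURGE_RAMP_DOWN
                    + PILOT_IGNITION + MAIN_IGNITION + POST_PURGE)  # 159 s

def standby_time(cycle_seconds, firing_seconds):
    """Idle time per cycle: total cycle length (start of combustion to
    start of combustion) minus the fixed programmed events and the
    measured firing time."""
    return cycle_seconds - PROGRAMMED_TOTAL - firing_seconds
```

For example, a hypothetical 15 minute (900 second) cycle with 5 minutes of firing would leave 441 seconds of standby time.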

Back to Contents

Developing the Flags

If you study the diagram above, you will notice that the start and stop of combustion are the two largest temperature change events.  Intuitively, that would seem to make them easy to detect by simply looking at the rate at which temperature is changing, which is what I did initially to create my flags.

But what I discovered was that the various spikes in the data also had some pretty rapid changes in temperature, they just did not last that long.  So my first stab at making flags based on the current temperature relative to the temperature for the past 10-20 seconds did not reliably pick out only the beginning and ending of the combustion process. Rather it would pick up some (but not all) of the other sudden temperature changes in addition to the ones I was looking for, meaning it gave me false starts and false stops.

Ultimately, I realized that what I was really trying to detect was a sudden change in the slope of the line represented by the data set; i.e., the change in temperature relative to the change in time.  For instance, when the pilot is ignited, there is a sudden change in the slope of the line from a gradual downward incline to a sharp upward incline.


The same is true for the end of the combustion cycle but the slope then shifts from a gradual upward incline to a sharp downward incline.

Calculating the Slope of the Flue Gas Temperature Line

Here is a screen shot of the spreadsheet tab that I used to create my cycle counter with the formula that I used to calculate the slope of the flue temperature line highlighted so you can see how it works.


The formula, which is in cell E26 of the green column, first calculates the change in average temperature (the yellow column) relative to time (the red column).  Specifically, it averages the temperature and time for the 30 seconds prior to the time associated with the row that the calculation is in (the cells outlined in purple and blue) and subtracts that from the average temperature and time for the 30 seconds that follow (the cells outlined in red and green).

This is the average change in temperature for a minute centered around the time associated with the calculation.  Since the formula uses relative references, when I paste the formula down the column in the spreadsheet, it creates a sliding window that is always looking 30 seconds behind and ahead of the time associated with the current row.
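The same sliding-window slope can be computed outside the spreadsheet.  A sketch, assuming an evenly sampled record so the 30 second windows can be expressed as a fixed number of rows (e.g., 15 rows at a 2 second logging interval):

```python
def centered_slope(times, temps, window_rows):
    """Slope at each interior sample, mirroring the spreadsheet formula:
    (average of the window_rows samples after) minus (average of the
    window_rows samples before), for both temperature and time.
    Samples too close to either end get None.
    """
    n = len(temps)
    out = [None] * n
    for i in range(window_rows, n - window_rows):
        t_before = sum(temps[i - window_rows:i]) / window_rows
        t_after = sum(temps[i + 1:i + 1 + window_rows]) / window_rows
        x_before = sum(times[i - window_rows:i]) / window_rows
        x_after = sum(times[i + 1:i + 1 + window_rows]) / window_rows
        out[i] = (t_after - t_before) / (x_after - x_before)
    return out
```

If the times are Excel date serials (days), multiply each result by 24 × 60 to get degrees per minute, just like the spreadsheet does.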

Compare the screen shot above, with this one, and I think you will see what I mean;  the formula is in a cell that is one row further down the spreadsheet and so are all of the cells that it references compared to the formula in the cell above it.


Excel and Dates and Times

You may be wondering why the denominator of my little equation includes multiplying the difference in average date and time by 24 and 60.  The reason is that Excel stores dates and times in units of days, and I wanted to calculate slope with units of °F per minute, not °F per day.

As humans, we think of a date and time as a string of different parameters mixed with special symbols.  And the order in which they are presented is part of conveying information in addition to the value of the numbers.  For instance, as I type this, it is the 25th minute of the 14th hour of the 22nd day of the 3rd month of the 2,016th year of the current era.  A very common way of noting this would be to type:

Currently, it is 2:25 pm on March 22, 2016


2016/03/22 14:25


22 March 2016, 2:25 PM



So there are a lot of different ways to say the same thing, with a lot of interpretation required to understand what is being said.

Computers like things to be a bit more straight-forward than that, so Excel does not think of it that way.  Rather, Excel thinks of it as being 42,451.600926 days since January 1, 1900.  Tomorrow at this time, it will be 42,452.600926 days since January 1, 1900;  in other words the number is incremented by 1 each day.

This makes it very easy for Excel to do the kind of math I wanted to do in my calculation.  But our human minds would have trouble recognizing 42,452.600926 as representing March 23, 2016 at 2:25 PM.  So Excel provides formatting features that let us see the numbers in a manner we can quickly comprehend while allowing Excel to work with the dates as numbers.

So, bottom line, to get units of °F per minute, I had to multiply the number in the denominator, which is in days, by 24 (the hours in a day) and then 60 (the minutes in an hour).  I have done a couple of blog posts that include discussions of Excel date and time values, so you may want to take a look at them if you are still curious about it.  The most recent one includes a little spreadsheet tool that helps you convert dates and times to the values Excel uses to format the time axis in a graph.
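If you ever need to reproduce Excel's date math outside of Excel, a sketch like this works.  One quirk to note: Excel calls January 1, 1900 "day 1" but also (incorrectly) treats 1900 as a leap year, so for dates after February 1900 the effective epoch is December 30, 1899:

```python
from datetime import datetime

# Effective epoch for Excel's 1900 date system (after Feb 1900),
# accounting for Excel's historical 1900 leap-year bug.
EXCEL_EPOCH = datetime(1899, 12, 30)

def to_excel_serial(dt):
    """datetime -> Excel serial number: whole days plus the time of day
    as a fraction of a day."""
    delta = dt - EXCEL_EPOCH
    return delta.days + delta.seconds / 86400.0

def serial_diff_minutes(serial_a, serial_b):
    """Difference between two Excel serials in minutes; this is the
    24 * 60 factor from the slope formula's denominator."""
    return (serial_b - serial_a) * 24 * 60

# e.g. to_excel_serial(datetime(2016, 3, 22, 14, 25)) → about 42451.6007
```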

Detecting the Burner Being Turned Off

The next step in the process was to write a formula that would somehow flag that there had been a sudden change in the slope of the flue gas temperature line, because that was my clue that the burner had started firing or had stopped firing.

The two events I was trying to capture (the start and end of combustion) could be further delineated by the fact that the only event that had a steep upward slope lasting more than 3 seconds when flue temperatures were above about 250°F was the start of combustion. 

Similarly, the only event that had a steep downward slope above 500°F was the end of combustion.  So I decided to include these items as filters for detecting those events. 

Here is a screen shot of the formula I used to detect the burner shutting down.


The formula, which is in the blue column, compares the boiler flue temperature in the yellow column with the 500°F threshold I selected as my “cycle ends” indicator (cell C6, with the orange border), and if the current temperature is higher than that but falling off at a rate of 200-300°F per minute (cells D10 and D11, outlined in purple and green), then it places a “1” in the cell.  Otherwise it puts a “0” in the cell.

As a result, I now have a column in my spreadsheet where the number will change from 0 to 1 if the flue gas temperature is above 500°F but falling off at a rate of 200-300°F per minute.
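In plain code, the logic of that cell works out to something like the sketch below.  Python stands in for the Excel IF here; the 500°F and 200-300°F per minute thresholds are the ones from the post, while the sample readings are made up:

```python
def burner_off_flag(flue_temp_f, avg_slope_f_per_min,
                    end_temp=500.0, fast_fall=-300.0, slow_fall=-200.0):
    # Mirrors the Excel formula: flue temperature above the "cycle ends"
    # threshold AND falling at 200-300 degF per minute -> 1, otherwise 0.
    if flue_temp_f > end_temp and fast_fall <= avg_slope_f_per_min <= slow_fall:
        return 1
    return 0

print(burner_off_flag(620.0, -250.0))  # 1: hot and falling fast
print(burner_off_flag(620.0, -50.0))   # 0: hot but only drifting down
print(burner_off_flag(400.0, -250.0))  # 0: below the 500 degF threshold
```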

Note that I could have simply entered the 500°F, 200°F and 300°F metrics directly into my formula like this:


But by using cell references, like this …


… I set the formula up so I could play with the variables to fine tune it, which was a handy feature in terms of getting it set up to only trigger on the desired event.

A Word About the Dollar Signs

If you look closely at the cell reference in the formula in the previous screen shot, you will notice that some of them have dollar signs ($) ahead of the letter and number associated with the cell and others don’t.  The dollar signs tell Excel to look for the information in a very specific row and a very specific column;  the ones indicated by the letter and number after the dollar sign.  Because the reference is specific, it is termed an “absolute” reference.

In contrast, if there are no dollar signs, then Excel looks for the information in a cell relative to the cell containing the formula.  This is called a “relative” reference.

Applying this to the formula in the screenshot above, the dollar signs are telling Excel:

  • No matter what cell the formula is in, it should go to cells C6 for the temperature that will be the trigger for detecting the end of a firing cycle, and
  • No matter what cell the formula is in, it should go to cells D11 and D10 for the slope values that should be used as a part of the detection process. 

This is accomplished by referring to the cells as $C$6, $D$11, and $D$10.

In contrast, the formula is saying to look for the flue gas temperature in column C of the same row as the formula (C90 in the example) and to look in column E of the same row as the formula for the slope information (E90 in the example).  This is accomplished by not putting dollar signs into the cell reference.

Note that you can have an absolute reference to just the row or just the column in addition to locking down on a specific cell by making an absolute reference to both.  A short cut for setting this up when you are typing a formula in Excel is to press the “F4” key while the cell reference is highlighted in the formula window.

The first press puts dollar signs on both the row and column reference.  Each subsequent press toggles you through a different option until, after 4 presses, you are back to where you started.

Detecting the Burner Starting

By applying a similar filter but looking for a positive (rising) sudden change in the slope of the flue temperature line and looking for that condition only if the flue temperature was above 250°F, I was able to detect burner start events as illustrated in the formula that is  highlighted in the purple column in this screen shot.


You may be wondering why this detector doesn’t also flag the little spike in temperature that happens right after the burner shuts down during the post firing purge cycle.  After all, it is a sudden, positive change in slope at a flue gas temperature that is above 250°F, right?


The reason is that it does not last long enough. 

In other words, the sliding window, which averages the slope over the 30 seconds before and after the point in time the slope calculation is being done for, flattens out the short-duration spike.  In contrast, a spike that persists will eventually show up as a large change in slope.

In this case, the slope at the beginning of combustion quickly jumps above the 100 °F per minute trigger that I have set and stays there. As a result, I reach a point where the average slope also stays above the trigger value.


In contrast, the spike at the end of the cycle during the post purge is not steep enough nor does it last long enough for the average value to hit the triggering threshold.
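If you want to see why the averaging window suppresses the short spike, here is a small Python sketch.  The window size (±10 samples, i.e. ±30 seconds at a 3-second logging interval) follows the post; the slope values themselves are invented to make the point:

```python
def centered_average(values, index, half_window=10):
    # Average over a window centered on `index`, clipped at the ends;
    # with 3 s samples, half_window=10 is the +/-30 s window in the post.
    lo = max(0, index - half_window)
    hi = min(len(values), index + half_window + 1)
    return sum(values[lo:hi]) / (hi - lo)

# A 6 s spike (two samples) versus a sustained rise, both at 150 degF/min:
spike = [0] * 10 + [150, 150] + [0] * 10
sustained = [0] * 5 + [150] * 20

peak_spike = max(centered_average(spike, i) for i in range(len(spike)))
peak_sustained = max(centered_average(sustained, i) for i in range(len(sustained)))

print(peak_spike)      # stays far below a 100 degF/min trigger
print(peak_sustained)  # reaches the full 150 degF/min and trips the trigger
```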


Applying a Bit More Filtering

If you look closely at the following screen shot …


…you will notice that while the Main Flame On Detection formula successfully triggers when the burner cycles on, there are actually six data points that met the criteria (in Rows 285 – 289).  This is because the data is noisy;  sometimes there would only be 1-2 events that would trigger the flag, other times 3-4, etc.

I was able to tune the slope and temperature trigger values to minimize the problem, but I never could get it fine-tuned so that I only had one triggering event per cycle, which is what I really was looking for.  To solve that problem, I added a formula that looked for the first transition from “0” to “1” as the indicator that the event had happened.  For the burner start event, when the formula detected that condition, it would return the words “Combustion Starts”.  Otherwise, it would put a “0” in the cell.  This is illustrated in the screenshot below in the highlighted formula in the gray column.
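The first-transition trick can be sketched like this (Python in place of the Excel formula; the flag sequence below is invented):

```python
def mark_first_transitions(flags, label):
    # Return `label` only at the first 0 -> 1 transition of each run of 1s,
    # the way the gray-column formula collapses a noisy burst of trigger
    # points into a single event marker; everywhere else return 0.
    out, prev = [], 0
    for flag in flags:
        out.append(label if (prev == 0 and flag == 1) else 0)
        prev = flag
    return out

flags = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0]   # five raw trigger points
print(mark_first_transitions(flags, "Combustion Starts"))
# Only two markers survive, one per burst of 1s.
```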


A similar formula in the white column selects the first burner off event as indicating “Combustion Ends”.


Finally, I added a column that would pick up the “Combustion Starts” and “Combustion Ends” markers and combine them into the same column as illustrated in the formula highlighted in the second red column below.


I could then filter the data for only “Combustion Starts” and “Combustion Ends” events …


… which set me up for my next step.

Back to Contents

Step 3 – Determine the Cycle Length for Each Logged Cycle and the Number of Cycles per Hour

To start this step, I took the filtered data shown in the screen shot above and pasted it as values into a new tab in the spreadsheet.  I did this because I only wanted the rows associated with a combustion start and end event so I could do math on those events with a regular, repeated pattern that would allow me to copy and paste a formula down the entire spreadsheet.  I wanted to get rid of all of the other rows, which were hidden but still present when I applied the filter, and pasting the data as values did that for me.

This screen shot shows how to paste as values …


So bottom line:  by pasting my filtered data as values into a new tab, I was left with a copy of the data from the previous sheet, but only the rows with a combustion start or a combustion end event.  After a bit of formatting, my result looked like this.


Notice how the sequentially numbered rows alternate events in a regular pattern, which is what I wanted to allow me to do the next step in my process.

Before moving on, I did a quick scan of column I to make sure that it really represented alternating events.  That actually sounds harder than it is.  Your eye (or at least my eye) is pretty fast to detect a break in the regular patterns that exist in columns E, F, G, H, and I. So, when I scanned down the columns quickly, right towards the end of the file, I noticed an anomaly.


Notice how there are two “Combustion Starts” events detected without a “Combustion Ends” event between them.  A quick check of the original unfiltered data revealed that there was a little “blip” in the flue temperatures and the rate of change that toggled my trigger calculation an extra time for one of the starts.


Notice how the Main Flame On Detection value cycles from two occurrences of “1” to five occurrences of “0” and then back to six occurrences of “1”.  There was no off event in between, but the toggling of the flag created an extra “Combustion Starts” event that was not real.

I could have played with the filters a bit more to get rid of the anomaly, but since there was only one and I had found it, I simply deleted the line before moving on.

My next step was to determine the length of each firing cycle by subtracting the time for a “combustion starts” event from the time for the subsequent “combustion ends” event, as illustrated below in the highlighted formula in the second yellow column.


A similar formula in the second green column calculates the total cycle time by subtracting the combustion starts time for the current cell from the next combustion start time.


By definition, this includes all of the elements of a full firing cycle including all of the purge and ignition steps as well as the standby time.

My next step was to convert the cycle duration to hours, which basically meant taking the number in the green column and multiplying it by 24 (the highlighted formula in the 2nd blue column) …


… and formatting it as a number …


.. instead of hours, minutes and seconds.


Next, I converted the duration of the cycle into cycles per hour (basically the reciprocal value;  the highlighted formula in the second purple column) …


Finally, I correlated the cycles per hour with the average outdoor air temperature for the cycle (the highlighted formula in the second gray column) …


… and the average time that the cycle occurred (the highlighted formula in the second white column).


That set me up for my next step.
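The arithmetic in this step is simple enough to sketch in a few lines of Python.  The serial times below are made up for illustration; only the regular, alternating start/end pattern matters:

```python
def cycle_stats(starts, ends):
    # Firing length and cycles-per-hour from alternating start/end times
    # expressed as Excel-style serial values (days).
    stats = []
    for i in range(len(ends)):
        firing_hours = (ends[i] - starts[i]) * 24        # serial days -> hours
        cycle_hours = (starts[i + 1] - starts[i]) * 24   # next start - this start
        stats.append((firing_hours, 1 / cycle_hours))    # reciprocal = cycles/hr
    return stats

# Made-up serial times, not from the logger data:
starts = [42451.000, 42451.010, 42451.021]   # "Combustion Starts" times
ends = [42451.004, 42451.015]                # matching "Combustion Ends" times

for firing, cph in cycle_stats(starts, ends):
    print(round(firing, 3), round(cph, 2))
```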

Back to Contents

Step 4 – Develop a Relationship Between Outdoor Air Temperature and Boiler Cycles

At this point, I was able to set up a relationship between the outdoor air temperature and the number of  boiler cycles per hour and develop a mathematical relationship that expressed it.   I explained how I did that in the previous post so I won’t repeat myself and will just reproduce the chart here.



I created the chart by filtering the data we have been discussing to eliminate the blanks …


… which gave me this.


I was then able to copy and paste the values in columns M, N, and O into a new tab and develop the chart that I showed at the beginning of this section.


Back to Contents

Step 5 – Determine the Combustion Air Flow Rate

As I mentioned in the previous post, the number I was really after was how much energy a purge cycle costs.  The idea was to use the sensible heat equation, which I have reproduced below, to figure that out.


To apply the equation, we can figure out the temperature rise by using hourly outdoor air temperature data and our logger data as was discussed under Step 1.  The missing piece of information at this point is the flow rate.

In the previous post, I mentioned that …

The combustion air fan needs to provide the air flow required for complete combustion during high fire.  So there is a clue.

… and …

there is a clue in the information we looked at previously

So, let’s take a look at what those clues are and how we can use them.

The reason that the first quote is a clue is because it is possible to use principles from chemistry to define how much air is required for complete combustion of a given fuel.   It is a branch of chemistry called “Stoichiometry” which comes from a Greek word meaning “measure of elements”.

If you search for “stoichiometric combustion of natural gas” on the internet, you will come up with a number of places where the fuel/air relationship is given.  It is important to distinguish between the ratio on a mass basis and on a volume basis;  since we are talking about burning cubic feet of natural gas, we need to use the volume basis, where the ratio is 9.7 to 1.

The second clue was in the form of the boiler specifications, which I have reproduced below for your reference.


Notice that there are figures for fuel consumption with a number of different units of measure, including cubic feet per hour.   So, we could use the 8,798 cubic feet per hour of natural gas rating to come up with the combustion air requirement for perfect combustion at full fire.

But even if that data was not available, most boiler nameplates indicate the maximum energy input rate at full fire.  For the boiler I was working with, the 900 under capacity meant the boiler was rated for 900 MMBtu, or 900,000,000 Btu/hr.


MMBtu is a common boiler rating metric and stands for 1 million Btus.  In some ways, it’s a confusing unit but I believe the “M”s have their roots in Roman Numerals, where M stood for 1,000. So 1,000 x 1,000 (MM) is 1,000,000.

The question then becomes:

how many Btus are in a cubic foot of natural gas?

It turns out that the actual value will vary depending on the source of the gas, and you should be able to find the exact value for the gas you are buying on your utility bill or in the regulations that govern your utility.  For example, here is the value associated with Northwest Natural Gas, the utility that serves our house.


But lacking that, a good rule of thumb is to assume there are about 1,000 Btus per cubic foot of natural gas.  The bottom line is that if we know the boiler firing rate and assume stoichiometric combustion, we can calculate the required combustion air flow at full fire.


But perfect combustion can be tricky to achieve, and if we didn’t completely burn the fuel, it would waste energy and also be a bit dangerous since a combustible mix would be dumped out of the flue at roof level.  Thus, to ensure complete combustion, burners are set up with a bit more air than necessary.

This is termed “excess air” and works against peak efficiency because the air that is not involved in the combustion process is simply heated up and dumped to the roof.  Current best practice is that about 9.5% excess air will provide a good balance between safety and efficiency.  So taking that into account, the following table summarizes how I arrived at the peak combustion air flow requirement for the boiler we are discussing.
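Pulling the pieces of this step together, here is the calculation in code form, using the 8,798 cfh gas rating, the 9.7:1 volume ratio, and the 9.5% excess air figure discussed above:

```python
# Peak combustion air requirement from the boiler's gas rating.
gas_cfh = 8_798        # nameplate natural gas input, cubic feet per hour
stoich_ratio = 9.7     # cubic feet of air per cubic foot of gas (volume basis)
excess_air = 0.095     # 9.5% excess air for safe, complete combustion

air_cfm = gas_cfh * stoich_ratio * (1 + excess_air) / 60   # cfh -> cfm
print(round(air_cfm))  # about 1,557 cfm, within rounding of the 1,558 cfm
                       # figure used later in the post
```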


Back to Contents

Step 6 – Determine the Purge Air Flow Rate for Each Portion of the Purge Cycle

The unresolved issue that cropped up for me after I had decided how I would come up with the combustion air flow was that it is not the same for all phases of the purge cycle.  Specifically, you will recall from that post and information at the beginning of this post that during the pre-fire purge cycle, the air ramps up to the maximum, holds that flow rate for a minute and then ramps back down.   And since the post purge cycle occurs when the boiler has reduced its firing rate to the minimum, then it likely occurs with the minimum air flow through the boiler.

To reflect those contingencies, I made the following assumptions.

Understanding the Burner Turndown Ratio

For most machines that can vary their capacity, there will be a limit with regard to how low they can go relative to their peak capacity.  This limit is often expressed as a turndown ratio of the form Peak Capacity:Minimum Capacity.  For the burner on the boiler we were dealing with, the turndown ratio was 10:1, meaning that the maximum firing rate was 10 times the minimum firing rate.

Given a turndown ratio of 10:1 and a maximum combustion air requirement of 1,558 cfm with 9.5% excess air, the implied minimum combustion air flow rate with 9.5% excess air will be 156 cfm.

Back to Contents

Addressing the Pre-fire Purge Ramp Up and Ramp Down

For the ramp up and ramp down portion of the pre-fire purge, I assumed that there was a linear relationship between firing rate and time and that the combustion control system was adjusted to keep the fuel/air ratio correct for the entire process.  Mathematically, that means I could assume the combustion air flow rate for the cycle was the average of the maximum and minimum flows, or 857 cfm with 9.5% excess air.

Back to Contents

Addressing the Post-fire Purge

Since in theory, the post-fire purge occurred after the boiler had turned down to minimum fire, I assumed that it occurred at the minimum combustion air flow rate or 156 cfm with 9.5% excess air.
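The purge-flow assumptions above can be summarized in a few lines of code (the 1,558 cfm peak and the 10:1 turndown are the values from the post; all flows are at 9.5% excess air):

```python
peak_cfm = 1_558   # peak combustion air flow at 9.5% excess air (from Step 5)
turndown = 10      # 10:1 burner turndown ratio

min_cfm = peak_cfm / turndown         # post-fire purge at minimum fire
ramp_cfm = (peak_cfm + min_cfm) / 2   # linear ramp -> average of max and min

print(round(min_cfm))   # ~156 cfm
print(round(ramp_cfm))  # ~857 cfm
```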

Back to Contents

Determining the Flue Gas Temperatures During the Purge Events

As you can see from the charts of the boiler cycle, the flue gas temperature is constantly varying during the purge events.  One way to deal with that would have been to apply the sensible heat equation to each logged data point to calculate the energy associated with that 3 second interval, and then add it all up to come up with the over-all energy associated with a complete cycle.

To save some time, I decided to make two conservative, simplifying assumptions.

  1. I decided to assume a fixed leaving air temperature for all of the pre-fire purge cycles that was the lowest value I observed in the data set for a pre-fire purge cycle;  specifically, I used 184°F.
  2. I decided to assume a fixed leaving air temperature for all of the post-fire purge cycles that was the lowest value I observed in the data set for a post-fire cycle;  specifically, I used 350°F.

The results will be conservative because during the real cycles, some of the events are at a higher temperature, meaning the Δt and associated energy calculated by the sensible heat equation would be higher than I would project with a fixed minimum temperature.
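With flows and leaving temperatures assumed, the loss for any one purge segment follows directly from the sensible heat equation.  Here is a sketch using the standard 1.08 factor for standard-density air;  the 60°F entering temperature and 1-minute duration are placeholders for illustration, not values from the post:

```python
def purge_loss_btu(cfm, leaving_temp_f, entering_temp_f, duration_min):
    # Sensible heat equation: Q (Btu/hr) = 1.08 x cfm x delta-T,
    # prorated by the duration of the purge segment.
    return 1.08 * cfm * (leaving_temp_f - entering_temp_f) * duration_min / 60

# Example: ramp-phase flow of 857 cfm, the assumed 184 degF leaving
# temperature, 60 degF entering air, one minute of purge:
print(round(purge_loss_btu(857, 184, 60, 1)))  # roughly 1,900 Btu
```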

Back to Contents

Putting it All Together to Come Up with Energy Up the Flue per Purge Cycle

The following table summarizes how I calculated the energy loss for each segment of the purge cycle.


Incidentally, in case you are curious, I actually ran out the calculation using the data point by data point method of calculating the energy loss up the flue, and the result was within 5% of what I came up with using my assumed, fixed leaving temperature.

Since my technique was 5% low, it means that if anything I have understated the savings a bit, which is probably better than overstating the savings. Another way to think of it is that there is a 5% safety factor built into my calculation as a result of my conservative estimate of the temperature in the flue for a purge cycle.

Back to Contents

Step 7 –  Determine the Total Purge Losses for a Year Given the Current Operating Pattern

We now have all the parts and pieces we need to do a projection of the losses up the flue due to the purge cycles for a typical year given the current operating pattern.    Specifically, we have:

  • A relationship between boiler cycles and outdoor air temperature that was developed in Step 4, and
  • The energy cost per purge cycle information developed in Step 6,

All that we need is annual weather data.

Selecting a Weather Data Source

There are a number of ways we could go here including using bin data, a year of actual weather data, or a TMY (Typical Meteorological Year) file.   If you went to the Google drive location that I provided in the last post to get the data you needed to try your hand at this, you have discovered that I shared all of these resources with you there.

For my analysis, I decided to correlate the logged data cycles with a typical weather year in Columbus vs. an actual weather year.  The TMY files are normalized data, meaning they reflect the average conditions for all of the years in the data set.  To my way of thinking, that means they are a good way to assess what, in general, I might expect to happen moving forward in time.  If I use the TMY data, it is likely that I will neither overstate nor understate the savings and other benefits associated with an improvement.

In contrast, hourly weather data for a particular year is what really happened, including extreme conditions.  The figure below contrasts a year of TMY data (the blue line representing normalized conditions for several years) with what actually happened in 2013 (the red line).


In general terms, the TMY data masks the extremes.  That is probably good in terms of projecting savings.  But from an operational standpoint, understanding what might happen on an extreme day is probably a good thing.  So both data sets have a place in our analysis toolkit.

Back to Contents

Bin Data vs. Hourly Data

In the olden days, when I didn’t have a computer on my desk, a spreadsheet was actually a piece of paper that you wrote on, and I did the required calculations with a slide rule, a calculator, or an adding machine.  Back then, I would have likely turned to the bin data for this calculation instead of the weather files.

For one thing, the weather files would have been hard to come by.  And for another thing, you basically do the calculations we are about to discuss for each row in the spreadsheet and the bin data spreadsheet has about 24 rows (one for each 5 degree bin between the minimum and maximum temperature for the climate in the area) vs. the weather data spreadsheet, which has 8,760 rows (one for each hour of the year).

So back in the olden days, we used bin data a lot for the pure practicality of it.  But now, once you have the formulas set up, which will be virtually the same for bin data or hourly data, it’s just a matter of copying and pasting them into all of the rows in your spreadsheet.  Given the speed of current laptop and desktop computers, it’s not much more labor- and time-intensive to do an hourly calculation vs. a bin calculation.  And the hourly calculation has a number of advantages to it.

For one thing, you can define what happens for each hour of the day rather than average conditions that need to be applied to the range of hours in a bin.  For instance, for a variable flow system that has a calculation based on flow and pressure in the system, you could use a formula to determine flow rate for each hour based on the time of day, day of week, etc. 

You could also use a different flow rate for each hour based on flow profile you develop from past experience or trend data.  Having established the flow, in the next cell you could use the square law to predict the pressure required based on a pressure and flow reading taken in the system concurrently and the flow associated with the hour in question.
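As a sketch of that idea, here is the square law anchored to a single concurrent field reading (all of the numbers below are hypothetical):

```python
def predict_pressure(flow, ref_flow, ref_pressure):
    # Square law: pressure varies with the square of the flow ratio,
    # anchored to a concurrent flow/pressure reading taken in the system.
    return ref_pressure * (flow / ref_flow) ** 2

# Hypothetical reading: 2.0 in.w.c. at 10,000 cfm; predict at half flow.
print(predict_pressure(5_000, 10_000, 2.0))  # 0.5 in.w.c.
```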

Or you could use a formula to determine the operating state of the system based on the weekly schedule and known holidays.  In contrast, if we were using the bin data like the data I provided on the Google drive for Columbus …


… and had to reflect a schedule that ran a system 5 days a week from 8AM to 6 PM, we would have to apply de-rating factors to the total hours reported in each bin because they represent the hours at the indicated temperature and mean coincident wet bulb conditions for 7 days a week.   In addition, out of the 8 hours of the day associated with the 00 – 08 bin, we are only concerned with 1 of them (from 8 AM to 9AM).  Similarly, we are only concerned with a portion of the hours represented by the 17-24 hours bin.

The bottom line is that while bin data certainly is acceptable as a method for projecting energy savings, using hourly data will open the door to better precision in the calculations.  And current technology means there will be very little difference in the time it takes to do the calculation assuming you are using the same steps in either process.

Back to Contents

Doing the Math

What follows are the steps in the process I used to develop my projection of the energy loss due to the purge processes in a typical boiler cycle.  Note that once I had the spreadsheet set up, I was able to quickly make copies that allowed me to assess the impact of different cycling rates, boiler efficiencies, and excess air percentages on the results.

The reason for looking at the results with a number of different variations for the inputs is that I did not know certain things exactly.  For instance, as we discussed, the boiler flue temperatures we were seeing implied a range for the combustion efficiency and excess air, not a specific value.

Without the results of a combustion efficiency test to narrow things down, I wanted to bracket the savings range that was possible vs. identify a specific number.  This is not an unusual situation when you are doing energy calculations from field data, given how many variables there are to deal with in a building, its systems, and the climate it exists in.

What follows will illustrate the steps I went through for my first pass, which:

  • Capped the short cycle rate at 7.1 per hour;  this was an arbitrary reduction of the maximum observed cycling rate, meaning I was sure that during hot weather there would be at least this many short cycles in an hour.  The data we gathered indicates there could be twice as many if both boilers fired the way the East boiler fires when it short cycles.
  • Set the boiler efficiency at 72.6%; this was the low end of the estimated efficiency range based on the flue temperature.  I discuss that in detail in the preceding post under the Flue Gas Temperature;  Another Clue Pointing to Savings Potential topic.
  • Set the excess air at 9.5%;  this is the ideal excess air rate, so it likely represents one end of the spectrum.

After I had worked out the bugs in my first calculation, I made copies of the spreadsheet and adjusted the factors listed above to represent other possible conditions and thus, established the range of savings I illustrated at the end of the previous post.

Back to Contents

Turning the TMY Date and Time into an Excel Date and Time Value

The TMY data I used had the month, day, and hour in separate columns.  I needed to combine them into one number that Excel could work with, as I discussed previously.  There are a number of ways to go about doing this, and I have done a number of blog posts illustrating some of them.  In fact, there is a video clip associated with the second post in the series about using scatter plots that shows the steps interactively as I go about doing the process vs. screen shots.  So, I will not go into a lot of detail here other than to briefly describe the particular process I used for this particular TMY file.

In the screen shot below, the date and time information that was provided in the TMY3 file is in the red columns.  My first step was to assign an hour of the day to each row, which I did by simply numbering them from 1 to 24 repeatedly, as shown by the highlighted formula in the first orange column.


This is a fairly safe approach for the TMY data since you know there is a value for each hour, meaning that each row is guaranteed to be an hour later than the preceding one.  But if the data was from a logger or a trend file, I would probably have done this a different way because there can be “hiccups” in the data due to power outages, etc., and assuming each data point is displaced by the same value in time from the preceding one could be a bad assumption.

Next, I built a text string for the month and day of the month.  Since the TMY file is not for a specific year, I simply added 2015 to the string to make it work out when I plotted my chart.  This is what is going on in the second orange column.


The third orange column does a similar thing to create the time part of the text string I need.


Finally, in the fourth orange column, I combine the date and time text strings to make an Excel date and time value …


… including formatting the column to display it as a date and time vs. a number.
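The same date/time assembly is easy to express in code.  Here is a Python sketch;  the 1-to-24 hour-ending convention and the arbitrary 2015 year follow the discussion above:

```python
from datetime import datetime, timedelta

def tmy_row_to_datetime(month, day, hour, year=2015):
    # TMY3 hours run 1-24 and are "hour ending," so hour 24 rolls over to
    # midnight of the next day; the year is arbitrary for TMY data and is
    # set to 2015 here only so the chart plots sensibly.
    return datetime(year, month, day) + timedelta(hours=hour)

print(tmy_row_to_datetime(3, 22, 14))   # 2015-03-22 14:00:00
print(tmy_row_to_datetime(12, 31, 24))  # rolls over to 2016-01-01 00:00:00
```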


Back to Contents

Calculating the Boiler Cycles per Hour Based on the Outdoor Temperature

The yellow cells in my spreadsheet are the data fields that came from the TMY3 file.  The only one I am using in this particular calculation is the outdoor temperature in the first column.  But I kept them all in case I wanted to use the data for something else.

I calculated the number of boiler cycles per hour based on the relationship I developed between outdoor temperature and boiler cycling rate in Step 4 as illustrated in the highlighted cell in the green column.


But, you will notice that I limit the maximum number of cycles to the value in cell H6, 7.1 in this case.  That is because the equation could generate an unrealistic number of cycles at some point.   The absolute limit on how short a cycle could be is the sum of the durations of all of the events in the firing controller.

In other words, if the boiler shut down immediately after the main flame was proven and then started immediately after the end of the post fire purge, that would be the fastest chain of events that could happen because they are constrained by the firing controller settings.  If you look at it that way, you come up with something like this.


While I actually have seen equipment short cycling that much, I did not see that happening on this site on a fairly warm day.  So I decided to use an alternative approach that set the cycle time to what would happen if the West boiler fired for as short a period of time as the East boiler.


Even that seemed like it might be a little fast, so I decided I would do the calculation for a range and let this rate be one of the upper limits and an arbitrarily selected rate of 7.1 cycles per hour be the lower limit.
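The green-column logic boils down to a cycles-per-hour function capped at the cell H6 value.  A sketch follows; the linear fit coefficients are placeholders, NOT the actual Step 4 regression from the post:

```python
def cycles_per_hour(oat_f, cap=7.1):
    # Hypothetical linear fit of cycles/hr vs. outdoor air temperature --
    # not the post's actual Step 4 coefficients -- capped the way the
    # spreadsheet formula is capped by MIN(..., $H$6).
    predicted = 0.12 * oat_f - 1.0
    return min(max(predicted, 0.0), cap)

print(round(cycles_per_hour(30), 2))  # from the (made-up) fit
print(cycles_per_hour(90))            # capped at 7.1
```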

Back to Contents

Calculating the Energy Loss for Each Purge Cycle

The blue columns are where I develop the energy loss for each part of the purge cycle and then total them up.  I started writing one big formula that strung all of the equations in the table at the end of step 6 together into one cell.   But when I made a typo, it was really hard to find and I decided it would be easier to do a column for each purge phase and then just add them up.

That means that the formulas in the second, third, and fourth blue column are very similar to the formula in the first column, which is illustrated below.


The only differences are the parameters used for flow rate, cycle length, and flue gas temperature during the cycle, which are retrieved from a table on the next tab over (the Purge Loss 9.5 Pct Excess Air tab) …


… which is just the working table behind the tables I presented at the end of Step 6.

The last blue column simply adds up the others to provide the total loss for one purge cycle were it to occur during the hour under consideration.


Note that the variation in outdoor air temperature is the only reason this total varies from row to row.

Back to Contents

Assigning Hourly Purge Losses to Each Boiler

The purple and gray columns develop the actual purge losses for each boiler for a given hour based on:

  • The number of boiler cycles per hour (the green column)
  • The energy that would go up the flue for one complete purge cycle at the outdoor air temperature associated with the hour under consideration
  • An assumption regarding how cold it has to be for the lead boiler to stop cycling.

Thinking Through the Plant Response as the Load Changes

Let’s address that last bullet.  For this facility (and most facilities), the outdoor air temperature affects the load on the heating system, with colder temperatures increasing the load for a number of reasons, including envelope losses, colder make-up air, more infiltration, etc.

So let’s think about how the plant might respond as it loaded and unloaded, including ideal and less-than-ideal operation.  Once we understand that, we can compare how the plant we are working with currently seems to be reacting to load changes to what an ideal plant should do and base our purge loss calculations on that comparison.

The Plant at Full Load

In theory, on the design day, a perfectly sized boiler (with no safety factor) would just begin to run at steady state as the design condition was reached.   Similarly, for a plant with two boilers required at design conditions, both boilers would begin to run at steady state as the design condition was reached.

The Plant at Part Load – Option 1

As the load dropped off on the two-boiler plant, one possibility would be that both boilers would share the load and, as a result, would both begin to modulate away from their high firing rate towards their low firing rate.  Once they reached their low firing rate, they would start to cycle if the load dropped any further.

The cycling would occur because at low fire, the boilers would be putting more energy into the system than was leaving it due to the loads.  The boilers cannot reduce their energy output any further without cycling off due to the physical constraints of their design. So, the control system cycles them off, allows the hot water loop to cool down a bit, then cycles them back on to bring the loop back up to temperature.  For as long as the energy being taken out of the loop is less than what the boilers would put back in at low fire, the cycling pattern will continue.
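The energy balance behind that cycling can be expressed in a couple of lines; this sketch (with made-up numbers) shows how the fraction of each hour the boiler fires falls below 100% once the load drops under the low fire output:

```python
def firing_fraction(load_btuh, low_fire_output_btuh):
    """Fraction of the hour the boiler fires at low fire to match the load.

    A value of 1.0 means the boiler can run continuously; anything
    less means it has to cycle off for part of the time.
    """
    return min(1.0, load_btuh / low_fire_output_btuh)

# Hypothetical: a boiler with a 500 MBH low fire output serving a
# 300 MBH load would fire about 60% of the time and cycle off the rest.
fraction = firing_fraction(300_000, 500_000)
```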

As the load continued to drop, assuming equally sized boilers, at about 50% load, it should be possible for one boiler to carry the load and the second boiler to be shut down.  This would increase the load on the remaining boiler and potentially would shift it back to an operating state where it could modulate its capacity as needed to match the load.

But, if the load continued to drop, eventually the single boiler that remained on line would reach a condition where it could not reduce its capacity any further due to the turndown limitations of its burner and other physical constraints of its design.  At that point, it would begin to cycle again.

If things were really bad in terms of the staging adjustments, the control system may try to cycle the 2nd boiler during the time the first boiler is cycling.  Since at that point the load is not high enough to keep one boiler running full time, let alone two boilers, the 2nd boiler would likely cycle back off very quickly, perhaps doing little if any useful heating.   As you will recall, this is the pattern we are seeing in the boiler plant we have been analyzing.

The Plant at Part Load – Option 2

It’s important to remember that each boiler cycle represents purge losses and stress on the boiler metals due to thermal expansion and contraction.  And while the purge operations are necessary for safe boiler operation, ideally we would like to minimize them.  In the context of our discussion, if two boilers are cycling when it should be possible for one to handle the load, then it is likely that we have an opportunity.  The question we need to ask ourselves is: what would that sort of operation look like?

The answer to the question may be as simple as:

  1. Keeping the lag boiler off line until the lead boiler is no longer cycling and,
  2. If the boilers are operating together and begin to cycle, turn one of them off.  

This may require some fine tuning to optimize the process, especially if the boilers are different sizes and/or the lead boiler is swapped after so many hours of operation, etc.

It may also be possible to configure the piping connections to keep one boiler loaded while the other boiler experienced the load swings.  But if the system was not already piped to do that, such a modification may be cost prohibitive compared to an approach that could achieve the savings by adjustments to the staging of the boilers and their operating set points.

Determining When the Lead Boiler Begins to Run Steady State for the Current Scenario

There are a number of ways to figure out when the lead boiler in the current plant (which appeared to be the West Boiler since it had the longer operating cycle) would stop cycling.  By coincidence, we captured a period of time when the outdoor air temperature took a big swing while we were logging boiler flue temperatures, as was illustrated in the previous post.  If we take a look at that data, which I have reproduced below …

… we can see that it seems to happen when the outdoor temperature is in the range of 50°F.

If we didn’t happen to have data that captured the event, but had enough data to do a good regression, we could estimate the temperature by solving the trend line equation for the outdoor air temperature where the cycles per hour are equal to 1.  Here is what that looked like when I did it for the regression we used to correlate boiler cycles with outdoor temperature.
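If you wanted to do that math outside of Excel, it is just an ordinary least-squares fit solved for the temperature at one cycle per hour.  A sketch with hypothetical data (substitute your own logged values):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - m * mean_x
    return m, b

# Hypothetical logged data: outdoor temp (°F) vs. boiler cycles per hour
oat = [52, 55, 58, 62, 65, 68]
cycles = [1.1, 2.0, 2.9, 4.1, 5.0, 5.9]

m, b = linear_fit(oat, cycles)
# Solve cycles = m*oat + b for the outdoor temp where cycles == 1
oat_at_one_cycle = (1.0 - b) / m
```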


Based on these assessments and the other data and field observations, I decided to assume that for the plant as currently configured:

  • Below 52°F, the West Boiler ran steady state (no purge cycles) and the East Boiler cycled at whatever the calculated boiler cycling rate was.
  • Above 52°F, both the West Boiler and East Boiler cycled at whatever the calculated boiler cycling rate was.

This is what the purple column …


… and gray column …


… do in my spreadsheet.  The white column totals them up.


(Note that to get the purple, gray, and white columns to show on my display without reducing the scaling of the spreadsheet, I have hidden the first three columns with the TMY data in them.)
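A minimal sketch of the rule those columns implement, using my 52°F assumption (the function and its inputs are hypothetical illustrations, not the actual spreadsheet formulas):

```python
STEADY_STATE_TEMP_F = 52.0  # assumed temp below which the lead boiler stops cycling

def hourly_purge_loss(oat_f, cycles_per_hour, loss_per_cycle_btu):
    """Return (west_loss, east_loss) purge losses for one hour, in Btu.

    Below the threshold the West (lead) boiler runs steady state, so
    only the East boiler cycles; above it, both boilers cycle.
    """
    east = cycles_per_hour * loss_per_cycle_btu
    west = 0.0 if oat_f < STEADY_STATE_TEMP_F else cycles_per_hour * loss_per_cycle_btu
    return west, east
```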

To get the total purge cost for the year, I added up the purge cost for each hour in the white column and did a little basic math.  This table illustrates the results for the plant assuming:

  • A maximum short cycle rate of 7.1 cycles per hour,
  • A boiler efficiency of 72.6%, and
  • Excess air at 9.5%.
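Converting the annual Btu total from the white column into a cost is then just unit conversion; here is a sketch with a hypothetical gas rate (the loss is divided by boiler efficiency because every Btu lost up the stack has to be made up by burning extra fuel):

```python
def annual_purge_cost(hourly_losses_btu, boiler_efficiency, dollars_per_therm):
    """Annual cost of purge losses for a year of hourly loss values."""
    total_btu = sum(hourly_losses_btu)
    therms = total_btu / boiler_efficiency / 100_000  # 100,000 Btu per therm
    return therms * dollars_per_therm

# Hypothetical: 8,760 hourly losses of 25,000 Btu each, 72.6% efficiency,
# and a made-up gas rate of $0.85 per therm
cost = annual_purge_cost([25_000] * 8760, 0.726, 0.85)
```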


Once I had the spreadsheet set up, I simply made copies of it and then changed the three assumptions I listed at the start of this section to establish the potential range of costs associated with the purge cycles for the plant in its current state.

I should also mention that while there were three boilers in the plant, I have assumed that the third boiler was not required.  It may have in fact been short cycling also.  But since it was off line for repairs, I had no meaningful way to understand how its operation would impact the cycling that I was observing and how much it would cycle.

What I could tell was that on a relatively cold day, two boilers seemed to be able to manage the load reasonably well.  If the third boiler had in fact operated in the past, then it likely made the short cycling problem worse, not better, meaning my calculations are conservative.

Back to Contents

Step 8 – Determine the Savings Associated with Reducing the Number of Purge Cycles

The final step in my process was to add columns to my spreadsheets that reflected reduced cycling for the East Boiler when I thought that was achievable.

Based on my observations and analysis of how the plant was running and how it might be possible to make it run, it seemed likely that we could eliminate the cycling of the East boiler once the West Boiler had started to cycle.

In other words, even though the East Boiler was cycling on while the West Boiler was cycling, it was only producing heat for less than a minute.  That means it was not contributing much to offsetting the load on the plant.  That, in turn, implies that the West Boiler could carry the load at that point.

I have reproduced my diagram of the boiler interactions below to give you a visual on this.


Notice how the red band (which is the firing cycle) for the East boiler is very narrow and how the West Boiler is off for a significant portion of the time (the white band).  During the time I was developing the data for the diagram the temperature outside was running from 55-65°F.

So bottom line, it seems like the West Boiler could handle the load under these conditions without the East Boiler.  And the projection that the West Boiler could handle the load until the temperature outside was in the low 50°F range also seems reasonable based on this data.

The columns I added to my spreadsheet modified the purge cycles for the East Boiler so there were no cycles if it was above 52°F outside.  In other words, I assumed the West Boiler would cycle above that temperature but the East Boiler would simply remain off line.  Below 52°F, I assumed the pattern that exists currently would persist.  Here is what that looked like for the East Boiler (the red column; note that I have hidden most of the date and time information in the orange columns to show this part of the calculation) …


… and the West Boiler (the 2nd orange column) …


… along with the total projected consumption for the new operating pattern (the green column).
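The modified East Boiler column boils down to one rule; a sketch, again using my assumed 52°F threshold:

```python
def proposed_east_cycles(oat_f, current_cycles_per_hour, threshold_f=52.0):
    """East Boiler cycles per hour under the proposed staging.

    Above the threshold the East Boiler stays off line (zero purge
    cycles); below it, the current cycling pattern is assumed to persist.
    """
    return 0.0 if oat_f > threshold_f else current_cycles_per_hour
```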


The following table summarizes the saving for this specific condition.


The next table summarizes the savings for all of the conditions I assessed.


The bottom line is that some fairly simple adjustments could potentially save at least $800 per year and may deliver almost $2,000 per year in savings, depending on the specifics of the current burner set-up.

Back to Contents

So there you have it; the steps in the procedure I used to assess the savings associated with eliminating unnecessary purge cycles in a boiler plant.  It probably seems like a lot to go through, and in some ways, it is.  But it actually takes a lot more time to illustrate it or read about it and understand it than it does to do it. 

And the reality is that most energy calculations require that you make a number of decisions similar to the ones I needed to make to perform this one, which inevitably take you into the details of how things work and how to make them work better.  But that is the key to success in Existing Building Commissioning and Ongoing Commissioning;  we want to understand how our buildings work and make them work better.  And I can’t help but believe that is a good thing for all of us.


David Sellers
Senior Engineer – Facility Dynamics Engineering


Retrocommissioning Findings: Reducing Boiler Purge Cycle Losses

Greetings after another break in the action in terms of my posting rate.  As those of you who follow this blog know, sometimes I get pretty busy, and since writing this is not exactly my “day job”, my posting rate suffers.

Having said that, this post started out as an e-mail to the students taking the current Existing Building Commissioning workshop class at the Pacific Energy Center and was intended to provide them with some follow-up information for the lab sessions we are currently working through.  Then I realized I should share the same information with a different lab class I am involved with.  That made me realize that the information may be of general interest, so I decided to spend the time on a blog post and point the students to it for the information I was trying to share.

The links below will jump you to the indicated topic.  The “Back to Contents” link at the end of each section will bring you back here.

If you read through this, you will discover that even though the answer is based on a fairly simple equation, obtaining the correct answer requires that the equation be applied properly, and there are a myriad of details that come up when you think through exactly what needs to be done.

This is typically the case with most energy calculations.  Thus, while illustrating the specific steps in a specific calculation process, this post also illustrates, in the general case, what you need to consider as you develop an energy calculation and how to deal with the issues that will come up.  It may be useful in that context if you are contemplating how to develop an energy calculation, even if it is not one about boiler purge losses.


Developing the post caused me to realize I needed to do a separate post on Aliasing, so I just put that up to support this.  All of that means these posts are out of sequence in the context of the series I was working on that was focusing on economizer analysis via scatter plots.   But I will be returning to that soon, I hope.  Meanwhile, in this post, we will look at a potential “low cost/no cost” improvement that might be made by fine tuning the sequencing of the boilers in an existing building heating hot water system.

Back to Contents

Background Information

One of the classes I teach is using a hotel in Columbus, Ohio as a living lab to teach Directors of Engineering, Chief Engineers, and Engineering Technicians retrocommissioning field and analysis techniques.   The class is very similar to the Pacific Energy Center Existing Building Commissioning Workshop series but is delivered in a concentrated, three week form.

The facility in question is a 485,000 square foot, 408 guest room, 22 story high-rise hotel that was originally built in the 1960s and then went through a renovation cycle in 1996 during which the current mechanical systems were installed.  One of those systems is a heating hot water central plant that serves the building’s preheat, reheat, fan coil, finned tube radiation, and domestic hot water loads.  Here is a draft of the system diagram I have been working on to give you a sense of what the system looks like.



The diagram was created in PowerPoint and this link takes you to the PowerPoint file in case you want to look at it in detail or grab some of the symbols for your own system diagrams.

Back to Contents

Utility Consumption Patterns; My First Clue of an Opportunity

Even before I had gone on site, I was planning to spend some of my field time and logging capabilities on the heating hot water system.    One reason was simply the age of the equipment;  older equipment can often benefit from some fine tuning if nobody has been paying attention to it for a while.

The obvious tuning opportunity is making sure that the combustion efficiency of the boilers is optimized. A less obvious tuning opportunity is related to minimizing boiler cycling, which will minimize the parasitic losses associated with the pre-fire and post-fire purge that occurs for most large boilers.  More on that to follow.

My point here is that tuning opportunities are perfect RCx opportunities in terms of bang for the buck.  So one of the items on my list for field investigation was to see if I could determine how well the boilers were tuned.

Another reason that I wanted to focus on the system was that it was the primary consumer of gas on the site.  The average daily consumption analysis I performed with the California Commissioning Collaborative Utility Consumption Analysis Tool (UCAT) revealed a relatively high baseline during the summer months.  Those are months when you would not expect a system to need to deliver much actual heating energy (i.e. offset losses through the envelope).


Notice how the average consumption in the summer months is about 40-50% of the peak consumption in the winter months, even though the need for heat, as indicated by the heating degree day data, does not exist.   Note also that the consumption is independent of occupancy, something that is not always true for a hotel due to the domestic hot water loads.

In addition, the thermal benchmark was not particularly good relative to other lodging type facilities in the United States.


If you are wondering where the data in the chart came from, I generated it using the Building Performance Database, which is a free, online tool.

Back to Contents

Equipment Room Sounds, An On-site Clue

When I went into the facility for the first time, the Chief engineer gave me a quick orientation tour of the mechanical spaces and then left me to further explore them while he ran off to respond to a radio call for assistance with an HVAC problem in one of the meeting rooms.

The central plants were located on the 22nd floor, so I headed up there and started working my way through the mechanical room, observing the condition of things, documenting nameplate data, and taking pictures of things that captured my attention.  At this point in the scoping process, I am typically just “following my nose” and getting a sense of the plant and how it is configured.

That means that when I start, I may not have any particular focus and am just letting the clues I see lead me to things.  For most of us, the things we are seeing probably catch our eye more than the sounds we are hearing.  But the background noises in a mechanical room can also lead you to important information once they penetrate your consciousness.

Such was the case for me as I worked through the mechanical room;  at one point, it dawned on me that I was hearing a repetitive sound of some sort, not particularly loud or alarming, but definitely some sort of change in pitch, almost a shriek of sorts, that had a distinct pattern to it.

You may be able to get a sense of what that might be like by playing the video clip below while reading further into the post rather than watching it.   (I’ll send you back to watch the video clip later in the post to support the discussion at that point.)

But for now, by playing it while reading on, you will have the equipment room noise in the background but your focus will be on what you are reading, which may give you a sense of the mindset I was in at the time I realized that the sound was a clue.

The “Light Bulb” Comes On

Right before the light-bulb came on, I had discovered the door to the cooling tower area and gone out to see them.  As I walked back into the mechanical room, I happened to open the door just as the pitch of the sound that I mentioned above changed.

It could have been the contrast between the sound level outside (fairly quiet, with only the splash of water over the tower fill) and the general noise level in the equipment room that made it penetrate my consciousness, but whatever the reason, I found myself thinking:

What in the heck was that?  Now that I think about it, that has been going on the entire time I have been in this room.

So, off I went to find it.  That was a bit challenging at first because the shriek was of fairly short duration and there were breaks of a minute or more between the sound events.  In fact, I realized that while there was a pattern to it, the pattern was not a regular pattern.  Specifically, there would be a number of closely spaced shrieks, then a long break and then a single shriek, then a long break and the pattern would repeat.

The Discovery


Because:

  • The boilers were in my line of sight,
  • Boilers will cycle at part load, and
  • It was a mild day and thus a likely part load day,

… I decided to take a look at them to see what they were doing.

As I walked up, it seemed like they were not doing much of anything.  But then my curiosity was rewarded by a click, then the sound of the combustion air fan starting, followed by the shriek I had been hearing.   I had found my mystery sound;  here is what I was looking at when I made my discovery.


Specifically, the actuator to the left of the air intake box varies the flow of combustion air and gas via a linkage system that moves an air damper and the gas valve as the actuator rotates in response to the firing control system.

At a certain point in the cycle for the boilers in this facility, towards the low fire end of the capacity range, the air flow through the damper system creates a shriek that goes away as the dampers open towards high fire.  That shriek was my mystery sound.

The Pace Picks Up

Having found what appeared to be the source of the mystery sound, I decided to watch the boiler go through its firing cycle to see what I could learn.

I had little time to rest on my laurels, as they say. Within moments of my initial discovery, I heard another click, another fan start, and another shriek;  a second boiler had staged on within moments of the start-up of the lead boiler.

In fact, the lead boiler had not even started to fire yet, which implied the second boiler may be starting earlier than it needed to, which implied a potential low-cost/no-cost improvement.  More on that in a moment.

This sudden and rapid turn of events led to a bunch of frantic note taking and iPhone stop-watch use on my part as I ran back and forth between the boilers trying to capture what was going on.

Ultimately, I realized that a lot of the events were controlled by set points in the burner firing controller, which led me to shoot the video I included earlier in the post.  The idea was that I could document those events via the information in the controller display panel and then combine that with my field notes to come up with the boiler firing cycle.

Back to Contents

The Boiler Operating Sequence;  Evidence of an Opportunity

By working with all of my field notes and the video, I was able to construct the firing pattern for the boilers in the central plant at the time of my site visit, which is illustrated below.


The graph illustrates why I took the time to document the boiler cycling and interactions;  it turns out that I was on to something.

Specifically, the East Boiler was short cycling.   If you study the timeline above, you will notice that the West Boiler starts within a minute of the time that the East Boiler starts.  And, within seconds of the time that the West Boiler ignites its burner, the East Boiler cycles back off again.

That means that the East Boiler spends more time in its purge cycle than it does generating heat.  The fact that the West Boiler eventually cycles off and is off for a period of time indicates that the current load condition is less than the capacity of one boiler. In other words, it should be possible to meet the load for the current conditions with out even starting a second boiler.

Therein lies the savings opportunity.  To understand that, you need to understand a bit about what is really going on during the firing cycle for a typical boiler like the ones I was working with in Columbus.

A Typical Boiler Firing Cycle

If you study the graphic above, you will notice that there are 8 steps that the boiler goes through in a typical cycle.

  1. When the system supply temperature drops below set point, start the combustion air fan and modulate to full air flow without lighting the burner.
  2. Hold the full airflow rate for some predetermined period of time – in this case, 1 minute – to ensure that any unburned gas that might be present due to a leaking gas valve or residual gas from the previous cycle is removed (a.k.a. purged) from the combustion chamber.
  3. Modulate back to minimum air flow.
  4. Light the pilot at minimum air flow and verify ignition.
  5. Light the burner at minimum air flow and verify ignition.
  6. Modulate the gas and air flow as required to maintain good combustion efficiency while matching the load requirements.
  7. If the minimum firing rate exceeds the demand on the system, then shut down the burner.
  8. Keep the combustion air fan running for some predetermined period of time; in this case, 14 seconds, to purge any unburned fuel and the products of combustion from the boiler.
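The sequence above can be summarized as a simple table; the 1 minute pre-purge and 14 second post-purge are the values observed for these boilers, while the other durations are hypothetical:

```python
# Each entry: (step description, duration in seconds; None = runs as long
# as the load requires)
FIRING_CYCLE = [
    ("start combustion air fan, modulate to full air flow", 20),
    ("pre-fire purge at full air flow", 60),
    ("modulate back to minimum air flow", 15),
    ("light pilot at minimum air flow, verify ignition", 10),
    ("light burner at minimum air flow, verify ignition", 10),
    ("modulate gas and air flow to match the load", None),
    ("shut down burner when min firing rate exceeds demand", 0),
    ("post-fire purge", 14),
]

def purge_seconds(cycle):
    """Total time per cycle spent purging (fan on, burner off)."""
    return sum(d for name, d in cycle if "purge" in name and d)
```

With these numbers, each cycle spends 74 seconds purging, which is where the per-cycle loss comes from.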

The energy savings opportunity lies in the context of minimizing the number of purge cycles.

Taking a Look at the Inside of a Water Tube Boiler

As it turned out, one of the boilers at the hotel I was working at had been opened up for service.  So I took some pictures, which will let you see what the inside of a water tube boiler looks like if you have never seen one before.


During a purge cycle, the boiler combustion air fan is basically pumping equipment room air through a heat exchanger that is full of water at the system supply temperature (180 – 190°F in this case) on the other side of the heat transfer surface.  As a result, instead of warming up the water (which is what happens when you burn gas along with the combustion air), the water warms up the air.

That means that during the purge cycle, you are taking relatively cool air from inside the building (which came from outside the building at whatever the ambient temperature was), heating it in the boiler, and dumping it back outside the building.  In other words, the purge cycle removes energy from the water in the boiler vs. what normally happens (the boiler adds energy to the water).

The Energy Conservation Angle

It is important to recognize that purge cycles are very important.

Bottom line, they prevent boiler explosions due to lighting a flame with a combustion chamber full of a combustible, explosive mixture instead of air.  The following image illustrates what can happen if you do that and is provided courtesy of Metropolitan Engineering Consulting and Forensics.

The round thing in the picture above is what is left of the boiler after it tried to ignite its burner with a combustion chamber full of fuel and air at the appropriate mix for ignition.  Previously, there was a building surrounding it.  The building is the debris field lying around the remains of the boiler.

I think we can all agree that this sort of thing should be avoided if possible.

And that is exactly what the purge cycle does for us.  Specifically, it blows air through the boiler to make sure that when the burner is ignited, there is no residual fuel from the previous cycle or a leaking valve sitting inside the combustion chamber.

But if:

  • The boiler is oversized, and as a result, cycles more frequently than it would if it could come online and match the load without cycling off, or
  • If a second boiler, which is not really needed to match the load, comes on line and then drops back off after only operating briefly,

… then the system is spending more time purging than would be required.

In the current example, we have a boiler cycling on when it is not required.  That means that all of the time it is spending on unnecessary purge cycles is throwing energy away needlessly, as compared to what would happen if the boiler staging was adjusted to only bring on a second boiler when it was required to meet a load that could not be handled by the lead boiler operating at full fire.

So bottom line, purge cycles are a desirable thing in terms of operating a boiler safely (see picture above).  But, if your boilers are cycling more frequently than they might need to, then those extra purge cycles represent energy that could be saved if you could reduce the cycles to the minimum number required by the load profile.

Back to Contents

Purge Cycles and Low Cost/No Cost Improvements

If a boiler is short cycling because it is too large for the load it serves, then you probably are not going to be able to make low cost/no cost adjustments to reduce the cycle rate.  Rather, you are probably going to need to add a smaller boiler to the plant or create a thermal flywheel.

I should point out that sometimes, you can create a flywheel by leveraging the thermal capacity of the existing piping network vs. having to add a physical piece of hardware like a flywheel tank.  So if you can leverage the inherent flywheel in the system, then you have a relatively low cost approach open to you.

If you want an example of that, you can take a look at the presentation I did for NCBC in 2013, along with the related blog post.  The example was for a chilled water system but the same concepts would apply to a hot water system.

Having said that, for the hotel I was looking at, it was pretty clear that we had a boiler coming on when it did not need to come on, meaning purge cycles were happening that could be eliminated if we could improve the boiler staging.

The pattern I was observing occurred on a day when the outdoor air temperature ranged from 50-60°F.


The graph above is courtesy of the Weather Underground web site, which is a great resource if you are trying to look back at what was going on for the day you were on site at a particular location.

If you take a look at the bin weather data for Columbus, Ohio and assume that at a minimum, the cycling frequency would be the same if it was warmer outside, or maybe even increase, then the potential significance of the unnecessary purge cycles starts to become apparent.



The climate data suggests that “tweaking” what ever was controlling the boiler staging might eliminate the unnecessary cycle from the East Boiler for about 56% of the hours in the year.
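Schematically, that percentage is just the bin hours above the threshold divided by the hours in the year; the bin hours below are placeholders chosen to illustrate the arithmetic, not the actual Columbus bin data:

```python
# Hypothetical bin data: (bin midpoint °F, annual hours in that bin)
bins = [(32, 1200), (42, 1300), (52, 1360), (62, 2200), (72, 1900), (82, 800)]

threshold_f = 52
hours_above = sum(h for mid, h in bins if mid > threshold_f)
total_hours = sum(h for _, h in bins)   # should come to 8,760 for a full year
pct_above = hours_above / total_hours
```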

Back to Contents

Applying a Data Logger to Assess the Boiler Operating Cycle Over Time

In order to broaden my perspective, I decided to have one of our team members deploy a data logger to monitor the flue temperature for one of the boilers.  My theory was that the flue temperature would provide some insight into the various stages in the firing cycle in addition to providing insight into the cycle frequency.  But the trick there is to use a really fast logging interval.

The reason for the fast logging interval is that in the context of a boiler firing cycle, things can happen really fast.  For instance, if you study the graphic that I presented earlier, you will discover that some of the events have durations of 10-15 seconds. So, if I wanted to capture them, I needed to log faster than those events, otherwise, my data set could be compromised by aliasing.
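A rule of thumb from sampling theory is that you need at least two samples within the shortest event you care about, so the logging interval should be no more than half that event's duration:

```python
def max_logging_interval(shortest_event_s, safety_factor=2.0):
    """Longest logging interval that still resolves the shortest event.

    With events as short as 10 seconds in the firing cycle, this
    suggests logging every 5 seconds or faster to avoid aliasing.
    """
    return shortest_event_s / safety_factor
```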

It was at about this point in the development of this post that I realized I needed to do a separate post on aliasing to keep the current post from becoming even longer than it is.  So I will refer you to that post if you need to understand the details behind how I selected a logging interval for the boiler that is the focus of this post.

The graph below is the raw data from the logger that was deployed to monitor temperature in the boiler flue several weeks after I was on site.


Here is the outdoor temperature data for the same time period.


Clearly, the boiler cycling rate is related to the outdoor temperature.  But before I proceed, I need to stop for a minute to pay tribute to the logger that provided the flue temperature data for us.

Took a Lickin’ and Still Kept On Tickin’

When we deployed the logger, which was strapped to the flue of a boiler that was 22 stories up on a high rise in Columbus, Ohio, we did not fully appreciate how hot the outside of the flue would get.   Our initial installation looked like this and included placing the logger inside a plastic bag to protect it from the weather.


But when we returned to the site several months later, we found the logger blowing around in the 14°F breeze like a little kite, hanging on for dear life to its thermocouple lead which was stuck in the flue and  acting as the kite string.

My guess is that the situation evolved when the tape that had anchored the plastic bag and logger to the flue let go and the bag and logger were directly exposed to the effluent from the flue as the wind blew them around.   In hindsight, that probably happened pretty early on.  So, we figured the little logger was “toast” as they say, if for no other reason than its appearance.



Note that this logger (an Onset UX100-14M with a Type K Thermocouple input) should not look like a toasted marshmallow (first picture) and that it should be flat, not curved (second picture).

But, when we grabbed it to remove it and hit the little buttons, the little LCD came on.  And sure enough, when we plugged it in, it had data, and the data seemed to be viable and told us what we needed to know for our calculation.  So, as the old Timex commercials would say, it took a lickin’ and still kept on tickin’.  In general, that has been my experience with the Onset product line.  But this really stands out.

I will retire it since I am not sure how reliable it is at this point after all the trauma (I would be totally out of the picture if I went through what it did).  Plus, I am not sure what all of that did to the cold junction compensation, etc.   But I will not throw it out.  It certainly earned the right to relax on the shelf in my office after giving so valiantly to the energy saving cause.

Taking A Closer Look at the Boiler Cycles

If I export the raw data in the previous boiler flue temperature graph to Excel and focus on what the boiler cycle looks like during relatively warm weather, similar to the weather when I made my initial observation, I get a pattern like this.


If I focus in on one cycle, I discover that I can see all of the steps in the cycle via their signature in the flue gas temperature profile.


The graph above is just a different way of looking at the data in the bar graph earlier in the post where I discuss the burner operating sequence. Either way, the data behind the graphics allowed me to quantify the energy going up the flue during the purge cycle.

Back to Contents

Flue Gas Temperature: Another Clue Pointing to Savings Potential

Before I get into the purge cycle losses, it is worth pointing out that the flue gas temperature profile provided another clue into a potential optimization opportunity.  Specifically, the flue gas temperature is an indication of the efficiency of the combustion process.

To really nail it down, you need to know the percentage of oxygen or carbon monoxide in the flue gas.   But there definitely is a relationship, as can be seen from this table that was extracted from the Department of Energy tip sheet on optimizing combustion efficiency.


The idea behind the table is that if you know the % excess air or % oxygen in the flue gas (most burners target 10% – 15% excess air as a good compromise between peak efficiency and safe operation) and you know the temperature rise across the boiler flue (the difference between the incoming combustion air temperature and the outgoing flue gas temperature), then you can estimate the boiler efficiency.

In a field situation, I think you can work the table the other way to get a sense of the boiler’s actual combustion efficiency.  Specifically, if you enter the table with your observed combustion air temperature, the manufacturer’s stated boiler efficiency, and an assumed excess air percentage at the rating point based on best practice, you can get a sense of where the boiler might actually be operating in terms of combustion efficiency.

Applying this technique to the boiler I was looking at, if you look up the operating specifications ….


… you discover that the rated boiler efficiency is in the range of 82% (output Btu/hr divided by the input Btu/hr).

Using this information as a starting point, and assuming that the burner was set up for 9-10% excess air when it was rated, which would be best practice, I interpolated the DOE table data as shown below to estimate what the actual operating efficiency might be based on the steady state flue gas temperature that was achieved after the burner had fired for a while.


My analysis implied that a potential improvement in the 3-9% range might be attainable in terms of combustion efficiency if the burner was fine-tuned and/or the heat transfer surfaces were cleaned.  That’s a significant cost savings when you consider the annual boiler gas consumption and that the savings comes directly off of that bottom line.

So, we can add tuning the combustion process and perhaps opening up and inspecting the boiler heat transfer surfaces to our list of potential opportunities.

Back to Contents

Assessing the Purge Losses Using the Logger Data

One of the reasons for logging the flue temperature was to allow me to assess the boiler cycling rate and average efficiency per the discussion up to this point.   But I also hoped to use the data to develop an assessment of the purge losses and potential savings that could be achieved if I could optimize the boiler staging.

I should note that it would also be possible to identify the boiler cycling pattern by logging the burner amps, as illustrated towards the end of my post on Aliasing.  The benefits of monitoring the flue gas in this particular case were:

  1. We were out of CTs, so we had no way to log current without giving up on something else.
  2. It told us a bit about the combustion efficiency.

Ideally, I might have decided to log both temperature and burner current to have a semi-redundant data set.

Back to Contents

Trying Your Hand at the Purge Loss Calculation

In any case, I will close this post out by outlining the technique I used to assess the purge losses and potential savings that could be achieved by optimizing the boiler staging, without showing the details of the actual calculations.  That way, if you want to try your hand at it, you can.  In my next post, I will provide the details of the calculations in the spreadsheet I used so you can compare your results to mine (or just go there if you don’t want to try the calculation on your own first).

I have loaded a Comma Separated Value (CSV) version of the boiler flue temperature logger data at this link for you to use.  If you are not familiar with using that type of file, I have a blog post about working with that type of file and you will find it is very compatible with Excel.

I also included some climate data for Columbus, Ohio on the Google Drive since you will also need that information.  But if you want to pretend the boilers are in a different climate, I have a number of blog posts that describe how to find hourly weather data, bin data, etc. on the internet if you don’t already have that sort of data for the location you want to consider.

What follows are the steps in the process that I used to identify the purge losses for the existing system as it currently operates.

Step 1 – Associate an Outdoor Air Temperature with Each Boiler Flue Temperature Data Point

As we discussed, I used a fairly fast logging rate for the flue gas temperature to make sure I picked up the nuances of the cycle.  But I needed to coordinate that with weather data, specifically outdoor air temperature data, which had a much slower sampling rate.  I accomplished this using the VLOOKUP function in Excel.
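For anyone who would rather script the lookup than use Excel, here is a minimal Python sketch of the same approximate-match behavior that VLOOKUP provides; the timestamps and temperatures below are made up for illustration, not the actual logger data.

```python
from bisect import bisect_right

def lookup_oat(sample_time, weather_times, weather_temps):
    """Approximate-match lookup, like VLOOKUP with the last argument
    TRUE: return the outdoor temperature for the most recent weather
    timestamp at or before the sample time."""
    i = bisect_right(weather_times, sample_time) - 1
    return weather_temps[max(i, 0)]

# Hypothetical data: weather on the hour (seconds since the start of
# the data set), flue logger samples every 2 seconds
weather_times = [0, 3600, 7200]      # hourly timestamps
weather_temps = [14.0, 13.5, 13.0]   # outdoor air, degrees F
flue_sample_times = [2, 4, 3600, 7100]

oat = [lookup_oat(t, weather_times, weather_temps) for t in flue_sample_times]
print(oat)  # → [14.0, 14.0, 13.5, 13.5]
```

Like VLOOKUP against a sorted weather table, each fast logger sample simply picks up the most recent slower-moving weather reading.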

Step 2 – Detect the Changes in Temperature that Correlate with the Various Steps in the Boiler Cycle

As we observed previously, the steps in the boiler cycle showed up as sharp changes in the flue gas temperature profile.  I have reproduced that image below to make it easier for you to reference it for the current discussion.


For example, when the purge cycle starts, the large increase in air flow through the boiler created by the fan moving equipment room air through the boiler (vs. leakage through the closed intake damper with the fan off) causes a sudden drop in the flue temperature.  Similarly, the heat of the pilot igniting causes a jump in temperature.

We can use these sudden changes to create flags in a column of the spreadsheet associated with an event.  In terms of exactly how to go about doing that, one technique would be similar to what I describe in the blog post titled Assessing Steam Consumption with an Alarm Clock: Step 2 – Detecting a Pump Cycle.  But it is a bit trickier than that because the pump was either on or off; kind of a square-wave type signal.  In contrast, the boiler flue gas temperature signal is “noisier”.

For now, I will leave it up to you to figure out how to resolve that problem.  But I will show how I did it in the next post.
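To give you a nudge without giving the whole thing away, here is one possible approach sketched in Python (a hedged illustration with synthetic numbers, not necessarily the technique I used): smooth the signal with a short moving average to knock down the noise, then flag samples where the smoothed value jumps by more than a threshold.

```python
def flag_events(temps, window=3, threshold=5.0):
    """Flag indices where the smoothed flue temperature changes by more
    than `threshold` degrees between successive samples.  The moving
    average knocks down sensor noise so small wiggles are not flagged.
    The window size and threshold are assumptions to tune against real
    data, not values from the actual analysis."""
    smoothed = [sum(temps[max(0, i - window + 1): i + 1]) /
                len(temps[max(0, i - window + 1): i + 1])
                for i in range(len(temps))]
    return [i for i in range(1, len(smoothed))
            if abs(smoothed[i] - smoothed[i - 1]) > threshold]

# Synthetic example: steady around 95 F with noise, then the purge fan
# starts and the flue temperature drops sharply
temps = [95.1, 94.8, 95.3, 95.0, 76.0, 60.0, 58.5, 58.9]
print(flag_events(temps))  # → [4, 5, 6, 7]
```

A run of consecutive flags marks one event; taking the first index in each run gives you the event start.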

Incidentally, just so you know, there is more than one way to do this.  The approximate number of ways to do this is equal to the number of technical people who read this post raised to the power of 3 (because there are three boilers in the central plant; I needed something to tie the three to).

Step 3 – Determine the Cycle Length for Each Logged Cycle and the Number of Cycles per Hour

Once you have identified the start and stop point for a firing cycle, you can determine the length of each cycle.  If you divide 60 minutes by the cycle length, you end up with the number of cycles per hour.  I did this because I was anticipating that the cycles per hour would vary with outdoor temperature, so I wanted to be able to establish that relationship.
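The arithmetic for this step is simple enough to show without spoiling anything; the timestamps below are hypothetical.

```python
# Cycle start and stop from a flagged pair of events (hypothetical
# timestamps, expressed in minutes from the start of the data set)
cycle_start_min = 100.0
cycle_end_min = 120.0

cycle_length_min = cycle_end_min - cycle_start_min   # length of one cycle
cycles_per_hour = 60.0 / cycle_length_min            # 60 minutes / cycle length

print(cycle_length_min, cycles_per_hour)  # → 20.0 3.0
```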

Step 4 – Develop a Relationship Between Outdoor Air Temperature and Boiler Cycles

At this point, I was able to set up a relationship between the outdoor air temperature and the number of boiler cycles per hour.  I did this by filtering my data and then making a scatter plot of Cycles per Hour vs. Outdoor Temperature, which gave me this chart.


Then, I used Excel’s trend line feature to develop a curve-fit for the data points I had and a mathematical relationship that would let me calculate the number of cycles per hour based on outdoor temperature.


If you are not familiar with how to apply a trend line to a data set, I actually show you how to do that in a little video clip that is included as a part of the blog post titled Using Scatter Plots to Assess Building Performance–Part 2.
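If you would rather do the curve fit in code than with Excel’s trend line feature, numpy’s polyfit performs the same least-squares fit; the data points below are invented for illustration, not taken from the chart above.

```python
import numpy as np

# Hypothetical (outdoor temp F, cycles per hour) pairs from a scatter plot
oat = np.array([30.0, 40.0, 50.0, 60.0, 70.0])
cph = np.array([1.0, 2.1, 2.9, 4.2, 5.0])

# Fit a first-order (straight line) trend: cycles/hr = m * OAT + b,
# the same thing Excel's linear trend line does
m, b = np.polyfit(oat, cph, 1)

# The fitted equation can now predict cycles per hour at any temperature
predicted = m * 55.0 + b
print(round(m, 3), round(b, 3), round(predicted, 2))
```

Excel will also fit polynomial, exponential, and other trend line forms; polyfit with a higher degree covers the polynomial cases.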

If you think about how real boilers might work, you will probably conclude that you need to put some caps on the cycling rate when you do your analysis, at least that is what I concluded.   I’ll let you think about that and draw your own conclusions for now.  But I will show you what I concluded and how I adapted the curve fit equation in the next post, where I show the details behind each of these steps.

So bottom line, our 5 day data set can be used to predict about how many cycles the boilers make in a year, which will let us figure out the purge losses for the year as well as the potential savings if we can eliminate the unnecessary cycles.

Step 5 – Determine the Combustion Air Flow Rate

The number we are really after is how much energy a purge cycle costs, given that we are taking combustion air from outdoors, heating it up by blowing it through an inactive boiler that is full of hot water, and then tossing it back outside again.  In other words, we are sensibly heating the air, and there is a common HVAC equation that lets us calculate the sensible heat that has been added to or removed from an air stream based on the flow rate and temperature rise.


To apply the equation, we can figure out the temperature rise by using hourly outdoor air temperature data and our logger data as was discussed under Step 1.  The missing piece of information at this point is the flow rate.
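For reference, the common form of that sensible heat equation, sketched in Python with hypothetical numbers (the 1.08 factor assumes roughly standard air conditions):

```python
def sensible_heat_btu_hr(cfm, delta_t_f):
    """Q = 1.08 x CFM x dT: sensible heat picked up by an air stream at
    roughly standard air conditions.  The 1.08 rolls together 60 min/hr,
    an air density of about 0.075 lb/cu ft, and a specific heat of about
    0.24 Btu/lb-F."""
    return 1.08 * cfm * delta_t_f

# Hypothetical purge: 1,000 cfm leaving the flue 150 F warmer than it
# entered the boiler room
q = sensible_heat_btu_hr(1000, 150)
print(q)  # → 162000.0
```

Multiply that rate by the duration of the purge to get the energy thrown away per cycle.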

The combustion air fan needs to provide the air flow required for complete combustion during high fire.  So there is a clue.  During a purge cycle, the fan provides this air flow rate without burning fuel.  So if we could come up with the combustion air flow rate, we would have a pretty good sense of what the air flow rate was during a purge cycle.

I will let you think through how to come up with the combustion air flow, and you can do it without a field measurement with a bit of theoretical knowledge.  And, there is a clue in the information we looked at previously.  I will show you how I did it in the next blog post.

Step 6 – Determine the Purge Air Flow  Rate for Each Portion of the Purge Cycle

Once you make it through Step 5, you will have a pretty good sense of the peak air flow that will be produced by the combustion air fan.  The problem is that if you study the boiler cycle chart (reproduced below), you will notice that for some of the pre-purge cycle, the combustion air damper is modulating open or closed, meaning the flow rate is varying from minimum to maximum and back again as the combustion air damper modulates.


That means that the combustion air flow rate we estimated likely only applies to the full airflow portion of the cycle.  For the other portions of the pre and post purge cycle, we will need to come up with a different air flow rate.

I will let you contemplate how to do that for now and also, how to turn the cycling rate into energy, but will show you how I did it in the next blog post.
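As a hint at one possible simplification (an assumption for illustration, not necessarily what I did): if the damper strokes roughly linearly between its limits, the average flow over the modulating portion of the cycle is about the midpoint of the minimum and maximum flows.

```python
def avg_modulating_flow(min_cfm, max_cfm):
    """If the combustion air damper strokes roughly linearly from
    minimum to maximum (an assumption, not a measurement), the average
    flow over the modulating portion of the purge is the midpoint of
    the two endpoint flows."""
    return (min_cfm + max_cfm) / 2

# Hypothetical fan: 200 cfm at minimum damper, 1,000 cfm wide open
print(avg_modulating_flow(200, 1000))  # → 600.0
```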

Step 7 –  Determine the Total Purge Losses for a Year Given the Current Operating Pattern

The next step in the process is to put two of the things developed up to this point together and come up with the overall purge cycle cost for a year of operation.  Specifically, you need to:

  • Take the relationship between boiler cycles and outdoor air temperature that was developed in Step 4, and
  • Combine it with the Energy cost per purge cycle information developed in Step 6, and
  • Combine it with some form of climate data like a Typical Meteorological Year (TMY) file, bin weather data, or hourly weather data for a recent year.

The result should be the number of purge cycles that would happen for the year represented by your climate data and the energy loss that is associated with them.

That information can be converted to dollars, which tells you what the purge losses associated with the current boiler cycling pattern are costing.
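The bullets above can be sketched as a small calculation; every number here is hypothetical, intended only to show the structure of the bin-data roll-up.

```python
def annual_purge_loss_btu(bins, cycles_per_hour, btu_per_purge):
    """Sum purge losses over bin weather data.  `bins` is a list of
    (bin temperature F, hours per year in that bin); `cycles_per_hour`
    is the curve fit relationship from Step 4; `btu_per_purge` comes
    from the Step 5/6 air flow and temperature rise work.  All of the
    numbers below are hypothetical, not the actual plant."""
    return sum(hours * cycles_per_hour(t) * btu_per_purge
               for t, hours in bins)

# Hypothetical straight-line curve fit (clamped at zero) and bin data
fit = lambda oat_f: max(0.0, 0.1 * oat_f - 2.0)   # cycles per hour
bins = [(35, 500), (45, 800), (55, 600)]          # (bin F, hrs/yr)
loss = annual_purge_loss_btu(bins, fit, 5000.0)   # 5,000 Btu per purge
print(loss)
```

Dividing the result by the heating value of the fuel and the boiler efficiency, then multiplying by the gas rate, converts the loss to dollars.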

Step 8 – Determine the Savings Associated with Reducing the Number of Purge Cycles

If you take the calculation you developed for Step 7 and adjust it to reflect how you think you can reduce unnecessary boiler cycles, the result will reflect the purge costs (which are necessary at some level if you don’t want to have boiler explosions) for an optimized system.  And, the difference between the two is the potential savings.

I will let you contemplate what the optimized firing cycle pattern might look like, but in the next post, I will share how I visualized it.   The table below presents my results, and maybe a clue about how to think about all of this.


The bottom line is that some fairly simple adjustments could potentially save at least $800 per year and may deliver almost $2,000 per year in savings depending on the specifics of the current burner set-up.

Back to Contents

Hopefully, all of this gives you some insight into how to go about assessing the savings associated with what would likely be a relatively low-cost/no-cost adjustment to the boiler control algorithm for a central hot water plant.

And in the broader sense, I hope it shows you how to go about thinking about an energy savings calculation and how something that appears fairly simple can have a lot of nuances associated with it when you get into the details.

But, as they say, God is in the details (or the devil, depending on your perspective).  And from my perspective so is the fun.


David Sellers
Senior Engineer – Facility Dynamics Engineering

Posted in Data Logging, Excel Techniques, HVAC Calculations, HVAC Fundamentals, Operations and Maintenance, Retrocommissioning Findings

Aliasing and Other Factors Affecting the Accuracy of Field Data

One of the challenges we face when we work with building systems is capturing enough sufficiently accurate data to paint a true picture of what is going on.  This is harder than it sounds for a number of reasons, including aliasing, which is what I want to focus on in this post to support a subsequent post.  But before I do that, I thought I should highlight and revisit some of  the other factors that can come into play, many of which are discussed in more detail in previous posts I have done.

The links below will jump you to the indicated topic.  The “Back to Contents” link at the end of each section will bring you back here.

Absolute Sensor Calibration

The accuracy and calibration of our sensors relative to some sort of standard is an obvious potential source of error.  For example, here are the results of a test I did with a couple of my thermocouples using a dry-well calibrator.


Both of the sensors have identical accuracy specifications, but they vary significantly from each other when compared to the NIST traceable accuracy of the dry-well calibrator, even though they are within the limits of their accuracy specification.

Back to Contents

Relative Accuracy

For many of our assessments, the accuracy of one sensor relative to another sensor is more important than the absolute accuracy of the sensors because we will base our calculations on the difference between two different parameters in an operating system.   For example, we calculate the load on a coil based on the temperature difference:


Or, we enter a pump curve based on the pressure difference we measure across the pump (the difference between the red and black pointers on the gauge in the picture).


Taking the pump reading above with one gauge instead of two cancels out the gauge error.

In other words, if the gauge’s rated accuracy is 2% of full scale, and  it turns out that this particular gauge, when tested against a standard, reads 1.2% high across its entire span, then the pump suction and discharge pressure readings will be off by the same amount relative to the standard.  As a result, the difference between them will be an accurate measurement of the pressure difference across the pump.

Contrast this with what would happen if two different gauges were used, both of which were rated for 2% full scale accuracy or better.  If one gauge reads 1.2% high relative to a standard while the other reads 1.3% low relative to the same standard, the errors add no matter which gauge takes the high pressure and which takes the low pressure, and the measured difference could be off by as much as 2.5% (1.3% + 1.2%).  If instead both gauges happened to read high by those amounts, the errors would largely cancel, leaving only a 0.1% (1.3% – 1.2%) error in the difference.  In other words, depending on the particular pair of gauges, the error in the measured difference can range from nearly nothing to the sum of the two individual errors.
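A quick sketch of how those fixed gauge offsets propagate through the subtraction:

```python
def dp_error_pct(err_discharge, err_suction):
    """Error in the measured pressure difference (as % of full scale)
    when the discharge and suction readings carry the given gauge
    offsets.  The subtraction cancels whatever error the two readings
    share and keeps whatever they do not."""
    return err_discharge - err_suction

# One gauge reading 1.2% high, the other 1.3% low: the errors add
print(round(dp_error_pct(+1.2, -1.3), 1))  # → 2.5
# Both gauges reading high by nearly the same amount: the errors cancel
print(round(dp_error_pct(+1.3, +1.2), 1))  # → 0.1
# Same gauge on both ports (identical offset): the error vanishes
print(round(dp_error_pct(+1.2, +1.2), 1))  # → 0.0
```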

Back to Contents


Stratification

HVAC systems are prone to stratification of flow and temperature, especially large air handling systems.  Mixed air plenums are notorious for temperature and velocity stratification, as you can see from the test data below, which is the subject of a previous post about stratification in economizer mixing plenums.


And large coils will often show a temperature gradient across them that can vary with the load and the coil circuiting.  This picture, taken by Tim Scruby, another one of the Senior Engineers at FDE, on a site in Virginia, illustrates how poor water distribution in a coil at low flow rates can lead to a very significant temperature gradient across the coil.


For flowing fluids, the shape of the velocity profile can be profoundly impacted by obstructions upstream and downstream of the measuring location.  You can get a sense of this by watching the water flow in this video clip of a stream.  Notice how the rocks in the stream create waves and eddies both upstream and downstream of their location.

That means that it may require multiple sensing points distributed across the plenum or face of the coil to paint a representative picture of the actual temperature or flow profile that exists.

Back to Contents

Thermal Lags

Thermal lags are another thing that can cause us to be misled by our data.  In this case, the issue is not so much about accuracy as it is about what is going on inside the system at a given time relative to a measurement taken by an instrument that has mass between it and the system.


The mass can be in the form of a well that the sensor is in, which was the case in the example above. Or it may be the mass of the sensor itself as illustrated below.


Either way, mass can impact what we think is going on in a system vs. what is actually going on.

Back to Contents

Sensor Installation Issues

To provide accurate data, many sensors need to be installed in a manner that meets the requirements specified by their manufacturer.  For example, the pressure sensor in this video is position sensitive and needs to either be installed vertically or recalibrated in the installed orientation.

On a recent project, when a differential pressure sensor that was used to control the building’s relief fans was replaced with a sensor similar to the one in the picture, the new sensor was mounted horizontally instead of vertically to facilitate maintenance.  But it was not recalibrated in that position, and as a result, it “thought” the building was always positive, which caused the control system to operate the relief fans almost continuously.

Since the envelope leakage in the facility usually handled the relief requirements, the change in the relief fan operating profile was picked up by the facility occupants, mostly because the facility is an energy efficiency center and the technical team is much more engaged with the details of the HVAC system operation than your typical office building occupants.  But had they not noticed the problem, in addition to creating drafts, it would have increased the operating cost of the facility by about $800 – $1,000 annually.

Back to Contents

The Data Sensing Food Chain

Even if you properly address all of the items we have been discussing, there are still a lot of things between the sensors and actuators in the ducts and pipes and the person sitting at the operator work station.  This is illustrated in the slide below and discussed in my post titled 4-20 ma Current Loop Experiments – Thermal Mass Effects under the  Real World Implications topic.

Back to Contents


Aliasing

Aliasing, which is the topic I really wanted to focus on in this post, is also something that can have a major impact on the conclusions you reach from your data set, even if you have addressed all of the items on the preceding list.

Aliasing is what happens when the process you are measuring changes faster than the rate at which you are taking your samples.

Assuming you have good sensors, all you really know about a process when you are logging data is the information that is reported at the time when the sample was taken. For instance, in the example below, if we are sampling the chilled water valve command signal and discharge temperature signal once every 5 minutes, we know what is happening at the points in time marked by the yellow squares.

Excel (which is what I used to create the chart) has made assumptions about what happened between those points in time, using curve fitting techniques to create the purple and blue lines that connect the points.


If I were to increase my sampling rate to once a minute, it could turn out that all of the data points would lie on the purple and blue lines, as illustrated below with the red markers.


But it also could turn out that the actual wave form is totally different from what I was deriving from the 5 minute data simply because I was only capturing a few of the data points in the actual wave form.


So for me, aliasing means that the words “sampling time” and “too fast” are mutually exclusive terms until I know what is going on in a process.

Back to Contents

Putting a Number to the Sampling Time

If you buy into my premise in the preceding paragraph, then you would be inclined to simply set up any trending or data logging operation to gather data at the fastest available sampling rate.  But unfortunately, it’s not that simple.

Fast Sampling and Network Architecture can be Mutually Exclusive

There are commercial control systems out there that can sample a significant number of their data points once a second if they need to.  Typically, all of the controllers on these systems have access to a very fast, peer to peer network, meaning they can send or receive data packages quickly and do it on demand, whenever it is necessary to share something or know something.

But, there are also systems out there, especially legacy systems, where the controllers reside on a low speed, polled network.  That means that the rate of transmission for data can be orders of magnitude less than what I alluded to in the preceding paragraph and that some sort of network level device needs to initiate and broker the transaction.  For systems of this type, sampling all of the points in one air handling unit, let alone multiple units or the terminal units associated with the main air handling equipment could be a practical impossibility, potentially crashing the system if you were to try to set it up.

Data Loggers Bridge the Gap

One of the advantages of having some data loggers around even if you have a DDC system is that they typically are capable of sampling rates of once per second or even faster.  So, they can bridge the gap, allowing you to log data for a process at a rate that might not be possible if your only option was to use the DDC system.

But there is a limitation there too:  loggers have limited memory, and if you want a continuous data stream from them, you have to visit them and pull data before their memory is totally full.  The exception to this is some of the new wireless loggers that are emerging in the marketplace.  But these are not as common as the more traditional types, require a wireless network and related set-up activities and hardware, and are a bit more expensive.

So if you are using a logger to pick up something that you cannot trend fast enough with your DDC system, then you will be faced with the trade-off between memory and sampling rate.  I discuss this in more detail in a post I did a while back titled Data Logger Duration Times, which includes a tool that you can use to help you understand the trade-offs if you don’t happen to have a logger sitting around that you can plug in and program to get your answer.
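As a rough illustration of the trade-off (the memory figure below is a made-up round number, not a spec for any particular logger):

```python
def logging_duration_days(memory_samples, channels, sample_interval_s):
    """Rough logger run time before the memory fills: total sample
    slots divided across the active channels, times the interval
    between samples, converted from seconds to days."""
    samples_per_channel = memory_samples // channels
    return samples_per_channel * sample_interval_s / 86_400  # sec per day

# Hypothetical 500,000-sample logger, one channel, 2 second sampling
print(round(logging_duration_days(500_000, 1, 2), 1))  # → 11.6
# Same logger sampling once a minute runs for the better part of a year
print(round(logging_duration_days(500_000, 1, 60), 0))  # → 347.0
```

Halving the sampling interval halves the run time, which is why a fast sampling rate usually means more frequent site visits to offload data.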

Determining an Appropriate Sampling Rate

My general rule of thumb for sampling times on an HVAC process is that I need to sample things at least once a minute until I know what is going on.  I may slow that down once I am sure there is nothing going on that requires an even faster sampling rate.

But there are also situations where I know from the start I will need to sample faster than once a minute.  For example, if I am watching a refrigeration system and trying to detect things like the pump down cycle or the operation of hot gas bypass, I may want to sample as fast as once a second.

Another example of a situation where a rapid sampling time is important is an application where I am trying to paint an accurate picture of a chain of events, like the start-up sequence for a chiller or a boiler.  There are events in these sequences that can happen in a matter of seconds and if I sample too slowly, I might miss them.

A recent example of just such a situation occurred when I was trying to identify the cycle rate for a boiler that seemed to be short cycling.  I became aware of the problem while scoping out the hot water system and noticing that the boiler cycled on and off quite frequently.  That caused me to pull out my iPhone and use its stopwatch and lap counter to get a sense of the timeline for the  operating sequence.  Here are the results of that effort.


Notice that the off cycle (Lap 1) is the only event that is much longer than a minute and that the pilot cycle (Lap 3) is 13 seconds long.  So if I really want to capture all of that, I need to log faster than the fastest event.

The Nyquist Sampling Theorem

Thanks to Mr. Nyquist, we can actually determine a sampling rate that will avoid aliasing for our data set if we know a bit about the disturbance we are trying to detect.


(And thanks to Steve Briggs of FDE for reminding me of the name, which I can never seem to remember).

The theorem suggests that we need to sample at a rate that is at least twice as fast as the rate of the disturbance we are trying to understand if we want to avoid aliasing.  While the basis of the theorem is in mathematics, the result is actually fairly intuitive if you think about it a bit.  The document I linked to in the previous sentence does a good job of illustrating that so I will let you take a look at it if you want a bit more clarity.

My point here is that if you know a bit about what you are looking for, you can get a pretty good sense of how fast you need to sample to capture things by applying Mr. Nyquist’s theory.
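A small numerical experiment makes the point; the 13 second event mimics the pilot cycle, but the rest of the numbers are arbitrary.

```python
def samples_hitting_event(event_start_s, event_len_s, interval_s, phase_s):
    """Count how many samples land inside an event `event_len_s`
    seconds long when sampling every `interval_s` seconds with the
    given phase offset, over one hour of logging.  Purely illustrative
    numbers, not the actual logger deployment."""
    hits = 0
    t = phase_s
    while t < 3600:  # one hour of sampling
        if event_start_s <= t < event_start_s + event_len_s:
            hits += 1
        t += interval_s
    return hits

# A 13 second pilot cycle starting at t = 100 s:
print(samples_hitting_event(100, 13, 2, 0))   # sampling every 2 s → 7
print(samples_hitting_event(100, 13, 30, 0))  # every 30 s → 0 (missed entirely)
```

Sampling faster than half the event duration guarantees at least one hit regardless of phase; sampling slower than the event can miss it completely, which is aliasing in a nutshell.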

Applying the Concept

If we use the boiler I mentioned above as an example, the fastest event we are trying to capture is the pilot cycle, which lasts about 13 seconds.  So, if we log data every thirteen seconds, we will certainly capture the event with at least one data point being taken at some point during the cycle.

But, what we would not know (if all we had was our data) would be if the cycle lasted a fraction of a second or 12.9999 seconds.  Even if our logger were perfectly synchronized with the start time of the pilot cycle and we took data just as it started and ended, we would not know for sure what happened between those two events, nor would we know what happened next.  Were there multiple events between our two data points that we just missed?  Did the cycle actually end at 14 seconds?

Part of the complexity in this particular case is that the pilot cycle is one event in a string of events.  So, while it lasts 13 seconds and we need to log faster than that to capture it, there are also other events that will happen before it repeats, some of which are very predictable due to the programming of the firing controller (like the pilot cycle), and some of which will vary with other metrics.  For example, the duration of the firing cycle will be driven by the load on the system.

That’s why my little field test using the iPhone was so valuable.  It gave me insight into the nature of the cycle that helped me determine the minimum logger sampling rate based on the shortest element in the sequence.

The Results of the Boiler Logging Effort

Given my field observations, our minimum sampling rate needed to be about once every 6 seconds (half of the 13 second pilot cycle) based on the Nyquist sampling theorem.  But to get a sharper picture of exactly when the pilot cycle started and ended, we decided to use once every 2 seconds as our sampling rate, at least for the first round of logging.  That would give us 5 or 6 data points during the pilot cycle and define its start and end time within 2 seconds of reality.

To capture the cycle, we decided to log the power draw of the boiler since each event in the cycle would change that.  Flue gas temperature might have been another option and I explore that in the next post.

For this example, when the pilot cycle was triggered, a solenoid valve would be energized, which would create a current spike due to the inrush and also shift the power consumption up slightly.  To make sure we could distinguish minor changes like the solenoid valve, we selected a CT for the logger whose rating was as close as possible to the maximum current draw we anticipated from the boiler.  Here is what our results looked like.


One thing we could see from the data set was that the boiler was obviously short cycling.  But we also could see the signature of every event in the cycle in the current waveform by virtue of our sampling rate and CT selection.


Back to Contents

Bottom Lines

Being able to identify the number of cycles and the length of the purge cycle was valuable information for us because the system is throwing energy up the flue during the purge cycle.  If we could come up with a way to reduce the number of cycles, we would also reduce the energy thrown away during the purge cycle.  The purge cycle is a necessary evil if you don’t like boiler explosions, so you can’t eliminate it; you can only minimize how often it needs to happen.

I will go into that more in my next post.  Meanwhile, I hope this has given you some insight into some of the things you need to consider when you are looking at a dataset for the first time and deciding if you trust it.  And in particular, I hope it has given you some insight into how to determine the sample time for trends and data logging.


David Sellers
Senior Engineer – Facility Dynamics Engineering

Posted in Data Logging, Excel Techniques, Retrocommissioning Findings