An Interesting Psychrometric Process

2022-10-04 – Author’s note:  In reviewing this post yesterday to answer a question that came up, I discovered that some of the psych chart images had their quality degraded for some reason.  So I have replaced them, and I believe everything is now legible.

In answering the question, I also realized that I needed to mention one additional consideration that you would want to address if you used the process discussed: the need for good mixing – which is always important – becomes even more important because of the lower set points used in this process.  So I added a paragraph about that when I re-posted.


I realize that for most normal people the word “interesting” could in no way whatsoever be associated with the words “psychrometric process”.  As I often tell folks,

When I say “interesting”  you can (and probably should) add the words “in a nerdy sort of way” to the end of my sentence.

That is the case here, so having given fair warning, I am going to proceed.

Some Background

As some of you likely know, I occasionally write for the Engineers Notebook column in the ASHRAE Journal, usually about twice a year.  Last April, I wrote a column titled The Perfect Economizer, which was actually the trigger for the blog post series I am currently working on (and lagging behind on).  In any case, the magazine received a letter to the editor in response to it from Mr. C. Mike Scofield, PE, ASHRAE Fellow, President of Conservation Mechanical Systems, Sebastopol, California.

In it, he presented an interesting system configuration and psychrometric process and wondered if I had seen it applied in Portland, which I had not.  My editor asked me if I would mind responding to Mike’s question, and I did (published in the September ASHRAE Journal).

If you don’t receive the Journal, you may want to refer to a copy of the letter and my response that I have posted along with the copy of the article on our Commissioning Resources website since the discussion sets the stage for what follows.

What follows is an edited version of the correspondence between Mike and myself subsequent to my initial published response.   That happened because I became curious about the details of the process he had plotted on the psych chart he provided and I wanted to understand it better.

Once I understood it, I realized that it was a very clever process, but also an interesting psychrometrics exercise because it makes you think outside the box a bit compared to the psychrometrics of a conventional system.  So, I asked Mike if he would mind co-authoring this blog post with me to go into the details of the process so folks could learn from our discussion and he graciously agreed.

This will get a bit long (as usual).  The links below will allow you to focus in on the specific content of interest.  Each section has a “Back to Contents” link that will return you to this point.

A Few Resources

The process Mike asked about in his correspondence involves evaporative cooling and humidification.  Evaporative cooling is a constant wet bulb process and you can simply accept that as being true.  But if you want to understand it in more detail, along with the related concept of adiabatic saturation, I wrote a blog post that explores evaporative cooling in detail, including adiabatic saturation and wet bulb temperature that you can refer to.

If you want to work along with what follows on a psych chart of your own, you can download a free version of an electronic psych chart that Ryan Stroupe of the Pacific Energy Center has made available from the link in this blog post. In addition to providing links to the chart the post illustrates how to plot basic psychrometric processes and also illustrates the features associated with upgrading the chart to the professional version. The process plot examples can also be used if you are working with a paper chart, you simply need to manually plot the points on paper vs. using the tool in the electronic chart to enter them.

Alternatively, I uploaded a blank .pdf chart to the page associated with the Perfect Economizer article on our Commissioning Resources website.  There is nothing wrong with using a paper chart.  Mike himself is a self-confessed paper chart and slide rule guy, and I did things that way myself for a long time.   In fact, I still carry my slide rule around, partly for nostalgia, partly to show folks who have never seen one, and if push comes to shove, no batteries required!

Slide Rule 01

But the electronic chart does have some benefits in terms of being easily reproducible in things like this blog post and other tools that it includes, like the ability to plot TMY data as bin data on the chart, which gives you a “visual” on the climate you are considering.

If you are just learning about psychrometrics and using the psych chart, you may also find the chapter on Psychrometrics in the Honeywell Gray Manual to be useful.  And there are a number of slides in the resource provided on the Useful HVAC Equations and Concepts page of the Commissioning Resources website that deal with the psych chart and basic psychrometric parameters.

<Return to Contents>

The System and Psych Chart

Here is the system AHU configuration and psych chart that Mike sent with his letter.

System and Chart

Mike’s written description of the illustration was as follows:

Has your team installed and tested a WB airside economizer using a high saturation efficiency (97% to 99% RH) rigid media adiabatic evaporative cooler/humidifier (AC/H) to mix building return air with outdoor air to produce a supply air dew point that ranges between 45°F DP to 55°F DP during cold and dry ambient conditions?

The psychrometric chart shows a VAV system at 50% fan turndown with an assumed minimum 25% outdoor air to meet code ventilation requirements. The high saturation efficiency, at fan turndown to 50% flow, ensures that the delivery DB temperature off the AC/H is within a fraction of 1°F of both the WB and DP temperatures at the saturation curve. A low-cost commercial-grade DB sensor may be used with acceptable accuracy in determining the delivery DP condition of the supply air.

<Return to Contents>

The Reason the System Might Be of Interest

Note that the final element in the system is the evaporative cooler/humidifier.  There are a number of reasons that a system of this type might be of interest currently.  But Mike brought it up because ASHRAE research suggests that …

… maintaining the space relative humidity between 40% and 60% decreases the bio-burden of infectious particles in the space and decreases the infectivity of many viruses in the air.

One place you can find this is in the ASHRAE Building Readiness information published by the ASHRAE Epidemic Task Force.  It is also discussed in the ASHRAE Position Document on Infectious Aerosols (see page 8).  And I suspect folks with a healthcare background were not surprised by this since maintaining humidity levels in that range in a health care environment has been a requirement for quite a while for the reason indicated.

But COVID has brought that to the forefront as something that might be considered more generally by designers.  And in that context, I suspect the system configuration Mike suggested may merit consideration, as long as due consideration is given to the application issues the committee mentions in the Journal’s May 2021 IEQ Applications column.  For instance:

  1. Is the building envelope suitable for an indoor environment with a higher than typical humidity level?  Or will condensation on surfaces or inside building assemblies become an issue?
  2. What will the water that is consumed cost?  This will likely vary significantly with the nature of the climate and the local rate structure.
  3. Related to item 2, does the utility offer a sewer charge credit for water that is supplied to the facility but not discharged to the sewer?  The sewer charges can be as much as or more than the water charges, so a credit of this type can make a big impact for evaporative processes like the one we are discussing.
  4. Also related to item 2, what will the parasitic losses associated with the added pressure drop in the system and the operation of the evaporative cooler pump cost? 
    • In addition to varying with climate and rate structure, the pressure drop loss will vary with the flow rate. For a constant volume system, this could be significant.  But,
    • For a variable volume system with a lot of part load hours, this may not be as big a factor as it seems due to the square law relationship between flow and pressure drop.
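The square law point in the last bullet can be sketched in a few lines of Python.  This is my own illustration of the math, not a calculation from Mike's letter: fan power to overcome a component's pressure drop scales with flow times pressure drop, and pressure drop itself scales with flow squared.

```python
# Parasitic fan power of a fixed component (like an evaporative cooler)
# at part flow.  Pressure drop follows a square law with flow, and
# power = flow * pressure drop, so parasitic power follows a cube law.

def parasitic_power_ratio(flow_fraction):
    """Parasitic fan power at part flow, as a fraction of design power."""
    pressure_drop_ratio = flow_fraction ** 2    # square law
    return flow_fraction * pressure_drop_ratio  # power = flow * dP

# At 50% flow, the cooler's added pressure drop costs only 12.5% of
# its design-flow fan power, which is why lots of part load hours
# soften this penalty for VAV systems.
print(parasitic_power_ratio(0.50))  # 0.125
```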

COVID and infection control issues aside, there are other reasons you might consider applying this approach.  When I did a quick survey of the company to see if anyone had seen the configuration Mike proposed, it turned out that we had.  But the applications were driven by the nature of the load and included automotive paint booths, server rooms, and museums.  That’s not to say the concept does not have merit for the reason Mike pointed out.  It just means that the folks I work with and I have not seen it applied for that reason (yet).

<Return to Contents>

Taking a Closer Look at the Process

Finally, the part you have all been waiting for.  To get started I want to clarify a few of the assumptions and details behind what Mike presented.

Process Analysis Assumptions and Details

There are a number of things you need to understand for the discussion of the process to make sense.  But if anyone is still actually reading this at this point, and if said person can hardly wait to read the process discussion and feels fairly comfortable with psychrometrics, then said person may want to skip this section and jump straight to the discussion of the process itself.

Having said that, the following paragraphs kind of lay a foundation for the discussion of the process.

The Line on Mike’s Chart is the Result of a Bunch of Processes, Not a Single Process

Probably the most important thing to recognize is that the heavier black line Mike drew on the psych chart was not one specific psychrometric process.  Rather, it is the locus of points representing the leaving conditions from the evaporative cooler that will be produced by a system configured and controlled as he proposed as the outdoor conditions varied.  I did not realize this initially, and it is an important point to recognize.

In the course of what follows, Mike and I identify specific points on this line for specific indoor and outdoor conditions.  The hope is that this will allow you to “connect the dots” and understand the locus of points that Mike presented, which is what it did for me.

<Return to Contents>

The Air Inside the Building Came from Outside the Building

In some ways, this is obvious.  But there is an implication to it that I want to highlight, that being that the lower limit on the moisture level in the building is most likely set by the ambient moisture level outside the building. 

In other words, most processes that occur in buildings add moisture to the air.  Since the air inside the building comes from outside, then the moisture added in the building will tend to raise the dew point and specific humidity of the air inside the building.

There can be exceptions to this.   For instance:

  • If the facility was hosting a desiccant manufacturers product showcase and all of the vendors had their wares on open display, then potentially, the moisture level inside could be reduced relative to the outside. Or, in a more realistic example,
  • For a facility that processed paper and stored the raw material in a warehouse maintained at a low temperature relative to the actively humidified, warmer process area: during cold, dry weather, when the raw material was brought in, it would tend to absorb moisture and lower the indoor humidity level.

But most of the time, building processes will add moisture to the air.  We can reflect this on the psych chart using a sensible heat ratio (SHR) line, which is the ratio of sensible (heat or temperature changing energy) added to the air  by the process occurring in the building relative to the total amount of energy added (both heat and moisture in the form of water vapor, the latter increasing the specific humidity). 

A SHR of 1.0 means there is no moisture being added to the air.  Increasing latent loads cause the SHR to drop away from 1.0.  The chart below illustrates several different sensible heat ratio lines plotted relative to a 72°F/50% RH space. 

SHR Example

So, for example, if an air handling system was delivering saturated 45°F air at its design flow rate to serve a design load condition for a space with a SHR of 0.9 and a set point of 72°F, then the resulting space condition would be 72°F, 42% RH.  If the SHR was 0.8, then the space condition would be 72°F, 46.8% RH.  The chart below illustrates these two processes.

SHR Example 2
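The SHR arithmetic behind those two examples can be sketched in Python.  This is a simplified illustration of mine, using the Magnus saturation-pressure correlation and the common IP moist-air enthalpy constants (0.240 Btu/lb·°F sensible, 1061 Btu/lb latent), so the results are approximate:

```python
import math

P_ATM = 101.325  # kPa, sea-level atmospheric pressure assumed

def p_sat_kpa(t_f):
    """Saturation pressure (kPa) via the Magnus correlation."""
    t_c = (t_f - 32.0) / 1.8
    return 0.61094 * math.exp(17.625 * t_c / (t_c + 243.04))

def hum_ratio(t_f, rh):
    """Humidity ratio (lb water / lb dry air) from dry bulb and RH."""
    p_v = rh * p_sat_kpa(t_f)
    return 0.622 * p_v / (P_ATM - p_v)

def space_rh(t_supply_f, t_space_f, shr):
    """Space RH when saturated supply air absorbs a load with this SHR."""
    w_supply = hum_ratio(t_supply_f, 1.0)           # saturated supply air
    dh_sensible = 0.240 * (t_space_f - t_supply_f)  # Btu/lb dry air
    dh_latent = dh_sensible / shr - dh_sensible     # latent share of load
    w_space = w_supply + dh_latent / 1061.0         # latent heat of vaporization
    p_v = w_space * P_ATM / (0.622 + w_space)
    return p_v / p_sat_kpa(t_space_f)

# 45°F saturated supply serving a 72°F space:
print(round(space_rh(45, 72, 0.9) * 100, 1))  # ~42% RH
print(round(space_rh(45, 72, 0.8) * 100, 1))  # ~47% RH
```

The results land right on the 42% and 46.8% RH points shown on the chart, within the tolerance of the correlation.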

The 45°F saturated air could be the result of any number of processes, including:

  • The leaving condition from an evaporative cooler, or
  • The leaving condition from an active cooling coil that was condensing, or
  • An air handler supplying 100% outdoor air on a foggy day.

<Return to Contents>

The Process Targets a Space Condition Window, not a Point

In the charts that follow, the trapezoid highlighted in orange represents the space conditions targeted by the process we will discuss, specifically:

  • 70-75°F dry bulb temperature
  • 40-60% relative humidity

The chart below contrasts the window targeted by the process we are discussing with the 2010 ASHRAE summer (red) and winter (blue) comfort zones.

Zones Chart

As you can see, the range we are discussing is a subset of the winter comfort zone, which is the season during which the process would be used.

While most designs target a specific point for calculation purposes, real processes operate over a range that is set by things like the tolerances on the design point and the accuracy of the control process.  In this case, the range allows the proposed process to be used over a fairly large range of climate conditions in the Portland area. 

If we narrowed the range down, either in terms of temperature or relative humidity, there would be fewer hours where we could use the process in the Portland climate, and vice versa.  I believe this will become apparent as we move through the details of our discussion.

<Return to Contents>

The Evaporative Cooler will Produce Near Saturated Air

Evaporative coolers are, to some extent, field deployments of adiabatic saturators.  For a true adiabatic saturator, at its exit, the leaving air is saturated, which means:

  1. The relative humidity is 100% and
  2. The dry bulb temperature, dew point temperature, and wet bulb temperature are identical numerical values.

To achieve this, among other things, a true adiabatic saturator needs to be infinitely long, which (I suspect) is one of the reasons you do not run into many of them out in the field.  For one thing, they would kind of get in the way. And for another, Owners and Architects – with some justification I might add – are somewhat opposed to infinitely long mechanical rooms.

One of the things that happens when you make your evaporative cooler less than infinitely long is that the air coming off of it is not 100% saturated.  But units can typically produce leaving air with dry bulb temperatures that approach the entering wet bulb temperature within 3-4°F under design conditions, with saturation efficiencies in the 80%–95% range depending on the specifics of the design.[i]
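The saturation efficiency relationship (detailed in footnote [i]) can be sketched as a one-line function.  The numbers in the example are illustrative values of mine, not from Mike's analysis:

```python
# Direct saturation efficiency measures how much of the approach from
# entering dry bulb down to entering wet bulb the cooler achieves:
#   efficiency = (db_in - db_out) / (db_in - wb_in)
# Rearranged, it predicts the leaving dry bulb temperature.

def leaving_db(entering_db, entering_wb, sat_efficiency):
    """Evaporative cooler leaving dry bulb for a given saturation efficiency."""
    return entering_db - sat_efficiency * (entering_db - entering_wb)

# A 90% efficient cooler with 60°F db / 45°F wb entering air:
print(leaving_db(60.0, 45.0, 0.90))  # 46.5°F, within 1.5°F of the wet bulb
# At 100% efficiency (a true adiabatic saturator), leaving db = entering wb:
print(leaving_db(60.0, 45.0, 1.00))  # 45.0°F
```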

If you reduce the flow and thus provide more time for the air in the evaporative cooler to be in contact with the media in the cooler, you can approach adiabatic saturation. Mike’s diagram assumed that would happen because he was modeling the application in a VAV system that was at 50% of its design flow and as a result, the saturation efficiency of the evaporative cooler would approach 100%.

The charts that follow make the same assumption for the purposes of illustration.  But a real system would generate leaving conditions that are very near but not on the saturation curve of the psych chart.  How close the leaving conditions got to saturation would depend on the efficiency of the evaporative cooler at the flow rate that existed at the time.  The approach to saturation will improve as the flow rate drops below the design value. 

<Return to Contents>

The Chilled and Hot Water Coils are Not Active

Mike’s analysis focused on outdoor conditions when neither preheat nor mechanical cooling would be required to achieve the targeted leaving air condition.   In other words:

  • The evaporative cooling process alone could deliver the desired leaving air temperature, which in the example, ranges from about 45°F to about 55°F.
  • The outdoor conditions are such that the system was never driven to minimum outdoor air when it was cold outside, which is when preheat would be required if the outdoor air temperature continued to drop without causing the evaporative cooler leaving air temperature to drop.

How many hours this encompasses will vary significantly with climate.  In particular, the metrics Mike cites were based on assumptions about applying the process in the Portland, Oregon climate and the analysis and charts that follow use the same assumption.

<Return to Contents>

A Brief Review of Mixing on a Psych Chart

To understand the discussion that we are leading to, it is important you understand how a mixing process shows up on a psych chart, in particular that:

  1. The mixed condition for two points on the chart will lie on a line that connects them and,
  2. The mixed point will be proportionally spaced between the two points in direct relationship to the percentage of the mass flow rate associated with each of the points.

This is illustrated below for a number of different mixing percentages, temperatures, and humidity levels.  Notice how the mixed temperature and its location relative to the two conditions being mixed is proportional to the outdoor air percentage and the two temperatures that are being mixed.

Mixing Example 25 50 75 Pct
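The two mixing rules above can be sketched numerically.  Strictly speaking the proportional spacing applies to humidity ratio and enthalpy (and to dry bulb temperature to a very good approximation); the 40°F/75°F pairing below is an illustrative choice of mine:

```python
# Mass-weighted mixing: the mixed property lies on the line between the
# two conditions, spaced in proportion to the mass flow fractions.

def mix(oa_fraction, oa_value, ra_value):
    """Mixed-air property (dry bulb, humidity ratio, or enthalpy)."""
    return oa_fraction * oa_value + (1.0 - oa_fraction) * ra_value

# 25%, 50%, and 75% outdoor air at 40°F blended with 75°F return air;
# the mixed temperature slides proportionally toward the OA condition:
print(mix(0.25, 40.0, 75.0))  # 66.25°F
print(mix(0.50, 40.0, 75.0))  # 57.5°F
print(mix(0.75, 40.0, 75.0))  # 48.75°F
```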

<Return to Contents>

The Mixing Dampers are Controlled by the Dry Bulb Temperature Leaving the Evaporative Cooler, Not the Mixed Air Temperature

This is really important because, as mentioned previously, for an evaporative cooling process, the leaving air is nearly saturated and as a result, measuring dry bulb temperature will also provide an indication of the wet bulb temperature and dew point temperature. 

If the air is saturated, they will be exactly the same.  If the air is near saturated, then they will be very close.  For example, if the saturation efficiency of the evaporative cooler was 95%, then the leaving wet bulb temperature would likely be within a degree or less of the leaving dry bulb temperature.

If you consider this for a minute, you will realize that for a given outdoor dry bulb temperature and a given evaporative cooler leaving air temperature set point, where the evaporative cooler leaving air dry bulb temperature is being used to control the mixing dampers:

  1. Because the air is nearly saturated, the mixed air dampers are also being controlled for a leaving wet bulb temperature that is nearly identical to the dry bulb temperature, and
  2. As a result of item 1, the mixed air dampers are also operating to maintain a fixed wet bulb temperature set point, and
  3. The amount of outdoor air brought in to the system will vary with the outdoor wet bulb temperature;  on a dry day, the system will bring in less outdoor air to achieve the required set point vs. what it will need to bring in on a moist day. 

This is illustrated in the chart below.  Note how the outdoor air percentage required to achieve the 45°F saturated leaving air dry bulb/wet bulb temperature varies with the outdoor conditions.

MAT Evap Cooler LAT Controlled
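The reason the outdoor air percentage varies with outdoor moisture can be sketched with an enthalpy balance.  Since a constant wet bulb line is nearly a constant enthalpy line, the required OA fraction follows directly; the enthalpy values below are illustrative approximations of mine (Btu/lb dry air), not numbers from Mike's chart:

```python
# Required OA mass fraction so the mix lands at a target enthalpy
# (a stand-in for the fixed evaporative cooler leaving wet bulb).

def oa_fraction_for_enthalpy(h_target, h_oa, h_ra):
    """OA mass fraction that blends return and outdoor air to the target."""
    return (h_ra - h_target) / (h_ra - h_oa)

h_target = 17.6   # ~ saturated 45°F air
h_ra = 26.4       # ~ 72°F / 50% RH return air

# On a cold, dry day (low OA enthalpy), less outdoor air is needed...
print(round(oa_fraction_for_enthalpy(h_target, h_oa=8.0, h_ra=h_ra), 2))   # 0.48
# ...than on a moister day with a higher OA enthalpy:
print(round(oa_fraction_for_enthalpy(h_target, h_oa=14.0, h_ra=h_ra), 2))  # 0.71
```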

The next chart illustrates what happens in a more conventional mixed air control process, where the mixing dampers are being controlled for a fixed mixed air dry bulb temperature.  Note how the outdoor air percentage does not change, even when the outdoor conditions change.

MAT Dry Bulb Controlled

<Return to Contents>

The Mixed Air Set Point is Lower than Typically Used

As you have probably observed, the 45°F supply temperature we are discussing is a lot cooler than we typically use in our systems, although you might see temperatures in this range for some special processes.[ii]

Generally speaking, running colder discharge temperatures than needed to satisfy the space dehumidification load will cost you energy when you are doing mechanical cooling. 

  1. For one thing, it will require lower refrigerant temperatures in the coils, which will tend to lower the efficiency of the compressors providing the refrigeration.
  2. For another, if the terminal equipment at its minimum flow rate provides more sensible cooling than needed, you will use unnecessary reheat compared to what would happen with warmer supply air temperatures.

But, if you are not using mechanical cooling, issue 1 enumerated above goes away.  That means that as long as a lower supply air temperature does not drive zones into a reheat mode, then for a variable air volume system, there could be a fan energy benefit associated with the lower supply temperature.

In other words, if a zone required 1,000 cfm of 55°F supply air to maintain a 72°F set point, it could also maintain that set point by using about 630 cfm of 45°F air.  So, as long as:

  1. The diffusers would perform with the cooler air, and
  2. The colder distribution temperatures did not result in condensation issues on the ductwork and related hardware, and
  3. None of the other zones on the system were driven into a reheat cycle when they would not have been driven into a reheat cycle with warmer supply air,

… then fan energy will be saved.
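The 1,000 cfm versus 630 cfm comparison above can be sketched as follows.  For the same sensible load, flow scales with the inverse of the supply-to-space temperature difference, and fan power scales roughly with the cube of flow per the fan affinity laws (a simplification that ignores changes in system static):

```python
# Flow required for the same sensible load at a different supply
# temperature, and the resulting (approximate) fan power ratio.

def flow_ratio(t_space, t_supply_new, t_supply_old):
    """New flow as a fraction of old flow for the same sensible load."""
    return (t_space - t_supply_old) / (t_space - t_supply_new)

ratio = flow_ratio(t_space=72.0, t_supply_new=45.0, t_supply_old=55.0)
print(round(1000 * ratio))   # 630 cfm of 45°F air replaces 1,000 cfm of 55°F air
print(round(ratio ** 3, 2))  # fan power roughly 25% of the 55°F case (cube law)
```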

For Mike’s idea, the colder supply temperature will translate to lower system flow rates.  This will tend to push the saturation efficiency of the evaporative cooler to higher values, which means using dry bulb temperature to control the process will provide satisfactory results without the added first and ongoing costs of some sort of humidity sensor.

<Return to Contents>

Good Mixing is Critical to Success

Achieving thorough mixing in a mixed air plenum is critical to success and is surprisingly hard to achieve.  Velocity and temperature stratification are very common, especially if you don’t pay attention to the details.  In fact, one of my current focuses on the blog is a series of posts looking at this topic.

Since a process using the approach we are discussing may use a mixed air temperature set point that is lower than typically encountered, as discussed in the preceding paragraph, ensuring that the mixed air plenum is designed to promote good mixing will become even more critical.  The most serious potential issue, of course, is a localized cold spot where temperatures could drop below freezing during extreme weather, even though the average mixed air temperature was well above freezing.

<Return to Contents>

The Process (Finally)

What follows is my transcription of the dialog between Mike and myself as we discussed the process he suggested.  At the end of it, he indicated that I had “nailed it”.  But if there are errors in the transcription that follows, they are totally on me.

For the discussion that follows, I have assumed a space SHR of 0.90.  But other SHRs (until you got pretty extreme in terms of space latent load and outside of what you would see for most commercial office buildings) would have similar results.

In general terms, since the system is controlling for the temperature of near saturated air leaving the evaporative cooler:

  • The mixing point will lie on the constant wet bulb temperature line associated with the set point. 
  • The blend of outdoor air and return air required to meet set point will vary as the outdoor conditions vary, causing the mixing point to move up and down the constant wet bulb line.
  • Once the outdoor wet bulb exceeds the set point, the system will be driven to 100% outdoor air, which will cause the discharge condition from the evaporative cooler to move up the saturation curve.
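The control behavior in the bullets above can be sketched as a simple function.  The name, the minimum OA value, and the linear wet bulb blending are simplifying assumptions of mine for illustration; a real system would close the loop with a PI controller on the evaporative cooler leaving dry bulb:

```python
# Outdoor air damper command (fraction, 0.0-1.0) for a system holding a
# fixed evaporative cooler leaving air temperature (~ wet bulb) set point.

def oa_damper_command(oa_wb, evap_lat_setpoint, ra_wb, min_oa=0.25):
    """OA fraction command given outdoor and return wet bulb temperatures."""
    if oa_wb >= evap_lat_setpoint:
        # Set point unreachable: go to 100% OA and let the leaving
        # condition ride up the saturation curve with the OA wet bulb.
        return 1.0
    # Otherwise blend so the mix lands on the set point wet bulb line
    # (wet bulb mixing treated as linear here for simplicity).
    fraction = (ra_wb - evap_lat_setpoint) / (ra_wb - oa_wb)
    return max(min_oa, min(fraction, 1.0))

# Cold, dry OA: dampers modulate; warm OA wet bulb: dampers wide open.
print(oa_damper_command(oa_wb=35.0, evap_lat_setpoint=45.0, ra_wb=60.0))  # 0.6
print(oa_damper_command(oa_wb=50.0, evap_lat_setpoint=45.0, ra_wb=60.0))  # 1.0
```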

The following paragraphs illustrate this in more detail. 

An Extreme Winter Portland Day

If we start with a somewhat extreme condition for Portland (based on TMY3 data) then the process looks like this.

Chart - Extreme Dry

Controlling the mixed air dampers to deliver 45°F air off the evaporative cooler puts you at about 45% outdoor air and delivers a space at the bottom end of the targeted temperature window and up a bit from the bottom end of the targeted RH window.

A Typical Portland Fall/Winter/Spring Day

If we look at what would happen if the OA was in a more typical but cold range (the left most red squares on the chart), we end up here.

Chart - Typical

We require a higher percentage of OA (83%) because it is already moist.  But since we are modulating the mixing dampers based on what happens after the evaporative cooler to maintain 45°F at that point (remember, for this discussion, because of the saturation efficiency of the evaporative cooler, 45°F dry bulb is about the same as 45°F wet bulb), we just slide up the 45°F wet bulb line and the space condition we deliver (assuming the load – sensible and latent – did not change) remains the same.

A Warm But Dry Portland Fall/Winter/Spring Day

If we look at what happens on a warmer, but dry OA condition, as long as the OA dew point is below the evaporative cooler LAT set point, we still hold the same space conditions.  But this time, we need to use more OA because the OA is warmer and dryer.

Chart - Warm Dry Spring

Moving from Spring to Summer (Summer to Fall Transition Similar, Just Going the Other Way)

If the OA wet bulb rises above the evaporative cooler LAT set point (which is controlling the mixing dampers), it will drive the mixing dampers to the 100% OA position and hold them there. 

The control process cannot meet its set point and as a result, the evaporative cooler LAT rides up the saturation curve, following the outdoor air wet bulb temperature.  Here is what that looks like for a somewhat common condition with an OA wet bulb above the 45°F evaporative cooler LAT set point.

Chart - 100% OA 48 Typical

Now, the space temperature and humidity start to drift up because the evaporative cooler LAT starts to drift up, but (assuming the load did not change and the VAV system flow did not change), you are still inside the envelope you targeted.  If you really wanted a lower space temperature, you could allow the VAV system to move a bit more air.

Encountering a Limiting Condition

Once the outdoor wet bulb drifts up to 50°F, we reach the limit of what we can do with the current VAV system flow rate (50% of design) assuming the load condition did not change;  i.e. at that point the space ends up at the upper limit of the temperature window, but below the humidity limit.

Chart - Upper Limit

Allowing the System Flow to Increase

If we continue to let the evaporative cooler LAT drift up as the outdoor air wet bulb drifts up, the VAV system could still keep us in our targeted window if it increased the flow rate.  When we reached the 55°F upper limit Mike discussed (a common commercial building HVAC system leaving condition) we would end up here.

Chart - Upper Limit 55

But if the load had not changed, we could actually allow the LAT to drift up to about 59°F before the resulting space condition fell outside the targeted window, assuming the VAV system is allowed to move more air to accommodate the lower LAT-to-space temperature difference.

Chart - Upper Limit 60 Pct

<Return to Contents>

Some Bottom Lines

How you would decide if you should do this and when to do this would be a function of the ability of the envelope to handle higher humidity levels in cold weather, the ability of the operating team to maintain the equipment, utility rates, hours of operation, and climate in addition to a desire to hold indoor conditions in the 40-60% RH range.  A totally brilliant idea in location “A” could be a disaster in location “B”.  

For instance, if you had an artesian well on your property and the law was written to say you owned the water rights (i.e. free water), what you would do would be totally different from a location where the water rates were high and you also did not get a credit on your sewer bill for water that was evaporated.

And if you did get a credit on your sewer bill for water that was evaporated, then that would also change the financial perspective.

Mike and I talked about using the TMY3 data to look at the water consumption and pump energy for the process in Portland to assess the full cost implication of using this strategy, but neither of us has had the time to do that yet, so it is fodder for a future post.

But hopefully, what we have shared will help you “think outside the box” in terms of how we operate our buildings to deliver a clean, safe, comfortable, productive environment as efficiently and sustainably as possible, given the ever changing challenges we face.


David Sellers
Senior Engineer – Facility Dynamics Engineering
Visit Our Commissioning Resources Website

[i]     For those who are interested, the relationship for saturation efficiency is as indicated below.

Direct Saturation Efficiency v1

<Return to Reference>

[ii]    For example, for the make up air systems serving the clean rooms I worked with when I was a facilities engineer/system owner at Komatsu’s Hillsboro plant, we targeted a 46°F leaving air temperature from our cooling coils in order to hit the space relative humidity requirement.

<Return to Reference>


The Perfect Economizer–Part 1–Laying Some Groundwork

An amazingly long time ago, I started a string of blog posts about economizers that included posts about:

All of this was leading up to a blog post about a diagnostic tool that I use that I call the “Perfect Economizer” concept.  And I almost got there, but not quite, until now.


For those who want to jump around, the following links will take you to the different topics.   The “Return to Contents” link at the end of each major section will bring you back here.


As it turns out, the evolution of the ASHRAE Journal Engineers Notebook column that I help write led to an opportunity to do a column on the perfect economizer because it complements a column I wrote about a similar concept for assessing chilled water plant performance titled Modeling Perfection, which is illustrated below.


In the case study associated with the Modeling Perfection column, I mentioned that the reason for the unnecessary chilled water use in the areas outlined in red and yellow above was dysfunction in the preheat and economizer processes, and that the team I was working with used the “Perfect Economizer” concept to assess them.

The idea behind the concept  is similar to the perfect chilled water plant concept;  you create a chart that shows how you would expect a perfect economizer to function and then plot real data against it to see how closely reality matches perfection.  The lines of perfection are illustrated below.
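Under assumed parameters, those lines of perfection can be sketched as a simple piecewise function.  This is my reading of the concept for illustration; the 55°F supply set point, 25% minimum OA, 75°F return air, and dry bulb high limit below are assumed numbers, not the column's:

```python
# Expected mixed air temperature (MAT) for a "perfect" dry bulb
# economizer, as a piecewise function of outdoor air temperature (OAT).

def perfect_mat(oat, sat=55.0, rat=75.0, min_oa=0.25, high_limit=75.0):
    """Expected MAT (°F) for a perfectly functioning economizer."""
    min_blend = min_oa * oat + (1.0 - min_oa) * rat
    if oat > high_limit:
        return min_blend  # too warm outside: minimum OA only
    if oat >= sat:
        return oat        # economizer range: 100% OA, MAT tracks OAT
    if min_blend > sat:
        return sat        # dampers modulate to hold the set point
    return min_blend      # very cold: minimum OA, MAT falls below set point

# Plotting MAT vs OAT for real trend data against this function shows
# how closely reality matches perfection:
for oat in (10, 30, 60, 85):
    print(oat, perfect_mat(oat))
```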


That concept is the focus of my next column, which will run in May. 

Defining Perfection

To be able to discuss the perfect economizer, one needs to define perfection.  Word count precluded me from doing that in the upcoming Journal column.  So I decided to do a few blog posts that will focus on defining perfection to complement the column.  I actually started down that road in the post titled Economizer Analysis via Scatter Plots–Linking Excel Chart Labels to Data in Cells.  I will build on some of the concepts I outlined there in what follows and in related subsequent posts.  This first post defines a few baselines so we are all “on the same page” for the discussion that will follow.

Not a New Idea

I am not at all asserting that I came up with this idea. I believe you will find a version of it in the application software that Architectural Energy Corporation supplied for their data loggers in the mid-to-late 1990s.  And the (free) Universal Translator application (which has nothing to do with Star Trek but is still pretty cool) includes a module that uses this approach.

(Return to Contents)

The Relationship Between an Economizer Process and Building Pressure Control

As discussed in the Economizer Basics post I referenced above, economizer processes bring in outdoor air volumes that are above and beyond what is required to ventilate the building, blending this extra outdoor air (OA) with return air (RA) in order to minimize the need for mechanical cooling.  At its core, an economizer process is a cooling and temperature control process. 
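As a supplement to that description, here is a minimal sketch of the blending logic for a dry-bulb economizer. All of the numbers are made-up illustration values (55°F mixed air setpoint, 75°F return air, 65°F high limit, 20% minimum outdoor air), not figures from any project discussed in this post:

```python
# Hypothetical setpoints for illustration only
MAT_SP = 55.0       # mixed (supply) air temperature setpoint, °F
RAT = 75.0          # return air temperature, °F
HIGH_LIMIT = 65.0   # economizer high-limit changeover, °F
MIN_OA = 0.20       # minimum ventilation outdoor air fraction

def perfect_oa_fraction(oat):
    """Outdoor air fraction a 'perfect' dry-bulb economizer would target
    at a given outdoor air temperature."""
    if oat >= HIGH_LIMIT:
        # Too warm outside: fall back to the minimum ventilation rate
        return MIN_OA
    if oat >= MAT_SP:
        # Free cooling range: 100% outdoor air
        return 1.0
    # Below setpoint: blend OA and RA so the mixed air lands on setpoint
    fraction = (RAT - MAT_SP) / (RAT - oat)
    return max(fraction, MIN_OA)

for oat in (20, 40, 55, 60, 70):
    print(f"OAT {oat:>3}°F -> OA fraction {perfect_oa_fraction(oat):.2f}")
```

The blend equation is just the mixed air energy balance solved for the outdoor air fraction; you can verify that plugging the fraction back in returns the mixed air setpoint.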

Conservation of mass and energy dictates that to achieve success, we need to complement the economizer process with some sort of building pressure control process that provides a path for the extra outdoor air to exit the building.  That becomes the role of the relief system.  The obvious components in this system are the relief  air dampers and depending on the system configuration, the relief fan and/or the return fan.

The less obvious components are the imperfections in the building envelope, which can also become part of the relief system. Recognizing this can provide benefit in terms of comfort by managing infiltration, and in terms of energy, by minimizing the need for return or relief fan operation.

A Word about Return vs. Relief Fans

When I discuss this topic, I am frequently asked about the difference between a return and relief fan.  The images below are from a set of slides that I used in class to discuss the topic.



This link takes you to a bit more information in a previous blog post.

Economizers and Building Pressure Control Coordination in the Olden Days

In the olden days, for a simple, constant volume system that incorporated an economizer process, there was a fairly direct relationship between:

  • The position the outdoor air and return air dampers were driven to in order to control temperature, and
  • The position the relief dampers needed to be driven to in order to manage building pressure. 

Thus, it was not unusual for the same signal that was used for the outdoor air and return air dampers to be used to drive the relief air dampers, especially in pneumatic control systems.[i]

Those of us working in existing buildings can still encounter this approach.  Sometimes, a minimum relief position is also provided.  And sometimes, the modulation of the relief dampers is delayed to provide a bit of positive pressurization for the building. 

And for a simple constant volume system, it can be made to work, especially with the minimum relief and delay features mentioned above.  So if you have a very simple HVAC system, you can get away without a building pressure control process, even in modern times.

Economizers and Building Pressure Control Coordination in Modern Times

The variable air volume (VAV) systems we commonly use in modern times break the relationship between outdoor/return air damper position and relief air requirements.  Consider a VAV system with variable speed relief fans operating at part load on a day when the outdoor temperature is 58°F and the leaving air temperature (LAT) requirement is 58°F.

Let’s imagine the system is operating on a day when the load in the building, and thus the supply flow rate, is 50% of the design value.  With it being 58°F outside, if everything is working properly, the outdoor air dampers will be commanded to the 100% outdoor air (0% return air) position.  But, since the load in the space is only 50% of the design load, the supply flow rate will be half of the design value.

If the relief fans are commanded to 100% speed because they are controlled by the same signal used by the outdoor air and return air dampers, they likely will cause the building pressure to become very negative because their full speed, design flow rate was likely set on the basis of the design supply flow rate.[ii]
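To put some hypothetical numbers on that mass balance (the flow values below are made up for illustration; the relief fan sizing follows the supply-minus-exhaust logic described in endnote [ii]):

```python
# Rough mass balance showing why slaving a relief fan to the economizer
# damper signal breaks down in a VAV system. All flows are hypothetical.

design_supply = 40000.0   # design supply airflow, cfm
exhaust = 5000.0          # toilet/hood exhaust, cfm
pressurization = 1000.0   # allowance for positive building pressure, cfm

# Relief fan sized as design supply minus exhaust and pressurization
relief_design = design_supply - exhaust - pressurization  # 34,000 cfm

# Part-load day: 50% supply flow, but the dampers (and the slaved relief
# fan signal) are at 100% because it is 58°F outside
supply = 0.5 * design_supply   # 20,000 cfm, all of it outdoor air
relief = relief_design         # relief fan mistakenly at full speed

net = supply - exhaust - relief  # air left over to pressurize the building
print(f"Net flow into the building: {net:,.0f} cfm")
# A large negative number: the relief and exhaust systems are trying to
# move more air out than the supply fan is bringing in, so the building
# goes strongly negative.
```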

This was a common problem in the field when we started transitioning from pneumatics and constant volume systems to DDC and VAV systems. And it still shows up on occasion in our modern day world.

(Return to Contents)

ASHRAE Guideline 16

The final control elements in an economizer process are the OA and RA dampers, and their sizing and configuration are critical to success.

Similarly, the relief dampers are often the final control element for the building pressure control process, although variable speed relief fans that have simple back-draft dampers or are sequenced with modulating relief dampers can also come into play.

ASHRAE Guideline 16 – Selecting Outdoor, Return, and Relief Dampers for Air-Side Economizer Systems provides a lot of good information about how to select and configure these dampers. But it also specifically states that

this guideline does not cover air mixing

Thus, it’s important to recognize that using the guideline is a good first step in the economizer design process, but there are other things that also need to be addressed.

In addition, the guideline is focused on proper design, meaning that you are starting with a “clean sheet of paper”. If you are working with existing buildings, that “ship has already sailed” and the challenge is understanding what you have, how well it is functioning, and how to correct any deficiencies that you discover within the constraints of the existing equipment capabilities and the operating budget.

For example, all of the recommended control sequences in the guideline require that outdoor air flow be measured somehow. In my experience, this is surprisingly uncommon in existing building systems, especially in older facilities.

Still, understanding what constitutes a good design can help folks performing existing building commissioning, ongoing commissioning and facility operations understand the changes needed to improve performance and resolve any issues they identify.  And the Perfect Economizer concept is a useful way to identify the problems.

Ultimately, when we apply the “Perfect Economizer” technique to existing facilities, we need to be extra diligent when we start to work to improve the mixing process so that we do it in a way that still ensures the required ventilation rates are maintained.

(Return to Contents)

That’s it for now.  In my next post, I will get into damper sizing and configuration, which are part of the focus of Guideline 16 and which are key to achieving perfection for an economizer process.


David Sellers
Senior Engineer – Facility Dynamics Engineering

[i]     And, since many legacy pneumatic systems were upgraded to DDC by handing three different control vendors a set of the building’s pneumatic control drawings and telling them to provide a bid for a DDC system just like it (and incidentally, we will be taking the low bid), you find DDC systems with a single pneumatic output driving the outdoor air, return air and relief air damper systems.

I am not at all advocating this design approach;  there are obvious problems with it.  I am simply saying that just because you have a DDC system doesn’t mean you will not see this configuration and the potential challenges it can introduce.

[ii]   The relief flow would generally be set to the supply flow minus the ventilation air flow which will generally be removed by toilet and hood exhaust.  An allowance for building positive pressure may also be included, further reducing the relief air flow rate relative to the design supply flow rate.


Using a Formula to Adjust an Axis in Excel, Plus a Simultaneous Heating and Cooling Case Study

Author’s Note; 2022-02-01.  I discovered that earlier today, when I thought I had saved this post, planning to make some final additions, edits, and add a table of contents when I got back from my walk, what I actually did was publish it.  So, if you read this before about 4:30 PM, there were some typos and the bottom line on the case study was not there yet.   My apologies;  I will click more carefully next time.


I want to preface everything that follows by saying that while the case study I share is from my own experience, I did not develop the technique I will share.  Rather I discovered it as the result of an internet search in the form of a very generous and well written blog post by a guy named Mark on his Excel Off the Grid web site. 

I’ll be linking to some specific content there as I move through this post, in which I use a case study from a past project to illustrate applying Mark’s technique.

And thanks also to Thy, a student from one of my classes, who asked the question that led to the post and “commissioned it” by taking my first draft and using it successfully to implement the feature in a spreadsheet of his by following my suggested directions.


These links will jump you around in the content to a topic of interest.   The <Return to Contents> link at the end of each major section will bring you back to here.

A Bit of Background

If you do existing building commissioning work, you spend quite a bit of your time looking at time series data.   Sometimes, you are interested in the overall pattern for a long period of time, like this.

Logger Data Full Period CC LAT

For the project behind the data above, I was using steam condensate pump cycles as a proxy for steam consumption (the red data stream), a technique Chuck McClure taught me years ago using an alarm clock.  I was comparing the pump cycles to the operation of a steam preheat coil in a large laboratory air handling system, using the leaving air temperature from the coil as a proxy for coil operation (the orange data stream).

The reason that the condensate pump line looks like a red band with occasional spikes vs. a fine red line is that relative to the range of the time axis, there were a zillion pump cycles.  In other words, if we were to zoom in, we would discover that the red band was actually many, many, many spikes spaced closely together, with each spike representing one pump cycle.  In fact, that is what I needed to do in order to assess the number of pump cycles relative to the leaving air temperature spike.

<Return to Contents>

Diagnosing a Dysfunctional Preheat Process

There will be more on zooming in a minute, but before going there, I thought I would explain what was going on in the system behind the data.

My initial view of the data, shown above, revealed that I had in fact captured the dysfunctional operating pattern I suspected to exist based on my field observation when I walked the project several days prior.  More specifically, I suspected something was amok when I walked by the unit on a 60ish°F day and noticed that the preheat coil was active along with the cooling coil.  

As a result, I deployed a few data loggers the next day and the pattern above is what I found as Mother Nature performed a natural response test on the system [i]. Note how the preheat coil leaving air temperature seems to vary vs. hold a fixed set point and also how on occasion, it jumps up and runs at 90+°F for periods of time. 

This was an issue because the system was set up to hold a fixed 55°F leaving air temperature, and it was doing a very good job of that (the blue data stream).   But, since it was a 100% outdoor air system and since the preheat coil was ahead of the chilled water coil, the only time the preheat coil should have been active was if the outdoor temperature dropped below the desired 55°F leaving air temperature set point.  And then, it should have not heated things up any higher than the desired leaving air temperature.

Since the preheat coil was the major load on the steam system for the facility, I anticipated that the condensate pump cycles would be higher during the periods of time when the coil was delivering a leaving condition in the 90°F range, which would tend to validate my proposed approach for developing the system load profile since there was no steam meter.
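As a sketch of how the pump-cycle proxy works, assuming a hypothetical receiver pump-down volume (the real value would come from the pump and receiver data for the system in question):

```python
# Using condensate pump cycles as a steam load proxy, assuming each pump
# cycle moves a fixed condensate volume. The pump-down volume here is a
# made-up illustration value, not from the case study system.

GAL_PER_CYCLE = 15.0   # receiver pump-down volume per cycle, gallons
LB_PER_GAL = 8.33      # weight of a gallon of condensate (water), lb

def steam_load_lb_per_hr(cycles, hours):
    """Approximate steam condensing rate implied by pump cycling."""
    condensate_lb = cycles * GAL_PER_CYCLE * LB_PER_GAL
    return condensate_lb / hours

# e.g. 24 pump cycles counted over one hour
print(f"{steam_load_lb_per_hr(24, 1.0):,.0f} lb/hr")
```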

But to verify that, I needed to zoom in on one of the dysfunctional cycles, which brings me to the point of this post.

<Return to Contents>

Changing the Range of a Time Series Axis in Excel

Excel and Dates

One of the things that is not immediately obvious when you start working with time series charts in Excel is how Excel represents a date and time, at least it wasn’t for me.   It turns out that Excel represents date and time as a serial number that increments by 1 each day, anchored so that January 1, 1900 is day 1.

That means that:

  • January 2, 1900 would be represented as “2”
  • January 1, 2022 would be represented as 44,562, since it is roughly that many days after January 1, 1900.
  • One hour would be represented by 1/24 ≈ 0.0417.
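If you would rather script the conversion than look it up, the serial number math can be reproduced in a few lines of Python; the 1899-12-30 epoch below is a common trick that quietly absorbs both the day-1 anchor and Excel’s phantom February 29, 1900 for any date after March 1, 1900:

```python
from datetime import datetime

# Epoch chosen so the result matches Excel's 1900 date system for
# dates after March 1, 1900
EXCEL_EPOCH = datetime(1899, 12, 30)

def excel_serial(dt):
    """Return the Excel date serial number for a Python datetime."""
    delta = dt - EXCEL_EPOCH
    return delta.days + delta.seconds / 86400.0

print(excel_serial(datetime(2022, 1, 1)))        # 44562.0
print(excel_serial(datetime(2022, 1, 1, 6, 0)))  # 44562.25 (6:00 AM)
print(1 / 24)    # one hour   ≈ 0.0417
print(1 / 1440)  # one minute ≈ 0.000694, handy for minor axis units
```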

I go into more detail about that in a blog post titled Setting Time Axis Values in Excel.  But once I understood the way things worked, I made myself a little cheat sheet that allowed me to quickly come up with the values I needed to format a time series axis to the specific range I wanted to look at.

<Return to Contents>

Setting the Date Range in an Excel Chart

Since I wrote that post, I have discovered that if you type a date and time into the “Maximum” and “Minimum” fields in the axis format dialog box (the cells with the red arrows pointing to them in the image below) …

Format Axis r

… then Excel automatically makes the conversion for you.  I’m not sure if that was always there and I just missed it or if it’s a feature that showed up sometime after 2002 (when I built the first version of my cheat sheet).  

But so far, I have not figured out a way to set the major and minor units (the fields with the blue arrows pointing to them in the image above) without “doing the math” to figure out, for instance, the decimal value that represents 1 minute if the decimal value 1.0 represents 1 day.

So, the little cheat sheet spreadsheet I built to help me come up with the values for the minimum and maximum dates and the major and minor units on my charts still comes in handy.

Time Values

If you want a copy of it, you can download it here.

<Return to Contents>

Zooming In the Old Fashioned Way

Having said that, if I wanted to zoom in on a portion of the chart to take a closer look at a pattern – for example, zoom in on one of the errant events above to see what the condensate pump cycles looked like during that period of time …

Four Hours 1

… then, up until I found Mark’s blog post, I would have to go into the axis format dialog and make the change.

In the image above, I zoomed in to show what was happening from 12 AM to 6 AM on October 10, 2009.  This revealed what I hoped I would see:  that the condensate pump cycles in fact increased as the steam load increased.  In fact, occasionally both of the pumps serving the receiver needed to run, which is what caused the occasional higher than typical spike.  All of this validated my proposed approach of using the pump cycles to come up with a load profile.[ii]

Since I often wanted both images for a report, I would typically make a copy of the chart and then change the axis so that I had both views available.   If you are doing this a lot, it can become somewhat tedious and time consuming [v]. And the file size can start to become significant if there are a lot of data points in each chart.

As a result, I would occasionally find myself wondering if there was a way to change the maximum and minimum values for a chart’s axis based on parameters that you entered in cells in the spreadsheet that would then, somehow, magically perhaps, be referenced by the appropriate fields in the “format axis” dialog.

My more observant readers may have noticed that the dates and times I mention above show up in the yellow cells in the image and could be thinking:

I wonder if those cells have anything to do with where he is heading?

The answer is:

They do!

<Return to Contents>

Introducing User Defined Functions

It turns out that if you know how to program in Visual Basic, you can do just that.

Or, in my case, it turns out that if you know how to do an internet search for something like …

Excel change chart axis automatically from cell values

… you will discover generous people who are good writers with blog posts that explain how to do it and also share the code required to do it and tell you how to make it all happen.

The trick is that you create a thing called a User Defined Function or UDF that, when you execute it, calls some VBA (Visual Basic for Applications) code that causes the magic.   While I aspire to write VBA, I am still in my infancy there.  But thankfully, Mark does that for us in his Excel Off the Grid column titled Set chart axis min and max based on a cell value.

It really is well written, so I am not going to regurgitate it here since you can follow the link above to find all of the details and copy and paste the required code from there.

But I will provide some screen shots of my implementation of it in the spreadsheet we have been looking at to clarify its application in that context and clarify a few things that were questions for me as I added the functionality to my copy of Excel.

<Return to Contents>

Using a UDF to Change the X Axis Minimum and Maximum

In the image below, I have clicked into cell GH34 (orange highlight) and you can see the UDF in the formula bar, where it says =setChartAxis("Data","chart 2","Min","X","Primary",H35).  (The red arrows point to the two spreadsheet locations I just mentioned).

X Min

“SetChartAxis” is the UDF.   It acts just like any other Excel function once you create it.  For instance, if I open a spreadsheet, click in a cell, type an “equal” sign, and then “if(“, Excel kind of says:

O.K.  I have a formula that has that name and here it is along with the function arguments you need to provide as inputs if you want to use it.


If I click on the little fx symbol by the function bar, a dialog box will open up so that I can enter the necessary function arguments into data fields.


Of course, if I use the formula a lot, I probably can remember them and just type them into the formula bar in the correct order, separated by commas.  But the dialog box sure is handy for less often used formulas (and/or as you age and find your memory is not quite what it used to be).

Assuming you don’t have the code associated with the “setChartAxis” UDF installed on your computer (more on how to do that in a minute), then, if you were to click into a cell in a spreadsheet on your machine and start typing setChartAxis, you would get a list of built-in Excel functions that have the word “set” in the name, like “OFFSET” and others, depending on the plug-ins you have installed.   But “setChartAxis” would not be one of them.

In contrast, since I have added the code for the UDF “setChartAxis” to my copy of Excel, when I click on a cell and start typing “set …” it shows up as a function I can select along with all of the other functions installed on my machine that have “set” in their name.


Thus, I can pick it and provide the arguments it asks for …


… and the UDF does the “magic” for you.

Here’s what those arguments look like for the chart I am using as an example.  You will find a copy of it on the same webpage as the time value conversion spreadsheet tool if you want to download a copy to work with.


So basically, the formula says:

Set the minimum value for the primary, X axis, of Chart 2 on sheet Data to the value entered in cell H35.

The formula is looking for a numerical value (vs. a date), so, to make it easier to work with, I have cell H35 formatted to display the numerical value associated with a date and set it equal to the value in cell I35, which I have formatted as a date and time.  That allows me to enter the date and time in I35; it then shows up as the associated numerical value in cell H35, which is in turn referenced by the “setChartAxis” UDF.

<Return to Contents>

Not Just for the X Axis

You can use the UDF for the other axis on the chart.  For example, to really understand how well the control loop is tuned, it would be nice to zoom in on the burble in the blue line that happens when the preheat coil discharge temperature spikes.   To do that, I used the “setChartAxis” UDF but set it up to adjust the maximum and minimum on the secondary Y axis based on spreadsheet cell parameters.

 Secondary Y

And, as you can see, by zooming in, I can now tell that the control loop response exhibits the somewhat classic quarter decay ratio associated with a well tuned PID loop. [vi]

I can also quickly re-scale the axis again to let me contrast both the response and the upset itself. (Note that I hid the pump amps data series to allow me to focus on the other two data streams).


You will also note that I provided similar functionality for the primary Y axis (the center cluster of orange and yellow cells) by simply copying and pasting the cell block then editing the UDF arguments as needed.

<Return to Contents>

Addressing a Few Questions that May Come Up

So, a couple of points.

  1. To find out the name of the chart, just click on it and it will show up in the cell name window next to the formula bar (“Chart 2” below next to the fx bar, right below the “snap to grid” quick access button on the left).

Chart Name

  2. The UDF is a Visual Basic module, so you need to have the “Developer” tab available in Excel to do this.  I think that sometimes, Excel can be installed without this enabled, but I believe it is a standard feature and you just need to turn it on, which is described here, in case you don’t see the “Developer” tab in your ribbon.[vii]
  3. The blog post I referenced above is (to my way of thinking at least) really well written and I think that if you page down to the “Creating the User Defined Function” topic, you will have no trouble setting it up;  the code you need is included, so it’s really just a matter of copying and pasting it into the right place in a VBA module you create.
  4. If you do that, it will only be available in the spreadsheet you created it in.  But you can make it available for all of your spreadsheets by installing it as an Add-In.  That is described further down in the post under the “Making the function available in all workbooks” topic, which links you to this page after telling you what you need to do first.

<Return to Contents>

Back to the Case Study

As I indicated in an endnote previously (see end note [iv]), the somewhat wild temperature excursions seemed to be a freeze protection strategy gone amok.  

But when they were not occurring, the preheat coil still did not hold a leaving air temperature at a fixed value, causing the chilled water coil to do unnecessary cooling.  The reason for this was that the face and bypass damper system that was intended to control the leaving air temperature was out of adjustment and was always allowing some air to flow through the heating elements, even if no additional preheat was required.

Integral Face and Bypass Coils

The slides below illustrate the type of face and bypass damper system that was in place in the system we are discussing. 





This type of assembly is technically called an “integral face and bypass” coil.  But it is also frequently referred to as a “Wing” coil, since one of the major manufacturers at one point in time was the Wing Company.  It’s kind of like calling every box of facial tissue – a paper product produced by many manufacturers – “Kleenex”, which is a common brand of facial tissue.

The pictures that follow are of the  actual hardware.  The assembly shown on the left uses hot water for the heat source.  The picture on the right uses steam and is the actual preheat coil associated with the case study.




<Return to Contents>

Why Integral Face and Bypass?

The design of this type of coil is intended to enhance its ability to resist freezing by:

  • Always keeping the heating elements active with the control valve wide open.  For water coils, this means design flow will always be moving through the coil (as long as the pump serving the system is running).  For steam coils, this means that the coil will be able to draw as much steam as needed and that the steam in the elements will be near the saturation pressure and temperature associated with the distribution system.[viii]
  • Vertical orientation for the heating elements in steam fired coils to ensure rapid condensate drainage via gravity.
  • Supply and return headers located outside of the air stream minimize the potential for condensate (water) to be exposed to sub-freezing conditions.

<Return to Contents>

Things that Can Go Wrong (a.k.a. EBCx Opportunities)

So, the good news is that a coil of this type is less likely to freeze.  But there are a couple of downsides.

One is that the actuation mechanism for the clam-shell doors is somewhat complex. Without regular maintenance and lubrication, it can fail, which, as we saw in the coil in the example, can cause significant energy waste.

Another opportunity is related to the control of the steam valve.   Even if the clam-shell dampers are fully closed, there is significant heat transfer, primarily by radiation, from the live, saturated steam inside the tubes.  For instance, if the steam was at atmospheric pressure, the temperature would be 212°F. 

As a result, there can be a significant parasitic load associated with this type of coil.   To prevent that, it is desirable to close the steam valve when preheat is no longer required.   It is not uncommon for this contingency to go unrecognized.  For example:

  • A value engineer who is perhaps not totally familiar with HVAC processes and how this type of coil works may eliminate the control valve from the project as an unnecessary first cost, thinking it was not needed since there were dampers provided to control the leaving air temperature.
  • A control system designer who was not familiar with the specifics of how this type of coil operates may sequence the operation of the valve with the operation of the clam-shell dampers.  While this may tend to alleviate the parasitic load to some extent, it is likely that it compromises the “freeze-proof(ish)” aspect of the design.

As a result, when I encounter this type of coil in the field, I just about always flag it as a target for further investigation.  Frequently, one or more of the opportunities I mention above exist and I can save some steam (and maybe a frozen coil or two). 

And frequently, as was the case for the coil in the example, savings show up at the cooling plant in addition to the steam plant because of the unnecessary simultaneous heating and cooling.

<Return to Contents>

How Come Nobody Noticed?

Some readers may wonder why nobody noticed this problem.  After all, it kind of jumps out at you when you look at the trends I have shared.  

A big part of the reason was that the control system was somewhat antiquated and unreliable.   Sensors had failed, graphics could take minutes – like 5 or more minutes – to update (assuming they didn’t “crash” in the process), and sampling speeds faster than once every 15-30 minutes were not possible due to the network configuration.  As you may surmise, those are the reasons I was using data loggers to assess the system instead of the trends.

Because the chilled water coil masked the preheat dysfunction, and the lab zones were constant volume pneumatic reheat zones with repairs undertaken when an occupant complained, a lot had to go wrong before it would show up as an actual comfort problem.

The operating team itself –  like most teams these days – was spread really thin, trying to operate and maintain a complex full of mission critical facilities with a handful of people.

<Return to Contents>

Leveraging the Savings Potential

The good news was that once the problem was recognized, it opened the door for improvements.   Due to …

  • The size of the system (nominally 70,000 cfm), and
  • The 24/7, constant volume, near 100% outdoor air operating cycle associated with the laboratories it served

… the savings potential associated with repairing the errant preheat process was very significant;  tens of thousands of dollars annually.  The savings could have been accrued by simply repairing the damper linkage system and ensuring that the steam valve fully closed when preheat was not needed.

Recognizing that there was more to the issue than the immediately obvious root causes, the Owner elected to leverage the savings to upgrade the control system to a current technology system, including:

  • The sensors necessary to perform diagnostics, not just control the system,
  • Trending and graphic capabilities that would deliver meaningful information to the operating team in a timely fashion, and
  • DDC controls at the zone level, which would allow the operating team to much more quickly identify operating issues that are typically masked by the insidious nature of HVAC processes.

And like most energy savings projects, the results of this project also moved the Owner down the road towards their long term carbon reduction goals.

So there you have it:  a cool little Excel trick generously shared by Mark on his Excel Off the Grid blog, along with a little case study of a common existing building commissioning opportunity.


David Sellers
Senior Engineer – Facility Dynamics Engineering

[i]    If you want to know a bit more about natural response tests vs. forced response tests or functional testing in general, then you may find a series of video modules I recorded on the topic to be helpful.

[ii]   It also revealed that the control loop for the chilled water valve was pretty well tuned.  Notice how, with whatever caused the errant change in set point [iii], there is initially a big jump in steam flow and leaving air temperature and then a continued increase until the process stabilizes.  The leaving water temperature from the chilled water coil hunts around a bit trying to “find itself”.   But then it settles in;  more on that a bit later in the post.

[iii]   Can you put an end note on an end note? [iv]

[iv]    Assuming you can;  we never really figured out why the program running the system was set up to cause the set point jump.  But the trends indicated it was very predictably tied to the outdoor temperature and was triggered when the outdoor temperature dropped below 38°F and released when the outdoor temperature went back above 40°F.  And it was not really a set point change;  rather, the valve was simply driven fully open.  Thus, our conclusion was that it was a freeze protection strategy gone amok.

[v]    But not as tedious and time consuming as in the olden days when we would have had to transcribe the data from a strip chart and manually plot it on graph paper.  So count your lucky stars you young people out there.

[vi]   The slide below illustrates what the term quarter decay ratio means.


The pattern was the result of the work of John G. Ziegler and Nathaniel B. Nichols, who developed a very common tuning technique for PID control loops.  If you want to know more about PID, this link will take you to a webpage that contains some resources, including the original paper they published and an interview with John Ziegler himself.
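For reference, the classic Ziegler-Nichols closed-loop PID rule that produces the quarter decay response can be sketched as below. The Ku and Pu numbers are made-up examples; in practice they come from a field test that finds the proportional gain at which the loop oscillates steadily (Ku) and the period of that oscillation (Pu):

```python
# Classic Ziegler-Nichols "ultimate gain" PID tuning rule, which is what
# targets the quarter-decay-ratio response discussed above.

def zn_pid(ku, pu):
    """Return (Kp, Ti, Td) per the classic Z-N PID rule:
    Kp = 0.6*Ku, Ti = Pu/2, Td = Pu/8."""
    return 0.6 * ku, pu / 2.0, pu / 8.0

# Hypothetical test results: loop oscillates at gain 4.0 with a 120 s period
kp, ti, td = zn_pid(ku=4.0, pu=120.0)
print(f"Kp = {kp}, Ti = {ti} s, Td = {td} s")
```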

[vii]   I suppose that there may be some corporate IT policies that would prevent you from turning on the Developer tab feature without someone from IT allowing you.  But I have not had that experience and only know about turning it on because I was helping someone once and it was not there, and I poked around and found the link above.  It’s always been on in any copy of Excel I have had.

[viii] There is a very subtle thing that can go on in steam fired heat exchangers due to the fact that the steam side is a saturated system.  Depending on the operating conditions, it is possible that the pressure inside the heat exchanger will be sub-atmospheric unless vacuum breakers are installed on the heat exchanger. 

That means that for condensate to drain out of the heat exchanger – or more specifically, to drain to an open return system that is above atmospheric pressure – condensate has to accumulate inside the coil to a depth high enough to create the head necessary to cause the condensate to flow out of the coil.  If the condensate accumulates in a portion of the coil that is exposed to the air stream, and the air stream is below freezing, then you can freeze the coil;  bottom line, steam coils can freeze.

By keeping the steam valve wide open on an integral face and bypass coil and relying on the damper system to control discharge temperature, it is significantly less likely that the conditions inside the heating elements will be sub-atmospheric.  This, combined with the vertical tube arrangement and locating the headers outside of the air flow path helps ensure that this type of coil is fairly freeze-proof.
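To put a rough number on the drainage issue, the required condensate column can be estimated from the pressure difference between the return system and the (possibly sub-atmospheric) steam side;  one psi supports roughly a 2.31 foot column of water.  The pressures below are hypothetical round numbers, just to illustrate:

```python
# Rough estimate of the condensate column that must accumulate before
# condensate will drain from a heat exchanger whose steam side has been
# pulled below the pressure of the return system.  Assumes water near
# saturation; 1 psi supports about a 2.31 ft water column.

def condensate_head_ft(coil_pressure_psia, return_pressure_psia):
    """Feet of condensate needed to overcome the pressure difference."""
    dp = return_pressure_psia - coil_pressure_psia
    return max(dp, 0.0) * 2.31

# Hypothetical example: coil pulled down to 12 psia at part load,
# draining to an atmospheric (14.7 psia) return
h = condensate_head_ft(12.0, 14.7)
print(round(h, 2))  # 6.24 ft of backed-up condensate
```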

Posted in Uncategorized | Leave a comment

Happy Solstice

2021-12-26 – Author’s Note:  Yesterday, I realized that I had not fully taken into account how a pinhole camera works when I developed the SolarCan pictures.  The image in a pinhole camera is upside down relative to reality.

When I started working with my images, I simply rotated them 180°;  sort of an intuitive reaction I suppose, since I instinctively knew the sun should rise and then fall over the course of the day.  I was so excited about seeing the sun’s path that I did not initially realize that things were backwards;  in my backyard photo, my neighbor’s house is on the wrong side, and in the Neskowin photo, Neskowin Creek disappears on the wrong side of the photo.

Rotating the image did in fact put the bottom at the top.  But it also put the left side of the image to the right, making it backwards relative to reality.  What I actually needed to do was flip the image along the horizontal axis, which makes the bottom the top, but keeps left to the left and right to the right. 
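The rotate-versus-flip distinction is easy to demonstrate with a tiny grid of labeled “pixels”;  a minimal sketch:

```python
# Rotating an image 180 degrees reverses BOTH top/bottom and left/right;
# flipping about the horizontal axis only reverses top/bottom, which is
# what a pinhole camera image actually needs.

def rotate_180(img):
    """Reverse the row order AND reverse each row (180 degree rotation)."""
    return [list(reversed(row)) for row in reversed(img)]

def flip_vertical(img):
    """Flip about the horizontal axis: bottom becomes top, left stays left."""
    return list(reversed(img))

img = [["TL", "TR"],   # a 2x2 "image": Top-Left, Top-Right
       ["BL", "BR"]]   #                Bottom-Left, Bottom-Right

print(rotate_180(img))     # [['BR', 'BL'], ['TR', 'TL']] -- left/right swapped
print(flip_vertical(img))  # [['BL', 'BR'], ['TL', 'TR']] -- left/right preserved
```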

So, I have uploaded correctly oriented images in this revised post.

A friend called me yesterday to wish us a happy solstice.  I had an appointment I needed to head out to, so we only talked briefly.  But in that conversation, I mentioned a solstice related “toy” I had found and said I would e-mail him more information after I returned home.  As I was starting that process, I realized that it would be kind of a cool thing to share for my semi-traditional “holiday post”.  So here we go, and thanks to Sabastian for inspiring this.

The Shortest and Longest Day of the Year

Tuesday was the winter solstice;  the shortest day of the year,  and the path of the sun was at its lowest point in the sky relative to the horizon.  As most, if not all of you likely know, there is also a summer solstice, which falls on or about June 21st.  That, as you might expect, corresponds with the longest day of the year and the path of the sun is at its highest point in the sky.

The Equinox

Between those two extremes lie the two equinox (equinoxes? equinoxi?  equineex?  not sure about the plural, but the spell check favors equinoxes and the others sound like part of a Gallagher routine or something).  Anyway, each day, the path of the sun across the sky shifts between the two extremes set by the solstices and is halfway between them on the equinoxes.
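If you want to put numbers on that seasonal swing, a commonly used approximation for solar declination (the angle that carries the sun’s path between the solstice extremes) can be sketched as follows;  the day-of-year values are approximate:

```python
import math

# Cooper's approximation for solar declination (degrees) on day-of-year n:
# the angle between the sun's rays and the plane of the equator.  It swings
# between about +23.45 degrees (summer solstice) and -23.45 degrees (winter
# solstice), passing through 0 on the equinoxes.

def declination(n):
    return 23.45 * math.sin(math.radians(360.0 * (284 + n) / 365.0))

print(round(declination(172), 1))  # ~June 21: about +23.4 (summer solstice)
print(round(declination(355), 1))  # ~Dec 21:  about -23.4 (winter solstice)
print(round(declination(81), 1))   # ~Mar 22:  essentially 0 (equinox)
```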

A Major Driver

The daily shift in the path of the sun across the sky is a fundamental reality in our lives, driving the seasonal changes we all experience and, for those in the buildings industry, driving the loads we try to address with our envelope and HVAC system designs.  Sadly, I think we may be less and less aware of the reality of it.

Most of us would readily acknowledge the impact that seasonal changes have on our lives and on the facilities we design and endeavor to operate.  But how many of us could, by virtue of our daily observations, point to exactly where – on the horizon – the sun rose and set on the solstice and equinox?

Some, I am sure, can do just that.  But I suspect that in general, we are much less aware of it than we were even a generation or two ago, let alone a century or two ago.


One of Kathy’s and my traditions is that we sit on our porch swing (or in our front room when it’s cold) and watch the sunset together, so I have developed a pretty good sense of where the sun will be in the evening in Portland or Neskowin, Oregon.  Neskowin is where we own a share in a fractional and thus get to spend 4 weeks a year at the coast.

A couple of years ago, I realized that by some cosmic coincidence, the long axis of the sofa and/or deck we sit on in Neskowin to watch the sunset is probably aligned within 5° or less of the same axis on our porch swing.  Kind of cool;  same view, just a different distance from the ocean.

But it was not until about 15 years into our life here on Buchanan Avenue that I realized that the long axis of our shot-gun bungalow (which is perpendicular to the long axis of the porch swing) is lined up so that on the equinox, the sun (if it is shining) beams down the basement stairs and hits the back wall of the basement.

IMG_2258

I was walking down the stairs through the yet-to-be-completed remodeling project that occupies half of the basement to the fairly completed remodeling project called my office, which occupies the other half, when I noticed something unusual, as shown in the photo to the left.

One unusual thing was that it was not overcast early in the morning, which it often is in March here in Portland.  The other was that the rays of the sun were hitting the back wall of the basement.

This was on March 7th, and as the morning progressed, the sun beam retreated across the floor as the sun rose in the sky.  And as the days progressed, the point of light (when it was visible) moved across the far wall until the path of the sun was cut off by the stairwell. 

Kind of cool.  It reminded us of Stonehenge so we officially termed it Buchananhenge.  Kathy plans to paint some sort of mural tied to the event on the back wall, and maybe the floor, once the (somewhat mythical) remodeling effort is completed.

Enter SolarCan

SolarCan is the “toy” I mentioned at the beginning of the post.  I discovered it thanks to the “Somewhat Occasional Newsletter” that I receive by virtue of my membership in the Cloud Appreciation Society.  SolarCan is a pinhole camera fabricated from a beer (or soda) (or actually, I have now discovered, wine) can.

Inside the can is a piece of really, really slow film facing the pin hole.  As a result, if you mount the “can” to some stationary, vertical object with the pin hole facing south, over time, you will generate a photograph that shows the path of the sun across the sky each day.  And, if you allow it to remain in place long enough, the background image will also burn itself into the film.

When your patience wears out, you open the can with a conventional can opener, pull out the film, and scan it, which generates a negative.  Then, you import it into some sort of photo processing software like Gimp or Photoshop or PaintShop and reverse the negative and start playing with it.

Upon discovering SolarCan, I procured several;  enough to send one to each of the grandkids, one to my brother (who is an actual, for-real graphic artist/producer), and several to experiment with here on Buchanan Avenue and on the deck at Neskowin.

The View from Neskowin

Just to orient you, here are a couple of pictures from the deck at Neskowin with the SolarCan immediately behind me.  They were taken the day I took the can down and headed home to process the film.

2021-11-23 Neskowin Rainbow 03

2021-11-23 Neskowin Sunset

The large “rock” in both images is called “Proposal Rock” and appropriately enough, several proposals and weddings occur in its presence every year.  And probably about once a year, the coast guard has to come in with a helicopter and pull hikers off the top because they forgot to consider the tides when they planned their hike and were stranded as a result.

This next image is a panorama that I shot several years ago now.  But I include it because I was standing about where the SolarCan was mounted and because the field of view is comparable to the field of view captured by the SolarCan.

December at the Beach 2014

Here is the negative image from the SolarCan, which captures events from June 7, 2021 through November 23, 2021;  so, from just before the summer solstice to almost the winter solstice.

CCI_000120 cr

And here is what that looked like when I scanned it into PaintShop, rotated it  and reversed it.  Note that since it is rotated, not flipped, the image is backwards from reality.  More on that in a minute.

CCI_000119 - Copy

The blotches are there because, despite being under an eave and only having a pin hole exposed, the driving rain that is common at the coast managed to gain entry into the can and the film was wet.  I have played with the image some in Gimp and PaintShop (steep learning curve for me, so probably a lot more I can do) and here is where it is currently.

CCI_000120 - Copy

So, some improvement, but a ways to go.  Initially, I was kind of disappointed, viewing the image as being damaged by the water.  But my perspective changed when Kathy looked at it, flashed her “come hither eyes” at me, and said she thought I had achieved a very artistic effect.  So, I am thinking of leaving well enough alone.

Getting It Right

This paragraph did not exist in my initial post because I had not realized the error of my ways when I rotated vs. flipped the image.   But as I subsequently studied the two images I had, I realized things were backwards, as I mentioned in my note at the beginning.   So here is the SolarCan image flipped (vs. rotated), which puts everything into the proper orientation.

CCI_000120 - Copy Flipped

In the image below, I tried to overlay the panorama I took and the SolarCan image so you could kind of correlate things.  I played with the aspect ratios in the images to try to get things to correlate as closely as possible, using the tree in the center of the picture and Proposal Rock (the flattened “bump” on the right side) as the frames of reference.

Combined Coast 2

The correlation is not perfect;  obviously, the sun does not rise from inside the condo on the left.  That is primarily because I was not standing exactly where the SolarCan was located when I took the panorama, among other things.

For instance, the film in the can is curved because it lies on the inside wall of the can; i.e. it lies on the circumference of the circle represented by the can’s diameter.  This is in contrast to being on a plane perpendicular to the pin hole, extending across the diameter of the can.  But it will give you the general idea.

The View from Buchanan Avenue

I mounted the Buchanan camera on the pole supporting the rain gauge that is attached to the little deck on Kathy’s art studio in the back yard.  (The rain gauge in the foreground is now located on a pole just below the blue bird house in the background;  South is to the center right;  where the bright spot in the trees is).

2019-07-24 Art Studio View

We are blessed with a lot of trees and that is just about the only spot with a clear view to the South for a significant part of the day. 

The “can” went up right after the 4th of July and my patience ran out Thanksgiving week, so the image below does not quite cover the entire span from one solstice to the other, but almost.

Back Yard CCI_000117 - Copy 02 Flipped

In both images, the arching bands are the daily path of the sun.  Variations in intensity are (I suspect) due to clouds passing through. Gaps between the bands (I suspect) represent days of total overcast. 

I also suspect the intensity of the bands when the sun is lower in the sky is generally higher on a clear day than when the sun is higher in the sky due to the incident angle between a ray of light and the film in the can;  not totally sure about that but I think it is true.

Next Steps

Having done my initial experiments, I am already on to my next artistic effort.  I just deployed a new SolarCan on the rain gauge pole on the solstice and plan to leave it there until the June solstice, thereby capturing the full path of the sun from Winter to Summer.  I will replace it with another to capture the path the other way.

I plan a similar effort at Neskowin although the dates are constrained a bit by when we have our weeks in the rotation.  But I should be able to capture the full cycle and may try to find a way to keep the film dry (or maybe not, given the flashing of come hither eyes associated with perceived artistic efforts on my part.)

And I will shoot a panorama with my digital camera oriented as close as possible to the orientation of the SolarCan so I can better correlate the two images.


Hopefully, my adventures and experiments observing the sun’s path will inspire you to consider doing the same (obviously, don’t look directly at it).

For me, even though I had an intellectual awareness of it from a very young age, watching the minute-by-minute, hour-by-hour, day-by-day shift via Buchananhenge and SolarCan gave me a firmer grasp of it.  And it also made me feel a bit more connected with this amazing universe we are all a part of.

IMG_0075

In fact, if you find this to be interesting, then you may also enjoy one of my favorite books, Connecting with the Cosmos, by Donald Goldsmith.  The subtitle says it all in a way;  each of the 9 chapters is dedicated to exploring a different aspect of the sky, starting with sunrise and sunset, my topic here in a way, through observing the moon and various constellations, all with the unaided eye.

So here’s to happy sky-watching and a great holiday season.  And thanks to all of you who continue to visit the blog.

David Sellers
Senior Engineer – Facility Dynamics Engineering

Visit Our Commissioning Resources Website at


Heat Pumps Don’t Create Energy, They Move Energy

Wow, it’s been a long time since I have written a blog post!

Times fun when you’re having flies …

as frogs are often heard to say.

I have been putting a lot of new content up on the Commissioning Resources website, so that has taken my time.  But fairly recently, I had a discussion with a friend who was having a hard time wrapping their head around the coefficient of performance of a heat pump/refrigeration process, and I came up with an analogy that – while not perfect – worked for them and which they found somewhat amusing.

So I decided I would try to resurrect my blog posting activities by sharing it, for what it’s worth.

The Question

The fundamental question was …

It seems like magic that you can get a COP = 4.  I’m having a hard time wrapping my head around the fact that you can get 4 units of energy OUT for putting in 1 unit of energy. 

The Somewhat Technical Answer

I started out by saying that  I thought maybe the key was to think about the compressor as doing work to move energy rather than creating the cooling effect.  

In other words, a refrigerant at a saturation temperature/pressure of “X”°F/Y psia will produce “Z” Btus of cooling via the phase change that occurs if heat is applied to the evaporator, causing the liquid refrigerant to boil and become a vapor.  It is the energy absorbed by the phase change process that produces the cooling.

The amount of energy absorbed per pound of refrigerant as well as the saturation pressure associated with the temperature that the phase change occurs at will be a function of the physical properties of the refrigerant. 

In other words, you may need to move “U” pounds of refrigerant A at a saturation temperature/pressure of “X”°F/Y psia, but move “V” pounds of refrigerant B to produce the same refrigeration effect at a saturation temperature/pressure of “X”°F/Y psia.

Once the refrigerant has gone through the phase change, the problem becomes getting rid of the heat by condensing the refrigerant.  One way to do that is to move it to a higher saturation temperature and pressure so that you can use some other medium that is cool relative to this new, elevated pressure and temperature to reverse the process and condense the refrigerant. 

The compressor accomplishes this for us by compressing the cool vapor from the evaporator.   In doing this, it does work on the refrigerant (the pv/J part of the steady flow energy equation) …

… and the amount of work it does can be determined by plotting the cycle on a pressure enthalpy diagram[i].

The work includes the irreversibility losses, i.e. there is a change in entropy.  All of this will be specific to the refrigerant that is used, as will the evaporator saturation temperature and pressure relationships.

In addition, you will put more energy into the compressor motor than you get out as shaft power to the compressor because of the losses in the motor. If the motor is cooled by the refrigeration process, then these losses will also show up as heat to be rejected at the condenser.

At the evaporator coil and condenser coil, the energy transfer is 100% efficient; i.e. 100% of the energy removed from the fluid flowing through the evaporator shows up as vaporized refrigerant and 100% of the energy removed from the condenser by the air or water flowing through it shows up as an increase in air or water temperature.  But the amount of energy rejected is more than the cooling effect because the compressor energy is also being rejected. 
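Putting some round numbers on all of that:  here is a sketch of how a COP of 4 falls out of the enthalpies read off a pressure-enthalpy diagram.  The state-point values below are hypothetical round numbers, not data for any particular refrigerant:

```python
# COP from a pressure-enthalpy plot: the refrigeration effect is the
# enthalpy gained across the evaporator; the input is the enthalpy added
# by the compressor.  Enthalpies below are hypothetical (Btu/lb).

h_evap_in = 45.0    # liquid entering the evaporator, after the expansion device
h_evap_out = 105.0  # vapor leaving the evaporator, entering the compressor
h_comp_out = 120.0  # vapor leaving the compressor, entering the condenser

refrigeration_effect = h_evap_out - h_evap_in   # 60 Btu/lb of cooling
compressor_work = h_comp_out - h_evap_out       # 15 Btu/lb of work input
cop = refrigeration_effect / compressor_work

# The condenser has to reject both the cooling effect and the work
heat_rejected = h_comp_out - h_evap_in          # 75 Btu/lb

print(cop)  # 4.0 -- four units of heat moved per unit of work, no magic
```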

I think my friend kind of knew this all along;  he basically alluded to it in what he said when he initiated the discussion.  But somehow, my saying it back to him caused the dots to connect.  All I really did was mirror back what he already knew.  That is the power of having a discussion, I think.

But at that point, I was on a roll, so I continued with my analogy, which they patiently tolerated.   (You, of course, can just stop reading this and I will never know). 

The Analogy

Suppose you have a nice little cabin out in the Pacific Northwest woods next to a very pretty, deep lake that is fed by streams, which in turn are fed by melting glaciers.  Most of the time the cabin is quite comfortable, but there is the occasional hot summer day when it would be nice to have some sort of cooling system.

One day, after going snorkeling to see the fish in the lake, you realize that the water towards the bottom of the lake is actually pretty cold, even though the surface water temperature is very pleasant.

That gives you an idea.  

You go buy an 800 cfm fan coil unit, install it in the basement of your cabin, and run a pipe from the inlet of the cooling coil out to just below the surface of the lake.  Then you add a vertical extension to the pipe so that when you open the valve to the coil, the head produced by the water level in the lake causes water to flow through your coil, with the flow drawn from the bottom of the lake, where the water is coldest.

You buy a kiddie pool to place under the outlet of the coil to catch the water so it doesn’t flood your cabin.  The good news is that you can make 76°F air with this arrangement, which will cool down your cabin;  the cabin is at 90°F but very low RH (i.e. the coil is running dry).

The bad news is that the 4.8 gpm it takes to do this adds up and the kiddie pool starts to overflow.  So you build a flume and reservoir that allows you to fill a bucket, climb up a ladder 15 feet, and dump the water into it, which returns the water to the lake.

The bottom line is that the system is doing a ton of cooling by changing the temperature of the water going through the coil by 5°F.

Natural forces are producing the cooling effect; 

  • The head created by the difference in the level of the lake and the outlet of your pipe moves the water through the coil to the basement. 
  • The ability of the water to absorb heat by changing temperature provides the actual cooling effect.  Basically, the lake water is your refrigerant;  it’s just doing the cooling with a sensible energy change vs. a phase change.
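The sensible-heat arithmetic behind the one-ton figure can be checked with the usual shortcut constants for standard water and air, using the 4.8 gpm and 800 cfm figures from the story:

```python
# Cross-check on the "ton of cooling" using the sensible heat shortcut
# constants for standard conditions:
#   500  = 8.33 lb/gal x 60 min/hr x 1 Btu/lb-F      (water side)
#   1.08 = 0.075 lb/cu ft x 60 min/hr x 0.24 Btu/lb-F (air side)

water_gpm = 4.8
water_dT = 12000.0 / (500.0 * water_gpm)   # deg F rise needed for one ton
print(round(water_dT, 1))                  # 5.0 deg F through the coil

air_cfm = 800.0
air_btuh = 1.08 * air_cfm * (90.0 - 76.0)  # 90 F cabin air cooled to 76 F
print(round(air_btuh))                     # 12096 Btu/h, about one ton
```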

But to keep your cabin from flooding you need to do some extra work to move the water back to the lake, which involves carrying a bucket of water multiple times from the basement level to the flume level.   When you dump the water into the flume, you are above the level of the lake. 

This is a bigger elevation change than the difference between the water level in the lake and the water level in the Kiddie pool.  But to get the water to flow from your cabin back to the lake, you have to dump it into the flume at the higher elevation. 

Bottom line, to keep the system working and keep from flooding your cabin, on average you need to move 4.8 gallons of water through a 15 foot elevation change.

But, of course, the mass of the water is not the only thing you move up the ladder.  You also move your own mass and the weight of the bucket.  If you do the math with the water horse power equation …

…   you discover that the water hp is about 0.018 hp. 

But if you convert the gallons of water in the bucket to pounds, add your weight and the bucket weight to it, and multiply by the 15 foot elevation change and the number of trips you need to make to keep the basement from flooding, you discover that you are doing about 0.087 hp of work, or about 220 Btu/h.

If your body is about 25% efficient, you would need to consume a lot of calories to keep this process going[ii].
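The arithmetic above can be reproduced as follows;  the 145 lb body weight and 6 lb bucket weight are assumptions chosen for illustration, while the flow, lift, and results match the figures in the text:

```python
# Water horsepower for returning the coil water to the lake, plus the extra
# work of hauling yourself and the bucket up the ladder.  Body and bucket
# weights are assumed values for illustration only.

gpm, lift_ft = 4.8, 15.0
water_lb_per_min = gpm * 8.34            # about 40 lb of water per minute

# Water horsepower: (gpm x head in ft x specific gravity) / 3960
water_hp = gpm * lift_ft / 3960.0
print(round(water_hp, 3))                # 0.018 hp

# One trip per minute carrying water + your body + the bucket up the ladder
body_lb, bucket_lb = 145.0, 6.0          # assumptions
total_ft_lb_per_min = (water_lb_per_min + body_lb + bucket_lb) * lift_ft
total_hp = total_ft_lb_per_min / 33000.0 # 33,000 ft-lb/min per horsepower
print(round(total_hp, 3))                # 0.087 hp
print(round(total_hp * 2544.43))         # about 221 Btu/h
```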

Since you find the free cooling to be quite desirable on the occasional hot day, but would rather not have to climb the ladder so much, you invent a device that can do that for you using solar cells as a source of power and begin to market your new product.  

Changing the Refrigerant

As a result of the success of your invention, you accumulate great wealth and decide to buy a place on the US Virgin Islands so you can spend some of your time there relaxing on the beach, snorkeling, and watching sunsets. 

Given the high temperatures and humidity levels, you decide to install your cooling system in one room of your beach house to provide a bit of relief from the heat and humidity, this time using seawater as the refrigerant.

When you commission your system, you discover a number of differences from the system in your cabin.  

For one thing, given the humidity in addition to the heat, as well as the available water temperature, you realize you probably will need a larger fan coil unit;  at one ton, your current model cannot dehumidify and only performs sensible cooling.  So while it helps, what is really needed is some relief from the humidity in addition to the heat.

But you decide that the sensible cooling is better than nothing, so you continue to commission the system while waiting for your new, larger fan coil unit to arrive.  In doing that, you discover that to create the ton of cooling, you need a bit more flow, specifically 4.9 gpm instead of 4.8 gpm.  

After investigating and determining that your flow measurement is in fact accurate, you realize that  the specific heat of seawater is lower than that of the pure fresh water in the lake by your cabin; 1.00 Btu/lb-°F for the fresh water vs. 0.96 Btu/lb-°F for the seawater.

In other words, it is a different refrigerant and because of its physical properties, you need to move more of it to produce the same refrigeration effect. 

You also realize that the reason you seem to float better when snorkeling in the Caribbean is that the density of the saltwater is higher than that of the fresh water in your lake back at the cabin;  62.29 lb/cu.ft. for the fresh water vs. 64.00 lb/cu.ft. for the saltwater.

That means that you have to do a bit more work to keep the system running.  More specifically, you find that you are moving 2,509 lb/hr of saltwater up the 15 foot ladder, or 0.0874 hp when you add your weight and the bucket into the mix.  This is in contrast with the 2,398 lb/hr you had to move up the ladder at your cabin, using 0.0866 hp when the weight of you and the bucket is added in.
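The fresh water versus seawater comparison can be reproduced from the properties quoted above:  a lower specific heat means more mass flow for the same load, and a higher density changes the gallons per minute required.

```python
# Same one-ton load with two different "refrigerants".  Property values
# (specific heat in Btu/lb-F, density in lb/cu ft) are the ones quoted
# in the text; the 5 deg F temperature rise matches the cabin example.

def required_flow(load_btuh, dT, cp, density_lb_cuft):
    """Return (lb/hr, gpm) of water needed to absorb the load sensibly."""
    lb_per_hr = load_btuh / (cp * dT)
    gpm = lb_per_hr / density_lb_cuft * 7.48 / 60.0  # cu ft/hr -> gal/min
    return lb_per_hr, gpm

fresh = required_flow(12000.0, 5.0, 1.00, 62.29)
sea = required_flow(12000.0, 5.0, 0.96, 64.00)
print(round(fresh[0]), round(fresh[1], 1))  # 2400 4.8
print(round(sea[0]), round(sea[1], 1))      # 2500 4.9
```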

Ultimately, you conclude that with a bit of development, you can expand your product line to provide a product suitable for providing relief to owners of USVI beach houses.  And what better place to do the development than from the deck of your beach house, overlooking the Caribbean.

Thus Ends the Analogy

Hopefully that was more useful than silly. 

The idea was to illustrate that the actual refrigeration effect was provided by the refrigerant (the lake water or sea water) absorbing heat.  But to reject the heat, work had to be done to move the heat to a location where it could be rejected. 

In the case of the initial example, it was done by carrying a bucket up a ladder to an elevation that would allow the water to flow back to the lake, where natural forces (like deep sky effect and evaporative cooling) would cool it back down.

But if you change refrigerants (seawater instead of fresh water), because its physical properties are different (it’s not as good of a refrigerant as pure water), you end up needing to move more mass to move the heat from the kiddie pool back to the ocean, where evaporative cooling and deep sky effect can cool it back down.

David Sellers
Senior Engineer – Facility Dynamics Engineering    

Visit Our Commissioning Resources Website at

[i]  If you want an example of a pressure/enthalpy diagram, you will find one in this blog post.  If you want to understand how to use one in practical terms, Sporlan publishes a very well done technical guide that is well worth reading, in my opinion.

[ii]  In working on the analogy, I found a really interesting blog post about the efficiency of the human body.  The author was looking at biking and walking. 

Here is a summary table from the post showing miles per gallon for different activities and energy sources.  The difference between food and gas/lard is the energy density of our average diet vs. the energy we would get if all we ate was lard, which was the closest he could come to the equivalent of gasoline in terms of energy density.


Satellites, Eclipses, and Happy Holidays

As some of you know, I am pretty interested in the weather.  So most days, while having coffee and settling into the office, I am poking around on-line, looking at things like the models that the University of Washington Department of Atmospheric Sciences makes available, looking at weather maps, and downloading data and plotting soundings with RAOB, trying to understand what they mean.

Sometimes, I even load data into Digital Atmosphere and try my hand at plotting a front.   Still a long way to go there but I think it may be kind of like learning to use a psych chart;  you just have to do it and it will eventually come to you.

20203351640_GOES17-ABI-FD-GEOCOLOR-1808x1808

But my favorite part of the routine is the time I spend looking at satellite imagery.  I find myself mesmerized by the colored view of the earth and the clouds just hanging there in space.

The images update every 10 minutes and you can even create a little animated loop and watch the terminator and weather systems sweep across the globe, as shown below.


I was doing this earlier this week when my eye caught something.  At first, I didn’t realize what was happening.  But then, it dawned on me (and you probably have already figured it out from the title);  I had just seen the eclipse from the vantage point of GOES West.

I thought it was really cool.  So I created animations for GOES West and East, downloaded them and figured I would share them here.  This first one is from GOES West, which is what initially caught my eye. South America is in the lower right part of the image so watch that area to see the shadow show up.

This one is GOES East, which gives a better view of things since South America is front and center.  I don’t know exactly what the yellow bars that show up at the end of the sequence are, but I think they had something to do with the satellite data set not being fully complete.  Fortunately, the eclipse is in the first part of the sequence.

If you want to slow things down or pause, I made a little video that includes both of the animations with the yellow bars edited out.  You will find it at this link.

If you go to the GOES imagery page and pick a view, you will discover that there are all sorts of ways to look at the images that reveal all sorts of different things about the atmosphere.  But the one that I love the most is the GeoColor product, which is what was used for the images above.

The image is actually a combination of different satellite data streams that creates a very vivid, realistic daytime image.  The nighttime image uses data from different infrared bands to show low liquid water clouds as differentiated from higher ice clouds.  The city lights come from a separate, static database and are provided to allow you to orient yourself.

To me, it is amazing to contemplate what you are seeing when you see that shadow pass over the surface of the earth; masses orbiting and interacting with each other in a perfect balance.   In the days leading up to Christmas this year, we will have the opportunity to see a different manifestation of that ballet as Saturn and Jupiter come into the closest conjunction they have been in for some 800 or so years.[i]

Saturn and Jupiter Conjunction

Some have even hypothesized that the star of Bethlehem may have been just such an event.

So now, (if you are still reading this) you are thinking O.K. there is the  “Happy Holidays” part of the post title.   And that is in fact part of it.

But, the other part of it is to point out that we did not always have such a spectacular view of our home available to us at our finger tips.  Prior to this time of year in 1968 – specifically December 21 through 27, 1968 – the most remote vantage point had been what Pete Conrad and Richard Gordon had captured for us from 850 miles up on their Gemini 11 mission, which is shown below [ii].

850 miles up 7-s66-54706-b

But on Christmas Eve, 1968,  the crew of Apollo 8 – Frank Borman, James Lovell, and William Anders – captured an earth rise while orbiting the moon; the first time humans had done that.


The image [iii] is, of course, quite famous;  some have called it

the most influential environmental photograph ever taken[iv]

I tend to agree with that, having seen it with  my own eyes that evening.  That image, and the lunar surface rushing by and the words the astronauts shared that evening[v] are burned into my memory.  It definitely is part of the reason I do what I do these days.

Later that evening – actually, I think in the early hours of Christmas day (EST), this sequence of transmissions occurred (I believe the time stamp is hours into the mission and liftoff was at 7:51 a.m. EST on December 21, 1968):

089:31:12 Mattingly: Apollo 8, Houston. [No answer.]

089:31:30 Mattingly: Apollo 8, Houston. [No answer.]

089:31:58 Mattingly: Apollo 8, Houston. [No answer.]

089:32:50 Mattingly: Apollo 8, Houston. [No answer.]

089:33:38 Mattingly: Apollo 8, Houston.

089:34:16 Lovell: Houston, Apollo 8, over.

089:34:19 Mattingly: Hello, Apollo 8. Loud and clear.

089:34:25 Lovell: Roger. Please be informed there is a Santa Claus.[vi]

If you followed the space programs, the hours and minutes between the Christmas Eve broadcast and the transmissions above were pretty important, because that was when the Trans-Earth Injection burn would happen.  This event involved the (single) engine in the service module igniting and accelerating the spacecraft out of lunar orbit into a trajectory that would carry it back to earth.

If the engine failed for any reason, the crew was not coming back.

Thus, the acknowledgement of the existence of Santa Claus.

Bill Anders, who took the earthrise picture above, often said something along the lines of:

We came to explore the moon and what we discovered was the Earth

Ultimately, I think why I am writing this is to encourage you to take some time to contemplate and fully appreciate that discovery.   I think it’s easy to take for granted in the world we are in.  But I also think it is crucial that we appreciate it.

In her 1976 album Hejira,  in a song titled Refuge of the Roads, Joni Mitchell wrote:

In a highway service station
Over the month of June
Was a photograph of the earth
Taken coming back from the moon
And you couldn’t see a city
On that marbled bowling ball
Or a forest or a highway
Or me here least of all

These days, I think that is an important perspective to keep.   When you look at our pretty little home from the vantage point of space, all of the things that seem to trouble us and divide us become invisible.   And what becomes apparent is that we are all in this together on a beautiful but tiny little life boat.


David Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at

i          Image Credit: NASA/ Bill Ingall

ii         NASA/Dick Gordon; Sept. 14, 1966 – View From Gemini XI, 850 Miles Above the Earth | NASA

iii       Image Credit: NASA/Bill Anders; Apollo 8: Earthrise | NASA

iv       Nature photographer Galen Rowell

v        This link will take you to a recording.  There are religious overtones, so fair warning if you find that sort of thing offensive.   Me personally, I am probably more spiritual than religious, but the moment was and still is very moving.

vi        Apollo 8 Flight Journal – Day 4: Final Orbit and Trans-Earth Injection


Posted in Uncategorized

What is the Energy Content of a Pound of Condensed Steam? (Part 3)

or, It Depends …

This post is the last in a string of posts that started out as an e-mail answering a question from one of the folks taking the Existing Building Commissioning Workshop this year at the Pacific Energy Center.   The question was about the energy content of a pound of steam, which seems like a simple question but it turned out not to be.

In the first post we explored different ways to address the question including using published conversion factors, rules of thumb, and steam charts and tables.  In the second post, we took a closer look at how steam is procured, including on-site generation and district steam systems and how those approaches impact the amount of useful energy that is recovered from the steam.  We also looked at ways to maximize the amount of energy that you extract from a pound of steam for use in your HVAC processes.

In this post, we will look at some common energy saving opportunities associated with steam systems.  I should also mention that you will find a number of general resources about steam in this blog post.


I have included a table of contents that will allow you to jump to a topic of interest.  The “Return to Contents” link at the end of each section will bring you back here.

Maintaining The Benefits

Even if set points and processes have been optimized, there are things that you should look for in order to maximize the benefits, no matter where your steam comes from and where the condensate goes.  Typical issues (a.k.a. EBCx and ongoing commissioning opportunities) include the following items.

Failed Condensate Return Pumps

Just because local boiler plants and campus district steam systems are set up to return their condensate and recycle it does not mean they are actually doing it. Condensate return pump failures are not unusual. 

Typically, when this happens, the receiver drain valves are opened until repairs can be made.  As a result, the condensate is dumped to the sewer, something that would not happen if the return pumps were operational.   Unfortunately, the failed pumps and open drain valves are often forgotten.

A facilities director friend of mine at a large campus in the Midwest instituted a policy in his weekly meetings where each operator was required to report on the condition of the condensate return pumps in the facilities they were responsible for.   “Not working” was the “wrong answer”, and the policy quickly resolved what had been an ongoing problem with failed condensate pumps, saving a lot of energy, water, and water treatment chemicals at the boiler plant.

<Return to Contents>

Failed Insulation

Condensate is hot, and insulation preserves the energy it contains.  Repairing damaged insulation typically delivers a quick payback and can frequently be accomplished in house.  All you need to do is measure the surface temperature with an infrared gun and look up the loss in a table or chart.
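If you want a rough first-pass number before consulting a table, the loss from bare pipe can also be estimated from first principles.  Here is a Python sketch; the combined convection coefficient and emissivity are assumed round numbers, not values from any published table, so treat the result as a ballpark figure only:

```python
import math

def bare_pipe_loss_per_ft(surface_f, ambient_f, pipe_od_in,
                          h_conv=2.0, emissivity=0.9):
    """Rough heat loss per linear foot of bare pipe (Btu/hr/ft).

    h_conv is an assumed natural convection coefficient in
    Btu/hr/ft^2/F; radiation is added via the Stefan-Boltzmann law.
    For real surveys, use a tool like 3EPlus or a published table.
    """
    area_per_ft = math.pi * (pipe_od_in / 12.0)  # ft^2 per foot of pipe
    q_conv = h_conv * (surface_f - ambient_f)    # convective loss per ft^2
    sigma = 0.1714e-8                            # Btu/hr/ft^2/R^4
    ts_r, ta_r = surface_f + 459.67, ambient_f + 459.67
    q_rad = emissivity * sigma * (ts_r ** 4 - ta_r ** 4)
    return (q_conv + q_rad) * area_per_ft

# Bare 4 in. line (4.5 in. OD) at 180F surface in a 75F space
loss = bare_pipe_loss_per_ft(180, 75, 4.5)  # roughly 400 Btu/hr/ft
```

Comparing a number like that to the loss with insulation in place gives you a sense of the savings available from a repair.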


There are a number of resources at this link that will help you get started.

<Return to Contents>

Steam Trap Failures

For a steam system to work properly, it is important to ensure that only condensate leaves the steam system.  Steam traps accomplish this function but can fail if they are not properly monitored and maintained.  If a trap fails, live steam enters the return system, wasting the energy it contains and potentially causing other issues on the return side.

The infrared thermometer shown above for checking out insulation savings will also help you find a failed steam trap.   If there is a temperature drop across the trap, with the leaving temperature at or below the saturation temperature for the pressure in the return, then the trap is probably doing just fine, like this one.


But if the trap has failed, the temperature in the return line will be up near the saturation temperature of the steam, like this.


It is important to realize that the high temperature downstream of the trap means that a trap in the area has failed, not necessarily the trap you took the temperature across.

In other words, the steam leaking by from a failed trap will raise the temperature of all of the pipe in its vicinity.  So to narrow things down, you may need to use an auto mechanic’s stethoscope to listen for the steam jetting through the outlet orifice in the trap.
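If you log return line pressures and temperatures, the screening test described above can be automated.  Here is a minimal Python sketch; the Antoine-equation fit for water and the 5°F margin are my assumptions, and a real survey would use steam tables:

```python
import math

def water_sat_temp_f(pressure_psia):
    """Approximate saturation temperature of steam (F) for a given
    absolute pressure, using an Antoine-equation fit for water.
    Good to within a few degrees over typical return pressures;
    use steam tables when accuracy matters."""
    p_mmhg = pressure_psia * 51.7149
    a, b, c = 8.14019, 1810.94, 244.485  # Antoine constants; T in C, P in mmHg
    t_c = b / (a - math.log10(p_mmhg)) - c
    return t_c * 9.0 / 5.0 + 32.0

def trap_looks_failed(downstream_temp_f, return_pressure_psia, margin_f=5.0):
    """Flag a likely failure: downstream pipe temperature at or near the
    saturation temperature for the return line pressure."""
    return downstream_temp_f >= water_sat_temp_f(return_pressure_psia) - margin_f
```

Remember the caveat above, though: a hot return line tells you a trap in the area has failed, not necessarily the one you measured.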

There are resources at this link that can help you assess steam trap failures and the related savings.

<Return to Contents>

Piping Failures Due to Corrosion

Condensate tends to be corrosive because the carbonate and bicarbonate ions that enter the boiler with the feedwater break down due to the heat and pressure in the boiler. One of the byproducts is carbon dioxide gas, which leaves the boiler with the steam and then reacts with the condensate to form carbonic acid.


There are water treatment strategies that can be used to control this, as well as piping materials that can minimize the potential for failure.  But my point here is that when a failure occurs, the condensate is lost along with the benefits of returning it to the plant.

<Return to Contents>

Long Pipe Runs to the Central Plant

As mentioned in the previous blog post under Paradoxes, long pipe runs to the central plant can result in parasitic losses, even if they are insulated.  As a result, a number of campuses I have been involved with include a heat exchanger in the condensate return system that is used to recover energy from the condensate for local use, perhaps preheating domestic hot water or serving other loads that can be served by low temperature hot water.

<Return to Contents>


Thus ends another string of somewhat long blog posts.  Hopefully, they have given you some insights into how much energy is associated with a pound of condensed steam, techniques that can be used to evaluate it, and ways that you can maximize the potential and maintain the benefits of a system that uses steam as a source of heat.


David Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at

Posted in Boilers, Hot Water Systems, and Steam Systems, HVAC Calculations, HVAC Fundamentals, Operations and Maintenance, Steam Systems

What is the Energy Content of a Pound of Condensed Steam? (Part 2)

or, It Depends …

This post builds from the previous post, which started out as an e-mail answering a question from one of the folks taking the Existing Building Commissioning Workshop this year at the Pacific Energy Center.   The question was about the energy content of a pound of steam, which seems like a simple question but it turned out not to be.

In the previous post, we explored different techniques that could be used to assess the energy content of a pound of steam and looked at where the value used by ENERGYSTAR® for converting pounds of steam from a commercial district steam system to Btus came from.  It turned out to be associated with receiving steam at a delivery pressure of 150 psig, saturated and then dumping the condensate to the sewer.  

Dumping the condensate wastes quite a bit of energy, which is the reason the ENERGYSTAR® conversion factor seems high when you compare it to what you might expect based on rules of thumb or even an analysis that looked at the latent heat of vaporization for 150 psig saturated steam.   This approach also wastes water, another important resource with embedded energy implications. 

The good news is that there are other approaches that can be used to reduce the wasted resources.   This post looks at some of them as well as ways to maximize the amount of energy extracted from a pound of steam before it is recycled or dumped to the sewer.


Despite breaking up the original behind this into a string of posts, each post in the string is still somewhat long.  So, to minimize the pain for someone just wanting the bottom line, I have included a table of contents that will allow you to jump to a topic of interest.  The “Return to Contents” link at the end of each section will bring you back here.

Steam System Resources

I thought I would mention that there are several blog posts that will connect you with resources on steam and steam systems.

Steam Heating Resources will connect you with a really good book titled The Lost Art of Steam Heating.  It also connects you with some articles Bill Coad wrote on the topic and a number of other resources.

Assessing Steam Consumption with an Alarm Clock is the first in a series that looks at a way that you can develop a steam system flow profile by monitoring condensate pump and feed water pump operation.  It was something Chuck McClure taught me very early in my career, and I still use the technique to this day (but with data loggers instead of alarm clocks).

<Return to Contents>

District Steam vs. Onsite Generation

The Operating Cycle

In terms of how condensate is handled, what I described in the previous post for a typical commercial district steam system (dumping it to sewer)  is a totally different scenario from what would happen if you had boilers on site generating the steam.  In the latter situation, the condensate is collected and returned to the boilers and recycled.   Some fresh water is added to make up for any losses due to leaks or the use of steam in a process (direct injection humidification for instance) and to make up for the water that is intentionally drained from the system to manage total dissolved solids levels (typically termed blow down). 

But for most facilities with local boiler plants generating steam, returning the condensate minimizes the amount of energy needed in the boiler to create steam since it only needs to heat the feedwater from the condensate return temperature (typically in the 140-200°F range) vs. heating it from the ground water temperature, which can be in the 45-50°F range for some parts of the year.  This practice also minimizes the consumption of water, another valuable resource. 
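To put rough numbers on that benefit, here is a back-of-the-envelope Python sketch.  The 1,190 Btu/lb steam enthalpy (roughly saturated steam at 100 psig) and the h = T - 32 Btu/lb approximation for liquid water are my assumptions, chosen to keep the arithmetic transparent:

```python
def boiler_input_per_lb(feedwater_temp_f, h_steam_btu_lb=1190.0):
    """Approximate boiler heat input (Btu) to make one pound of
    saturated steam.  Assumes liquid water enthalpy of about
    (T - 32) Btu/lb, i.e. cp of roughly 1 Btu/lb/F, and an assumed
    steam enthalpy of 1,190 Btu/lb."""
    return h_steam_btu_lb - (feedwater_temp_f - 32.0)

with_return = boiler_input_per_lb(180)    # 1,042 Btu/lb with hot condensate
without_return = boiler_input_per_lb(50)  # 1,172 Btu/lb with cold make-up
savings_pct = (without_return - with_return) / without_return * 100  # ~11%
```

In other words, returning 180°F condensate instead of feeding 50°F make-up water trims the boiler input per pound of steam by something on the order of 10%, before you even count the water and chemical savings.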

For a steam system of this type, you would probably not be entering thermal energy into ENERGYSTAR® as pounds of steam.  Rather, you would be entering it based on the fuel you used to fire the boilers.  This would reflect the net energy input required to bring the returned condensate back up to boiling temperature along with converting it to steam.

That’s not to say you would not be interested in the pounds of steam produced because that would tell you about the efficiency of your generating process.  And you would also be interested in the net energy change that occurred as the steam was condensed and the condensate was cooled, either intentionally or via parasitic losses like leaks or poor insulation.  If you had energy recovery devices in your boiler flue, you would want to consider their contribution also.

<Return to Contents>

The Operating Cost

If you were to compare the cost of a million Btus in the form of gas, which you would then burn in a boiler to make steam, with the cost of a million Btus delivered by a third party supplier as steam, the steam option would seem crazy expensive.   And it is, if the cost of a Btu is all you look at.

But, if an Owner elects to buy steam instead of gas, part of what they are electing to do is to not operate a boiler plant.   That has a number of implications including:

  • No need to purchase the boilers and related auxiliary equipment in the first place.
  • No need to operate the boiler plant, which may require operators with a different skill set from those needed to simply use steam rather than generate it.  And it may require a round-the-clock operator presence depending on the pressure and temperature of the steam that is required.
  • Dealing with natural gas increases the level of risk associated with operations compared to dealing with just steam (which is not without risk).
  • The reliability of a central plant may be much higher than a local plant unless significant investments were made in machinery and systems to provide N+1 redundancy at a local level.
  • The ASHRAE Systems and Equipment handbook has a chapter dedicated to  District Heating and Cooling systems that includes a discussion of the economic considerations and other issues if you want to learn more.

    <Return to Contents>

Campus District Steam Systems

It is not unusual at all for college, university, industrial and commercial building campuses (like the wafer fab I worked at) to use a central steam plant to serve multiple buildings on one site, basically a district steam system approach.  However, unlike the commercial district steam system we have been looking at, most of the systems I have been around are set up to return the condensate to the central plant.

Typically, this is accomplished by providing one or more condensate receivers for each building to capture the condensate for the facility.  The receivers are equipped with pumps that move the condensate from the receiver to a return system that collects it and returns it to a receiver in the central plant.

From there it is pumped to a feedwater system, where any necessary make-up water and water treatment chemicals are added and it is often deaerated (heated to drive out dissolved oxygen).  Pumps then move the treated condensate (now called feedwater) into the boiler as required by the load conditions, usually based on boiler water level.  Thus, the energy and water associated with the distributed steam is recovered instead of being dumped to sewer.

The picture below will give you a sense of what this might look like.  It is from the central plant at the wafer fab I worked at for a while.

Boilers

The cylinder in the lower left is one of the high pressure boilers.  We generated steam at 100 psig and distributed it to various locations on the site, where it was reduced to 5-10 psig for use in heat exchangers and coils.

The large elevated cylinder in the center of the picture is the deaerator and feedwater tank.  The feedwater pumps are located below it.  Condensate was returned to this tank by condensate pumps at the various points of use out in the facility.  The picture below will give you a visual on what a typical condensate pump looks like.

Condensate Pump

In the deaerator, the returned condensate was heated to 200°F+ to drive out the dissolved oxygen.  Then it was pumped to the boilers by the feedwater pumps when needed based on the water level in the boilers.

So for a steam system of this type, you really would be justified in doing some sort of analysis similar to the example in the previous post to come up with the kBtus delivered to the facility from the pounds of steam that you consumed (including the parasitic losses), even if you are billed by the central plant based on pounds of steam.  That would allow you to enter your consumption using a multiplier of 1 instead of 1.194.  And that would be legitimate (in my estimation) because by recycling the condensate, you are returning the energy and water associated with it back to the process rather than throwing it down the drain.

<Return to Contents>

Why Not Return the Condensate?

You may be wondering why a commercial district steam system would not include a return system that allowed them to collect and recycle the condensate from the loads they serve.  I can’t say that I know the answer to that for sure.  But my guess is that it has to do with a number of economic and operational factors that make it financially more attractive for the business entity to not deal with a condensate return system.

There are a number of things that make dealing with a condensate return system challenging, especially a system that covers an extensive area.  The map below illustrates the piping network associated with Clearway Energy Thermal San Francisco, which provides district steam to a number of cities across the country.


To give you a sense of scale, the map is probably in the range of 1-1/2 miles on a side. That is a pretty significant network to maintain; miles and miles of pipe running underground below streets and sidewalks.   Challenging enough for the steam piping, which is at high pressure and experiences significant thermal expansion and contraction.

While the pressures would be lower for a condensate return system, the thermal expansion and contraction issues will still exist.  And you would need to have multiple pumping stations to move the condensate back to the central plant location.  

Probably most significantly, condensate tends to be corrosive for a number of reasons.   And ensuring that the customers maintained the equipment necessary to return the condensate to the system can also be an issue.

So, those are some of the reasons that I suspect a commercial supplier finds it easier (more economical) to not deal with returning condensate.  Over time, as the value of energy and water increase, that could change.  After all, when we dump the condensate to drain, we are throwing away at least two resources (energy and water) and probably a third (boiler feedwater treatment chemicals).

<Return to Contents>


All of this may lead to the question:

What can we do to make steam and condensate return systems as efficient as possible?

The answer (as you might guess) is:

It depends …

The first thing to consider is whether you have maximized the extraction of energy from the steam and condensate that was delivered to you.  The other is to make sure you are maintaining the mechanisms that deliver those benefits.

<Return to Contents>

Maximizing the Benefits

One way to maximize the benefits of a high temperature resource like steam is to make sure you have reduced the temperature in a way that provides useful heat to the facility as much as possible.

Cooling the Condensate via a Separate Process

It is easy to think that the energy benefit of steam is associated with condensing it.  And in the context of Btus per pound extracted, a phase change beats sensible cooling hands-down.   But, given that the condensate coming off a process that is condensing steam at atmospheric pressure is still quite hot, there may be some significant benefit associated with subcooling it.

For the process we looked at in the previous post, when I illustrated how to use a p-h diagram, the condensate came off the process at 212°F.   If there are loads in the facility that can be served by a fluid that is at this temperature or lower, then it may be possible to serve them by cooling the condensate rather than by condensing steam. 

Examples include processes like preheating outdoor air, preheating or heating domestic hot water, heating swimming pools, heating spaces and/or loads with less stringent temperature requirements like parking garages, and snow melting systems.  The viability of these processes from an economic standpoint can vary a lot, depending on:

  • Whether you are considering this option during design or in the context of an existing building,
  • The value of the resources, and
  • What happens to the condensate after it leaves your facility (i.e., is it dumped to the sewer or recycled).

But to illustrate the point, let's consider what would happen if we took the condensate coming off the process I illustrated in the p-h diagram in the previous post and subcooled it to 160°F, perhaps by using a heat exchanger to preheat domestic hot water or to hold it at about 150°F in a storage tank.


As you can see, this would recover about 30% of the energy that would otherwise have been thrown down the drain based on the district steam conversion factor that ENERGYSTAR® would use for systems that were billed in terms of pounds of steam consumed.[i]
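One way to reproduce that roughly 30% figure is with the sensible heat arithmetic below.  The h = T - 32 Btu/lb liquid enthalpy approximation and a constant specific heat of 1 Btu/lb/°F are simplifications I am assuming here:

```python
# Sensible heat recovered by subcooling the condensate from 212F to
# 160F, as a fraction of the energy in the 212F condensate that would
# otherwise go down the drain.
cp = 1.0                             # Btu/lb/F, specific heat of water
recovered = cp * (212 - 160)         # 52 Btu/lb recovered by subcooling
drain_energy = 212 - 32              # ~180 Btu/lb in the hot condensate
fraction = recovered / drain_energy  # ~0.29, i.e. about 30%
```

So roughly 52 of the 180 or so Btu/lb in the hot condensate is captured just by cooling it 52°F before letting it go.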

An interesting paradox about this is that if you made this change in a facility where the domestic water heating was provided by electricity, you would see a drop in electrical consumption but no increase in the pounds of steam that were used.  That is because you would have been extracting more energy from the steam consumed for other purposes before discarding it to sewer. 

In contrast, if the domestic water had been provided by using steam in a heat exchanger directly, this change likely would have reduced the steam consumption because you would have been extracting more energy from the steam that was used by other processes, like preheat, heating, and reheat, before discarding the condensate.

Of course, for this to all work out, the loads generating the condensate would need to be concurrent with the domestic hot water load requirement.  If they weren’t, then alternative energy sources would need to be used to meet the load.

<Return to Contents>

Cooling the Condensate by Optimizing Process Set Points using a Reset Schedule

The Design Day is Not Everyday

If you study load profiles for a while, you will realize that the design condition is an anomaly.  In other words, equipment selected for the 99% ASHRAE heating design condition will be oversized for about 99% of the hours in the year.  The psych chart below illustrates this for Columbus, Ohio, a location that sees a wide range of outdoor conditions over the course of a year.


The colored squares are a bin plot of the climate data;  warmer colors have more hours at the conditions inside the square than cooler colors, as can be seen from the key at the lower left of the chart.   Notice how most of the data points lie between the different design values, not on them.

That means that if, for instance, you selected a reheat coil serving a perimeter zone where, on the design day, the coil needed to supply 94-95°F air to offset the losses that were occurring through the envelope, then as it warmed up outside, the coil would not need to supply air at that temperature, all other things being equal.

Heating and Reheating are Different Processes

In fact, once the outdoor air temperature rose above the balance point for the building (the point where the internal gains exactly offset the losses through the envelope), the coil would no longer need to provide heat; it would only need to provide reheat and, in the worst case, deliver air at the zone temperature (a.k.a. “neutral air”).  This is a very important point to understand.

Since this post is already very long, I will save a detailed discussion of this for a subsequent post.  But in a nutshell (perhaps a coconut shell) a coil that is doing heating is adding energy to the area it serves to offset losses (usually envelope losses) in order to maintain the desired space temperature.  Thus, it will need to deliver air that is warmer than the targeted space condition.

In contrast, a coil that is doing reheat is delivering air that is cooler than the space condition but warmer than the air that is coming from a central system serving multiple zones.  The reason for doing this is that the central system leaving air temperature was likely set based on a design day dehumidification requirement.  Then the flow rates to the zones were set based on the zone sensible load and the design day coil leaving air temperature.  

Because of the design process I just described, given a mix of zones, it is possible that an interior zone, say a server room, with a very constant load condition, will require the design day flow rate and temperature under all operating conditions.  In contrast, a perimeter zone likely will not because the transmission and solar loads will change from hour to hour, day to day, and season to season.  Thus the design day flow rate and temperature will tend to over-cool it much of the time.

For the perimeter zone, this could be mitigated up to a point by reducing the flow rate.  But there can come a point when the flow rate has been reduced to the minimum flow required for ventilation, and delivering air at that rate and at the design day supply temperature (which cannot be raised because the server room still needs it) will over-cool the zone.  Thus reheat becomes necessary if we want to keep the zone clean, safe, comfortable, and productive, which are the basic goals of an HVAC process.

So, the reheat coil warms the air up slightly.  But since there is still a need for some cooling, the air is still delivered to the zone below the zone temperature.  In the limit, the highest temperature the reheat coil would need to provide, under conditions where there were no energy losses from the space, would be the space design temperature, which maintains the ventilation requirement without over- or under-cooling the space.

Real World Coil Performance and Performance Requirements

It turns out that a coil that is selected for the design heating condition using, for example, 180°F water can provide reheat with much cooler water.   I discovered this one day early in my career when the “dots connected” for me about the difference between reheat and heating.  Joe Cook (the lead operator at the facility I was working in at the time) then proved it by lowering the water temperature on the system until he got a cold call.

In other words, Joe “asked the building” and I attribute my belief in that process (note the words in the banner of the blog) to this event and Joe.  Tom Stewart and I eventually wrote a paper about it for ACEEE, which you can find here if you are interested.

You can also demonstrate this by modeling a coil, locking down the physical characteristics like the fin spacing, circuiting, face area, etc. and then playing with the entering water temperature and flow rate to see what happens.   Here is an example I developed using Greenheck’s free coil selection program.

Modeling a Coil on the Design Heating Day

I first modeled the coil to serve the heating load in a perimeter zone, which required 94-95°F air on the design heating day.  Here are the coil’s physical characteristics …


… and here is the performance on the design day supplied with 180°F water and taking a 20°F temperature drop on the water side to match the heat exchanger selection I have been using as an example in this post.  The entering air condition is 53°F, the design day cooling coil discharge temperature that is required by a server room on the same air handling system, even though it is the design heating day.


Modeling the Same Coil on a Day When Only Reheat Is Required

Here is the performance achieved with that same coil if I reduce the entering water temperature to 110°F and take a 20°F waterside temperature drop with 53°F entering air.


Note that I am able to deliver 67.4°F air and only use 1.9 gpm to do it (35% of the design flow rate).   If I were to maintain the design flow rate of 5.5 gpm, I can deliver near neutral air.


Heat Exchanger Performance at a Reduced Leaving Water Temperature and a Lower Flow Rate

If we look at how the heat exchanger I have been using in this example would perform if I reduced the water side flow rate by 50%[ii] and lowered the set point from 180°F to 110°F, it turns out that the condensate coming off of it would be at 141.4°F.   Here is what that looks like if you plot the process out on the p-h diagram.


Here is that same diagram at a smaller scale and cropped to focus on the condensate condition (left image) next to the design day process (right image) so you can compare them.


Notice how the condensate leaving the lower temperature heat exchanger process has an enthalpy of 109 Btu/lb compared to 181 Btu/lb for the design day process.  Thus, operating at a lower temperature allows us to recover more of the available energy from the steam that was delivered.

More specifically, by operating at a 110°F supply water temperature, we now recover 1,084 Btu/lb from the steam vs.  the 1,012 Btu/lb that we recovered operating at a 180°F supply water temperature set point. That’s a 6% improvement in making beneficial use of the 1,194 Btu/lb that the ENERGYSTAR® conversion factor would attribute to a district steam system where the condensate was dumped to sewer.
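The numbers in that comparison can be checked with the same liquid enthalpy approximation used elsewhere in this string of posts (h of roughly T - 32 Btu/lb); the 1,194 Btu/lb delivered energy and the 181 Btu/lb design day condensate enthalpy come from the p-h diagrams, while the approximation itself is my simplification:

```python
delivered = 1194.0             # Btu/lb, ENERGYSTAR district steam factor
h_cond_design = 181.0          # Btu/lb, condensate off the 180F process
h_cond_reset = 141.4 - 32      # ~109 Btu/lb at 141.4F, using h ~ (T - 32)
recovered_design = delivered - h_cond_design  # ~1,013 Btu/lb recovered
recovered_reset = delivered - h_cond_reset    # ~1,085 Btu/lb recovered
improvement = (recovered_reset - recovered_design) / delivered  # ~6%
```

The simple approximation lands within a Btu or two per pound of the values read from the p-h diagram, which is plenty close for screening an opportunity like this.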

But Wait, There’s More!

There would also be savings due to lower parasitic losses in the piping network.  In other words, even with insulation meeting code requirements for piping operating at 180°F, there are still losses. 

You can get a sense of this by using 3EPlus, a free application from the North American Insulation Manufacturers Association.  Here are screen shots comparing a  4 inch line operating at 180°F with code required 2 inches of insulation in a 75°F ambient temperature to that same line operating at 110°F.


The lower water temperature results in a 70% reduction in losses.  And while the Btu/hr/ft values are small, this is a situation where a little times a lot results in a big number.  In other words, there is an amazing amount of pipe in a typical building system, sometimes several miles.  So if you save 10-15 Btu/hr/ft over thousands of feet of length, it can add up.
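To see how “a little times a lot” plays out, here is the arithmetic for a hypothetical system; the 12 Btu/hr/ft savings and the 5,000 ft of pipe are illustrative numbers, not values from a specific project:

```python
savings_per_ft = 12     # Btu/hr/ft, within the 10-15 range noted above
length_ft = 5_000       # illustrative length of distribution piping
hours_per_year = 8_760  # assumes the system runs year-round
annual_mmbtu = savings_per_ft * length_ft * hours_per_year / 1_000_000
# annual_mmbtu works out to about 526 MMBtu per year
```

A few hundred MMBtu a year from an insulation and temperature change is real money in most utility rate environments.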

Reset Schedule Bottom Lines

The bottom line is that implementing a reset schedule that adjusts the supply hot water temperature based on the outdoor air temperature will save resources for a number of reasons.

  1. More of the available energy that was delivered as steam is recovered before the condensate is discharged to the sewer.
  2. The parasitic losses associated with the distribution system are reduced.
  3. Because of items 1 and 2, the pounds of steam consumed will be reduced, improving the building’s benchmark.
  4. If the piping ran through places that contain conditioned air, like a ceiling return plenum, then the reduction in parasitic losses will also represent a reduction in cooling load.
  5. Because the building is using fewer pounds of steam, it will use fewer pounds of water, another important resource that we need to do our best to conserve.

All of this can be accomplished for a modest investment because in most situations, all that is required is a minor modification of the control system to add the reset schedule.  If the control system is a DDC system and was already monitoring outdoor air temperature, the improvement could be captured by making a relatively simple modification to the software.  The images below illustrate what this logic might look like before …


… and after modification.


Note that the “after” version includes some other enhancements like trending and graphic indication.  The diagrams were developed using an Excel based logic diagram tool that you can download here along with the actual logic diagrams.  If you want to dig in and understand it a bit, you will find an exercise here that uses a virtual EBCx project in a SketchUp model as a mechanism to present the opportunity and develop the logic.

<Return to Contents>

Flash Steam

It is not uncommon for the loads served by a steam system to use steam at a pressure significantly higher than atmospheric pressure.  The distribution systems we have been discussing for district steam systems are one example.  For these networks, because insulation is not perfect, energy is lost from the piping and some of the distributed steam condenses.  Condensation loads are even higher at start-up, when the piping is cold.

It is critical that this condensed steam be removed from the piping system to avoid significant operating problems and even catastrophic failures.  Towards this end, steam traps are provided at regular intervals and at elevation changes in the distribution system.  These traps are termed “drip traps” and the condensate coming off of them will be saturated liquid at the saturation temperature associated with the steam in the distribution system.

Steam fired sterilizers in labs and hospitals are another example of a load that must be served at a higher pressure, typically requiring steam at approximately 30 psig (often termed “medium pressure steam” in the industry).  The saturated condensate coming off of these loads is at a temperature above the 212°F saturation temperature associated with atmospheric pressure;  in this case, about 273°F.

As a result, if the condensate is dumped into a return system that is open to atmospheric pressure, some of the condensate will “flash” to steam.  In other words, the 273°F saturated condensate coming off a 30 psig (44.7 psia) process has a lot more energy than saturated condensate at 212°F.  The temperature difference reflects some of the additional energy content at the higher saturation temperature.

The enthalpy (total available energy) of the saturated 30 psig condensate is about 243 Btu/lb.  If you reduce the pressure that it experiences to atmospheric pressure, the condensate cannot exist at a saturated state and remain at 273°F;  it has too much energy to do that.

The condensate solves this problem by converting some of its liquid to steam;  exactly enough mass to absorb the excess energy.  You can use a steam table like the one I provided earlier to figure out exactly how much of the liquid will be converted to steam by reading the appropriate data directly or by interpolation.
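If you prefer code to tables, the flash calculation can be sketched in a few lines of Python.  The enthalpies are the steam table values used in this discussion (242.9 Btu/lb for the 30 psig condensate, 180.3 and 970.8 Btu/lb for saturated liquid and the latent heat at atmospheric pressure):

```python
# Flash fraction: how much of the high-pressure condensate turns to
# steam when its pressure drops to atmospheric pressure.
h_f_high = 242.9   # Btu/lb, saturated liquid at 44.7 psia (30 psig)
h_f_atm  = 180.3   # Btu/lb, saturated liquid at 14.7 psia (212°F)
h_fg_atm = 970.8   # Btu/lb, latent heat of vaporization at 14.7 psia

# The excess energy (h_f_high - h_f_atm) is absorbed by vaporizing just
# enough liquid to satisfy: x * h_fg_atm = excess energy.
x = (h_f_high - h_f_atm) / h_fg_atm
print(f"flash fraction: {x:.1%}")   # about 6.4% of the mass flashes
```

In other words, a bit over 6% of the mass leaving the process becomes low pressure steam, which agrees with the quality line the process lands on in the p-h diagram below.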


Or, you can plot the process out on a thermodynamic diagram like a p-h diagram where the process will look just like the throttling process we looked at previously and occur at a constant enthalpy.


One thing that is more apparent from the p-h diagram plot, at least to me, is that the result of the process is not pure, saturated water vapor.  Rather, it is a mix of saturated liquid and saturated vapor, a.k.a. wet steam.  This is what the thermodynamic term “quality” that I mentioned in the first post in the series is about.

Note that the “Flashed Steam Condition” is at about the 6.4% quality point (the constant quality lines are the curved, dashed black lines that mirror the saturated liquid and vapor lines).  What this is saying is that 6.4% of the mass leaving the process is in the form of steam, where a significant portion of the available energy (1,151.1 Btu/lb) could be captured by condensing it, which would provide 970.8 Btu/lb (1,151.1 Btu/lb – 180.3 Btu/lb).  The bulk of the mass is saturated liquid (condensate) where the available energy (180.3 Btu/lb) could be captured by cooling it.

Hopefully, in light of the preceding, you can see that if your high temperature condensate is going to end up at atmospheric pressure, then it will “flash”, although perhaps not in the way a non-thermodynamically oriented person would think of the term.


(I thought I would insert that as an amusing comic interlude and a reward for anyone who is still actually reading this.)

If you simply dump it into the low pressure return, a lot of problems can occur, including condensation induced water hammer (which can be quite destructive), along with poor return system performance in terms of steadily removing condensed steam from the loads and returning it to the collection point.

This problem is addressed by providing flash tanks, which are sized to allow the flashing process to occur without causing problems.  Here are pictures of a couple.



A number of steam system vendors provide very useful information about flash tanks, including Sarco and Armstrong if you want to know more.  

My point here is to say that the 970.8 Btu/lb of energy in the low pressure steam coming off of a flash tank is just as useful as low pressure steam generated in a boiler.  Yet, you frequently find flash tanks vented to atmosphere.  This may represent an opportunity.

One way of capturing the benefit is to vent the flash tank to the low pressure system header.  This will move the “Flash Steam Condition” line on the p-h diagram upward from atmospheric pressure. The lower the header pressure is, the more energy you recover.

<Return to Contents>

A Few Paradoxes

All of the opportunities we explored would extract more energy from a pound of steam relative to the process that occurs in the heat exchanger operating at the design supply water temperature.  As a result, they will reduce the pounds of steam consumed all other things being equal. 

In addition, the lower distribution temperatures associated with the reset schedule will save additional energy.  And using flash tanks to drop the temperature and pressure of medium and high temperature condensate will keep the condensate return system running more smoothly and quietly.

But, if the condensate is being recycled instead of dumped to sewer, the lower condensate return temperatures will mean that the boilers will need to add a bit more energy into the feedwater to get it to the steaming temperature as compared to what would be required if the condensate came back hotter.  So for systems that recycle their condensate, the impact of the lower temperature condensate on the cycle efficiency will be different from what it would be for a system where the condensate is dumped to sewer.

On the other hand, if the piping runs back to the central plant were long, there could be benefit to the lower temperature condensate because the energy would have gone into a useful process instead of being lost to the ambient environment on the way back to the plant. 

In other words, if the 200°F condensate leaving the heat exchanger has cooled to 140°F by the time it gets back to the central plant to be recycled due to the time it spent sitting around in condensate receivers and in long piping runs, then the boilers are going to have to heat it up from 140°F to the steaming temperature anyway.

In contrast, if it was cooled to 140°F to serve a domestic hot water load before being returned to the plant, the parasitic losses in the return system would be reduced and additional energy would have been extracted from the system for a useful purpose.

Extracting as much energy as possible for a useful purpose will improve the over-all cycle efficiency and will lower the parasitic losses in the condensate return system since it will be operating at a lower temperature.

<Return to Contents>

Thus far, we have talked about how to maximize the amount of energy extracted from a pound of steam.  In the final post in this series, we will look at how to ensure peak efficiency for your steam system in the long term.


David Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at

[i]     The ENERGYSTAR® conversion factor implies that you would reduce the enthalpy of the incoming steam to 0 – which is about where the saturated liquid (dark blue) line crosses the enthalpy axis – if you recovered all of the energy represented by a pound of steam.

[ii]    This was an arbitrary selection on my part.  You will recall that the coil I modeled could do quite a bit of reheat with only 35% of its design flow rate and a lower entering water temperature.  And it could deliver near neutral air if supplied with its design flow rate at the lower water temperature.

It would be somewhat unusual for an occupied zone to require neutral air if the building was above the balance point;  basically, that would indicate that there was no load in it and that you were still moving air through it.  Thus, for the sake of discussion, I assumed that a variable flow hot water system serving multiple zones and operating with a reset schedule that lowered the supply temperature as the outdoor air temperature rose would operate at less than design flow and arbitrarily selected 50% of design flow.


What is the Energy Content of a Pound of Condensed Steam? (Part 1)

or, It Depends …

This post started out as an e-mail answering a question from one of the folks taking the Existing Building Commissioning Workshop this year at the Pacific Energy Center.  But as I worked on it, I realized that the question had come up before and that the answer and related concepts might be useful to others.  On the surface, it seems like a simple question.  But if you really want to understand it, the answer is fairly complex.  Thus, this blog post.


This ended up becoming quite a long post (surprise, surprise, surprise).  So, I broke it up into several posts, which are still somewhat long.  To minimize the pain for someone just wanting the bottom line, I have included a table of contents that will allow you to jump to a topic of interest.  The “Return to Contents” link at the end of each section will bring you back here.


Students participating in the workshop are required to have access to a building that they can use as a living laboratory to apply the EBCx skills we teach in the class.  One of the first things they do is benchmark their building in the LBNL Building Performance Database and ENERGYSTAR®.  To benchmark, you typically need to convert the annual energy consumption of a facility into some sort of index, typically an EUI (Energy Use Intensity or sometimes also called an Energy Utilization Index). 

EUIs can be stated in terms of site or source energy.  If you want to know more about the difference, this blog post will provide the details.  In the discussion that follows, I will be considering things in terms of site energy.

EUIs typically have engineering units in the form of energy use per unit area per year, such as kBtu/sq.ft. per year (kilo or thousands of British Thermal Units per square foot per year).  Energy is not always billed directly as Btus.  For instance, electricity is billed in terms of kWh or kiloWatt Hours consumed.  District steam is often billed as pounds of steam consumed.  To create an EUI from the bill metrics, you need to convert the billing units to Btus.

In the industry, most people are pretty familiar with the conversion factor for kWh to Btus, which is 3,413 Btu per kWh and pretty invariable.  But there is less familiarity with how to convert a pound of steam to Btus, and there can be some variability related to exactly how the thermal energy is billed (kBtus, pounds of steam, thousands of pounds of steam, etc.) and the nature of the steam source (district steam, central plant, or boilers on site).  Bottom line:  if you want an exact value, it can become more complex than the single factor used to make the electrical conversion.

<Return to Contents>

The Question

As you may have guessed by now, the question I was asked was how to go about converting pounds of steam to Btus.   The answer is:

It depends ….

One of our students has a facility that purchases steam from a district steam system[i] and their bill states consumption in the form of Mlbs.  For example,

Total usage invoiced in Mlbs –  301.3

Note the letter “M” which means the unit of measure is not simply pounds, it is some multiple of pounds.

So the first part of answering the question is to determine what the “M” stands for, because to correctly answer the question,

It depends on the units of measure.

Most of us (probably because of computers) would take the M to be the SI (Système International; often referred to as metric) prefix denoting a factor of one million (1,000,000) as in the MBytes or MB associated with a file or hard drive size.  Thus we might conclude the bill is stating that the facility was being invoiced for 301.3 x 1,000,000 = 301,300,000 pounds of steam.

Unfortunately, that turned out not to be true in this case.

<Return to Contents>

Confusing Units

It turns out that there is another system of units that uses “M” for a multiplier;  the Roman Numeral System, where “M” is used to indicate thousands (1,000), not millions (1,000,000).  And to make things interesting, the industry uses both systems and (to me at least), seems to figure you will simply know which one applies. 

If you have been in the industry for a while, that is probably true.  But if you are new to it all (or suffer from aging brain cells like I seem to), then it can be confusing.  

For example, we have control systems that are moving and storing MB or Megabytes of data (where mega is the SI prefix for millions, so millions of bytes).  These systems can be monitoring and managing air handling systems that are moving cfm of air (where the “c” stands for “cubic”, not the SI prefix “centi” or hundredths, nor does it mean hundred, which is what it would stand for if it was a capital letter in the Roman Numeral system).

The air is often being cooled using electricity, which is often billed as kWh ( where the “k” means the metric prefix “kilo” or thousands of watt hours), and heated, perhaps, with steam generated by a boiler that might be rated in terms of  MBtu (where the M is the Roman Numeral M and means thousands of Btu), or MMBtu(still the Roman Numeral M, but two of them, meaning thousand thousand, or million Btu).

If the boiler is fired using natural gas, then the gas might be billed in terms of MCF (thousands of cubic feet, where the M stands for the Roman Numeral, but the C stands for cubic not the Roman Numeral for 100 and F stands for feet), or in terms of therms (where a therm stands for 100,000 Btus).

Or the consumption could be billed in terms of Dth (which combines therm with the metric prefix “Deka” or 10 to stand for 10 therms), which is approximately the same amount of energy as an MCF of natural gas (see above) depending on the exact heat content of the gas, which varies with the source of the gas.

Other than nuances like that, we have a pretty straightforward system of units in the industry.  So there should be little confusion about what things mean.
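Kidding aside, the ambiguity is easier to keep straight if you write the multipliers down explicitly.  Here is a sketch in Python of the energy-billing units described above; the dictionary reflects my reading of the conventions, so check the values against your own bills before relying on them:

```python
# The same letter can mean different multipliers depending on which
# convention (SI prefix vs. Roman Numeral) the bill uses.
BTU_MULTIPLIERS = {
    "Btu":   1,
    "kBtu":  1_000,          # SI prefix "kilo"
    "MBtu":  1_000,          # Roman Numeral M = thousand
    "MMBtu": 1_000_000,      # thousand thousand = million
    "therm": 100_000,
    "Dth":   1_000_000,      # dekatherm = 10 therms
}

def to_btu(quantity, unit):
    """Convert a billed energy quantity to Btus."""
    return quantity * BTU_MULTIPLIERS[unit]

print(to_btu(10, "Dth"))   # 10 dekatherms = 10,000,000 Btu
```

Note that "MBtu" and "kBtu" end up being the same number, which is exactly the sort of thing that trips people up.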

<Return to Contents>

Asking the Source

The student who asked the question went to the source (the utility representative) for clarification on the units on the bill.  And in this case, they were told that the M (Roman Numeral) actually stands for k (SI prefix), meaning that their bill was for thousands of pounds of steam.

So it seems that all that is needed now is to figure out how many Btus are released when you condense a pound (or a thousand pounds) of steam.  Frequently, that is done by making an assumption about the amount of energy associated with the phase change.  But if you want a more exact answer, it is a bit more complex than a single number.

It is also an interesting (in a nerdy sort of way) saturated system physics exercise.  So I thought it would be worth looking at both techniques.

<Return to Contents>

Using a Simplifying Assumption

There is nothing at all wrong with using a simplifying assumption.  Being math-phobic and often pressed for time in terms of coming up with an answer, I do it all of the time.  But if you do it, I think it is important to recognize the constraints that your assumption places on the result so you don’t take yourself too seriously if the discussion becomes more precise.  And you need to understand if the assumption can actually be used in the context of a given discussion.

In this case, our simplifying assumption might be based on the fact that most condensate return systems are open to atmospheric pressure at some point, usually at the condensate receiver.  So, we could look at the amount of energy released if we were to condense 1 pound of steam at atmospheric pressure.

You can find this value in a steam table.  Steam tables contain empirically derived values for the various properties of water under different conditions of temperature and pressure.  You can find them in classic publications like Keenan and Keyes online, in the ASHRAE handbooks, or you can even build one yourself as a learning exercise using REFPROP, like I did to create the table below.


Note that the pressures in the second column are in absolute pressure units, not the gauge pressure units we are probably more accustomed to.  In other words, the pressures are referenced to a pure vacuum, 0 psia.  So atmospheric pressure is 14.7 psia or 0 psig.

The value we are interested in is the latent heat of vaporization at atmospheric pressure (highlighted in orange above) which is the difference between the enthalpy of the water vapor (steam) and the enthalpy of the liquid water at the condition we are interested in.  In this case, the value is 970.8 Btu/lb.

To estimate the amount of energy associated with a bill for 301.3 thousand pounds of steam based on the assumption that the steam was condensed at atmospheric pressure, we could do a bit of simple math, like this.


If we needed to convert this to millions of Btu, we would just divide the result by 1,000,000, like this.


We could even create a multiplier that we could directly apply to future bills to give us the answer.


In fact, the student who inspired this post was planning on using this multiplier.  All I have done up to this point is illustrate where it came from and that there is an assumption behind it. 
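Putting the three steps together, the arithmetic can be sketched like this in Python, using the 970.8 Btu/lb atmospheric pressure assumption and the thousand-pound interpretation of “M” discussed above:

```python
# The simple math behind the multiplier, assuming the steam condenses
# at atmospheric pressure and "Mlb" means thousands of pounds, as the
# utility confirmed.
h_fg_atm   = 970.8   # Btu/lb, latent heat at 14.7 psia
lb_per_mlb = 1_000   # "M" here is the Roman Numeral thousand

billed_mlb   = 301.3
energy_btu   = billed_mlb * lb_per_mlb * h_fg_atm
energy_mmbtu = energy_btu / 1_000_000

# A multiplier that could be applied directly to future bills.
multiplier = h_fg_atm * lb_per_mlb / 1_000_000   # MMBtu per Mlb
print(f"{energy_mmbtu:.1f} MMBtu  (multiplier: {multiplier:.4f} MMBtu/Mlb)")
```

The bill works out to roughly 292.5 million Btu, and the multiplier is simply 0.9708 MMBtu per thousand pounds of steam.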

How much does that assumption impact the accuracy of the EUI and benchmark?  Well,

It depends on the magnitude of the difference between the assumed value for the enthalpy change that occurs when the steam is condensed relative to the actual value of the enthalpy change produced by the thermodynamic processes used to extract energy from the steam at the facility.

It also depends on what you do with the condensate.

<Return to Contents>

Seeking A More Exact Solution

Truth be told, in the olden days, folks (such as myself) would assume that condensing a pound of steam was worth about 1,000 Btus.  It made the math easier if you were using a slide rule or four function calculator.  And, if you contemplate the steam table above, you can see that it probably meant we were accurate to within 10% or better over a pretty broad range of conditions.

But, if you consider what is really going on in the context of the data in the steam table, you realize that assuming the latent heat of vaporization is 970.8 Btu/lb or 1,000 Btu/lb could be wrong because:

It depends on the saturation temperature that the steam condenses at.

For instance, most steam systems deliver the steam to the loads they serve at a pressure that is above atmospheric pressure;  pressures of 3-15 psig are common.  For district steam systems, the delivery pressure can be significantly higher, perhaps as high as 60-150 psig or more, which is subsequently reduced to the 3-15 psig range at the end use facility.

If you look at the Tariff that defines the rate structure and nature of the service for the utility supplying steam to the facility in question, you find that there are two potential delivery pressure ranges available from their distribution network, 5-10 psig and 20-120 psig, and that the company reserves the right to adjust the delivery pressure.


Note that I have assumed the pressures are gauge pressures vs. absolute pressures. 

And, the term “quality” as used in the tariff is probably not the thermodynamic use of the term, given the reference to chemical constituents.  In other words, in a pure thermodynamic sense, the “quality” of saturated steam is a measure of its wetness;  i.e. how much of the steam is pure vapor and how much of it is water that has yet to change phase.  More on this to follow.

It is also worth noting that some utilities will deliver the steam in a superheated state, not a saturated state.  All of these things have an impact on the energy content of the steam.

<Return to Contents>

Energy and Phase Changes;  Understanding the Process

If you perform the experiment I describe in this blog post, you will discover that it takes a whole lot more energy to change the state of water from a liquid to a vapor relative to what it takes to heat the liquid or vapor.  Here is an image from that blog post depicting the results of the experiment.  The paragraphs that follow describe the results.


The red line in the picture is the temperature of the water in the tea kettle.  The green dashed line and blue solid line are the temperature of the space above the water.[ii]  Initially, this space is filled with a mix of air and water vapor.  But once boiling starts, with the lid on the kettle, all of the air will be driven out and it will fill with steam.

Heating the Water

If you observe what happens when I turn on the heat (the purple line is the watts into the burner on the stove), the temperature of the water and the water vapor mix both start to rise.  Since the liquid water is at atmospheric pressure but below the boiling temperature (a.k.a. the saturation temperature), we say that it is subcooled.  During this phase of the experiment the burner was supplying 1 Btu to raise the temperature of one pound of water 1°F.

When the water temperature reaches 212°F, the water begins to boil, which creates steam, filling the area above the water with pure steam, and creating a saturated system where the temperature of both the water and the steam are the same (notice how the green and red lines converge). 

<Return to Contents>

Heating the Mixture of Water and Steam

Now, even though the burner is applying a steady amount of energy, the temperature of the water/steam mix holds constant.  That is because the energy from the burner is now being used to change the liquid water to steam (a.k.a. a phase change) and during a phase change the temperature remains constant at the saturation temperature. During this time, the  burner was supplying 970.8 Btus for every pound of water that was converted to steam.

When the last drop of water changed to steam, the burner was still supplying energy at a steady rate.  But since the mass of the steam contained inside the teapot at that point was quite low compared to the mass of water that was there when we started (most of that mass was now outside the teakettle condensing on the windows in the kitchen),  there was a lot of energy being supplied to a very small mass.  

<Return to Contents>

Heating the Steam

At this point, the phase change is complete so all of the energy from the burner is applied to changing the temperature of the steam inside the pot.  Since it only takes about 0.5 Btus to raise the temperature of a pound of steam 1°F at atmospheric pressure (and there was much, much less than a pound of steam contained in the pot) then the temperature spikes rapidly.  This elevation in temperature above  the saturation temperature is called superheat.
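The three stages of the experiment can be summarized numerically.  The specific heats and latent heat below are the approximate values quoted in the discussion above; the starting and superheated temperatures are assumptions I picked for illustration:

```python
# Energy to take one pound of water through the three stages of the
# tea-kettle experiment at atmospheric pressure.
c_p_water = 1.0    # Btu/(lb·°F), subcooled liquid water
h_fg      = 970.8  # Btu/lb, latent heat of the phase change
c_p_steam = 0.5    # Btu/(lb·°F), steam near atmospheric pressure (approx.)

start_f, boil_f, superheat_f = 70.0, 212.0, 250.0  # assumed temperatures

sensible  = c_p_water * (boil_f - start_f)       # heat the liquid
latent    = h_fg                                 # boil it all off
superheat = c_p_steam * (superheat_f - boil_f)   # heat the steam

print(f"liquid: {sensible:.0f}  phase change: {latent:.0f}  "
      f"superheat: {superheat:.0f} Btu/lb")
```

The phase change dwarfs the sensible heating steps, which is exactly why the temperature plateau in the experiment lasts so long and why the superheat spike at the end is so abrupt.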

<Return to Contents>

A Few New Terms

If you are new to thermodynamics, some of the terms that you observed in the steam table can be a little scary sounding.  After all, how many dinner conversations (with normal people) have you had where the words “enthalpy” and “entropy” were bandied about?

We are accustomed to concepts like temperature and pressure because we apply them directly in our day to day lives.  A weather forecaster may talk about a high pressure system moving into our area or that we can expect lower temperatures and humidity after a cold front moves through.   Or the recipe we select to prepare for dinner likely specifies a temperature that we should cook the food at, perhaps suggesting that we bring a pot of water to boil in preparation for making some pasta.

But in the course of day to day conversation, we seldom discuss enthalpy or entropy, even though those properties are also changing as we go about our daily lives.  For instance, the weather forecaster could have said that the enthalpy of the air is going to drop after the cold front passes.  And the recipe could have suggested that we increase the enthalpy of a pot of water until it reached saturation and then continue to add energy so that the water changes phase.

The point is that enthalpy, while an unfamiliar term in day to day life, is a property used to measure the total available energy in a substance at a given condition.   So, if we know the enthalpy change that a substance goes through in a given process, we know the energy change.[iii]  

Enthalpy is challenging to measure directly.  But since it is related to things that we can more readily measure, like temperature and pressure and moisture, some very dedicated individuals have been able to experimentally determine enthalpies for various substances and develop relationships that allow us to predict enthalpy based on other measurements and coefficients that are developed via the experiments. The thermodynamic diagrams that follow are simply graphical representations of these results.

<Return to Contents>

Enthalpy Depends on Temperature and Pressure

If you study the steam table I inserted previously,  you will discover that the latent heat of vaporization – i.e. the energy it takes to convert a pound of water to a pound of water vapor (a.k.a. steam) – varies as a function of the saturation temperature and pressure.  Stated another way, the enthalpy change associated with a phase change will vary with the temperature and pressure that the phase change occurs at.

For example, if the pressure is about 60 psig (or about 75 psia), then the latent heat of vaporization is more like 905 Btu/lb vs. the 970.8 Btu/lb we have discussed for water at atmospheric pressure.  Similar considerations apply for sub-atmospheric pressures.  And, as our experiment revealed, the amount of heat associated with changing the temperature of a subcooled liquid or a superheated vapor is different from the phase change value and will also vary a bit with temperature and pressure.
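As a crude illustration of how you might estimate the latent heat between table rows, here is a linear interpolation between the two values quoted above.  A real steam table has many more rows, and linear interpolation over this wide a span is only a rough approximation:

```python
# Latent heat varies with saturation pressure. A two-row sketch using
# only the values quoted in the text; a full steam table would be
# denser and therefore more accurate.
table = [           # (absolute pressure, psia; h_fg, Btu/lb)
    (14.7, 970.8),
    (74.7, 905.0),
]

def h_fg_at(p_psia):
    """Linearly interpolate the latent heat of vaporization."""
    (p1, h1), (p2, h2) = table
    return h1 + (h2 - h1) * (p_psia - p1) / (p2 - p1)

print(f"{h_fg_at(29.7):.0f} Btu/lb at 15 psig")  # rough estimate
```

The same interpolation pattern works for any of the steam table columns once you have a few bracketing rows.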

The steam table above is focused on water at saturation.   There are other tables that document the properties for water that is superheated or subcooled.

<Return to Contents>

Thermodynamic Diagrams

All of this can be quite complex to wrap your head around.  But a picture can be worth a thousand words, and in the context of our discussion, a thermodynamic diagram can be worth a thousand table lookups.  Using one, you can plot a process and read all of the thermodynamic properties of water (or other substances) directly from the diagram.  And the process plot gives you a “visual” on what is going on.

Psychrometric charts are a form of thermodynamic diagram that HVAC engineers use to assess an HVAC process. 


Skew T log P diagrams are used by meteorologists to understand the atmosphere.


To understand what happens to a substance as it goes through a process, encountering various conditions and states, we can use pressure-enthalpy (p-h) diagrams (what follows uses water as an example) …


… temperature entropy (t-s) diagrams …


… and enthalpy-entropy (h-s) diagrams (a.k.a Mollier diagrams)  ….


These diagrams are extremely intimidating. 

But if you can stay calm and continue to breathe normally, they can be quite useful because if you can plot a process on them, you can read all of the properties for the various states directly from the chart.  When you compare them to the other options, like playing with the equations of state, which can look like this …


…   or working through multiple tables like the one pictured below and interpolating values …


… they can become quite attractive and you may find yourself inspired to learn how to use them.

<Return to Contents>

The Spreadsheet Behind the Diagrams

If you are really curious about the diagrams above, you can find the spreadsheet behind them at this link.  Personally, I learned a lot by developing them.  And now that I have them, I can plot processes on them pretty precisely, which lends itself to using a graphical solution to solve and visualize complex thermodynamic processes.

<Return to Contents>

Focusing on p-h Diagrams

P-h diagrams are a very common way to look at thermodynamic processes like refrigeration cycles.


They can give you a “visual” on a complex process and make it less intimidating for math phobic folks like me.  If you want an example of how useful a diagram like the one above is, take a look at this engineering application guide from Sporlan.  

I don’t want to get too far afield here, but the point is that diagrams like these can make the analysis of cycles much easier to accomplish once you learn to work with them.  There was a point in my career where I was somewhat terrified of a psych chart.  But now, it is my “go to” tool for understanding air handling system processes.  Similarly, I use the various thermodynamic diagrams I illustrated above to help me understand different HVAC and building system processes.

<Return to Contents>

Applying the p-h Diagram For Water and Steam

To gain a deeper understanding of the amount of heat represented by a condensed pound of steam, I’m going to plot out a pressure reducing process on a p-h diagram.  I could plot it on any of the diagrams, but I chose the p-h diagram because we want to demonstrate what happens as steam is throttled to reduce its pressure, and a throttling process can be considered a constant enthalpy process.  So, the two things we are going to work with are represented by the primary axes of the chart.

Let’s look at what happens if the utility serving the facility we are considering is delivering saturated steam to it from their high pressure system at 120 psig.  And let’s assume:

  • The facility uses a pressure reducing valve to drop the pressure to 12 psig to serve an insulated pipe header that delivers the lower pressure steam to a heat exchanger, and
  • That the heat exchanger condenses the steam to make 180°F hot water, which is then distributed to the various loads in the facility, and
  • That the pressure reducing valve, heat exchanger, and its control valve are all in close proximity to each other so that there is no meaningful pressure drop between the pressure reducing valve and control valve nor is there any meaningful heat loss through the insulation between those points, and
  • That the design supply water temperature to the loads is 180°F, with the heat exchanger selected for a 20°F temperature rise on the water side using saturated steam at atmospheric pressure (0 psig, 14.7 psia), and
  • As a result, the condensate leaving the heat exchanger is at 212°F, and
  • That the condensate is discharged to a system that is vented to atmospheric pressure.

The process is plotted out on the p-h diagram below.


Plotting the Initial Condition

The initial condition is on the saturation line at the delivery pressure of 120 psig or 134.7 psia.  Knowing that the steam is saturated (red saturated vapor curve) at a specific pressure (value on the vertical axis) allows us to plot the entering condition on the chart, and we can read the enthalpy of 1,193 Btu/lb at this condition from the p-h diagram.

Plotting the Condition Entering the Control Valve

The condition entering the control valve represents the result of the throttling process associated with the pressure reducing valve.  Throttling processes are constant enthalpy processes, so knowing that, along with the leaving pressure that the pressure reducing valve is controlling for (12 psig, 26.7 psia), we can plot this point on our chart.

Note that we assumed there was no meaningful pressure drop or heat loss in the piping header due to its short length.   Had there been a meaningful pressure drop and thermal loss in the piping system, that would have shifted the entering control valve point down and to the left slightly from where we plotted it.  

Plotting the Condition Entering the Heat Exchanger

The entering condition in the heat exchanger represents the throttling process associated with the control valve, which was selected based on an entering steam pressure of 12 psig and a pressure in the heat exchanger of 0 psig.  This results in an initial condition in the heat exchanger that is at the same enthalpy as the control valve entering condition (because throttling processes occur at constant enthalpy) but at the pressure used to select the heat exchanger (0 psig, 14.7 psia).  Thus, we can plot this point on the chart based on these two parameters.

Note that the steam entering the heat exchanger is superheated as a result of the two throttling processes in the delivery chain.  As a result, it has a bit more energy content than it would if it was saturated steam at atmospheric pressure.

Plotting the Leaving Condition

Because the heat exchanger was selected to deliver the design performance requirement using steam at atmospheric pressure, the condensate coming off of the process will be at atmospheric pressure and 212°F, the saturation temperature associated with atmospheric pressure.  This is also the condition in the condensate return main.  As a result, we can plot this point on the chart, which allows us to read the enthalpy of the  condensed steam leaving the process.

<Return to Contents>

Enthalpy Change = Energy Change

If we know the enthalpy change between two conditions, then we know the energy change.  In this case, the change in enthalpy was from 1,193 Btu/lb to 181 Btu/lb, or 1,012 Btu/lb.

Good News and Bad News

Taking a closer look at the specifics of the process reveals that for every pound of steam condensed in this scenario, we received 42 more Btus than our rule of thumb would have suggested, or about 4% more.  In the context of the Btus received for your dollar, that sounds like a good thing.  In other words, the pounds of steam you purchased delivered more Btus than the rule of thumb suggested.

But in the context of a benchmark, it means that you actually used more energy than the rule of thumb suggested.  Thus, in this case, if we were to calculate an EUI based on our more specific assessment of how the steam was actually used in the facility, the EUI would be higher and the benchmark score would be lower.
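A quick sanity check of those figures, using the round enthalpy values quoted above (the variable names are mine, and round-off in the table values accounts for a Btu or so of difference from the 42 Btu quoted):

```python
# Compare the energy actually released per pound of condensed steam in this
# scenario against the common rules of thumb (all values in Btu/lb).
actual = 1193 - 181          # from the p-h diagram plot above
rule_atmospheric = 970.8     # latent heat at atmospheric pressure
rule_old_engineer = 1000     # the "1,000 Btu/lb" rule of thumb

extra = actual - rule_atmospheric
print(f"actual: {actual} Btu/lb")
print(f"extra vs. the 970.8 rule: {extra:.1f} Btu/lb "
      f"({100 * extra / rule_atmospheric:.1f}%)")   # ~41-42 Btu/lb, ~4%
```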

<Return to Contents>

ENERGYSTAR®, Conversion Factors, and Rules of Thumb

In an effort to try to create consistency, ENERGYSTAR® publishes conversion factors for various energy sources including district steam.


If I understand it correctly (I don’t actually do a lot of ENERGYSTAR® benchmarks), when you are entering your data into ENERGYSTAR®, an “Add Meter Wizard” will guide you to the 1,194 Btu number for a meter that reports kLbs (thousands of pounds) of steam.

As you can see, this would result in a consumption value that is higher than the rule of thumb we developed based on an assumption of condensing steam at atmospheric pressure (1,194 vs. 970.8 Btu/lb), as well as the rule of thumb sometimes used by old engineers like myself (1,194 vs. 1,000 Btu/lb).

It is also higher than reality for the situation we explored in the p-h diagram (1,194 vs. 1,012 Btu/lb).  So if you were to benchmark in ENERGYSTAR® using their metrics, it would seem that they would overstate the energy use of your facility if its steam delivery followed the process we traced out.

That means  your EUI would be higher and your benchmark would be lower than it would be if you could insert your actual energy use in terms of the Btus released by the condensed steam vs. the thousands of pounds of steam you used into the ENERGYSTAR® database. 

<Return to Contents>

Benchmarks are Approximations, not Exactamates[iv]

The preceding may make you want to cry “Foul”.  After all, you are trying to do a good job in terms of running your facility efficiently, and it seems unfair to have your score penalized by an arbitrary conversion factor.

But you need to remember that benchmarks are intended to provide a broad-brush comparison of similar facilities in similar climates serving similar occupancies with similar use patterns.  There are a lot of variables at play.  For example, the heat content of gas and other fuels will vary with the source and ENERGYSTAR® applies arbitrary conversion factors to them just like it does to district steam.

The endnotes in the referenced ENERGYSTAR® conversion factors document indicate the source for the conversion factors, with the International District Energy Association being the source for the district steam energy conversion factor.

<Return to Contents>

Why so High?

If you study the steam table, you may find yourself wondering why the International District Energy Association recommended a conversion factor of 1,194 Btu/lb.  After all, that appears to be the latent heat of vaporization associated with an extremely low saturation temperature and pressure.

That is because there is more than the latent heat of vaporization to be recovered.  In the example I plotted out on the p-h diagram, the condensate left the process at 212°F.  There are quite a few things that you could do with a stream of water at that temperature.  For example, you could run it through a heat exchanger to recover sensible energy and preheat or even heat domestic hot water.

So, in a way, the answer to a modified version of the original question, perhaps along the lines of …

How can I go about capturing the energy that the  ENERGYSTAR® conversion factor for district steam metered as pounds of steam implies is available?

is …

It depends on what you do with the steam and condensate you receive from the utility

<Return to Contents>

The Basis of the ENERGYSTAR® Conversion Factor

If you dig around a bit, you can discover the basis behind the ENERGYSTAR® conversion factor.  I found it in a footnote in a technical reference they provide about Greenhouse Gas Emissions.


What that is saying is that the ENERGYSTAR® conversion factor is equal to the enthalpy of saturated steam at 150 psig.   It is important to realize that this is different from saying it is equal to the latent heat of vaporization of 150 psig steam, which is the enthalpy change associated with condensing saturated vapor to saturated liquid, or about 858 Btu/lb.

In our field, we are typically interested in changes in enthalpy through a process rather than the specific enthalpy at a given state.  And, because enthalpy cannot be measured directly, we state the values of enthalpy for a substance referenced to a particular state.  For instance, the specific enthalpy of water or steam is referenced to water at 0.01°C and atmospheric pressure.

In the context of this discussion, that means that if we really wanted to capture all of the energy associated with the ENERGYSTAR® conversion rate for district steam metered as pounds, then not only do we need to condense the steam we receive, we also need to receive it at 150 psig as saturated steam and cool the condensate to just above freezing.

<Return to Contents>

So, the ENERGYSTAR® Folks are Crazy

You may be thinking at this point that the ENERGYSTAR® folks are nuts.  After all, your local utility may not deliver steam at 150 psig, with the delivery pressure of 120 psig in the utility tariff we looked at being an example of that.

But if you compare the enthalpy of 120 psig steam with 150 psig steam, you will find that it is only about 3 Btu/lb different; about a quarter of a percent.  So in the bigger picture, receiving steam at a lower delivery pressure would not make that much difference in the factor that you would use.
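Putting rough numbers on that: the ENERGYSTAR® factor decomposes into latent heat plus the sensible heat that stays in the condensate. The table values below are approximate (ENERGYSTAR®’s 1,194 reflects its own rounding), and the variable names are mine:

```python
# Approximate saturated-vapor enthalpies from the steam tables (Btu/lb).
h_g_150psig = 1196   # ~164.7 psia; ENERGYSTAR rounds this to 1,194
h_g_120psig = 1193   # ~134.7 psia, the delivery pressure in our example
h_fg_150psig = 858   # latent heat of vaporization at 150 psig

# Most of the factor is latent heat; the remainder is sensible heat that
# stays in the condensate unless you cool it toward the freezing reference.
h_f_150psig = h_g_150psig - h_fg_150psig
print(h_f_150psig)   # Btu/lb remaining in the condensate after condensing

# Delivery at 120 psig instead of 150 psig barely changes the picture:
diff = h_g_150psig - h_g_120psig
print(diff, round(100 * diff / h_g_150psig, 2))  # about 3 Btu/lb, ~0.25 %
```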

You may think, “O.K., I’ll buy that, but it just does not seem practical to cool the condensate to just above freezing in a way that delivers anything useful to the building.”  In other words, to provide heat, the source (in this case the condensate) needs to be warmer than what you are trying to heat.

Given that we are trying to maintain space temperatures in the mid 60°F to mid 70°F range in most of our buildings, a fluid stream that is at or below that temperature range could not be used directly to heat.  Some sort of heat pump (and energy input) would be required to move the heat from the condensate to the place that needed it.

Actually, the ENERGYSTAR® Folks are Not Crazy

If you take the time to think it through, you will realize that the ENERGYSTAR® conversion factor is simply forcing us to take a hard look at what it means in terms of energy and resources if our facility uses steam as an energy source. 

There is a subtlety associated with how most (not all) commercial district steam systems work that we need to consider.  You get a clue about it if you closely read the tariff for the facility we have been discussing (note my highlight).


What that is saying is that the condensate (condensed steam) delivered from the utility will not go back to the utility.  Rather, it will go to the sewer.  That means that all of the energy associated with the hot condensate is literally dumped down the drain and eventually dissipated to the environment without serving any useful purpose in the building that consumed the steam (wasting both energy and water; two different resources).

In fact, depending on the temperature of the condensate and the requirements of the local plumbing code and the material in your sanitary piping system, you may actually have to cool the condensate before discharging it.  Typically this is done using domestic cold water (directly or via a heat exchanger) which is then dumped to the sewer along with the cooled condensate.

Bottom line: if you received district steam at 150 psig, saturated, you actually did receive 1,194 Btus with every pound of steam (and a pound of water for every pound of steam).  The challenge is to understand how to capture as many of those Btus as possible before discarding the condensed fluid stream to the sewer, because whatever you don’t recover really is wasted energy (and water).
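To put numbers on how much goes down the drain, here is a rough accounting for one pound of the 120 psig steam from our earlier example (round values from the p-h diagram discussion; the near-freezing reference state is idealized):

```python
# Energy bookkeeping for one pound of 120 psig steam from our example
# (Btu/lb, round values from the p-h diagram discussion above).
h_received = 1193          # saturated vapor as delivered
h_condensate_212F = 181    # saturated liquid at 212 F
h_near_freezing = 0        # reference state: water just above freezing

captured_by_condensing = h_received - h_condensate_212F    # via heat exchanger
down_the_drain = h_condensate_212F - h_near_freezing       # if dumped hot

print(captured_by_condensing, down_the_drain)
print(round(100 * down_the_drain / h_received, 1), "% of delivered energy")
```

Roughly 15% of the delivered energy is still in the condensate at 212°F, which is the portion that heat recovery (domestic water preheat, for example) can go after.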

So, painful as it may be, for this type of system the 1,194 Btu/lb factor allows your steam consumption to be legitimately and fairly compared to the other types of steam systems I will describe in the next blog post.


David Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website

[i]     A district steam system is a network of piping served by a central plant that provides steam to a large area like the downtown area of a city.

[ii]   The blue line is data from a very low mass thermocouple so that it would react quickly because I wanted to capture the very rapid increase in steam temperature that I anticipated once all of the liquid water had been converted to steam. (For more on how sensor mass can impact the data it produces, see this blog post). 

I had the logger set for a very rapid sampling rate and did not have enough memory to allow it to log data for the entire time it took to boil off all of the water.  So I did not start the logger associated with that sensor until nearly all of the water was evaporated, which is why the blue line only shows up towards the end of the graph.

[iii]  Entropy is a bit more complicated to grasp; I almost flunked thermo because I struggled with it so much.  I think that is not unusual, and I often take comfort in something John von Neumann said (emphasis is mine):

You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one really knows what entropy really is, so in a debate you will always have the advantage.

The way I have come to think of it is that it’s basically nature’s way of saying:

There’s no such thing as a free lunch

When we turned on the burner to boil the water, energy flowed from it to the water because the burner was hotter than the water.  But, without some sort of process that involves doing work, we cannot get the energy that flowed into the water converted back into useful work or electricity.  Heat does not flow from cold to hot, only from hot to cold.

If you want a bit more detail about all of this, you may want to review a string of blog posts I did that look at saturated multiphase systems.  The experiment I mention and use to illustrate what happens when water boils is part of one of the posts.

[iv]  You may also find the chapters in Roy Dossat’s book Principles of Refrigeration titled Internal Properties of Matter and Properties of Vapors to be insightful.  He writes about thermodynamic concepts in a very understandable way.  When I found the book, early in my career, my first thought was “where were you when I took thermodynamics?”, a class I almost flunked because of my initial struggle with the math and concepts.

[iv]   When I worked for Murphy Company, Mechanical Contractors, more than once I heard Pat Murphy, our chief estimator, mentor some of the younger estimators by saying that

we were doing estimates, not exactamates.  

When I first heard him say it, I felt it was really insightful.  And I also think the same is true for a benchmark.


Lags, the Two-Thirds Rule, and the Big Bang, Part 5

In Part 4 of this series, we explored the complex transportation lag that was the key challenge in terms of using a remote duct pressure sensor to control the large VAV air handling system in the case study building. In this post I will show you the solution that grew out of that understanding and discuss a few reasons why not every VAV system will exhibit this behavior. I’ll close out the post with what I have found to be a very useful and  interesting insight that can be gleaned from the apparent dead time that you observe when you upset a control process in a system that is in operation.

Not Every System Will React This Way (Thank Goodness) Reprise

In the first article, I mentioned that this issue obviously does not happen in every VAV system out there. I think one of the main reasons is that many systems are small enough that the transportation dynamic I focused on in the previous article is not significant enough to cause a problem. But I think there are also some other reasons that people may not run into it very often, or maybe have never run into it.

You Learn A Lot the First Time You Start Up a System

My experience at the MCI building occurred during the very first start-up of the system. At the time, I was in the dual role of control system designer and start-up technician. There was no formal commissioning process, so my start-up activities were the commissioning process.

On a current project, depending on the exact design of the commissioning plan, it is possible that the official commissioning provider would not be on site for the very first start-up of the system. They would only come on site after the contractor had taken the system through start-up process and identified and corrected any obvious deficiencies.

You could say that Ray (the service fitter I was working with) and I discovered an obvious deficiency when we blew up the duct, and then corrected it. That means that had there been a commissioning provider, they may have found some issues when they came into the process, but they would not have observed the system blowing up a duct or having nuisance static safety trips. That could create the impression that the lag issue did not exist, simply because it had already been addressed.

But, evidence in the field, like:

  • Ductwork with wrinkles in it, or
  • Ductwork with extra reinforcement angles, or
  • An obvious patch in the duct insulation, or
  • Pressure relief doors that have been added by change-order

… could suggest that just because the system seems to start smoothly now, that may not have always been the case.

Variable Speed Drives are Very Common

When the MCI Building came online, variable speed drives were not an option for most systems, even large ones, because of the cost and size. That is not the case for a modern project.

As a result, it would be unusual for a VAV system these days not to have a variable speed drive of some sort. So, when faced with nuisance safety trips (or worse), it is common practice to address the problem by using the acceleration and deceleration settings in the drive to slow the system down. This approach is like the one I tried when I added restrictors to the pneumatic lines feeding the actuators to slow them down.

As you may recall, I concluded that in doing that, I had traded one problem (safety trips and blown ducts) for a different problem (an unresponsive system that could not deal with a large step change). I believe that improperly applied acceleration and deceleration ramps are likely doing the same thing. But since an unresponsive system may appear to operate reasonably well unless you analyze the trends, this may not be generally recognized. More on this later in the article.

Solving the Problem

Back in the MCI Building days, with my significant emotional event fresh in my mind, I went about re-reading what David St. Clair had written about lags in Controller Tuning and Control Loop Performance . As you may recall from the first post in the series, I had totally missed his point on the topic of lags when I read his book the first time, despite him having it in all capitals, in a large shaded box at the end of the chapter.

All About the Lags

Truth be told, it wasn’t so much that I missed the point.  Rather, I simply did not understand the concept at all.

But what became clear almost immediately as I re-read the section on lags (with my significant emotional event fresh in my mind) was that my problem was the result of lags in the system and that I needed a control process that would be impervious to them. David’s chapter on cascaded control suggested a strategy that would offer a solution.

Modifying the Control-System Design

As you may recall, our initial solution to the problem was to move the remote sensor back to the fan discharge and control for that pressure. In doing that, we circumvented two major lags: the sensor lag and the transportation lag.

But after re-reading David St. Clair’s primer, I realized that if:

  • We added a remote sensor, and
  • Added a second controller for it to work with, and
  • Created a remote duct static pressure control process,

… then we could use the output of that process to adjust (or reset) the discharge static pressure control process set point. In other words, the output of the remote process would cascade into the discharge pressure control process to optimize its set point. The result was a control system configured as illustrated below.

Pneumatic Control v2

Bear in mind that there are probably several other design solutions that could have worked, especially in this modern era of fully programmable DDC systems.

Developing a Reset Strategy

To implement the solution, we needed to come up with a relationship that defined how the discharge-static-pressure set point would be adjusted as pressure at the remote point in the duct increased above the design target when the terminal units closed their dampers in response to decreasing load. This “reset schedule” is graphically depicted in the chart in the illustration above.

Pneumatic control system operating characteristics generally are defined by a 3 to 15 psi span. As a result, to fully define our reset schedule, we needed to specify the discharge-static-pressure set points associated with outputs of 3 psig and 15 psig from our remote static-pressure-control process. Once we identified those outputs, we could set them up in the controller by making physical adjustments with knobs and dials.

Knobs and Dials

In current technology DDC systems, all of the parameters I will discuss below are set up via the software in the system, either using sliders and knobs in a graphic screen or by setting the value of a point in the system via keyboard commands.  But in the olden days, they were set up using the knobs, dials, and sliders that were provided on the controller.  The controllers in the image below illustrate this and are similar to the controllers we were working with at the MCI building.


For the MCC Powers RC-195 controllers illustrated above, the authority adjustment slide is what sets up the reset schedule.  If you want to know more about the details, you will find the instruction manual for it on the pneumatic control resources page of our commissioning resources website.

Controller Action—The General Case

As a first step in figuring out our strategy, we had to determine the “action” of our controller:

Direct Action

With a direct-acting controller, an increase in the difference between the set point and the process variable (often called error) will cause an increase in control-process output.  A decrease in the difference between the set point and the process variable will cause a decrease in the control-process output.

Reverse Action

With a reverse-acting controller, an increase in the difference between the set point and the process variable will cause a decrease in control-process output.  A decrease in the difference between the set point and the process variable will cause an increase in the control-process output.

Controller Action Bottom Line

The bottom line regarding controller action is that a designer determines the failure mode for the final control element (in the case of the MCI building, the inlet guide vanes) as a first step. That information, combined with how the system will react when the final control element is moved in response to an increase or decrease in the process variable (in this case, duct static pressure), determines the controller action.

Controller Action for the MCI Building Static-Control Processes

For the MCI Building, because we had selected the IGV actuator to fail closed on a loss of air pressure, a reverse acting discharge static pressure controller was required. In other words,  if discharge static pressure dropped below set point, we needed the output pressure from the controller to increase, causing the inlet guide vanes to open.  If discharge static pressure increased above set point, we needed the output pressure from the controller to decrease, causing the inlet guide vanes to close.

A reverse-acting process allowed us to start the system with the inlet guide vanes closed and the fan at minimum capacity, meaning the fan started unloaded and the potential for immediate over pressurization upon system startup was minimized.

Interlocking the Control Process with Fan Operation

To ensure that the system started this way, we provided a three-way air valve (often called an electro-pneumatic switch or EP switch), shown in the illustration. The equivalent in a DDC system is the proof-of-operation interlock.

When de-energized, the three-way valve blocked the control signal and vented the pressure in the actuator to atmosphere.  When energized, it closed the vent and connected the control signal to the output serving the actuator, allowing the control system to modulate the inlet guide vanes through the positioning relay. The three-way valve was wired in parallel with the fan-motor starter so that, when the starter was energized, the valve was energized.  

This was a fairly common approach for doing this sort of interlock at the time.  But there is an assumption behind it, that being that, if the motor is spinning, air is moving.  That may or may not be a good assumption for several reasons;  for instance, if the belts had broken, the motor would in fact be spinning but there would be no air moving. But to keep from making this even longer, I will set that discussion aside for now.

Reset-Line Points

We knew we needed 3 in. w.c. of pressure at the discharge of the fan to deliver 0.75 in. w.c. of pressure at the remote location on a design day. That requirement established one point on our straight-line reset schedule.

More specifically, we adjusted the knobs and dials on the controller so that, when the signal from the remote static-pressure controller was 15 psig, the set point of the controller was 3 in. w.c. In a DDC system, this would be accomplished by relationships set up in the controlling logic rather than by physical adjustments to a piece of hardware.

To determine the other point on our reset schedule, we considered what would happen on a weekend with only workers on the second floor in the building. Under those conditions, the system would run and the terminal units on the floor with people would follow the load. The terminal units on all the other floors would probably be at or near minimum flow depending on the solar load and thermostat set points.

In the worst-case scenario, we would need to deliver the design flow for the second floor and the minimum flow for the other floors. The calculated pressure drop to the remote-sensor location on the second floor at this flow condition was approximately 0.25 in. w.c. because, at this relatively low flow compared to the design flow rate, the distribution duct system was quite oversized.

Adding this pressure drop to the 0.75 in. w.c. required to deliver design air flow from the remote sensor location to the zones on the second floor told us that we would need to deliver 1.0 in. w.c. at the supply fan discharge (0.25 in. w.c. + 0.75 in. w.c.) under this low load condition.  This value became the other point on the reset schedule line.

More specifically, we adjusted the controller so that, when the signal from the remote static-pressure controller was 3 psig, the set point of the controller was 1 in. w.c.  We would fine-tune both reset values based on operating experience during commissioning and the first year of operation.
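In a DDC system, the relationship we dialed in with knobs and dials could be expressed as a simple linear interpolation between the two reset-schedule points. A sketch under the values given above (the function name and parameter defaults are illustrative):

```python
def reset_setpoint(remote_signal_psig,
                   sig_lo=3.0, sig_hi=15.0,
                   sp_lo=1.0, sp_hi=3.0):
    """Linear reset schedule: map the remote controller's 3-15 psig output
    to a 1.0-3.0 in. w.c. discharge static pressure set point.

    (A sketch of the relationship set up with knobs and dials on the
    pneumatic controller; the names and defaults are illustrative.)
    """
    frac = (remote_signal_psig - sig_lo) / (sig_hi - sig_lo)
    return sp_lo + frac * (sp_hi - sp_lo)

print(reset_setpoint(15.0))  # 3.0 in. w.c. -> design day
print(reset_setpoint(3.0))   # 1.0 in. w.c. -> low load
print(reset_setpoint(9.0))   # 2.0 in. w.c. -> schedule midpoint
```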

Considering an Extreme Condition

Once we had made our adjustments, the remote sensor would adjust the discharge set point linearly over the range established for the reset schedule. But the output of the remote controller could drop as low as 0 psig and rise to whatever the pneumatic-system supply pressure was (typically 20 to 25 psig). So, in day-to-day operation, the set point of the controller could potentially be adjusted beyond the bounds of the reset schedule based on the nominal 3 to 15 psig span that was the de facto standard in the industry.

A set point lower than 1.0 in. w.c. would not be cause for much concern. A set point above the 3.0 in. w.c. maximum target, however, could cause nuisance safety trips or worse.

For example, at startup, when duct pressure at the remote location was 0.0 in. w.c., the reverse action of the remote static-pressure controller would cause the controller’s output to drive toward its maximum value. Depending on the throttling range/proportional-band setting of the controller, the output under this condition could be the maximum available main air pressure.

If you extrapolate the straight line associated with the reset schedule to 20 psig, you will discover that the remote controller would have commanded a set point of about 3.8 in. w.c. for the fan discharge pressure controller.   If the fan were to achieve this value, it would have tripped the high-static-pressure limit. 

To prevent that problem, we added a high-limit relay, which limited the signal to the reset input of the discharge controller at 15 psig even if the output from the remote controller drove above that value.   Thus, we limited the maximum reset command to the discharge controller to a set point of 3 in. w.c. In a DDC system, this would be achieved with the control logic rather than by a physical piece of hardware.
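The effect of the high-limit relay can be sketched as a clamp on the reset signal before the schedule is applied (illustrative code, not the actual pneumatic hardware behavior):

```python
def reset_setpoint_with_limit(remote_signal_psig):
    """Reset schedule with the high-limit relay modeled as a clamp:
    signals above 15 psig (or below 3 psig) are held at the schedule's
    end points. An illustrative sketch only.
    """
    signal = min(max(remote_signal_psig, 3.0), 15.0)
    return 1.0 + (signal - 3.0) * (3.0 - 1.0) / (15.0 - 3.0)

# Without the clamp, extrapolating the schedule to a 20 psig signal would
# command roughly 3.8 in. w.c. and risk tripping the high static limit:
unclamped = 1.0 + (20.0 - 3.0) * 2.0 / 12.0
print(round(unclamped, 1))              # ~3.8 in. w.c.
print(reset_setpoint_with_limit(20.0))  # held at the 3.0 in. w.c. design max
```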

Reset Strategy in Operation

The reset strategy allowed us to have our proverbial cake and eat it too.  The control process would never allow fan-discharge static pressure to exceed the 3.0 in. w.c. design target because it was controlling for discharge static pressure directly, and the system hardware would allow only a maximum set point of that magnitude, even at startup, when the pressure at the remote point in the system was 0.0 in. w.c.

If, as the system came up to speed, delivering 3.0 in. w.c. at the discharge of the fan created more pressure than the 0.75 in. w.c. we targeted at the remote location, then the output of the remote controller would drop.

This would lower the set point of the discharge controller, causing the inlet guide vanes to close and deliver less air, which would lower the system pressure. If the terminal units opened their dampers to meet an increase in load, the reduction in pressure at the remote location would cause the set point of the control process to again be adjusted upward, but never above the design value.

One Final Thought About Lags

What follows is one of the most useful lessons gleaned from my experience at the MCI building (aside from how to not blow up ducts).

Comparing the Response of a Process to an Upset with Different Levels of Tuning Implemented

The figure below illustrates the response of a system with a proportional-only (P) control process to an upset[i] as the proportional band is reduced gradually from:

  1. No control (manual, top black line).
  2. Loosely tuned control—a very large proportional band (red line).
  3. Tightly tuned control—the proportional band is as tight as it can be without the risk of hunting (blue line).
  4. Near-resonance, or hunting (gray line).
  5. Over tuned/approaching instability—the proportional band is too narrow, given the characteristics of the system (bottom wavy black line).

Response Tune @

The system the controller is applied to is fixed in terms of lags, dead time, system gain, and other factors that dictate how the process will respond.

When you tune a control loop, you start with a very large proportional band (the red line) and sneak up on the gray line, which is the point at which the system is starting to go unstable.  Then you back off a bit (back towards the red line) so you run on the safe side of stable (the dark blue line).

The reason you sneak up on the gray line is that it reveals the natural period for the control process and system. You can use that parameter to come up with a pretty good set of initial tuning parameters for the control loop.

In the illustration, the upset occurred at t=0 on the x axis.  Notice how there is a period of time after the upset during which nothing seems to happen based on the response of the system (the y axis on both charts).  The purple line with an arrow at both ends illustrates this, and it is called the “apparent dead time” for the process.  It represents the sum of all of the lags in the system.

My purpose in bringing that up is to focus your attention on three facts:

  • The natural period for the near-resonance control loop (the gray line) is approximately equal to four times the apparent dead time (compare the light blue double arrow head line with the red, orange, green, and dark blue double arrow head lines).
  • No matter how loosely or tightly tuned a control process is, the response for about the first half of the natural period (about twice the apparent dead time) will be nearly identical whether the control process is over tuned, under tuned, or non-existent (manual control); contrast the 5 different response curves in the enlarged circle for half the natural period, which is indicated by the red plus orange arrows.
  • The tightly tuned control process (blue line) is stable by about the end of twice the natural period.
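You can check the first of those facts numerically.  The sketch below is a simple Euler simulation of a hypothetical first-order process with dead time under P-only control (all parameter values are illustrative, not from any particular system): the output sits still for the apparent dead time, and with the gain pushed toward the near-resonance condition, it oscillates with a period of roughly four times that dead time.

```python
import numpy as np

def simulate_p_control(kp, theta=2.0, tau=20.0, dt=0.01, t_end=60.0, sp=1.0):
    """Euler simulation of a first-order process (unit gain, time constant
    tau) with dead time theta, under P-only control toward set point sp."""
    n = int(t_end / dt)
    delay = int(theta / dt)
    u_hist = np.zeros(n + delay)      # past controller outputs (dead-time buffer)
    y = np.zeros(n)
    for i in range(1, n):
        u_hist[i + delay] = kp * (sp - y[i - 1])              # P controller
        y[i] = y[i - 1] + dt * (-y[i - 1] + u_hist[i]) / tau  # process
    return np.arange(n) * dt, y

def oscillation_period(t, y):
    """Average spacing between successive local maxima of the response."""
    peaks = [t[i] for i in range(1, len(y) - 1)
             if y[i] > y[i - 1] and y[i] > y[i + 1]]
    return float(np.mean(np.diff(peaks)))

theta = 2.0
t, y = simulate_p_control(kp=14.0, theta=theta)  # gain pushed near instability
period = oscillation_period(t, y)
print(f"measured period {period:.1f} vs 4 x dead time = {4 * theta:.1f}")
```

Note that nothing happens at the output for the first 2 seconds, even though the controller reacts immediately: that flat stretch is the apparent dead time.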

Once you recognize and embrace these facts, they are very useful in the context of what we are trying to do when we tune a P, PI or PID control loop.

The Quarter Decay Ratio

Technically speaking, for most of our systems, our goal is to achieve a quarter-decay-ratio response to a process upset, as illustrated below.

[Figure: Quarter decay ratio response to a process upset]

“Quarter decay ratio” is a fancy way of saying that the peak of the second cycle of the response will be one quarter of the peak of the first cycle.

It has its roots in the work John Ziegler and Nathan Nichols published in Optimum Settings for Automatic Controllers in 1941.  If you would like to read it, you will find a copy of it in part 1 of the Control Engineering Reference Guide to PID.  There is also an interview in there with John Ziegler, which is kind of cool.
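If you are logging trend data, you can measure how close a response comes to quarter decay directly.  A minimal sketch, using a synthetic (made-up) underdamped trace purely for illustration:

```python
import numpy as np

def decay_ratio(trace, set_point):
    """Ratio of the second overshoot peak to the first, measured relative
    to set point; the classic quarter-decay target is about 0.25."""
    e = np.asarray(trace) - set_point
    peaks = [e[i] for i in range(1, len(e) - 1)
             if e[i] > e[i - 1] and e[i] > e[i + 1] and e[i] > 0]
    return peaks[1] / peaks[0]

# Synthetic response: a decaying oscillation around a set point of 1.0,
# built so the amplitude falls by a factor of 4 each cycle.
t = np.linspace(0.0, 20.0, 2001)
period = 5.0
sigma = np.log(4.0) / period
y = 1.0 - np.exp(-sigma * t) * np.cos(2.0 * np.pi * t / period)
print(f"decay ratio: {decay_ratio(y, 1.0):.2f}")
```

On real trend data you would want some smoothing before picking peaks, since sensor noise creates small local maxima that this simple comparison would mistake for cycles.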

Twice the Apparent Dead Time: A Very Important Parameter

If you go out and start playing with loop tuning, you will discover that there are multiple versions of this response pattern or something very close to it, depending on the exact combination of proportional, integral and derivative gain you set up for the process.  In fact, you could probably spend hours changing the settings and observing the different patterns.

I speak from experience because when I first tried tuning loops, I did just that.  But at one point, I realized a couple of things, specifically:

If the first spike doesn’t trip a safety or, worse yet, break something (for instance, blow up a duct), and

If the process settles within a reasonable time frame for the application you are working with

… then you probably have a winner, at least for the time being.[ii] 

But if you keep tripping safeties (or worse) and that is happening within less than twice the apparent dead time after you observe the system starting to respond, then you are going to need to eliminate some lags.  That is what the second bullet point in the opening part of this section was about.

Similarly, if you have managed to find a setting that does not cause a safety trip (or worse) but the system is still trying to find itself hours (or even two natural periods) after the upset, then you are going to need to eliminate some lags.
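Those two tests can be reduced to a rough screening rule.  The sketch below encodes them with illustrative thresholds; the function name, arguments, and wording are mine, not from any standard, and the "two natural periods" figure comes from the settling observation earlier in this section:

```python
def lag_diagnosis(first_trouble_time, settling_time, apparent_dead_time):
    """Rough screen based on the two rules above.  Times are measured from
    the moment the system starts to respond to the upset.  Thresholds are
    rules of thumb, not hard limits."""
    natural_period = 4.0 * apparent_dead_time  # near-resonance estimate
    if first_trouble_time < 2.0 * apparent_dead_time:
        return "trouble this early cannot be tuned out: eliminate lags"
    if settling_time > 2.0 * natural_period:
        return "recovery is too slow: eliminate lags"
    return "probably a winner, at least for the time being"

# A safety trip 3 seconds into the response of a loop with a 2 second
# apparent dead time falls inside the "cannot tune it out" window.
print(lag_diagnosis(first_trouble_time=3.0, settling_time=30.0,
                    apparent_dead_time=2.0))
```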

To quote David St. Clair:

It All Depends On The Lags

Eliminating Lags

The table below contrasts lags that are relatively easy and relatively difficult to eliminate.

[Table: Lags that are relatively easy to eliminate vs. lags that are relatively difficult to eliminate]

Eliminating lags to solve a startup/loop-tuning problem can be counterintuitive.

For instance, when I was having trouble getting the MCI Building VAV system online, it seemed things were happening too fast at the inlet guide vanes; they were opening up way too quickly.  So I slowed them down by adding restrictors.  In reality, things were not happening fast enough in terms of the control system realizing that the fan had started, but that it would be some time before there was meaningful pressure at the remote sensor location.

When I added the restrictors, I was able to get the fan running without tripping the safety, but I was not able to achieve my set point in a reasonable time or to respond to step changes in the system (zone level scheduling or a set point change, for instance), so I had simply traded problems.

Ramps vs. Acceleration and Deceleration Settings

In modern times, it can be tempting to try to solve a startup problem like the one I experienced using the acceleration and deceleration settings on a VSD to slow the drive’s reaction to changes commanded by the control system. And, while you may be able to resolve the over-pressurization problem in this manner, you will have added a lag to the system. That means that for even a modest upset or step change in the system, you will have limited how quickly the control process can react to it to recover the set point and resume steady state operation.

Ramp logic is a way around this.  A true ramp limits how quickly the command can change only until the process variable is inside a window around set point, a window established during startup and commissioning.  Once the process variable is inside the window, the limiting function is eliminated from the control process, meaning the control process is unconstrained in terms of how quickly it can make a change.

Many VFDs have a ramp function built into them.  But just to make things interesting, some manufacturers call their acceleration and deceleration settings “ramps”.  Having said that, if the drive does not have the setting built into it, you can simply implement it in the control logic that is managing the drive.
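In logic, the distinction looks something like this.  The sketch below (hypothetical parameter names and values, not any particular drive or controller) rate-limits the command only while the process variable is outside the window; an acceleration/deceleration setting, by contrast, would apply the limit all the time:

```python
def ramp_limited_command(command, previous, pv, sp, window, max_step):
    """Ramp logic: limit the rate of change of the controller's output
    only while the process variable is outside a window around set point;
    once inside, pass the command through unconstrained."""
    if abs(pv - sp) > window:            # still far from set point: ramp
        step = max(-max_step, min(max_step, command - previous))
        return previous + step
    return command                        # inside the window: no limit

# During startup (pv far from set point), a 0 -> 100% command is rate limited.
out = ramp_limited_command(command=100.0, previous=0.0, pv=0.2, sp=1.5,
                           window=0.1, max_step=5.0)
print(out)   # 5.0: one rate-limited step
# Near set point, the same command passes through unconstrained.
out = ramp_limited_command(command=100.0, previous=60.0, pv=1.45, sp=1.5,
                           window=0.1, max_step=5.0)
print(out)   # 100.0: pv is inside the window, so the limit is removed
```

A production implementation would typically latch once the process variable first enters the window, so that a later excursion during normal operation does not re-engage the startup limit.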


While I illustrated the solution to the MCI building problem using the pneumatic control technology we were working with at the time, many of the issues the solution addressed are independent of the control technology because they were about the physics of the system that was being controlled. Thus, they are somewhat timeless in nature and perhaps things you will find useful in the modern world with its DDC technology.  Maybe they are even something you can pass on in your role as mentor, just as the MCI building, David St. Clair, and Tom Lillie did for me.


David Sellers
Senior Engineer – Facility Dynamics Engineering
Visit Our Commissioning Resources Website

[i]     The term “upset” means a sudden change in the process; something like a major set point change or a major load change.  Sometimes, the term “step change” is used as a synonym for “upset”.  Start-ups are an example of an event that introduces an upset into nearly every control loop in the system that is started up (and often into the systems that support it).

[ii]     I say for the time being because things that affect the lags in a system can change over time.  For instance, in a brand new system the day that you tune the discharge temperature control loop for the very first time may be a design cooling day.  

The system may (probably will) exhibit a totally different response pattern 6 months later on the design heating day since it will be using different heat transfer elements to deliver a similar discharge temperature.   And things will be different during the swing season when the economizer has a role in the process.

And after you finally have tweaked and fine-tuned the loop over the course of the first year and found the perfect, year-round solution, you may discover it no longer works two years down the road because wear in the linkage system has changed the hysteresis, or the coils are not as pristine as they were when they were new, or the occupancy pattern in the building and the related load profile have changed.

Bottom line, loop tuning, just like commissioning, is not a one time event.

Posted in Air Handling Systems, Controls, HVAC Fundamentals, Pneumatic Controls