Satellites, Eclipses, and Happy Holidays

As some of you know, I am pretty interested in the weather.  So most days, while having coffee and settling into the office, I am poking around on-line, looking at things like the models that the University of Washington Department of Atmospheric Sciences makes available, looking at weather maps, and downloading data and plotting soundings with RAOB while trying to understand what they mean.

Sometimes, I even load data into Digital Atmosphere and try my hand at plotting a front.   Still a long way to go there but I think it may be kind of like learning to use a psych chart;  you just have to do it and it will eventually come to you.

20203351640_GOES17-ABI-FD-GEOCOLOR-1808x1808

But my favorite part of the routine is the time I spend looking at satellite imagery.  I find myself mesmerized by the colored view of the earth and the clouds just hanging there in space.

The images update every 10 minutes and you can even create a little animated loop and watch the terminator and weather systems sweep across the globe, as shown below.

G17_fd_GEOCOLOR_36fr_20201219-1818

I was doing this earlier this week when my eye caught something.  At first, I didn’t realize what was happening.  But then, it dawned on me (and you probably have already figured it out from the title);  I had just seen the eclipse from the vantage point of GOES West.

I thought it was really cool.  So I created animations for GOES West and East, downloaded them and figured I would share them here.  This first one is from GOES West, which is what initially caught my eye. South America is in the lower right part of the image so watch that area to see the shadow show up.

This one is GOES East, which gives a better view of things since South America is front and center.  I don’t know exactly what the yellow bars that show up at the end of the sequence are, but I think they had something to do with the satellite data set not being fully complete.  Fortunately, the eclipse is in the first part of the sequence.

If you want to slow things down or pause, I made a little video that includes both of the animations with the yellow bars edited out.  You will find it at this link.

If you go to the GOES imagery page and pick a view, you will discover that there are all sorts of ways to look at the images that reveal all sorts of different things about the atmosphere.   But the one that I love the most is the GeoColor product, which is what was used for the images above.

The image is actually a combination of different satellite data streams that creates a very vivid, realistic daytime image.  The nighttime image uses data from different infrared bands to show low liquid water clouds as differentiated from higher ice clouds.  The city lights come from a separate static database and are provided to allow you to orient yourself.

To me, it is amazing to contemplate what you are seeing when you see that shadow pass over the surface of the earth; masses orbiting and interacting with each other in a perfect balance.   In the days leading up to Christmas this year, we will have the opportunity to see a different manifestation of that ballet as Saturn and Jupiter come into the closest conjunction they have been in for some 800 or so years.[i]

Saturn and Jupiter Conjunction

Some have even hypothesized that the star of Bethlehem may have been just such an event.

So now, (if you are still reading this) you are thinking O.K. there is the  “Happy Holidays” part of the post title.   And that is in fact part of it.

But, the other part of it is to point out that we did not always have such a spectacular view of our home available to us at our finger tips.  Prior to this time of year in 1968 – specifically December 21 through 27, 1968 – the most remote vantage point had been what Pete Conrad and Richard Gordon had captured for us from 850 miles up on their Gemini 11 mission, which is shown below [ii].

850 miles up 7-s66-54706-b

But on Christmas Eve, 1968,  the crew of Apollo 8 – Frank Borman, James Lovell, and William Anders – captured an earth rise while orbiting the moon; the first time humans had done that.

apollo08_earthrise

The image [iii] is, of course, quite famous;  some have called it

the most influential environmental photograph ever taken[iv]

I tend to agree with that, having seen it with my own eyes that evening.  That image, the lunar surface rushing by, and the words the astronauts shared that evening [v] are burned into my memory.  It definitely is part of the reason I do what I do these days.

Later that evening – actually, I think in the early hours of Christmas day (EST), this sequence of transmissions occurred (I believe the time stamp is hours into the mission and liftoff was at 7:51 a.m. EST on December 21, 1968):

089:31:12 Mattingly: Apollo 8, Houston. [No answer.]

089:31:30 Mattingly: Apollo 8, Houston. [No answer.]

089:31:58 Mattingly: Apollo 8, Houston. [No answer.]

089:32:50 Mattingly: Apollo 8, Houston. [No answer.]

089:33:38 Mattingly: Apollo 8, Houston.

089:34:16 Lovell: Houston, Apollo 8, over.

089:34:19 Mattingly: Hello, Apollo 8. Loud and clear.

089:34:25 Lovell: Roger. Please be informed there is a Santa Claus.[vi]

If you followed the space program, the hours and minutes between the Christmas Eve broadcast and the transmissions above were pretty important because that is when the Trans-Earth Injection burn would happen.  This event involved the (single) engine in the service module igniting and accelerating the spacecraft out of lunar orbit into a trajectory that would carry it back to earth.

If the engine failed for any reason, the crew was not coming back.

Thus, the acknowledgement of the existence of Santa Claus.

Bill Anders, who took the earthrise picture above, often said something along the lines of:

We came to explore the moon and what we discovered was the Earth

Ultimately, I think why I am writing this is to encourage you to take some time to contemplate and fully appreciate that discovery.   I think it’s easy to take for granted in the world we are in.  But I also think it is crucial that we appreciate it.

In her 1976 album Hejira,  in a song titled Refuge of the Roads, Joni Mitchell wrote:

In a highway service station
Over the month of June
Was a photograph of the earth
Taken coming back from the moon
And you couldn’t see a city
On that marbled bowling ball
Or a forest or a highway
Or me here least of all

These days, I think that is an important perspective to keep.   When you look at our pretty little home from the vantage point of space, all of the things that seem to trouble us and divide us become invisible.   And what becomes apparent is that we are all in this together on a beautiful but tiny little life boat.

David-Signature1_thumb_thumb_thumb

PowerPoint-Generated-White_thumb2_thDavid Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/

i          Image Credit: NASA/ Bill Ingall

ii         NASA/Dick Gordon; Sept. 14, 1966 – View From Gemini XI, 850 Miles Above the Earth | NASA

iii       Image Credit: NASA/Bill Anders; Apollo 8: Earthrise | NASA

iv       Nature photographer Galen Rowell

v        This link will take you to a recording.  There are religious overtones, so fair warning if you find that sort of thing offensive.   Me personally;  I am probably more spiritual than religious, but the moment was and still is very moving.

vi        Apollo 8 Flight Journal – Day 4: Final Orbit and Trans-Earth Injection (nasa.gov)

 


What is the Energy Content of a Pound of Condensed Steam? (Part 3)

or, It Depends …

This post is the last in a string of posts that started out as an e-mail answering a question from one of the folks taking the Existing Building Commissioning Workshop this year at the Pacific Energy Center.   The question was about the energy content of a pound of steam, which seems like a simple question but it turned out not to be.

In the first post we explored different ways to address the question including using published conversion factors, rules of thumb, and steam charts and tables.  In the second post, we took a closer look at how steam is procured, including on-site generation and district steam systems and how those approaches impact the amount of useful energy that is recovered from the steam.  We also looked at ways to maximize the amount of energy that you extract from a pound of steam for use in your HVAC processes.

In this post, we will look at some common energy saving opportunities associated with steam systems.  I should also mention that you will find a number of general resources about steam in this blog post.

Contents

I have included a table of contents that will allow you to jump to a topic of interest.  The “Return to Contents” link at the end of each section will bring you back here.

Maintaining The Benefits

Even if set points and processes have been optimized, there are things that you should look for in order to maximize the benefits, no matter where your steam comes from and where the condensate goes.  Typical issues (a.k.a. EBCx and ongoing commissioning opportunities) include the following items.

Failed Condensate Return Pumps

Just because local boiler plants and campus district steam systems are set up to return their condensate and recycle it does not mean they are actually doing it. Condensate return pump failures are not unusual. 

Typically, when this happens, the receiver drain valves are opened until repairs can be made.  As a result, the condensate is dumped to the sewer, even though that would not be the case if the return pumps were operational.   Unfortunately, the failed pumps and open drain valve are often forgotten. 

A facilities director friend of mine at a large campus in the Midwest instituted a policy in his weekly meetings where each operator was required to report on the condition of the condensate return pumps in the facilities they were responsible for.   “Not working” was the “wrong answer”, and the policy quickly resolved what had been an ongoing problem with failed condensate pumps, saving a lot of energy, water, and water treatment chemicals at the boiler plant.

<Return to Contents>

Failed Insulation

Condensate is hot, and insulation preserves the energy it contains.  Repairing damaged insulation typically delivers a quick payback and can frequently be accomplished in house.  All you need to do is measure the surface temperature with an infrared gun and look up the loss in a table or chart.

image_thumb12

There are a number of resources at this link that will help you get started.
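If you would rather script the estimate than read it off a chart, a quick back-of-the-envelope calculation like the sketch below will get you in the ballpark.  It is only a rough sketch:  the surface temperature, pipe size, ambient temperature, and emissivity in the example are hypothetical placeholders, and the simplified natural convection correlation is a textbook approximation for a horizontal pipe in still air, not a substitute for a tool like 3EPlus (mentioned in the previous post in this series).

# Rough estimate of heat loss from a bare (or poorly insulated) horizontal pipe,
# given a surface temperature reading from an infrared gun.  The numbers below
# are hypothetical; the convection correlation is a simplified textbook
# approximation for horizontal cylinders in still air (IP units).
import math

def pipe_loss_btuh_per_ft(surface_f, ambient_f, pipe_od_in, emissivity=0.9):
    d_ft = pipe_od_in / 12.0
    area_per_ft = math.pi * d_ft                      # ft^2 of surface per ft of pipe
    dt = surface_f - ambient_f

    # Natural convection, horizontal cylinder: h ~ 0.27 * (dT/D)^0.25  [Btu/hr-ft^2-F]
    h_conv = 0.27 * (dt / d_ft) ** 0.25

    # Radiation, expressed as an equivalent film coefficient using absolute temperatures
    sigma = 0.1714e-8                                  # Btu/hr-ft^2-R^4
    ts_r, ta_r = surface_f + 459.67, ambient_f + 459.67
    h_rad = emissivity * sigma * (ts_r**4 - ta_r**4) / dt

    return (h_conv + h_rad) * area_per_ft * dt        # Btu/hr per foot of pipe

# Example: a 180F surface on a 4.5 inch OD line in a 75F mechanical room
print(round(pipe_loss_btuh_per_ft(180, 75, 4.5), 1), "Btu/hr per foot (rough)")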

<Return to Contents>

Steam Trap Failures

For a steam system to work properly, it is important to ensure that only condensate leaves the steam system.  Steam traps accomplish this function but can fail if they are not properly monitored and maintained.  If a trap fails, live steam enters the return system, wasting the energy it contains and potentially causing other issues on the return side.

The infrared thermometer shown above for checking out insulation savings will also help you find a failed steam trap.   If there is a temperature drop across the trap, with the leaving temperature being at or below the saturation temperature for the pressure in the return, then the trap is probably doing just fine, like this one.

image_thumb81

But if the trap has failed, the temperature in the return line will be up near the saturation temperature of the steam, like this.

image_thumb10

It is important to realize that a high temperature downstream of the trap means that a trap in the area has failed, not necessarily the trap you took the temperature across.

In other words, the steam leaking by from a failed trap will raise the temperature of all of the pipe in its vicinity.  So to narrow things down, you may need to use an auto mechanic’s stethoscope to listen for the steam jetting through the outlet orifice in the trap.

There are resources at this link that can help you assess steam trap failures and the related savings.
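If you want a rough feel for the magnitude of the loss before you dig into those resources, a frequently cited rule of thumb for steam flow through an orifice (often attributed to Napier) can be scripted in a few lines, as in the sketch below.  Treat it as an order-of-magnitude estimate under assumed conditions; the orifice size, pressure, operating hours, and derating factor are hypothetical, and real traps rarely fail wide open.

# Order-of-magnitude estimate of the steam lost through a failed-open trap using
# a commonly cited orifice-flow approximation (Napier):
#   W (lb/hr) ~ 24.24 * P_absolute (psia) * D^2 (inches).
# All of the inputs below are hypothetical placeholders.

def trap_loss_lb_per_hr(orifice_dia_in, steam_psig, blowthrough_fraction=0.5):
    p_abs = steam_psig + 14.7                       # convert gauge to absolute pressure
    w_full = 24.24 * p_abs * orifice_dia_in ** 2    # wide-open orifice flow, lb/hr
    return w_full * blowthrough_fraction            # derate; most failures are partial

loss = trap_loss_lb_per_hr(orifice_dia_in=0.125, steam_psig=15)
hours = 8760                                        # assume it leaks all year
energy_btu = loss * hours * 970.8                   # latent heat at atmospheric, Btu/lb
print(f"{loss:.1f} lb/hr, roughly {energy_btu/1e6:.0f} MMBtu/yr if it leaks year-round")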

<Return to Contents>

Piping Failures Due to Corrosion

Condensate tends to be corrosive because the carbonate and bicarbonate ions that enter the boiler with the feedwater break down due to the heat and pressure in the boiler. One of the byproducts is carbon dioxide gas, which leaves the boiler with the steam and then reacts with the condensate to form carbonic acid.

image261_thumbimage_thumb1

There are water treatment strategies that can be used to control this as well as piping materials that can minimize the potential for failure.  But my point here is that when a failure occurs, then the condensate is lost along with the benefits of returning it to the plant.

<Return to Contents>

Long Pipe Runs to the Central Plant

As mentioned in the previous blog post under Paradoxes, long pipe runs to the central plant can result in parasitic losses, even if they are insulated.  As a result, a number of campuses I have been involved with include a heat exchanger in the condensate return system that is used to recover energy from the condensate for local use, perhaps preheating domestic hot water or serving other loads that can be served by low temperature hot water.

<Return to Contents>

Conclusion

Thus ends another string of somewhat long blog posts.  Hopefully, they have given you some insights into how much energy is associated with a pound of condensed steam, techniques that can be used to evaluate it, and ways that you can maximize the potential and maintain the benefits of a system that uses steam as a source of heat.

David-Signature1_thumb

David SellersPowerPoint-Generated-White_thumb2_th[2]
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/



What is the Energy Content of a Pound of Condensed Steam? (Part 2)

or, It Depends …

This post builds from the previous post, which started out as an e-mail answering a question from one of the folks taking the Existing Building Commissioning Workshop this year at the Pacific Energy Center.   The question was about the energy content of a pound of steam, which seems like a simple question but it turned out not to be.

In the previous post, we explored different techniques that could be used to assess the energy content of a pound of steam and looked at where the value used by ENERGYSTAR® for converting pounds of steam from a commercial district steam system to Btus came from.  It turned out to be associated with receiving steam at a delivery pressure of 150 psig, saturated and then dumping the condensate to the sewer.  

Dumping the condensate wastes quite a bit of energy, which is the reason the ENERGYSTAR® conversion factor seems high when you compare it to what you might expect based on rules of thumb or even an analysis that looked at the latent heat of vaporization for 150 psig saturated steam.   This approach also wastes water, another important resource with embedded energy implications. 

The good news is that there are other approaches that can be used to reduce the wasted resources.   This post looks at some of them as well as ways to maximize the amount of energy extracted from a pound of steam before it is recycled or dumped to the sewer.

Contents

Despite breaking up the original behind this into a string of posts, each post in the string is still somewhat long.  So, to minimize the pain for someone just wanting the bottom line, I have included a table of contents that will allow you to jump to a topic of interest.  The “Return to Contents” link at the end of each section will bring you back here.

Steam System Resources

I thought I would mention that there are several blog posts that will connect you with resources on steam and steam systems.

Steam Heating Resources will connect you with a really good book titled The Lost Art of Steam Heating.  It also connects you with some articles Bill Coad wrote on the topic and a number of other resources.

Assessing Steam Consumption with an Alarm Clock is the first in a series that looks at a way that you can develop a steam system flow profile by monitoring condensate pump and feed water pump operation.  It was something Chuck McClure taught me very early in my career, and I still use the technique to this day (but with data loggers instead of alarm clocks).

<Return to Contents>

District Steam vs. Onsite Generation

The Operating Cycle

In terms of how condensate is handled, what I described in the previous post for a typical commercial district steam system (dumping it to sewer)  is a totally different scenario from what would happen if you had boilers on site generating the steam.  In the latter situation, the condensate is collected and returned to the boilers and recycled.   Some fresh water is added to make up for any losses due to leaks or the use of steam in a process (direct injection humidification for instance) and to make up for the water that is intentionally drained from the system to manage total dissolved solids levels (typically termed blow down). 

But for most facilities with local boiler plants generating steam, returning the condensate minimizes the amount of energy needed in the boiler to create steam since it only needs to heat the feedwater from the condensate return temperature (typically in the 140-200°F range) vs. heating it from the ground water temperature, which can be in the 45-50°F range for some parts of the year.  This practice also minimizes the consumption of water, another valuable resource. 
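To put some rough numbers on that, the sketch below compares the boiler heat input needed per pound of steam when the feedwater starts out as 180°F returned condensate versus 50°F make-up water.  It is a simplified sketch that ignores blowdown, boiler efficiency, and feedwater pump work, and the saturated steam enthalpy it uses for roughly 100 psig steam is an approximate steam table value.

# Simplified comparison of boiler heat input per pound of steam for hot returned
# condensate vs. cold make-up water.  Ignores blowdown, boiler efficiency, and
# pump work; enthalpy values are approximate steam table numbers.

H_STEAM_100PSIG_SAT = 1190.0   # Btu/lb, saturated steam at ~100 psig (approximate)

def boiler_heat_input_btu_per_lb(feedwater_temp_f):
    # Liquid water enthalpy is roughly (T - 32) Btu/lb near atmospheric pressure
    h_feedwater = feedwater_temp_f - 32.0
    return H_STEAM_100PSIG_SAT - h_feedwater

from_condensate = boiler_heat_input_btu_per_lb(180)   # hot returned condensate
from_makeup = boiler_heat_input_btu_per_lb(50)        # cold city/ground water

print(f"From 180F condensate: {from_condensate:.0f} Btu/lb of steam")
print(f"From 50F make-up:     {from_makeup:.0f} Btu/lb of steam")
print(f"Returning condensate saves ~{from_makeup - from_condensate:.0f} Btu "
      f"({(from_makeup - from_condensate) / from_makeup:.0%}) per pound")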

For a steam system of this type, you would probably not be entering thermal energy into ENERGYSTAR® as pounds of steam.  Rather, you would be entering it based on the fuel you used to fire the boilers.  This would reflect the net energy input required to bring the returned condensate back up to boiling temperature along with converting it to steam.

That’s not to say you would not be interested in the pounds of steam produced because that would tell you about the efficiency of your generating process.  And you would also be interested in the net energy change that occurred as the steam was condensed and the condensate was cooled, either intentionally or via parasitic losses like leaks or poor insulation.  If you had energy recovery devices in your boiler flue, you would want to consider their contribution also.

<Return to Contents>

The Operating Cost

If you were to compare the cost of a million Btus in the form of gas, which you would then burn in a boiler to make steam, with the cost of a million Btus delivered by a third party supplier as steam, the steam option would seem crazy expensive.   And it is, if all you look at is the cost of a Btu.

But, if an Owner elects to buy steam instead of gas, part of what they are electing to do is to not operate a boiler plant.   That has a number of implications including:

  • No need to purchase the boilers and related auxiliary equipment in the first place.
  • No need to operate the boiler plant, which may require operators with a different skill set from those needed to simply use steam rather than generate it.  It may also require a round the clock operator presence depending on the pressure and temperature of the steam that is required.
  • Dealing with natural gas increases the level of risk associated with operations compared to dealing with just steam (which is not without risk).
  • The reliability of a central plant may be much higher than a local plant unless significant investments were made in machinery and systems to provide N+1 redundancy at a local level.
  • The ASHRAE Systems and Equipment handbook has a chapter dedicated to  District Heating and Cooling systems that includes a discussion of the economic considerations and other issues if you want to learn more.

    <Return to Contents>

Campus District Steam Systems

It is not unusual at all for college, university, industrial and commercial building campuses (like the wafer fab I worked at) to use a central steam plant to serve multiple buildings on one site, basically a district steam system approach.  However, unlike the commercial district steam system we have been looking at, most of the systems I have been around are set up to return the condensate to the central plant.

Typically, this is accomplished by providing one or more condensate receivers for each building to capture the condensate for the facility.  The receivers are equipped with pumps that move the condensate from the receiver to a return system that collects it and returns it to a receiver in the central plant.

From there it is pumped to a feedwater system where any necessary make-up water and water treatment chemicals are added and where it is often deaerated (heated to drive out dissolved oxygen).  Pumps then move the treated condensate (now called feedwater) into the boiler as required by the load conditions, usually based on boiler water level.  Thus, the energy and water associated with the distributed steam are recovered instead of being dumped to the sewer.

The picture below will give you a sense of what this might look like.  It is from the central plant at the wafer fab I worked at for a while.

Boilers

The cylinder in the lower left is one of the high pressure boilers.  We generated steam at 100 psig and distributed it to various locations on the site, where it was reduced to 5-10 psig for use in heat exchangers and coils.

The large elevated cylinder in the center of the picture is the deaerator and feedwater tank.  The feedwater pumps are located below it.  Condensate was returned to this tank by condensate pumps at the various points of use out in the facility.  The picture below will give you a visual on what a typical condensate pump looks like.

Condensate Pump

In the deaerator, the returned condensate was heated to 200°F+ to drive out the dissolved oxygen.  Then it was pumped to the boilers by the feedwater pumps when needed based on the water level in the boilers.

So for a steam system of this type, you really would be justified in doing some sort of analysis similar to the example in the previous post to come up with the kBtus delivered to the facility from the pounds of steam that you consumed (including the parasitic losses), even if you are billed by the central plant based on pounds of steam.  That would allow you to enter your consumption using a multiplier of 1 instead of 1.194.  And that would be legitimate (in my estimation) because by recycling the condensate, you are returning the energy and water associated with it back to the process rather than throwing it down the drain.

<Return to Contents>

Why Not Return the Condensate?

You may be wondering why a commercial district steam system would not include a return system that allowed them to collect and recycle the condensate from the loads they serve.  I can’t say that I know the answer to that for sure.  But my guess is that it has to do with a number of economic and operational factors that make it financially more attractive for the business entity to not deal with a condensate return system.

There are a number of things that make dealing with a condensate return system challenging, especially a system that covers an extensive area.  The map below illustrates the piping network associated with Clearway Energy Thermal in San Francisco; the company provides district steam in a number of cities across the country.

Clearway-SFO-Map_thumb

To give you a sense of scale, the map is probably in the range of 1-1/2 miles on a side. That is a pretty significant network to maintain; miles and miles of pipe running underground below streets and sidewalks.   Challenging enough for the steam piping, which is at high pressure and experiences significant thermal expansion and contraction.

While the pressures would be lower for a condensate return system, the thermal expansion and contraction issues will still exist.  And you would need to have multiple pumping stations to move the condensate back to the central plant location.  

Probably most significantly, condensate tends to be corrosive for a number of reasons.   And ensuring that the customers maintain the equipment necessary to return the condensate to the system can also be an issue.

So, those are some of the reasons that I suspect a commercial supplier finds it easier (more economical) to not deal with returning condensate.  Over time, as the value of energy and water increase, that could change.  After all, when we dump the condensate to drain, we are throwing away at least two resources (energy and water) and probably a third (boiler feedwater water treatment chemicals).

<Return to Contents>

*Sigh*

All of this may lead to the question:

What can we do to make steam and condensate return systems as efficient as possible?

The answer (as you might guess) is:

It depends …

The first thing to consider is if you have maximized the extraction of energy from the steam and condensate that was delivered to you.  The other is to make sure you are maintaining the mechanisms that deliver those benefits.

<Return to Contents>

Maximizing the Benefits

One way to maximize the benefits of a high temperature resource like steam is to make sure you have reduced the temperature in a way that provides useful heat to the facility as much as possible.

Cooling the Condensate via a Separate Process

It is easy to think that the energy benefit of steam is associated with condensing it.  And in the context of Btu’s per pound extracted, a phase change beats sensible cooling hands-down.   But, given that the condensate coming off a process that is condensing steam at atmospheric pressure is still quite hot, there may be some significant benefit associated with subcooling it.

For the process we looked at in the previous post, when I illustrated how to use a p-h diagram, the condensate came off the process at 212°F.   If there are loads in the facility that can be served by a fluid that is at this temperature or lower, then it may be possible to serve them by cooling the condensate rather than by condensing steam. 

Examples include processes like preheating outdoor air, preheating or heating domestic hot water, heating swimming pools, heating spaces and/or loads with less stringent temperature requirements like parking garages, and snow melting systems.  The viability of these processes from an economic standpoint can vary a lot, depending on:

  • Whether you are considering this option during design or in the context of an existing building, and/or
  • The value of the resources, and/or
  • What happens to the condensate after it leaves your facility (i.e., is it dumped to the sewer or is it recycled).

But to illustrate the point, let’s consider what would happen if we took the condensate coming off the process I illustrated in the p-h diagram in the previous post and subcooled it to 160°F, perhaps by using a heat exchanger to preheat domestic hot water or to hold it at about 150°F in a storage tank.

image_thumb31

As you can see, this would have recovered about 30% of the energy that would otherwise have been thrown down the drain, based on the district steam conversion factor that ENERGYSTAR® would use for systems that were billed in terms of pounds of steam consumed.[i]
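Here is roughly where that 30% figure comes from, using the approximation that liquid water holds about 1 Btu/lb for every degree Fahrenheit above 32°F.  The sketch below uses the temperatures from the example and approximate steam table enthalpies, so treat it as a check of the arithmetic rather than a precise accounting.

# Rough arithmetic behind the "about 30%" figure: subcooling 212F condensate to
# 160F recovers part of the energy that would otherwise go down the drain.
# Liquid enthalpy is approximated as (T - 32) Btu/lb; values are approximate.

h_condensate_212 = 212 - 32        # ~180 Btu/lb left in the condensate off the process
h_condensate_160 = 160 - 32        # ~128 Btu/lb after preheating domestic hot water

recovered = h_condensate_212 - h_condensate_160          # ~52 Btu/lb put to use
fraction_of_drain_loss = recovered / h_condensate_212    # share of what was being dumped

print(f"Recovered {recovered} Btu/lb, or {fraction_of_drain_loss:.0%} of the energy "
      "that would otherwise have gone down the drain")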

An interesting paradox about this is that if you made this change in a facility where the domestic water heating was provided by electricity, you would see a drop in electrical consumption but no increase in the pounds of steam that were used.  That is because you would have been extracting more energy from the steam consumed for other purposes before discarding it to sewer. 

In contrast, if the domestic water had been provided by using steam in a heat exchanger directly, this change likely would have reduced the steam consumption because you would have been extracting more energy from the steam that was used by other processes, like preheat, heating, and reheat, before discarding the condensate.

Of course, for this to all work out, the loads generating the condensate would need to be concurrent with the domestic hot water load requirement.  If they weren’t, then alternative energy sources would need to be used to meet the load.

<Return to Contents>

Cooling the Condensate by Optimizing Process Set Points using a Reset Schedule

The Design Day is Not Everyday

If you study load profiles for a while, you will realize that the design condition is an anomaly.  In other words, equipment selected for the 99% ASHRAE heating design condition will be oversized for about 99% of the hours in the year.  The psych chart below illustrates this for Columbus, Ohio, a location that sees a wide range of outdoor conditions over the course of a year.

image_thumb[1]

The colored squares are a bin plot of the climate data;  warmer colors represent more hours at the conditions inside the square than cooler colors, as can be seen from the key at the lower left of the chart.   Notice how most of the data points lie between the different design values, not on them.

That means that if, for instance, you selected a reheat coil serving a perimeter zone where, on the design day, the coil needed to supply 94-95°F air to offset the losses that were occurring through the envelope, then as it warmed up outside, the coil would not need to supply air at that temperature, all other things being equal.

Heating and Reheating are Different Processes

In fact, once the outdoor air temperature rose above the balance point for the building (the point where the internal gains exactly offset the losses through the envelope), the coil would no longer need to provide heat;  it would only need to provide reheat and, in the worst case, deliver air at the zone temperature (a.k.a. “neutral air”).  This is a very important point to understand.

Since this post is already very long, I will save a detailed discussion of this for a subsequent post.  But in a nutshell (perhaps a coconut shell) a coil that is doing heating is adding energy to the area it serves to offset losses (usually envelope losses) in order to maintain the desired space temperature.  Thus, it will need to deliver air that is warmer than the targeted space condition.

In contrast, a coil that is doing reheat is delivering air that is cooler than the space condition but warmer than the air that is coming from a central system serving multiple zones.  The reason for doing this is that the central system leaving air temperature was likely set based on a design day dehumidification requirement.  Then the flow rates to the zones were set based on the zone sensible load and the design day coil leaving air temperature.  

Because of the design process I just described, given a mix of zones, it is possible that an interior zone, say a server room, with a very constant load condition, will require the design day flow rate and temperature under all operating conditions.  In contrast, a perimeter zone likely will not because the transmission and solar loads will change from hour to hour, day to day, and season to season.  Thus the design day flow rate and temperature will tend to over-cool it much of the time.

For the perimeter zone, this could be mitigated up to a point by reducing the flow rate.  But there can come a point when the flow rate has been reduced to the minimum flow required for ventilation and delivering air at that rate and at the design day supply temperature (which can not be raised because the server room still needs it) will over-cool the zone.  Thus reheat becomes necessary if we want to keep the zone clean, safe, comfortable, and productive, which are the basic goals of an HVAC process.

So, the reheat coil warms the air up slightly.  But since there is still a need for some cooling, the air is still delivered to the zone below the zone temperature.  In the limit, the highest temperature the reheat coil would need to provide, under conditions where there were no energy losses from the space, would be the space design temperature, which maintains the ventilation requirement without over- or under-cooling the space.

Real World Coil Performance and Performance Requirements

It turns out that a coil that is selected for the design heating condition using, for example, 180°F water, can provide reheat with much cooler water.   I discovered this one day early in my career when the “dots connected” about the difference between reheat and heating.  Joe Cook (the lead operator at the facility I was working in at the time) then proved it by lowering the water temperature on the system until he got a cold call.

In other words, Joe “asked the building” and I attribute my belief in that process (note the words in the banner of the blog) to this event and Joe.  Tom Stewart and I eventually wrote a paper about it for ACEEE, which you can find here if you are interested.

You can also demonstrate this by modeling a coil, locking down the physical characteristics like the fin spacing, circuiting, face area, etc. and then playing with the entering water temperature and flow rate to see what happens.   Here is an example I developed using Greenheck’s free coil selection program.

Modeling a Coil on the Design Heating Day

I first modeled the coil to serve the heating load in a perimeter zone, which required 94-95°F air on the design heating day.  Here are the coil’s physical characteristics …

image_thumb33

… and here is the performance on the design day supplied with 180°F water and taking a 20°F temperature drop on the water side to match the heat exchanger selection I have been using as an example in this post.  The entering air condition is 53°F, the design day cooling coil discharge temperature that is required by a server room on the same air handling system, even though it is the design heating day.

image_thumb37

Modeling the Same Coil on a Day When Only Reheat Is Required

Here is the performance achieved with that same coil if I reduce the entering water temperature to 110°F and take a 20°F waterside temperature drop with 53°F entering air.

image_thumb39

Note that I am able to deliver 67.4°F air and only use 1.9 gpm to do it (35% of the design flow rate).   If I were to maintain the design flow rate of 5.5 gpm, I could deliver near neutral air.

image_thumb42
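If you do not have a coil selection program handy, you can get a feel for the same trend with a simple fixed-UA effectiveness-NTU model like the sketch below.  It is only a directional illustration:  a real coil’s UA changes with water velocity and the circuiting details that the vendor software accounts for, and the UA value, airflow, and flow rates in the example are hypothetical placeholders rather than the Greenheck selection shown above.

# Directional illustration only: a counterflow effectiveness-NTU coil model with
# a fixed UA.  Real coil UA varies with water velocity, so use vendor selection
# software (like the Greenheck program above) for actual numbers.  The UA, cfm,
# and gpm values here are hypothetical placeholders.
import math

def leaving_air_temp_f(t_air_in, t_water_in, cfm, gpm, ua):
    c_air = cfm * 1.08                     # sensible capacity rate of the air, Btu/hr-F
    c_water = gpm * 500.0                  # capacity rate of the water, Btu/hr-F
    c_min, c_max = min(c_air, c_water), max(c_air, c_water)
    cr, ntu = c_min / c_max, ua / c_min
    if abs(1.0 - cr) < 1e-6:
        eff = ntu / (1.0 + ntu)
    else:
        eff = (1 - math.exp(-ntu * (1 - cr))) / (1 - cr * math.exp(-ntu * (1 - cr)))
    q = eff * c_min * (t_water_in - t_air_in)   # heat delivered to the air, Btu/hr
    return t_air_in + q / c_air

UA = 470.0   # Btu/hr-F, hypothetical, roughly sized to give ~95F air on the design day
print(leaving_air_temp_f(53, 180, cfm=1000, gpm=5.5, ua=UA))   # design day heating duty
print(leaving_air_temp_f(53, 110, cfm=1000, gpm=5.5, ua=UA))   # same coil, cooler water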

Heat Exchanger Performance at a Reduced Leaving Water Temperature and a Lower Flow Rate

If we look at how the heat exchanger I have been using in this example would perform if I reduced the water side flow rate by 50%[ii] and lowered the set point from 180°F to 110°F, it turns out that the condensate coming off of it would be at 141.4°F.   Here is what that looks like if you plot the process out on the p-h diagram.

image_thumb44

Here is that same diagram at a smaller scale and cropped to focus on the condensate condition (left image) next to the design day process (right image) so you can compare them.

image_thumb56

Notice how the condensate leaving the lower temperature heat exchanger process has an enthalpy of 109 Btu/lb compared to 181 Btu/lb for the design day process.  Thus, operating at a lower temperature allows us to recover more of the available energy from the steam that was delivered.

More specifically, by operating at a 110°F supply water temperature, we now recover 1,084 Btu/lb from the steam vs.  the 1,012 Btu/lb that we recovered operating at a 180°F supply water temperature set point. That’s a 6% improvement in making beneficial use of the 1,194 Btu/lb that the ENERGYSTAR® conversion factor would attribute to a district steam system where the condensate was dumped to sewer.
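The arithmetic behind those numbers is simple enough to script, as shown in the sketch below.  The enthalpy values are the ones read off the p-h diagrams above, and the 1,194 Btu/lb figure is the ENERGYSTAR® district steam factor discussed in the previous post, so any small differences from the numbers quoted above are just rounding.

# Recovered energy per pound of steam = (energy attributed to the delivered steam)
# minus (energy left in the condensate that goes down the drain).
# Enthalpies below are the values read from the p-h diagrams above.

H_DELIVERED = 1194.0        # Btu/lb, ENERGYSTAR district steam conversion factor
H_COND_180F_SETPOINT = 181  # Btu/lb, condensate off the design (180F supply) process
H_COND_110F_SETPOINT = 109  # Btu/lb, condensate off the reset (110F supply) process

recovered_design = H_DELIVERED - H_COND_180F_SETPOINT   # ~1,013 Btu/lb
recovered_reset = H_DELIVERED - H_COND_110F_SETPOINT    # ~1,085 Btu/lb
improvement = (recovered_reset - recovered_design) / H_DELIVERED

print(f"Design set point recovers {recovered_design:.0f} Btu/lb")
print(f"Reset set point recovers  {recovered_reset:.0f} Btu/lb")
print(f"Improvement: {improvement:.0%} of the delivered {H_DELIVERED:.0f} Btu/lb")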

But Wait, There’s More!

There would also be savings due to lower parasitic losses in the piping network.  In other words, even with insulation meeting code requirements for piping operating at 180°F, there are still losses. 

You can get a sense of this by using 3EPlus, a free application from the North American Insulation Manufacturers Association.  Here are screen shots comparing a  4 inch line operating at 180°F with code required 2 inches of insulation in a 75°F ambient temperature to that same line operating at 110°F.

image_thumb58

The lower water temperature results in a 70% reduction in losses.  And while the Btu/hr/ft values are small, this is a situation where a little times a lot results in a big number.  In other words, there is an amazing amount of pipe in a typical building system, sometimes several miles.  So if you save 10-15 Btu/hr/ft over thousands of feet of length, it can add up.
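To see how a little times a lot adds up, the sketch below runs the numbers for a hypothetical distribution system.  The per-foot savings is in the range suggested by the 3EPlus comparison above; the footage, operating hours, and steam cost are placeholders you would replace with your own values.

# "A little times a lot": annual impact of a small per-foot reduction in pipe loss.
# The per-foot savings is in the range suggested by the 3EPlus comparison above;
# footage, hours, and steam cost are hypothetical placeholders.

savings_btuh_per_ft = 12        # Btu/hr-ft saved by running at 110F instead of 180F
pipe_length_ft = 3000           # a medium-sized building can easily have this much
hours_per_year = 8760           # distribution loop kept hot year-round
steam_cost_per_mmbtu = 15.0     # $/MMBtu, placeholder for a district steam rate

annual_mmbtu = savings_btuh_per_ft * pipe_length_ft * hours_per_year / 1e6
print(f"~{annual_mmbtu:.0f} MMBtu/yr, roughly ${annual_mmbtu * steam_cost_per_mmbtu:,.0f}/yr")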

Reset Schedule Bottom Lines

The bottom line is that implementing a reset schedule that adjusts the supply hot water temperature based on the outdoor air temperature will save resources for a number of reasons.

  1. More of the available energy that was delivered as steam is recovered before the condensate is discharged to the sewer.
  2. The parasitic losses associated with the distribution system are reduced.
  3. Because of items 1 and 2, the pounds of steam consumed will be reduced, improving the building’s benchmark.
  4. If the piping ran through places that contain conditioned air, like a ceiling return plenum, then the reduction in parasitic losses will also represent a reduction in cooling load.
  5. Because the building is using fewer pounds of steam, it will use fewer pounds of water, another important resource that we need to do our best to conserve.

All of this can be accomplished for a modest investment because in most situations, all that is required is a minor modification of the control system to add the reset schedule.  If the control system is a DDC system and was already monitoring outdoor air temperature, the improvement could be captured by making a relatively simple modification to the software.  The images below illustrate what this logic might look like before …

HHW-Logic---Basic_thumb1

… and after modification.

HHW-Logic---Reset_thumb1

Note that the “after” version includes some other enhancements like trending and graphic indication.   The diagrams were developed using an Excel based logic diagram tool that you can download here, along with the actual logic diagrams.  If you want to dig in and understand it a bit, you will find an exercise here that uses a virtual EBCx project in a SketchUp model as a mechanism to present the opportunity and develop the logic.
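For reference, the math inside a typical reset block is just a linear interpolation between two outdoor air temperature break points, clamped at the ends.  The sketch below shows the idea in code; the break points and temperatures are hypothetical examples, not values pulled from the logic diagrams above, and in a real DDC system this would of course live in the controller’s programming language rather than Python.

# A hot water supply temperature reset schedule is just a clamped linear
# interpolation on outdoor air temperature.  The break points below are
# hypothetical examples; a real system would tune them by "asking the building".

def hw_supply_setpoint_f(oat_f,
                         oat_low=20.0, hws_at_low=180.0,    # design heating condition
                         oat_high=60.0, hws_at_high=110.0): # mild weather / reheat only
    if oat_f <= oat_low:
        return hws_at_low
    if oat_f >= oat_high:
        return hws_at_high
    fraction = (oat_f - oat_low) / (oat_high - oat_low)
    return hws_at_low + fraction * (hws_at_high - hws_at_low)

for oat in (10, 20, 40, 60, 70):
    print(f"OAT {oat:>3}F -> HW supply set point {hw_supply_setpoint_f(oat):.0f}F")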

<Return to Contents>

Flash Steam

It is not uncommon for the loads served by a steam system to use steam at a pressure significantly higher than atmospheric pressure.  The distribution systems we have been discussing for district steam systems are one example.  For these networks, because insulation is not perfect, energy is lost from the piping and some of the distributed steam condenses.  Condensation loads are even higher at start-up, when the piping is cold.

It is critical that this condensed steam be removed from the piping system to avoid significant operating problems and even catastrophic failures.  Towards this end, steam traps are provided at regular intervals and at elevation changes in the distribution system.  These traps are termed “drip traps” and the condensate coming off of them will be saturated liquid at the saturation temperature associated with the steam in the distribution system.

Steam fired sterilizers in labs and hospitals are another example of a load that must be served at a higher pressure, typically requiring steam at approximately 30 psig (often termed “medium pressure steam” in the industry).  The saturated condensate coming off of these loads is at a temperature above the 212°F saturation temperature associated with atmospheric pressure;  in this case, about 273°F.

As a result, if the condensate was dumped into a return system that is open to atmospheric pressure, some of the condensate will “flash” to steam.   In other words, the 273°F saturated condensate coming off a 30 psig (44.7 psia) process will have a lot more energy than saturated condensate at 212°F.  The temperature difference reflects some of the additional energy content at the higher saturation temperature. 

The enthalpy (total available energy) of the saturated 30 psig condensate is about 243 Btu/lb.     If you reduce the pressure that it experiences to atmospheric pressure, the condensate cannot exist at a saturated state and remain at 273°F;  it has too much energy to do that.

It solves this problem by converting some of its liquid to steam;  exactly enough mass to absorb the excess energy.  You can use a steam table like the one I provided earlier to figure out exactly how much of the liquid will be converted to steam by reading the appropriate data directly or by interpolation.

image_thumb2

Or, you can plot the process out on a thermodynamic diagram like a p-h diagram where the process will look just like the throttling process we looked at previously and occur at a constant enthalpy.

image_thumb4

One thing that is more apparent from the p-h diagram plot, at least to me, is that the result of the process is not pure, saturated water vapor.  Rather, it is a mix of saturated liquid and saturated vapor, a.k.a wet steam.  This is what the thermodynamic term “quality” that I mentioned in the first post in the series is about.  

Note that the “Flashed Steam Condition” is at about the 6.4% quality point (the constant quality lines are the curved, dashed black lines that mirror the saturated liquid and vapor lines). What this is saying is that when the 242.9 Btu/lb condensate flashes to atmospheric pressure, 6.4% of its mass ends up as steam, where a significant portion of the available energy (1,151.1 Btu/lb) could be captured by condensing it, which would provide 970.8 Btu/lb (1,151.1 Btu/lb – 180.3 Btu/lb).  The bulk of the mass remains saturated liquid (condensate), where the available energy (180.3 Btu/lb) could be captured by cooling it.
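That 6.4% also falls out of a one-line energy balance:  the enthalpy the condensate carries in excess of saturated liquid at the lower pressure has to show up as latent heat in the vapor that forms.  The sketch below uses the steam table values quoted above.

# Flash steam fraction from an energy balance: the excess liquid enthalpy above
# saturation at the lower pressure is absorbed by vaporizing part of the mass.
# Enthalpies are the steam table values quoted above.

h_f_30psig = 242.9      # Btu/lb, saturated liquid at ~30 psig (sterilizer condensate)
h_f_atm = 180.3         # Btu/lb, saturated liquid at atmospheric pressure (212F)
h_fg_atm = 970.8        # Btu/lb, latent heat of vaporization at atmospheric pressure

flash_fraction = (h_f_30psig - h_f_atm) / h_fg_atm   # mass fraction that flashes
print(f"About {flash_fraction:.1%} of the condensate mass flashes to steam")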

Hopefully, in light of the preceding, you can see that if your high temperature condensate is going to end up at atmospheric pressure, then it will “flash”, although perhaps not in the way a non-thermodynamically oriented person would think of the term.

Stop-when-flashing_thumb

(I thought I would insert that as an amusing comic interlude and a reward for anyone who is still actually reading this.)

If you simply dump it into the low pressure return, a lot of problems can occur, including condensation induced water hammer (which can be quite destructive), along with poor return system performance in terms of steadily removing condensed steam from the loads and returning it to the collection point.

This problem is addressed by providing flash tanks, which are sized to allow the flashing process to occur without causing problems.  Here are pictures of a couple.

Blow-Down-Flash-Tank_thumb1 AHU5-equipment-room-flash-tank_thumb

Flash-Tank_thumb1

A number of steam system vendors provide very useful information about flash tanks, including Sarco and Armstrong if you want to know more.  

My point here is to say that the 970.8 Btus/lb of energy in the low pressure steam coming off of a flash tank is just as useful as low pressure steam generated in a boiler.  Yet, you frequently find them vented to atmosphere.    This may represent an opportunity.  

One way of capturing the benefit is to vent the flash tank to the low pressure system header.  This will move the “Flash Steam Condition” line on the p-h diagram upward from atmospheric pressure. The lower the header pressure is, the more energy you recover.

<Return to Contents>

A Few Paradoxes

All of the opportunities we explored would extract more energy from a pound of steam relative to the process that occurs in the heat exchanger operating at the design supply water temperature.  As a result, they will reduce the pounds of steam consumed all other things being equal. 

In addition, the lower distribution temperatures associated with the reset schedule will save additional energy by reducing parasitic losses.  And using flash tanks to drop the temperature and pressure of medium and high temperature condensate will keep the condensate return system running more smoothly and quietly.

But, if the condensate is being recycled instead of dumped to sewer, the lower condensate return temperatures will mean that the boilers will need to add a bit more energy into the feedwater to get it to the steaming temperature as compared to what would be required if the condensate came back hotter.  So for systems that recycle their condensate, the impact of the lower temperature condensate on the cycle efficiency will be different from what it would be for a system where the condensate is dumped to sewer.

On the other hand, if the piping runs back to the central plant were long, there could be benefit to the lower temperature condensate because the energy would have gone into a useful process instead of being lost to the ambient environment on the way back to the plant. 

In other words, if the 200°F condensate leaving the heat exchanger has cooled to 140°F by the time it gets back to the central plant to be recycled due to the time it spent sitting around in condensate receivers and in long piping runs, then the boilers are going to have to heat it up from 140°F to the steaming temperature anyway.

In contrast, if it was cooled to 140°F to serve a domestic hot water load before being returned to the plant, the parasitic losses in the return system would be reduced and additional energy would have been extracted from the system for a useful purpose.

Extracting as much energy as possible for a useful purpose will improve the over-all cycle efficiency and will lower the parasitic losses in the condensate return system since it will be operating at a lower temperature.

<Return to Contents>

Thus far, we have talked about how to maximize the amount of energy extracted from a pound of steam.  In the final post in this series, we will look at how to ensure peak efficiency for your steam system in the long term.

David-Signature1_thumb_thumb_thumb

PowerPoint-Generated-White_thumb2_thDavid Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/

[i]     The ENERGYSTAR® conversion factor implies that you would reduce the enthalpy of the incoming steam to 0 – which is about where the saturated liquid (dark blue) line crosses the enthalpy axis –  if you recovered all of the energy represented by a pound of steam.

[ii]    This was an arbitrary selection on my part.  You will recall that the coil I modeled could do quite a bit of reheat with only 35% of its design flow rate and a lower entering water temperature.  And it could deliver near neutral air if supplied with its design flow rate at the lower water temperature.

It would be somewhat unusual for an occupied zone to require neutral air if the building was above the balance point;  basically, that would indicate that there was no load in it and that you were still moving air through it.  Thus, for the sake of discussion, I assumed that a variable flow hot water system serving multiple zones and operating with a reset schedule that lowered the supply temperature as the outdoor air temperature rose would operate at less than design flow, and I arbitrarily selected 50% of design flow.


What is the Energy Content of a Pound of Condensed Steam? (Part 1)

or, It Depends …

This post started out as an e-mail answering a question from one of the folks taking the Existing Building Commissioning Workshop this year at the Pacific Energy Center.   But as I worked on it, I realized that the question had come up before and that the answer and related concepts might be useful to others. On the surface, it seems like a simple question.  But if you really want to understand it, the answer is fairly complex.  Thus, this blog post.

Contents

This ended up becoming quite a long post (surprise, surprise, surprise).  So, I broke it up into several posts, which are still somewhat long.  To minimize the pain for someone just wanting the bottom line, I have included a table of contents that will allow you to jump to a topic of interest.  The “Return to Contents” link at the end of each section will bring you back here.

Overview

Students participating in the workshop are required to have access to a building that they can use as a living laboratory to apply the EBCx skills we teach in the class.  One of the first things they do is benchmark their building in the LBNL Building Performance Database and ENERGYSTAR®.  To benchmark, you typically need to convert the annual energy consumption of a facility into some sort of index, typically an EUI (Energy Use Intensity or sometimes also called an Energy Utilization Index). 

EUIs can be stated in terms of site or source energy.  If you want to know more about the difference, this blog post will provide the details.  In the discussion that follows, I will be considering things in terms of site energy.

EUIs typically have engineering units in the form of energy use per unit area per year, such as kBtu/sq.ft. per year (kilo or thousands of British Thermal Units per square foot per year).   Energy is not always billed directly as Btus.  For instance, electricity is billed in terms of kWh or kilowatt-hours consumed.  District steam is often billed as pounds of steam consumed.  To create an EUI from the billing metrics, you need to convert the billing units to Btus.

In the industry, most people are pretty familiar with the conversion factor for kWh to Btus, which is 3,413 Btus per kWh (3.413 kBtu per kWh) and pretty invariable.   But there is less familiarity with how to convert a pound of steam to Btus, and there can be some variability related to exactly how the thermal energy is billed (kBtus, pounds of steam, thousands of pounds of steam, etc.) and the nature of the steam source (district steam, central plant, or boilers on site).  Bottom line, if you want an exact value, it can become more complex than the single factor used to make the electrical conversion.

<Return to Contents>

The Question

As you may have guessed by now, the question I was asked was how to go about converting pounds of steam to Btus.   The answer is:

It depends ….

One of our students has a facility that purchases steam from a district steam system[i] and their bill states consumption in the form of Mlbs.  For example,

Total usage invoiced in Mlbs –  301.3

Note the letter “M” which means the unit of measure is not simply pounds, it is some multiple of pounds.

So the first part of answering the question is to determine what the “M” stands for, because to correctly answer the question,

It depends on the units of measure.

Most of us (probably because of computers) would take the M to be the SI (System International; often referred to as metric) prefix denoting a factor of one million (1,000,000) as in the MBytes or MB associated with a file or hard drive size.  Thus we might conclude the bill is stating that the facility was being invoiced for 301.3 x 1,000,000=301,300,000 pounds of steam.

Unfortunately, that turned out not to be true in this case.

<Return to Contents>

Confusing Units

It turns out that there is another system of units that uses “M” for a multiplier;  the Roman Numeral System, where “M” is used to indicate thousands (1,000), not millions (1,000,000).  And to make things interesting, the industry uses both systems and (to me at least), seems to figure you will simply know which one applies. 

If you have been in the industry for a while, that is probably true.  But if you are new to it all (or suffer from aging brain cells like I seem to), then it can be confusing.  

For example, we have control systems that are moving and storing MB or megabytes of data (where mega is the SI prefix for millions, so millions of bytes).  These systems can be monitoring and managing air handling systems that are moving cfm of air (where the “c” stands for “cubic”, not the SI prefix “centi” or hundredths, nor does it mean hundred, which is what it would stand for if it was a capital letter in the Roman Numeral system).

The air is often being cooled using electricity, which is often billed as kWh (where the “k” means the metric prefix “kilo” or thousands of watt hours), and heated, perhaps, with steam generated by a boiler that might be rated in terms of MBtu (where the M is the Roman Numeral M and means thousands of Btu), or MMBtu (still the Roman Numeral M, but two of them, meaning thousand thousand, or million Btu).

If the boiler is fired using natural gas, then the gas might be billed in terms of MCF (thousands of cubic feet, where the M stands for the Roman Numeral thousand, the C stands for cubic, not the Roman Numeral for 100, and the F stands for feet), or in terms of therms (a therm is 100,000 Btus).

Or the consumption could be billed in terms of Dth (which combines therm with the metric prefix “Deka” or 10 to stand for 10 therms), which is approximately the same amount of energy as an MCF of natural gas (see above) depending on the exact heat content of the gas, which varies with the source of the gas.

Other than nuances like that, we have a pretty straight-forward system of units in the industry. So there should be little confusion about what things mean.
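If you find yourself tripping over these prefixes when you build an EUI, it can help to write the conversions down once and let a script apply them, something like the sketch below.  The factors for electricity and therms are standard; the MCF value is a typical natural gas heat content that varies with the source, and the steam factor carries the assumption discussed later in this post (latent heat at atmospheric pressure, with “M” taken to mean thousands), so adjust them for your situation.

# One place to keep the billing-unit conversions so the prefixes only have to be
# sorted out once.  The gas heat content and the Btu/lb of steam are assumptions
# (gas varies by source; the steam factor is discussed later in this post).

BTU_PER_UNIT = {
    "kWh": 3_413,               # electricity
    "therm": 100_000,           # natural gas
    "Dth": 1_000_000,           # dekatherm = 10 therms
    "MCF_gas": 1_030_000,       # ~1,030 Btu/cf is typical; varies with the gas source
    "Mlb_steam": 1_000 * 970.8  # thousand pounds of steam, condensed at atmospheric
}

def to_kbtu(quantity, unit):
    """Convert a billed quantity to kBtu for an EUI calculation."""
    return quantity * BTU_PER_UNIT[unit] / 1_000

# Example: the bill discussed below, 301.3 Mlb of steam
print(f"{to_kbtu(301.3, 'Mlb_steam'):,.0f} kBtu")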

<Return to Contents>

Asking the Source

The student who asked the question went to the source (the utility representative) for clarification on the units on the bill.  And in this case, they were told that the M (Roman Numeral) actually stands for thousands (the equivalent of the SI prefix k), meaning that their bill was for thousands of pounds of steam.

So it seems that all that is needed now is to figure out how many Btus are released when you condense a pound (or a thousand pounds) of steam.  Frequently, that is done by making an assumption about the amount of energy associated with the phase change.  But if you want a more exact answer, it is a bit more complex than a single number.

It is also an interesting (in a nerdy sort of way) saturated system physics exercise.  So I thought it would be worth looking at both techniques.

<Return to Contents>

Using a Simplifying Assumption

There is nothing at all wrong with using a simplifying assumption.  Being math-phobic and often pressed for time in terms of coming up with an answer, I do it all of the time. But if you do it, I think it is important to recognize the constraints that your assumption placed on the result so you don’t take yourself too seriously if the discussion becomes more precise.  And you need to understand if the assumption can actually be used in the context of a given discussion.

In this case, our simplifying assumption might be based on the fact that most condensate return systems are open to atmospheric pressure at some point, usually at the condensate receiver.  So, we could look at the amount of energy released if we were to condense 1 pound of steam at atmospheric pressure.

You can find this value in a steam table.   Steam tables contain empirically derived values for the various properties of water under different conditions of temperature and pressure.   You can find them in classic publications like Keenan and Keyes, on line, in the ASHRAE handbooks, or you can even build one yourself as a learning exercise using REFPROP, like I did to create the table below.

Steam-Table_thumb4

Note that the pressures in the second column are in absolute pressure units, not the gauge pressure units we are probably more accustomed to.  In other words, the pressures are referenced to a pure vacuum, 0 psia.   So atmospheric pressure is 14.71 psia or 0 psig.

The value we are interested in is the latent heat of vaporization at atmospheric pressure (highlighted in orange above) which is the difference between the enthalpy of the water vapor (steam) and the enthalpy of the liquid water at the condition we are interested in.  In this case, the value is 970.8 Btu/lb.

To estimate the amount of energy associated with a bill for 301.3 thousand pounds of steam based on the assumption that the steam was condensed at atmospheric pressure, we could do a bit of simple math, like this.

image_thumb101

If we needed to convert this to millions of Btu, we would just divide the result by 1,000,000, like this.

image_thumb8

We could even create a multiplier that we could directly apply to future bills to give us the answer.

image_thumb18
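For anyone who prefers to see the arithmetic in script form rather than in the images above, here is the same calculation.  The 970.8 Btu/lb value carries the assumption we just made (condensing at atmospheric pressure), and the bill quantity is the example from the question.

# The same arithmetic as the images above, assuming the steam is condensed at
# atmospheric pressure (970.8 Btu/lb) and that "Mlb" means thousands of pounds.

bill_mlb = 301.3                        # thousands of pounds of steam on the bill
latent_heat = 970.8                     # Btu/lb, latent heat at atmospheric pressure

total_btu = bill_mlb * 1_000 * latent_heat
total_mmbtu = total_btu / 1_000_000
multiplier = 1_000 * latent_heat / 1_000_000   # MMBtu per Mlb of steam

print(f"{total_btu:,.0f} Btu = {total_mmbtu:.1f} MMBtu")
print(f"Multiplier: {multiplier:.4f} MMBtu per Mlb")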

In fact, the student who inspired this post was planning on using this multiplier.  All I have done up to this point is illustrate where it came from and that there is an assumption behind it. 

How much does that assumption impact the accuracy of the EUI and benchmark?  Well,

It depends on the magnitude of the difference between the assumed value for the enthalpy change that occurs when the steam is condensed relative to the actual value of the enthalpy change produced by the thermodynamic processes used to extract energy from the steam at the facility.

It also depends on what you do with the condensate.

<Return to Contents>

Seeking A More Exact Solution

Truth be told, in the olden days, folks (such as myself) would assume that condensing a pound of steam was worth about 1,000 Btus.  It made the math easier if you were using a slide rule or a four-function calculator.  And, if you contemplate the steam table above, you can see that it probably meant we were accurate to within 10% or better over a pretty broad range of conditions.

But, if you consider what is really going on in the context of the data in the steam table, you realize that assuming the latent heat of vaporization is 970.8 Btu/lb or 1,000 Btu/lb could be wrong because:

It depends on the saturation temperature that the steam condenses at.

For instance, most steam systems deliver the steam to the loads they serve at a pressure that is above atmospheric pressure;  pressures of 3-15 psig are common.  For district steam systems, the delivery pressure can be significantly higher, perhaps as high as 60-150 psig or more, which is subsequently reduced to the 3-15 psig range at the end use facility.

If you look at the tariff that defines the rate structure and nature of the service for the utility supplying steam to the facility in question, you find that there are two potential delivery pressure ranges available from their distribution network, 5-10 psig and 20-120 psig, and that the company reserves the right to adjust the delivery pressure.

image_thumb21

Note that I have assumed the pressures are gauge pressures vs. absolute pressures. 

And, the term “quality” as used in the tariff is probably not the thermodynamic use of the term, given the reference to chemical constituents.  In other words, in a pure thermodynamic sense, the “quality” of saturated steam is a measure of its wetness; i.e. how much of the steam is pure vapor and how much of it is water that has yet to change phase.  More on this to follow.

It is also worth noting that some utilities will deliver the steam in a superheated state, not a saturated state.  All of these things have an impact on the energy content of the steam.

<Return to Contents>

Energy and Phase Changes;  Understanding the Process

If you perform the experiment I describe in this blog post, you will discover that it takes a whole lot more energy to change the state of water from a liquid to a vapor relative to what it takes to heat the liquid or vapor.  Here is an image from that blog post depicting the results of the experiment.  The paragraphs that follow describe the results.

image_thumb111

The red line in the picture is the temperature of the water in the tea kettle.  The green dashed line and blue solid line are the temperature of the space above the water.[ii]  Initially, this space is filled with a mix of air and water vapor.   But once boiling starts, with the lid on the kettle, all of the air will be driven out and it will fill with steam.

Heating the Water

If you observe what happens when I turn on the heat (the purple line is the watts into the burner on the stove), the temperature of the water and the water vapor mix both start to rise.  Since the liquid water is at atmospheric pressure but below the boiling temperature (a.k.a. the saturation temperature), we say that it is subcooled.   During this phase of the experiment, the burner was supplying 1 Btu to raise the temperature of one pound of water 1°F.

When the water temperature reaches 212°F, the water begins to boil, which creates steam, filling the area above the water with pure steam, and creating a saturated system where the temperature of both the water and the steam are the same (notice how the green and red lines converge). 

<Return to Contents>

Heating the Mixture of Water and Steam

Now, even though the burner is applying a steady amount of energy, the temperature of the water/steam mix holds constant.  That is because the energy from the burner is now being used to change the liquid water to steam (a.k.a. a phase change) and during a phase change the temperature remains constant at the saturation temperature. During this time, the  burner was supplying 970.8 Btus for every pound of water that was converted to steam.

When the last drop of water changed to steam, the burner was still supplying energy at a steady rate.  But since the mass of the steam contained inside the teapot at that point was quite low compared to the mass of water that was there when we started (most of that mass was now outside the teakettle condensing on the windows in the kitchen),  there was a lot of energy being supplied to a very small mass.  

<Return to Contents>

Heating the Steam

At this point, the phase change is complete, so all of the energy from the burner is applied to changing the temperature of the steam inside the pot.  Since it only takes about 0.5 Btus to raise the temperature of a pound of steam 1°F at atmospheric pressure (and there was much, much less than a pound of steam contained in the pot), the temperature spikes rapidly.  This elevation in temperature above the saturation temperature is called superheat.
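
Pulling the three stages together, here is a minimal sketch that tallies the energy for one pound of water using the approximate values from the experiment.  The 60°F starting temperature and 250°F final steam temperature are assumed example values, not data from the experiment.

```python
CP_LIQUID = 1.0       # Btu/(lb·°F) to heat subcooled water (approximate)
CP_STEAM = 0.5        # Btu/(lb·°F) to heat steam at atmospheric pressure (approximate)
LATENT_HEAT = 970.8   # Btu/lb to boil water at 212°F / atmospheric pressure
T_SAT = 212.0         # °F, the saturation temperature at atmospheric pressure

def btu_per_lb(t_start_f, t_end_f):
    """Energy to take 1 lb of water from subcooled liquid to superheated steam."""
    sensible_liquid = CP_LIQUID * (T_SAT - t_start_f)   # heat the water to boiling
    phase_change = LATENT_HEAT                          # change the liquid to vapor
    sensible_steam = CP_STEAM * (t_end_f - T_SAT)       # superheat the steam
    return sensible_liquid + phase_change + sensible_steam

print(btu_per_lb(60, 250))   # about 1,142 Btu, and most of it is the phase change
```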

<Return to Contents>

A Few New Terms

If you are new to thermodynamics, some of the terms that you observed in the steam table can be a little scary sounding.  After all, how many dinner conversations (with normal people) have you had where the words “enthalpy” and “entropy” were bandied about?

We are accustomed to concepts like temperature and pressure because we apply them directly in our day to day lives.  A weather forecaster may talk about a high pressure system moving into our area or that we can expect lower temperatures and humidity after a cold front moves through.   Or the recipe we select to prepare for dinner likely specifies a temperature that we should cook the food at, perhaps suggesting that we bring a pot of water to boil in preparation for making some pasta.

But in the course of day to day conversation, we seldom discuss enthalpy or entropy, even though those properties are also changing as we go about our daily lives.  For instance, the weather forecaster could have said that the enthalpy of the air is going to drop after the cold front passes.  And the recipe could have suggested that we increase the enthalpy of a pot of water until it reached saturation and then continue to add energy so that the water changes phase.

The point is that enthalpy, while an unfamiliar term in day to day life, is a property used to measure the total available energy in a substance at a given condition.   So, if we know the enthalpy change that a substance goes through in a given process, we know the energy change.[iii]  

Enthalpy is challenging to measure directly.  But since it is related to things that we can more readily measure, like temperature and pressure and moisture, some very dedicated individuals have been able to experimentally determine enthalpies for various substances and develop relationships that allow us to predict enthalpy based on other measurements and coefficients that are developed via the experiments. The thermodynamic diagrams that follow are simply graphical representations of these results.

<Return to Contents>

Enthalpy Depends on Temperature and Pressure

If you study the steam table I inserted previously,  you will discover that the latent heat of vaporization – i.e. the energy it takes to convert a pound of water to a pound of water vapor (a.k.a. steam) – varies as a function of the saturation temperature and pressure.  Stated another way, the enthalpy change associated with a phase change will vary with the temperature and pressure that the phase change occurs at.

For example, if the pressure is about 60 psig (or about 75 psia), then the latent heat of vaporization is more like 905 Btu/lb vs. the 970.8 Btu/lb we have discussed for water at atmospheric pressure.  Similar considerations apply for sub-atmospheric pressures.  And, as our experiment revealed, the amount of heat associated with changing the temperature of a subcooled liquid or a superheated vapor is different from the phase change value and will also vary a bit with temperature and pressure.

The steam table above is focused on water at saturation.   There are other tables that document the properties for water that is superheated or subcooled.
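
If you would rather explore this numerically than interpolate in a printed table, here is a minimal sketch that reproduces those two latent heat values.  It assumes the open-source CoolProp library as a stand-in for the REFPROP tool mentioned earlier, so you would need to install it first.

```python
from CoolProp.CoolProp import PropsSI

PSIA_TO_PA = 6894.757
J_PER_KG_TO_BTU_PER_LB = 1 / 2326.0   # 1 Btu/lb = 2,326 J/kg

def latent_heat_btu_per_lb(psia):
    """Latent heat of vaporization of water at a given saturation pressure."""
    p_pa = psia * PSIA_TO_PA
    h_vapor = PropsSI('H', 'P', p_pa, 'Q', 1, 'Water')    # saturated vapor enthalpy, J/kg
    h_liquid = PropsSI('H', 'P', p_pa, 'Q', 0, 'Water')   # saturated liquid enthalpy, J/kg
    return (h_vapor - h_liquid) * J_PER_KG_TO_BTU_PER_LB

print(latent_heat_btu_per_lb(14.7))   # about 970 Btu/lb at atmospheric pressure
print(latent_heat_btu_per_lb(75.0))   # about 905 Btu/lb at roughly 60 psig
```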

<Return to Contents>

Thermodynamic Diagrams

All of this can be quite complex to wrap your head around.  But a picture can be worth a thousand words, and in the context of our discussion, a thermodynamic diagram can be worth a thousand words.   Using one, you can plot a process and read all of the thermodynamic properties of water (or other substances) directly from the diagram.  And the process plot gives you a “visual” on what is going on.  

Psychrometric charts are a form of thermodynamic diagram that HVAC engineers use to assess an HVAC process. 

image_thumb11

Skew T log P diagrams are used by meteorologists to understand the atmosphere.

image_thumb51

To understand what happens to a substance as it goes through a process, encountering various conditions and states, we can use pressure-enthalpy (p-h) diagrams (what follows uses water as an example) …

image_thumb71

… temperature entropy (t-s) diagrams …

image_thumb131

… and enthalpy-entropy (h-s) diagrams (a.k.a. Mollier diagrams) ….

image_thumb14

These diagrams are extremely intimidating. 

But if you can stay calm and continue to breathe normally, they can be quite useful, because if you can plot a process on them, you can read all of the properties for the various states directly from the chart.  When you compare it to the other options, like playing with the equations of state, which can look like this …

Equations-of-State-for-Air_thumb1

…   or working through multiple tables like the one pictured below and interpolating values …

Keenan-and-Keyes-Table_thumb1

… they can become quite attractive and you may find yourself inspired to learn how to use them.

<Return to Contents>

The Spreadsheet Behind the Diagrams

If you are really curious about the diagrams above, you can find the spreadsheet behind them at this link.  Personally, I learned a lot by developing them.  And now that I have them, I can plot processes on them pretty precisely, which lends itself to using a graphical solution to solve and visualize complex thermodynamic processes.

<Return to Contents>

Focusing on p-h Diagrams

P-h diagrams are a very common way to look at thermodynamic processes like refrigeration cycles.

image_thumb16

They can give you a “visual” on a complex process and make it less intimidating for math phobic folks like me.  If you want an example of how useful a diagram like the one above is, take a look at this engineering application guide from Sporlan.  

I don’t want to get too far afield here, but the point is that diagrams like these can make the analysis of cycles much easier to accomplish once you learn to work with them.  There was a point in my career where I was somewhat terrified of a psych chart.  But now, it is my “go to” tool for understanding air handling system processes.  Similarly, I use the various thermodynamic diagrams I illustrated above to help me understand different HVAC and building system processes.

<Return to Contents>

Applying the p-h Diagram For Water and Steam

To gain a deeper understanding of the amount of heat represented by a condensed pound of steam, I’m going to plot out a pressure reducing process on a p-h diagram.   I could plot it on any of the diagrams, but I chose the p-h diagram because we want to demonstrate what happens as steam is throttled to reduce its pressure, and a throttling process can be considered a constant enthalpy process.  So, the two things we are going to work with are represented by the primary axes of the chart.

Let’s look at what happens if the utility serving the facility we are considering is delivering saturated steam to it from their high pressure system at 120 psig.  And let’s assume:

  • The facility uses a pressure reducing valve to drop the pressure to 12 psig to serve an insulated pipe header that delivers the lower pressure steam to a heat exchanger, and
  • That the heat exchanger condenses the steam to make 180°F hot water, which is then distributed to the various loads in the facility, and
  • That the pressure reducing valve, heat exchanger, and its control valve are all in close proximity to each other so that there is no meaningful pressure drop between the pressure reducing valve and control valve nor is there any meaningful heat loss through the insulation between those points, and
  • That the design supply water temperature to the loads is 180°F and the heat exchanger was selected for a 20°F temperature rise on the water side using saturated steam at atmospheric pressure (0 psig, 14.7 psia), and
  • As a result, the condensate leaving the heat exchanger is at 212°F, and
  • That the condensate is discharged to a system that is vented to atmospheric pressure.

The process is plotted out on the p-h diagram below.

image_thumb1011

Plotting the Initial Condition

The initial condition is on the saturation line at the delivery pressure of 120 psig or 134.7 psia.  Knowing that the steam is saturated (red saturated vapor curve) at a specific pressure (value on the vertical axis) allows us to plot the entering condition on the chart, and we can read the enthalpy of 1,193 Btu/lb at this condition from the p-h diagram.

Plotting the Condition Entering the Control Valve

The condition entering the control valve represents the result of the throttling process associated with the pressure reducing valve.   Throttling processes are constant enthalpy processes, so knowing that, along with the leaving condition that the pressure reducing valve is controlling for (12 psig, 26.7 psia), we can plot this point on our chart.

Note that we assumed there was no meaningful pressure drop or heat loss in the piping header due to its short length.   Had there been a meaningful pressure drop and thermal loss in the piping system, that would have shifted the entering control valve point down and to the left slightly from where we plotted it.  

Plotting the Condition Entering the Heat Exchanger

The entering condition in the heat exchanger represents the throttling process associated with the control valve, which was selected based on an entering steam pressure of 12 psig and a pressure in the heat exchanger of 0 psig.   This results in an initial condition in the heat exchanger that is at the same enthalpy as the control valve entering condition (because throttling processes occur at constant enthalpy) but at the pressure used to select the heat exchanger (0 psig, 14.7 psia).  Thus, we can plot this point on the chart based on these two parameters.

Note that the steam entering the heat exchanger is superheated as a result of the two throttling processes in the delivery chain.  As a result, it has a bit more energy content than it would if it was saturated steam at atmospheric pressure.

Plotting the Leaving Condition

Because the heat exchanger was selected to deliver the design performance requirement using steam at atmospheric pressure, the condensate coming off of the process will be at atmospheric pressure and 212°F, the saturation temperature associated with atmospheric pressure.  This is also the condition in the condensate return main.  As a result, we can plot this point on the chart, which allows us to read the enthalpy of the  condensed steam leaving the process.

<Return to Contents>

Enthalpy Change = Energy Change

If we know the enthalpy change between two conditions, then we know the energy change.   In this case, the change in enthalpy was from 1,193 Btu/lb to 181 Btu/lb, or 1,012 Btu/lb.
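
If you would rather check those numbers with a property library than read them off the chart, here is a minimal sketch of the same process, again assuming the open-source CoolProp library.  The results land within a Btu or so of the values read from the diagram.

```python
from CoolProp.CoolProp import PropsSI

PSIA_TO_PA = 6894.757
J_PER_KG_TO_BTU_PER_LB = 1 / 2326.0

# Saturated steam delivered at 120 psig (134.7 psia)
h_in = PropsSI('H', 'P', 134.7 * PSIA_TO_PA, 'Q', 1, 'Water') * J_PER_KG_TO_BTU_PER_LB

# The two throttling steps (pressure reducing valve, then control valve) are constant
# enthalpy processes, so the slightly superheated steam entering the heat exchanger at
# atmospheric pressure still carries h_in.

# Condensate leaves the heat exchanger as saturated liquid at 212°F / 14.7 psia
h_out = PropsSI('H', 'P', 14.7 * PSIA_TO_PA, 'Q', 0, 'Water') * J_PER_KG_TO_BTU_PER_LB

print(h_in)           # about 1,193 Btu/lb
print(h_out)          # about 180 Btu/lb (the chart read is about 181)
print(h_in - h_out)   # about 1,013 Btu/lb, within a Btu of the 1,012 read from the diagram
```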

Good News and Bad News

Taking a closer look at the specifics of the process revealed that for every pound of steam that was condensed in this scenario, we received 42 more Btu’s than our rule of thumb would have suggested or about 4% more.  In the context of the Btu’s received for your dollar, that sounds like a good thing.  In other words, the pounds of steam you purchased delivered more Btus than the rule of thumb suggested.

But in the context of a benchmark, it means that you actually used more energy than the rule of thumb suggested.  Thus, in this case, if we were to calculate an EUI based on our more specific assessment of how the steam was actually used in the facility, the EUI will be higher and the benchmark score will be lower.

<Return to Contents>

ENERGYSTAR®, Conversion Factors, and Rules of Thumb

In an effort to try to create consistency, ENERGYSTAR® publishes conversion factors for various energy sources including district steam.

image_thumb5

If I understand it correctly (I don’t actually do a lot of ENERGYSTAR® benchmarks), when you are entering your data into ENERGYSTAR®, an “Add Meter Wizard” will guide you to the 1,194 Btu number for a meter that was reporting KLbs (thousands of pounds) of steam. 

As you can see, this would result in a consumption value that is higher than the rule of thumb we developed based on an assumption of condensing steam at atmospheric pressure (1,194 vs. 970.8 Btu/lb) as well as the rule of thumb sometimes used by old engineers like myself (1,194 vs. 1,000 Btu/lb).

It is also higher than reality for the situation we explored in the p-h diagram (1,194 vs. 1,012 Btu/lb).  So if you were to benchmark in ENERGYSTAR® using their metrics, it would seem like they would over-state the energy use of your facility if it was a facility where the steam delivery followed the process we traced out.

That means  your EUI would be higher and your benchmark would be lower than it would be if you could insert your actual energy use in terms of the Btus released by the condensed steam vs. the thousands of pounds of steam you used into the ENERGYSTAR® database. 

<Return to Contents>

Benchmarks are Approximations, not Exactamates[iv]

The preceding may make you want to cry “Foul.”  After all, you are trying to do a good job in terms of running your facility efficiently, and it seems unfair to have your score penalized by an arbitrary conversion factor.

But you need to remember that benchmarks are intended to provide a broad-brush comparison of similar facilities in similar climates serving similar occupancies with similar use patterns.  There are a lot of variables at play.  For example, the heat content of gas and other fuels will vary with the source and ENERGYSTAR® applies arbitrary conversion factors to them just like it does to district steam.

The endnotes in the referenced ENERGYSTAR® conversion factors document indicate the source for the conversion factors, with the International District Energy Association being the source for the district steam energy conversion factor.

<Return to Contents>

Why so High?

If you study the steam table, you may find yourself wondering why the International District Energy Association recommended a conversion factor of 1,194 Btu/lb.  After all, that appears to be the latent heat of vaporization associated with an extremely low saturation temperature and pressure.

That is because there is more than the latent heat of vaporization to be recovered.   For instance, in the example I plotted out on the p-h diagram, the condensate left the process at 212°F.  There are quite a few things that you could do with a stream of water at that temperature.   For example, you could run it through a heat exchanger to recover sensible energy and preheat or even heat domestic hot water.

So, in a way, the answer to a modified version of the original question, perhaps along the lines of …

How can I go about capturing the energy that the  ENERGYSTAR® conversion factor for district steam metered as pounds of steam implies is available?

is …

It depends on what you do with the steam and condensate you receive from the utility.

<Return to Contents>

The Basis of the ENERGYSTAR® Conversion Factor

If you dig around a bit, you can discover the basis behind the ENERGYSTAR® conversion factor.  I found it in a footnote in a technical reference they provide about Greenhouse Gas Emissions.

image_thumb1311

What that is saying is that the ENERGYSTAR® conversion factor is equal to the enthalpy of saturated steam at 150 psig.   It is important to realize that this is different from saying it is equal to the latent heat of vaporization of 150 psig steam, which is the enthalpy change associated with condensing saturated vapor to saturated liquid, or about 858 Btu/lb.

In our field, we are typically interested in changes in enthalpy through a process rather than the specific enthalpy at a given state.  And, because enthalpy cannot be measured directly, we state the values of enthalpy for a substance referenced to a particular state.  For instance, the specific enthalpy of water or steam is referenced to saturated liquid water at the triple point, 0.01°C.

In the context of this discussion, that means that if we really wanted to capture all of the energy associated with the ENERGYSTAR® conversion factor for district steam metered as pounds, then we would need to not only condense the steam we receive, we would need to receive the steam at 150 psig as saturated steam and we would need to cool the condensate to just above freezing.

<Return to Contents>

So, the ENERGYSTAR® Folks are Crazy

You may be thinking at this point that the ENERGYSTAR® folks are nuts.  After all, your local utility may not deliver steam at 150 psig, with the delivery pressure of 120 psig in the utility tariff we looked at being an example of that.

But if you compare the enthalpy of 120 psig steam with 150 psig steam, you will find that it is only about 3 Btu/lb different;  about a quarter of a percent.  So in the bigger picture, receiving steam at a lower delivery pressure would not make that much difference in the factor that you would use.
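
Here is a minimal sketch of that comparison, again assuming the open-source CoolProp library is standing in for a printed steam table.

```python
from CoolProp.CoolProp import PropsSI

PSIA_TO_PA = 6894.757
J_PER_KG_TO_BTU_PER_LB = 1 / 2326.0

def h_saturated_vapor(psig):
    """Enthalpy of saturated steam at a given gauge pressure, Btu/lb."""
    p_pa = (psig + 14.7) * PSIA_TO_PA
    return PropsSI('H', 'P', p_pa, 'Q', 1, 'Water') * J_PER_KG_TO_BTU_PER_LB

print(h_saturated_vapor(150) - h_saturated_vapor(120))   # about 3 Btu/lb; roughly a quarter of a percent
```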

You may think, O.K., I’ll buy that, but it just does not seem practical to cool the condensate to just above freezing in a way that delivers anything useful to the building.  In other words, to provide heat, the source (in this case the condensate) needs to be warmer than what you are trying to heat.

Given that we are trying to maintain space temperatures in the mid 60°F to mid 70°F range in most of our buildings, a fluid stream that is at or below that temperature range could not be used directly to heat.  Some sort of heat pump (and energy input) would be required to move the heat from the condensate to the place that needed it.

Actually, the ENERGYSTAR® Folks are Not Crazy

If you take the time to think it through, you will realize that the ENERGYSTAR® conversion factor is simply forcing us to take a hard look at what it means in terms of energy and resources if our facility uses steam as an energy source. 

There is a subtlety associated with how most (not all) commercial district steam systems work that we need to consider.  You get a clue about it if you closely read the tariff for the facility we have been discussing (note my highlight).

image_thumb311

What that is saying is that the condensate (condensed steam) delivered from the utility will not go back to the utility.  Rather, it will go to the sewer.   That means that all of the energy associated with the hot condensate is literally dumped down the drain and eventually dissipated to the environment without serving any useful purpose in the building that consumed the steam.  In other words, both energy and water, two different resources, are discarded.

In fact, depending on the temperature of the condensate and the requirements of the local plumbing code and the material in your sanitary piping system, you may actually have to cool the condensate before discharging it.  Typically this is done using domestic cold water (directly or via a heat exchanger) which is then dumped to the sewer along with the cooled condensate.

Bottom line, if you received district steam at 150 psig, saturated, you actually did receive 1,194 Btus with every pound of steam (and a pound of water for every pound of steam).  The challenge is to understand how to capture as many of those Btus as possible before discarding the condensed fluid stream to the sewer.  Because whatever you don’t recover really is wasted energy (and water).

So, painful as it may be for this type of system, the 1,194 Btu/lb factor allows your steam consumption to be legitimately and fairly compared to the other types of steam systems I will describe in the next blog post.

David-Signature1_thumb_thumb_thumb

PowerPoint-Generated-White_thumb2_th
David Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/

[i]     A district steam system is a network of piping served by a central plant that provides steam to a large area like the downtown area of a city.

[ii]   The blue line is data from a very low mass thermocouple so that it would react quickly because I wanted to capture the very rapid increase in steam temperature that I anticipated once all of the liquid water had been converted to steam. (For more on how sensor mass can impact the data it produces, see this blog post). 

I had the logger set for a very rapid sampling rate and did not have enough memory to allow it to log data for the entire time it took to boil off all of the water.  So I did not start the logger associated with that sensor until nearly all of the water was evaporated, which is why the blue line only shows up towards the end of the graph.

[iii]  Entropy is a bit more complicated to grasp;  I almost flunked thermo because I struggled with it so much.   I think that is not unusual, and I often take comfort in something John von Neumann said (emphasis is mine):

You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one really knows what entropy really is, so in a debate you will always have the advantage.

The way I have come to think of it is that it’s basically nature’s way of saying:

There’s no such thing as a free lunch

When we turned on the burner to boil the water, energy flowed from it to the water because the burner was hotter than the water.   But, without some sort of process that involves doing work, we cannot get the energy that flowed into the water back out in the form of useful work or electricity.  Heat does not flow from cold to hot, only from hot to cold.

If you want a bit more detail about all of this, you may want to review a string of blog posts I did that look at saturated multiphase systems.  The experiment I mention and use to illustrate what happens when water boils is part of one of the posts.

You may also find the chapters in Roy Dossat’s book Principles of Refrigeration titled Internal Properties of Matter and Properties of Vapors to be insightful.  He writes about thermodynamic concepts in a very understandable way.  When I found the book, early in my career, my first thought was, where were you when I took thermodynamics, which I almost flunked because of my struggle with the math and concepts initially.

[iv]   When I worked for Murphy Company, Mechanical Contractors, more than once I heard Pat Murphy, our chief estimator, mentor some of the younger estimators, saying

we were doing estimates, not exactamates.  

When I first heard him say it, I felt it was really insightful.  And I also think the same is true for a benchmark.

Posted in Boilers, Hot Water Systems, and Steam Systems, HVAC Calculations, HVAC Fundamentals, Operations and Maintenance, Steam Systems

Lags, the Two-Thirds Rule, and the Big Bang, Part 5

In Part 4 of this series, we explored the complex transportation lag that was the key challenge in terms of using a remote duct pressure sensor to control the large VAV air handling system in the case study building. In this post I will show you the solution that grew out of that understanding and discuss a few reasons why not every VAV system will exhibit this behavior. I’ll close out the post with what I have found to be a very useful and  interesting insight that can be gleaned from the apparent dead time that you observe when you upset a control process in a system that is in operation.

Not Every System Will React This Way (Thank Goodness) Reprise

In the first article, I mentioned that this issue obviously does not happen in every VAV system out there. I think one of the main reasons is that many systems are small enough that the transportation dynamic I focused on in the previous article is not significant enough to cause a problem. But I think there are also some other reasons that people may not run into it very often, or maybe have never run into it.

You Learn A Lot the First Time You Start Up a System

My experience at the MCI building occurred during the very first start-up of the system. At the time, I was in the dual role of control system designer and start-up technician. There was no formal commissioning process so, my start-up activities were the commissioning process.

On a current project, depending on the exact design of the commissioning plan, it is possible that the official commissioning provider would not be on site for the very first start-up of the system.  They would only come on site after the contractor had taken the system through the start-up process and identified and corrected any obvious deficiencies.

You could say that Ray (the service fitter I was working with) and I discovered an obvious deficiency when we blew up the duct, and then corrected it. Meaning that had there been a commissioning provider, when they came into the process, they may have found some issues, but they would not have observed the system blowing up a duct or having nuisance static safety trips. That could create the impression that the lag issue did not exist, simply because it had been addressed.

But, evidence in the field, like:

  • Ductwork with wrinkles in it, or
  • Ductwork with extra reinforcement angles, or
  • An obvious patch in the duct insulation, or
  • Pressure relief doors that have been added by change-order

… could suggest that just because the system seems to start smoothly now, that may not have always been the case.

Variable Speed Drives are Very Common

When the MCI Building came online, variable speed drives were not an option for most systems, even large ones, because of the cost and size. That is not the case for a modern project.

As a result, it would be unusual for a VAV system these days to not have a variable speed drive of some sort.  So, when faced with nuisance safety trips (or worse), it is common practice to address the problem by using the acceleration and deceleration settings in the drive to slow the system down.  This approach is like the one I tried when I added restrictors to the pneumatic lines feeding the actuators to slow them down.

As you may recall, I concluded that in doing that, I had traded one problem (safety trips and blown ducts) for a different problem (an unresponsive system that could not deal with a large step change). I believe that improperly applied acceleration and deceleration ramps are likely doing the same thing. But since an unresponsive system may appear to operate reasonably well unless you analyze the trends, this may not be generally recognized. More on this later in the article.

Solving the Problem

Back in the MCI Building days, with my significant emotional event fresh in my mind, I went about re-reading what David St. Clair had written about lags in Controller Tuning and Control Loop Performance.  As you may recall from the first post in the series, I had totally missed his point on the topic of lags when I read his book the first time, despite him having it in all capitals, in a large shaded box at the end of the chapter.

All About the Lags st

Truth be told, it wasn’t so much that I missed the point.  Rather, I simply did not understand the concept at all.

But what became clear almost immediately as I re-read the section on lags (due to my significant emotional event) was that my problem was the result of lags in the system and that I needed a control process that would be impervious to them.  David’s chapter on cascaded control suggested a strategy that would offer a solution.

Modifying the Control-System Design

As you may recall, our initial solution to the problem was to move the remote sensor back to the fan discharge and control for that pressure. In doing that, we circumvented two major lags: the sensor lag and the transportation lag.

But after re-reading David St. Clair’s primer, I realized that if:

  • We added a remote sensor, and
  • Added a second controller for it to work with, and
  • Created a remote duct static pressure control process,

… then we could use the output of that process to adjust (or reset) the discharge static pressure control process set point. In other words, the output of the remote process would cascade into the discharge pressure control process to optimize its set point. The result was a control system configured as illustrated below.

Pneumatic Control v2

Bear in mind that there probably are several other design solutions that could have worked, especially in this modern era of fully programmable DDC systems.

Developing a Reset Strategy

To implement the solution, we needed to come up with a relationship that defined how the discharge-static-pressure set point would be adjusted as pressure at the remote point in the duct increased above the design target when the terminal units closed their dampers in response to decreasing load. This “reset schedule” is graphically depicted in the chart in the illustration above.

Pneumatic control system operating characteristics generally are defined by a 3 to 15 psi span. As a result, to fully define our reset schedule, we needed to specify the discharge-static-pressure set points associated with outputs of 3 psig and 15 psig from our remote static-pressure-control process. Once we identified those outputs, we could set them up in the controller by making physical adjustments with knobs and dials.

Knobs and Dials

In current technology DDC systems, all of the parameters I will discuss below are set up via the software in the system, either using sliders and knobs in a graphic screen or by setting the value of a point in the system via keyboard commands.  But in the olden days, they were set up using the knobs, dials, and sliders that were provided on the controller.  The controllers in the image below illustrate this and are similar to the controllers we were working with at the MCI building.

RC-195

For the MCC Powers RC-195 controllers illustrated above, the authority adjustment slide is what sets up the reset schedule.  If you want to know more about the details, you will find the instruction manual for it on the pneumatic control resources page of our commissioning resources website.

Controller Action—The General Case

As a first step in figuring out our strategy, we had to determine the “action” of our controller:

DA Lrg

Direct Action

With a direct-acting controller, an increase in the difference between the set point and the process variable (often called error) will cause an increase in control-process output.  A decrease in the difference between the set point and the process variable will cause a decrease in the control-process output.

RA Lrg

Reverse Action

With a reverse-acting controller, an increase in the difference between the set point and the process variable will cause a decrease in control-process output.  A decrease in the difference between the set point and the process variable will cause an increase in the control-process output.

Controller Action Bottom Line

The bottom-line regarding controller action is that a designer determines the failure mode for the final control element (in the case of the MCI building, the inlet guide vanes) as a first step. That information combined with how the system will react when the final control element is moved in response to an increase or decrease in the process variable (in this case, duct static pressure) determines the controller action.

Controller Action for the MCI Building Static-Control Processes

For the MCI Building, because we had selected the IGV actuator to fail closed on a loss of air pressure, a reverse acting discharge static pressure controller was required. In other words,  if discharge static pressure dropped below set point, we needed the output pressure from the controller to increase, causing the inlet guide vanes to open.  If discharge static pressure increased above set point, we needed the output pressure from the controller to decrease, causing the inlet guide vanes to close.

A reverse-acting process allowed us to start the system with the inlet guide vanes closed and the fan at minimum capacity, meaning the fan started unloaded and the potential for immediate over pressurization upon system startup was minimized.

Interlocking the Control Process with Fan Operation

To ensure that the system started this way, we provided a three-way air valve (often called an electro-pneumatic switch or EP switch), shown in the illustration. The equivalent in a DDC system is the proof-of-operation interlock.

When de-energized, the three-way valve blocked the control signal and vented the pressure in the actuator to atmosphere.  When energized, it closed the vent and connected the control signal to the output serving the actuator, allowing the control system to modulate the inlet guide vanes through the positioning relay. The three-way valve was wired in parallel with the fan-motor starter so that, when the starter was energized, the valve was energized.  

This was a fairly common approach for doing this sort of interlock at the time.  But there is an assumption behind it, that being that, if the motor is spinning, air is moving.  That may or may not be a good assumption for several reasons;  for instance, if the belts had broken, the motor would in fact be spinning but there would be no air moving. But to keep from making this even longer, I will set that discussion aside for now.

Reset-Line Points

We knew we needed 3 in. w.c. of pressure at the discharge of the fan to deliver 0.75 in. w.c. of pressure at the remote location on a design day. That requirement established one point on our straight-line reset schedule.

More specifically, we adjusted the knobs and dials on the controller so that, when the signal from the remote static-pressure controller was 15 psig, the set point of the controller was 3 in. w.c. In a DDC system, this would be accomplished by relationships set up in the controlling logic rather than by physical adjustments to a piece of hardware.

To determine the other point on our reset schedule, we considered what would happen on a weekend with only workers on the second floor in the building. Under those conditions, the system would run and the terminal units on the floor with people would follow the load. The terminal units on all the other floors would probably be at or near minimum flow depending on the solar load and thermostat set points.

In the worst-case scenario, we would need to deliver the design flow for the second floor and the minimum flow for the other floors. The calculated pressure drop to the remote-sensor location on the second floor at this flow condition was approximately 0.25 in. w.c. because, at this relatively low flow condition compared to the design flow rate, the distribution duct system was quite oversized.

Adding this pressure drop to the 0.75 in. w.c. required to deliver design air flow from the remote sensor location to the zones on the second floor told us that we would need to deliver 1.0 in. w.c. at the supply fan discharge (0.25 in. w.c. + 0.75 in. w.c.) under this low load condition.  This value became the other point on the reset schedule line.

More specifically, we adjusted the controller so that, when the signal from the remote static-pressure controller was 3 psig, the set point of the controller was 1 in. w.c.  We would fine-tune both reset values based on operating experience during commissioning and the first year of operation.

Considering an Extreme Condition

Once we had made our adjustments, the remote sensor would adjust the discharge set point linearly over the range established for the reset schedule. But, because the output of the remote controller could drop as low as 0 psig and rise to whatever the pneumatic-system supply pressure was (typically 20 to 25 psig), in day-to-day operation, the set point of the controller could potentially be adjusted beyond the bounds of the reset schedule based on the nominal 3 to 15 psig span that was the de facto standard in the industry.

A set point lower than 1.0 in. w.c. would not be cause for much concern. A set point above the 3.0 in. w.c. maximum target, however, could cause nuisance safety trips or worse.

For example, at startup, when duct pressure at the remote location was 0.0 in. w.c., the reverse action of the remote static-pressure controller would cause the controller’s output to drive toward its maximum value. Depending on the throttling range/proportional-band setting of the controller, the output under this condition could be the maximum available main air pressure.

If you extrapolate the straight line associated with the reset schedule to 20 psig, you will discover that the remote controller would have commanded a set point of about 3.8 in. w.c. for the fan discharge pressure controller.   If the fan were to achieve this value, it would have tripped the high-static-pressure limit. 

To prevent that problem, we added a high-limit relay, which limited the signal to the reset input of the discharge controller at 15 psig even if the output from the remote controller drove above that value.   Thus, we limited the maximum reset command to the discharge controller to a set point of 3 in. w.c. In a DDC system, this would be achieved with the control logic rather than by a physical piece of hardware.
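
For readers who are more accustomed to DDC logic than pneumatic hardware, here is a minimal sketch of the reset schedule and high limit described above, with the signal span and set points taken from the preceding discussion.

```python
LOW_SIGNAL, HIGH_SIGNAL = 3.0, 15.0       # psig span of the remote controller output
LOW_SETPOINT, HIGH_SETPOINT = 1.0, 3.0    # in. w.c. discharge static pressure set points

def discharge_setpoint(remote_output_psig):
    """Reset the discharge static pressure set point from the remote controller output."""
    # The high-limit relay:  never pass more than 15 psig to the reset input.
    signal = min(remote_output_psig, HIGH_SIGNAL)
    fraction = (signal - LOW_SIGNAL) / (HIGH_SIGNAL - LOW_SIGNAL)
    return LOW_SETPOINT + fraction * (HIGH_SETPOINT - LOW_SETPOINT)

print(discharge_setpoint(15))   # 3.0 in. w.c. at the design condition
print(discharge_setpoint(3))    # 1.0 in. w.c. at the low load condition
print(discharge_setpoint(20))   # still 3.0 in. w.c.;  without the limit it would be about 3.8
```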

Reset Strategy in Operation

The reset strategy allowed us to have our proverbial cake and eat it too, meaning the control process would never allow fan-discharge static pressure to exceed the 3.0-in.-w.c. design target because it was controlling for discharge static pressure directly and the system hardware would allow only a maximum set point of that magnitude, even at startup, when the pressure at the remote point in the system was 0.0 in. w.c.

If, as the system came up to speed, delivering 3.0 in. w.c. at the discharge of the fan created more pressure than the 0.75 in. w.c. we targeted at the remote location, then the output of the remote controller would drop.

This would lower the set point of the discharge controller, causing the inlet guide vanes to close and deliver less air, which would lower the system pressure. If the terminal units opened their dampers to meet an increase in load, the reduction in pressure at the remote location would cause the set point of the control process to again be adjusted upward, but never above the design value.

One Final Thought About Lags

What follows is one of the most useful lessons gleaned from my experience at the MCI building (aside from how to not blow up ducts).

Comparing the Response of a Process to an Upset with Different Levels of Tuning Implemented

The figure below illustrates the response of a system with a proportional-only (P) control process to an upset[i] as the proportional band is reduced gradually from:

  1. No control (manual, top black line).
  2. Loosely tuned control—a very large proportional band (red line).
  3. Tightly tuned control—the proportional band is as tight as it can be without the risk of hunting (blue line).
  4. Near-resonance, or hunting (gray line).
  5. Over tuned/approaching instability—the proportional band is too narrow, given the characteristics of the system (bottom wavy black line).

Response Tune @

The system the controller is applied to is fixed in terms of lags, dead time, system gain, and other factors that dictate how the process will respond.

When you tune a control loop, you start with a very large proportional band (the red line) and sneak up on the gray line, which is the point at which the system is starting to go unstable.  Then you back off a bit (back towards the red line) so you run on the safe side of stable (the dark blue line).

The reason you sneak up on the gray line is that it reveals the natural period for the control process and system. You can use that parameter to come up with a pretty good set of initial tuning parameters for the control loop.
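
One common way to do that is with the classic Ziegler-Nichols ultimate-sensitivity rules from the 1941 paper I reference a bit later in this post.  Here is a minimal sketch; the gain and period in the example are assumed numbers, not values from the MCI building.

```python
def ziegler_nichols_pi(ultimate_gain, natural_period):
    """Initial PI settings from the gain and natural period observed at the hunting point."""
    kp = 0.45 * ultimate_gain
    reset_time = natural_period / 1.2    # integral (reset) time, in the same units as the period
    return kp, reset_time

def ziegler_nichols_pid(ultimate_gain, natural_period):
    """Initial PID settings from the gain and natural period observed at the hunting point."""
    kp = 0.6 * ultimate_gain
    reset_time = natural_period / 2.0
    rate_time = natural_period / 8.0     # derivative (rate) time
    return kp, reset_time, rate_time

# Example with assumed numbers:  the loop just starts to hunt at a gain of 4 and the
# oscillation has a 120 second natural period (which, per the discussion below, would
# imply an apparent dead time of roughly 30 seconds).
print(ziegler_nichols_pi(4, 120))    # (1.8, 100.0)
print(ziegler_nichols_pid(4, 120))   # (2.4, 60.0, 15.0)
```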

In the illustration, the upset occurred at t=0 on the x axis.  Notice how there is a period of time after the upset during which nothing seems to happen based on the response of the system (the y axis on both charts).  The purple line with an arrow at both ends illustrates this, and it is called the “apparent dead time” for the process.  It represents the sum of all of the lags in the system.

My purpose in bringing that up is to focus your attention on three facts:

  • The natural period for the near resonance control loop (the grey line) is approximately equal to four times the apparent dead time (compare the light blue double arrow head line with the red, orange, green and dark blue double arrow head lines)
  • No matter how loosely or tightly tuned a control process is, the response for about the first half of the natural period (about twice the apparent dead time) will be nearly identical no matter if the control process is over tuned, under tuned or non-existent (manual control); contrast the 5 different response curves in the enlarged circle for half the natural period, which is indicated by the red plus orange arrows.
  • The tightly tuned control process (blue line) is stable at about the end of twice the natural period.

Once you recognize and embrace these facts, they are very useful in the context of what we are trying to do when we tune a P, PI, or PID control loop.

The Quarter Decay Ratio

Technically speaking, for most of our systems, our goal is to achieve a quarter-decay-ratio response to a process upset, as illustrated below.

Quarter Decay 0

“Quarter decay ratio” is a fancy way of saying that the peak of the spike during the second cycle of the response will be one quarter of the peak during the first cycle of the response.

It has its roots in the work John Ziegler and Nathan Nichols published in Optimum Settings for Automatic Controllers in 1941.  If you would like to read it, you will find a copy of it in part 1 of the Control Engineering Reference Guide to PID.  There is also an interview in there with John Ziegler, which is kind of cool.

Twice the Apparent Dead Time;  A Very Important Parameter

If you go out and start playing with loop tuning, you will discover that there are multiple versions of this response pattern or something very close to it, depending on the exact combination of proportional, integral and derivative gain you set up for the process.  In fact, you could probably spend hours changing the settings and observing the different patterns.

I speak from experience because when I first tried tuning loops, I did just that.  But at one point, I realized a couple of things, specifically:

If the first spike doesn’t trip a safety or, worse yet, break something (for instance, blow up a duct), and

If the process settles within a reasonable time frame for the application you are working with

… then you probably have a winner, at least for the time being.[ii] 

Quarter Decay

But if you keep tripping safeties (or worse), and that is happening within less than twice the apparent dead time after you observe the system starting to respond, then you are going to need to eliminate some lags.  That is what the second bullet point in the opening part of this section was about.

Similarly, if you have managed to find a setting that does not cause safety trips (or worse) but the system is still trying to find itself hours (or more than two natural periods) after the upset, then you are going to need to eliminate some lags.

To quote David St. Clair:

It All Depends On The Lags

Eliminating Lags

The table below contrasts lags that are relatively easy and relatively difficult to eliminate.

Lags Table

Eliminating lags to solve a startup/loop-tuning problem can be counterintuitive.

For instance, when I was having trouble getting the MCI Building VAV system online, it seemed things were happening too fast at the inlet guide vanes;  they were opening up way too quickly.  So I slowed them down by adding restrictors. In reality, things were not happening fast enough in terms of the control system realizing that the fan had started but that it would be some time before there was meaningful pressure at the remote sensor location.

When I added the restrictors, I was able to get the fan running without tripping the safety, but not able to achieve my set point in a reasonable time or respond to step changes in the system (zone level scheduling or a set point change for instance), so I had simply traded problems.

Ramps vs. Acceleration and Deceleration Settings

In modern times, it can be tempting to try to solve a startup problem like the one I experienced using the acceleration and deceleration settings on a VSD to slow the drive’s reaction to changes commanded by the control system. And, while you may be able to resolve the over-pressurization problem in this manner, you will have added a lag to the system. That means that for even a modest upset or step change in the system, you will have limited how quickly the control process can react to it to recover the set point and resume steady state operation.

Ramp logic is a way around this.  A true ramp limits how quickly the control process output can change until the process variable and set point are inside a window established during startup and commissioning. Once the process variable is inside the window, the limiting function is eliminated from the control process, meaning the control process is unconstrained in terms of how quickly it can make a change.

Many VFDs have a ramp function built into them.  But just to make things interesting, some manufacturers call their acceleration and deceleration settings “ramps.”  Having said that, if the drive does not have the function built into it, you can simply implement it in the control logic that is managing the drive.
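
Here is a minimal sketch of what that ramp logic might look like if you implemented it yourself in the control logic managing the drive.  The window and rate limit values are assumed examples; in a real system they would be established during startup and commissioning.

```python
class StartupRamp:
    """Limit how fast the speed command can change until the process variable first
    comes inside a window around set point, then get out of the way."""

    def __init__(self, window=0.25, max_step=2.0):
        self.window = window        # in. w.c.; assumed example value
        self.max_step = max_step    # % speed change per control interval; assumed example value
        self.released = False

    def apply(self, requested_speed, last_speed, setpoint, process_variable):
        if not self.released and abs(setpoint - process_variable) <= self.window:
            self.released = True    # inside the window; stop limiting from here on
        if self.released:
            return requested_speed  # the loop is free to react as quickly as it needs to
        change = requested_speed - last_speed
        change = max(-self.max_step, min(self.max_step, change))
        return last_speed + change
```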

Conclusion

While I illustrated the solution to the MCI building problem using the pneumatic control technology we were working with at the time, many of the issues the solution addressed are independent of the control technology because they were about the physics of the system that was being controlled. Thus, they are somewhat timeless in nature and perhaps things you will find useful in the modern world with its DDC technology.  Maybe they are even something you can pass on in your role as mentor, just as the MCI building, David St. Clair, and Tom Lillie did for me.

David-Signature1_thumb1_thumb                                                        

PowerPoint-Generated-White_thumb2_th
David Sellers
Senior Engineer – Facility Dynamics Engineering                                Visit Our Commissioning Resources Website at http://www.av8rdas.com/

[i]     The term “upset” means a sudden change in the process;  something like a major set point change or a major load change.  Sometimes, the word “step change” is used as a synonym for “upset”.  Start-ups are an example of an event that introduces an upset into nearly every control loop in the system that is started up (and often into the systems that support it).

[ii]     I say for the time being because things that affect the lags in a system can change over time.  For instance, in a brand new system the day that you tune the discharge temperature control loop for the very first time may be a design cooling day.  

The system may (probably will) exhibit a totally different response pattern 6 months later on the design heating day since it will be using different heat transfer elements to deliver a similar discharge temperature.   And things will be different during the swing season when the economizer has a role in the process.

And after you finally have tweaked and fine tuned the loop over the course of the first year and found the perfect, year round solution, you may discover it no longer works two years down the road because wear in the linkage system changed the hysteresis or the coils are not as pristine as they were when they were new or the occupancy pattern in the building and related load profile has changed.

Bottom line, loop tuning, just like commissioning, is not a one time event.

Posted in Air Handling Systems, Controls, HVAC Fundamentals, Pneumatic Controls

Lags, the Two-Thirds Rule, and the Big Bang, Part 4

In the previous blog post,  we looked at common lags that you might encounter in building systems in the general case. In this post, we will look at the particularly complex transportation lag that I ran into in the MCI Building VAV system, which was the root cause behind my significant emotional event.

Some Housekeeping

Before getting into the post, I wanted to do a bit of housekeeping.  You may have noticed that all of the links that were previously on the right side of the blog home page under the “Categories” drop-down menu went away.   That is because all of them and more now exist on our Commissioning Resources website (the place you will go if you click on the little picture of the Pittsburgh skyline on the right side of the home page).

That said, let me know if there is something missing that you are looking for.  I will direct you to its new home or make sure it is available on the Commissioning Resources website if it is not already there.

Lags and the MCI Building VAV System

The VAV system in the MCI Building that is behind this case study had many of the lags described in the previous post. But thermal lags were not an issue since we were dealing with a pressure control process. What’s more, the linkage and valve-plug lags were in the form of the linkage system[i] and blade-rotation mechanism for the inlet guide vanes.

With my pneumatic pressure transmitter located on the second floor and the controller it served located on the roof, the sensor lag was fairly significant because of the long run of quarter-inch pneumatic tubing from the main air source in the control panel to the transmitter and then back up to the control panel: probably in the range of 300 feet or so each way.

In addition, the transportation lag was quite significant and complex and was something I had clearly not considered in my control system design. But it was probably the biggest contributor to the problem I experienced.

An Analogy

In trying to understand this phenomenon initially and then subsequently explain it over the years, I have developed an analogy that is based on pumping water to fill a series of interconnected tanks.

The first tank, which is directly served by the pump, fills three other tanks through lines of different lengths. The 3rd and 4th tanks have two-way valves that drain water back into a reservoir for recirculation to the pump.  

The sketch below illustrates the arrangement under steady-state conditions.

Tanks Start-up v1

Note that if you click on the image, an enlarged version of it will open up.  Clicking the back-arrow will bring you back to the post.  You can also right click on the image and select “Open image in new tab” as illustrated below.

Enlarge

Granted, water is incompressible and the air in the MCI building system was compressible. But bear with me;  in my experience, explaining this phenomenon using a water and pump analogy will get the basics of the phenomenon we are discussing established.  Having established that, we can then qualify it regarding the differences between air and water to fully explain what happened in the MCI building.  That lesson can then be applied to other large, complex distribution systems.

A Bit about Pump Physics

To understand the analogy, you need to understand how pumps work.  So, while I am not going to go into a full blown explanation of pump physics, I wanted to highlight a few things that will matter in terms of understanding how the pump will interact with the tank.  If you are comfortable with pump and system curves, then you may want to just jump on down to the next section (The MCI Building System Arrangement).[i]

To get you up to speed on the pump physics that matter for this analogy, I will use a simplified version of our diagram, limited to a reservoir, one tank with a pump moving water into it from the reservoir and two valves that let water out of it back into the reservoir.

Steady State Operation at Design Conditions

image

Under this condition, the pump delivers design flow to the tank and each of two control valves allows 50% of the design flow to return to the reservoir.  The depth of water in the tank creates the pressure required to move the design flow rate through the wide open control valves.  Thus, if the tank level is maintained at the level shown above, there will always be sufficient head to deliver design flow through either or both valves.

The total flow rate is the sum of the flow through the two control valves and the head delivered by the pump is the head required to lift the water over the top of the tank and the head required to overcome the resistance due to flow in the piping network.

As a result, for a fixed speed with a fixed impeller size, the pump will operate at a fixed point on the impeller line (the green line on the pump curve) associated with the design head and flow.   The system curve (the orange line) is a parabola that passes through the operating point (the red dot).  Its 0 gpm point is associated with the lift the pump sees; i.e. how much head or pressure it needs to create to lift water over the top of the tank and initiate flow.
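If it helps to see the numbers, here is a minimal sketch of that system curve in Python. The flow, head, and lift values are round numbers I made up for illustration, not values from the MCI project; the point is simply that the curve is the static lift plus a friction term that grows with the square of flow, and that it passes through the design operating point.

```python
# A minimal sketch (assumed round numbers, not MCI project values) of the
# system curve described above: static lift plus a friction term that grows
# with the square of flow, passing through the design operating point.

DESIGN_FLOW = 200.0   # gpm (assumed)
DESIGN_HEAD = 60.0    # ft of head at the design operating point (assumed)
STATIC_LIFT = 20.0    # ft just to get water over the top of the tank (assumed)

# Friction coefficient back-calculated from the design point
k = (DESIGN_HEAD - STATIC_LIFT) / DESIGN_FLOW ** 2

def system_head(flow_gpm):
    """Head the pump must produce at a given flow: lift plus friction."""
    return STATIC_LIFT + k * flow_gpm ** 2

for q in (0, 50, 100, 150, 200):
    print(f"{q:>3} gpm -> {system_head(q):5.1f} ft")
# 0 gpm prints the static lift; 200 gpm reproduces the design head of 60 ft.
```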

Note that from the perspective of the pump, it is serving a fixed system because there is nothing in the piping circuit that it serves directly that can move.  The control valves can move, but they are decoupled from the pump circuit by the air gap where the pump dumps water into the tank and the air gap between the outlet of the valves and the reservoir.

Steady State Operation at 50% Design Conditions

image

If we close one control valve but keep the other fully open so it delivers its design flow, we will have cut the flow in half since each valve was selected to deliver half of the total flow rate.   But since the pressure set by the water level is what drives flow through the valve, to deliver design flow, we still need to maintain the design water level in the tank, even though the flow leaving it has been reduced by 50%.

Since the depth of water and the pressure it creates at the bottom of the tank is what drives the design flow rate through the wide open valve, we could control the pump by measuring the pressure at the bottom of the tank and varying the speed as needed to increase or reduce the flow into the tank.  And since, for a fixed system, the pump speed and flow rate are directly related, a reduction in demand of 50% from the design value would mean that the pump only needed to run at 50% of the design speed to meet the new, lower flow requirement.

The head required to overcome the resistance due to flow for a given flow rate in a fixed system varies as the square of the flow (i.e. the Square Law).  As a result, when we reduce the flow by 50%, the head required to overcome the resistance to flow drops to 25% of what it was at the design condition.  Since the height of the tank and the discharge pipe did not change, the lift did not change.

The bottom line is that if we were controlling for a fixed pressure at the bottom of the tank, a reduction in flow out of the tank by 50% would cause the pump to slow down to 50% of its design speed.  The operating point would shift down the system curve to 50% of the design flow rate at a head equal to 25% of the design pressure drop due to flow plus the static lift over the top of the tank.
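Here is the same idea reduced to numbers; again, the design flow, friction head, and lift are assumed round values, not project data.

```python
# A minimal numeric sketch of the square law as applied above, with assumed
# round design values.

design_flow = 200.0          # gpm (assumed)
design_friction_head = 40.0  # ft of friction head at design flow (assumed)
static_lift = 20.0           # ft of lift over the top of the tank (assumed)

flow_fraction = 0.5                                   # 50% of design flow
friction = design_friction_head * flow_fraction ** 2  # square law: 25% of design
speed_fraction = flow_fraction                        # per the speed-to-flow relationship described above

print(f"friction head at 50% flow: {friction:.1f} ft ({friction / design_friction_head:.0%} of design)")
print(f"head at the new operating point: {static_lift + friction:.1f} ft")
print(f"approximate pump speed: {speed_fraction:.0%} of design")
```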

Start-up at 50% Design Conditions

image

The diagram above shows the tank immediately after a start-up at 50% load.   Since the water level is below set point, the pump ramps up to full speed. As the water level rises, the pump slows down and follows the system curve illustrated previously until it stabilizes at the design water level and 50% of design flow.

The shape of the system curve is not impacted by tank water level.  This is a subtle difference from the situation we will discuss next.

Steady State Again but with a Subtly Different Configuration

image

If you study the diagram above, you will realize there is a subtle difference between it and the previous diagrams;  the pump discharges into the bottom of the tank instead of the top of the tank.

Now, the lift that the pump needs to provide will be a function of the level of water in the tank.   When the tank is totally empty – at start up for instance – the pump will require less lift than when the tank is at the design operating level.  As a result, the system curve will shift down from the design operating point, and the operating point itself will shift out along the pump impeller line.

image

As a result, the pump will move more than design flow initially.  But as the tank fills, the pump head will increase because the static head imposed by the water level in the tank increases and the flow drops off.

The bottom line is that in this configuration, the water level in the tank impacts the system curve.
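A small variation on the earlier sketch shows how this plays out; the friction coefficient and levels are assumed values, and the "lift" term is now the current water level in the tank rather than a fixed value.

```python
# A minimal sketch of the bottom-fill case described above: because the pump
# discharges into the bottom of the tank, the "lift" term is the current
# water level, so the system curve shifts as the tank fills. The friction
# coefficient and levels are assumed values for illustration.

k_friction = 1.0e-3   # ft of head per gpm squared (assumed)

def system_head(flow_gpm, tank_level_ft):
    """Head required at a given flow when the tank holds tank_level_ft of water."""
    return tank_level_ft + k_friction * flow_gpm ** 2

for level in (0.0, 5.0, 10.0):   # empty, half full, design level (assumed)
    print(f"tank level {level:4.1f} ft -> head required at 200 gpm = {system_head(200.0, level):5.1f} ft")
# With the tank empty, the required head is lower, so the operating point
# slides out along the impeller line and the pump delivers more than design flow.
```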

The MCI Building System Arrangement

To fully understand the phenomenon we are about to discuss, you will need a general understanding of the physical arrangement of the MCI building air handling system in question.  Thanks to Google Earth and the internet, even though I no longer have the documentation for the facility, I was able to put something together.  The result is the images below. 

This first  image is of the roof top air handling equipment;  note the large, identical fan systems with symmetrical supply (towards the bottom of the picture) and return (towards the top of the picture) duct connections.

image

This image illustrates a typical floor plan as well as an overview of the building.   The left side of the floor plan would be towards the top of the image above.   The view of the building is from street level towards the bottom left of the image above.

image

The supply and return ducts from the air handling units in the first image come together into a common supply and return duct riser in the two shafts highlighted on the floor plans.

MCI Building Analogous Components

The analogous components in the context of the tank and pipe network relative to the building are as follows.

  • The fans inside the two AHUs are analogous to the pump filling the 1st tank.
  • The 1st tank is analogous to the discharge duct from the AHU, which is coupled to the distribution duct riser through a string of fittings that represent a significant portion of the system pressure drop due to their configuration and the high velocities that they operate at.[iii]
  • The 2nd tank represents the distribution riser, which is a straight run of duct and thus free of fitting pressure drops. However, it is long (the height of the building) and the implication of this is discussed subsequently.
  • The 3rd and 4th tanks represent the floor level distribution duct systems. In the actual building, there are distribution systems for each of the 12 floors served by the air handling system. But for the sake of illustration, I am only representing the top floor and the bottom floor in the analogy.
  • The two-way valves that allow water to leave the 3rd and 4th tanks and recirculate to the pump represent the VAV terminal units associated with the zones in the building.
  • The reservoir represents the return duct system.

The Floor Level Distribution Systems and Their Tank and Pipe Analogy

The distribution systems serving each floor in the facility are fed from the duct riser. Because it is a long duct, running the full height of the building, there is a pressure drop across its length, even though it is essentially a straight duct running down a vertical shaft.

As a result, the pressure at the fitting that taps the riser at the bottom to serve the 2nd floor distribution system will be lower than the pressure at a similar fitting serving the 11th floor distribution duct system. This difference in available pressure to deliver air to the different floors is represented by the short vs. long pipe connecting the tank representing the duct riser to the tank representing the 11th floor distribution system (the short pipe) and tank representing the 2nd floor distribution system (the long pipe).

A Bit More about the Reservoir

For the purposes of the discussion that follows, the reservoir from which the pump draws its water is assumed to be large enough so that there is no meaningful change in level between what exists at design flow and what exists when the system is off, when all of the water drains back to the reservoir. In other words, the pump performance is independent of the level of the water in the reservoir and is only a function of the elevation of the tank it serves, the water level in the tank it serves, and the speed it is operating at.

Pump and Tank System Control

In the analogy, the pump’s role is to move water from the reservoir to the first tank in the network.  The depth of water in the first tank, which represents the pressure created by the supply fan in the analogy, is what causes the water to flow to the other tanks, through the control valves and back to the reservoir.

The pump speed is controlled by the pressure at the bottom of the tank representing the lower floor of the building.  This is analogous to the remote pressure sensor I used to control the IGV’s on the supply fan initially as described in the first blog post in this series.

The pressure at the bottom of the tank is a function of the water level in the tank.   That means that if the water level in the tank is low relative to the desired level, the pump speed will increase, moving more water directly into the first tank and indirectly through the network of tanks and piping to the last tank.  There will be a time lag associated with this process and understanding that lag is the goal of the analogy.

The pump fills the 1st tank by pumping water into it from the bottom. As a result, the head the pump sees will vary with the level of water in the tank. In turn, this will cause the pump's operating point to vary with the level of water in the tank. This is analogous to how the supply fans in the AHU will perform as the duct system becomes pressurized.[iv]

The other tanks in the system are fed from the bottom of the tank ahead of them. As a result, the flow rate to the downstream tanks will vary with the pressure (water level) in the tank that is feeding them. This is analogous to how flow to the various floor level distribution systems will vary as a function of the pressure in the duct riser feeding them.

Finally, overflowing a tank is analogous to over-pressurizing a duct and causing it to fail.

Tank System Operation

Steady State Operation at Design Conditions

The illustration below (a repeat of the first illustration)  represents the system in steady state operation under design conditions.

Tanks Start-up v1

All the control valves (VAV terminals) are wide open. The pressure sensor in the 4th tank has the pump running at full speed because that is what is required at design to establish the level in the tank required to deliver design flow to the loads.

Notice that:

  • The level in the 1st tank is higher than the level in the 2nd tank, and
  • The level in the 2nd tank is higher than the level in the 3rd tank, and
  • The level in the 3rd tank is higher than the level in the 4th tank.

This is because it is the level difference between the tanks that causes the water to flow from one to the other.   In other words, the level difference represents the pressure drop due to flow in the pipe connecting the tanks.  Specifically, for the illustration above, it represents the pressure drop due to flow at design conditions.

These levels are not directly controlled.  Rather, they are established by the pressure in the 4th tank (which is directly controlled) feeding back to the other tanks through the piping network.

Response to a Load Reduction at a Load Served by the 4th Tank

If one of the loads served by the 4th tank dropped (required less water), it would trigger a  chain of events:

  1. The control valve would start to close, then
  2. The water level in the 4th tank would start to rise, and
  3. The pressure at the bottom of the tank would increase (due to the higher water level), and
  4. The control system would start to slow the pump down to re-establish the targeted operating level in the last tank.

Those four events are only the beginning of a very dynamic, interactive string of events that will ripple out through the system.

Initially, when one of the 4th tank loads dropped and caused its associated valve to close, the higher pressure (deeper water) in the 4th tank would reduce the pressure difference between the 3rd and 4th tank, causing the flow from the 3rd to 4th tank to drop off, which would cause the level (pressure) in the 3rd tank to rise.

The deeper water in the 3rd tank would tend to drive the flow out of it to the 4th tank back up again.  But it would also cause more than the design flow to leave the tank through the wide-open control valves, which, in turn, would cause them to throttle (modulate towards the closed position) to try to maintain set point.

In the early moments of this event, since the control system is just starting to slow the pump down and the correct level has yet to be established in the 4th tank, the amount of water coming into the 3rd tank is likely more than required by the loads it serves directly and the loads it serves via the water it delivers to the 4th tank. The combination of excess flow and the throttled valves on the 3rd tank will cause the tank water level to rise, which will tend to increase the pressure difference between the 3rd and 4th tank all other things being equal.

This increased pressure difference will tend to increase flow to the 4th tank, causing its level to rise and the 3rd tank's level to drop, all other things being equal. As a result, the water level (pressure) in the 4th tank would tend to increase, further slowing down the pump to try to bring the system back into balance at the set point.

Response to Other Load Changes

A similar but slightly different dynamic would be set up if a control valve leaving the 3rd tank was to modulate closed instead of a control valve in the 4th tank. And yet another similar but slightly different dynamic would be set up if either of those valves modulated back open again.

The point is that this is a very dynamic process with a lot of interactions between different elements of the system, some of which have no direct impact on the speed of the pump. One of the tricks in tuning a system like this is to try to find a tuning solution that will deliver stable performance under all the operating conditions that the system will see, including modest, gradual changes in load. But the process also needs to be able to react quickly enough to a major load change to prevent overflowing a tank (blowing up a duct).

System Dynamics at a Full Load Start-up

For most systems, a start-up is the largest load change the system will see, especially if the conditions at the loads are out of control. For example, a VAV system that is starting up on a warm morning after a long, hot weekend is likely starting with all the terminal units fully open and demanding their maximum flow.

Due to system diversity, this demand could actually be in excess of the design flow requirement.  As a result, the system will ramp up to full speed but will not be able to achieve its design static pressure set point until some of the zones start to cool off and close their dampers.

The illustration below shows the conditions immediately after start-up on a design day for our tank system.

Start-up

Immediately prior to this point in time, the tanks were all empty. Since there is no water (pressure) in the 4th tank, at start-up, the sensor that is located there to control pump speed commands the pump to full speed and will keep it at full speed until the water level in the 4th tank approaches the targeted set point (the red line next to the tank in the figure).

The pump was selected to deliver design flow to the system at the head established by the design water level in the 1st tank along with the elevation change required to get water to the tank in the first place and the pressure drop due to flow through the suction and discharge piping. But when the pump starts with no water in the tank and no flow in the system, the only head it sees initially will be what is required to lift water to the open tank.

As soon as it starts, the pressure drop due to flow will show up in the piping circuit. But depending on the volume of the tank relative to the pump's flow capacity, it could be a while before the head associated with the design water level in the tank is established. Thus, for a while at least, the pump will see less than the design head.

And, since the level control system is asking it to run at full speed, its operating point will shift out along its curve (impeller line) from the design point.  As a result, it will initially deliver more than the design flow to the tank.

As the tank fills, the head the pump sees increases and the operating point will move up its curve. If the pump was being controlled for the pressure at the bottom of the 1st tank instead of the pressure at the bottom of the 4th tank, as soon as the water level in the 1st tank approached the design level (the red line next to the tank in the figure), the pump would start to slow down in an effort to come into balance at the design level.

But, until water flows through the series of tanks and starts to fill up the 4th tank, there is nothing to tell the pump to reduce speed.

Thus, it will continue running at full speed for the time required to establish a level near the design level in the 4th tank. This time lag will be a function of several variables which are discussed subsequently. But for this entire time interval, the pump will remain at full speed, although the flow rate will continue to drop as the additional depth of water in the tank increases the head it sees and pushes it up its curve.

Of course, as the water level in the 1st tank increases, water will start to flow out of it to the other tanks. However, if you consider a special case – a situation where there was a valve in the line connecting the 1st tank to the 2nd tank and that valve was closed –  I think you can see that the pump would continue to run at full speed until it overflowed the 1st tank (ruptured the duct) simply because the signal controlling it was disconnected from what was going on in the tank due to the closed valve.

Returning to our case – where there is not a closed valve – the resistance due to flow and the volume associated with the network of tanks and pipes causes the first tank to initially fill up faster than the other tanks.

For one thing, the rate at which water is transferred from tank to tank is controlled purely by the level in the tanks relative to each other and the pressure drop due to the flow that is created by the level difference in the interconnecting piping.   Increasing the level difference will tend to increase the flow rate. 

But at the same time, the resistance due to flow will also increase as a result of the higher flow rate.  As a result, doubling the level difference will not double the flow rate;  it will only increase it by a factor of about 1.41, which you can predict by applying the square law to the situation.
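A quick numeric check of that statement; this is just the square law rearranged, nothing project-specific.

```python
# Because the pressure drop through the interconnecting pipe varies with the
# square of flow, the flow between two tanks varies with the square root of
# the level difference driving it.

import math

def relative_flow(level_difference, design_difference=1.0):
    """Flow as a fraction of design flow for a given driving level difference."""
    return math.sqrt(level_difference / design_difference)

print(f"{relative_flow(1.0):.2f}")    # 1.00 - design level difference, design flow
print(f"{relative_flow(2.0):.2f}")    # 1.41 - doubling the difference only adds about 41%
print(f"{relative_flow(0.25):.2f}")   # 0.50 - a quarter of the difference gives half the flow
```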

The bottom line is that until the design level is achieved in a given tank, the tanks downstream from it will not be able to deliver design flow. More specifically in the context of our example, that means that until the design level is achieved in the 2nd tank, the 3rd tank will not be able to deliver design flow to its loads and to the 4th tank.

And only after the design level is achieved in the 3rd tank will it be able to deliver design flow to the 4th tank. During this entire time, the pump will have been running at full speed, potentially over-filling the first tank.

The duration of this transient state will have a lot to do with the volumes of the tanks relative to the flow rate the pump could produce at full speed and the resistance to flow created by the piping interconnecting the tanks. If the volume of the tanks is small relative to the pump's rated flow and/or the flow required by the loads (imagine tall, thin tanks), then the required operating levels will be achieved much more quickly than if the volume of the tanks is large relative to the pump's rated flow and/or the flow required by the loads (imagine tall, wide tanks).

Similarly, if the piping is small relative to the flow it needs to carry at design conditions (visualize soda straws interconnecting the tanks), it will take more time and/or a larger level difference between the tanks to move a given volume of water from one tank to the other. In contrast, if the piping is large compared to the design flow (visualize a subway tunnel interconnecting the tanks), then it will take much less time and/or much less of a level difference to move a given volume of water between the tanks.

It is also important to recognize that during this start-up process, there is water leaving the tanks via the wide-open control valves serving the loads. In other words, some of the water that is transferred from the 2nd tank to the 3rd tank leaves the 3rd tank to go to the loads and is not available to increase the water level in the tank and/or be transferred to the 4th tank.

This further delays the time required to establish the desired operating level in the 4th tank, as does the fact that some of the water entering the 4th tank leaves to go to the loads and thus is not available to increase tank level and ultimately bring the system under control.
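To pull the whole start-up story together, here is a deliberately simplified simulation sketch of the four-tank network. To be clear, this is not a model of the MCI system; every number in it (tank areas, rim heights, pipe and valve coefficients, pump curve) is an assumption I picked for illustration. But it captures the mechanisms described above: square-law flow between tanks, loads bleeding water out along the way, and a pump that stays at full speed until the 4th tank approaches set point.

```python
# A minimal, made-up simulation sketch of the start-up transient described
# above. Every coefficient is an assumption chosen for illustration only.

import math

DT = 1.0                              # time step, seconds
AREA = [2.0, 4.0, 3.0, 3.0]           # tank plan areas, ft^2 (assumed)
RIM = 12.0                            # tank height; exceeding this = overflow
SETPOINT = 8.0                        # target level in the 4th tank, ft
K_PIPE = [0.30, 0.20, 0.15]           # interconnecting pipe coefficients (assumed)
K_LOAD = [0.00, 0.00, 0.15, 0.15]     # load valve coefficients (assumed)

def pump_flow(level_tank1, speed):
    """Crude pump model: delivery falls off as the 1st tank level (head) rises."""
    return max(0.0, speed * (3.0 - 0.15 * level_tank1))      # ft^3/s

def simulate(duration_s=1200):
    level = [0.0, 0.0, 0.0, 0.0]      # all tanks empty at start-up
    for step in range(int(duration_s / DT)):
        # Control: full speed until the 4th tank level nears set point
        speed = 1.0 if level[3] < SETPOINT else 0.5
        flow_in = [pump_flow(level[0], speed), 0.0, 0.0, 0.0]
        # Tank-to-tank transfer driven by level difference (square law)
        for i in range(3):
            dh = level[i] - level[i + 1]
            q = K_PIPE[i] * math.copysign(math.sqrt(abs(dh)), dh)
            flow_in[i] -= q
            flow_in[i + 1] += q
        # Loads (wide-open valves) and level update
        for i in range(4):
            flow_in[i] -= K_LOAD[i] * math.sqrt(max(level[i], 0.0))
            level[i] = max(0.0, level[i] + flow_in[i] * DT / AREA[i])
        if level[0] > RIM:
            return f"{step * DT:.0f} s: 1st tank overflowed before the 4th tank reached set point"
        if level[3] >= SETPOINT:
            return f"{step * DT:.0f} s: 4th tank reached set point without an overflow"
    return "neither an overflow nor set point within the simulated window"

print(simulate())
```

With the restrictive "piping" coefficients shown, the 1st tank overflows long before the 4th tank ever reaches set point. Open up K_PIPE or shrink the tank areas and the outcome flips, which is the trade-off the paragraphs above describe in words.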

System Dynamics at a Part Load Start-up

When the system starts at part load, all of the dynamics outlined above come into play. But in addition, when the pump is running at full speed, it is over-sized for the current load condition.

For the sake of discussion, let's assume that the two-way valves representing the loads are all 50% open at start-up. On the plus side, this means the water level required in the 1st tank to deliver design flow to the downstream tanks will be established more quickly. This is because the partially open valves will reduce the flow rate out of the tanks for a given water level compared to what happened when they were wide open.

But, if the water cannot get out of the 1st tank or downstream tanks fast enough, it is possible that the 1st tank still will overflow (the duct will fail) before the required operating level is established at the 4th tank. In fact, this could happen more quickly than it did during a start-up at full load (visualize starting up with the valves all closed).

Analogy Bottom Lines

Hopefully, at this point, you can see that there could easily be a combination of system dynamics that would cause the 1st tank to overflow before the desired operating level was achieved in the 4th tank.  And if you can see that, then you probably can understand what I believe to be the root cause behind my blowing up the duct in the MCI building.

Connecting the Dots

More specifically, when we went to start up the system for the first time using the remote sensor to control the inlet vanes on the supply fan (analogous to the pressure sensor on the 4th tank controlling the pump speed), it was a mild day.  Since the building was generally at the ambient temperature because we were just starting up the HVAC systems, many of the terminal units were partially closed (analogous to the valves on the tanks being partially closed).

Since the fan was off, the duct system was not pressurized (analogous to all of the tanks being empty).  When we started the fan, for it to pressurize the remote portion of the system where the controlling sensor was located, it also needed to pressurize the duct system leading to the remote location (analogous to the upstream tanks needing to fill before the 4th tank, where the pressure sensor was located, could start to fill).

The geometry of the fittings on the discharge of the fan caused the static pressure to build up fairly rapidly at that location and, at the same time, delayed the pressurization of the downstream ductwork (analogous to the size and length of the piping interconnecting the tanks impacting how quickly they can be filled by water coming from a tank upstream of them).

All of this time, because the pressure at the remote location in the ductwork was below set point (the level in the 4th tank was below the design water level), the inlet guide vanes at the fan were held wide open (the pump ran at full speed).

As a result, the fan was able to generate a pressure that exceeded the pressure rating of the discharge duct even though the pressure at the remote location had not come up to set point (the pump completely filled up the first tank and caused it to over-flow before the 4th tank was at the targeted operating level).

And while there are some differences between the tank system and the MCI VAV system that is behind this string of blog posts, I am hoping that you can see that what happened in the MCI VAV system on the day of my significant emotional event was very similar to what happens in the tank analogy in a scenario where the pump can fill and over-flow the 1st tank before the required operating level is achieved in the 4th tank.

Differences Between the Pump and Tank Analogy and the MCI Building Air Handling System

As I mentioned at the start of the post, there are some differences between my tank analogy and the air handling system in the MCI building that will come into play.  The primary differences are:

  • Air is compressible and water isn’t.
  • For all practical purposes, the fan does not have to lift the air to the top of the system, whereas the pump had to lift water to the tank level.
  • As a result of the preceding, the system curve[v] for any given operating condition will always pass through 0 cfm at 0 in.w.c. But the operating curve for a VAV system will not do that as the load drops off if it is being controlled for a fixed pressure someplace in the system.
  • The pumping analogy is about filling volumes. The fan system is about pressurizing volumes. In the fan system at start-up, the volumes represented by the duct system are already full of air at the ambient pressure; the fan simply adds more air to the volume to elevate the pressure to the targeted design static pressure.
  • If the 40 or so feet of straight duct on the discharge of the fan at the MCI building was a closed volume, the ideal gas equation says it would only take about 14 extra standard cubic feet of air to pressurize it to 4 in.w.c. (see the sketch after this list). But, if it was open ended, then the fan that was in place, operating at the design speed, could never reach 4 in.w.c. because of how much air was exiting at the other end of the duct.
  • The reality for a large VAV system will be between the two extremes described in the previous bullet and will be a function of the size of the volumes and the nature of the resistance between the various volumes in the system.
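Here is the arithmetic behind the "14 extra standard cubic feet" bullet. The duct cross-section is my assumption (roughly 6 ft by 6 ft, which is in the ballpark for a large AHU discharge); with that assumption and the ideal gas relationship at constant temperature, the number works out to about 14.

```python
# The arithmetic behind the "14 extra standard cubic feet" bullet above.
# The duct cross-section is an assumption, not a value from the project.

P_ATM_PSIA = 14.696          # standard atmospheric pressure
IN_WC_PER_PSI = 27.68        # 1 psi is roughly 27.68 in.w.c.

duct_length_ft = 40.0
duct_area_ft2 = 6.0 * 6.0    # assumed cross-section
volume_ft3 = duct_length_ft * duct_area_ft2

delta_p_psi = 4.0 / IN_WC_PER_PSI
# At constant temperature, the added standard volume is proportional to the
# pressure rise relative to atmospheric pressure
extra_standard_cf = volume_ft3 * delta_p_psi / P_ATM_PSIA
print(f"{extra_standard_cf:.1f} standard cubic feet")   # about 14
```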

So there you have it;  my theory about why the lags introduced by the configuration of a large distribution system can make the system challenging to bring on line and tune.

In the final post of this series, I will touch on some of the reasons that I think not every system will exhibit the problem I experienced at the MCI Building. And I will look at how we solved the problem in the MCI building, a solution which is also applicable in the general case if you are dealing with a large, complex system.


David-Signature1_thumb1_thumb                                                        

PowerPoint-Generated-White_thumb2_th

David Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/

[i] If you want more details on pump physics, you can probably get them by exploring the Energy Design Resources Design Brief titled Pump Optimization and Assessment, which can be found on the Energy Design Resources page of our commissioning resources website.

[ii] For more on linkage systems kinematics, visit Economizers–The Physics of Linkage Systems at https://av8rdas.wordpress.com/2015/10/04/economizersthe-physics-of-linkage-systems-2/.

[iii] One of the interesting things about large ducts (in a nerdy sort of way) is that while they may be operating at a fairly low friction rate due to the large cross-section they contain relative to the perimeter, the velocities at the low friction rate can be quite high. As a result, the velocity pressure will also be quite high. Since duct fitting pressure drops are a direct function of velocity pressure, a string of closely coupled (interactive) fittings like those that existed at the MCI building to get from the roof, into the building, and over to the distribution shaft can represent a significant pressure drop, even though the friction rate of the duct they are serving is fairly low.

[iv] The Howden/Buffalo Fan Engineering Manual includes a discussion of fan system start-up characteristics, including performance curves in Chapter 15. That chapter also illustrates how inlet guide vanes impact fan performance. You will find a link that will allow you to obtain a free electronic copy of the manual at https://av8rdas.wordpress.com/2017/11/15/howden-buffalos-fan-engineering-handbook/.

[v] It is important to remember that VAV systems operate over a family of system curves with the steepest one generally associated with the condition created by all terminal units operating at minimum flow and the shallowest one created by all terminal units operating at maximum flow. If, for either of these curves, or any one in between, I were to slow the fan down and nothing in the system moved, then the operating point would go through 0 in.w.c. and 0 cfm at 0 rpm. This is different from the operating curve that a VAV system follows as the load drops off while it attempts to maintain a fixed pressure at some point in the system. You will find more information about this at http://www.av8rdas.com/affinity-laws.html#Profile.

Posted in Air Handling Systems, Controls, HVAC Fundamentals

Lags, the Two-Thirds Rule, and the Big Bang, Part 3

In the previous post, we took a look at why moving the sensor that controls discharge static pressure in a variable volume fan system out into the distribution system will save energy compared to controlling for the pressure at the fan discharge. But when we moved the sensor out into the system, we introduced a lag, which can make the control process more challenging to tune and can even lead to a significant emotional event like the one I described in the first post in this series.

In this post, we will take a focused look at what lags are in the general case.  In the next post, we will look at them in the context of  the specific case of the system where I had my learning experience.

Lags

Whether you realized it or not, you probably have observed a lag in a control process. For instance, when you raise the set point on the thermostat in your house on a cold day, you probably hear a click, as a relay closes its contact in response to your adjustment. Though this may seem immediate, there is a small lapse of time between your turning the knob or pushing the button and the relay pulling in and then another between the relay pulling in and the furnace starting.

These lapses in time are termed “lags” in control industry jargon. The accumulation of all of the lags in a control process is what the folks who tune control loops call the “apparent dead time.”

An Example

To illustrate lags and apparent dead time, I am going to use a thought experiment centered on the steam-heat-exchanger control process shown below[i] . The numbers in the diagram indicate points in the control process where a lag occurs. The chart shows the effect on water temperature.

Heat Exchanger r1

For our experiment, I am going to use a pneumatic controller because that is what I was working with when I made my “discovery.” The reality is that most of the issues I was up against would have existed with a DDC system, just in a different form. At the end of the day, to design a good control process, you need to understand the physics of the system and equipment, and a lot of that will be independent of the control-system technology you are using.

At the start of the experiment, the system is steady-state with a hot-water-supply temperature of 100°F. Just prior to Time = 0 on the chart, we place the controller in “manual,”[ii] and at Time = 0, we increase the output by a fixed amount in the direction that will cause the steam valve to open. Technically, this is called inserting a step change, which upsets the steady-state condition.

With a heat exchanger, when you do this, the system temperature will rise and then level out at a new steady-state condition, as illustrated in the chart in Figure 1. Technically, we would say the process exhibits a first-order response to the step change.

What happens between Time = 0, the point when we turn the knob on the controller, and Time ≈ 0.5, the point when the temperature of the water starts to climb (blue circle on the chart)? That is the apparent dead time. The apparent dead time is the accumulation of all of the lags in the system, which occur at the points numbered from 1 through 8.
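If you want to play with the shape of that curve, here is a minimal sketch of a first-order response preceded by an apparent dead time. The dead time, time constant, and temperatures are assumed values chosen to roughly mimic the chart, not data from the experiment.

```python
# A minimal sketch of a first-order response preceded by an apparent dead
# time, roughly mimicking the shape of the chart. All values are assumed.

import math

DEAD_TIME = 0.5       # apparent dead time, in the chart's time units (assumed)
TIME_CONSTANT = 1.0   # first-order time constant (assumed)
T_START, T_FINAL = 100.0, 120.0   # deg F before and after the step (assumed)

def water_temp(t):
    """Hot water supply temperature t time units after the step change at t = 0."""
    if t < DEAD_TIME:
        return T_START
    return T_FINAL - (T_FINAL - T_START) * math.exp(-(t - DEAD_TIME) / TIME_CONSTANT)

for t in (0.0, 0.25, 0.5, 1.0, 2.0, 4.0):
    print(f"t = {t:4.2f}   T = {water_temp(t):6.1f} F")
```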

Item 1 – Controller and Set-Point-Adjustment Lags

With an analog mechanical controller, there will be a slight lag between the time we first touch and begin to turn the set-point knob and the time the controller reacts. For one thing, it takes a finite amount of time for a human to move a knob through an arc. Also, moving the knob compresses a spring or bellows or moves a lever, and there is some measure of hysteresis associated with the mechanism.

With a DDC system, despite the electrical signals moving at the speed of light, there still is a lag, one between the keystroking of the command and the time the keystrokes become electrical signals. What’s more, depending on the structure of the control system and process, a lag could be introduced by the control network, if the set-point change was initiated at an operator workstation not directly wired to the controller where the control-process code is executed. The duration of the lag will be a function of the network architecture, traffic level, and communication speed and could range from seconds to, in the case of an older legacy system, a minute or more.

Item 2 – Signal-Transmission Lags

Valve and Actuator

For a pneumatic valve to move, a volume of air needs to flow from the air source through the controller mechanism to the actuator. Most pneumatic actuators balance air pressure against a spring force, as illustrated to the left.

More specifically, air pressure is applied to one side of a diaphragm to generate motion in one direction (blue arrow in Figure 2). This force, in addition to moving the piston and shaft, compresses a spring, which generates an opposing force (red arrow in Figure 2). The pressure of the fluid in the pipe also plays into the balance of forces (green arrow in Figure 2). The direction in which it is applied will vary with the design of the valve and the direction of flow through it.

The volume of air needed to move to the actuator as a result of our step change will be a function of how much motion the change in pressure will create in the actuator shaft when all of the forces are multiplied by the cross-sectional area of the piston or diaphragm used to generate the force and transmit it to the actuator shaft.

The speed at which the volume of air is delivered will be a function of the pressure difference available to drive the air through the pneumatic tubing and controller internals and the resistance to flow associated with the path.

The timing will be a bit non-linear because, at the beginning of the change, the pressure difference across the system will be greater than it will be toward the end. For example, if moving the dial changes the controller output from 5 psi to 8 psi and the air source to the controller is 20 psi, when the process starts, 15 psi (20 – 5) will be available to drive air to the actuator, but by the end of the process, only 12 psi (20 – 8) will be available.

With a DDC system using an electronic signal, the change in value happens at the speed of light and is inconsequential as a lag. The electronic/electric actuator, however, may introduce a lag that is significantly larger than the one introduced by a pneumatic actuator. That is because most electric/electronic actuators used in HVAC use a geared-down motor, as we typically need a significant force or torque to move a valve or damper, but are limited in how much power we can send to the actuator via the current flowing in the wire serving it.

As a result, it is challenging to find an electric/electronic actuator with a full-stroke run time of less than 15 sec. Thirty- and 60-sec run times are common; large actuators delivering a lot of torque may have full-stroke run times of 90 to 120 sec. In contrast, a pneumatic actuator with enough air pressure and volume could go full stroke in a second or two. In fact, it may need to be slowed a bit to avoid air or water hammer.

Item 3 – Linkage-System Hysteresis

In practical terms, with a linkage system, hysteresis means “play” in the linkage. Consider a quarter-inch pin attached to a link that is used to connect to a second link, where the hole provided for it in the second link is a half-inch in diameter. If, at the start of motion, the pin is in the center of the hole, the linkage physically connected to the pin will need to move one-eighth of an inch before the linkage containing the hole comes into contact with the pin and starts to move.

On the return stroke, the pin will need to move a quarter of an inch before it contacts the other side of the hole and begins to move the link containing the hole in the other direction. This will introduce both a delay and non-linearity into the control process because the timing of the lag will vary with the direction of motion.
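For those who like to see it in code, here is a minimal sketch of that lost-motion behavior using the pin-and-hole dimensions from the example above (a classic backlash model; the starting position of the pin is assumed to be centered in the hole).

```python
# A minimal sketch of linkage "play" using the dimensions from the example
# above: a 1/4 in. pin in a 1/2 in. hole, so 1/4 in. of total free play.

class Backlash:
    """Classic backlash model: the output only moves once the input has taken
    up the free play in the direction of travel."""

    def __init__(self, total_play, output=0.0):
        self.half_play = total_play / 2.0   # pin assumed to start centered
        self.output = output

    def move(self, input_position):
        if input_position - self.output > self.half_play:
            self.output = input_position - self.half_play
        elif self.output - input_position > self.half_play:
            self.output = input_position + self.half_play
        return self.output

link = Backlash(total_play=0.5 - 0.25)   # inches of free play
for x in (0.0, 0.1, 0.2, 0.3, 0.2, 0.1, 0.0):
    print(f"input {x:4.2f} in. -> output {link.move(x):5.3f} in.")
# Going up, the output lags the input by 1/8 in.; on the return stroke, the
# output does not move until the input has backtracked a full 1/4 in.
```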

Item 4 – Valve-Plug/Disc Characteristics and Hysteresis

In a manner like what was described for the linkage system, there can be some “play” in the connection between the valve stem and the valve plug or disc immersed in the fluid stream. In addition, the flow-vs.-motion characteristic of the valve plug may not be linear.

Item 5 – Steam-System Response Time

When the steam valve in our experiment opens, the rate of steam flow increases because the resistance to flow represented by the position of the valve plug decreases. Initially at least, this causes pressure in the steam-distribution system to decrease, which has quite a few elapsed-time implications associated with it, especially when a saturated or superheated fluid is involved.

For one thing, there may be a pressure-regulating valve managing the local steam-distribution pressure that will need to react to the pressure drop. Meanwhile, the control system managing the boilers that are generating the steam will need to adjust the firing rate to match the new load condition. How quickly this happens depends on the boiler fuel and burner-control technology.

For instance, a coal-fired boiler with a chain-grate stoker will probably not react as quickly as a gas-fired boiler.  And the reaction time for the boilers serving a massive campus distribution system will likely be different from what occurs with boilers serving a facility locally.

With a saturated or superheated system, a change in pressure means a change in fluid state and characteristics. For example, the saturation temperature of steam at 24 psia (approximately 9.3 psig) is about 238°F; at 23 psia, it is more like 235°F.

All of this impacts how long it takes to re-establish a flow and heat-transfer rate at the new steady-state condition and contributes to apparent dead time.
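If you want to check those saturation numbers yourself, here is one way to do it, assuming the CoolProp library is available (it is just a convenient property calculator; nothing in the control discussion depends on it).

```python
# A quick check of the saturation temperatures quoted above, assuming the
# CoolProp library is installed (pip install CoolProp).

from CoolProp.CoolProp import PropsSI

PSI_TO_PA = 6894.757

def sat_temp_f(psia):
    """Saturation temperature of steam at a given absolute pressure, in deg F."""
    t_kelvin = PropsSI("T", "P", psia * PSI_TO_PA, "Q", 0, "Water")
    return (t_kelvin - 273.15) * 9.0 / 5.0 + 32.0

for p in (24.0, 23.0):
    print(f"{p:.0f} psia -> {sat_temp_f(p):.0f} F")   # roughly 238 F and 235 F
```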

Item 6 – Thermal Lags

The heat exchanger and the water it contains represent thermal mass. Changing the temperature of that thermal mass will take time. The complexity of the response will be compounded by the fact the water is flowing, as opposed to stationary.

Item 7 – Transportation Delays

There will be a lag between the time the water inside the heat exchanger changes temperature and the time the higher-temperature water reaches the sensor that provides the input to the controller. If the sensor is near the heat exchanger, most of the time, the lag will be relatively small.

The lag, however, will be influenced by the thermal mass of the piping system and the quality of its insulation because some of the energy in the water will be used to elevate the temperature of the piping between the heat exchanger and the sensor and, thus, will modestly reduce the temperature of the water that reaches the sensor until steady-state conditions are achieved.

Item 8 – More Thermal Lags

Virtually every temperature-sensing element—be it mechanical, electromechanical, or electronic—has some sort of thermal mass associated with it. For a pneumatic controller, a common approach is to use a system consisting of a hollow bulb connected by a capillary tube to a bellows. The system is filled with a liquid-vapor mix that operates as a saturated system. As a result, if the temperature is increased, some of the liquid vaporizes. Since the mix is in a confined volume, the result is a pressure increase, which causes a bellows to expand or contract and move something in the controller to cause an appropriate reaction.

For all of this to happen, the metallic enclosure containing the fluid needs to change from the initial steady-state temperature to the new temperature associated with the steam-valve opening. Because the sensing elements in a water system usually are installed in a well so they can be replaced without the system being drained, the mass of the well needs to warm before the sensing bulb warms.

The illustration below is based on logged data from an experiment in which heat from a hair dryer was applied to a temperature sensor with and without a well.

image

Note that:

  • The well does make a difference. However, even without a well, a lag is introduced between the time heat is applied (gold dashed line in the example) and removed (blue dashed line) and the time the system reacts (solid red and green lines).
  • With or without a well, because of the impact of the thermal mass, the temperature keeps rising after the heat source is removed.
  • With or without a well, the sensor “thinks” the local environment is warmer than it is (72°F to 74°F) for quite some time after the heat is removed.

You can see a video of this experiment and watch how things change in real time via the meter, along with a more detailed discussion of thermal lags (including another experiment demonstrating the lags associated with two temperature sensors that have very different masses), in a previous blog post titled 4-20 ma Current Loop Experiments – Thermal Mass Effects.
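For what it is worth, the well-versus-no-well difference can be roughed out with a pair of first-order lags: the bare sensor chases the air temperature directly, while the welled sensor chases the well, which in turn chases the air. The time constants and temperatures below are assumptions for illustration, not values extracted from the logged data, and this simple model will not reproduce every feature of the chart.

```python
# A rough sketch of the well-versus-no-well behavior: one first-order lag for
# the bare sensor, two first-order lags in series for the sensor in a well.
# All time constants and temperatures are assumed.

TAU_SENSOR = 20.0   # sensor time constant, seconds (assumed)
TAU_WELL = 60.0     # well time constant, seconds (assumed)
DT = 1.0            # integration step, seconds

def simulate(heat_on_s=120, total_s=360, t_ambient=72.0, t_heated=110.0):
    t_well = t_in_well = t_bare = t_ambient
    history = []
    for step in range(int(total_s / DT)):
        source = t_heated if step * DT < heat_on_s else t_ambient
        # Bare sensor: single lag toward the source temperature
        t_bare += (source - t_bare) * DT / TAU_SENSOR
        # Sensor in a well: the well chases the source, the bulb chases the well
        t_well += (source - t_well) * DT / TAU_WELL
        t_in_well += (t_well - t_in_well) * DT / TAU_SENSOR
        history.append((step * DT, t_bare, t_in_well))
    return history

for t, bare, welled in simulate()[::60]:
    print(f"t = {t:5.0f} s   bare = {bare:6.1f} F   in well = {welled:6.1f} F")
```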

Hopefully, this has given you some insight into what lags are and how they can impact a system and its control processes.

In Part 4 of the series, we will look at the lags I was dealing with in the MCI building, with a focus on what turns out to be a very complex transportation lag. I believe there are also reasons aside from the system lag dynamic that result in this problem not occurring on all projects, and I will highlight them in that post as well.

Finally, in Part 5 of this series, we will look at how we solved the problem in the MCI building, a solution which is also applicable in the general case if you are dealing with a large, complex system.

David-Signature1_thumb1_thumb                                                        

PowerPoint-Generated-White_thumb2_th

David Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/

[i] While our discussion has been based on a case study centering on a variable air volume system, all the concepts apply to variable flow water systems.

[ii] What this means is that we disconnect the output of the controller from the mechanism in the controller so that the controller has no influence on it. In this operating mode, the output of the controller will only be affected by the manual adjustments we make.

Posted in Air Handling Systems, Controls, Mentoring and Teaching, Pneumatic Controls

Taylor Engineering’s COVID-19 White Paper

Just a quick note to let you know about a very timely, well-researched, well-written, well-thought-out, and practical discussion of the COVID-19 crisis in the context of the HVAC systems many of us deal with as a part of our jobs.

image

On Thursday, Taylor Engineering published a white paper that takes a very thorough look at the topic as you can see from this screenshot of the bookmarks.

image

Steve Taylor, the paper’s primary author and a leader in our industry, certainly has the expertise, technical background, and relationships to put something like this together, having been a member of the committee responsible for ASHRAE Standard 62.1, Ventilation for Acceptable Indoor Air Quality, for 8 years, serving as its chair for 4 of those years.

Currently, he is a member of ASHRAE’s Technical Committee TC 4.3, which addresses ventilation, demonstrating his ongoing passion for the topic.  Steve mentioned to me that he read some 80 research papers in the course of developing the paper, another sign of his passion and dedication.

If you are involved in any way with commercial building operations, in particular their HVAC systems, I think you will find this to be a valuable resource and reference, including links to other information on the topic.  So please follow the link, download a copy, and take the time to read through it.  I think you will find it to be well worth the time.

David-Signature1_thumb1_thumb                                                        

PowerPoint-Generated-White_thumb2_th

David Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/

Posted in Uncategorized

Lags, the Two-Thirds Rule, and the Big Bang, Part 2

In the previous post, I described a significant emotional event I experienced in an early attempt to use a remote duct static pressure sensor to control a large variable air volume system. The remote sensor approach represented an application of the two thirds rule to make the system more efficient.

In this post, I will look at why a remote duct static pressure sensor has the potential to deliver energy savings compared to controlling a VAV system based on a fixed discharge pressure.

Why Worry About the Two-Thirds Rule

At the time of the project behind this blog post, the reason for wanting to apply the two thirds rule was a personal and corporate goal to be energy efficient. But it was not a code driven requirement.

However, work by ASHRAE during the late 1980s and 1990s resulted in Standard 90.1, which, in so many words, mandated applying the two thirds rule as a code requirement for many systems. But before we look at what current codes would require, let’s explore why the two thirds rule concept saves energy in the first place.

Two-Thirds of What?

The real question about the two thirds rule for many is “two thirds of what?” I am frequently asked this question by operators and technicians who have heard of the concept and are interested in its benefits but are uncertain of how to implement it.

In other words, is the rule saying the sensor should be at a point that is:

  • Two thirds of the horizontal distance from the discharge of the fan to the most remote point in the system on a plan view of the facility? Or,
  • Two thirds of the vertical distance from the fan to the most remote floor? Or,
  • Two thirds of the physical length of the longest duct run from the fan?[i]

As we will see, all those interpretations would work. In fact, the rule could have been called:

The “75 to 100 percent out the duct rule” (per the Honeywell Gray Manual)[ii], or

The “15/16ths” rule, or

The “27/32nds” rule.

The bottom line is it was intended as a guideline, not an exact solution, that encouraged moving the sensor out into the distribution system.

Contrasting Discharge Pressure Control with Remote Pressure Control

To illustrate the benefit associated with controlling for a remote duct static pressure, let’s contrast what happens for a simple system if it is controlled for discharge pressure vs. a remote pressure. This example is based on a SketchUp model I use for Existing Building Commissioning (EBCx) training, which has its roots in some of the systems I have seen in existing hotels serving meeting rooms and ballrooms.[iii]

Controlling for Discharge Pressure Near a Fan Location

Consider the system illustrated below (the ceiling of the mechanical room has been removed to reveal the distribution ductwork serving the two zones in the ballroom above).

Ball Room AHU

Technically, we could control fan speed based on the pressure near the fan discharge—for instance, after the two elbows and transition (Point A).

Engineering calculations similar to those illustrated subsequently under Controlling for a Remote Duct System Pressure reveal that a static pressure of 1.102 in.w.c. is required under design conditions at Point A to deliver design flow, meaning that this pressure would become the set point for a control process referencing that location.

As the load in either of the zones served by the system drops and the terminal unit dampers throttle, the discharge pressure will tend to go up. Upon detecting this, a properly designed control process would reduce the fan speed (or, for the MCI Building, close the IGVs) to return the discharge pressure to set point.

Examination of the fan-energy equation …

Fan bhp

… reveals that, in this scenario, energy would be saved for two reasons. One is that the flow rate dropped, meaning one of the terms in the numerator became smaller, which will make the result smaller even if nothing else changed.

But the pressure drops through the filters, coils, and other components of the air-handling unit that are upstream of the discharge sensor also will drop due to the reduced flow rate. The square law [iv] …

Square Law

… allows us to quantify this for the new flow condition based on the design flow conditions.

As a result, the total system static pressure would be reduced, even if the discharge static pressure were held constant. Thus, a second term in the numerator of Equation 1 became smaller.

Clearly, then, a system designed to reduce flow as load drops will save energy compared with a system with a steady flow rate, even if the design discharge static pressure is held constant for all hours of operation.
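To put rough numbers on those two effects, here is a small sketch using the common form of the fan power relationship (brake horsepower equals cfm times static pressure in in.w.c. divided by 6,356 times efficiency). The flow, upstream pressure drop, and efficiency are assumed round values; only the 1.102 in.w.c. discharge set point comes from the example above.

```python
# A rough numeric sketch of the two savings effects described above, using
# bhp = cfm x in.w.c. / (6,356 x efficiency). Flow, upstream drop, and
# efficiency are assumed; only the discharge set point is from the example.

def fan_bhp(cfm, static_in_wc, efficiency=0.65):
    return cfm * static_in_wc / (6356.0 * efficiency)

design_cfm = 20000.0
upstream_drop_design = 2.0   # filters, coils, etc., in.w.c. (assumed)
discharge_setpoint = 1.102   # in.w.c. held constant at Point A

bhp_design = fan_bhp(design_cfm, upstream_drop_design + discharge_setpoint)

# At 50% flow the upstream drop falls with the square of flow (square law),
# but the discharge set point is still held at its design value.
upstream_drop_half = upstream_drop_design * 0.5 ** 2
bhp_half = fan_bhp(0.5 * design_cfm, upstream_drop_half + discharge_setpoint)

print(f"design flow: {bhp_design:5.1f} bhp")
print(f"50% flow:    {bhp_half:5.1f} bhp (even with the discharge set point held constant)")
```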

If the square law is to be believed (in other words, if you have a modicum of respect for Isaac Newton and Johannes Kepler and those that followed), the pressure required to move air from Point A to Point B also will drop as flow drops. But because the control process is forcing discharge static pressure to the design requirement—even though that amount of static is not required at the reduced load condition—the terminal-unit dampers will need to throttle to dissipate the unnecessary pressure the fan is creating, which can also create a lot of noise.

Therein lie the improvements that can be achieved by applying the two-thirds rule.

Controlling for Remote Duct-System Pressure

Consider what would happen if we located the sensor immediately ahead of the point where the duct splits to serve the two ballroom zones: Point B in the first illustration (which just happens to be about two-thirds of the way to the terminal-equipment location).

A Cautionary Tale

Before going further, there is a point I feel compelled to make about the specific code requirements that would drive a design decision process to use remote duct system pressure to control a VAV system. 

In the first draft of this post, at this point in the discussion, I wrote:

For current design projects, ANSI/ASHRAE/IES 90.1, Energy Standard for Buildings Except Low-Rise Residential Buildings, prescriptively requires that duct static-pressure-sensor location be such that a set point of no more than one-third of total system static-pressure drop is required. Clearly, then, a sensor cannot be located at the discharge of a fan.[v]

At the time, I didn’t have the most recent copy of the referenced guideline, but I did have the 2019 ASHRAE Applications Handbook, so I referenced that.

One of my colleagues, in their review, pointed out that despite what the handbook says, my statement was not correct, which is why I include this little cautionary tale.

ANSI/ASHRAE/IES Standard 90.1-2019, now says:

Static pressure sensors used to control VAV fans shall be located such that the controller set point is no greater than 1.2 in. of water. If this results in the sensor being located downstream of major duct splits, sensors shall be installed in each major branch to ensure that static pressure can be maintained in each.[vi]

The standard includes an exception that allows facilities with DDC systems to achieve compliance by implementing a trim-and-respond control strategy like the one recommended in ASHRAE Guideline 36, High Performance Sequences of Operation for HVAC Systems.[vii] DDC systems may or may not be required depending on a number of variables, as illustrated below in a screenshot of ANSI/ASHRAE/IES Standard 90.1-2019 Table 6.4.3.10.1 – DDC Applications and Qualifications.

ANSI ASHRAE IES 90.1 Table

I believe the current language in 90.1-2019 is unchanged from what the 2016 version of the standard would require. That implies that the 2019 ASHRAE Applications Handbook reference is to a version prior to 2016.

My point here is that even though the handbook represents ASHRAE’s position on a subject, in the code compliance scenario associated with a design process for a new construction project or a retrofit, the code in force is what will govern. In other words, I should have gone straight to the source and dug out the code and verified what I had read in the handbook before I wrote those lines in the first draft of the blog post.

Having said that, even if you go straight to the source; i.e. the governing code, things may not be as clear as you would hope, especially in existing buildings.

Existing Building Complications

In the mid-1980s – when my significant emotional event in the MCI building happened – neither of the standards and guidelines referenced above existed. In fact, the technology for performing a trim-and-respond strategy did not exist. Thus, our goal was to deliver the benefits of a concept that was being used as a general guideline for improving energy efficiency.

If I was working on the MCI project today (the project associated with the story behind this string of blog posts), either as a new construction project or as a retrofit, I would need to comply with the more specific language of the governing code. But the governing code may or may not be the most current version of a given standard, depending on where the jurisdiction is in terms of updating the codes they enforce. As a result, things can start to get a little “murky”.

And in my experience, in the existing building operations arena, this can get even “murkier”. Most of the time, the facility operators and technicians I get to work with have a passionate desire to improve the performance and efficiency of their systems. Frequently, they are crippled in their efforts by the realities of their operating budgets and equipment. Every year I run into one or two operators who are working with systems that have pneumatic controls and who don’t have the budget to upgrade to DDC. But what they do have is the skill and interest in making what they have work better once they understand how to go about doing it.

That means that for operators in a facility that does not have the technology in place to comply with the “letter of the law” (a trim and respond strategy for controlling duct system static pressure), the approach we used for the MCI building could deliver a significant portion of the savings that can be achieved. 

Returning to Our Discussion

For a sensor located at point “B” in Figure 1, engineering calculations would reveal that a pressure of 0.975 in.w.c. is required at Point B to maintain flow to the two symmetrically ducted zones served by the system. Thus, if we were to use our control process to maintain this pressure, we would deliver the design flow rate to each zone.

If we used one of the terminal units to do zone-level scheduling, stopping airflow to half of the ballroom when it was not in use while the other half was, the demand for airflow would be cut in half. And if we maintained 0.975 in. w.c. at Point B when the inactive zone shut down, we would still deliver design airflow to the half of the ballroom remaining in service.

The image below illustrates the pressure drop calculation, just to give you a sense of what something like that looks like.

Fan Static Projection v2

The graphics are screen shots from the ASHRAE fitting database, which was used to do the math for the fittings in the analysis.

In addition to looking at the design flow rate, the calculations also look at what would happen to the pressure drop in that section of duct if the flow were reduced by 50 percent, using both the square law and the more precise One Point Eight Five to One Point Eight Nine Law. Because the difference between what the square law and the more refined calculation predict for this short duct run is in the third decimal place, I will simply reference the numbers predicted by the square law for the purposes of this discussion.
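If you want to check the “third decimal place” claim yourself, here is a quick Python sketch of the comparison. The design-condition pressure drop used for the duct run between Points A and B is simply the difference between the Point A and Point B pressures discussed in the next section (1.102 − 0.975 in. w.c.), and the exponents are the two discussed above.

```python
# Compare how the A-to-B duct pressure drop scales at 50 percent flow using
# the square law (exponent 2) versus the more refined 1.85 exponent.

design_drop = 1.102 - 0.975   # in. w.c. between Points A and B at design flow
flow_fraction = 0.50          # 50 percent of design flow

drop_square = design_drop * flow_fraction ** 2
drop_185 = design_drop * flow_fraction ** 1.85

print(f"Square law drop at 50% flow:    {drop_square:.3f} in. w.c.")
print(f"1.85 exponent drop at 50% flow: {drop_185:.3f} in. w.c.")
print(f"Difference between the two:     {drop_185 - drop_square:.3f} in. w.c.")
```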

How it Works

Under design conditions, if a sensor at Point B were to meet its targeted set point of 0.975 in. w.c., the fan would be forced to deliver 1.102 in. w.c. at Point A because that is the pressure needed to overcome the resistance due to flow between the two points and deliver 0.975 in. w.c. at Point B. This is the same result as would be achieved by a system that simply controlled for the design static pressure at Point A.

However, at 50-percent flow, a system controlled by a sensor at Point B would force the fan to deliver only 1.007 in. w.c. (the 0.975 in. w.c. required to deliver design flow to either zone from Point B plus the 0.032 in. w.c. required to deliver 50 percent of design flow to Point B). Thus, at part load, the total system static requirement is reduced from what would be achieved in a system controlling for a fixed discharge static pressure.
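Here is the same arithmetic as a small Python sketch, using the numbers from the two paragraphs above and scaling the duct loss between Points A and B with the square law. The “fixed discharge” case simply holds the design discharge pressure regardless of flow.

```python
# Fan discharge pressure (Point A) needed to hold the Point B set point at
# design flow and at 50 percent flow, versus holding a fixed discharge pressure.

setpoint_B = 0.975                    # in. w.c. required at Point B for design flow
drop_AB_design = 1.102 - setpoint_B   # in. w.c. lost between A and B at design flow

for flow_fraction in (1.00, 0.50):
    drop_AB = drop_AB_design * flow_fraction ** 2      # square law scaling
    remote_control = setpoint_B + drop_AB              # control on a sensor at Point B
    fixed_discharge = setpoint_B + drop_AB_design      # control on fixed discharge pressure
    print(f"{flow_fraction:.0%} flow: remote-sensor control needs "
          f"{remote_control:.3f} in. w.c. at the fan; "
          f"fixed discharge control holds {fixed_discharge:.3f} in. w.c.")
```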

Good News and Bad News

The Good News

By moving the sensor used to control fan static pressure out into the duct system, we can maximize the energy savings in a variable-flow application. The same is true regarding the location of a sensor controlling the distribution pumps in a variable-flow pumping application. In fact, if you want a detailed look at that, you will find it in a string of blog posts I did a while back about applying the two-thirds rule to a pumping system, complete with pump curves and everything.

In any case, selecting the location for the remote sensor is a balancing act, with energy savings pushing the sensor toward the most hydraulically remote branch in the system and caution pushing the sensor back toward the fan, because the most hydraulically remote branch can be challenging to identify in a large system. And it can move around in the system as load conditions change. In fact, for a large system using the remote sensor strategy, it may be desirable to install several sensors and use low-signal-selection logic to dynamically choose the appropriate sensor, as sketched below.
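In case it helps, the sketch below shows one way the low-signal-selection logic might look. The sensor names, readings, and set points are made up purely for illustration; the point is simply that the fan is driven by whichever branch is furthest below its set point.

```python
# Sketch of low-signal selection for a system with several remote static
# pressure sensors: the most "starved" branch governs the fan.

def select_controlling_sensor(sensors):
    """sensors: dict of name -> (reading, set_point), both in in. w.c.
    Returns the name of the sensor whose reading is furthest below its set
    point; that branch should drive the fan speed or IGV command."""
    return min(sensors, key=lambda name: sensors[name][0] - sensors[name][1])

readings = {
    "east wing":   (0.70, 0.75),   # 0.05 in. w.c. below set point
    "west wing":   (0.78, 0.75),   # above set point
    "north riser": (0.60, 0.75),   # 0.15 in. w.c. below set point - governs
}
print("controlling sensor:", select_controlling_sensor(readings))
```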

The Bad News

The bad news is that moving a sensor out into a distribution system introduces a lag into the control process. For the system in the model, an air molecule leaving the fan discharge will take only about 1.5 seconds to reach the remote-sensor location, so the lag is likely not much of an issue. But for a large high-rise, the implications can be much more significant.

For example, for one of the systems in a 475-ft-tall high-rise that I did work in, on a time-rate-distance basis, an air molecule leaving the AHU on the top level would take 10 to 12 seconds to reach the terminal unit it served on the lower level. This slide from a presentation I do about the project, which includes a scale drawing of the duct system, will give you a sense of what I mean.

image

For the MCI Building, the distance to the remote sensor was in the range of 300 ft and the time-rate-distance lag that was introduced probably approached 8 to 10 seconds.
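The time-rate-distance estimate is nothing more than duct length divided by air velocity. The little sketch below reproduces the sort of numbers quoted above; the velocities (and the duct length I used for the model system) are assumptions chosen to land in the ranges mentioned, not measured values.

```python
# Back-of-the-envelope transport lag: duct length divided by air velocity.
# Velocities and the model-system duct length are assumed, not measured.

def transport_lag(length_ft, velocity_fpm):
    return length_ft / (velocity_fpm / 60.0)   # seconds

cases = [
    ("model system",          30, 1200),   # assumed short run at a modest velocity
    ("475 ft high-rise run", 475, 2500),   # assumed main-duct velocity
    ("MCI building run",     300, 2200),   # assumed main-duct velocity
]
for name, length, velocity in cases:
    print(f"{name}: roughly {transport_lag(length, velocity):.1f} s "
          f"for air to travel {length} ft at {velocity} fpm")
```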

Because of the dynamics of large systems, the lag we are discussing is much more complex than a simple time-rate-distance assessment would lead you to believe. I will discuss why this is in a subsequent blog post. But for now, the take-away is that lags can make control-process tuning challenging and generally are the enemy of tight control. This was the issue I failed to recognize with my initial fan static pressure control system design for the MCI Building and is the reason I blew up the duct.

In the next post, we will take a closer look at exactly what lags are in the general case. Once we establish that, I will do a post that looks at the lags I was dealing with in the MCI building, with a focus on what turns out to be a very complex transportation lag.

Finally, I will wind up the series by looking at how we solved the problem in the MCI building, a solution which is also applicable in the general case if you are dealing with a large, complex system.

David-Signature1_thumb1_thumb                                                        

PowerPoint-Generated-White_thumb2_thDavid Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/

[i] This is the generally accepted meaning. Interestingly enough, nobody really seems to know where the “two thirds” part came from. Chuck Dorgan did some research about that at one point and concluded that it evolved from a recommendation in a technical guide developed by one of the major control system vendors in the late 1970s to support their field technicians, who were running into the requirement at the time. Personal discussion with Chuck Dorgan, approximately September 20, 2010.

[ii] The Honeywell Gray Manual is an industry classic and was the textbook Honeywell used to train new engineering recruits after hiring them. Originally published in 1934, it went through 21 editions, with the latest I know of being 1997. While it is not current with regard to the control system technology in our buildings these days, the fundamental principles it discusses, like psychrometrics and different applications, still apply and are explained in layman’s terms, and I frequently recommend it to folks coming into the industry, especially if they do not have a technical background. You can download a copy at http://www.av8rdas.com/honeywell-gray-manual.html.

[iii] Incidentally, the duct configuration on the discharge of the fan in the model, and the related system effect, is abysmal; in class, I also use this system as an example of how not to configure the fan discharge and to discuss what you can do about it if you find it as an existing condition. For a longer discussion of system effect that uses an earlier version of this model, visit this blog post.

[iv] The square law has its roots in the Darcy-Weisbach equation, which assumes fully developed turbulent flow. ASHRAE research has demonstrated that for most applications, the square law is really the One Point Eight Five to One Point Eight Nine Law because there are places in our systems where we do not have fully developed turbulent flow. But for field work, preliminary estimates, and developing a general understanding of how things work, it is reasonable to use an exponent of 2 instead of 1.85 – 1.89. Plus, it’s easier to do the math on a slide rule that way (I still carry one around).

[v] 2019 ASHRAE Applications Handbook, Chapter 48, page 48.8.

[vi] ANSI/ASHRAE/IES Standard 90.1-2019, paragraph 6.5.3.2.2 VAV Static Pressure Sensor Location, page 235.

[vii] ASHRAE Guideline 36-2018, paragraph 5.1.14 Trim & Respond Set-Point Reset Logic.


Lags, the Two-Thirds Rule, and the Big Bang, Part 1

MCI Building 02This string of blog posts started out as an ASHRAE Engineers Notebook column. But they got too long for that format, so I decided to post them here. The story is an example of how I was mentored by a building and its systems and learned a number of lessons that I use to this day.

My mentor in the story is a building that was, at the time, known as the MCI building, on the riverfront in St. Louis, Missouri, where I lived and worked during that period of my life. I believe it is now called the Deloitte Building. In any case, it is the teal-colored building in the picture to the left. I will call it the MCI building as I write this because that is how it was known to me at the time.

The Situation

From 1984 to 1986, I had the privilege of working for Murphy Company under Tom Lillie in their Design/Build department as a combination field engineer, start-up engineer, and control system designer. I can’t remember what my card actually said I was doing, but basically, that is what I was doing.

One of the projects I worked on was the MCI Building. Although the industry was moving from pneumatic control to Direct Digital Control (DDC), the owner wanted to stick with pneumatics, primarily because of budget constraints, but also because of uncertainty about how well their operations staff would be able to deal with the new technology. Tom placed his trust in my abilities as a control-system designer and startup technician for the large (two nominal 90,000 cfm units in parallel) variable-air-volume (VAV) air-handling system that would serve the facility. My work on that system brought about a “significant emotional event,” a phrase coined by Jay Santos, PE, co-founder of Facility Dynamics Engineering.

The Story

Significant Emotional Events

A significant emotional event is an attention-grabbing, eye-opening incident that changes the way you think about and approach something in a very profound, fundamental way. This event clarified one of the principles that David St. Clair wrote about in “Controller Tuning and Control Loop Performance, a Primer,” namely, “IT ALL DEPENDS ON THE LAGS!”[i]

Setting Up the Significant Emotional Event

With Tom’s blessing, I applied the two-thirds rule to the duct static-pressure-control process for the Inlet Guide Vane (IGV) equipped MCI Building supply fans, using a high-quality pneumatic control system featuring two-pipe transmitters and Proportional plus Integral plus Derivative (PID) capable receiver controllers. At the time, the two-thirds rule was an emerging energy efficiency recommendation that advocated moving the duct static pressure control sensor from the fan discharge to a point out into the distribution system. But it was not yet a code or efficiency standard requirement.

I will discuss the rule in more detail in the next post in this series, and you can find an illustration of it applied to pumping systems in a previous string of posts. But suffice it to say that our engineering calculations indicated that, to accrue the two-thirds rule benefits, we should install the sensor controlling the supply fan static at the supply main on the second floor, using a set point of 0.75 in. w.c. In terms of linear feet of duct, this turned out to be about two-thirds of the distance from the 12th-floor penthouse location of the air-handling unit; just saying.

When the time came to bring the system on line for the first time, I stationed myself at the remote sensor so I could watch what was going on there. Ray Baltimore, a very gifted control system pipe fitter I was working with, was up in the penthouse 12 floors above me, coordinating things there and monitoring the process from that perspective.

Upon initial startup, the discharge-static-pressure safety switch (3.5-in.-w.c. set point, 4.0-in.-w.c. duct-pressure class) tripped, even though the pressure at the sensor location that I was monitoring never reached the targeted set point of 0.75 in. w.c.

Believing we were dealing with a control response problem, we narrowed the throttling range of the controller and restarted the system. After the discharge-static-pressure safety switch tripped again, and after consulting the specifications to verify the duct-pressure class, we increased the safety switch setting to 3.75 in. w.c., narrowed the throttling range of the controller further, and restarted the system.

Not Quite Connecting the Dots

Following yet another safety trip, restrictors were added to the pneumatic tubing serving the IGVs to slow them down and allow downstream pressure to build without exceeding the discharge safety set point.

When we restarted the system, the discharge-static-pressure safety switch did not trip. But after 10 minutes, the actuators had not moved far enough to get the system to set point because of the large actuator volume and the reduction in flow imposed by the restrictors.

After experimenting with several restrictors, we concluded that we had simply traded a safety trip problem for an unresponsive system problem. So we removed the restrictors and increased the safety setting to 4.0 in. w.c. Upon restart of the system, the discharge-static-pressure safety switch tripped once again.

The Big Bang: A Significant Emotional Event

Assuming there was a tolerance on the duct static-pressure class rating, we increased the discharge-static-pressure-safety-switch set point to 4.25 in. w.c.

That’s when it happened: the big bang. Ray, always the humorist and trying to put a positive spin on things, radioed …

Well, at least we know the duct pressure class is right.

Sadly, I had just performed an (unintentional) destructive test verifying the duct system pressure class. While destructive testing may have its place for verifying that things like airbags in a car will work in an actual crash, it is not the approach recommended by SMACNA for verifying duct pressure class.

As the fan spun down, David St. Clair’s words hit home. And it was also apparent why “It all depends on the lags” was in all capital letters, with an exclamation point, in an extra-large font, in a highlighted box at the end of the lags chapter in his book.

Up until then, I had not appreciated what he was saying at all. Now, I fully appreciated it and had added my very own exclamation point.

Solving the Immediate Problem

I desperately wanted to capture the savings associated with using remote duct pressure instead of fan discharge pressure to control the supply fans. But to maintain schedule, I concluded that I would need to move the transmitter to the fan discharge and control the system based on that for the time being.

Ray and I made plans to gather the necessary hardware and make the change. The tinners were already putting the blown duct joint back together and figured they would have the system ready to go again before they went home for the weekend. But between the changes that Ray and I would need to make to the control piping and the fact that some of the parts we were having air-freighted in would not arrive until Saturday, it looked like we would be working the weekend.

Another Mentoring Story

When I called Tom to tell him the bad news and what our plans were, he kind of chuckled and said something like …

Well Dave, we aren’t the first people to do this two thirds rule thing, so there must be a way to make it work and I bet you guys will figure it out. And I’m sure your temporary plan will work until then.

But you and Ray have been working hard and you have that new little baby sitting at home.  God put us on this earth to do certain things and it wasn’t to constantly be messing around with buildings.   So go home and take a break.  I’ll meet you on site Monday to brainstorm a solution and help get the temporary control plan working. 

Pretty cool; like I said, I can’t remember the exact quote, but I won’t ever forget the intent and the message about paying attention to what is important in life.

The Temporary Fix

After moving the sensor to the fan discharge, we were able to tune the control loop to allow the system to start and achieve stable operation at the targeted 3.00 in. w.c. set point without a safety trip.

One obvious solution to our problem was simply to let go of the concept of controlling the system based on pressure at a remote point. But if we did that, we would not be delivering the efficiency we promised our client.

And in the bigger picture, having both been mentored by Bill Coad,[ii] Tom and I wanted to make our system as efficient as possible. Thus, my quest to understand the reason I could not get the system to work using a remote sensor continued.

Not Every System Will React This Way

(Thank Goodness)

I want to emphasize that I am not saying this problem will occur in every VAV system out there, or that controlling static pressure based solely on a remote sensor in the duct system won’t work. Obviously, it works in many situations.

But in this particular case, due to the dynamics of the MCI Building system, the duct-pressure-class limit was exceeded at the fan discharge before the desired operating pressure was reached at the remote-transmitter location; i.e., our problem was related to a lag. This caused me to realize that the dynamics of some systems may require a different approach for achieving the benefits of duct static pressure control based on a remote pressure in the system.
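To make that mechanism a bit more concrete, the toy Python simulation below puts a PI loop on a remote static pressure sensor that only “sees” the fan discharge pressure after a dead time. It is emphatically not a model of the actual MCI system; every gain, time constant, and pressure in it is an assumption chosen purely for illustration. Even so, adding a 10 second transport lag to an otherwise well-behaved loop is enough to push the simulated discharge pressure past a 4.0 in. w.c. pressure class before the remote sensor ever gets close to its set point, which is essentially what the duct and I experienced.

```python
# Toy simulation of a remote-sensor static pressure loop with a transport lag.
# All parameters are assumptions for illustration, not values from the MCI system.

def run(dead_time_s, sim_time_s=120.0, dt=0.5):
    setpoint = 0.75      # in. w.c. targeted at the remote sensor (assumed)
    gain = 6.0           # in. w.c. at the discharge with the IGVs fully open (assumed)
    tau = 5.0            # s, discharge pressure response time constant (assumed)
    atten = 0.6          # remote pressure as a fraction of discharge pressure (assumed)
    kp, ki = 0.8, 0.05   # PI gains per in. w.c. of error (assumed)

    delay_steps = int(dead_time_s / dt)
    history = [0.0] * (delay_steps + 1)   # past discharge pressures (the transport lag)
    p_disch, integral, peak = 0.0, 0.0, 0.0
    first_over_4 = None

    for i in range(int(sim_time_s / dt)):
        p_remote = atten * history[0]                        # delayed signal at the sensor
        error = setpoint - p_remote
        integral += error * dt
        u = min(max(kp * error + ki * integral, 0.0), 1.0)   # IGV command, 0..1
        p_disch += dt / tau * (gain * u - p_disch)           # first-order pressure response
        history = history[1:] + [p_disch]
        peak = max(peak, p_disch)
        if first_over_4 is None and p_disch > 4.0:
            first_over_4 = (i * dt, p_remote)
    return peak, first_over_4

for lag in (0.0, 10.0):
    peak, over = run(lag)
    if over:
        note = (f"exceeded 4.0 in. w.c. at t = {over[0]:.0f} s while the remote sensor "
                f"read only {over[1]:.2f} in. w.c.")
    else:
        note = "never exceeded the 4.0 in. w.c. duct pressure class"
    print(f"dead time {lag:4.1f} s: peak discharge pressure {peak:.2f} in. w.c. ({note})")
```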

Coming up On Lags, the Two-Thirds Rule, and the Big Bang

In the next post, I will look at exactly why using a remote static pressure sensor to control a VAV system will save energy compared to simply controlling for discharge static pressure.

In a third installment, I will take a closer look at exactly what lags are in the general case.

In the fourth installment of the series, I will look at the lags I was dealing with in the MCI building, with a focus on what turns out to be a very complex transportation lag. I believe there are also reasons aside from the system lag dynamic that result in this problem occurring on some projects but not all, and I will highlight them in this installment.

Finally, in Part 5 of this series, I will look at how we solved the problem in the MCI building, a solution which is also applicable in the general case if you are dealing with a large, complex system.

In closing, I want to thank the Engineering Notebook team for their initial feedback on the article, which helped me focus it and address some of the technical questions it brings up. And I also want to thank Michael Ivanovich and Scott Arnold of AMCA, who jumped in and helped organize the original article into the more manageable string of five articles that has evolved into this string of blog posts.

David-Signature1_thumb1_thumb                                                        

PowerPoint-Generated-White_thumb2_thDavid Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/


[i] This is still available for purchase at www.straightlinecontrol.com/index.html and is well worth the money if you are trying to understand PID control loops in practical terms.

[ii] Bill Coad was the Vice President of McClure Engineering when I interviewed there in 1976. Bill wrote an article for the ASHRAE Journal titled “Energy Conservation is an Ethic” (ASHRAE Journal, vol. 42, no. 7, July 2000), but he was thinking that way long before he wrote the article. That philosophy, conveyed to me during my interview in 1976, was one of the things that caused me to want to get into this field. You could say it changed my life. You can find a copy of it on the ASHRAE website at https://www.techstreet.com/ashrae/standards/energy-conservation-is-an-ethic?product_id=1719726#jumps

In addition, we have a page on our website with a lot of the other things Bill wrote, which are still applicable today since he dealt in fundamental physics. http://www.av8rdas.com/bill-coads-writings.html.
