Developing a System Diagram in the Field

Well, it’s been a while since I have posted (I say that a lot, I realize).  But I have been using my free time to learn an application called 3DVista to create ways for folks to learn commissioning skills.[i]  So, I thought I would make one of my exercises public so folks could try it out and see how it works.

To try it out, follow this link.  When you get there, you should see something like this in your web browser.

image

The purpose of this blog post is to let you know about the exercise, provide the link to it and provide a detailed answer showing the system diagram and how it can be used to understand how the system might work.

And I would be remiss if I did not say a big “Thank you” to one of our EBCx students at the Pacific Energy Center – who shall remain nameless to keep this sanitized – who, as chief engineer, along with her Owner, allowed us to use her project site for a field exercise, from which the content of this post and the exercise is created.

Contents

The links below will jump you to the indicated topic.  A “Back to Contents” link at the end of each section will bring you back here.

Using the Model

When you open the model, there will be a tabbed form with the general instructions in it.  The first time you use it, it’s probably worth taking a few minutes to look at the instructions and the symbols.  You could even cheat and look at the answer up front;  your choice.  The goal is to educate, not evaluate.  So maybe looking at the answer will help guide your effort to discovering it on your own.

But you may also find it fun to see what you come up with first, then look at the answer.  Whenever you do that, you will discover that in addition to providing a diagram, it references this blog post for the details of how to apply the system diagram to understand how the system works.

Note that there are several buttons in the lower left corner that allow you to:

  • Toggle the sound on and off (trying to create a real world experience here, so pretend you put in ear plugs when you toggle it), and
  • Toggle the floor plan on and off;  if you get stuck in a view, just toggle it back on and hit a red dot to jump back in, and
  • Toggle the instructions on and off (in case after thoroughly reviewing them before you start you end up having a question), and
  • Show your current score.

Once you pick a view, you can pan around in it by holding down your left mouse button and moving your mouse.  Or, if you just move the mouse without holding the button down, you will discover there are hot-spots that pop up when you move over them.  If you click on a hot spot, a number of things might happen, including changing views or a question window opening to allow you to score a point.  All of this is explained in more detail in the instructions and symbols tab in the instruction window.

<Back to Table of Contents>

About the Score

As I mentioned, the purpose of the exercise is education, not evaluation.  So, you are allowed to repeat questions if you get them wrong.  That means if you are persistent, you can get a perfect score. 

The concept here is based on a conversation I had with some students after I had given them a quiz and then the answer key.  They said they learned as much or more from the answer key because it told them why they were wrong or right.  

In an effort to take their observation to heart, I (think) I have the exercise set up so that when you pick an answer in a question window, generally speaking, you will see if your answer was wrong or right (the answer designator should turn green if you are right and red if you are wrong) and then an information window will open explaining why you were right or wrong.

Armed with your new found knowledge, you should be able to try the question again and eventually find the correct answer, and thus, achieve a perfect score.

<Back to Table of Contents>

Mobile Device Considerations

In theory, the application will work in mobile devices like iPhones, iPads and Android phones and tablets.   But getting that to happen is surprisingly complex because of screen aspect ratios, resolutions, and how you interface with the screen (mouse vs. touch for instance).  So, while I am pretty confident that things will work pretty well on a PC or Mac, I am still testing it on my iPad and iPhone and there may be some bugs that show up that I will need to figure out.   If you discover one, feel free to reach out to me and let me know so I can address the issue.

<Back to Table of Contents>

Learning to Draw System Diagrams

This post is about how to apply a system diagram rather than how to draw one.  If you are wondering what a system diagram is and how to draw one, here are some resources that should help you out with that.

<Back to Table of Contents>

The Answer

I am going to use images from the slides I use in class when we do the exercise to illustrate this.  Bottom line, if you trace out the piping in the model, you should end up with something that looks like this.

System Diagram 2

The answer is not an absolute, meaning your diagram may look a bit different.  What matters is that if you put your finger on the pipe and followed my diagram and compared it to yours, the order of connection would be the same. 

In other words, if you started at the return from the loads:

  • You first come to a tee, one side of which would go to the mixing valve, the other side of which would go to another tee, and
  • If you followed the line to the other tee, one side of it would head off to another tee and the boiler, and the other side would head off to the distribution pumps,
  • Etc.

<Back to Table of Contents>

Adding a Few Loads

I added a few loads and flow rates, temperatures, etc. to the system diagram to allow me to explain how the system works.  On the site, we did not really get to see the loads, so in the context of the specific system, what I show is an assumption.  But in the context of how a system like this would work, it is a reasonable assumption.

image

The system configuration is called variable flow, primary/secondary.  For more information regarding how this system configuration works as well as other configurations like constant flow and variable flow primary only, visit this location – http://tinyurl.com/VariabeFlow.

<Back to Table of Contents>

Working With Nodes

Tees are “nodes” in the system where flows converge or diverge.  Plenums are nodes in air systems.

There are two fundamental principles that apply to a node:

  1. Conservation of mass;  this is reflected by the flows.  More specifically, the sum of the flows entering a node will equal the sum of the flows leaving the node.
  2. Conservation of energy;  this is reflected by the temperatures entering and leaving the node relative to the mass associated with the temperature.

From the steady flow energy equation, we can express this in a simplified form as follows:

Conservation of Mass and Energy at a ..

If you want to see the full derivation of that relationship, you will find it at this link, which derives it for a mixed air plenum.  But the concept is virtually identical for a tee in a piping network.
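
If you want to play with the numbers yourself, here is a little Python sketch that applies the mass and energy balance to a node.  The flows and temperatures are made-up values for illustration, not data from the site.

```python
# Conservation of mass and energy at a node (a tee in a piping network
# or a plenum in an air system).  For water at roughly constant density,
# volumetric flows (gpm) can stand in for mass flows.

def node_mix(flows_gpm, temps_f):
    """Return (total flow, mixed temperature) leaving a node.

    Mass balance:   sum of flows in = flow out
    Energy balance: sum of (flow * T) in = (flow out) * T_mix
    """
    total = sum(flows_gpm)
    t_mix = sum(f * t for f, t in zip(flows_gpm, temps_f)) / total
    return total, t_mix

# Example: 30 gpm of 135°F return water blends with 10 gpm of 175°F
# boiler water at a tee.
flow, temp = node_mix([30, 10], [135, 175])
print(flow, temp)  # 40 gpm at 145.0°F
```

Notice that the mixed temperature is the flow-weighted average of the entering temperatures, which is the simplified steady flow energy equation at work.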

<Back to Table of Contents>

The Self Contained Thermostatic Valve

One of the interesting features of this system is the self-contained thermostatic valve used to regulate the entering water temperature to the boiler, illustrated in the image to the left.   The boiler is a non-condensing boiler; thus, it needs to operate with an entering water temperature that is above the dew point temperature of the flue gases (in the range of 130-140°F for a natural gas fired process).

The valve is arranged so that it will recirculate the water leaving the boiler to keep the entering water temperature above its set point.   This ensures that the boiler will quickly warm up from a cold start, minimizing the condensation that will occur during that part of the operating cycle.

Once the boiler is warmed up, the valve blends return water from the distribution loop into the boiler loop, warming up the distribution loop and meeting the load while at the same time, protecting the boiler from an entering water temperature that would cause condensation under normal operating conditions.

The operating principle behind this particular valve involves the change in volume that occurs when wax changes phase from a solid to a liquid.   If you have ever made candles, you probably have noticed how, as the candle cools, the top surface of the candle becomes concave due to this phenomenon.  You can control the temperature at which the wax changes phase based on the type of wax you are using and additives that are mixed with it.   The manufacturer of this valve has a nice video that illustrates how the valve works located on their website at this link (the QR code above should also take you there).

Bottom line, if the water temperatures and/or the valve body temperature are below the valve’s set point (135°F in this example), then it is as if the boiler loop is a totally separate loop from the distribution loop.
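
To help visualize that behavior, here is a simplified sketch of the valve logic in Python.  To be clear, this is my illustrative model, not the manufacturer’s actual valve characteristic; the 10°F proportional band is an assumption I made up for the example.

```python
# Simplified sketch of the self-contained thermostatic valve behavior.
# Below set point: Port B stays closed and the boiler loop fully
# recirculates (warm-up mode, loops isolated).
# Above set point: Port A throttles closed and Port B opens over an
# assumed proportional band, blending distribution return water into
# the boiler loop.

def port_b_fraction(valve_body_temp_f, set_point_f=135.0, band_f=10.0):
    """Fraction of flow admitted from the distribution loop (0 to 1)."""
    if valve_body_temp_f <= set_point_f:
        return 0.0  # cold start: boiler loop isolated
    return min(1.0, (valve_body_temp_f - set_point_f) / band_f)

print(port_b_fraction(120))  # 0.0 - cold start, loops isolated
print(port_b_fraction(140))  # 0.5 - partially blending
print(port_b_fraction(150))  # 1.0 - fully open to the distribution return
```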

image

<Back to Table of Contents>

My Assumptions For the Purpose of the Example

The temperatures used in the slides that follow are approximations based on engineering experience and judgment vs. the result of modeling the specific dynamics of the coils and piping network.  The intent is to illustrate the general dynamics of the system as it moves from a cold start to full load and then part load.

The assumptions behind the example include:

  1. The piping network is relatively short and well insulated.
  2. As a result of item 1, once the system is up to temperature, the parasitic losses from the system are minimal, thus the temperature reaching the loads will be virtually the same as the temperature leaving the plant and vice versa on the return side.

<Back to Table of Contents>

At Start-up

At start-up, since the boiler loop is isolated from the distribution loop by the thermostatic valve, the boiler pump only circulates water through the boiler, as illustrated below.

image

Since the loop is cold relative to the boiler’s design set point of 160°F, the boiler will start and go to full fire, as shown in this next illustration.

image

Notice how the thermal mass of the boiler loop (the pipe, valves, pump, etc.) absorbs some of the energy added by the boiler.  In other words, even though the boiler raises the water temperature by 40°F when firing at full capacity, the 120°F water leaving the boiler is cooled down as it circulates through the boiler loop.  As a result, the loop temperature will tend to gradually rise vs. going up in 40°F incremental steps.

Given the small volume in the boiler loop, it will warm up fairly quickly. Meanwhile, the water in the distribution loop will remain at about the same temperature that it was at when the system was started if the AHUs have not been started yet. This is because the thermostatic valve is isolating the boiler loop from the distribution loop. You could even delay the start of the distribution pumps until the boiler loop was up to temperature to save a bit of pump energy.
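
If you want a feel for how quickly the boiler loop warms up, here is a rough lumped-mass calculation in Python.  All of the numbers (loop mass, firing rate, starting temperature) are assumptions I made up for illustration, not data from the actual plant.

```python
# Rough lumped-mass warm-up of the boiler loop: a fixed firing rate
# heats the water and piping mass, so the loop temperature ramps up
# gradually instead of jumping in 40°F steps.

loop_mass_lb = 2000.0          # water + steel in the boiler loop (assumed)
cp_btu_per_lb_f = 1.0          # treat the lump as water-equivalent
boiler_input_btu_hr = 400000.0 # assumed full-fire input to the water
dt_hr = 1.0 / 60.0             # one-minute time steps

temp_f = 60.0                  # assumed cold-start temperature
minutes = 0
while temp_f < 135.0:          # thermostatic valve set point
    temp_f += boiler_input_btu_hr * dt_hr / (loop_mass_lb * cp_btu_per_lb_f)
    minutes += 1

print(minutes, round(temp_f, 1))  # 23 136.7
```

With these assumed numbers, the loop climbs roughly 3.3°F per minute and reaches the valve set point in a bit over twenty minutes; the thermal mass of the loop is what smooths out the 40°F full-fire temperature rise.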

But eventually (and fairly quickly due to the relatively small amount of mass in the boiler loop), the boiler loop temperature will have warmed the thermostatic valve body up to its set point of 135°F as shown below.

image

<Back to Table of Contents>

Warming Up the Distribution Loop

When the valve body reaches its set point, the wax inside the internal actuator changes state from a solid to a liquid. The change in state is also accompanied by a change in the volume of the wax, as discussed previously.

The change in volume of the wax as it melts is used to move the actuating mechanism in the valve, and it begins to close off Port A and open up Port B. As a result, some cool water is blended into the boiler loop from the distribution loop and warm water is sent to the distribution loop, which begins to warm up, as illustrated below.

image

Note that the temperature of the water drops as it leaves the boiler loop and is pumped through the distribution loop because the water is warming up the previously cold piping.

As the piping system warms up, the temperature loss associated with warming the piping up drops off, and warmer water reaches the loads. The warmer water allows the coils to transfer more heat to the air stream that they serve, which drops their leaving water temperature. But the trend is for the system return water temperature to rise and the warmer return water allows more and more flow from the distribution loop to enter the boiler loop while still holding the 135°F thermostatic valve set point as illustrated below.

image

Assuming the loads do not exceed the boiler capacity, then eventually, the return water temperature will rise to 135°F. This will take longer on a day at or near design conditions vs. a day at less than design conditions.

When this happens, all of the water circulating in the distribution loop will be directed into the boiler loop because the thermostatic valve will completely shut off Port A and fully open Port B as shown below.

image

Assuming the loads do not exceed the boiler capacity, the boiler LWT will continue to rise and reach its design set point. And the piping system will warm up and the design supply water temperature will be delivered to the loads (see assumptions above) as illustrated in the next figure.

image

<Back to Table of Contents>

Operating at Less than Full Load

Up until this point, the boiler operated at full capacity because the loads were not satisfied and their control valves were positioned for full flow through the coil.   Any capacity that the boiler had that was in excess of the current load condition was used to warm up the piping system and the mass in the spaces served by the air handling systems.

Once the distribution system and the zones served by the AHUs have been brought up to set point, on any day other than the design day or a day in excess of design, the demand from the load will be less than the boiler’s rated capacity.

This will cause the control valves on the loads to begin to reduce flow through the heating coil (assuming things are working properly). 

  • For the load with the three-way valve, (assuming the balance valve in the bypass is properly adjusted), the flow to the load will remain relatively constant, but the return water temperature will tend to rise as warm supply water is bypassed around the coil and blended with the cooler water coming off the coil.
  • For the load with the two-way valve, the flow to the load will drop off and the temperature drop across the load will tend to hold constant.[ii]
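
The contrast between the two valve types can be sketched numerically.  The supply temperature, design flow, and design temperature drop below are assumed values for illustration only.

```python
# Part-load return water temperature behavior for the two valve types.
# Assumed: 160°F supply, 20 gpm design flow, 20°F design delta-T.

supply_f = 160.0
design_flow_gpm = 20.0
design_delta_t_f = 20.0

def three_way_return(load_fraction):
    """Constant flow: unused supply water bypasses the coil and blends
    with the coil leaving water, so return temperature rises at part load."""
    coil_flow = design_flow_gpm * load_fraction
    bypass_flow = design_flow_gpm - coil_flow
    coil_leaving = supply_f - design_delta_t_f
    return (coil_flow * coil_leaving + bypass_flow * supply_f) / design_flow_gpm

def two_way_return(load_fraction):
    """Variable flow: flow drops with load while delta-T holds roughly
    constant (see footnote [ii] for when that assumption breaks down)."""
    return supply_f - design_delta_t_f

print(three_way_return(1.0), three_way_return(0.5))  # 140.0 150.0
print(two_way_return(0.5))                           # 140.0
```

At half load, the three-way valve load returns 150°F water while the two-way valve load still returns 140°F at half the flow, which is exactly the behavior described in the bullets above.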

A reduction in flow below the design value will reverse flow in the decoupling bypass (the pipe shared by the boiler loop and the distribution loop) as shown below.  This, in and of itself, would tend to raise the boiler entering water temperature.

The increase in return water temperature caused by the load on this particular system that is served by a three-way valve will further contribute to the increase in boiler entering water temperature, all as shown in the following figure.

image

As a result of the increased return water temperature, if operating at full fire, the boiler leaving water temperature will start to rise above set point.

<Back to Contents>

Turn Down Capability and Part Load Operation

If things are working properly, the boiler will reduce its firing rate (i.e. unload) to the extent possible given the nature of its control system.  For example, boilers with a modulating gas valve will modulate the flow of gas and combustion air in an effort to match the capacity they are producing to the current load condition.

Boilers that have a number of on-off stages will turn off one or more stages.  But, if the capacity reduction associated with turning off a stage exceeds the reduction required to match the load, the system supply temperatures and flows will tend to move in a direction that will cause the boiler to stage back up again.

The ability of a boiler (or any prime mover) to modulate its capacity to match the load is termed “turn-down capability”.   A piece of equipment with a 10:1 turn-down capability can generally reduce its capacity to 10% of its rated capacity without cycling.  A piece of equipment with a 2:1 turn-down can only reduce its capacity to approximately 50% of its rated capacity before it starts to cycle.
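
In code form, the turn-down math looks like this (the rated capacity and load are illustrative numbers only):

```python
# Turn-down sketch: a boiler can modulate down to rated_capacity / turndown;
# below that minimum firing rate it has to cycle on and off.

def min_firing_rate(rated_btu_hr, turndown):
    return rated_btu_hr / turndown

def will_cycle(load_btu_hr, rated_btu_hr, turndown):
    return load_btu_hr < min_firing_rate(rated_btu_hr, turndown)

rated = 400000.0  # assumed rated capacity, Btu/hr
print(will_cycle(50000, rated, 10))  # False: a 10:1 unit can hold 40,000 Btu/hr
print(will_cycle(50000, rated, 2))   # True: a 2:1 unit bottoms out at 200,000 Btu/hr
```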

Cycling is undesirable for a number of reasons.

  1. Cycling causes unstable supply temperatures and flow rates, which can ripple out and have an adverse impact on the loads.   Frequently, the thermal inertia of the system and loads mitigate this to some extent.  But for loads with tight environmental tolerances, this can be an issue.
  2. Cycling causes wear and tear on the equipment.  For example, each time a motor starter has to break the motor current at shut down, the contacts are worn.
  3. Cycling can cause efficiency losses.  The purge cycle associated with a boiler cycle is an example of this and is explored in depth in a couple of previous blog posts, starting with this one.

At some point, even for a modulating boiler, the load will reach a point where the boiler can no longer reduce capacity to the point where it matches the load and the control system will begin to cycle the boiler. 

<Back to Contents>

Conclusion

Well, that’s it for now.  Hopefully, this has given you some practice at developing a system diagram and then shown you how you can use techniques like node analysis to apply the system diagram as a tool for understanding how a system performs and possible opportunities to improve things.

Happy holidays everyone.


David Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/

[i]    I was turned on to this by my friends/colleagues at CERL who have been experimenting with using it as a way to bring someone into a virtual mechanical room by leveraging the 360° and panorama capabilities of a GoPro Hero camera. It is a pretty cool application and I have figured out not only how to import video from real world locations, but also how to create video from my SketchUp models. (Back to Content)

[ii]    The constant temperature drop characteristic of a variable flow load is somewhat dependent on the coil entering conditions staying fairly consistent.  For coils that see a significant difference between the entering conditions on the design day vs. the entering conditions at other times, this constant temperature drop characteristic will start to break down at low load conditions.   Coils handling 100% outdoor air are an example. 

The reason this happens is that the difference between the entering air condition and the coil water temperature that drives the heat transfer has changed with the change in coil entering conditions.  You can get a sense of this by “playing” with a coil modeling program like the free programs offered by Greenheck or USA Coil and/or by getting out in the field and watching how a system works over time.

This phenomenon breaks the design assumption often used for variable flow central plants; that being that flow is directly proportional to the load.   As a result, chilled water plants will experience “low delta t syndrome” simply because of how coils work.  Steve Taylor does an excellent job of describing this phenomenon and how to deal with it in

Degrading Chilled Water Plant Delta-T: Causes and Mitigation.

Hot water plants can experience a similar thing but tend to be more forgiving of it, at least in my experience.  (Back to Content)


Training Opportunities

This is just a short post to let you know that I have updated the Training Opportunities page on the Commissioning Resources website so that it is current, including links to the classes I help support, many of which are offered free of charge.

I have to confess to neglecting that page once COVID hit because it totally changed the way we deliver the content.  But I would like to think that the change was a positive one because we learned how to deliver the content for many of the venues via webinars, using SketchUp models to provide virtual field experiences (complete with sound if you unmute/turn up the volume).

As a result, you can attend some of the classes from anywhere in the world, although we still have retained several face to face field sessions for the year long existing building commissioning workshop I support at the Pacific Energy Center.

Part of the transition has been to move towards the “flip the classroom” approach.   The idea is to provide self study resources that folks can access ahead of time so they can pick up the basics on a topic via a series of video self study modules.  The resources you will find on our On Demand Training page are examples of some of this content.

This allows us to devote the class to questions and answers on the fundamentals behind the class and interactive exercises that apply the concepts.  We have also started using break-out sessions in some of the classes to facilitate the involvement of the attendees with the exercises.

Bottom line, if you find the content on this blog to be interesting and helpful and are looking for classes with additional, similar content, you may want to check out the Training Opportunities page.   Maybe I’ll see you in class some day.


David Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/


PID Loop Tuning and Aliasing

In the past, I have discussed the topic of aliasing; i.e. how looking at trend data where the sampling rate is too slow to capture what is really going on can mislead you.

Yesterday, in the course of a class discussion about loop tuning, one of the attendees offered to use one of the control loops in their facility for a science experiment so we could try tuning the loop in real time.  The loop appeared to be fairly stable,  as illustrated below …

Initial Conditions

… but was not at set point, which is what a proportional only control loop might look like due to the proportional error that is inherent in a proportional only control process.[i] 

Specifically, the loop set point was 62.5°F, but the trend line indicated that it was running in the 58°F range, which implied the proportional error was in the range of 4.5°F.  But it turned out that we were being fooled by aliasing and I thought I would share some of what we learned and observed in this blog post.
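
As a quick aside, the proportional error math works out like this.  The gain and output values below are made-up numbers chosen so the offset mirrors the roughly 4.5°F gap the trend appeared to show; they are not the actual loop parameters.

```python
# A proportional-only loop settles with a steady-state offset: producing
# the output the load requires forces the loop to carry a non-zero error.

def steady_state_error(required_output_pct, kp_pct_per_deg_f):
    """Offset (°F) a P-only loop must hold to sustain a given output."""
    return required_output_pct / kp_pct_per_deg_f

# If holding the load takes 45% output and the gain is 10 %/°F, the
# control point settles 4.5°F away from set point - e.g. a 62.5°F set
# point with the process apparently running near 58°F.
print(steady_state_error(45, 10))  # 4.5
```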

Contents

The links below will jump you around in the blog post.  The Return to Contents link at the end of each section will bring you back here.

The Typical PID Loop Response Pattern

The classic signature of a well tuned PI or PID control process is a waveform that is sometimes referred to as a “quarter decay ratio”, as illustrated below.

Quarter Decay

To discover if your process displays this type of response, you need to upset it, which can be done in any number of ways, the most common being to change the set point and observe the result. [ii] 
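
If you want to see what a quarter decay ratio looks like numerically, here is a little sketch.  The period and first peak are arbitrary illustrative values, not measurements from a real loop.

```python
# Quarter-decay-ratio sketch: each overshoot peak is one quarter of the
# previous peak, the classic signature of a reasonably tuned PI/PID loop.
import math

def quarter_decay(t_min, period_min=4.0, first_peak_f=8.0):
    """Deviation from set point t_min minutes after an upset."""
    decay = math.log(4) / period_min  # peak shrinks 4x every period
    return (first_peak_f * math.exp(-decay * t_min)
            * math.cos(2 * math.pi * t_min / period_min))

p0 = quarter_decay(0)  # first peak
p1 = quarter_decay(4)  # one period later: a quarter of the first peak
print(round(p0, 2), round(p1, 2))  # 8.0 2.0
```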

Depending on the specifics of the process, the time frame associated with the first few cycles of the wave form usually will be in terms of minutes or possibly seconds.  That means that if you want to observe it, you need to be sampling the process at least twice as fast as the frequency of the disturbance, which is something that Mr. Nyquist identified for us.

Often, this means sampling data several times a minute, perhaps even as fast as every 5 seconds or less.   For some commercial building control systems, this is quite possible, especially if you are not creating a lot of network traffic by archiving the trend data. But for many systems, especially legacy control systems with relatively slow network speeds, this can be challenging or even impossible because the high traffic rates created will slow down or even crash the network.

This is an example of a situation where a data logger can be quite helpful because most current technology data loggers can sample at rates as fast as once a second, sometimes even faster as illustrated in the screenshot below where I am setting up an Onset MX 1101 Bluetooth logger.

Logger Deployment

This is also one of the reasons why pre-DDC pneumatic PID controllers typically included a chart recorder as shown in this picture of a process control room taken in the early 1940’s.[iii]

PID Control Room

In addition to documenting the operation of the process over time, the chart recorder allowed the operating team to observe the response of the control process in real time when they were tuning the control loops.

[Return to Contents]

Taking a Closer Look at Our Control Loop

If you look closely at the window with the PID logic block in it in the previous illustration, you will notice that it says the input to the process is 61.2°F, not the 58°F-ish value shown by the yellow trend line.

Loop Details

That is a clue that aliasing is going on. In other words, the logic block window says that it knows something we don’t know via our observation of the trend graph.

When I noticed this, I asked our brave volunteer to see what the sampling rate was for the trend.

It turned out that it was once every thirty minutes.  So the evidence now suggested that even though at the 30 minute point, the input value might be in the range of 58°F or so, in between those points in time, the value was somewhat different; 61.2°F for instance at the time we opened up the logic block window.

Notice also that the loop has integral gain applied to it.  That means that it is not a proportional only loop.

In turn, that means the apparent difference between the set point and control point cannot be attributed to proportional error, given that the whole point of the integral function is to eliminate proportional error.  This was another clue that there was “more going on than met the eye” when viewing the trend graph.

[Return to Contents]

“Sampling Rate” and “Too Fast” are Mutually Exclusive Terms

The line above is one of my loop tuning and trend analysis rules of thumb.  Frequently, when you are doing diagnostics and commissioning, one of the reasons you are looking at trend data is to figure out what is going on, and one of the things that could be going on is that the control processes are unstable.  But since you have no idea what the frequency of the disturbance might be – if it exists – you can’t really apply the Nyquist Theorem.

My “default” sampling rate for trend analysis is once a minute because it can be handled by most systems without crashing the network if you use it judiciously, perhaps by applying it to one or two systems at a time.  And while it may not be fast enough to fully capture the issue at hand (for instance, something with a cycle time of once every two minutes or less), it will probably pick up some measure of that kind of instability.  Having observed that, I can either elect to increase the sampling frequency if possible or deploy a data logger.

But for loop tuning, you really do need an accurate picture of the wave form, especially if you are going to use the open loop method.[iv]  So I typically try to use a sampling rate in the 1-5 second range if possible if I am tuning a control loop.

[Return to Contents]

Increasing our Sampling Rate

In this instance the system was in fact capable of a 5 second or better sampling frequency. Since we were planning to tune the loop eventually, and since that same fast sampling frequency might reveal what was actually going on in between the 30 minute data points, our brave volunteer went ahead and set the trend to use a 5 second sample rate.[v]  Here is what that revealed.

Initial 5 second pattern

This is quite different from the impression we would have had if we had retained the 30 minute sampling rate as shown below.

Initial 30 minute pattern

At the time we increased the sampling rate, our brave volunteer had already made some tweaks to the loop tuning parameters based on what he heard me saying in class. 

So the pattern you are seeing above reflects the system’s reaction to those changes and is not representative of the pattern in the chart at the beginning of the post where the temperature seemed to be floating around 58-59°F. 

Fear not, we actually put the system back to the way it was and captured that data.  I will share that in just a bit.  But first, I wanted to show you a couple of other things that we observed.

In making his initial adjustments, he eliminated the integral gain and tweaked the proportional band to see if we could identify the natural frequency[vi] of the control process, which is the first step in the closed loop tuning technique.  Thus, proportional error should be “back in the picture”, and the trends at both sampling rates suggest there is proportional error.

But the 30 minute sample would lead you to think that you had not found the natural frequency of the system, while the 5 second sampling rate reveals that the system is oscillating.
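
For reference, once you have found the gain that produces a sustained oscillation (the ultimate gain) and measured the natural period, the classic Ziegler-Nichols closed-loop rules turn those two numbers into starting tuning parameters.  The formulas below are the standard published ones; the example gain and period values are made up.

```python
# Classic Ziegler-Nichols closed-loop tuning rules: Ku is the P-only
# gain that produces a sustained oscillation, Pu is the period of that
# oscillation (the natural period of the process).

def ziegler_nichols(ku, pu_minutes, mode="PI"):
    if mode == "P":
        return {"Kp": 0.5 * ku}
    if mode == "PI":
        return {"Kp": 0.45 * ku, "Ti": pu_minutes / 1.2}
    if mode == "PID":
        return {"Kp": 0.6 * ku, "Ti": pu_minutes / 2, "Td": pu_minutes / 8}
    raise ValueError(mode)

# e.g. a sustained oscillation found at Kp = 20 with a 4-minute period
params = ziegler_nichols(20, 4.0)
print(params["Kp"], round(params["Ti"], 2))  # 9.0 3.33
```

These are starting points, not gospel; you would still watch the loop response (ideally at a fast sampling rate) and fine-tune from there.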

[Return to Contents]

I Feel the Need for Speed Patience

If you look at the wave form that is starting to emerge IN THE FIRST TWENTY-FIVE MINUTES of our test, you will notice that it is not a consistent pattern.

First 25 minutes

The reason for the capital letters above is that I wanted to emphasize that when you are doing this, it takes time.  You have to allow the system sufficient time to establish a pattern (or not).  For some processes, this can be a matter of a minute or two or even seconds.  But for many of our systems, it is more often a matter of 5-10 minutes or more.

If you make a change based on what you think is probably going to happen rather than waiting to find out if you are right in your assumption, then you may end up wasting time and/or having things really un-wind on you as a result of zigging when you should have zagged.

[Return to Contents]

HVAC Systems are Highly Interactive

A pattern that is not consistent in the context of its frequency and the shape of the wave form may mean that you need to allow more time for the system to stabilize.  But it can also mean that the control process you are looking at is:

  • Interacting with other control processes. 
  • Reacting to some sort of upset, like a set point change.

So, we decided to check to see if there were other control loops running in the system in question while we waited for the process to settle into a consistent pattern (or not) and found that:

  • The system had a control loop that might modulate the outdoor air and return air damper under some conditions and also
  • A control loop that would modulate fan speed.

Changes in any of these processes can (probably will) have an impact on the others.

For instance, if the outdoor air and return air dampers move, the pressure drop through them will likely change.  This will shift the fan operating point and cause the fan speed control process to change the fan speed.  Both of these interactions will likely impact the flow through the cooling coil, causing its leaving air temperature to change. 

The leaving air temperature change will eventually affect the zone control processes (the system was a Variable Air Volume (VAV) reheat system) and cause the terminal units to move their dampers, which will affect the flow rate and duct static pressure and cause the fan speed control process to react.

Etc., etc., etc.;  you get the idea. 

In a way, it’s kind of amazing that we can get these systems to work at all (you can, that is part of the challenge and fun).  And it’s probably not at all surprising that there are a few issues to be dealt with.

In any case, as the result of our discovery, we decided to eliminate the interactions temporarily by locking the other control processes down to a fixed output value.

[Return to Contents]

A Few Precautions

Locking down a potentially interactive control process has been a valuable troubleshooting technique over the course of my career.  If you do it and the problem disappears, then you likely found the root cause and can focus your attention on that, as in this example.

Insidious HVAC

But if the pattern persists, then it is likely that some other process is driving the dysfunction and the one you locked down was reacting to it (and, in doing that, contributing to the dysfunction) but was not the root cause.

But you need to be careful when you do this.

The System May Be Serving Something Mission Critical

Before you do this, you need to ask yourself what will happen to the load being served if something goes wrong relative to what is happening currently.  If there could be major problems, you may want to wait to try troubleshooting on a day where the impact will be less severe.

Or, if you really need to do something, then you may want to “sneak up” on disabling the process by gradually limiting its impact rather than completely shutting it down.

You May Not Want to Totally Eliminate the Process

Our goal here is to stabilize the other processes.  Eliminating processes one by one will do that.  But you can also lock them down at some intermediate condition.  

For example, if we suspected that a preheat coil control process had gone unstable and was driving the other processes into instability, but it was below freezing outside, then totally shutting down the preheat process would be a really bad plan.

But you could “eyeball average” an output state that would provide a tolerable (above freezing) leaving condition from the preheat process and lock the process output down at a condition that would deliver a safe leaving air temperature for the time being. 

You would want to combine this with careful observation of the preheat leaving condition to make sure your fixed output was delivering the intended result;  a leaving temperature that was above freezing in addition to being stable.  At the same time, you would observe how your change affected the operating pattern of the other control processes in the system.

Remember to Release Your Override At the Conclusion of Your Test

It’s very easy to walk away from a test and forget to release the manual override that you put in place.

Don’t do that.

It could – probably will – come back to haunt you in the form of wasted energy or worse yet, a frozen coil or a ruptured duct.

[Return to Contents]

Eliminating Interactions in Our Experiment

For the system we were considering, at the time of our science experiment, the team was comfortable with totally locking out the other control processes given what was going on in the facility.   So we did just that and that is what caused the “burble” (a technical term for “weird bump”) in the data stream at about 1:20 PM.

First Upset

Presentation Makes a Difference

Ryan Stroupe, the person I work with for the classes at the Pacific Energy Center, introduced me to the works of Edward Tufte, who is brilliant in terms of showing us how to present data in meaningful ways.  One example of this is the scaling of an axis.

Problems Can Be Hidden by the Axis Scale

The chart below presents the AHU leaving air temperature data for our focus system for the period of time that we worked with it and for several hours there-after with the Y axis scaled so that it is just slightly wider than the wave form.

Data Stream

Viewing the data at this scale allows us to clearly see that in the context of a temperature control process, there is a problem in the form of a lack of stability and in the form of a significant deviation from the desired set point.

The chart below presents the same data but with the Y axis set to a much broader scale, a scale that might be invoked if data from multiple sensors was being displayed where a second data stream needed the broader range to be fully displayed.  For this example, I have scaled the axis so that flow data (if it existed for the system) with a range of 0-2000 cfm could be displayed on the same axis as the temperature data.

Data Stream Big Y Range

Note how the temperature data, which in reality shows significant instability and deviation from set point if you look at it in the first chart, appears to be fairly stable when presented on a chart with the Y axis scaled for a much broader range.[vii]

I realize that it would be possible to place the flow data on a second axis, or even a third axis, which would allow the temperature axis to be scaled in a meaningful way relative to the temperature data.  But many control systems would not present the data that way;  rather they would scale the axis for the trend group to allow the parameter with the biggest range to be displayed, which will tend to flatten out the data from a sensor with a smaller data range.

Problems Can Be Over-stated by the Axis Scale

The chart below is from the same data set but with the time axis focused on a 15 minute window and the temperature axis set for a span of 1.5°F.

Data Stream Narrow Y Range

If you looked at it without duly noting the span of the Y axis, you could interpret it as showing major instability in the discharge temperature when in fact, for this portion of the data set, the temperature was only moving around a bit over half of a degree at a frequency of about 9 minutes per cycle.

The bottom line is that you can easily lose sight of the axis scale when you are troubleshooting in real time because your attention is grabbed by the shape of the wave form.  And as a result, a data stream that is actually quite unstable can be mistaken for a stable data stream and (erroneously) eliminated from consideration in the troubleshooting process or vice versa.

[Return to Contents]

Identifying the Natural Frequency for the Control Process

The chart below reflects the full data set for the period of our science experiment with the three points where we made an adjustment to the system highlighted.

Data Stream Highlights

The period of time prior to the yellow band represents the system’s reaction to the initial adjustments the team made to try to find the natural frequency for the process, which as I mentioned is the first step in the closed loop tuning method. 

The yellow band is the point in time when we locked down the other interactive control processes in the system to allow us to focus on the discharge temperature loop that was in charge of the chilled water valve.   After we did that, the system appeared to settle into a steady oscillation with a period of about 9 minutes and with a consistent magnitude for the peaks and valleys.  The image below focuses on this area of the chart.

Natural Frequency

This oscillation likely represents the natural frequency of the system.  Ideally, we would have watched the process for several more cycles (meaning another 20-30 minutes) to make sure that the pattern was consistent.  But to make better use of the class time, we assumed that we had identified the natural frequency and moved on to our next experiment, which I will discuss in a minute.

But before doing that, I wanted to point out that if we were in fact correct about having identified the natural frequency of the system, then if we had narrowed the throttling range a bit more (a.k.a. narrowed the proportional band or increased the gain or made the system more sensitive), the system would have become totally unstable.  In other words, the peaks and valleys would become larger and larger with each cycle.

In contrast, if we had opened up the throttling range (a.k.a. opened up the proportional band or reduced the gain or made the system less sensitive), then the oscillations would have flattened out and the system likely would have stabilized with a proportional error in the range of 4-6°F.  In other words, the process would have become stable/flat lined at about 68-70°F with a set point of 64°F.
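That proportional-error behavior can be illustrated with a minimal simulation sketch.  The first-order process model, gains, and temperatures below are illustrative stand-ins, not the actual AHU;  the point is only that a P only loop settles off set point, and that a higher gain (narrower throttling range) shrinks the offset at the cost of stability margin.

```python
# Minimal sketch of proportional-only control settling with offset.
# The process model and gains are illustrative, not the actual AHU.

def simulate(gain, steps=2000, dt=1.0):
    """First-order process: temp drifts toward a load-driven value,
    pulled down by a cooling output proportional to the error."""
    setpoint = 64.0
    temp = 75.0          # starting discharge temperature, deg F
    load_temp = 75.0     # temperature the process drifts to with no cooling
    for _ in range(steps):
        error = temp - setpoint                    # positive when too warm
        output = max(0.0, min(1.0, gain * error))  # valve position, 0-1
        # temp relaxes toward load_temp, offset by the cooling effect
        temp += dt * 0.05 * ((load_temp - 15.0 * output) - temp)
    return temp

low_gain = simulate(0.1)    # wide throttling range
high_gain = simulate(0.5)   # narrower throttling range
print(round(low_gain, 1), round(high_gain, 1))
```

Both cases settle above the 64°F set point;  the wider throttling range settles higher (around 68°F for these made-up numbers), which is the proportional error the integral term exists to remove.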

[Return to Contents]

Restoring the “As Found” Tuning Parameters and Observing the Results

The next step in the tuning process would have been to begin to add integral gain to the loop to eliminate the proportional error, which would typically involve increasing the proportional band so that adding the integral gain did not push you into unstable operation.   The natural frequency of the system can be used to estimate first pass tuning parameters and the resources I mention in footnotes [i] and [iv] describe how to do that in detail.
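The classic Ziegler-Nichols closed loop rules are one common way to turn the results of that test into first pass settings.  The sketch below uses the textbook Z-N constants (the resources referenced above may recommend different ones), and the ultimate gain Ku shown is a made-up example value;  only the 12.75 minute period comes from our field data.

```python
# First-pass PI/PID gains from the closed-loop (ultimate) test using the
# classic Ziegler-Nichols rules. Ku is the gain that produced a sustained
# oscillation and Pu is the natural period observed during the test.

def zn_pi(ku, pu_minutes):
    kp = 0.45 * ku               # proportional gain
    ti = pu_minutes / 1.2        # integral time, minutes per repeat
    return kp, ti

def zn_pid(ku, pu_minutes):
    kp = 0.6 * ku
    ti = pu_minutes / 2.0
    td = pu_minutes / 8.0        # derivative time
    return kp, ti, td

# Ku = 2.0 is a hypothetical example; Pu = 12.75 minutes is from the test
kp, ti = zn_pi(ku=2.0, pu_minutes=12.75)
print(kp, ti)
```

Note that these are starting points, not final answers;  you would still watch the loop's response to an upset and adjust from there.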

But rather than do that, for our exercise, we decided to restore the “as found” tuning parameters to the loop.  Prior to this point in time, the operating team was under the impression that the loop had been tuned by the control vendor.   After all, it seemed to be stable and there was integral gain provided in the logic block settings.

Stable with I Gain

But it did not seem to be holding set point; i.e. there always appeared to be proportional error, which, in theory, should not exist if the integral gain had been properly adjusted.

When the team asked their vendor about this and how they had arrived at the loop tuning parameters, the vendor said that they had used the metrics recommended by the factory for systems of this type rather than using a more formal, rigorous approach and that they felt comfortable with things because the loop was stable.

But given what we were seeing, we had started to wonder if that was really the case.  In other words, could the loop only have appeared to be stable because of the 30 minute sampling rate?  And could the sampling rate also explain why an apparently stable PI loop exhibited proportional error?

If the loop really was well tuned with the recommended factory settings, then it should exhibit stability and no proportional error at the faster sampling time we now had in place.  But, as you can see from the images shared previously and reproduced below, that was not at all the case.

Data Stream Highlights

Note that the red band is the point in time when we restored the original tuning parameters.  The orange band is when we allowed the other control processes to run and interact with the chilled water valve control process.  

Clearly the loop was not well tuned and needed some attention. 

That is where the operating team plans to go next, so hopefully, I will have some additional data to share with you to show the results.   But to finish up this post, I thought I would manipulate the data set illustrated above to show how and why the 30 minute sampling rate gave the wrong impression about what is actually going on.

[Return to Contents]

Sampling the Five Second Data at Thirty Minute Intervals

By applying filters, I was able to take the 5 second data set and pull out data points to represent what we would have seen at the original sampling rate of 30 minutes.  Here is the result of that effort compared to the actual data stream and the points where we upset the system.

5 Second at 30 Minutes

Here is what it would look like if the other data was not visible.

5 Second at 30 Minutes No 5

And here is what it would look like if the format was similar to the format used by the control system we were playing with.

5 Second at 30 Minutes No 5 ALC Format

This next chart is very busy, but my point in presenting it is to show how much the wave form can deviate from reality if you are being aliased because your trend sampling rate is too slow.

All Too Slow

What I am hoping you can see from that progression is:

  1. How the aliasing associated with a sampling rate that is too slow compared to the disturbance you are trying to capture will tend to change the frequency, flatten out the disturbance, and otherwise distort the wave form, and
  2. How the scaling of the axis can further flatten out the waveform and understate the significance of the disturbance.
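The resampling used for these charts can be reproduced with simple slicing.  The sketch below uses a synthetic 5 second stream (the wave parameters are stand-ins) in place of the exported trend data;  the technique is the same.

```python
# Sketch of pulling a 30-minute sample stream out of 5-second trend data.
# 'five_sec' is synthetic; in the field it would be the exported trend.
import math

PERIOD_MIN = 12.75   # observed cycle time of the disturbance, minutes
five_sec = [64.0 + 3.0 * math.sin(2 * math.pi * (i * 5 / 60) / PERIOD_MIN)
            for i in range(12 * 60 * 4)]   # 4 hours of 5-second samples

# 30 minutes = 360 five-second intervals, so keep every 360th point
thirty_min = five_sec[::360]

print(len(five_sec), len(thirty_min))   # 2880 points vs 8 points
```

Eight points across four hours cannot possibly trace a 12.75 minute oscillation, which is the aliasing problem in a nutshell.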

[Return to Contents]

Sampling the Data at the Frequency of the Disturbance

If you were to sample the process at a rate that was very near to or the same as the frequency of the disturbance, a very interesting (and misleading) thing happens.

12.75 Minute Sample

Note how the trend line flattens out on the timeline once we are past the point of the last upset we created and the system has settled into a fairly steady state oscillation.  A similar thing happens if you sample at even multiples of the frequency of the disturbance.

25.5 Minute Sample

And the point in time when you initiate the trend sampling also has an impact.

25.5 Minute Sample Shifted

This image illustrates why these things happen.

12.75 plus 25.5 Minute Sample plus 5 Second

As you can see, once the steady state wave form is established (at about 2:35 PM), if a data sample is taken at a frequency that matches or nearly matches the frequency of the wave form, then the data will be read at the same point in each cycle, thus at the same value.  This will create the illusion of a straight line.

Shifting the point in time when the sample is taken moves the point that is sampled up or down the wave form.  Thus, if the sampling is initiated at or near the time when the wave form is passing through set point, then the result will look like a fairly straight line/stable pattern that floats around the set point.

The reason the line drawn by the aliased data does not end up perfectly straight is that there are minor differences in the frequency of the wave form from wave to wave. In this particular instance, the cycle time varied from 12 minutes and 45 seconds to 13 minutes and 10 seconds.
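The flat-line illusion is easy to reproduce numerically.  The sketch below samples an idealized sine wave at exactly its own period;  the amplitude and set point are stand-ins for the real data.

```python
# Sketch of the stroboscopic effect: sampling a 12.75-minute oscillation
# at exactly its own period reads the same point of every cycle, so the
# trend draws a flat line even though the process is swinging constantly.
import math

PERIOD = 12.75     # minutes per cycle
AMPLITUDE = 2.0    # deg F swing around set point (illustrative)
SETPOINT = 64.0

def process(t_minutes):
    return SETPOINT + AMPLITUDE * math.sin(2 * math.pi * t_minutes / PERIOD)

# Sample once every PERIOD minutes for a few hours
aliased = [process(n * PERIOD) for n in range(20)]
print(max(aliased) - min(aliased))  # essentially zero spread

# Shift the start time a quarter cycle and the "flat line" moves to a peak
shifted = [process(n * PERIOD + PERIOD / 4) for n in range(20)]
print(round(shifted[0], 1))   # 66.0
```

The second sample set illustrates the point about when the trend is initiated:  the same flat line can be drawn at set point, at a peak, or anywhere in between, depending on where in the cycle the sampling starts.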

I know of at least one instance where a somewhat clever, but perhaps also somewhat unscrupulous, control system technician realized all of this and used it to resolve loop tuning issues flagged by a commissioning provider on a new construction project. 

In other words, rather than actually tuning the loops, they set the sampling rate to match the frequency of the disturbance, triggered the trend so that it was capturing data around the set point, and submitted the results as evidence of having tuned the loops. 

Fortunately, the commissioning provider (not me in this particular instance) was even more clever, and was scrupulous in their dedication to representing the Owner, meaning they “were on” to the trick and rejected the proposed solution.

[Return to Contents]

Setting the Sampling Time Based on the Nyquist Theorem

As mentioned previously, the Nyquist Theorem suggests that if the average cycle time for the wave form is in the range of 12.75 minutes, then to capture it we would need to sample the data at twice that frequency or faster.   That would translate to a sampling interval of 6.375 minutes or shorter.   The chart below illustrates what happens if we were to sample this data once every 5 minutes and once every minute.

1 and 5 Minute and 5 Second Samples

Notice how the 5 minute sampling frequency – which is a bit faster than the Nyquist recommendation – fully captures the frequency of the actual wave form.  But it does not fully capture its shape.  That means that if you set the sampling time based on the Nyquist Theorem, you would recognize the problem exists, but may not have a fully accurate picture of it.

Notice also how the waveform associated with the 1 minute sampling frequency is virtually indistinguishable from the actual wave form.  This illustrates how increasing the sampling frequency above the Nyquist suggested minimum will not only capture the frequency of the disturbance, it will also capture an accurate picture of its shape.  The faster you sample, the closer the image created by your sample will match reality.
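One way to see the “frequency but not shape” distinction numerically is to reconstruct the wave from the sparse samples by linear interpolation and measure the worst-case error against the true signal.  The sketch below uses an idealized sine at the observed period rather than the actual trend data.

```python
# Sketch of why faster-than-Nyquist sampling captures the frequency but
# not necessarily the shape: rebuild the wave from sparse samples by
# linear interpolation and measure how far it lands from the true wave.
import math

PERIOD = 12.75   # minutes per cycle

def wave(t):
    return 2.0 * math.sin(2 * math.pi * t / PERIOD)

def max_reconstruction_error(sample_minutes, total_minutes=120):
    samples = [(n * sample_minutes, wave(n * sample_minutes))
               for n in range(int(total_minutes / sample_minutes) + 1)]
    worst = 0.0
    t = 0.0
    while t <= samples[-1][0]:
        # find the bracketing samples and interpolate linearly
        i = min(int(t / sample_minutes), len(samples) - 2)
        t0, v0 = samples[i]
        t1, v1 = samples[i + 1]
        est = v0 + (v1 - v0) * (t - t0) / (t1 - t0)
        worst = max(worst, abs(est - wave(t)))
        t += 5 / 60      # check against the true wave every 5 seconds
    return worst

err_5min = max_reconstruction_error(5.0)
err_1min = max_reconstruction_error(1.0)
print(round(err_5min, 2), round(err_1min, 2))
```

For these made-up numbers, the 5 minute samples miss the true wave by over a degree in places, while the 1 minute samples track it to within a small fraction of a degree.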

[Return to Contents]

What’s the Practical Meaning of This?

The folks attending the class were somewhat entranced but dismayed by all of this.  One poignant question that was asked was along the lines of:

Practically speaking, is this just an interesting intellectual exercise or does addressing this go to the bottom line?

For me, the discussion was (and always has been) interesting and probably a bit intellectual.   But there are also some very practical implications.  

For the particular class in question, being a class for directors of engineering, chief engineers, and engineering technicians in the hospitality industry, part of the goal of the class is to teach things that will help the attendees save energy and reduce their carbon foot print.  But at the end of the day, their mission is to deliver guest satisfaction.

I can tell you from direct personal experience that an out of tune control loop in a terminal system, or even in a system supporting a terminal system, that causes the temperature in the occupied zone to vary by several degrees multiple times (or even once) an hour will create guest dissatisfaction.  So in the context of the hospitality industry bottom line, identifying and stabilizing an out of tune control process will likely pay off in the form of happy guests who had a good experience in your facility and, therefore, will return. 

But if you don’t identify and correct the problem, you will find yourself and your facility as subject of bad reviews in social media and may even end up compensating the guest by waiving their room charges, providing a free meal, etc.

From an energy and carbon stand point, there are likely direct and indirect benefits. 

The direct benefits may be difficult to quantify.   But it is likely that a process that – for instance – over-cools then under-cools – will be wasting energy and creating unnecessary carbon emissions compared to a stable process, especially if there is a compensating process like reheat that is “stepping in” to compensate for the over-cooling when it happens.

The indirect benefits actually (in my opinion) lend themselves to quantification.  More specifically, in the case of our example, we have an observed, but heretofore unrecognized, control process that is cycling once every 12.75 minutes.

  • If the system operates 24/7, that is about 41,224 cycles per year.
  • For a system that operates 5,000 hours per year (my observation of the typical number of hours a ball room or meeting room system might operate in a hospitality facility that runs 80% or more occupied most of the time), that is about 23,529 cycles per year.
  • For a typical office building operating 3,400 or so hours per year, that is about 16,151 cycles per year.

The design life for actuators like the Belimo product line is in the range of 50,000 – 150,000 cycles.  That means that the observed cycle frequency, if unrecognized or unaddressed, will likely:

  • Result in actuator failures in 1.2 – 3.6 years for systems running 24/7, and
  • Result in actuator failures in 2.1 – 6.4 years for systems running approximately 5,000 hours per year, and
  • Result in actuator failures in 3.1 – 9.3 years for systems running 3,400 or so hours per year.
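The cycle counts and failure windows above can be reproduced with a few lines of arithmetic.  The annual-hours figures are the approximations from the text, so the last case rounds slightly differently than the 16,151 cycles shown above (that figure reflects a bit more than exactly 3,400 hours).

```python
# The cycle-count and actuator-life arithmetic, as a quick check.
# Annual hours are the approximate figures from the text; the design
# life range of 50,000-150,000 cycles is the range cited above.

CYCLE_MINUTES = 12.75

def cycles_per_year(annual_hours):
    return annual_hours * 60 / CYCLE_MINUTES

def years_to_failure(annual_hours, design_life_cycles):
    return design_life_cycles / cycles_per_year(annual_hours)

for hours in (8760, 5000, 3400):
    c = cycles_per_year(hours)
    lo = years_to_failure(hours, 50_000)
    hi = years_to_failure(hours, 150_000)
    print(f"{hours} h/yr: {c:,.0f} cycles/yr, failures in {lo:.1f}-{hi:.1f} years")
```

The same two functions make it easy to test "what if" cases, like how much actuator life you buy back by slowing an 8 minute hunt down to a stable loop that only strokes a few times an hour.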

Given that:

  • The design life for the air handling systems we are discussing will be in the range of 15-30 years or more and that
  • If the control process was stable, the actuator life could exceed the system life,

then addressing the hunting control process will represent a non-energy benefit to the facility’s bottom line.

More specifically, eliminating an actuator failure eliminates one or more hours of technician time to identify the problem and correct it as well as $200 – $300 or more in hardware cost for the replacement actuator.

But, I would postulate that if you consider this from a holistic perspective, then there is an energy and carbon impact associated with the failed actuator itself.  It is reflected in the replacement cost of the actuator and represents the embedded energy and carbon associated with its production.  

Thus, if tuning a control loop allows an actuator to last the entire life of the system it serves rather than having to be replaced multiple times over the course of the system life, then I would postulate that you have saved energy, carbon and other resources.

[Return to Contents]

Targeting Loops To Tune

Another common question that came up in our discussion was:

Does this mean I need to look at and re-tune all of my control loops? There are hundreds of them!

In the bigger picture, I think the answer is along these lines.

If you were unaware of the potential for aliasing and the trend rate you are using to assess your control loops is once every 10 to 30 minutes or more, then you probably need to take a look at what is really going on.

This doesn’t mean you should go in and shorten the sampling time of all of your control processes all at once.  In fact you probably do not want to do that and should not do it because of all of the data and traffic it will create in your system.

Rather, you should consider taking an organized approach to it.  A good starting point is to try to assess how much of a tuning effort has been made already by generating a report that shows what the proportional, integral and derivative gain settings are for all of your loops.

It is not unusual to find that many, or maybe even all of the loops are at the factory default values or that loops for different system and equipment types (VAV terminal units, AHUs, distribution pumps, etc.) have different gains but on a system and equipment type basis, most if not all of the gains are at the factory defaults. 

If you take a moment to consider:

  1. How dynamic and nonlinear HVAC processes and performance curves can be, and
  2. How variable HVAC systems can be in terms of the size and configuration of the equipment serving them, and
  3. How dynamic the climate and loads driving the HVAC systems are,

then you can probably reach the conclusion that it is highly unlikely that the tuning solution for each and every loop in a facility will be identical.  So you may want to consider focusing on one system at a time to check the loop tuning, perhaps when you are doing preventive maintenance on it. 

Or you may want to target systems because:

  1. You know they are unstable for other reasons like tenant or guest complaints about temperature swings in their zones, or noise, or
  2. You seem to have a high number of actuator, valve packing failures and valve stem or damper linkage wear on some systems, or
  3. You have systems that are difficult to restart if they are shut down;  perhaps you have even disabled schedules in some systems because of this.

Once you start to gain a better picture of what is really going on, before jumping in and starting the loop tuning process, you may want to ask yourself a few other questions.

[Return to Contents]

Should the Control Loop in Question Be a PI or PID Loop or would a P only Loop be Just Fine?

The figure below compares the space temperature stability for a physics lab that was served by a Variable Refrigerant Flow (VRF) fan coil unit controlled by a PI DDC control loop (left image) with a similar lab that was served by a chilled water fan coil unit controlled by a proportional (P only) pneumatic thermostat (right image).

Lab Compare

Both labs were lightly loaded at the time.

For the lab served by the VRF unit, notice how over the course of an hour and 15 minutes, the space temperature drifts up about 2°F and then is driven back down 2°F in 10 minutes.[viii]

Raising the set point simply changed the 2°F span over which the cycle occurred.  This cycle repeated hour after hour and was perceived by the occupant of the space as being very uncomfortable.

In contrast, for the nearby, similar lab that was served by a fan coil unit with chilled and hot water coils in it, as the load changed, the space temperature floated around fractions of a degree inside the proportional band of the thermostat.  And while the actual space temperature never matched the set point for much of the time over the course of the day, it was very close to it and the occupant of the lab considered the space comfortable.[ix]

In this particular instance, the instability with the DDC process was related to an equipment sizing issue, a problem that loop tuning could not solve.  More on that in the next section.  

But, it is quite possible for a poorly implemented and tuned PI or PID loop to generate a similar pattern due to the added complexity of the process and the lack of understanding of the process and how to tune it. 

So my point is that for many applications, in particular, zone temperature control, the simpler, proportional only process, properly set up, will likely provide a satisfactory, less complex, more persistent solution, as was the case for the lab with the pneumatic thermostat.

In turn, that may mean that the first step in your loop tuning process may be to make some of the loops that are currently PI or PID loops into P only or floating control loops.[x]

[Return to Contents]

Does the Precision Gained by PI and PID Matter?

I discovered PID because I finally realized that by nature, there would be proportional error in a P only process and I did not want the error because it represented an unnecessary reheat load (the details are in the PID paper I referenced previously).

Not all control processes are like that.  The zone temperature control example discussed above is an example. 

Control loops that are being reset by other parameters can be another.  As long as the process is stable and the reset schedule is properly implemented, the schedule will find the appropriate set point, including consideration of any proportional error that may be present.

[Return to Contents]

Can the Control Loops be Tuned Successfully?

There is only so much that a control process can do. 

For instance, if extreme conditions place a load on a central plant that exceeds the capacity of the equipment in terms of being able to deliver the design supply temperatures and thus, the design zone conditions, then there is nothing that a control loop can do about that other than ask for everything the central plant can deliver and recover as quickly as possible once the extreme condition passes.

Another common example is related to how well (or not) the final control elements are sized.  Few if any of the typical final control elements we deal with have linear characteristics. For instance:

  1. Valves and dampers need to be appropriately sized in the context of the loads served and the capabilities of the control system to provide a somewhat linear response.[xi]
  2. The shape of the impeller lines for fans and pumps are not linear, nor are the system operating curves.

For the system behind the example we used in this blog post, it turned out that the control valve was a line sized butterfly valve.  As a general rule of thumb, for a modulating control valve to be able to deliver a semi-linear control response and achieve satisfactory control (i.e. be properly sized), it will be at least one size smaller than the line that it is in:

  • A variable load served by a 3 inch line will have a control valve that is 2-1/2 inches or smaller.
  • A variable load served by a 1 inch line will have a control valve that is 3/4” or smaller
  • Etc.

Because butterfly valves have such a low wide open pressure drop, it is not out of the question that a properly sized butterfly valve will be 2 line sizes smaller than the line size for the load it is serving.

Similarly, the VRF system associated with the lab that was discussed previously was oversized and was not able to reduce capacity to the point that matched the load condition that existed in the lab most of the time.  As a result, the most that the control process could do was to try to reduce capacity as quickly as possible when the cooling process started and then try to maintain the minimum capacity in an effort to address the load.

If the minimum capacity was still more than needed to balance the load, the control process’s only option was to over-cool the load or cycle off, wait a while, and try again.

Lags can also make a control process impossible to tune successfully.  I discuss this in more detail in Lags, the Two Thirds Rule and the Big Bang – Part 5 in the paragraph titled “One Final Thought about Lags”.  So I will refer you to that for more information on this particular subject.

The bottom line is that a control process may not be able to compensate for errors that are made in terms of sizing control valves, control dampers or equipment or for excessive lags that might be inherent to the configuration of the system or the nature of the hardware.  Thus, the first step in tuning a loop for a system that is challenged by an issue like this might be to take the steps necessary to appropriately size the control valves or dampers,  improve the turn-down capability of the system or equipment, or eliminate lags in the process.

[Return to Contents]

Derivative Gain Needs to be Used Carefully

The derivative gain associated with a PID control loop can be a “mixed bag” and needs to be used carefully. 

On the one hand, if you get it right,

  • It will reduce the swing you see in the process variable when you upset the system, and
  • It will reduce the settling time required to become stable at set point with no error.

On the other hand, if you get it wrong:

  • It can make a bad situation worse because, by design, it responds to the rate of change in the proportional error. 
  • It does this by causing the output of the control process to react even more quickly than it otherwise would, which may make the system even more unstable.

Generally speaking, I (and many other folks whom I respect quite a bit) do not use derivative gain unless I really think it is necessary;  perhaps to minimize the overshoot on start-up or improve the settling time for a large variable flow system.  And even then, I am a bit (comfortably) nervous about doing it.
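For reference, here is a minimal discrete PID sketch showing exactly where the derivative term acts.  The gains are illustrative, and note that many real controllers compute the derivative on the measurement rather than the error to avoid a kick when the set point changes.

```python
# Minimal discrete PID sketch showing where derivative gain acts: on the
# rate of change of the error. Gains and values here are illustrative.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.last_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0
        if self.last_error is not None:
            derivative = (error - self.last_error) / dt   # rate of change
        self.last_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=1.0, ki=0.1, kd=0.5)
# A step change in the measurement produces a one-interval derivative spike:
out1 = pid.update(setpoint=64.0, measurement=64.0, dt=1.0)
out2 = pid.update(setpoint=64.0, measurement=62.0, dt=1.0)
print(out1, out2)
```

Notice how the derivative term contributes a third of the second output for a single 2°F step;  that same aggressiveness is what amplifies sensor noise and can drive an already hunting loop harder.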

[Return to Contents]

There Are Multiple Successful Solutions to Many Loop Tuning Problems

The quarter decay ratio I mentioned at the beginning of this post is a general indication of how well a PI or PID process is tuned.  In a real application, there are some practical considerations that come up as illustrated below.

Practical Quarter Decay

In other words, for example, for a VAV AHU, you want the response to an upset to not trip out the static safety switch on the first overshoot.  And you probably would like to see the process variable stabilize at the set point in 5 to 10 minutes or less.

When you start playing with loop tuning, you will discover that there may be a number of combinations of proportional, integral, and (if you use it) derivative gains that will give you a satisfactory result in terms of minimizing the overshoot and settling time after an upset.   I realized this one day when, at the end of 4 or so hours, it dawned on me that the parameters I had used in my initial effort, based on the closed loop test method, had actually worked just fine. 

Everything else I had tried since then had also worked (no safeties tripped and a reasonable settling time);  the different settings had simply produced a different response pattern.  But at that point, while feeling a sense of intellectual satisfaction, I was about 3 hours behind on what I needed to get done that day.
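For the mathematically inclined, the quarter decay criterion can be checked numerically.  For an idealized underdamped second-order response (a unit natural frequency is assumed in the sketch below), successive overshoots shrink by a fixed factor set by the damping ratio, and a damping ratio of roughly 0.215 makes that factor one quarter.

```python
# Numeric check of the quarter-decay criterion: for an underdamped
# second-order response, successive overshoots shrink by
# exp(-2*pi*zeta / sqrt(1 - zeta**2)); a damping ratio near 0.215
# makes that factor 0.25, i.e. each peak a quarter of the one before.
import math

def decay_ratio(zeta):
    return math.exp(-2 * math.pi * zeta / math.sqrt(1 - zeta ** 2))

print(round(decay_ratio(0.215), 3))   # close to 0.25

# Confirm from the simulated peaks of e^(-zeta*t) * cos(wd*t), wn = 1
zeta = 0.215
wd = math.sqrt(1 - zeta ** 2)
ts = [i * 0.001 for i in range(40000)]
ys = [math.exp(-zeta * t) * math.cos(wd * t) for t in ts]
peaks = [ys[i] for i in range(1, len(ys) - 1)
         if ys[i] > ys[i - 1] and ys[i] > ys[i + 1]]
print(round(peaks[1] / peaks[0], 3))
```

In the field you do the same thing by eye:  compare the height of the second overshoot to the first on the trend and ask whether it is roughly a quarter, keeping the practical constraints above (safety trips, settling time) in mind.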

[Return to Contents]

Once May Not Be Enough

Even if it turns out that your loops had been tuned at one point, it is wise to check them occasionally because things that affect the tuning parameters change in our systems over time:

  • Linkage wears,
  • Heat transfer characteristics change,
  • Occupancy patterns change,
  • Etc. 

For new construction projects, I often tell people that we really may not have the building completely tuned until after about a year because it will take that long to cycle through the seasons and all the variations they bring.

That is one of the reasons most commissioning processes include peak season and swing season testing.  Loops that seemed to work just fine during the fall swing season may be unstable when the peak heating season hits and will need to be retuned.

When spring rolls around, some of the loops may still exhibit some instability and will need attention then too.   And of course, the peak cooling season will push things to a place they have not been before, potentially triggering a few more loop tuning problems to be addressed.

Ideally, by the end of the year, you will have found loop tuning solutions that work for all of the seasons.   But you still will probably want to spot check your loops, especially PI and PID loops, because wear and tear can change the system characteristics enough to trigger instability again.  Spot checks are also warranted when operating patterns change or after equipment repairs or upgrades.


David Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/

[i]       My focus here is on aliasing and its impact on a PI (Proportional plus Integral) or PID (Proportional plus Integral plus Derivative) loop tuning process, so I am assuming that the folks reading this are familiar with P, PI, and PID control.   But if you aren’t, you may find the PID Resources page on our Cx Resources web site to be helpful, especially the ICEBO paper that is shared there on the topic.

[ii]        The response of the system to a start-up is also a way to observe this. You could even say that a system that is operating on a schedule is testing its control systems for an appropriate response every time it starts up.  In fact, if you are trying to get a general sense of how well a building is tuned, one thing you can do is observe how well (or not) it responds to starting from a dead stop. 

Even if the trends are not fast enough to catch the waveform, if the systems are not well tuned, you will likely observe a lot of instability, potentially including processes that take a very long time or maybe even never stabilize at set point.  In some instances, you may even find that the system has a hard time coming on line because the swings in parameters like temperature or pressure are so large that they trip the system safety switches.

In fact, if you ask a construction team or operating team if it’s O.K. to shut everything down and then start it up again to observe the response, and they immediately object to doing it, perhaps even refuse to do it, then you probably should not proceed with your test because you may find out something the hard way that they already have discovered the hard way.

But I would postulate that in a case like this, your test actually was successful, even though you did not run it.  That is because you discovered that the systems are challenging to bring on line if they shut down and thus, you may want to try to work with the team to understand why and correct it. 

At some point, Mother Nature will run that test for you in the form of a power outage.  If you take the time and effort to understand the problems and correct them, the team will be in a much better position to recover from Mother Nature’s test when it inevitably happens.

[iii]     This image is taken from a series of articles Control Engineering published in the early 1980’s about PID.  You can download a copy of them from the PID resources page on the Cx Resources website.

[iv]      If you want to learn more about loop tuning, you will find a lot of information about it in the Control Engineering PID article series I mention in the footnote above, including a copy of Optimum Settings for Automatic Controllers, the original paper on the subject by John Ziegler and Nathaniel Nichols.  David St. Clair’s book Controller Tuning and Control Loop Performance is also a resource well worth the money and is how I learned most of what I know, along with field experience.

[v]       Up until now, I have been using screen shots from the system we were working with.  But moving forward, I am going to use the trend data that was sent to me after our session.  I loaded it into Excel so that I can highlight and compare a number of things.  So, even though the presentation will look different from the original black screen with a yellow line in the image at the start of the post, the data is the exported data from that system.

[vi]      The system has reached its natural frequency at the point where it starts to cycle at a steady frequency with a consistent magnitude at the peaks and valleys rather than displaying:

  • No cycling, or
  • A quarter decay ratio, or
  • Cycling with an increasing magnitude for each half cycle at the peaks and valleys (a.k.a. dynamic instability).

[vii]     For this system, we did not actually have flow data so I am creating a “what if” scenario here to illustrate a point.

[viii]    The details of the algorithm behind this are not totally exposed in the vendor documentation, but trend data that we collected, along with the space temperature line shown above, suggests that the root cause of the problem was not a loop tuning issue; rather, it was a capacity issue.

DDC Lab Details

More specifically, the system would:

  1. Initiate cooling when the space temperature rose about 1/2°F above setpoint, then
  2. Attempt to modulate cooling to try to match the load, and then
  3. When it could not match the load, run at minimum capacity, and then
  4. When the space continued to overcool, cycle off, and then
  5. Remain off for a predetermined period of time before trying again.
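Since the vendor’s actual algorithm is not published, the steps above are an inference.  Purely as an illustration, they can be sketched as a simple state machine; the set point, start offset, minimum capacity, and lockout time below are all assumed values:

```python
# Illustrative sketch only -- the vendor's actual algorithm is not published.
# Models the cycling behavior described above: start cooling when the space
# rises ~0.5F above set point, modulate toward the load, and when even
# minimum capacity overcools, cycle off and hold off for a lockout period.

SETPOINT = 72.0       # deg F (assumed)
START_OFFSET = 0.5    # deg F above set point before cooling initiates
MIN_CAPACITY = 0.3    # fraction of full capacity (assumed)
LOCKOUT_STEPS = 3     # time steps to remain off before trying again (assumed)

def step(state, lockout, space_temp, load):
    """Advance one time step; returns (state, lockout, capacity)."""
    if state == "off":
        if lockout > 0:
            return "off", lockout - 1, 0.0
        if space_temp > SETPOINT + START_OFFSET:
            return "cooling", 0, max(load, MIN_CAPACITY)
        return "off", 0, 0.0
    # cooling: modulate toward the load, but no lower than minimum capacity
    capacity = max(load, MIN_CAPACITY)
    if load < MIN_CAPACITY and space_temp < SETPOINT:
        return "off", LOCKOUT_STEPS, 0.0  # overcooling; cycle off
    return "cooling", 0, capacity

state, lockout = "off", 0
for temp, load in [(72.8, 0.1), (72.2, 0.1), (71.8, 0.1), (72.0, 0.1)]:
    state, lockout, cap = step(state, lockout, temp, load)
    print(state, lockout, round(cap, 2))
```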

[ix]     To understand what is being illustrated here, it is important to realize that  for a proportional control process, there will always be a difference – proportional error – between the set point and the control point other than at the condition at which the device was calibrated.  In this example, the thermostat had been calibrated so that when the set point matched the space temperature, the output was at the midpoint (9 psi) of the 3-15 psi span. 

The pneumatic signal was connected to both a normally open hot water valve and a normally closed chilled water valve. The spring ranges had been selected so that the hot water valve would be fully open at or below 3 psi and driven closed with an 8 psi signal. 

The chilled water valve would not begin to open until the signal reached 10 psi and would be fully open when the signal reached or exceeded 15 psi.

When the temperature matched the set point, neither valve would be open.

The thermostat was direct acting, which means its output would increase as the difference between the space temperature and its set point increased.  With the throttling range set for 1.5°F and a set point of 71.25°F, the result was that the temperature would need to reach approximately 71.4°F before chilled water would be used and would need to reach approximately 72°F to get the chilled water valve fully open, as illustrated below.

Pneumatic Lab

A similar deviation from set point the other way would be required to cause the unit to start to use hot water.

For the period associated with the data, you can see that conditions were such that initially the system only needed a small amount of chilled water.  Then, for a while, it did not need any chilled water.  Later in the sample period, the loads increased to the point where the space temperature drifted far enough above set point to cause the system to open the chilled water valve about 50% before the gains came into balance with the cooling capacity provided by the chilled water.

During the entire window of time, the space temperature was held within 2/3°F of set point or less.  Because the variation in temperature was modest and gradual, and the space generally seemed to be at about the temperature the occupant had set the thermostat for, they felt comfortable.  This was not the case for the VRF system, where the space seemed to swing erratically by 2°F every hour or so.
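The proportional sequence described in this footnote can be sketched in a few lines of code using the set point, throttling range, and spring ranges given above (the clamping of the signal to the 3-15 psi span is my assumption about the stat’s behavior at the ends of its range):

```python
# A sketch of the proportional pneumatic sequence described above.
# Direct acting stat: 9 psi output at set point, a 3-15 psi span spread
# over a 1.5 deg F throttling range (i.e. 8 psi per deg F of error).

SETPOINT = 71.25        # deg F
GAIN = (15 - 3) / 1.5   # psi per deg F

def stat_output(space_temp):
    psi = 9.0 + GAIN * (space_temp - SETPOINT)
    return min(15.0, max(3.0, psi))  # assumed clamp at the ends of the span

def hw_valve(psi):   # normally open: fully open at <=3 psi, closed at 8 psi
    return min(1.0, max(0.0, (8.0 - psi) / (8.0 - 3.0)))

def chw_valve(psi):  # normally closed: starts at 10 psi, fully open at 15 psi
    return min(1.0, max(0.0, (psi - 10.0) / (15.0 - 10.0)))

for t in (70.5, 71.25, 71.4, 72.0):
    psi = stat_output(t)
    print(t, round(psi, 1), round(hw_valve(psi), 2), round(chw_valve(psi), 2))
```

Running it shows the behavior in the narrative: at set point the output is 9 psi and neither valve is open; at about 71.4°F the chilled water valve just cracks open; at 72°F it is fully open.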

[x]       Floating control is a process that simply targets keeping a process inside a “window”.  If the process strays outside the window, then corrections are made to drive it back into the window.  But if the process is inside the window, no adjustments are made.

[xi]      For more information about valve and damper sizing see the Honeywell Gray Manual and the MCC Powers Valve and Damper Sizing Engineering Bulletins.

Posted in Uncategorized | Leave a comment

System Diagramming Resources

Those of you who know me or follow this blog know that I am a big advocate for applying the system concept and developing and using system diagrams for design, diagnostic and operational purposes.  It is a concept I was exposed to on my first day in the industry and is one of the most important and useful skills I have learned.  The diagrams to the right are examples of a water system and air system diagram developed using the concepts and techniques I advocate.  If you want higher resolution versions of them, you will find them on the system diagram symbols tool page of our Commissioning Resources website.

Over the past year or so, I have been working with several clients, including CERL, Marriott and the Pacific Energy Center to develop some self study resources to supplement the series of blog posts on the topic and the system diagram symbols tool on our Commissioning Resources website.  As of last week, I think I finally have things organized and developed to the point where they might be useful to the general public, at least as a Beta test of the concepts.  So the purpose of this post is to link you up with the resources if you are interested in learning more about system diagrams.

You will find these resources under the Training portion of the website under the On Demand Content on the System Diagramming page.

Self Study Video Modules

There are a number of self study video modules you can work with.

That last resource updates the exercise that I put up several years ago.  The model of the plant is much more detailed these days, as you can see by contrasting the image from the original model (upper image below) with a similar perspective from the current model (lower image).

Current Plant Model 2

Thus the virtual experience of using the latest model will better prepare you for developing one in an actual field environment.

Self Guided Exercises

Developing a System Diagram in the Field

The resources I have mentioned so far are in the form of videos that provide guidance in one form or another.  But we have also developed a few resources that will allow you to practice the techniques that are discussed in the videos using self guided exercises.

The first exercise gives you the sketch of the distribution portion of the  Hijend Hotel chilled water system that was developed in the video series mentioned above so that you can add the portion of the system serving the chiller evaporators to it.  You are provided with a PowerPoint slide deck that has instructions, links to other resources and screen shots from the model that you can use to do this.

The other resources include a video fly-through of the plant and access to the actual model in case you want to download the free SketchUp Viewer and then download the model and work with it in the viewer.  This will allow you to navigate in the model and look at things from any angle that you want.   This is the most realistic way to do the exercise since you can virtually walk around the plant just as if you were there.

Be aware that the model is one of my more complex models so it may push the graphics card on your machine pretty hard and if things grind to a halt because of that, the screen shots and video fly-through should give you the perspectives you need to do the exercise.

Field Verifying a System Diagram

The second exercise gives you a system diagram for the plant that was developed by Joe DeNuguy[i] using the project documents that were provided to him prior to coming on site to do some RCx.   Your job is to field verify it to see if the system was actually installed as intended.

Once you complete your verification effort, you are linked to an MS Forms based quiz that will provide the answer and then ask you a few questions that will cause you to apply the diagram to understand how the installed system works compared to what the design intent was.


David Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/

[i]   Joe DeNuguy is our virtual hero doing retrocommissioning for the Hijend Hotel.  We work virtually with him, Noreen McAlister, the Chief Engineer, David Carson, the General Manager, and Harley Davidson, an Engineering Tech when we use the models in class.

Posted in Chillers and Chilled Water Systems, HVAC Calculations, HVAC Fundamentals, Operations and Maintenance, SketchUp Model Based Self Study, System Diagrams | Leave a comment

Energy Loss from a Run of Pipe

One of the folks in a class series I am helping with asked a really good question about the temperature drop they were observing in a run of pipe and its implications in terms of energy.   Answering it involves applying a number of concepts that are useful if you are out there doing design, commissioning and operations.

So, I thought I would use answering it as an opportunity to create a new blog post.  I’ve been doing a lot of other development, including on the Cx Resources website, but have gotten behind on my blogging.  So hopefully this will get me back on track for being a bit more regular in my blog posting activities.

Contents

As usual, this gets a bit long, so I am including a table of contents to allow you to jump around.  The <Back to Contents> link at the end of each section will bring you back here.

The Statement of the Problem

Here is his statement of the problem.

We are about to replace existing domestic hot water heaters 3×2 Millions BTU. We are operating rental hot water heaters for now; costing 10K per month. Owners want to replace existing boiler like for like including 3 storage tanks (900 Gal). The existing heaters (boilers) are 200’ across the street on the garage. There are 4” supply and 2”return pipes 200’ each.

I want to move these boilers inside hotel saving these 400’ pipe loss for 2.2 Million Gal per year use of domestic hot water for a year. The loss in temperature are 2⁰F from existing boiler to water mixing valves inside the hotel. I wanted to calculate $ dollars loss per year. I also want to replace existing boilers which uses storage tanks with instant hot water tankless boilers.

<Return to Contents>

A Very Long Heat Exchanger

One way to think of the 400 feet of pipe, even if it is insulated, is as a very long heat exchanger.  As a result, the waterside load equation that I suspect many of you will recognize comes into play.

Water Side Load

What that relationship is saying is that water flowing in a pipe represents energy.   The amount of energy is a function of:

  1. The specific heat of the water (i.e. the Btu’s it takes to change a pound of it 1°F) which is part of the units conversion constant of 500, and
  2. The flow rate, which is expressed as a volumetric flow rate in the equation since we tend to think in terms of gallons per minute vs. pounds per minute (the numbers wrapped up in the “500” term also account for this), and
  3. The temperature change; i.e. if water has a specific heat of so many Btu’s per pound per degree F and the temperature changes so many degrees F, for a given number of pounds of water, you can figure out how many Btus were added or removed.

(If you are curious about the details behind the “500” term, this blog post will provide some additional insight).

Unit Conversion Constants; The Answer to Deriving the 500 in Q = 500 x gpm x delta t

Thus, to calculate the energy savings, you would need to know the flow rate in addition to the temperature change.
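In code, the waterside load relationship is a one-liner; the flow rate below is an assumed value, used only to show the arithmetic:

```python
# The waterside load relationship in code form: Q (Btu/hr) = 500 x gpm x deltaT.
# The 500 rolls up water's density, specific heat, and the gpm-to-lb/hr
# conversion (8.33 lb/gal x 60 min/hr x 1 Btu/lb-F, rounded).

def waterside_load_btuh(gpm, delta_t_f):
    return 500.0 * gpm * delta_t_f

# Example: the observed 2 deg F drop at an assumed 20 gpm average flow
print(waterside_load_btuh(20, 2))  # -> 20000.0 Btu/hr
```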

<Return to Contents>

Conduction Losses from the Pipe

The reason the temperature drops between the boilers and the mixing valve is, as you probably suspect, the heat transfer from the water through the insulation to the ambient environment.  The basic relationship for that is as follows:

Heat Transfer - Conduction - Basic

For a cylinder with insulation, that gets more complex because the area that the energy is flowing through gets larger as you move out, away from the pipe surface as illustrated below.

More specifically, if you consider an insulated pipe and projected the area outlined in red on the pipe in the second picture out through the insulation, there would be more square inches of area at the insulation surface (the white area in the third picture) than at the pipe surface (the red area).

clip_image006clip_image008 clip_image010

As a result, if you had to do the math, it gets more complex.

Heat Transfer Through Cylinder

But don’t let that scare you.  There is a free software tool you can use that will do that math for you, which I will show further on.  It’s just something to be aware of.
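If you are curious what the cylindrical math looks like, here is a simplified sketch that considers only the insulation layer, ignoring the pipe wall and the inside and outside film resistances (reasonable when the insulation dominates the gradient).  The conductivity value is an assumed one for fiberglass, so use the free tool mentioned above for real numbers:

```python
import math

# Simplified sketch of the radial conduction math for an insulated pipe:
# heat loss per foot of pipe through a single cylindrical insulation layer,
# neglecting the pipe wall and the surface film resistances.

def loss_per_foot(t_water_f, t_ambient_f, pipe_od_in, insul_thick_in, k):
    """Btu/hr per foot of pipe; k in Btu-in/(hr-ft2-F) for the insulation."""
    r1 = pipe_od_in / 2.0
    r2 = r1 + insul_thick_in
    k_ft = k / 12.0  # convert k to Btu/(hr-ft-F)
    # Q/L = 2 * pi * k * (T1 - T2) / ln(r2/r1) for a cylindrical shell
    return 2.0 * math.pi * k_ft * (t_water_f - t_ambient_f) / math.log(r2 / r1)

# Assumed inputs: 140F water, 75F ambient, 4.5 in OD pipe, 1.5 in of
# fiberglass with k ~ 0.26 Btu-in/(hr-ft2-F)
print(round(loss_per_foot(140, 75, 4.5, 1.5, 0.26), 1), "Btu/hr per foot")
```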

<Return to Contents>

A Steady State 2°F Temperature Drop

From a pure physics standpoint, for the run of pipe under consideration, this means that if the loss from the existing boiler location to the mixing valve location was always 2°F, then:

  1. Either the flow rate (the amount of mass losing energy) is constant, or
  2. The water temperature and ambient temperature (the drivers for the heat transfer) are always constant, or
  3. The variation in flow rate and the variation in temperature difference just happen to be such that the higher losses associated with low ambient temperatures occur exactly in an inverse relationship with the times when the flow rate is low, and vice versa.

None of those things are likely true (this is assuming the piping is running through an unconditioned garage, so the temperature in the garage varies with outdoor temperature).

In other words, even though my friend observed the 2°F temperature drop and it may represent the average, it probably varies with the ambient temperature and the flow in the pipe.  

You could probably validate that with some data logging.  But if you did, you would want to be sure to do a relative calibration of the temperature sensors, because we are talking about small differences in temperature, and install them so they were not influenced by the ambient environment themselves.

The relative calibration topic is covered in this blog post.

Relative Accuracy

Measuring a surface temperature accurately is covered in this blog post.

Measuring Pipe Surface Temperature

<Return to Contents>

One Potential Calculation Approach

One way to approach this would be to assume an average temperature drop and an average flow rate and use the waterside load equation above.   And while the assumption of an average temperature drop may be reasonable, coming up with an average flow rate for a domestic water load is probably pretty challenging, at least given the variability I have seen when I have logged it or a proxy for it, or watched what was going on in a hotel.

The image below is an example for a conventional booster pump where the pumps run at constant speed against a pressure regulating valve.i

image

In other words, overnight, there is virtually no load and flow other than the recirculation flow and associated losses in the piping network.   But you will likely see a big peak early in the morning as folks get up and shower for the day and food services get underway, then a drop to a baseline level associated with the daily operating profile and driven by consumption in the kitchens, meeting room lavatories, and day time guest room occupancy.

There will be smaller peaks around the noon hour as folks return to their rooms to clean up and food services start to do lunch, and near dinner time for a similar reason with a final peak late evening.  After that, the consumption drops to virtually nothing again overnight.ii

As a result, it could be challenging to come up with a number that represented truly average flow conditions, especially for a hotel, where the magnitude of the baseline and peaks will also vary with occupancy.  So, what is a mother to do?

<Return to Contents>

Using a Professional Grade Electronic Psych Chart as a Resource

Well, interestingly enough, this is where the professional versions of the electronic psych chart tool I discuss in the blog post titled A Free Electronic Psych Chart and How to Use It to Plot Basic HVAC Processes can come in handy.  There is also a self study video that shows how to use the chart, including a section where I demonstrate this, at this link.

https://www.av8rdas.com/loads-and-psychrometrics.html

As a side note, the “original” of this particular post was an e-mail to a class I am helping to support and as part of their training package, they are provided with a version of the chart I discuss in the blog post referenced above that is branded with their training program logo.  

But it is functionally identical to the chart I illustrate in the blog post so the images below and techniques will apply if you are using that chart or any of the Hands Down Software based charts, including the Akton chart and the ASHRAE chart.

Returning to the topic at hand, you may be wondering what this discussion has to do with psychrometrics, and the answer is “not much”.  But, as you may recall from the blog post about the electronic chart, if you upgrade it, it includes weather data. 

I should also mention that there are other ways to come up with the TMY or similar weather data we will use in this example.   I discuss many of them and provide links to the data sets in an ASHRAE Engineers Notebook article I wrote fairly recently.  You can find the article and related links here.

https://www.av8rdas.com/ashrae—engineers-notebook.html#TMY

In any case, this is what the data looks like for Addison Airport, which is the closest location in the chart database relative to Plano, Texas, where the project was located.

clip_image014

So, if we export that data to a spreadsheet we might be able to use it to answer this question since the losses from the pipe will vary with the ambient temperature (again, I am assuming the pipe runs through an unconditioned garage, but even if it didn’t and the ambient environment was more steady state, you could still use the technique I will illustrate). 

The fundamental premise and assumptions behind this approach are as follows:

  1. The waterside load relationship mentioned previously applies.  For the purposes of our discussion, the load (Q) is the energy transferred by our heat exchanger, a.k.a. the length of pipe.
  2. The energy loss from the run of pipe is directly related to its length, the insulation, the temperature of the water, and the ambient temperature in the location the pipe runs.
  3. The energy loss from the pipe is not very much affected by the flow rate.  Granted, the heat transfer coefficient between the water and the wall of the pipe will vary with the flow.  But if the pipe is insulated, the largest part of the thermal gradient is through the insulation, and the heat transfer coefficient/resistance at the boundary layer between the water and pipe wall is somewhat inconsequential.
  4. For a pipe that is warm relative to the ambient environment, the energy loss from the pipe will be reflected as a temperature drop along its length, with the warmest part of the pipe being the point where the flow enters it.  If the flow rate, water temperature and ambient temperature are steady, the temperature drop will be constant.  If the flow rate and water temperature are steady but the ambient temperature varies, the temperature drop will vary inversely with the ambient temperature.
  5. Cooler ambient temperatures will result in more energy loss and a bigger temperature drop.
  6. Warmer ambient temperatures will result in less energy loss and a smaller temperature drop.
  7. If the water temperature and ambient temperature are steady but the flow rate varies, then the temperature drop will vary inversely with the flow rate.
  8. Lower flow rates mean there is less mass for the fixed amount of energy (that is being transferred by the constant temperature difference over a fixed length and area) to be removed from and, as a result, the amount of energy removed per unit mass has a bigger impact on the energy content of the water when it reaches the end.  The lower energy content per unit mass (compared to a high flow rate) is reflected as a larger temperature drop than would exist at a higher flow rate.
  9. Higher flow rates mean there is more mass for the fixed amount of energy to be removed from and, as a result, the amount of energy removed per unit mass has a smaller impact on the energy content of the water when it reaches the end.  The higher energy content per unit mass (compared to a low flow rate) is reflected as a smaller temperature drop than would exist at a lower flow rate.

That final point can seem wrong or counter-intuitive.   We are accustomed to large loads having large temperature drops associated with them.   But when that is true, they also have large flow rates associated with them.
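You can see points 7 through 9 numerically by holding the pipe loss fixed and backing the temperature drop out of the waterside load equation; the loss value here is just an assumed number for illustration:

```python
# A quick numeric check on the flow vs. temperature drop relationship:
# hold the pipe loss fixed and solve Q = 500 x gpm x deltaT for deltaT.
# The drop varies inversely with flow.

PIPE_LOSS_BTUH = 7000.0  # assumed steady loss from the run of pipe

def temp_drop_f(gpm):
    return PIPE_LOSS_BTUH / (500.0 * gpm)

for gpm in (5, 10, 20, 40):
    print(f"{gpm:>3} gpm -> {temp_drop_f(gpm):.2f} F drop")
```

Doubling the flow halves the drop, which is why the observed 2°F drop by itself does not tell you the load; you need the flow too.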

It might help to consider a limiting condition.   What if there was no flow in the pipe from the boiler location to the mixing valve location? 

In this scenario, the boilers would fire to keep themselves hot and the pipe connected to them, by virtue of conduction, would be hot.  A thermometer at that end of the pipe would show that it was in fact hot there.

But, if there was no flow in the pipe between the boiler and the mixing valve, then nothing would be moving the energy into the pipe.  The water that was in the pipe, while initially warm due to a previously existing flow, would cool off (assuming the ambient environment was cool relative to the temperature of the pipe).

The rate at which cooling would happen would be non-linear because initially when the water was hot, the temperature difference would be high.  But as the water cooled off and the temperature of the pipe dropped while the ambient environment remained fairly steady (we are assuming it is acting as the proverbial “infinite sink”) the driving force removing the energy from the fixed amount of water in the pipe would drop off as would the heat transfer rate.

At some point, there would be no temperature difference between the water in the pipe and the ambient environment. A thermometer in the pipe at the mixing valve would show the water at the ambient temperature, and thus, a large temperature difference compared to the thermometer at the boiler end of the pipe.

But since there is no flow, there is no load at all and no loss of energy from the pipe anymore, even though there appears to be a very large temperature difference.
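The non-linear cool-down in the no-flow case follows Newton's law of cooling.  Here is a sketch of that limiting condition; the starting temperature, ambient temperature, and time constant are all assumed values:

```python
import math

# Sketch of the no-flow limiting case: the trapped water cools toward
# ambient exponentially (Newton's law of cooling). The time constant tau
# is assumed; in reality it depends on the pipe, insulation, and water mass.

def water_temp(t_hours, t_start=140.0, t_ambient=60.0, tau_hours=6.0):
    return t_ambient + (t_start - t_ambient) * math.exp(-t_hours / tau_hours)

for h in (0, 3, 6, 12, 24):
    print(h, "hr ->", round(water_temp(h), 1), "F")
```

Note how the temperature difference driving the loss shrinks as the water cools, so the rate of cooling tapers off as the water approaches ambient.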

<Return to Contents>

Using the Hourly Weather Data

O.K., enough about that.

What we are going to do is use the hourly weather data to calculate the loss from the pipe for each hour of the year, assuming the unconditioned garage temperature tracks the outdoor air temperature.

If the garage was conditioned – for instance, held above 40°F – then you could still use this technique, but for hours below 40°F you would calculate the energy loss associated with an ambient environment at 40°F, which could be done in a number of ways with formulas in Excel.

If the pipe ran through a conditioned area where the ambient temperature was relatively constant, then you don’t need to do the hourly calculation at all.  You just need to figure out what the energy loss was for a given temperature difference between the water in the pipe and the ambient temperature and multiply it by the hours in the year (which assumes the pipe always has a bit of flow in it;  enough to keep it at or near the boiler water temperature).

The Trick

The trick is that you have to figure out the energy loss through the insulation, which gets into that somewhat scary looking equation I showed earlier.   But the good news is that the North American Insulation Manufacturers Association has a free application called 3E Plus that you can use to figure that out.   I have the desktop version but they just recently released an online version.

https://insulationinstitute.org/tools-resources/free-3e-plus/

If the ambient temperature and water temperature really were relatively constant, then you would just need to figure out the loss per foot in Btu/hr, multiply it by the feet of pipe, and multiply that by the hours in the year.

Here is what that looks like using the desktop version and assuming:

  1. 140°F water
  2. 75°F ambient environment
  3. 4” line size
  4. 1-1/2” of fiberglass insulation

image

image

Fixed Ambient Results

If you have an ambient temperature that varies (our case since we are assuming the garage temperature follows the outdoor temperature) or a water temperature that varies (which is what you would be looking at if you were trying to understand the energy savings associated with implementing a reset schedule), then you need to use 3EPlus to come up with the loss at a number of conditions and then do a curve fit.    So that is what we will do next.

Doing a Curve Fit For Loss vs. OAT

More specifically, we will assume a constant water temperature and then look at how heat loss varies as ambient temperature varies using 3EPlus.   I started by looking at 4 points spread across the typical temperature range that Plano sees in a year using 3EPlus to calculate the loss. 

If the four points had shown that the relationship was non-linear, then I might have elected to add a few more points to define the curve.  But in this case, it appears to be a linearish relationship, which is what you would expect.  So I just used the four points and plotted loss as a function of OAT.  Then, I used Excel’s trendline function to fit a curve to it as shown below.

clip_image024

Now I have an equation that lets me calculate the loss that will occur (y in the equation) if I know the OAT (x in the equation).  Since I know the typical OAT for each hour of the year from the TMY3 data file I exported from the psych chart, that means I can use the equation to calculate the loss for each hour.

Since the loss is in Btu/hr/foot, it represents the total Btu loss for one foot of pipe for the hour (Btu/ft/hr x hr = Btu/ft).  Thus, I can multiply the hourly value by the length of pipe and come up with the loss for each hour for the run of pipe.  If I add those losses up, that is the loss for a typical year, assuming the temperature around the pipe varies just like the OAT and the temperature of the water is steady at 140°F.
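Here is a sketch of that curve fit and hourly roll-up in code.  The four (OAT, loss) points and the short list of hourly temperatures below are made-up stand-ins for the real 3E Plus output and the 8,760 hour TMY3 export, so the numbers are illustrative only:

```python
# Sketch of the curve fit and hourly roll-up with made-up numbers standing
# in for the 3E Plus results and the TMY3 export.  Fits loss (Btu/hr/ft)
# vs. OAT with least squares, then sums the hourly loss over the run.

# Four (OAT F, loss Btu/hr/ft) points "from 3E Plus" -- illustrative values
points = [(20, 30.0), (50, 23.0), (80, 15.5), (110, 8.5)]

# Least-squares linear fit (y = m*x + b), same as Excel's linear trendline
n = len(points)
sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - m * sx) / n

PIPE_LENGTH_FT = 400
hourly_oat = [45, 52, 60, 71, 83, 78, 65, 50]  # stand-in for 8,760 TMY3 hours

annual_btu = sum((m * oat + b) * PIPE_LENGTH_FT for oat in hourly_oat)
print(f"loss = {m:.3f} * OAT + {b:.2f} Btu/hr/ft")
print(f"total for {len(hourly_oat)} hours: {annual_btu:,.0f} Btu")
```

With the full 8,760 hour temperature column in place of the short list, the sum is the typical-year loss for the run.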

The Bottom Line

Here is that bottom line.

Annual Loss

<Return to Contents>

Performing a Sanity Check

There are a few cross-checks you can do to see if the bottom line I arrived at above seems reasonable.  That sort of step is really important;  I can still see and hear Mrs. Mack, my first grade teacher, reminding us to check our math, a message that continued to be voiced by my mentors through college and beyond.

To check the result I came up with to see if it was believable, I did some math based on assuming a ground water temperature (i.e. the temperature that the cold water make-up coming into the domestic hot water heating system would be at), a supply water temperature from the mixing valves to the showers, and a 2 gpm shower.  Then, I varied the number of showers from 1 to 404 (the room count for the facility, which I thought might represent the maximum number of concurrent DHW loads worst case).  Here is how that ended up.

Checking Math

To me, the numbers seem reasonable given what I know about the situation.  They do imply that the observed 2°F temperature drop is associated with a fairly low activity level.  That also seems believable to me, given how sporadic the DHW load can be and that the peaks will often occur at odd hours.  Plus, there is the relative accuracy issue.
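For anyone who wants to replicate the cross-check, here is a sketch of the underlying arithmetic.  The 500 factor is 8.33 lb/gal x 60 min/hr for water at roughly 1 Btu/lb·°F; the ground water and mixed shower temperatures below are assumed values, not the ones from the actual facility.

```python
# Rough DHW load cross-check, per the reasoning above.
# 500 = 8.33 lb/gal x 60 min/hr for water at ~1 Btu/lb/degF.
# Temperatures and gpm are assumed illustrative values.

def dhw_load_btuh(showers, gpm_per_shower=2.0, t_ground=55.0, t_shower=105.0):
    """Instantaneous DHW heating load for a number of concurrent showers."""
    return showers * gpm_per_shower * 500.0 * (t_shower - t_ground)

for n in (1, 100, 404):
    print(n, "showers:", round(dhw_load_btuh(n)), "Btu/hr")
```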

<Return to Contents>

A Few Caveats

Here are a couple of other things to consider in this situation.

The Average Water Temperature for the Run of Pipe

For my example, I used a water temperature of 140°F to estimate the losses for the entire length of the run.   The reality is that:

  • Since the loss is a function of the water temperature, and
  • Since the loss will tend to decrease the water temperature over the length of the run,

then, if the temperature drop for the run is a large number, I will be better off using the average water temperature for the length of the run vs. the entering water temperature.
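Here is one way to sketch that refinement: start from the entering temperature, estimate the drop, and then recompute the loss at the midpoint temperature.  The loss coefficient, flow, and length below are all hypothetical.

```python
# Sketch: iterate between the loss and the temperature drop so the loss
# ends up evaluated at the average (midpoint) water temperature.
# The loss coefficient, flow, and length are hypothetical.

def loss_per_foot(t_water, t_ambient):
    """Assumed loss rate, Btu/hr/ft, proportional to the driving delta-T."""
    u_per_ft = 0.5  # hypothetical Btu/hr/ft per degF for the insulated pipe
    return u_per_ft * (t_water - t_ambient)

def run_loss(t_enter, t_ambient, length_ft, gpm, passes=5):
    """Iterate: loss -> temperature drop -> average temperature -> loss."""
    t_avg = t_enter
    loss = drop = 0.0
    for _ in range(passes):
        loss = loss_per_foot(t_avg, t_ambient) * length_ft  # Btu/hr for the run
        drop = loss / (gpm * 500.0)                         # degF drop over the run
        t_avg = t_enter - drop / 2.0                        # midpoint temperature
    return loss, drop

loss, drop = run_loss(t_enter=140.0, t_ambient=50.0, length_ft=500.0, gpm=5.0)
print(round(loss), "Btu/hr,", round(drop, 1), "degF drop")
```

If the drop turns out to be small, the answer will be nearly identical to the entering-temperature calculation, which is why the simpler approach worked for my example.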

Plate and Frame Heat Exchangers as Instantaneous Domestic Hot Water Heaters

The question that triggered the discussion involved moving boilers that were dedicated to domestic hot water from a remote location to a location that was in closer proximity to the loads they served.  But, one of the options the engineer who asked the question was looking at was using instantaneous hot water heaters vs. the storage tank based system that was currently in place. 

If the instantaneous heaters took the form of plate and frame heat exchangers fed from the heating hot water system, it is important to recognize that doing that means you cannot lower the Heating Hot Water (HHW) supply temperature below something above the desired Domestic Hot Water (DHW) supply temperature.   That could be a limitation on savings in other areas.  For instance, you could not implement a reset schedule that lowered the HHW supply temperature below what it took to create 140°F DHW leaving the heat exchangers, even if you discovered that during the summer months, you could do reheat with HHW that was in the 85 – 100°F range (this paper will give you some insight into that).

As a result, you may want to consider a configuration that dedicates a piping circuit to the DHW heaters (vs. having them be across the mains someplace) so you can run that circuit at the temperature required to supply DHW while running the actual HHW system for the heating loads (reheat, preheat, space heat) at whatever temperature makes sense for the current conditions and can be tolerated by the prime mover in the system.

Gas or Electric Fired Instantaneous Domestic Hot Water Heaters and Low Flow Safety Interlocks

If you are considering electrically or gas fired instantaneous heaters, there can be a minimum flow they will support.  Meaning, if the flow rate is below that minimum flow rate, they will not fire.  This can be more of an issue with residential applications, but it could come up in some commercial settings.

In other words, because of turn down or safety limit considerations, you could have a DHW flow demand that was too low to allow the heater to fire.  The water would certainly flow when someone opened the tap, but it would not be heated, so no DHW.  That means that some sort of storage/buffer tank may still be desirable, even if you are using instantaneous heaters.

Domestic Hot Water Tank Insulation

If you have a storage tank based system, the cost of hyper-insulating the tanks so there are relatively insignificant storage losses can be attractive compared to the infrastructure needed to support an instantaneous heater, in particular, the electrical distribution required to support an instantaneous electric heater vs. an electric storage heater.

David-Signature1_thumb_thumb_thumb

PowerPoint-Generated-White_thumb2_thDavid Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/

i.     This type of pump is a good energy conservation candidate because the flow is varied by the throttling action of the pressure regulation valve, moving the pump up and down its constant speed impeller curve.   Retrofitting to an approach that uses VFDs, either by adding them to the existing booster pump system or replacing it with a new, VFD equipped pump skid, saves energy because the flow is varied by moving the operating point up and down the system curves associated with the different load conditions that occur in the system.  It is not unusual for utility companies to offer an incentive for an upgrade like this.

ii.   You may be wondering why the booster pumps continue to draw amps when the water consumption is low or virtually nothing, like overnight.   That is because pump power is related to both the flow and the pressure produced by the pump, as well as the efficiency of the pump, motor, and drive if present, as illustrated below.

Since the pump is still producing pressure, even though it is not producing flow, it will still use power.  Contributing to this is the fact that the pump and motor efficiency at this condition will tend to be very low.  So, a higher percentage of the power consumed is associated with efficiency losses relative to what happens under higher flow conditions.

kW Into a Pump
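As a sketch of that relationship, the code below computes pump input power as water horsepower divided by the pump and motor efficiencies, converted to kW.  The operating points and efficiency values are hypothetical; the point is that near shut-off, low flow combined with very poor efficiencies still yields a meaningful kW draw.

```python
# Pump electrical input power: water hp / efficiencies, converted to kW.
# Flows, heads, and efficiencies below are hypothetical examples.

def pump_input_kw(gpm, head_ft, pump_eff, motor_eff, sg=1.0):
    """Electrical input to a pump and motor, in kW."""
    water_hp = gpm * head_ft * sg / 3960.0
    return water_hp / (pump_eff * motor_eff) * 0.746

# Near shut-off: little flow, full pressure, poor efficiencies ...
low_flow = pump_input_kw(gpm=20, head_ft=120, pump_eff=0.15, motor_eff=0.70)
# ... vs. design flow with good efficiencies
design = pump_input_kw(gpm=300, head_ft=100, pump_eff=0.75, motor_eff=0.92)
print(round(low_flow, 2), "kW near shut-off vs.", round(design, 2), "kW at design")
```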

Posted in Boilers, Hot Water Systems, and Steam Systems, Excel Techniques, HVAC Calculations, HVAC Fundamentals, Steam Systems | Leave a comment

Some Psychrometric Self Study Resources

A while back, I wrote a blog post titled A Free Electronic Psych Chart and How to Use It to Plot Basic HVAC Processes.   I use that tool frequently for the classes I help support and recently, with support from the Pacific Energy Center, Marriott, and CERL, developed some self study content that will help you learn to apply it and give you some exercises you can do to test your knowledge.  I have posted them both to a page on the On Demand Training section of the Commissioning Resources website.

Hot Humid OA PEC

The first video and related resources are a sort of beta test for the content, which will eventually end up on Pacific Gas and Electric Company’s learning platform.   But I have been using it to support a couple of classes now and it seems to be helpful, so I thought I would share it here.

The goal is to provide an overview of the topics listed in the title so that attendees in the classes can start to become familiar with them and apply the concepts as they work on existing building commissioning (EBCx) projects.  Generally, when we ask them to do this particular self study content, they are at the point in the class where they have:

  • Picked a facility they want to do a personal project in, and
  • Studied its utility consumption patterns, and
  • Started getting ready to scope the facility for EBCx opportunities.

So the content is presented in an EBCx context.  But since it is based on fundamental principles, it generally applies across the board.

The video starts by providing you with some resources pertinent to the topic.  It then explores:

  • Some basic definitions associated with loads, like sensible and latent load
  • Load dynamics including the impact of climate, time of day and thermal lags
  • Common psychrometric parameters like dry bulb and wet bulb temperature
  • What the lines on the psych chart mean and how to read data
  • How to use the sensible heat ratio axis in the basic version of the chart and the pro version of the chart
  • What apparatus dew point is
  • How to use some of the advanced features of the professional version of the chart like the design data tool, bin data plots, exported TMY data, working with comfort zones and defining your own zones

At the end of the video, you will be linked up to an MS Forms based quiz that you can take to test your new-found knowledge.  If you get a question wrong, you will be given some guidance about why it’s wrong and can try again.

Psych Loads Envelopes v1

When you make it to the end, you can claim an exciting merit badge which has absolutely no value but may make you feel good about the whole thing.  (Actually, for some classes, we ask the attendees to submit the merit badges as proof of having done the self study, so if you decided to take one of the classes I help with, it could be worth something).

PG&E Lobby Answer

The second video is intended to be a self guided exercise that shows you how to use a psych chart and some basic equipment performance information to estimate the outdoor air design conditions used by the system/building designer and the minimum outdoor air percentage for an air handling system.

In the video, I demonstrate the steps in the process, allowing you to pause the video between each step to either try it yourself or use the technique I demonstrated to answer a question.  When you resume, I show the answer and then demonstrate the next step. 

At the end of the video, you are given a similar problem to try totally on your own along with a link to a quiz where you can enter your answer and see how you did.

Good luck!

David-Signature1_thumb_thumb_thumb

PowerPoint-Generated-White_thumb2_thDavid Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/

Posted in Air Handling Systems, Economizers, HVAC Calculations, HVAC Fundamentals, Psychrometrics | Leave a comment

An Interesting Psychrometric Process

2022-10-04 – Author’s note:  In reviewing this post yesterday to answer a question that came up, I discovered that some of the psych chart images had their quality degraded for some reason.  So, I have replaced them, and I believe everything is now legible.

In answering the question, I also realized that I needed to mention one additional consideration that you would want to address if you used the process discussed:  that good mixing – which is always important – becomes even more important because of the lower set points used in this process.  So I added a paragraph about that when I re-posted.

O.K.

I realize that for most normal people the word “interesting” could in no way whatsoever be associated with the words “psychrometric process”.  As I often tell folks,

When I say “interesting”  you can (and probably should) add the words “in a nerdy sort of way” to the end of my sentence.

That is the case here, so having given fair warning, I am going to proceed.

Some Background

As some of you likely know, I occasionally write for the Engineers Notebook column in the ASHRAE Journal, usually about twice a year.  Last April, I wrote a column titled The Perfect Economizer, which was actually the trigger for the blog post series I am currently working on (and lagging behind on).  In any case, the magazine received a letter to the editor in response to it from Mr. C. Mike Scofield, PE, ASHRAE Fellow, President of Conservation Mechanical Systems, Sebastopol, California.

In it, he presented an interesting system configuration and psychrometric process and wondered if I had seen it applied in Portland, which I had not.  My editor asked me if I would mind responding to Mike’s question, and I did (published in the September ASHRAE Journal).

If you don’t receive the Journal, you may want to refer to a copy of the letter and my response that I have posted along with the copy of the article on our Commissioning Resources website since the discussion sets the stage for what follows.

What follows is an edited version of the correspondence between Mike and myself subsequent to my initial published response.   That happened because I became curious about the details of the process he had plotted on the psych chart he provided and I wanted to understand it better.

Once I understood it, I realized that it was a very clever process, but also an interesting psychrometrics exercise because it makes you think outside the box a bit compared to the psychrometrics of a conventional system.  So, I asked Mike if he would mind co-authoring this blog post with me to go into the details of the process so folks could learn from our discussion and he graciously agreed.

This will get a bit long (as usual).  The links below will allow you to focus in on the specific content of interest.  Each section has a “Back to Contents” link that will return you to this point.

A Few Resources

The process Mike asked about in his correspondence involves evaporative cooling and humidification.  Evaporative cooling is a constant wet bulb process and you can simply accept that as being true.  But if you want to understand it in more detail, along with the related concept of adiabatic saturation, I wrote a blog post that explores evaporative cooling in detail, including adiabatic saturation and wet bulb temperature that you can refer to.

If you want to work along with what follows on a psych chart of your own, you can download a free version of an electronic psych chart that Ryan Stroupe of the Pacific Energy Center has made available from the link in this blog post.  In addition to providing links to the chart, the post illustrates how to plot basic psychrometric processes and also illustrates the features associated with upgrading the chart to the professional version.  The process plot examples can also be used if you are working with a paper chart; you simply need to plot the points manually on paper vs. using the tool in the electronic chart to enter them.

Alternatively, I uploaded a blank .pdf chart to the page associated with the Perfect Economizer article on our Commissioning Resources website.  There is nothing wrong with using a paper chart.  Mike himself is a self-confessed paper chart and slide rule guy, and I did things that way myself for a long time.   In fact, I still carry my slide rule around, partly for nostalgia, partly to show folks who have never seen one, and if push comes to shove, no batteries required!

Slide Rule 01

But the electronic chart does have some benefits in terms of being easily reproducible in things like this blog post and other tools that it includes, like the ability to plot TMY data as bin data on the chart, which gives you a “visual” on the climate you are considering.

If you are just learning about psychrometrics and using the psych chart, you may also find the chapter on Psychrometrics in the Honeywell Gray Manual to be useful.  And there are a number of slides in resource provided on the Useful HVAC Equations and Concepts page of the Commissioning Resources website that deal with the psych chart and basic psychrometric parameters.

<Return to Contents>

The System and Psych Chart

Here is the system AHU configuration and psych chart that Mike sent with his letter.

System and Chart

Mike’s written description of the illustration was as follows:

Has your team installed and tested a WB airside economizer using a high saturation efficiency (97% to 99% RH) rigid media adiabatic evaporative cooler/humidifier (AC/H) to mix building return air with outdoor air to produce a supply air dew point that ranges between 45°F DP to 55°F DP during cold and dry ambient conditions?

The psychrometric chart shows a VAV system at 50% fan turndown with an assumed minimum 25% outdoor air to meet code ventilation requirements.  The high saturation efficiency, at fan turndown to 50% flow, ensures that the delivery DB temperature off the AC/H is within a fraction of 1°F of both the WB and DP temperatures at the saturation curve.  A low-cost commercial-grade DB sensor may be used with acceptable accuracy in determining the delivery DP condition of the supply air.

<Return to Contents>

The Reason the System Might Be of Interest

Note that the final element in the system is the evaporative cooler/humidifier.  There are a number of reasons that a system of this type might be of interest currently.  But Mike brought it up because ASHRAE research suggests that …

… maintaining the space relative humidity between 40% and 60% decreases the bio-burden of infectious particles in the space and decreases the infectivity of many viruses in the air.

One place you can find this is in the ASHRAE Building Readiness information published by the ASHRAE Epidemic Task Force.  It is also discussed in the ASHRAE Position Document on Infectious Aerosols (see page 8).  And I suspect folks with a healthcare background were not surprised by this, since maintaining humidity levels in that range in a health care environment has been a requirement for quite a while for the reason indicated.

But COVID has brought that to the forefront as something that might be considered more generally by designers, and in that context, I suspect the system configuration Mike suggested may merit consideration as long as due consideration is given to the application issues the committee mentions in the Journal’s May 2021 IEQ Applications column.  For instance:

  1. Is the building envelope suitable for an indoor environment with a higher than typical humidity level?  Or will condensation on surfaces or inside building assemblies become an issue?
  2. What will the water that is consumed cost?  This will likely vary significantly with the nature of the climate and the local rate structure.
  3. Related to item 2, does the utility offer a sewer charge credit for water that is supplied to the facility but not discharged to the sewer?  The sewer charges can be as much as or more than the water charges, so having a credit of this type can make a big impact for evaporative processes like the one we are discussing.
  4. Also related to item 2, what will the parasitic losses associated with the added pressure drop in the system and the operation of the evaporative cooler pump cost? 
    • In addition to varying with climate and rate structure, the pressure drop loss will vary with the flow rate. For a constant volume system, this could be significant.  But,
    • For a variable volume system with a lot of part load hours, this may not be as big a factor as it seems due to the square law relationship between flow and pressure drop.

COVID and infection control issues aside, there are other reasons you might consider applying this approach.   When I did a quick survey of the company to see if anyone had seen the configuration Mike proposed, it turned out that we had.  But the applications were driven by the nature of the load and included automotive paint booths, server rooms, and museums. That’s not to say the concept does not have merit for the reason Mike pointed out. It just means that the folks I work with and I have not seen it applied for that reason (yet).

<Return to Contents>

Taking a Closer Look at the Process

Finally, the part you have all been waiting for.  To get started I want to clarify a few of the assumptions and details behind what Mike presented.

Process Analysis Assumptions and Details

There are a number of things you need to understand for the discussion of the process to make sense.  But if anyone is still actually reading at this point, and if said person can hardly wait to read the process discussion and feels fairly comfortable with psychrometrics, then said person may want to skip this section and jump straight to the discussion of the process itself.

Having said that, the following paragraphs kind of lay a foundation for the discussion of the process.

The Line on Mike’s Chart is the Result of a Bunch of Processes, Not a Single Process

Probably the most important thing to recognize is that the heavier black line Mike drew on the psych chart was not one specific psychrometric process.  Rather, it is the locus of points representing the leaving conditions from the evaporative cooler that will be produced by a system configured and controlled as he proposed as the outdoor conditions varied.  I did not realize this initially, and it is an important point to recognize.

In the course of what follows, Mike and I identify specific points on this line for specific indoor and outdoor conditions.  The hope is that this will allow you to “connect the dots” and understand the locus of points that Mike presented, which is what it did for me.

<Return to Contents>

The Air Inside the Building Came from Outside the Building

In some ways, this is obvious.  But there is an implication to it that I want to highlight, that being that the lower limit on the moisture level in the building is most likely set by the ambient moisture level outside the building. 

In other words, most processes that occur in buildings add moisture to the air.  Since the air inside the building comes from outside, then the moisture added in the building will tend to raise the dew point and specific humidity of the air inside the building.

There can be exceptions to this.   For instance:

  • If the facility was hosting a desiccant manufacturers product showcase and all of the vendors had their wares on open display, then potentially, the moisture level inside could be reduced relative to the outside. Or, in a more realistic example,
  • For a facility that processed paper and stored the raw material in a warehouse that was maintained at a low temperature relative to the process area which was maintained at a higher temperature and actively humidified, during cold, dry weather, when the raw material was brought in, it would tend to absorb moisture and lower the indoor humidity level.

But most of the time, building processes will add moisture to the air.  We can reflect this on the psych chart using a sensible heat ratio (SHR) line.  The SHR is the ratio of the sensible energy (heat that changes the temperature) added to the air by the processes occurring in the building to the total energy added (both heat and moisture in the form of water vapor, the latter increasing the specific humidity).

A SHR of 1.0 means there is no moisture being added to the air.  Increasing latent loads cause the SHR to drop away from 1.0.  The chart below illustrates several different sensible heat ratio lines plotted relative to a 72°F/50% RH space. 

SHR Example

So, for example, if an air handling system was delivering saturated 45°F air at its design flow rate to serve a design load condition for a space with a SHR of 0.9 and a set point of 72°F, then the resulting space condition would be 72°F,  42% RH.  If the SHR was 0.8, then the space condition would be 72°F, 46.8% RH.  The chart below illustrates these two processes.

SHR Example 2
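If you want to check the numbers in the example, the sketch below reproduces the SHR arithmetic using the Magnus saturation pressure approximation and common IP-unit constants (0.244 Btu/lb·°F for moist air, roughly 1,061 Btu/lb for the added moisture).  These are approximations, so expect small differences from a chart reading.

```python
import math

# Sketch of the SHR example above: 45 F saturated supply air serving a
# 72 F space with an SHR of 0.9 should land near 72 F / 42% RH.
# Uses the Magnus approximation and approximate IP-unit constants.

P_ATM = 101.325  # kPa, standard atmospheric pressure

def p_ws_kpa(t_f):
    """Saturation pressure of water vapor (Magnus approximation), kPa."""
    t_c = (t_f - 32.0) / 1.8
    return 0.61094 * math.exp(17.625 * t_c / (t_c + 243.04))

def hum_ratio(t_f, rh):
    """Humidity ratio (lb water / lb dry air) from dry bulb and RH."""
    p_w = rh * p_ws_kpa(t_f)
    return 0.622 * p_w / (P_ATM - p_w)

def space_rh(t_supply, t_space, shr):
    """Resulting space RH for saturated supply air and a given SHR."""
    w_supply = hum_ratio(t_supply, 1.0)
    sensible = 0.244 * (t_space - t_supply)   # Btu/lb dry air
    latent = sensible / shr - sensible        # total minus sensible
    w_space = w_supply + latent / 1061.0      # ~1061 Btu/lb of added moisture
    p_w = w_space * P_ATM / (0.622 + w_space)
    return p_w / p_ws_kpa(t_space)

print(round(space_rh(45.0, 72.0, 0.9) * 100, 1), "% RH")  # close to the 42% above
```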

The 45°F saturated air could be the result of any number of processes, including:

  • The leaving condition from an evaporative cooler, or
  • The leaving condition from an active cooling coil that was condensing, or
  • An air handler supplying 100% outdoor air on a foggy day.

<Return to Contents>

The Process Targets a Space Condition Window, not a Point

In the charts that follow, the trapezoid highlighted in orange represents the space conditions targeted by the process we will discuss, specifically:

  • 70-75°F dry bulb temperature
  • 40-60% relative humidity

The chart below contrasts the window targeted by the process we are discussing with the 2010 ASHRAE summer (red) and winter (blue) comfort zones.

Zones Chart

As you can see, the range we are discussing is a subset of the winter comfort zone, which is the season during which the process would be used.

While most designs target a specific point for calculation purposes, real processes operate over a range that is set by things like the tolerances on the design point and the accuracy of the control process.  In this case, the range allows the proposed process to be used over a fairly large range of climate conditions in the Portland area. 

If we narrowed the range down, either in terms of temperature or relative humidity, there would be fewer hours where we could use the process in the Portland climate, and vice versa. I believe this will become apparent as we move through the details of our discussion.

<Return to Contents>

The Evaporator Cooler will Produce Near Saturated Air

Evaporative coolers are, to some extent, field deployments of adiabatic saturators.   For a true adiabatic saturator, the leaving air at its exit is saturated, which means:

  1. The relative humidity is 100% and
  2. The dry bulb temperature, dew point temperature, and wet bulb temperature are identical numerical values.

To achieve this, among other things, a true adiabatic saturator needs to be infinitely long, which (I suspect) is one of the reasons you do not run into many of them out in the field.  For one thing, they would kind of get in the way. And for another, Owners and Architects – with some justification I might add – are somewhat opposed to infinitely long mechanical rooms.

One of the things that happens when you make your evaporative cooler less than infinitely long is that the air coming off of it is not 100% saturated.   But units can typically produce leaving air with a dry bulb temperature within 3-4°F of the wet bulb temperature under design conditions, with saturation efficiencies in the 80% – 95% range depending on the specifics of the design.[i]

If you reduce the flow and thus provide more time for the air in the evaporative cooler to be in contact with the media in the cooler, you can approach adiabatic saturation. Mike’s diagram assumed that would happen because he was modeling the application in a VAV system that was at 50% of its design flow and as a result, the saturation efficiency of the evaporative cooler would approach 100%.

The charts that follow make the same assumption for the purposes of illustration.  But a real system would generate leaving conditions that are very near but not on the saturation curve of the psych chart.  How close the leaving conditions got to saturation would depend on the efficiency of the evaporative cooler at the flow rate that existed at the time.  The approach to saturation will improve as the flow rate drops below the design value. 
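The saturation efficiency relationship described above can be sketched in a few lines; the entering conditions below are hypothetical.

```python
# Saturation (evaporative) efficiency: the fraction of the wet bulb
# depression removed by the cooler. Example conditions are hypothetical.

def saturation_efficiency(t_db_in, t_db_out, t_wb_in):
    """Fraction of the entering wet bulb depression removed."""
    return (t_db_in - t_db_out) / (t_db_in - t_wb_in)

def leaving_db(t_db_in, t_wb_in, efficiency):
    """Leaving dry bulb for a given saturation efficiency."""
    return t_db_in - efficiency * (t_db_in - t_wb_in)

# At design flow (say 90% efficient) vs. turned down toward 100%:
print(round(leaving_db(55.0, 45.0, 0.90), 1))  # 46.0, a degree off saturation
print(round(leaving_db(55.0, 45.0, 0.99), 1))  # 45.1, nearly saturated
```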

<Return to Contents>

The Chilled and Hot Water Coils are Not Active

Mike’s analysis focused on outdoor conditions when neither preheat nor mechanical cooling would be required to achieve the targeted leaving air condition.   In other words:

  • The evaporative cooling process alone could deliver the desired leaving air temperature, which in the example, ranges from about 45°F to about 55°F.
  • The outdoor conditions are such that the system was never driven to minimum outdoor air when it was cold outside, which is when preheat would be required if the outdoor air temperature continued to drop without causing the evaporative cooler leaving air temperature to drop.

How many hours this encompasses will vary significantly with climate.  In particular, the metrics Mike cites were based on assumptions about applying the process in the Portland, Oregon climate and the analysis and charts that follow use the same assumption.

<Return to Contents>

A Brief Review of Mixing on a Psych Chart

To understand the discussion that we are leading to, it is important you understand how a mixing process shows up on a psych chart, in particular that:

  1. The mixed condition for two points on the chart will lie on a line that connects them and,
  2. The mixed point will be proportionally spaced between the two points in direct relationship to the percentage of the mass flow rate associated with each of the points.

This is illustrated below for a number of different mixing percentages, temperatures and humidity levels.  Notice how the mixed temperature and its location relative to the two conditions being mixed are proportional to the outdoor air percentage and the two temperatures being mixed.

Mixing Example 25 50 75 Pct
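The proportional mixing rule can be sketched in a couple of lines of Python; the outdoor and return temperatures below are illustrative.

```python
# Mass-weighted mixing: the mixed property is the flow-weighted average
# of the outdoor air and return air values. Temperatures are illustrative.

def mixed(oa_fraction, oa_value, ra_value):
    """Flow-weighted mix of an outdoor-air and return-air property."""
    return oa_fraction * oa_value + (1.0 - oa_fraction) * ra_value

t_oa, t_ra = 40.0, 72.0
for pct in (0.25, 0.50, 0.75):
    print(f"{pct:.0%} OA mixes to {mixed(pct, t_oa, t_ra):.1f} F")
```

The same weighting applies to humidity ratio and enthalpy, which is why the mixed condition lies on the straight line connecting the two points on the chart.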

<Return to Contents>

The Mixing Dampers are Controlled by the Dry Bulb Temperature Leaving the Evaporative Cooler, Not the Mixed Air Temperature

This is really important because, as mentioned previously, for an evaporative cooling process, the leaving air is nearly saturated and as a result, measuring dry bulb temperature will also provide an indication of the wet bulb temperature and dew point temperature. 

If the air is saturated, they will be exactly the same.   If the air is near saturated, then they will be very close.   For example, if the saturation efficiency of the evaporative cooler was 95%, then the leaving wet bulb temperature would likely be within a degree or less of the leaving dry bulb temperature.

If you consider this for a minute, you will realize that for a given outdoor dry bulb temperature and a given evaporative cooler leaving air temperature set point, where the evaporative cooler leaving air dry bulb temperature is being used to control the mixing dampers:

  1. Because the air is nearly saturated, the mixed air dampers are also being controlled for a leaving wet bulb temperature that is nearly identical to the dry bulb temperature, and
  2. As a result of item 1, the mixed air dampers are also operating to maintain a fixed wet bulb temperature set point, and
  3. The amount of outdoor air brought in to the system will vary with the outdoor wet bulb temperature;  on a dry day, the system will bring in less outdoor air to achieve the required set point vs. what it will need to bring in on a moist day. 

This is illustrated in the chart below.  Note how the outdoor air percentage required to achieve the 45°F saturated leaving air dry bulb/wet bulb temperature varies with the outdoor conditions.

MAT Evap Cooler LAT Controlled
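Since evaporative cooling is nearly a constant wet bulb (and thus nearly constant enthalpy) process, one way to see why the damper position varies with outdoor moisture is to solve the mixing equation on an enthalpy basis, as sketched below.  All of the conditions are illustrative and the Magnus approximation introduces some error, but the trend matches the point above: on a dry day, the system brings in less outdoor air to reach the same set point.

```python
import math

# Sketch: the dampers must mix OA and RA to the enthalpy that the
# evaporative cooler set point implies (evaporative cooling being a
# nearly constant-enthalpy process). All conditions are illustrative.

P_ATM = 101.325  # kPa

def hum_ratio(t_f, rh):
    """Humidity ratio from dry bulb (deg F) and RH, Magnus approximation."""
    t_c = (t_f - 32.0) / 1.8
    p_ws = 0.61094 * math.exp(17.625 * t_c / (t_c + 243.04))
    p_w = rh * p_ws
    return 0.622 * p_w / (P_ATM - p_w)

def enthalpy(t_f, rh):
    """Moist air enthalpy, Btu/lb dry air (IP-unit approximation)."""
    w = hum_ratio(t_f, rh)
    return 0.240 * t_f + w * (1061.0 + 0.444 * t_f)

h_set = enthalpy(45.0, 1.0)    # 45 F saturated leaving the cooler
h_ra = enthalpy(72.0, 0.40)    # return air condition (assumed)
for rh_oa in (0.80, 0.30):     # a moist vs. a dry 30 F day
    f_oa = (h_set - h_ra) / (enthalpy(30.0, rh_oa) - h_ra)
    print(f"30 F OA at {rh_oa:.0%} RH needs {f_oa:.0%} outdoor air")
```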

The next chart illustrates what happens in a more conventional mixed air control process, where the mixing dampers are being controlled for a fixed mixed air dry bulb temperature.  Note how the outdoor air percentage does not change, even when the outdoor conditions change.

MAT Dry Bulb Controlled

<Return to Contents>

The Mixed Air Set Point is Lower than Typically Used

As you have probably observed, the 45°F supply temperature we are discussing is a lot cooler than we typically use in our systems, although you might see temperatures in this range for some special processes.[ii]

Generally speaking, running colder discharge temperatures than needed to satisfy the space dehumidification load will cost you energy when you are doing mechanical cooling. 

  1. For one thing, it will require lower refrigerant temperatures in the coils, which will tend to lower the efficiency of the compressors providing the refrigeration.
  2. For another, if the minimum flow rate provided by the terminal equipment delivers more sensible cooling than needed, you will use unnecessary reheat compared to what would happen with warmer supply air temperatures.

But, if you are not using mechanical cooling, issue 1 enumerated above goes away.  That means that as long as a lower supply air temperature does not drive zones into a reheat mode, then for a variable air volume system, there could be a fan energy benefit associated with the lower supply temperature.

In other words, if a zone required 1,000 cfm of 55°F supply air to maintain a 72°F set point, it could also maintain that set point by using about 630 cfm of 45°F air.  So, as long as:

  1. The diffusers would perform with the cooler air, and
  2. The colder distribution temperatures did not result in condensation issues on the ductwork and related hardware, and
  3. None of the other zones on the system were driven into a reheat cycle when they would not have been driven into a reheat cycle with warmer supply air,

… then fan energy will be saved.
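The flow arithmetic above, plus the fan-law implication, can be sketched as follows.  The cube-law power scaling is a rough approximation that assumes the fan stays in an efficient region of its map.

```python
# Matching the same sensible load with colder supply air takes
# proportionally less flow; by the fan laws, power then falls roughly
# with the cube of flow (a rough approximation).

def flow_for_same_load(cfm_ref, t_space, t_supply_ref, t_supply_new):
    """Scale flow by the ratio of space-to-supply temperature differences."""
    return cfm_ref * (t_space - t_supply_ref) / (t_space - t_supply_new)

cfm_45 = flow_for_same_load(1000.0, 72.0, 55.0, 45.0)
print(round(cfm_45), "cfm")                 # about the 630 cfm in the text
print(round((cfm_45 / 1000.0) ** 3, 2))     # rough fan power fraction
```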

For Mike’s idea, the colder supply temperature will translate to lower system flow rates.  This will tend to push the saturation efficiency of the evaporative cooler to higher values, which means using dry bulb temperature to control the process will provide satisfactory results without the added first and ongoing cost of some sort of humidity sensor.

<Return to Contents>

Good Mixing is Critical to Success

Achieving thorough mixing in a mixed air plenum is critical to success and is surprisingly hard to achieve.  Velocity and temperature stratification are very common, especially if you don’t pay attention to the details.  In fact, one of my current focuses on the blog is a series of posts looking at this topic.

Since a process using the approach we are discussing may use a mixed air temperature set point that is lower than typically encountered, as discussed in the preceding paragraph, ensuring that the mixed air plenum is designed to promote good mixing will become even more critical.  The most serious potential issue, of course, is a localized cold spot where temperatures could drop below freezing during extreme weather, even though the average mixed air temperature was well above freezing.

<Return to Contents>

The Process (Finally)

What follows is my transcription of the dialog between Mike and myself as we discussed the process he suggested.  At the end of it, he indicated that I had “nailed it”.  But if there are errors in the transcription that follows, they are totally on me.

For the discussion that follows, I have assumed a space SHR of 0.90.  But other SHRs (until you got pretty extreme in terms of space latent load and outside of what you would see for most commercial office buildings) would have similar results.
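As a reminder, the SHR is just the sensible fraction of the total space load.  A quick Python sketch (the load numbers are hypothetical, chosen to produce the 0.90 assumed here):

```python
def sensible_heat_ratio(q_sensible, q_latent):
    """Sensible heat ratio: SHR = sensible load / (sensible + latent) load."""
    return q_sensible / (q_sensible + q_latent)

# A space with 90,000 Btu/hr sensible and 10,000 Btu/hr latent load
# has the 0.90 SHR assumed in the discussion.
print(sensible_heat_ratio(90_000, 10_000))  # → 0.9
```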

In general terms, since the system is controlling for the temperature of near saturated air leaving the evaporative cooler:

  • The mixing point will lie on the constant wet bulb temperature line associated with the set point. 
  • The blend of outdoor air and return air required to meet set point will vary as the outdoor conditions vary, causing the mixing point to move up and down the constant wet bulb line.
  • Once the outdoor wet bulb exceeds the set point, the system will be driven to 100% outdoor air, which will cause the discharge condition from the evaporative cooler to move up the saturation curve.

The following paragraphs illustrate this in more detail. 

An Extreme Winter Portland Day

If we start with a somewhat extreme condition for Portland (based on TMY3 data) then the process looks like this.

Chart - Extreme Dry

Controlling the mixed air dampers to deliver 45°F air off the evaporative cooler puts you at about 45% outdoor air and delivers a space at the bottom end of the targeted temperature window and up a bit from the bottom end of the targeted RH window.
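In case it is helpful, here is a quick Python sketch of the mixing arithmetic behind a number like that.  It is a simple dry bulb blend (the real process is evaluated psychrometrically, which is what the chart does), and the outdoor, return, and mixed air temperatures are hypothetical values chosen to land near the 45% figure, not numbers read off the chart:

```python
def oa_fraction(t_return, t_outdoor, t_mixed_target):
    """Outdoor air fraction for a simple dry bulb mixing blend.

    Energy balance: t_mixed = x * t_outdoor + (1 - x) * t_return,
    solved for x.  A real economizer mix is evaluated psychrometrically
    (enthalpy / humidity ratio), but dry bulb shows the mechanics.
    """
    return (t_return - t_mixed_target) / (t_return - t_outdoor)

# Hypothetical numbers: 25 degF outdoor air, 72 degF return air, and a
# mixed air dry bulb target near 51 degF land close to ~45% OA.
x = oa_fraction(t_return=72, t_outdoor=25, t_mixed_target=51)
print(round(x * 100))  # → 45
```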

A Typical Portland Fall/Winter/Spring Day

If we look at what would happen if the OA was in a more typical but cold range (the left most red squares on the chart), we end up here.

Chart - Typical

We require a higher percentage of OA (83%) because it is already moist.  But since we are modulating the mixing dampers based on what happens after the evaporative cooler to maintain 45°F at that point (remember, for this discussion, because of the saturation efficiency of the evaporative cooler, 45°F dry bulb is about the same as 45°F wet bulb), we just slide up the 45°F wet bulb line and the space condition we deliver (assuming the load – sensible and latent – did not change) remains the same.

A Warm But Dry Portland Fall/Winter/Spring Day

If we look at what happens at a warmer but dry OA condition, as long as the OA dew point is below the evaporative cooler LAT set point, we still hold the same space conditions.  But this time, we need to use more OA because the OA is warmer and drier.

Chart - Warm Dry Spring

Moving from Spring to Summer (Summer to Fall Transition Similar, Just Going the Other Way)

If the OA wet bulb rises above the evaporative cooler LAT set point (which is controlling the mixing dampers), it will drive the mixing dampers to the 100% OA position and hold them there. 

The control process cannot meet its set point and, as a result, the evaporative cooler LAT rides up the saturation curve, following the outdoor air wet bulb temperature.   Here is what that looks like for a somewhat common condition with an OA wet bulb above the 45°F evaporative cooler LAT set point.

Chart - 100% OA 48 Typical

Now, the space temperature and humidity start to drift up because the evaporative cooler LAT starts to drift up, but (assuming the load did not change and the VAV system flow did not change), you are still inside the envelope you targeted.  If you really wanted a lower space temperature, you could allow the VAV system to move a bit more air.

Encountering a Limiting Condition

Once the outdoor wet bulb drifts up to 50°F, we reach the limit of what we can do with the current VAV system flow rate (50% of design) assuming the load condition did not change;  i.e. at that point the space ends up at the upper limit of the temperature window, but below the humidity limit.

Chart - Upper Limit

Allowing the System Flow to Increase

If we continue to let the evaporative cooler LAT drift up as the outdoor air wet bulb drifts up, the VAV system could still keep us in our targeted window if it increased the flow rate.  When we reached the 55°F upper limit Mike discussed (a common commercial building HVAC system leaving condition) we would end up here.

Chart - Upper Limit 55

But, if the load had not changed, we could actually allow the LAT to drift up to about 59°F before the leaving condition was outside the targeted window, assuming the VAV system is allowed to move more air to accommodate the lower LAT-to-space temperature difference.

Chart - Upper Limit 60 Pct

<Return to Contents>

Some Bottom Lines

How you would decide if you should do this and when to do this would be a function of the ability of the envelope to handle higher humidity levels in cold weather, the ability of the operating team to maintain the equipment, utility rates, hours of operation, and climate in addition to a desire to hold indoor conditions in the 40-60% RH range.  A totally brilliant idea in location “A” could be a disaster in location “B”.  

For instance, if you had an artesian well on your property and the law was written to say you owned the water rights (i.e. free water), what you would do would be totally different from a location where the water rates were high and you also did not get a credit on your sewer bill for water that was evaporated.

And if you did get a credit on your sewer bill for water that was evaporated, then that would also change the financial perspective.

Mike and I talked about using the TMY3 data to look at the water consumption and pump energy for the process in Portland to assess the full cost implication of using this strategy, but neither of us has had the time to do it yet, so that is fodder for a future post.

But hopefully, what we have shared will help you “think outside the box” in terms of how we operate our buildings to deliver a clean, safe, comfortable, productive environment as efficiently and sustainably as possible, given the ever changing challenges we face.

David-Signature1_thumb_thumb_thumb_t

PowerPoint-Generated-White_thumb2_thDavid Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/

[i]     For those who are interested, the relationship for saturation efficiency is as indicated below.

Direct Saturation Efficiency v1
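In code form, and assuming the standard definition (actual dry bulb depression divided by the maximum possible depression, down to the entering wet bulb), it looks like this;  the example temperatures are hypothetical:

```python
def direct_saturation_efficiency(t_db_entering, t_db_leaving, t_wb_entering):
    """Direct saturation efficiency of an evaporative cooler.

    The ratio of the actual dry bulb depression to the maximum possible
    depression (cooling all the way down to the entering wet bulb).
    """
    return (t_db_entering - t_db_leaving) / (t_db_entering - t_wb_entering)

# Air entering at 70 degF db / 45 degF wb and leaving at 47.5 degF db
# implies a 90% effective cooler.
print(direct_saturation_efficiency(70, 47.5, 45))  # → 0.9
```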

<Return to Reference>

[ii]    For example, for the make up air systems serving the clean rooms I worked with when I was a facilities engineer/system owner at Komatsu’s Hillsboro plant, we targeted a 46°F leaving air temperature from our cooling coils in order to hit the space relative humidity requirement.

<Return to Reference>

Posted in Air Handling Systems, Economizers, HVAC Calculations, HVAC Fundamentals, Psychrometrics

The Perfect Economizer–Part 1–Laying Some Groundwork

An amazingly long time ago, I started a string of blog posts about economizers that included posts about:

All of this was leading up to a blog post about a diagnostic tool that I use that I call the “Perfect Economizer” concept.  And I almost got there, but not quite, until now.

Contents

For those who want to jump around, the following links will take you to the different topics.   The “Return to Contents” link at the end of each major section will bring you back here.

Introduction

As it turns out, the evolution of the ASHRAE Journal Engineers Notebook column that I help write led to an opportunity to do a column on the perfect economizer because it complements a column I wrote about a similar concept for assessing chilled water plant performance titled Modeling Perfection, which is illustrated below.

image_thumb11

In the case study associated with the Modeling Perfection column, I mentioned that the reason for the unnecessary chilled water use in the areas outlined in red and yellow above was dysfunction in the preheat and economizer processes, and that the team I was working with used the “Perfect Economizer” concept to assess them.

The idea behind the concept  is similar to the perfect chilled water plant concept;  you create a chart that shows how you would expect a perfect economizer to function and then plot real data against it to see how closely reality matches perfection.  The lines of perfection are illustrated below.

image

That concept is the focus of my next column, which will run in May. 

Defining Perfection

To be able to discuss the perfect economizer, one needs to define perfection.   Word count precluded me from doing that in the upcoming Journal column.  So I decided to do a few blog posts that will focus on defining perfection to complement the column.  I actually started down that road in the post titled Economizer Analysis via Scatter Plots–Linking Excel Chart Labels to Data in Cells.  I will build on some of the concepts I outlined there in what follows and in related subsequent posts.  This first post defines a few baselines so we are all “on the same page” for the discussion that will follow.

Not a New Idea

I am not at all asserting that I came up with this idea.  I believe you will find a version of it in the application software that Architectural Energy Corporation supplied for their data loggers in the mid-to-late 1990s.  And the (free) Universal Translator application (which has nothing to do with Star Trek but is still pretty cool) includes a module that uses this approach.

(Return to Contents)

The Relationship Between an Economizer Process and Building Pressure Control

As discussed in the Economizer Basics post I referenced above, economizer processes bring in outdoor air volumes that are above and beyond what is required to ventilate the building, blending this extra outdoor air (OA) with return air (RA) in order to minimize the need for mechanical cooling.  At its core, an economizer process is a cooling and temperature control process. 

Conservation of mass and energy dictates that to achieve success, we need to complement the economizer process with some sort of building pressure control process that provides a path for the extra outdoor air to exit the building.  That becomes the role of the relief system.  The obvious components in this system are the relief  air dampers and depending on the system configuration, the relief fan and/or the return fan.

The less obvious components are the imperfections in the building envelope, which can also become part of the relief system. Recognizing this can provide benefit in terms of comfort by managing infiltration, and in terms of energy, by minimizing the need for return or relief fan operation.

A Word about Return vs. Relief Fans

When I discuss this topic, I am frequently asked about the difference between a return and relief fan.  The images below are from a set of slides that I used in class to discuss the topic.

image

image

This link takes you to a bit more information in a previous blog post.

Economizers and Building Pressure Control Coordination in the Olden Days

In the olden days, for a simple, constant volume system that incorporated an economizer process, there was a fairly direct relationship between:

  • The position the outdoor air and return air dampers were driven to in order to control temperature, and
  • The position the relief dampers needed to be driven to in order to manage building pressure. 

Thus, it was not unusual for the same signal that was used for the outdoor air and return air dampers to be used to drive the relief air dampers, especially in pneumatic control systems.[i]

Those of us working in existing buildings can still encounter this approach.  Sometimes, a minimum relief position is also provided.  And sometimes, the modulation of the relief dampers is delayed to provide a bit of positive pressurization for the building. 

And for a simple constant volume system, it can be made to work, especially with the minimum relief and delay feature mentioned above.  So if you have a very simple HVAC system, you can get away without a building pressure control process, even in modern times.

Economizers and Building Pressure Control Coordination in Modern Times

The variable air volume (VAV) systems we commonly use in modern times break the relationship between outdoor/return air damper position and relief air requirements.  Consider a VAV system with variable speed relief fans and a 58°F leaving air temperature (LAT) requirement, operating under a part load condition on a day when the outdoor temperature is 58°F.

Let’s imagine the system is operating on a day when the load in the building, and thus the supply flow rate, is 50% of the design value.  With it being 58°F outside, if everything is working properly, the outdoor air dampers will be commanded to the 100% outdoor air (0% return air) position.  But, since the load in the space is only 50% of the design load, the supply flow rate will be half of the design value.

If the relief fans are commanded to 100% speed because they are controlled by the same signal used by the outdoor air and return air dampers, they likely will cause the building pressure to become very negative because their full speed, design flow rate was likely set on the basis of the design supply flow rate.[ii]
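A quick Python sketch with hypothetical flow numbers shows how negative things can get:

```python
def net_building_airflow(supply_cfm, relief_cfm, exhaust_cfm=0):
    """Net flow into the building; a negative result means the relief and
    exhaust systems are moving more air out than the supply is bringing in,
    depressurizing the building."""
    return supply_cfm - relief_cfm - exhaust_cfm

design_supply = 20_000          # cfm, hypothetical design supply flow
design_relief = 17_000          # cfm, relief sized near the design supply flow

# 50% load day: supply is at half flow, but the relief fan is driven
# to 100% speed by the economizer damper signal.
net = net_building_airflow(supply_cfm=0.5 * design_supply,
                           relief_cfm=design_relief)
print(net)  # → -7000.0 cfm; a strongly negative building
```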

This was a common problem in the field when we started transitioning from pneumatics and constant volume systems to DDC and VAV systems. And it still shows up on occasion in our modern day world.

(Return to Contents)

ASHRAE Guideline 16

The final control elements in an economizer process are the OA and RA dampers and the sizing and configuration of them is critical to success. 

Similarly, the relief dampers are often the final control element for the building pressure control process, although variable speed relief fans that have simple back-draft dampers or are sequenced with modulating relief dampers can also come into play.

ASHRAE Guideline 16 – Selecting Outdoor, Return, and Relief Dampers for Air-Side Economizer Systems provides a lot of good information about how to select and configure these dampers. But it also specifically states that

this guideline does not cover air mixing

Thus, it’s important to recognize that using the guideline is a good first step in the economizer design process, but there are other things that also need to be addressed.

In addition, the guideline is focused on proper design, meaning that you are starting with a “clean sheet of paper”. If you are working with existing buildings, that “ship has already sailed” and the challenge is understanding what you have, how well it is functioning, and how to correct any deficiencies that you discover within the constraints of the existing equipment capabilities and the operating budget.

For example, all of the recommended control sequences in the guideline require that outdoor air flow be measured somehow. In my experience, this is surprisingly uncommon in existing building systems, especially in older facilities.

Still, understanding what constitutes a good design can help folks performing existing building commissioning, ongoing commissioning and facility operations understand the changes needed to improve performance and resolve any issues they identify.  And the Perfect Economizer concept is a useful way to identify the problems.

Ultimately, when we apply the “Perfect Economizer” technique to existing facilities, we need to be extra diligent when we start to work to improve the mixing process so that we do it in a way that still ensures the required ventilation rates are maintained.

(Return to Contents)

That’s it for now.  In my next post, I will get into damper sizing and configuration, which are part of the focus of Guideline 16 and which are key to achieving perfection for an economizer process.

David-Signature1_thumb_thumb_thumb

PowerPoint-Generated-White_thumb2_thDavid Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/

[i]     And, since many legacy pneumatic systems were upgraded to DDC by handing three different control vendors a set of the building’s pneumatic control drawings and telling them to provide a bid for a DDC system just like it (and incidentally, we will be taking the low bid), you find DDC systems with a single pneumatic output driving the outdoor air, return air and relief air damper systems.

I am not at all advocating this design approach;  there are obvious problems with it.  I am simply saying that just because you have a DDC system doesn’t mean you will not see this configuration and the potential challenges it can introduce.

[ii]   The relief flow would generally be set to the supply flow minus the ventilation air flow which will generally be removed by toilet and hood exhaust.  An allowance for building positive pressure may also be included, further reducing the relief air flow rate relative to the design supply flow rate.
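In code form, with hypothetical flows, that sizing logic looks like this:

```python
def design_relief_flow(design_supply_cfm, exhaust_cfm, pressurization_cfm=0):
    """Design relief flow per the sizing logic above: supply flow, less air
    removed by toilet and hood exhaust, less any allowance held back to keep
    the building slightly positive."""
    return design_supply_cfm - exhaust_cfm - pressurization_cfm

# Hypothetical: 20,000 cfm supply, 2,000 cfm exhaust, and a 1,000 cfm
# positive pressurization allowance.
print(design_relief_flow(20_000, 2_000, 1_000))  # → 17000
```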

Posted in Air Handling Systems, Controls, Economizers, The Perfect Economizer

Using a Formula to Adjust an Axis in Excel, Plus a Simultaneous Heating and Cooling Case Study

Author’s Note; 2022-02-01.  I discovered that earlier today, when I thought I had saved this post, planning to make some final additions, edits, and add a table of contents when I got back from my walk, what I actually did was publish it.  So, if you read this before about 4:30 PM, there were some typos and the bottom line on the case study was not there yet.   My apologies;  I will click more carefully next time.

Preface

I want to preface everything that follows by saying that while the case study I share is from my own experience, I did not develop the technique I will share.  Rather, I discovered it as the result of an internet search, in the form of a very generous and well written blog post by a guy named Mark on his Excel Off the Grid web site.

I’ll be linking to some specific content there as I move through this post, in which I use a case study from a past project to illustrate applying Mark’s technique.

And thanks also to Thy, a student from one of my classes, who asked the question that led to the post and “commissioned it” by taking my first draft and using it successfully to implement the feature in a spreadsheet of his by following my suggested directions.

Contents

These links will jump you around in the content to a topic of interest.   The <Return to Contents> link at the end of each major section will bring you back to here.

A Bit of Background

If you do existing building commissioning work, you spend quite a bit of your time looking at time series data.   Sometimes, you are interested in the overall pattern for a long period of time, like this.

Logger Data Full Period CC LAT

For the project behind the data above, I was using steam condensate pump cycles as a proxy for steam consumption (the red data stream), a technique Chuck McClure taught me years ago using an alarm clock.  I was comparing the pump cycles to the operation of a steam preheat coil in a large laboratory air handling system, using the leaving air temperature from the coil as a proxy for coil operation (the orange data stream).

The reason that the condensate pump line looks like a red band with occasional spikes vs. a fine red line is that relative to the range of the time axis, there were a zillion pump cycles.  In other words, if we were to zoom in, we would discover that the red band was actually many, many, many spikes spaced closely together with each cycle representing one pump cycle.  In fact, that is what I needed to do in order to assess the number of pump cycles relative to the leaving air temperature spike.

<Return to Contents>

Diagnosing a Dysfunctional Preheat Process

There will be more on zooming in a minute, but before going there, I thought I would explain what was going on in the system behind the data.

My initial view of the data, shown above, revealed that I had in fact captured the dysfunctional operating pattern I suspected to exist based on my field observation when I walked the project several days prior.  More specifically, I suspected something was amok when I walked by the unit on a 60ish°F day and noticed that the preheat coil was active along with the cooling coil.  

As a result, I deployed a few data loggers the next day and the pattern above is what I found as Mother Nature performed a natural response test on the system [i]. Note how the preheat coil leaving air temperature seems to vary vs. hold a fixed set point and also how on occasion, it jumps up and runs at 90+°F for periods of time. 

This was an issue because the system was set up to hold a fixed 55°F leaving air temperature, and it was doing a very good job of that (the blue data stream).   But, since it was a 100% outdoor air system and since the preheat coil was ahead of the chilled water coil, the only time the preheat coil should have been active was if the outdoor temperature dropped below the desired 55°F leaving air temperature set point.  And then, it should have not heated things up any higher than the desired leaving air temperature.

Since the preheat coil was the major load on the steam system for the facility, I anticipated that the condensate pump cycles would be higher during the periods of time when the coil was delivering a leaving condition in the 90°F-plus range, which would tend to validate my proposed approach for developing the system load profile since there was no steam meter.

But to verify that, I needed to zoom in on one of the dysfunctional cycles, which brings me to the point of this post.

<Return to Contents>

Changing the Range of a Time Series Axis in Excel

Excel and Dates

One of the things that is not immediately obvious when you start working with time series charts in Excel is how Excel represents a date and time;  at least it wasn’t for me.   It turns out that Excel represents date and time as a serial number that increments by 1 each day, starting with 12:00 AM on January 1, 1900 as day 1.

That means that:

  • January 2, 1900 would be represented as “2”
  • January 1, 2022 would be represented as 44,562.
  • One hour would be represented by 1/24 ≈ 0.0417.

I go into more detail about that in a blog post titled Setting Time Axis Values in Excel.  But once I understood the way things worked, I made myself a little cheat sheet that allowed me to quickly come up with the values I needed to format a time series axis to the specific range I wanted to look at.
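If you want to play with the serial numbers outside of Excel, here is a little Python sketch of the arithmetic (Python rather than VBA, purely for illustration).  The epoch offset below absorbs Excel’s well-known phantom February 29, 1900, so it is only valid for dates after February 1900:

```python
from datetime import datetime

# Serial 1 is January 1, 1900, but Excel also counts a (nonexistent)
# Feb 29, 1900; using Dec 30, 1899 as the epoch absorbs both offsets
# for any date from March 1, 1900 onward.
EXCEL_EPOCH = datetime(1899, 12, 30)

def excel_serial(dt):
    """Excel date/time serial: whole days since the epoch plus the
    fraction of the day elapsed."""
    delta = dt - EXCEL_EPOCH
    return delta.days + delta.seconds / 86_400

print(excel_serial(datetime(2022, 1, 1)))        # → 44562.0
print(excel_serial(datetime(2022, 1, 1, 6, 0)))  # 6 AM → 44562.25
print(round(1 / 24, 4))     # one hour   ≈ 0.0417 day
print(round(1 / 1440, 6))   # one minute ≈ 0.000694 day
```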

<Return to Contents>

Setting the Date Range in an Excel Chart

Since I wrote that post, I have discovered that if you type a date and time into the “Maximum” and “Minimum” fields in the axis format dialog box (the cells with the red arrows pointing to them in the image below) …

Format Axis r

… then Excel automatically makes the conversion for you.  I’m not sure if that was always there and I just missed it or if it’s a feature that showed up sometime after 2002 (when I built the first version of my cheat sheet).  

But so far, I have not figured out a way to set the major and minor units (the fields with the blue arrows pointing to them in the image above) without “doing the math” to figure out, for instance, the decimal value that represents 1 minute if the decimal value of 1.0 represents 1 day.

So, the little cheat sheet spreadsheet I built to help me come up with the values for the minimum and maximum dates and the major and minor units on my charts still comes in handy.

Time Values

If you want a copy of it, you can download it here.

<Return to Contents>

Zooming In the Old Fashioned Way

Having said that, if I wanted to zoom in on a portion of the chart to take a closer look at a pattern – for example, zoom in on one of the errant events above to see what the condensate pump cycles looked like during that period of time …

Four Hours 1

… then, up until I found Mark’s blog post, I would have to go into the axis format dialog and make the change.

In the image above, I zoomed in to show what was happening from 12 AM to 6 AM on October 10, 2009.  This revealed what I hoped I would see;  that the condensate pump cycles in fact increased as the steam load increased.  In fact, occasionally both of the pumps serving the receiver needed to run, which is what caused the occasional higher than typical spike.  All of this validated my proposed approach of using the pump cycles to come up with a load profile.[ii]

Since I often wanted both images for a report, I would typically make a copy of the chart and then change the axis so that I had both views available.   If you are doing this a lot, it can become somewhat tedious and time consuming [v]. And, the file size can start to get significant if there are a lot of data points in each chart.

As a result, I would occasionally find myself wondering if there was a way to change the maximum and minimum values for a chart’s axis based on parameters that you entered in cells in the spreadsheet that would then, somehow, magically perhaps, be referenced by the appropriate fields in the “format axis” dialog.

My more observant readers may have noticed that the dates and times I mention above show up in the yellow cells in the image and could be thinking:

I wonder if those cells have anything to do with where he is heading?

The answer is:

They do!

<Return to Contents>

Introducing User Defined Functions

It turns out that if you know how to program in visual basic, you can do just that. 

Or, in my case, it turns out that if you know how to do an internet search for something like …

Excel change chart axis automatically from cell values

… you will discover generous people who are good writers with blog posts that explain how to do it and also share the code required to do it and tell you how to make it all happen.

The trick is that you create a thing called a User Defined Function or UDF that, when you execute it, calls some VBA (Visual Basic) code that causes the magic.   While I aspire to write VBA, I am still in my infancy there.  But thankfully, Mark does that for us in his Excel Off the Grid column titled Set chart axis min and max based on a cell value.

It really is well written so I am not going to regurgitate it here since you can follow the link above and find out all of the details and copy and paste the required code from there.

But I will provide some screen shots of my implementation of it in the spreadsheet we have been looking at to clarify its application in that context and clarify a few things that were questions for me as I added the functionality to my copy of Excel.

<Return to Contents>

Using a UDF to Change the X Axis Minimum and Maximum

In the image below, I have clicked into cell GH34 (orange highlight) and you can see the UDF in the formula bar, where it says =setChartAxis("Data","Chart 2","Min","X","Primary",H35).  (The red arrows point to the two spreadsheet locations I just mentioned).

X Min

“SetChartAxis” is the UDF.   It acts just like any other Excel function once you create it.  For instance, if I open a spreadsheet, click in a cell, type an “equal” sign, and then “if(“, Excel kind of says:

O.K.  I have a formula that has that name and here it is along with the function arguments you need to provide as inputs if you want to use it.

=if

If I click on the little fx symbol by the function bar, a dialog box will open up so that I can enter the necessary function arguments into data fields.

=ifarguments

Of course, if I use the formula a lot, I probably can remember them and just type them into the formula bar in the correct order, separated by commas.  But the dialog box sure is handy for less often used formulas (and/or as you age and find your memory is not quite what it used to be).

Assuming you don’t have the code associated with the “setChartAxis” UDF installed on your computer (more on how to do that in a minute), then, if you were to click into a cell in a spreadsheet on your machine and start typing setChartAxis, you would get a list of built-in Excel functions that have the word “set” in the name, like “OFFSET” and others depending on the plug-ins you have installed.   But “setChartAxis” would not be one of them.

In contrast, since I have added the code for the UDF “setChartAxis” to my copy of Excel, when I click on a cell and start typing “set …” it shows up as a function I can select along with all of the other functions installed on my machine that have “set” in their name.

=setchartaxis

Thus, I can pick it and provide the arguments it asks for …

clip_image008

… and the UDF does the “magic” for you.

Here’s what those arguments look like for the chart I am using as an example.  You will find a copy of it on the same webpage as the time value conversion spreadsheet tool if you want to download a copy to work with.

=setchartaxisexample

So basically, the formula says:

Set the minimum value for the primary, X axis, of Chart 2 on sheet Data to the value entered in cell H35.

The formula is looking for a numerical value (vs. a date), so, to make it easier to work with, I have cell H35 formatted to display the numerical value associated with a date and then set it equal to the value in cell I35, which I have formatted as a date and time.  That allows me to enter the date and time in I35; it shows up as the associated numerical value in cell H35, which is then referenced by the “setChartAxis” UDF.

<Return to Contents>

Not Just for the X Axis

You can use the UDF for the other axis on the chart.  For example, to really understand how well the control loop is tuned, it would be nice to zoom in on the burble in the blue line that happens when the preheat coil discharge temperature spikes.   To do that, I used the “setChartAxis” UDF but set it up to adjust the maximum and minimum on the secondary Y axis based on spreadsheet cell parameters.

 Secondary Y

And, as you can see, by zooming in, I can now tell that the control loop response exhibits the somewhat classic quarter decay ratio associated with a well tuned PID loop. [vi]

I can also quickly re-scale the axis again to let me contrast both the response and the upset itself. (Note that I hid the pump amps data series to allow me to focus on the other two data streams).

Upset2

You will also note that I provided similar functionality for the primary Y axis (the center cluster of orange and yellow cells) by simply copying and pasting the cell block then editing the UDF arguments as needed.

<Return to Contents>

Addressing a Few Questions that May Come Up

So, a couple of points.

  1. To find out the name of the chart, just click on it and it will show up in the cell name window next to the formula bar (“Chart 2” below next to the fx bar, right below the “snap to grid” quick access button on the left).

Chart Name

  2. The UDF is a Visual Basic module, so you need to have the “Developer” tab available in Excel to do this.  I think that sometimes, Excel can be installed without this enabled, but I believe it is a standard feature and you just need to turn it on, which is described here, in case you don’t see the “Developer” tab in your ribbon.[vii]
  3. The blog post I referenced above is (to my way of thinking at least) really well written and I think that if you page down to the “Creating the User Defined Function” topic, you would have no trouble setting it up;  the code you need is included, so it’s really just a matter of copying and pasting it into the right place in a VBA module you create.
  4. If you do that, it will only be available in the spreadsheet you created it in.  But you can make it available for all of your spreadsheets by installing it as an Add-In.  That is described further down in the post under the “Making the function available in all workbooks” topic, which links you to this page after telling you what you need to do first.

<Return to Contents>

Back to the Case Study

As I indicated in an endnote previously (see end note [iv]), the somewhat wild temperature excursions seemed to be a freeze protection strategy gone amok.  

But when they were not occurring, the preheat coil still did not hold a leaving air temperature at a fixed value, causing the chilled water coil to do unnecessary cooling.  The reason for this was that the face and bypass damper system that was intended to control the leaving air temperature was out of adjustment and was always allowing some air to flow through the heating elements, even if no additional preheat was required.

Integral Face and Bypass Coils

The slides below illustrate the type of face and bypass damper system that was in place in the system we are discussing. 

image

image

image

image

This type of assembly is technically called an “integral face and bypass” coil.  But it is also frequently referred to as a “Wing” coil since one of the major manufacturers at one point in time was the Wing Company.  It’s kind of like calling every box of facial tissue – a paper product produced by many manufacturers – “Kleenex”, which is a common brand of facial tissue.

The pictures that follow are of the actual hardware.  The assembly shown on the left uses hot water for the heat source.  The picture on the right uses steam and is the actual preheat coil associated with the case study.

image

image

image

<Return to Contents>

Why Integral Face and Bypass?

The design of this type of coil is intended to enhance its ability to resist freezing by:

  • Always keeping the heating elements active with the control valve wide open.  For water coils, this means design flow will always be moving through the coil (as long as the pump serving the system is running).  For steam coils, this means that the coil will be able to draw as much steam as needed and that the steam in the elements will be near the saturation pressure and temperature associated with the distribution system.[viii]
  • Orienting the heating elements vertically in steam fired coils to ensure rapid condensate drainage via gravity.
  • Locating the supply and return headers outside of the air stream, which minimizes the potential for condensate (water) to be exposed to sub-freezing conditions.

<Return to Contents>

Things that Can Go Wrong (a.k.a. EBCx Opportunities)

So, the good news is that a coil of this type is less likely to freeze.  But there are a couple of downsides.

One is that the actuation mechanism for the clam-shell doors is somewhat complex.  Without regular maintenance and lubrication, it can fail, which, as we saw in the coil in the example, can cause a significant energy waste.

Another opportunity is related to the control of the steam valve.   Even if the clam-shell dampers are fully closed, there is significant heat transfer, primarily by radiation, from the live, saturated steam inside the tubes.  For instance, if the steam was at atmospheric pressure, the temperature would be 212°F. 

As a result, there can be a significant parasitic load associated with this type of coil.  To prevent that, it is desirable to close the steam valve when preheat is no longer required.  It is not uncommon for this contingency to go unrecognized.  For example:

  • A value engineer who is perhaps not totally familiar with HVAC processes and how this type of coil works may eliminate the control valve from the project as an unnecessary first cost, thinking it is not needed since dampers are provided to control the leaving air temperature.
  • A control system designer who is not familiar with the specifics of how this type of coil operates may sequence the operation of the valve with the operation of the clam-shell dampers.  While this may tend to alleviate the parasitic load to some extent, it likely compromises the “freeze-proof(ish)” aspect of the design.

As a result, when I encounter this type of coil in the field, I just about always flag it as a target for further investigation.  Frequently, one or more of the opportunities I mention above exist and I can save some steam (and maybe a frozen coil or two). 

And frequently, as was the case for the coil in the example, savings show up at the cooling plant in addition to the steam plant because of the unnecessary simultaneous heating and cooling.

<Return to Contents>

How Come Nobody Noticed?

Some readers may wonder why nobody noticed this problem.  After all, it kind of jumps out at you when you look at the trends I have shared.  

A big part of the reason was that the control system was somewhat antiquated and unreliable.  Sensors had failed, graphics could take minutes – like 5 or more minutes – to update (assuming they didn’t “crash” in the process), and sampling speeds faster than once every 15-30 minutes were not possible due to the network configuration.  As you may surmise, those are the reasons I was using data loggers to assess the system instead of the trends.

Because the chilled water coil masked the preheat dysfunction and the lab zones were constant volume pneumatic reheat  zones with repairs undertaken when an occupant complained, a lot had to go wrong before it would show up as an actual comfort problem.

The operating team itself –  like most teams these days – was spread really thin, trying to operate and maintain a complex full of mission critical facilities with a handful of people.

<Return to Contents>

Leveraging the Savings Potential

The good news was that once the problem was recognized, it opened the door for improvements.   Due to …

  • The size of the system (nominally 70,000 cfm), and
  • The 24/7, constant volume, near 100% outdoor air operating cycle associated with the laboratories it served

… the savings potential associated with repairing the errant preheat process was very significant;  tens of thousands of dollars annually.  The savings could have been accrued by simply repairing the damper linkage system and ensuring that the steam valve fully closed when preheat was not needed.
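If you want to sanity check that order of magnitude, here is a back-of-the-envelope sketch in Python.  The 70,000 cfm airflow comes from the bullet list above;  every other number (the average unnecessary preheat, the hours, the steam rate) is an assumption I made for illustration, so treat the result as a ballpark, not the project’s actual savings.

```python
# Back-of-the-envelope sketch of the savings magnitude.  Only the airflow
# comes from the post;  the rest are illustrative assumptions.
cfm = 70_000        # nominal system airflow (from the post)
delta_t = 5.0       # assumed average unnecessary preheat, deg F
hours = 8_760       # 24/7 constant volume operation
steam_cost = 10.0   # assumed $ per MMBtu of steam delivered

# Sensible heat for air:  Q (Btu/hr) = 1.08 x cfm x delta T
q_btu_hr = 1.08 * cfm * delta_t
annual_mmbtu = q_btu_hr * hours / 1e6
annual_dollars = annual_mmbtu * steam_cost
print(f"{annual_mmbtu:,.0f} MMBtu/yr, about ${annual_dollars:,.0f}/yr in steam")
```

Even with these modest assumptions, the steam side alone lands in the tens of thousands of dollars annually.  And since the chilled water coil was removing the same unnecessary heat, there is a comparable cooling plant penalty on top of that.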

Recognizing that there was more to the issue than the immediately obvious root causes, the Owner elected to leverage the savings to upgrade the control system to a current technology system, including:

  • The sensors necessary to perform diagnostics, not just control the system,
  • Trending and graphic capabilities that would deliver meaningful information to the operating team in a timely fashion, and
  • DDC controls at the zone level, which would allow the operating team to much more quickly identify operating issues that are typically masked by the insidious nature of HVAC processes.

And like most energy savings projects, the results of this project also moved the Owner down the road towards their long term carbon reduction goals.

So there you have it;  a cool little Excel trick generously shared by Mark on his Excel Off the Grid blog along with a little case study of a common existing building commissioning opportunity.

David-Signature1_thumb_thumb_thumb

David Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/

[i]    If you want to know a bit more about natural response tests vs. forced response tests or functional testing in general, then you may find a series of video modules I recorded on the topic to be helpful.

[ii]   It also revealed that the control loop for the chilled water valve was pretty well tuned.  Notice how, whatever caused the errant change in set point [iii], initially there is a big jump in steam flow and leaving air temperature, and then a continued increase until the process stabilizes.  The leaving water temperature from the chilled water coil hunts around a bit trying to “find itself”.  But then it settles in;  more on that a bit later in the post.

[iii]   Can you put an end note on an end note? [iv]

[iv]    Assuming you can;  we never really figured out why the program running the system was set up to cause the set point jump.  But the trends indicated it was very predictably tied to the outdoor temperature and was triggered when the outdoor temperature dropped below 38°F and released when the outdoor temperature went back above 40°F.  And it was not really a set point change;  rather, the valve was simply driven fully open.  Thus, our conclusion was that it was a freeze protection strategy gone amok.

[v]    But not as tedious and time consuming as in the olden days, when we would have had to transcribe the data from a strip chart and manually plot it on graph paper.  So count your lucky stars, you young people out there.

[vi]   The slide below illustrates what the term quarter decay ratio means.

image

The pattern was the result of the work of John G. Ziegler and Nathaniel B. Nichols, who developed a very common tuning technique for PID control loops.  If you want to know more about PID, this link will take you to a webpage that contains some resources, including the original paper they published and an interview with John Ziegler himself.

[vii]   I suppose that there may be some corporate IT policies that would prevent you from turning on the developer tab feature without someone from IT allowing you.  But I have not had that experience and only know about turning it on because I was helping someone once and it was not there, and I poked around and found the link above.  It’s always been on in any copy of Excel I have had.

[viii] There is a very subtle thing that can go on in steam fired heat exchangers due to the fact that the steam side is a saturated system.  Depending on the operating conditions, it is possible that the pressure inside the heat exchanger will be sub-atmospheric unless vacuum breakers are installed on the heat exchanger. 

That means that for condensate to drain out of the heat exchanger, or more specifically, to an open return system that is above atmospheric pressure, condensate has to accumulate inside the coil to a depth that is high enough to create the head necessary to cause the condensate to flow out of the coil.  If the condensate accumulates in a portion of the coil that is exposed to the air stream, and the air stream is below freezing, then you can freeze the coil;  bottom line, steam coils can freeze.
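To put some numbers on that, here is a quick Python sketch of the head involved.  The 62.4 lb/ft³ density of water is a standard figure;  the 1 psi of back pressure is an assumed value for illustration.

```python
# Sketch of the head a column of condensate must develop to push itself
# out of a coil against back pressure.
rho = 62.4                # lb per cubic foot of water (~60 deg F)
ft_per_psi = 144.0 / rho  # 1 psi = 144 lb/ft2, so head per psi of lift
print(f"{ft_per_psi:.2f} ft of water per psi")  # ~2.31 ft/psi

# Assumed example:  if the coil interior runs 1 psi below the return main,
# condensate must stand about 2.3 ft (roughly 28 in.) deep inside the coil
# before it will drain - plenty of water to freeze if that part of the
# coil sees sub-freezing air.
back_pressure_psi = 1.0
standing_condensate_in = back_pressure_psi * ft_per_psi * 12
print(f"{standing_condensate_in:.1f} in. of standing condensate")
```

That is why a sub-atmospheric steam side is such a big deal;  even a modest pressure deficit implies a surprisingly deep column of trapped water.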

By keeping the steam valve wide open on an integral face and bypass coil and relying on the damper system to control discharge temperature, it is significantly less likely that the conditions inside the heating elements will be sub-atmospheric.  This, combined with the vertical tube arrangement and locating the headers outside of the air flow path helps ensure that this type of coil is fairly freeze-proof.

Posted in Uncategorized | Leave a comment

Happy Solstice

2021-12-26 – Author’s Note:  Yesterday, I realized that I had not fully taken into account how a pin hole camera works when I developed the SolarCan pictures.  The image in a pinhole camera is upside down relative to reality.

When I started working with my images, I simply rotated them 180°;  sort of an intuitive reaction I suppose, since I instinctively knew the sun should rise and then fall over the course of the day.  I was so excited about seeing the sun’s path that I did not initially realize that things were backwards;  on my backyard photo, my neighbor’s house is on the wrong side, and in the Neskowin photo, Neskowin Creek disappears on the wrong side of the photo.

Rotating the image did in fact put the bottom at the top.  But it also put the left side of the image to the right, making it backwards relative to reality.  What I actually needed to do was flip the image along the horizontal axis, which makes the bottom the top, but keeps left to the left and right to the right. 

So, I have uploaded correctly oriented images in this revised post.
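If it helps to see why rotating and flipping are different, here is a tiny Python sketch using a 2×2 array as a stand-in for the image (purely illustrative, obviously not part of the film processing).

```python
# Tiny sketch of rotate-vs-flip.  Top row is "sky", bottom row is
# "ground";  the L/R suffixes track which side is which.
img = [["sky-L", "sky-R"],
       ["gnd-L", "gnd-R"]]

def rotate_180(m):
    # reverse the row order AND each row:  left and right swap too
    return [row[::-1] for row in m[::-1]]

def flip_vertical(m):
    # reverse the row order only:  top and bottom swap, left stays left
    return m[::-1]

print(rotate_180(img))     # ground is now on top, but L and R traded places
print(flip_vertical(img))  # ground on top, and L is still on the left
```

Rotating 180° is equivalent to flipping both vertically and horizontally at once, which is exactly why my rotated images were mirror-reversed.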

A friend called me yesterday to wish us a happy solstice.  I had an appointment I needed to head out to, so we only talked briefly.  But in doing that, I mentioned a solstice related “toy” I had found and said I would e-mail him about it after I returned home with more information.  But as I was starting that process, I realized that it would be kind of a cool thing to share for my semi-traditional ”holiday post”.   So here we go, and thanks to Sabastian for inspiring this.

The Shortest and Longest Day of the Year

Tuesday was the winter solstice;  the shortest day of the year,  and the path of the sun was at its lowest point in the sky relative to the horizon.  As most, if not all of you likely know, there is also a summer solstice, which falls on or about June 21st.  That, as you might expect, corresponds with the longest day of the year and the path of the sun is at its highest point in the sky.

The Equinox

Between those two extremes lie the two equinox (equinoxes? equinoxi?  equineex?  Not sure about the plural, but the spell check favors equinoxes and the others sound like part of a Gallagher routine or something).  Anyway, each day, the path of the sun across the sky will shift between the two extremes set by the solstices and will be halfway between them on the equinoxes.
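For the curious, the solstice-to-solstice swing in day length can be sketched with a couple of textbook formulas.  This is a simplified approximation (it ignores atmospheric refraction and uses a rough declination formula), and the 45.5° latitude I used for Portland is my assumption.

```python
import math

# Approximate day length from latitude and day of year, using a common
# simplified declination formula and the sunset hour angle relation
# cos(w) = -tan(lat) * tan(decl).  Refraction is ignored, so treat the
# result as rough.
def day_length_hours(latitude_deg, day_of_year):
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    x = -math.tan(math.radians(latitude_deg)) * math.tan(math.radians(decl))
    w = math.degrees(math.acos(max(-1.0, min(1.0, x))))  # sunset hour angle
    return 2.0 * w / 15.0  # the sun sweeps 15 degrees per hour

# Portland, OR is at roughly 45.5 deg N;  Dec 21 is about day 355.
print(round(day_length_hours(45.5, 355), 1))  # winter solstice: ~8.5 hr
print(round(day_length_hours(45.5, 172), 1))  # summer solstice: ~15.5 hr
```

Roughly a seven hour swing between the shortest and longest days at this latitude, which is a big part of why the seasonal loads on our buildings vary the way they do.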

A Major Driver

The daily shift in the pattern of the sun across our day is a fundamental reality in our lives, driving the seasonal changes we all experience, and for those in the buildings industry, driving the loads we try to address with our envelope and HVAC system designs.  Sadly, I think we may be less and less aware of the reality of it.

Most of us would readily acknowledge the impact that seasonal changes have on our lives and on the facilities we design and endeavor to operate.  But how many of us could, by virtue of our daily observations, point to exactly where – on the horizon – the sun rose and set on the solstice and equinox?

Some, I am sure, can do just that.  But I suspect that in general, we are much less aware of that than we were even a generation or two ago, let alone a century or two ago.

Buchananhenge

One of Kathy’s and my traditions is that we sit on our porch swing (or in our front room when it’s cold) and watch the sunset together, so I have developed a pretty good sense of where the sun will be in the evening in Portland or Neskowin, Oregon.  Neskowin is where we own a share in a fractional and thus get to spend 4 weeks a year at the coast.

A couple of years ago, I realized that by some cosmic coincidence, the long axis of the sofa and/or deck we sit on in Neskowin to watch the sunset is probably aligned within 5° or less of the same axis on our porch swing.  Kind of cool;  same view, just a different distance from the ocean.

But it was not until about 15 years into our life here on Buchanan Avenue that I realized that the long axis of our shot-gun bungalow (which is perpendicular to the long axis of the porch swing) is lined up so that on the equinox, the sun (if it is shining) beams down the basement stairs and hits the back wall of the basement.

IMG_2258

I was walking down the stairs through the yet to be completed remodeling project that occupies half of the basement to the fairly completed remodeling project called my office, when I noticed something unusual, as shown in the photo to the left.

One unusual thing was that it was not overcast early in the morning, which it often is in March here in Portland.  The other was that the rays of the sun were hitting the back wall of the basement.

This was on March 7th, and as the morning progressed, the sun beam retreated across the floor as the sun rose in the sky.  And as the days progressed, the point of light (when it was visible) moved across the far wall until the path of the sun was cut off by the stairwell. 

Kind of cool.  It reminded us of Stonehenge so we officially termed it Buchananhenge.  Kathy plans to paint some sort of mural tied to the event on the back wall, and maybe the floor, once the (somewhat mythical) remodeling effort is completed.

Enter SolarCan

SolarCan is the “toy” I mentioned at the beginning of the post.  I discovered it thanks to the “Somewhat Occasional Newsletter” that I receive by virtue of my membership in the Cloud Appreciation Society.  SolarCan is a pin hole camera fabricated from a beer (or soda) (or actually, nowadays, I have discovered, wine) can.

Inside the can is a piece of really, really slow film facing the pin hole.  As a result, if you mount the “can” to some stationary, vertical object with the pin hole facing south, over time, you will generate a photograph that shows the path of the sun across the sky each day.  And, if you allow it to remain in place long enough, the background image will also burn itself into the film.

When your patience wears out, you open the can with a conventional can opener, pull out the film, and scan it, which generates a negative.  Then, you import it into some sort of photo processing software like Gimp or Photoshop or PaintShop and reverse the negative and start playing with it.

Upon discovering SolarCan, I procured several;  enough to send one to each of the grandkids, send one to my brother (who is an actual, for real graphic artist/producer) along with several to experiment with here on Buchanan Avenue and on the deck at Neskowin.

The View from Neskowin

Just to orient you, here are a couple of pictures from the deck at Neskowin with the SolarCan immediately behind me.  They were taken the day I took the can down and headed home to process the film.

2021-11-23 Neskowin Rainbow 03

2021-11-23 Neskowin Sunset

The large “rock” in both images is called “Proposal Rock” and appropriately enough, several proposals and weddings occur in its presence every year.  And probably about once a year, the coast guard has to come in with a helicopter and pull hikers off the top because they forgot to consider the tides when they planned their hike and were stranded as a result.

This next image is a panorama that I shot several years ago now.  But I include it because I was standing about where the SolarCan was mounted and because its field of view is comparable to the field of view captured by the SolarCan.

December at the Beach 2014

Here is the negative image from the SolarCan, which captures events from June 7, 2021 through November 23, 2021, so pre-solstice to almost solstice.

CCI_000120 cr

And here is what that looked like when I scanned it into PaintShop, rotated it, and reversed it.  Note that since it is rotated, not flipped, the image is backwards from reality.  More on that in a minute.

CCI_000119 - Copy

The blotches are there because, despite being under an eave and only having a pin hole exposed, the driving rain that is common at the coast managed to gain entry into the can and the film got wet.  I have played with the image some in Gimp and PaintShop (steep learning curve for me, so probably a lot more that I can do) and here is where it is currently.

CCI_000120 - Copy

So, some improvement, but a ways to go.  Initially, I was kind of disappointed, viewing the image as damaged by the water.  But my perspective changed when Kathy looked at it, flashed her “come hither eyes” at me, and said she thought I had achieved a very artistic effect.  So, I am thinking of leaving well enough alone.

Getting It Right

This paragraph did not exist in my initial post because I had not realized the error of my ways when I rotated vs. flipped the image.   But as I subsequently studied the two images I had, I realized things were backwards, as I mentioned in my note at the beginning.   So here is the SolarCan image flipped (vs. rotated), which puts everything into the proper orientation.

CCI_000120 - Copy Flipped

In the image below, I tried to overlay the panorama I took and the SolarCan image so you could kind of correlate things.  I played with the aspect ratios in the images to try to get things to correlate as closely as possible, using the tree in the center of the picture and Proposal Rock (the flattened “bump” on the right side) as the frames of reference.

Combined Coast 2

The correlation is not perfect;  obviously the sun does not rise from inside the condo on the left.  That is primarily because I was not standing exactly where the SolarCan was located when I took the panorama, among other things.

For instance, the film in the can is curved because it lies on the inside wall of the can; i.e. it lies on the circumference of the circle represented by the can’s diameter.  This is in contrast to being on a plane perpendicular to the pin hole, extending across the diameter of the can.  But it will give you the general idea.

The View from Buchanan Avenue

I mounted the Buchanan camera on the pole supporting the rain gauge that is attached to the little deck on Kathy’s art studio in the back yard.  (The rain gauge in the foreground is now located on a pole just below the blue bird house in the background;  South is to the center right;  where the bright spot in the trees is).

2019-07-24 Art Studio View

We are blessed with a lot of trees and that is just about the only spot with a clear view to the South for a significant part of the day. 

The “can” went up right after the 4th of July and my patience ran out Thanksgiving week, so the image below does not cover the entire span from solstice to solstice, but almost.

Back Yard CCI_000117 - Copy 02 Flipped

In both images, the arching bands are the daily path of the sun.  Variations in intensity are (I suspect) due to clouds passing through. Gaps between the bands (I suspect) represent days of total overcast. 

I also suspect the intensity of the bands when the sun is lower in the sky is generally higher on a clear day than when the sun is higher in the sky due to the incident angle between a ray of light and the film in the can;  not totally sure about that but I think it is true.

Next Steps

Having done my initial experiments, I am already on to my next artistic effort.  I just deployed a new SolarCan on the rain gauge pole on the solstice and plan to leave it there until the June solstice, thereby capturing the full path of the sun from Winter to Summer.  I will replace it with another to capture the path the other way.

I plan a similar effort at Neskowin although the dates are constrained a bit by when we have our weeks in the rotation.  But I should be able to capture the full cycle and may try to find a way to keep the film dry (or maybe not, given the flashing of come hither eyes associated with perceived artistic efforts on my part.)

And I will shoot a panorama with my digital camera oriented as close as possible to the orientation of the SolarCan so I can better correlate the two images.

Conclusion

Hopefully, my adventures and experiments observing the sun’s path will inspire you to consider doing the same (obviously, don’t look directly at it).

For me, even though I had an intellectual awareness of it from a very young age, watching the minute by minute, hour by hour, day by day shift via Buchananhenge and SolarCan gave me a firmer grasp of it.  And it also made me feel a bit more connected with this amazing universe we are all a part of.

IMG_0075

In fact, if you find this to be interesting, then you may also enjoy one of my favorite books, Connecting with the Cosmos, by Donald Goldsmith.  The subtitle says it all in a way;  each of the 9 chapters is dedicated to exploring a different aspect of the sky, starting with sunrise and sunset, my topic here in a way, through observing the moon and various constellations, all with the unaided eye.

So here’s to happy sky-watching and a great holiday season.  And thanks to all of you who continue to visit the blog.

David-Signature1_thumb_thumb_thumb                                                                                                          Holly

David Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/

Posted in Uncategorized | Leave a comment