Hoping you are having as much fun this holiday season as Kathy and I are having.
Meanwhile, thanks for supporting the blog and Happy Holidays.
Senior Engineer – Facility Dynamics Engineering
If you look at a psych chart closely, you will notice that the constant wet bulb lines are not exactly parallel to the constant enthalpy lines.
Note that to make things more visually apparent in this blog post, for most of the psych chart images, I have narrowed down the temperature and humidity scales. So the chart probably looks a bit different from what you are accustomed to seeing.
In any case, it’s tempting to just ignore the fact that the enthalpy and wet bulb lines are not exactly parallel. But in the context of an evaporative cooling process, the non-parallel nature of the lines is an important distinction if you are trying to understand the physics behind the process. The purpose of this series of posts is to explore that distinction a bit and look at what it means practically in the context of air handling systems that use an evaporative cooling process.
The context for all of this was that we were lucky enough to have a field day in a facility that was served by both direct and indirect/direct evaporative cooling air handling systems (with “we” being myself and the folks participating in the current round of the Existing Building Commissioning Workshop at the Pacific Energy Center). Here is the Google Earth view of the facility we were at, and you can see the systems we were working with sitting in the equipment area on the right half of the roof. (The round structure is a planetarium, so, pretty cool to be working on a building with a planetarium.)
If you know how the direct and indirect/direct evaporative cooling processes work, you can actually tell which unit is which by studying the appearance of the equipment in the Google Earth image. So, I will let you check out what you learn from reading this by coming back and identifying which unit is which after you finish the series.
From my perspective as an instructor, the evaporative cooling systems represented a unique opportunity to connect the psych chart with reality. For the class participants, it is a chance to see something different and learn how to understand it by thinking about it in terms of fundamental principles (I hope).
To me, this is an important thing. If I and others like me were to endeavor to spend the days that remain to us in instruction targeted at describing every conceivable type of HVAC system that might exist, we would simply run out of time, as can be seen from the following relationship, which calculates the maximum possible number of HVAC system configurations that could exist in our little corner of the universe.
On the other hand, at the end of the day, the phenomenon going on in most HVAC systems can generally be described by a few fundamental relationships and tools including the steady flow energy equation …
… which I realize is a bit scary until you think of it in the terms that one of my mentors, Dr. Albert Black, put it to me in, those terms being that …
The Goes Inta’s Gotta Equal the Goes Outa’s.
My recollection is that Al (modestly) told me that he cannot claim total credit for that phrase in that it was passed to him by one of his mentors. But it resonated with me and has been a guiding principle and foundation for me when all else seemed to fail. That includes something that happened in the context of my developing this blog post; i.e., being confronted, as I frequently am, by the realization that understanding something and being able to explain it are two different things.
I actually learned that lesson very early on in my technical training career as a flight line lab instructor when, in my first lab session, an aspiring Airframe and Power Plant Mechanic (A&P) asked me (a freshly minted A&P) a question about a concept that I understood, but found that I could not explain in a way that made sense to him. So, I told him I didn't know how to provide a clearer answer to the question right then (harder than it sounds, at least for some of us) but that I would fix that and get back to him, which I did and have been doing ever since. It's one of the things I really love about teaching; to teach, at least in my experience, you have to be in a constant state of learning.
Al also pointed out to me at one point that all of this math is just a reasonable model for us to use to predict what might be going on in an HVAC system and building. In reality, we probably don’t really have a clue.
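For what it's worth, the "Goes Inta's Gotta Equal the Goes Outa's" version of the steady flow energy equation can be sketched in a few lines of code. This is a minimal illustrative example; the mixing box scenario, the flow rates, and the enthalpy values are all made up for the sake of the sketch, not taken from any particular system.

```python
# A minimal sketch of "the goes inta's gotta equal the goes outa's"
# for a two-stream mixing box, using made-up example numbers.
# Assumptions: steady flow, no heat transfer to the surroundings,
# enthalpies in Btu/lb and mass flows in lb/hr.

def mixed_air_enthalpy(m1, h1, m2, h2):
    """Energy balance: (m1*h1 + m2*h2) in = (m1 + m2)*h_mix out."""
    return (m1 * h1 + m2 * h2) / (m1 + m2)

# Example: 7,500 lb/hr of return air at 28 Btu/lb mixed with
# 2,500 lb/hr of outdoor air at 36 Btu/lb.
h_mix = mixed_air_enthalpy(7500, 28.0, 2500, 36.0)
print(round(h_mix, 1))  # 30.0 Btu/lb
```

Swap in measured flows and enthalpies and the same balance becomes a handy sanity check on field data.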
Beyond conservation of mass and energy, one of the most important tools for understanding what is going on in an HVAC system is the psychrometric chart. This is something that Bill Coad initially inspired me about via his very cool engineering trick of creating one by hand via the application of basic principles. Replicating that trick is the subject of a yet-to-be-completed string of blog posts starting with this one. Eventually, I will get all of the way through showing you the trick, so stay tuned.
The specific driver behind my developing this post was a field question that came up several times in the field class regarding the evaporative cooling process. If you are looking at the depiction of the process on a psych chart and don’t fully appreciate that the constant wet bulb and constant enthalpy lines are not totally parallel, then you might ask:
How can water be evaporated by a process that occurs at constant enthalpy, which implies there is no energy change?
The answer is …
Actually, it is a constant wet bulb process not a constant enthalpy process, and the enthalpy of the air increases.
The amount of latent energy associated with adding water to the air stream by evaporation is in fact exactly equal to the reduction in sensible energy in the air that entered the process. But the water represents mass being added to the air stream, and that mass had some energy associated with it before it entered the process, just like the air did. So the enthalpy at the end of the process, with the added water vapor mixed in with the original air sample, has been increased by the amount associated with the added water.
It turns out that to really explain this, at least to explain it in the way I thought I needed to, things got long (surprise). So what started out as a blog post has evolved into a series.
The remainder of this post will be dedicated to explaining some basics behind evaporative cooling, primarily adiabatic saturation and wet bulb temperature. I will follow this post with a post that looks at practical adiabatic saturation, a.k.a. evaporative cooling. Finally, I will do a post sharing what we saw in our recent field experience, along with some of the insights that were gleaned from the experience.
As is my practice for my annoyingly long blog posts, the following links will take you to topics of interest which will include a “Back to Contents” link at the end of the section to bring you back here.
A parcel of air is a concept used in psychrometrics and meteorology. It implies a sample that is large enough to contain many molecules, but much smaller than the surrounding volume or environment. It will have uniform temperature and moisture characteristics but those characteristics may be different from the surrounding environment.
The bubbles of steam that rise through the liquid in a pot of boiling water are an example of a parcel. Both the vapor and liquid are made up of water molecules, but the conditions inside the bubbles are different from the conditions outside the bubbles.
To understand evaporative cooling, you need to understand what makes up the energy content of a parcel of air. Unless a parcel of air is totally devoid of moisture (0% relative humidity), the energy it contains includes both a sensible energy component and a latent energy component.
The sensible component is the easiest to understand because it manifests itself to us as the dry bulb temperature. Most people are very familiar with it, and frequently we simply call it the "temperature" of the air. Changes in dry bulb temperature are associated with changes in sensible energy.
Another way of thinking of it is to say that sensible energy manifests itself to us as heat. If I increase the sensible energy of an object, it becomes hotter to the touch with “touch” being one of our senses and thus the name.
Our comfort is also affected by the amount of moisture in the air because it impacts how efficiently (or not) our body's evaporative cooling process works. If you have traveled around the country a bit, you probably have noticed that a 95°F, sunny day at someplace like the Grand Canyon feels much more comfortable than a 95°F, sunny day in the Midwest or Southern states or even Northeastern states like Pennsylvania right after a thunderstorm. That is because the summertime air is much drier at the Grand Canyon (most of the time), compared to the summertime air in the Midwest, South, and Northeast.
The moisture in an air parcel has energy associated with it because it takes energy to convert the moisture from a liquid to a vapor (or from a solid to a liquid for that matter). Going from a liquid to a vapor or a solid to a liquid is called a phase change.
Unlike sensible energy, the energy associated with the phase change that adds moisture to a parcel of air does not show up as a temperature increase. Rather, we sense it as a change in comfort level that we frequently call feeling "muggy" or "humid".
So the good news is that we can detect that it is there. But unlike heat, which we can measure with a thermometer, it can be challenging to measure and quantify “mugginess”. The term applied to this energy is latent energy or sometimes, latent heat.
Latent is a term that means "hidden" or "concealed," and it is used to describe the energy associated with a phase change. We really didn't understand latent heat until about 250 years ago, when Joseph Black, a Scottish scientist, intuitively connected a few dots by pointing out that what people thought should happen, based on the science of the time, did not actually happen.
For instance, the science of the time suggested that it would take only a small amount of heat to melt snow and ice. Mr. Black pointed out that if that was really true, then the world would be ravaged by floods due to the immediate melting of snow and ice when the temperature increased from just below to just above freezing. In other words, the expected didn’t happen, which implied there must be something else going on.
He reached a similar conclusion about boiling water by observing that while below the boiling temperature, the addition of heat caused the water temperature to increase fairly quickly. But once boiling started, applying the same amount of heat did not cause the temperature to change at all but rather, caused the water to become vapor at the same temperature as the boiling water.
He also noted that it took quite a bit of time and heat to convert all of the water from liquid to vapor relative to the amount of heat it took to simply raise the temperature to the boiling point. In other words, a significant amount of energy had to exist in the water vapor that was generated by the boiling process, even though its temperature was the same as that of the liquid water it came from. That energy was invisible in the context of the conventional way of measuring heat (temperature) and he termed it “latent energy”.
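A rough calculation shows just how large the effect Black observed is. The comparison below is mine, not Black's; the 1.0 Btu/(lb·°F) specific heat of water and the roughly 970 Btu/lb latent heat of vaporization at 212°F are standard approximate values.

```python
# Rough comparison (Btu per lb of water) behind Black's observation:
# heating liquid water from 32 to 212 deg F vs. boiling it all away.
c_p_water = 1.0   # Btu / (lb * deg F), approximate specific heat of water
h_fg_212 = 970.0  # latent heat of vaporization at 212 deg F, approximate

sensible = c_p_water * (212 - 32)  # 180 Btu/lb: freezing point to boiling point
latent = h_fg_212                  # 970 Btu/lb: all of it to vapor, same temperature
print(round(latent / sensible, 1))  # 5.4: boiling away takes over five times the heat
```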
Enthalpy is the term we use to refer to the total energy content of a parcel of air. It will be exactly equal to the sum of the sensible energy and latent energy in the air parcel and is typically expressed in terms of energy per unit mass; Btu per pound in the system of units we typically use here in the United States.
The symbol h is often used for enthalpy. There are a couple of conventions that often show up in psychrometric discussions, steam tables, and psychrometric equations.
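To make the sensible-plus-latent idea concrete, here is a sketch of a common ASHRAE-style approximation for moist air enthalpy in the units used above (Btu per pound of dry air). The example state point is arbitrary, chosen just for illustration.

```python
def moist_air_enthalpy(t_db_f, w):
    """Approximate enthalpy of moist air, Btu per lb of dry air.
    t_db_f: dry bulb temperature, deg F
    w: humidity ratio, lb of water vapor per lb of dry air
    Sensible term: 0.240 * t. Latent term: w * (1061 + 0.444 * t),
    which carries the energy of the water vapor in the parcel."""
    sensible = 0.240 * t_db_f
    latent = w * (1061.0 + 0.444 * t_db_f)
    return sensible + latent

# Example: 75 deg F air with a humidity ratio of 0.0093 lb/lb
# (roughly 50% RH at sea level).
print(round(moist_air_enthalpy(75.0, 0.0093), 1))  # 28.2 Btu/lb
```

Note that for perfectly dry air (w = 0), the enthalpy is purely the sensible term.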
You may actually think we already were in the weeds. And we probably are a little bit.
But the weeds are more like Queen Anne’s Lace …
… and Dandelions, both of which I actually happen to like and thus, don’t consider weeds. Truth be told, there are very few plants that I don’t like (and there are very few things that don’t fascinate me at some level). So there are very few things I consider weeds (plant-wise and otherwise).
But I am pretty sure I am a bit odd that way, so I am just trying to draw a line of distinction to acknowledge that I realize what I am doing here in that context.
Having said that, there have been occasions where I was totally confused because, for instance:
Stuff like that. So, as I was writing this post, I was including all of that information in the stream of it. But doing so made it even longer than I had thought it would be. In addition, at one point, I realized that it could obscure the real information I was trying to convey.
But, since some of the details I am alluding to here are important to be aware of, I decided I would create a “weed patch” at the end of the post and put that information there so you could jump to it if you wanted to or just keep moving forward through the primary content of the post.
So if you are interested, the “weed patch” contains the following “weeds” along with a “Back to Contents” link so you can get back to where you came from pretty easily if you go there.
O.K.; enough of that.
The reason that understanding sensible, latent and total energy matters in the context of a discussion about evaporative cooling is that the amount of cooling – i.e. the energy change – provided by an evaporative cooling process is very much dependent upon the amount of moisture in the air and the latent energy it represents. That means that if we really want to understand the energy content of an air sample, we need to know more than its temperature. We also need to have some sense of the amount of water vapor it contains.
That is basically what the science of psychrometry is about; it is the study of the physical and thermodynamic properties of gas/vapor mixtures. The sensible energy is reflected by the psychrometric property called dry bulb temperature. The latent energy is reflected by a number of psychrometric properties including relative humidity, wet bulb temperature, and dew point temperature. Both the sensible and latent energy (total energy) are reflected by the property of enthalpy.
Quantifying how much moisture is in the air is actually much harder than it sounds, and we have been trying to figure out how to do it for a long time. While reading The Invention of Clouds, which is about Luke Howard (the person who came up with the system we use to this day to classify and discuss clouds) I learned that in China, over 2,000 years ago, during the Han Dynasty, the scientists of the time used the change in weight of a dry piece of charcoal that was exposed to the atmosphere as a measure of humidity; pretty clever.
As a somewhat related aside, one of my favorite philosophical quotations comes from a 6th-century BC Chinese philosopher named Lao-Tzu, who once said:
If lightning is the anger of the gods, then the gods are concerned mostly about trees.
Anyway, there is a multiplicity of approaches that we have used over the years to try to quantify the moisture content in a sample of air. The Malcolm J. McPherson reference I provide a bit further down has a pretty good discussion about them if you are interested.
But if you are trying to do building science and quantify latent energy in an air sample, you will eventually run across a discussion of the concept of adiabatic saturation. The concept is important because it is the basis for wet bulb temperature measurements, one of the basic ways we assess moisture content in the air.
The device that is used to define adiabatic saturation is, appropriately enough, an adiabatic saturator. That name sounds scary and complicated and may cause you to want to run off and pursue something else.
But it’s actually a relatively simple device and (thank goodness) nowhere near as complex as a turboencabulator. Now that is a device where the scariness of the name is warranted, due to its reliance on a mixture of high S-value phenyhydrobenzamine and 5 percent reminative tetraiodohexamine for operation rather than a mix of air and water vapor.
In addition, critical to the functionality of a turboencabulator is the alignment between the two spurving bearings and the pentametric fan, which of course, requires that six hydrocoptic marzelvanes be installed on the ambifacient lunar vaneshaft to prevent side fumbling.
In contrast, the entry and exit points in the adiabatic saturator can have significant misalignment issues as long as they are far enough apart to allow the adiabatic saturation process to run to completion.
Truth be told, if success in building science relied on a deep working familiarity with the principles of turboencabulation, many of us would have fallen by the wayside given the complexity.
But thankfully, to understand evaporative cooling and for that matter, the psychrometrics of moist air, we only need to grasp the operation of an adiabatic saturator. That’s because in reality, an evaporative cooler is just a practical implementation of an adiabatic saturator.
Adiabatic saturation is a kind of thought experiment that involves a device in which a parcel of air is cooled adiabatically (without the addition of heat from an external source) to saturation (100% relative humidity) by evaporating water into it. All of the energy (latent heat) required by the evaporation process comes from the parcel of air, and as a result, the parcel of air is cooled (sensible energy is reduced) as its moisture content (latent energy) increases.
Aside from the explanations given to me by my mentors, I have encountered two written explanations of adiabatic saturation that seemed very approachable to me. One is provided by Willis Carrier in his book Modern Air Conditioning, Heating and Ventilating, where he describes a process that involves a fan blowing air through an insulated box full of wetted excelsior (softwood shavings that were used to package fragile items back in the olden days).
You can still find copies of the book and to me, it is worth having for a number of reasons ranging from sentiment to the fact that one of the stated goals of the book was to present the material in a manner that would not only be useful to the scientifically minded, but also to those who had a technical interest but not an extensive background in engineering and science.
In other words, they hoped to convey somewhat complex information in a useful manner to people coming into the field from some other industry, like airplane mechanics in my case; people who have taken an interest in building science but are coming at it from outside of the engineering profession.
And, in my opinion, the authors did a pretty good job of it. So, I have scanned the pages on adiabatic saturation from my copy of the book and put them on a page on our commissioning resources web site if you are interested.
The other explanation that made sense to me is part of the chapter on psychrometrics (Chapter 14) in a book by Malcolm J. McPherson titled Subsurface Ventilation and Environmental Engineering where he uses the analogy of air flow through a long tunnel with no heat sources in it and a puddle of water on the floor, which I imagine might be what some parts of a mine might be like.
There seems to be a .pdf copy of Mr. McPherson’s psychrometrics chapter out there in the public domain if you want to take a look. In addition to providing an approachable explanation of adiabatic saturation, it is also an approachable explanation of psychrometrics in general so you might find downloading a copy to be useful.
For the purposes of this post, I made a little diagram to illustrate the adiabatic saturation concept as I understand it.
You start out with an insulated chamber so that the air and water in it will not experience any heat transfer from external sources which is what makes the process adiabatic.
The chamber also needs to be very, very long, some say infinitely long (which I guess is why you seldom see one sitting around out there in the field, since they would get in the way a lot). But the length is necessary so that by the time the air parcel exits the chamber, it has come into equilibrium with the liquid water in the pool inside the chamber and is saturated, meaning the relative humidity is 100%.
In other words, the air is going to exit the process at a point on the saturation curve on the psychrometric chart.
Since water will be evaporated from the pool inside the chamber into the air stream, there needs to be a water make-up connection. But to ensure that energy is not transferred from the water to the air stream by radiation or convection, the temperature of the water must be controlled to match the saturated leaving air temperature so that by the end of the process, the water temperature has no influence on the energy content of the air.
Bear in mind that the pressure of the parcel of air entering the process is created by the combined action of the constituent air elements as well as the action of the water vapor molecules, each contributing to the total pressure. The pressure contributed by a constituent element is called its partial pressure. If you want a more detailed explanation of this, or at least my take on it, you may want to take a look at the blog post titled Build Your Own Psych Chart – A Few Fundamental Principles.
Since the air coming into the process is not saturated, the partial pressure of the water vapor it contains is lower than the vapor pressure of the water in the pool inside the chamber. Thus, there is a driving potential causing water to evaporate from the pool and become water vapor in the air parcel.
Conceptually, this is very similar to sensible heat being transferred from a warm object to a cold object. The temperature difference is what causes the heat transfer to take place and the bigger the temperature difference, the higher the heat transfer rate will be.
In the case of water vapor, it is the difference in vapor pressure that causes the water vapor to move around. It will be inclined to travel from an area with a high vapor pressure – for instance the immediate vicinity of a liquid water surface – to an area of lower vapor pressure – for instance, the dry parcel of air entering and moving through the adiabatic saturator.
Because we have insulated the adiabatic saturation chamber and we are maintaining the make up water temperature at a fixed value that is identical to the leaving air temperature, the only source of energy available to cause the water to evaporate is the sensible energy in the air parcel. As a result, the air parcel is cooled while its moisture content is increased until it becomes saturated. At that point, the driving potential (the difference between the partial pressure of the water vapor in the air and the vapor pressure of the water in the pool) is zero and no additional water is evaporated.
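If you want to put rough numbers to that driving potential, you can estimate vapor pressures with a standard correlation. The sketch below uses the Magnus correlation, which is my choice for illustration rather than anything from this post, and a made-up example condition (95°F air at 20% RH over a 95°F pool surface).

```python
import math

def sat_vapor_pressure_psia(t_f):
    """Approximate saturation vapor pressure over liquid water.
    Uses the Magnus correlation (deg C / hPa form) converted to
    deg F in, psia out. Good to a fraction of a percent over
    typical HVAC temperatures."""
    t_c = (t_f - 32.0) / 1.8
    e_hpa = 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))
    return e_hpa * 0.0145038  # 1 hPa = 0.0145038 psia

# Driving potential: vapor pressure at the pool surface vs. the
# partial pressure of the water vapor already in the air parcel.
p_surface = sat_vapor_pressure_psia(95.0)  # saturated at the surface
p_air = 0.20 * p_surface                   # 95 deg F air at 20% RH
print(p_surface > p_air)  # True: water will evaporate into the air
```

As the parcel picks up moisture, p_air climbs toward p_surface; when they are equal, the driving potential is zero and evaporation stops, which is the saturated end state described above.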
Since all of the energy required to saturate the air came from the sensible energy in the air when it entered the device, the latent energy added is exactly equal to the sensible energy lost. The resulting temperature is called the adiabatic saturation temperature or thermodynamic wet bulb temperature, the technical definition of which is:
The temperature a volume of air would have if cooled adiabatically to saturation by evaporation of water into it, all latent heat being supplied by the volume of air.
It is the difference between this parameter and the dry bulb temperature of the air entering the process that sets how much cooling will occur for a given air parcel.
This is a very important thing in the context of evaporative cooling. For a dry air parcel from, say, the Grand Canyon area in the summer, the difference will be large compared to that of a moist air parcel from say, central Pennsylvania after a summertime thunderstorm. As a result, more evaporative cooling can be produced by the Grand Canyon air parcel than the central Pennsylvania air parcel.
For a saturated air parcel, there is no difference between the dry bulb and adiabatic saturation temperature and thus, no evaporation (and no cooling) will occur.
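That difference between the entering dry bulb and the adiabatic saturation temperature (the wet bulb depression) is exactly what drives a simple model of a real direct evaporative cooler. The sketch below uses the common effectiveness relationship; the 85% effectiveness is a typical wetted-media value I am assuming for illustration, not a property of the ideal adiabatic saturator, and the state points are made up.

```python
def direct_evap_leaving_temp(t_db_f, t_wb_f, effectiveness=0.85):
    """Leaving dry bulb temperature from a direct evaporative cooler:
    t_leaving = t_db - effectiveness * (t_db - t_wb).
    An ideal adiabatic saturator would have effectiveness = 1.0;
    real wetted-media coolers typically run in the 0.8 to 0.9 range."""
    return t_db_f - effectiveness * (t_db_f - t_wb_f)

# Dry desert air: 95 deg F db / 60 deg F wb -> lots of cooling available.
print(round(direct_evap_leaving_temp(95.0, 60.0), 1))  # 65.2 deg F
# Humid air: 95 deg F db / 85 deg F wb -> much less cooling available.
print(round(direct_evap_leaving_temp(95.0, 85.0), 1))  # 86.5 deg F
```

And for saturated air (db equal to wb), the model returns the entering temperature unchanged: no evaporation, no cooling.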
From a conservation of mass standpoint, the adiabatic saturation process looks something like this for a parcel of air that enters the process totally devoid of any moisture.
Note that there is a net increase in mass through the process because of the water vapor that is added to the air parcel. This is important and it is why the enthalpy (total energy content) of the air parcel increases through the process.
From a conservation of energy standpoint, the adiabatic saturation process looks like this for that same parcel of air.
Essentially, the equation says that there is an increase in energy (enthalpy) through the process due to the addition of the water that is evaporated into the air parcel.
But to fully appreciate what is going on, I am going to expand the terms in the relationship above a bit. And in doing that, I am going to focus on a special case because (I think) it will allow me to make the point I am trying to make in a bit less confusing manner. Specifically, I am going to focus on the case where the air entering the process is totally dry (0% RH).
The expanded form of the equation includes terms for the sensible energy that is lost from the air parcel (the green term) and the latent energy that is gained by the air parcel (the purple term) as it moves through the adiabatic saturation process. Since the latent energy increase is exactly equal to the sensible energy decrease (by the definition of the process), then the combination of the two terms ends up being zero.
That means that the only reason that there is an energy gain in an evaporative cooling process is due to the energy that comes in with the mass of the water that is evaporated.
In some ways it’s kind of hard to get your head around the energy represented by the purple and green terms in the equation. It’s there, but it’s not there, kind of like Wile E. Coyote’s ACME Corporation portable hole. But the reality is that this is a very useful thing to recognize for those of us using psychrometrics to assess HVAC systems.
Stated mathematically, the words the latent energy increase is exactly equal to the sensible energy decrease (i.e. the purple term in the expanded equation is exactly equal to the green term) look like this on a per pound of air basis using psychrometric parameters for our special case (check out the Weed Patch for the more general case where the air coming into the process has some water vapor in it).
In other words, we could figure out how much water the totally dry air could hold if we saturated it by measuring the temperature change through the process and multiplying it by the specific heat of air (the 0.24 value).
The temperature change through the process is the difference between the entering dry bulb temperature and the leaving dry bulb temperature. The leaving dry bulb temperature is equal to the adiabatic saturation temperature, by the definition of the process.
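Putting numbers to that energy balance for the special totally dry entering air case looks something like the following sketch. The h_fg correlation is a common linear approximation, and the 103°F/60°F state points are made-up illustrative values, not measurements from the field class.

```python
def saturation_humidity_ratio(t_entering_db_f, t_sat_f):
    """Humidity ratio picked up by perfectly dry air that is saturated
    in an adiabatic saturator (the special 0% RH entering case).
    Energy balance: 0.24 * (t_entering - t_sat) = W_s * h_fg(t_sat).
    h_fg is approximated as 1093 - 0.556 * t (Btu/lb, t in deg F)."""
    h_fg = 1093.0 - 0.556 * t_sat_f
    return 0.24 * (t_entering_db_f - t_sat_f) / h_fg

# Illustrative example: perfectly dry air entering at 103 deg F that
# leaves saturated at 60 deg F.
w_s = saturation_humidity_ratio(103.0, 60.0)
print(round(w_s, 4))  # 0.0097 lb of water per lb of dry air
```

Run the same calculation at a series of saturation temperatures and you have the makings of the saturation curve mentioned below.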
If we could come up with those three numbers, then we could figure out how much water a totally dry parcel of air at a specific dry bulb temperature would hold if we saturated it using an adiabatic saturator. Heck, if we could do it, that would let us draw the saturation curve for a psych chart. This could be cutting edge!
We can easily measure the dry bulb temperature of the entering air parcel. And the specific heat of air is also a measurable quantity and well documented (check out the weed patch for more on that).
Gosh, if only there was a real world way to measure the mythical adiabatic saturation temperature of the entering air parcel, we would be able to quantify how much evaporative cooling a given parcel of air could produce.
By virtue of a happy thermodynamic coefficient coincidence, a thermometer that has its bulb covered by a wet wick will nearly (but not exactly, more on that later) measure the adiabatic saturation temperature of air at the conditions commonly encountered in an HVAC system.
In fact, the value it indicates is close enough to the adiabatic saturation temperature that we can assume that the adiabatic saturation temperature is identical to the temperature measured by a thermometer with a bulb that is wet.
In fact, when we plot constant adiabatic saturation temperature lines on a psych chart, we are actually plotting constant thermodynamic wet bulb temperature lines. And, we call them “temperature measured by a thermometer with a bulb that is wet” lines.
Well, actually, we generally don’t call them that. But my point is that constant wet bulb lines on a psych chart specifically represent a value that goes by two different names (adiabatic saturation temperature and thermodynamic wet bulb temperature) neither of which is what we typically measure out in the field.
The word “thermodynamic” ahead of the term “wet bulb” reminds us that what we measure with a thermometer with a bulb that is wet (which we often call a wet bulb thermometer) is not quite the same thing as the adiabatic saturation temperature, a.k.a thermodynamic wet bulb temperature.
But we sure feel calmer, and thus, continue to breathe normally, by calling what we measure the “wet bulb temperature” instead of “the temperature measured by a thermometer with a bulb that is wet” or “the approximate adiabatic saturation temperature”.
So having beaten that into the ground, moving forward, I will refer to the temperature measured by a thermometer with a bulb that is wet as wet bulb temperature.
The stationary wet bulb thermometer was one of the earliest ways that folks used to try to understand the amount of moisture in a sample of air by measuring the temperature of a thermometer bulb that was wet (the images below are courtesy of http://physics.kenyon.edu/EarlyApparatus/Thermodynamics/Hygrodiek/Hygrodeik.html).
Empirical data (data based on observation or experience vs. theory or logic) derived using an instrument similar to the images above was likely the starting point for the psych chart as we know it today.
The stationary wet bulb thermometer evolved to the sling psychrometer, which I will describe and illustrate later in the post.
Incidentally, if you are interested in learning a bit more about the history of psychrometrics in our industry, then you might find the ASHRAE Journal article titled Psychrometric Chart Celebrates 100th Anniversary to be of interest. You can find a copy on the Hands Down Software web site (they are the folks behind the free Pacific Energy Center psych chart that I have written about on the blog).
Returning to our discussion about the perfectly dry air parcel that moves through an adiabatic saturator, recall that we had concluded that we could figure out how much water it would take to saturate the air if we knew the entering dry-bulb temperature (tEntering in the equation below) and the adiabatic saturation temperature (tLeaving in the equation below).
Now we know that we can do it if we measure the dry bulb temperature of the air parcel and also measure the temperature of the air parcel using a wet bulb thermometer. There are two interesting things to recognize as you contemplate all of this.
One is that the only reason there is a change in total energy content/enthalpy through the evaporative cooling process is that the water that was evaporated into the process – i.e. the mass that was added – already had energy associated with it; the enthalpy associated with the saturated liquid for water at the conditions entering the process. But bottom line, the change in total energy/enthalpy through the process is entirely due to the addition of mass and the energy it brings into the process.
The rest of the process is just trading some of the sensible energy in the entering air parcel for latent energy in the leaving air parcel, which is the second point of interest.
If you rearrange my expanded form of the conservation of energy equation to show this mathematically, it looks like this.
In other words, the amount of energy that entered the process as sensible energy in the totally dry air does not change; it stays constant.
The only thing that changed was how much of it is sensible energy and how much of it is latent energy at the end of the process. Willis Carrier recognized this and that is where the term Sigma Heat came from.
In the psych chart below, I started with air at the 0.4% cooling design conditions in a number of climates and calculated Sigma Heat for that air as it moved through an adiabatic saturator to saturation, and also what would happen if that air sample entered the saturator at the same adiabatic saturation temperature but with a lower specific humidity.
Notice how the lines are straight lines that follow the constant wet bulb temperature lines and diverge slightly from the constant enthalpy lines.
Just to be clear, Sigma Heat and the amount of sensible energy that is converted to latent energy in the adiabatic saturation process are not exactly the same thing. For the temperatures and pressures we deal with in HVAC, the entering air parcel will have a lot more sensible energy available than is required to saturate it by evaporating water into it.
The important point is that the energy traded from sensible to latent is part of Sigma Heat. And the amount of energy traded is a function of the difference between the entering dry bulb temperature and the entering wet bulb temperature. In other words, Sigma Heat is a pure function of the difference between the entering dry bulb temperature and the entering adiabatic saturation temperature.
At this point, I imagine you have realized that the amount of water vapor that exists in a parcel of air is reflected by its wet bulb temperature. Relatively dry air will have a lower wet bulb temperature than relatively moist air, all other things being equal.
In addition, the amount of water vapor that a parcel of air can hold will be reflected by the difference between its dry bulb temperature and its wet bulb temperature. The difference between the two represents sensible energy available in the air parcel which can be used to pick up moisture via conversion to latent energy. The bigger the difference, the more water vapor that can be evaporated into the air parcel. When the two are identical, the air parcel is saturated and can hold no additional water vapor.
Furthermore, that conversion energy is a component of the Sigma Heat of the air parcel, which remains constant in an adiabatic saturation process. So if you think about it, that implies that for every dry bulb temperature, there is a very specific wet bulb temperature (adiabatic saturation temperature). And:
the wet bulb temperature (adiabatic saturation temperature) will remain constant through an adiabatic saturation process (evaporative cooling process). That means that if you wanted to model an evaporative cooling process on a psych chart, you would do it by moving up a constant wet bulb temperature line.
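Moving up a constant wet bulb line can be captured in a couple of lines of code. This is a minimal sketch; the saturation effectiveness form is the standard way of rating direct evaporative media, but the 85% value and the entering conditions are assumptions I picked for illustration.

```python
# Sketch: a direct evaporative cooling process modeled as movement along
# a constant wet bulb line toward saturation.
def leaving_dry_bulb(t_db, t_wb, effectiveness=0.85):
    """Leaving dry bulb temperature of a direct evaporative cooler.
    effectiveness = fraction of the wet bulb depression achieved;
    the wet bulb temperature itself is unchanged by the process."""
    return t_db - effectiveness * (t_db - t_wb)

# 95°F dry bulb / 67°F wet bulb entering air through 85% effective media
print(leaving_dry_bulb(95.0, 67.0))  # ≈ 71.2°F; wet bulb stays at 67°F
```

Note that a perfect (100% effective) process would deliver air at exactly the entering wet bulb temperature, which is why the wet bulb depression sets the theoretical limit for direct evaporative cooling.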
I can imagine that even if you were only mildly excited by the content of this post up to this point, the revelations of this section have made you ecstatic, perhaps kindling a desire to go get yourself something that can measure wet bulb temperature. So, let’s take a look at a couple of the options for doing that next.
In the olden days, when I first entered the industry, wet bulb measurement involved using a device called a sling psychrometer (the black gizmo to the right in the picture below), which relied on evaporative cooling to directly generate the wet bulb temperature.
It was an improvement over the stationary wet bulb thermometer for a number of reasons including the variable nature of the velocity of air flow across the stationary device.
Nowadays, we use modern electronics and measure relative humidity and dry bulb temperature (the “space age” light gray gizmo on the left in the picture). There are some pros and cons to both approaches, which I will get to in a minute.
In this breath-taking close-up of the sling psychrometer, you can see that it actually has two identical, factory matched thermometers, one of which has a cloth sleeve (called a wick) around its bulb (the upper one in the photo).
The little cap on the left is actually the cover to a water reservoir that the wick threads into. Once the wick has absorbed some water, it will keep the bulb of the thermometer that it encases wet; thus, the term wet bulb.
To take a reading, you use the vertical part that is pointed down and off the picture as a handle and swing the horizontal part as quickly as you can for about 1-2 minutes (while avoiding slamming it into things like walls, ducts, pipes, associates, etc. that are in the vicinity).
As a result of all of this activity, the temperature of the bulb with the wick will drop due to – you guessed it – evaporative cooling. At some point, the temperature of the wick, the bulb, and the water will come into equilibrium with the moisture content in the ambient air and the temperature will stop dropping. That point is what we call the wet bulb temperature.
Slinging the thermometers for 1-2 minutes is harder than it sounds, and if you do it a lot, I suspect you get pretty well-developed forearm muscles on your “psychrometer arm”. But the speed and time are important because you want enough air to flow past the bulb with the wick on it to keep the little microclimate in the area of the wick at about the same condition as the ambient environment.
If you don’t keep it moving fast enough, the water evaporating from the wick will influence the local vapor pressure in the immediate vicinity of the thermometer bulb, which affects a number of things. But bottom line, you end up with a high reading.
When I am using a sling psychrometer, after my first 1-2 minutes of slinging, I stop, take a quick reading, and then sling again for another 30 or so seconds to make sure I have reached the equilibrium state; i.e. if my second reading is the same as the first one, I figure I have.
But if it has dropped some more, I keep on slinging a bit more (aching forearm aside) until I get two readings in a row that are about the same. Here is what my psychrometer looked like right after I slung it in my office earlier today; wet bulb above and dry bulb below.
It’s important to take your reading right away because once the airflow stops, the wet bulb will start to rise pretty quickly. In fact the wet bulb reading in the picture is a bit higher than it was when I stopped slinging because of that effect.
Parallax also comes into play in the context of the picture; you need to read the thermometer “dead-on” and some psychrometers even have mirrored scales to facilitate that. The content just below where this link takes you talks about mirrored scales and parallax if you want to know a bit more.
You also want to be careful not to touch or breathe on the thermometers since that could also throw your readings off. But bottom line, once you know a dry bulb temperature and a wet bulb temperature (or any other indication of moisture), you can use a psych chart or psychrometrics calculator to come up with other parameters like relative humidity.
Or you can just use the handy slide-rule built into the sling psychrometer.
As you can see from the image above, my little field test said the RH in my office is about 58%.
What I like about that number is that it was generated directly by fundamental principles (evaporative cooling and the expansion of the thermometer liquid from the bulb up a capillary tube) with no batteries or electronics in-between what I was measuring and the result.
But it is very much subject to technique and is also limited by the manufacturing tolerances of the thermometers. If you go to the spec sheet for my little Bacharach tool, you will discover that it is accurate to +/- 1°F dry bulb, which, since two thermometers are involved in generating an RH reading, translates to being accurate to +/- 5% RH.
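As a rough illustration of how thermometer tolerance propagates into RH accuracy, here is a sketch using the Magnus saturation pressure approximation and the standard ventilated psychrometer equation. The coefficients are the commonly published ones, but the state point is an assumption I picked to be in the neighborhood of the office reading, not a value from the spec sheet.

```python
import math

# Sketch: how a 1°F wet bulb thermometer error translates into an RH error.
def sat_vapor_pressure(t_c):
    """Saturation vapor pressure in hPa (Magnus approximation), t in °C."""
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

def rh_from_psychrometer(t_db_c, t_wb_c, p_hpa=1013.25):
    """RH (%) from dry bulb / wet bulb via the psychrometer equation."""
    e = sat_vapor_pressure(t_wb_c) - 0.000660 * p_hpa * (t_db_c - t_wb_c)
    return 100.0 * e / sat_vapor_pressure(t_db_c)

t_db, t_wb = 22.2, 16.9        # about 72°F / 62.4°F, similar to the office
base = rh_from_psychrometer(t_db, t_wb)
shifted = rh_from_psychrometer(t_db, t_wb + 5.0 / 9.0)  # wet bulb off 1°F
print(round(base, 1), round(shifted - base, 1))
# the wet bulb error alone moves the RH result by roughly 4%
```

Stack an independent ±1°F dry bulb error on top of that and you can see where a combined ±5% RH figure for a matched pair of ±1°F thermometers comes from.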
As discussed earlier in the post, wet bulb temperature lines on a psych chart are more specifically thermodynamic wet bulb temperature or adiabatic saturation temperature lines. The wet bulb temperature we measure out in the field is not exactly the same thing and there are a number of reasons for that.
At a fundamental level, the thermodynamics behind what causes a wet bulb thermometer to register a temperature lower than the dry bulb temperature are different from the process occurring in an adiabatic saturator.
But by a happy coincidence of physics, the coefficients associated with the thermodynamics of the real wet bulb process (there is a convective heat transfer coefficient and a mass transfer coefficient involved) are such that the result is very nearly identical to the thermodynamic wet bulb temperature, at least for a mixture of air and water vapor in the range where we apply the device in building science.
If you want to know a bit more about that, there is a YouTube video by a guy named Mitchell Paulus that will give you a pretty good idea of the mathematics behind what I just typed. He also has a couple of videos where he goes through the mathematics of adiabatic saturation, which you probably would want to look at first since that math becomes the foundation for the math in the video about the difference between sling psychrometer wet bulb and thermodynamic wet bulb.
There is also a paper out there titled Calculation of the Natural (Unventilated) Wet Bulb Temperature, Psychrometric Dry Bulb Temperature, and Wet Bulb Globe Temperature from Standard Psychrometric Measurements that explores the topic and includes charts showing the amount of deviation and how it varies with different conditions like wind speed and the radiant temperature of the surroundings.
As I said previously, in my day, we measured wet bulb temperature by slinging a psychrometer until our arm ached, and we liked it. But people these days want everything to be easy schmeezey so they buy fancy schmancy electronic gizmos to measure wet bulb temperature with the press of a button.
Truth be told, so do us old timers.
Contrast the Bacharach result with what my modern electronic gizmo said was going on at the same time (a Vaisala HM 40 series hand held humidity and temperature meter). At the conditions that existed in my office at the time of the reading, it is accurate to +/- 0.36°F and +/- 1.5% RH.
Pretty different from what my trusty sling psychrometer told me.
But when I plotted the points on the psych chart, drew a box around each to reflect the accuracy of the associated instrument, and projected the Vaisala accuracy window across the Bacharach accuracy window, I concluded that, given the stated accuracies, both instruments did their job.
Note how the Bacharach RH window overlaps the projected RH window from the Vaisala HM 40 as does the dry-bulb temperature window. In terms of wet-bulb accuracy, this test result says the Bacharach is probably more like +/- 1.5°F vs. +/- 1°F.
But, as I discussed above, the wet bulb lines on the chart are thermodynamic wet bulb temperature lines, or more specifically, adiabatic saturation temperature lines, and the number they represent is not exactly the same as the temperature I measured with a thermometer bulb that was wet.
The test also says that the sling reading will tend to be higher than the reading derived from the Vaisala instrument. I have used the sling a lot longer than I have used an instrument like the Vaisala; the latter simply was not available for the first part of my career, at least not at an affordable level for me. But in thinking back through my experience taking readings with both instruments, I believe that most of the time, this tended to be true; i.e. the Bacharach relative to something like the Vaisala would tend to be high.
That could be the result of technique, the resolution capabilities of a glass tube with degree marks etched into it, and even the cleanliness of the wick; mine is a bit dirty right now and I should probably replace it. A dirty wick can impact the reading because it can affect how well the water is absorbed and thus, how “wet” the wet bulb really is, along with how easily the water can evaporate from the wick.
A couple of important and interesting things to note about all of this.
If you wanted to buy a sling psychrometer like my Bacharach tool, it is currently priced at just a bit over $100 on Grainger. In contrast, the Vaisala HM40 starts at $534 for an instrument like the one in the picture and runs up to $1,168 for one with a longer, separate hand-held probe.
For even tighter accuracies, you might be looking at something like a Vaisala HMT330, which is the standard instrument used by the U.S. Climate Reference Network. Those can start at about $1,900 and go up from there to as high as $3,000 or $4,000 depending on accessories and the specific application targeted for the device. For the added dollars, you get +/- 1% RH accuracy, so a 0.5% improvement over the HM40 that I have. The temperature accuracy is +/- 0.36°F, which is the same as the HM40.
If you assume an instrument similar to the HMT330 is being used at the Automated Surface Observation System (ASOS) station at the Portland International Airport (PDX), here is what it thought the humidity was outside when I was taking my readings inside the office.
It can be helpful when you are trying to understand what is going on in a building to remember that the air that is inside came from the outside. That means that the outdoor psychrometric conditions establish the baseline for the conditions inside the building.
Most of the time, unless you have a bunch of open desiccant containers lying around, the dewpoint/specific humidity inside will be no lower than the dewpoint/specific humidity outside. In fact, it will likely be a bit higher because there are things going on in most buildings that add moisture to the air in addition to adding heat.
The ratio of sensible heat or energy added inside the building to the total heat or energy added inside the building (i.e. sensible plus latent energy) is called the Sensible Heat Ratio or SHR. On a psych chart, if you plot that line using the SHR scale, it gives you a “visual” on how much energy is added to a parcel of air as it goes from one state to a different state.
It also helps you determine the leaving air temperature required from a cooling coil given a specific sensible and latent load to be addressed in a zone served by the coil. I discuss that in a bit more detail in the blog post about how to use Ryan’s free psych chart resource if you are interested.
When I plotted the implied SHR line for my office, assuming that the air outside my house was about the same as the air at the airport, which is 13 miles East-Northeast of me, it implied that the SHR was about 0.65 (the orange line in the image above).
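To make the SHR arithmetic concrete, here is a hedged sketch of backing an implied SHR out of an outdoor and an indoor state point. All of the numbers below are illustrations I made up to land in the same neighborhood, not the actual readings from the post, and the constants are the usual approximate HVAC values.

```python
# Sketch: implied Sensible Heat Ratio from two psychrometric state points.
CP_AIR = 0.244   # Btu/(lb·°F), approximate specific heat of moist air
H_FG = 1060.0    # Btu/lb, approximate latent heat of vaporization

def implied_shr(t_out, w_out, t_in, w_in):
    """SHR of the load that moved air from the outdoor state (t_out °F,
    w_out lb water/lb dry air) to the indoor state (t_in, w_in)."""
    q_sensible = CP_AIR * (t_in - t_out)
    q_latent = H_FG * (w_in - w_out)
    return q_sensible / (q_sensible + q_latent)

# e.g. 45°F / 0.0050 lb/lb outside and 72°F / 0.0083 lb/lb in the office
print(round(implied_shr(45.0, 0.0050, 72.0, 0.0083), 2))  # ≈ 0.65
```

The point of the exercise is that relatively modest moisture gains indoors can pull the implied SHR well below the 0.75 to 0.95 range you might otherwise assume.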
Initially, that seemed a bit high; typically, the SHR for a house or office will be in the 0.75 to 0.95 range. But my office has a number of moisture sources in it, some of which are a bit out of the ordinary including:
Plus, Kathy was doing some cooking up-stairs and that was generating enough moisture that the windows on the French Doors to the deck from the kitchen had some dew on them. So I am not surprised in hindsight by the higher than normal SHR.
My main point in bringing all of this up is to show how challenging it can be to measure relative humidity and how the humidity indoors is going to be related to the humidity outdoors somehow. For me, these things have been important considerations to keep in mind as I work with existing buildings.
Short of breaking a thermometer (what happens if you don’t avoid walls, ducts, pipes, associates, etc. in the vicinity of your slinging), there is nothing much to cause my Bacharach instrument to go out of calibration. My bifocals are probably the biggest issue along with my age because ….
… what was I saying?
Anyway, if you did need to “recalibrate” the sling, then it would involve ordering 2 new thermometers for about $31 each.
In contrast, Vaisala recommends recalibration of their instrument once a year. You can have that done at the factory for $292 per year. If you want an extended warranty that covers parts, no questions asked, for three years plus calibration plus priority service plus shipping and handling, then it costs $380 per year. Alternatively, you could purchase a calibration tool for just under $1,000 and do the calibration on your own.
If you don’t do the calibration, then the industry data out there suggests that at some point, probably sooner rather than later, the Vaisala will have about the same accuracy as the Bacharach. This link (page down a bit after you go there) takes you to a page where there is a report done by the Iowa Energy Center for the National Building Control Information Program (NBCIP) that looked at out of the box accuracy and maintained accuracy for blind purchased humidity transmitters. The results were all over the place as you can see from the images below, which were extracted from the report.
Granted, the report is several years old now. But the technology in the electronic relative humidity instruments we are using currently is the same basic technology that was being used back when the report was developed.
Given the data in the report, which was specifically targeted at commercial building HVAC sensors, the data I glean from my Bacharach is probably about as good or even better than the average DDC system relative humidity sensor, especially if a high accuracy sensor was not specified and especially if the sensor has not received regular maintenance.
In the experience of FDE as a whole, calibrating a humidity sensor annually would be a minimum requirement. For critical applications, it is probably desirable to calibrate a humidity sensor every three to four months. This conclusion is generally consistent with the NBCIP Humidity Transmitter Product Testing Report Supplement (which looks at the long term accuracy of the sensors covered by the report mentioned previously), although:
For most buildings, where I am just trying to get a general idea of what might be going on, I am pretty comfortable with numbers from my sling if I don’t have anything else with me at the time. But that is contingent on using good technique and not taking the number I get more seriously than warranted given the stated accuracy of the device. If the system I am looking at uses a well maintained, high accuracy sensor, then that is a different situation in terms of what I would do with the data I got from my sling and I would likely want to use my HM40.
Just in case you wanted to know …
With regard to the hf term in the list earlier in the blog post; if you are doing chemistry, it is used to represent the enthalpy of formation, not the enthalpy of a saturated liquid. The way I think of enthalpy of formation is that it is the amount of energy it took to create the substance in the first place.
Enthalpy of formation values are based on molar quantities (the chemistry unit for amount of something usually in terms of number of atoms or molecules or fundamental particles) and are referenced to a specific temperature and pressure condition, typically 1 atmosphere of pressure and 298.15 K as I understand it (about 77°F). In contrast, the enthalpy of a saturated liquid is typically given on a Btu per pound basis for a specific saturation temperature and pressure.
The way I think of it is the saturated liquid enthalpy value includes the enthalpy of formation along with the additional energy associated with the difference between the saturation temperature you are working with and the reference temperature for the enthalpy of formation.
In some ways, the industry is not exactly good about using consistent sets of symbols and terms and you will find different symbols used in different discussions about the same concept by different resources, including some of the ones I will mention. So it is important to make sure you know what a symbol means in the context in-which it is being used. And it’s also important to document your use of a symbol in anything you are working on, just so there is no confusion.
Towards that end, because I am trying to write this so that it is approachable for folks who are wanting to get into working with existing buildings and learn building science, but who don’t necessarily have engineering backgrounds, I am going to use the term “energy” instead of enthalpy most of the time unless I need to explain something very specific in the context of enthalpy. And I will use the symbol “Q” with subscripts like S for “sensible” or L for “latent”. So, for instance, I will use QSAirIn instead of hAirIn to represent the sensible energy content of a parcel of air entering a process.
That’s sort of a judgment call on my part. But in my personal career path, I came into this topic from the perspective of a somewhat math-phobic airplane mechanic. And from that perspective, the term energy was less intimidating than the term enthalpy. From a technical purity perspective, a few objections are probably justified. But in the context of trying to promote a broader understanding, I am taking a few liberties and acknowledging that here (and hope it makes things easier instead of harder to understand).
While you probably don’t need to worry about it too much in the real-world, day-to-day building operations and commissioning environment, out here in the weed patch it is probably worth noting that:
It’s important to remember that air is a mixture of elements, primarily Nitrogen and Oxygen but including a number of others.
These elements are not bonded together; they are all just bouncing around together, with Boyle’s Law and Dalton’s Law being good models for how we think they are going about doing it. In other words, there is no such thing as an air molecule in the technical sense, even though we often talk about air molecules.
That means that the term “enthalpy of formation” is not really appropriate for air, at least that is how I understand it. The enthalpy of a parcel of air is the sum of the enthalpies of the mixture of pure substances it contains, each of which, being a pure substance, has an enthalpy of formation.
That is not totally true in the general case because mixing substances can often release or absorb energy. But for gases, this effect is generally negligible and we can usually just add the enthalpies of the constituent elements.
If you go to a chemistry book and add up the enthalpies of the constituents of air, you do not end up with a value of 0 Btu/lb at 0°F, which is what most psych charts show for the enthalpy of totally dry air.
Furthering the confusion, if you look at a chart in SI units, you find that the enthalpy is 0 kJ/kg at 0°C. Since 0°F and 0°C are two different temperatures, you might wonder how the enthalpy of air could be 0 at both of them, or at least I did.
The answer to that is that what we typically are concerned about when working with psychrometrics is the change in enthalpy, not the absolute value of it. At some point, for psych charts, the zero values were arbitrarily referenced as indicated above.
For our purposes in HVAC psychrometrics, we generally consider air to be a superheated gas and behave as an ideal gas, meaning it follows the ideal gas relationship.
In words, the ideal gas equation says, among other things, that if the temperature changes, the pressure and volume will change in proportion.
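For what it’s worth, the words above can be written out as a quick calculation. This is a minimal sketch using the familiar English-unit gas constant for dry air; the function name and example state are mine.

```python
# Sketch: the ideal gas relationship PV = mRT applied to dry air,
# rearranged to compute density (m/V = P/RT).
R_AIR = 53.35  # ft·lbf/(lb·°R), gas constant for dry air

def density(p_psf, t_rankine):
    """Dry air density in lb/ft³ from absolute pressure (lb/ft²) and °R."""
    return p_psf / (R_AIR * t_rankine)

# Standard air: 14.696 psia = 2116.2 lb/ft² and 70°F = 529.67°R
rho = density(2116.2, 529.67)
print(round(rho, 4))  # ≈ 0.0749 lb/ft³, the familiar "standard air" density
```

Holding the pressure constant and raising the temperature in that function shows the proportionality in action: warmer air is less dense, which is the root of quite a few things we care about in HVAC.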
But, if you cool air enough (to about –318 °F), it will become a liquid, which is a phase change, and a phase change is a deviation from ideal gas behavior. When something goes through a phase change the temperature and pressure hold constant while there is a very large change in volume as energy is added to the system.
If you use a steam table to get scientific about this phenomenon for water, like the Keenan and Keyes table below …
… and compare the specific volume of saturated liquid water at atmospheric pressure with the specific volume of saturated water vapor at atmospheric pressure, you would find that they differ by a factor of about 1,600. In other words, one cubic inch of liquid water becomes about 1,600 cubic inches of water vapor when you boil it.
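You can verify that factor with a couple of lines of arithmetic. The specific volumes below are the commonly tabulated values for water at 212°F and one atmosphere, consistent with the Keenan and Keyes table.

```python
# Sketch: the ~1,600:1 expansion when liquid water becomes vapor at
# atmospheric pressure, straight from steam table specific volumes.
v_f = 0.01672   # ft³/lb, saturated liquid water at 212°F
v_g = 26.80     # ft³/lb, saturated water vapor at 212°F
print(round(v_g / v_f))  # ≈ 1603
```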
The reason this matters is that unless the air you are working with is absolutely devoid of moisture (RH = 0%), one of the constituents bouncing around in the parcel of air with the other molecules listed above is water in a vapor state.
That means even though the air in our HVAC systems will never become cold enough to change phase (even in places like Minnesota or Siberia or Antarctica), the water vapor in it can and will.
So, confusingly enough, one of the common constituents of air – water vapor – does not behave as an ideal gas some of the time. But since it is such a small constituent, even if the air is saturated, for our purposes, we can assume ideal gas behavior for air.
But we can’t assume that the water will not change phase in our systems or in our environment, and that is important to us for a whole bunch of reasons.
The equation I use for the conversion of sensible energy to latent energy in the body of the blog post is associated with a special case; i.e. the case where the air entering the adiabatic saturator is totally dry. Most of the time, that is not the case; the air entering the process will already have some water vapor in it. And the water vapor brings energy into the process with it, just like the dry air did.
That means that in the more general case (and the more realistic case in terms of what you will actually run into out in the field), the words “the latent energy increase is exactly equal to the sensible energy decrease” would look more like this on a per pound of air basis using psychrometric parameters.
The darker green term represents the sensible energy content of the water vapor entering the process, and is the difference relative to the special case I used in the body of the blog post.
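A sketch of that more general balance follows. The constants are the usual approximate HVAC values, and the state point in the example is an assumption for illustration, not a number from the post.

```python
# Sketch: the general sensible-equals-latent balance for an adiabatic
# saturation process when the entering air already carries water vapor.
CP_AIR = 0.24     # Btu/(lb dry air·°F)
CP_VAPOR = 0.45   # Btu/(lb water vapor·°F)
H_FG = 1060.0     # Btu/lb, approx. latent heat at typical wet bulb temps

def humidity_leaving(t_in, w_in, t_out):
    """Leaving humidity ratio (lb water/lb dry air). The sensible energy
    given up by the dry air AND its entering water vapor goes into
    evaporating additional water."""
    sensible_drop = (CP_AIR + w_in * CP_VAPOR) * (t_in - t_out)
    return w_in + sensible_drop / H_FG

# 95°F air entering at w = 0.010 lb/lb, cooled evaporatively to 71°F
print(round(humidity_leaving(95.0, 0.010, 71.0), 4))  # ≈ 0.0155 lb/lb
```

The `w_in * CP_VAPOR` term is the code equivalent of the darker green term above: drop it and you are back to the perfectly dry special case from the body of the post.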
Specific heat (also called heat capacity) is a measurable quantity that is defined as the amount of energy it takes to raise a unit mass of a substance through a unit temperature change. Specific heat values for specific substances can be found in tables in thermodynamic and chemistry text books.
In fact, for water, you could figure it out from your copy of Keenan and Keyes (you all have one of those, right?) …
… or by creating your own steam table using REFPROP.
For instance, in the Keenan and Keyes table above, the saturated liquid enthalpy hf (energy content) of saturated water at 209.56°F is 177.61 Btu/lb. At 212°F, it’s 180.07 Btu/lb. Doing the math:
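In code, that math looks like this, using the two enthalpy values quoted above:

```python
# The arithmetic behind the specific heat of saturated liquid water,
# from a finite difference of the two Keenan and Keyes enthalpy values.
h_1, t_1 = 177.61, 209.56   # Btu/lb at °F
h_2, t_2 = 180.07, 212.00
cp = (h_2 - h_1) / (t_2 - t_1)
print(round(cp, 2))  # ≈ 1.01 Btu/(lb·°F), the familiar ~1 Btu/lb/°F for water
```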
If you were to go through a similar process for the saturated water vapor over the temperature range I show from my REFPROP table, which are temperatures commonly encountered in HVAC systems, you would come up with the 0.45 Btu/lb/°F value that is shown in the sensible equals latent energy equation above.
There are similar resources out there for air. This graph was generated using values from a thermodynamic text book.
Hopefully, all of this has given you a sense of the fundamental principles behind an evaporative cooling process. In the next post, I will build on this and take a look at real world evaporative cooling processes. So if you thought this was exciting, wait until you see that.
Senior Engineer – Facility Dynamics Engineering
Sorry to disappoint Star Trek fans, but this is not a workshop about a device used to decipher and interpret alien languages into the native language of the user. Having said that, if you are into building systems commissioning and the related field work, then you may find this even more exciting (in a nerdy sort of way).
Specifically, this post is a “heads-up” to let you know that a no-cost training that is focused on using the Pacific Energy Center’s Universal Translator tool will be offered on November 13, 2018; you can attend in person or via the internet.
The reason this is exciting news is that the Universal Translator is an ever evolving, feature rich tool that supports trend analysis and diagnostics of building system data retrieved from DDC control systems, energy management systems, and data loggers. The image to the left illustrates some of its capabilities including:
to name just a few of the features illustrated in the current brochure. You can download the current brochure from the Universal Translator page on our Commissioning Resources web site, where you will also find links to the website that will allow you to access a no-cost copy of the tool and the YouTube video channel that has been created to support it.
So bottom line, in my opinion, it will be well worth your time to visit the UT Online website and obtain your own personal copy of the tool. And you may want to consider attending at least a portion of the upcoming class, either in person or via the internet. I have a few other commitments that day but in-between things, I plan to join the internet session and brush up on my UT skills. So hopefully, I will “see” you then.
Senior Engineer – Facility Dynamics Engineering
The picture above is a panorama shot of the tool lending library at the Pacific Energy Center. As I took the picture, I was literally thinking so many tools, so little time, wishing I could spend some of my day learning more about some of the hardware arrayed before me that I was not familiar with. If you follow the blog you likely have heard me mention the lending library before and you can find a description and a link to their tool inventory on our Commissioning Resources web site.
And if the picture causes you to think something like Wow, that would be a cool place to work, well, keep on reading.
The library currently has over 5,000 tools in its inventory, ranging from something as simple as a $15 diameter tape, which is a specialized tape measure that is applied to the circumference of a pipe but reads out in pipe diameter, to an $8,000 plus FLUXUS F601 transit time ultrasonic flow meter, one of the devices that the tape helps you apply properly. If you are a California public utility customer, you can borrow anything in their inventory free of charge, no matter if you are going to use one of the FLUXUS flow meters to optimize a 5,000 ton central chilled water plant or borrow a HOBO U12-012 RH/Temp/Light/External Input Datalogger to teach your kids or grandkids about wet bulb depression, as illustrated below.
So, you can quickly see that the library is a wonderful resource, for someone in the California public utility service territory.
But the truth is that it is a great resource for anybody wanting to learn about the tools and instruments associated with Building Science because their online inventory includes not only a listing of the instruments available, but also includes the replacement cost of the equipment, links to the manufacturer’s website, data, and software, and many times, application notes that fill you in on how to apply the device.
As a result, I frequently point people all over the country to the site as a resource for learning about tools and making decisions about building their own tool inventory.
But my real reason for putting up this post is to share information about an opportunity to get hands on with the inventory. It came up as the result of the retirement of Bill Pottinger, formerly the coordinator for the library. The job title is Energy Audit Equipment Specialist, and the successful applicant will work with Mary McDonald, who formerly held the position but has moved up to fill Bill’s former role.
As an Energy Audit Equipment Specialist, you will support the day-to-day operations of the lending library, including working with customers and professionals to provide energy and building measurement tools for energy efficiency, demand reduction, and demand response projects in California. You can download the full job description from the Tool Lending Library page on our commissioning resources website and apply for the position at this link.
So bottom line, if you live in the Bay Area and like tools and are looking for a way to break into the technical side of the commissioning industry, this could be a great opportunity. Not only would you be meeting a steady stream of the people working in the industry, you would be getting paid to play with a vast array of the equipment and instrumentation they apply to improve building performance and efficiency. And in doing that, you would be making a contribution to the process yourself, all of which would leave you feeling pretty good at the end of the day I think.
Senior Engineer – Facility Dynamics Engineering
One of my current obsessions is how subtle details regarding how you pipe a cooling tower can make a huge difference in how the flow is distributed. I've actually been interested in that for a long time as the result of an early field experience. And it looks like the building where that happened is still there, and the towers even seem to be in about the same location, although I am sure they must be newer versions relative to the time I was last on the roof (with, I hope, a better piping arrangement).
The building is in downtown St. Louis and Bill Coad sent me there to figure out why one tower basin was overflowing while the other one was making up.
When I tried my hand at calculating the pressure drop in the two different runs of piping, from the point where they came together at a tee to the point where each run connected to a tower cell, I came up with a difference in pressure drop of 0.15 psi. It was the first time I had tried to do a pipe pressure drop calculation for a practical reason, and for a while, I thought I had made a mistake. But had I realized then what I know now, I would have realized that the different water levels were telling me the answer without having to do the math. In other words, the 4 inches of level difference I was seeing in the field is what you get if you multiply 0.15 psi by 2.31 ft.w.c. per psi and 12 inches per foot.
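For what it is worth, that unit conversion is simple enough to sketch in a few lines of code; this is just a quick check of the arithmetic, not anything from the original calculation:

```python
# Convert a pressure drop difference between two parallel pipe runs into
# the water level difference it would produce in open tower basins.
# Assumption: the standard 2.31 ft.w.c. per psi conversion (water near 60F).

PSI_TO_FT_WC = 2.31      # feet of water column per psi
INCHES_PER_FOOT = 12

def level_difference_inches(delta_p_psi: float) -> float:
    """Water level difference (inches) produced by a pressure drop difference."""
    return delta_p_psi * PSI_TO_FT_WC * INCHES_PER_FOOT

print(level_difference_inches(0.15))  # about 4.2 inches, the ~4 inches seen in the field
```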
The experience convinced me that symmetrical tower piping, flumes, and basin equalizer lines are critical for multiple cell tower installations because seemingly inconsequential differences in the pressure drop in the interconnecting piping can make huge differences operationally. I took a look at a more recent version of a similar situation in a magazine article I wrote for Consulting Specifying Engineer a while back that you can download from FDE’s commissioning resources website if you are interested.
My initial experience and the article I reference are focused on the pressure drops in the piping leaving the towers. But similar things can happen if the piping is not symmetrically arranged to the hot basins and distribution headers of multiple cell cooling towers (the pipe entering the tower).
Recognizing this has always been important, but I think it may be even more important now due to the energy conservation driven desire to:
If you don’t recognize the importance of uniform flow distribution over the tower cells and the role that piping configuration will play, then your energy savings measure may not deliver the anticipated savings. In fact, you could also cause damage to the cooling tower fill, significantly shortening its life span and setting up water quality control problems.
The discussion that follows will focus on towers that use a hot basin with orifices to distribute flow over the fill (generally induced draft cross-flow towers), which look like this.
But similar concepts apply to towers that use a manifold with spray nozzles such as induced draft counter-flow towers, which look like this …
… and forced draft counter flow towers, which look like this.
That said, there are induced draft cross-flow towers that utilize pressurized feed distribution systems. In fact the video on our website showing a pressurized distribution system in action is from just such a tower.
Incidentally, you can find more images of different types of cooling towers on the "What's That Thing" page of FDE's commissioning resources website if you are new to cooling towers. There are even a couple of pictures of a tower that uses water jets to induce the air flow through it instead of a fan, which is not a very common configuration.
Our focus in this blog post is going to be on how hot water is typically distributed over the fill in cooling towers and how flow reductions can lead to poor flow distribution if you go too far. The links below will jump you into the post to a particular topic of interest. The "Back to Contents" link at the end of each section will bring you back here.
Cross-flow, induced draft towers typically use gravity to distribute water over the tower fill via a hot basin with orifices in it. Here is a picture of what that type of distribution system looks like out in the field.
Each of the little round black things is an orifice with a deflector plate mounted below the hole.
Here is a close-up of an orifice and nozzle from above (left) and from the side (right) which shows the deflector. This particular nozzle is from a BAC tower.
Here are pictures of a nozzle from an Evapco tower (to the left) and a Marley tower (to the right, courtesy Ryan Stroupe of the Pacific Energy Center) which are similar, but slightly different from the BAC approach.
As you can see, while the details differ from manufacturer to manufacturer, the general idea is the same; to leave the hot basin, water has to flow by gravity through the orifice and then the stream of water hits the deflector plate and is splashed out over the fill in all directions.
This type of distribution system is typically found in counter-flow towers, both forced and induced draft. This action-packed video clip illustrates what that type of design looks like in operation.
I don’t have many pictures of these types of nozzles, but here is a close-up of some spray nozzles in a distribution header from a small forced draft cross flow tower.
The manifolds are sitting vertically leaning against the outside of the tower because they had been removed to repair damage that occurred when the condenser water temperature got out of hand and ruined the fill. If you look closely, you can see bits of the fill that had been picked up and circulated by the pump caught in the outlet of the nozzles.
The flow rate through the nozzle is a function of its diameter, the details of its shape (rounded edges on the opening, etc.), and the depth of water over it. Tower manufacturers can provide you with a set of "nozzle curves" that document the flow that will be achieved with different nozzle sizes and designs at different water depths. Here is an example of the curves for a Marley NC tower.
This is a similar example of the curves that apply to a Baltimore Air Coil Series 1500 and Series 3000 tower.
You can get similar curves for the spray nozzles that are used in towers that use a pressurized distribution system where the water is distributed from a set of headers over the fill with nozzles that are basically like shower heads.
If you have the nozzle curves, you can use them to assess the flow rate over the tower. The trick is to determine the nozzle that is in place and then measure the basin level. You can then look up the flow rate for the nozzle with that water level over it and multiply it by the number of nozzles in the basin to come up with the gpm going through that basin (assuming the nozzles are not plugged).
You then repeat that procedure for the other basins on the tower and add up the results to get the total flow for the tower.
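If you do not have the manufacturer's nozzle curves handy, a rough sketch of the math behind them looks something like the following. The discharge coefficient, nozzle count, and orifice size here are hypothetical placeholders; the real curves fold the details of the nozzle geometry into the published data, so use those when you have them:

```python
import math

G = 32.174  # ft/s^2, gravitational acceleration

def nozzle_gpm(orifice_diameter_in: float, water_depth_in: float,
               discharge_coeff: float = 0.6) -> float:
    """Approximate gravity flow through one orifice nozzle, in gpm.

    Uses the classic orifice equation Q = Cd * A * sqrt(2*g*h).
    The discharge coefficient of 0.6 is a textbook placeholder, not a
    value from any manufacturer's data.
    """
    area_ft2 = math.pi * (orifice_diameter_in / 12) ** 2 / 4
    h_ft = water_depth_in / 12
    q_cfs = discharge_coeff * area_ft2 * math.sqrt(2 * G * h_ft)
    return q_cfs * 448.83  # ft^3/s to gpm

def basin_gpm(n_nozzles: int, orifice_diameter_in: float, depth_in: float) -> float:
    """Total flow through a hot basin, assuming clean, identical nozzles
    all seeing the same water depth."""
    return n_nozzles * nozzle_gpm(orifice_diameter_in, depth_in)

# Hypothetical example: 40 nozzles, 1.5 inch orifices, 3 inches of water
total = basin_gpm(40, 1.5, 3.0)
```

Repeating the `basin_gpm` call for each hot basin and summing the results mirrors the hand procedure described above.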
Note that this is also a way to quickly assess if the flow over a tower or number of towers is balanced between the towers and between the basins in the towers. If the towers and orifices are all the same, then the flow is likely well balanced if the water levels in all of the basins are the same.
In contrast, if the level in one hot basin is higher than the others, it is probably getting more flow than the others. Similarly, a basin with a lower level than the others is probably getting less flow.
A couple of caveats:
One is that you have to have the nozzle curves or do the math on the orifice so you have the relationship between the water level in the basin and the flow rate per nozzle.
Another thing to take into consideration is that at low flow rates, the water level in the basin will probably vary, being higher near where the water enters the basin and lower at the most distant point relative to where the pipe connects to the basin. So, you may have to divide the nozzles up and assess them at different water levels or use an average water level.
And, of course, the nozzles need to be clean. It is not uncommon for flakes of metal to break loose from the condenser water piping and become lodged in the orifices, especially in older piping that has some corrosion accumulating in it. A sudden, radical change in water treatment can sometimes trigger this, as can operation at a new, higher flow rate.
Finally, towers can have weirs or cups in the basin to force water to flow preferentially through some orifices before flowing to others. More on that in a minute, but first, let's look at how the system works in the first place.
To explain how a hot basin type distribution system works, which is something you have to understand if you want to understand the constraints on varying flow over a tower, I am going to use images from the cooling tower model I use in class exercises.
If you want to work with this model directly, you can download it from the Cooling Tower Scoping Exercise page on our website. There are actually a number of retrocommissioning opportunities in the model so you can try your hand at scoping them out if you want to. You will find the answers and related information available for download on the web site also, as well as a scene guide that will help you navigate through the model.
Returning to our discussion, in a perfect world, to get uniform distribution over the tower fill from this type of distribution system, you would like to have the water distributed as a sheet of uniform depth across the entire basin area where the nozzles are located. The problem is that all of the water arrives in a pipe that will connect to the basin at a single point, concentrating a large volume of flow in a small area relative to the area covered by the basin. To solve that engineering problem, designers of distribution basins of this type use a combination of manifolds and the flow orifices to create the sheet of water and manage its depth.
For the discussion that follows, I will use the image below, which is from the model I mentioned. Note that the transparent grayish color is how I represented water; for instance, the area that the “A” points to is completely full of water. The two arrows on the “D” illustrate the basin water level by pointing to the bottom of the basin and the surface of the water in it.
Water enters the basin at the piping connection at Point A, which places it inside a triangular shaped manifold (the corrugated metal panel is the hypotenuse of the triangle).
Since the only way out of the manifold is the slot at the bottom (point "B"), the water is generally forced to spread out across the length of the basin. Thus, the manifold takes on the role of creating a sheet of water that covers the width of the basin.
Once the sheet of water is established, the size, number, and arrangement of the orifices (the round object at "C" is a typical orifice) take over and generally force the sheet of water to extend across the basin and control its depth.
One way to understand how this works is to imagine what would happen if there literally was no bottom to the basin to the left of the slot in the manifold at B. If that were the case, I think you can imagine that the water would simply cascade out of the manifold making a little water-fall of sorts onto the fill in the area where the “B” is. The fill further to the left (towards “C” and “D”) would receive little if any flow.
In contrast, if the basin had a bottom, as it does in the illustration, but it only had orifices at the far left (where the “D” is), then you can probably imagine that the sheet of water created by the slot would have to extend all of the way across the basin to the row of orifices.
In this case, if the sum of the cross-sectional areas of the orifices tended to be small relative to the cross-sectional area at "B", then water would tend to "pile up" in the basin; i.e., the depth or thickness of the sheet of water would tend to increase. But, because of how liquids work, as the depth of the water increased, it would provide more pressure to push water through the orifices, which would tend to increase the flow out of the basin.
And it would also tend to push back a little bit against the flow of water coming in through the slot. This would be minor; inches.w.c. of pressure created by the depth of water in the basin vs. ft.w.c. of pressure created by the pump. At some point, this would come into balance and a steady state condition would be established with the water at a fairly uniform depth across the basin.
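To make the balance point idea concrete, here is a rough sketch that solves the same hypothetical orifice relationship for the equilibrium depth; the discharge coefficient and nozzle dimensions are assumptions for illustration only:

```python
import math

G = 32.174  # ft/s^2, gravitational acceleration

def steady_state_depth_in(inflow_gpm: float, n_nozzles: int,
                          orifice_diameter_in: float,
                          discharge_coeff: float = 0.6) -> float:
    """Depth (inches) at which orifice outflow balances the inflow.

    Solves Q = N * Cd * A * sqrt(2*g*h) for h. Because depth varies with
    the square of flow, doubling the inflow quadruples the equilibrium depth.
    """
    q_cfs = inflow_gpm / 448.83  # gpm to ft^3/s
    area_ft2 = math.pi * (orifice_diameter_in / 12) ** 2 / 4
    velocity = q_cfs / (n_nozzles * discharge_coeff * area_ft2)
    h_ft = velocity ** 2 / (2 * G)
    return h_ft * 12

# Hypothetical basin: 40 nozzles with 1.5 inch orifices
d1 = steady_state_depth_in(250, 40, 1.5)
d2 = steady_state_depth_in(500, 40, 1.5)  # doubled flow, 4x the depth
```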
In the limit, for instance, imagine that the orifices at “D” are pin holes but that 500 or so gpm is coming into the basin. If that were the case, I suspect you would conclude that the basin would quickly overflow because more water could be delivered by the pump via the manifold than could leave via the orifices.
Going the other way, you can probably imagine that if the gap at the bottom of the manifold or the orifices (or both) are large relative to the flow rate, then you would tend to get a semi-circular water distribution pattern in the distribution basin, centered on the point where the pipe connects to the manifold.
The bottom line regarding this type of distribution arrangement is that how well it works depends on the flow rate into the manifold, the width of the gap, and the number and arrangement of the orifices in the bottom of the basin. Even if you get perfect distribution with a given flow rate, gap width, and orifice size and arrangement, if you vary the flow, there will come a point when the desired distribution pattern degrades to the point where some of the fill receives little if any water.
If you have taken a shower (and the shower was not a rain barrel shower), then you have a pretty good idea of how a pressurized feed cooling tower distribution system works. And if you have ever been in a shower with low water pressure, either due to undersized piping or due to a temporary demand for flow in a different part of the system, then you also have a sense of what can go wrong if you reduce the flow rate to the shower.
For shower heads and spray nozzles to create a useful spray pattern, they need to have some pressure behind them. If you drop the pressure, the pattern decays and then simply turns to a dribble as the water just sort of falls out of the nozzle or shower head.
The pictures below, which are snapshots from a short video on our website that shows a pressurized cooling tower distribution system in action, illustrate what happens to the flow pattern achieved as the over-all flow rate is reduced. This first picture was taken with the tower flow rate estimated to be in the range of 50-60% of its design flow.
Notice how the spray nozzles are making little umbrella-like spray patterns, generally covering the fill. Ideally, they should overlap to completely cover the fill, and I believe that is what we would have seen if we could have increased the flow rate to the design level.
Here is what the pattern looked like later in the week, when Gary, the chief engineer, switched over to a smaller chiller, which probably cut the flow rate to the tower to about 20-30% of its design value (image from a video courtesy Gary Walters).
As you can see, the distribution pattern is not nearly as good and I suspect that parts of the tower fill were starting to run dry, which introduces a number of issues that we will discuss at the end of this post.
The distribution pattern achieved by this system is also very much a function of the volume of flow delivered to the tower cell relative to its design volume, just as it was for the gravity feed system discussed previously. With the pressurized system, reducing the volume of flow to a given manifold reduces the pressure in the manifold, which causes the flow pattern to decay and not completely wet the fill.
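A quick way to see why the pattern decays: the pressure drop across a spray nozzle varies roughly with the square of the flow through it, so modest flow reductions produce large pressure reductions at the nozzle. A minimal sketch, assuming a simple square-law relationship:

```python
def relative_nozzle_pressure(flow_fraction: float) -> float:
    """Nozzle pressure relative to design, assuming pressure drop
    varies with the square of flow (a common rule of thumb for
    fixed-geometry nozzles, used here as a simplifying assumption)."""
    return flow_fraction ** 2

# At 50-60% of design flow the nozzles see roughly 25-36% of design
# pressure; at 20-30% flow, only about 4-9%, so the spray collapses.
print(relative_nozzle_pressure(0.25))  # 0.0625
```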
If the return/hot water flow to a cooling tower is low enough to result in the distribution system failing to fully wet the fill and as a result, some of the fill starts to run dry, a number of problems can emerge.
The resistance to air flow of wet fill is higher than that of dry fill. So, if the fill on a tower starts to run dry, there is a tendency for more of the air to go through that part of the fill vs. the part where the fill is wet.
Of course, it is the air flow over the wet fill that generates the cooling effect, so air that bypasses the wet fill represents fan power that is delivering no meaningful cooling. In other words, for the current heat rejection requirement, the tower is using more fan energy than it would need if the fill was uniformly wet and all of the air flow was generating cooling.
There are a couple of video case studies on our commissioning resources web site that illustrate this. One centers on a case where a combination of piping configuration and fluid mechanics results in a two cell cooling tower with a gravity feed distribution system spending a significant number of hours in the year with one cell that has little or no flow over it but has the fan running. In the image to the left, the cell toward the top of the picture, closer to the plant, has flow, but the cell in the foreground is virtually dry.
The other centers on a phenomenon that caused one hot basin served from a common header supplying a gravity feed system to run full (left image below) while the other basin ran dry (right image below), even though there were no valves on either side of the tee connecting the incoming header to the hot basins.
If the air velocity through the dry fill becomes high enough, it can cause the fill to flutter. If you have venetian blinds, you may have seen a similar phenomenon occur with the blades on a windy day. In any case, if the fill flutters too much, the movement can lead to cracking and premature failure of the fill.
Even if the cooling tower water quality is properly controlled, if the water flow is so low that on some portions of the fill, the stream of water totally evaporates before it reaches the cold basin, then the minerals in the water are left behind on the fill. (If you have an aquarium, you are probably familiar with this phenomenon.)
As the minerals build up, that will tend to make the problem worse. There are cleaning processes that water treatment companies can perform to remove the minerals, but this is at an added cost above and beyond the normal costs for cooling tower water quality management.
And, I know of at least one Owner who built a stainless steel pan slightly larger and deeper than a section of the fill in their tower and then bought an extra section of fill. That allowed them to rotate a section of fill out of the tower and soak it in a mild acid solution in the stainless steel pan to clean it, using the extra section to replace the section removed for cleaning. So, a do-it-yourself approach that probably saves some money but still takes some labor and an initial investment.
If you let the accumulation of minerals go unchecked, then eventually, this will happen.
This condition cost the Owner of the two cell tower associated with the picture (photo courtesy Sabastian St. John, St. John Consulting), which served a nominal 700 ton plant, about $50,000 to replace the fill.
That cost is something that likely would not have been required for another 5-10 years at least if the deposits had not built up. Some of that cost was because the tower was on the top of a high rise, so getting the ruined fill out and the new fill in was labor intensive. But even at half that price it’s still a pretty expensive problem.
A phenomenon similar to the one that leads to mineral accumulation on the fill of a tower at low distribution flows can also cause ice to accumulate during sub-freezing weather. The ice accumulation can be even more destructive to the fill than the accumulated minerals due to the weight of the frozen water. And the build-up can happen much more quickly.
As I mentioned at the beginning of the post, the desire to save fan and pump energy can cause us to implement strategies that will result in a reduction in water flow over a cooling tower cell. This is because one of the fan and pump affinity laws states that for a fixed system, the relationship between flow and fan or pump power is cubic.
That means that if I were able to cut the flow rate in half, then I would reduce the power required to one eighth of what it was originally (half of a half of a half).
So for example, if I decided that instead of running a steady flow of water through the condenser of a chiller irrespective of the load condition, I would vary the flow to maintain a constant head pressure, then the pump energy I would consume at part load could be drastically reduced, especially if I had a lot of part load hours.
Or, I may decide to use two tower cells when one chiller is running instead of one cell. Assuming a uniform distribution of flow to both cells, this would split the load equally between two cells. Since the capacity of a cooling tower is nearly linear with air flow, that would mean that with the load for one chiller split between two cells, the air flow rate would be half of what it would be if either cell was used by itself to reject the heat from the chiller.
In turn, the affinity law cited above (also known as the “cube rule”) would indicate that running the fan for each tower at half speed would reduce that tower fan’s energy to one eighth of what it was at full speed. Of course, you would be running two fans at that level instead of one at full speed, but in the end, you would have reduced the fan energy to one quarter of what it was with one cell alone serving the chiller (one eighth plus one eighth).
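The arithmetic behind the cube rule comparison above is easy to sketch:

```python
def relative_fan_power(flow_fraction: float) -> float:
    """Cube rule: for a fixed system, fan (or pump) power varies with
    the cube of flow, so power relative to design is the flow fraction cubed."""
    return flow_fraction ** 3

# One cell at full speed vs. two cells each at half speed:
one_cell = relative_fan_power(1.0)        # 1.0
two_cells = 2 * relative_fan_power(0.5)   # 2 * 1/8 = 1/4
```

So splitting the load over two cells, with each fan at half speed, nominally cuts the total fan energy to one quarter of the single cell case.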
But if some of the problems associated with low flow rates over cooling towers were to emerge, then I likely would not fully realize the savings anticipated. That could happen because the air flow short circuits would cause the fan to have to work harder to achieve the same amount of cooling as it would achieve if all of the air flow went past wet fill.
Or, it could happen because the accumulation of minerals or ice on the fill caused its untimely failure, placing a big hit on the operations and maintenance budget. Or, if the potential to accumulate minerals was recognized and addressed, it would result in higher on-going maintenance costs because of the added effort and procedures necessary to keep the fill clean.
On one recent project, I was working with a team in Marriott’s AEP program to assess the cost/benefit of spreading flow out over two tower cells to save fan energy, which was how the system was originally designed. But as a result of non-uniform flow distribution created by the piping geometry, the fill was starting to accumulate minerals because there were times when some of it was running dry.
This particular plant was in the mild San Francisco Bay Area environment. That meant that while it would occasionally see its peak load condition of nominally 700 tons over the course of the year, most of the time it was significantly below that. In fact, when the team developed the load profile from measured field data, it revealed that the plant likely spent 80% of its time at 140 tons or less and 90% of the time at 210 tons or less.
That meant that even if all of the flow was directed to one cooling tower cell, the fan energy most of the time would have been modest because the VFD equipped fan would not have to run very fast due to the high number of part load hours.
Certainly, additional savings were achieved by running the flow over both cells and further reducing the fan speeds. But a lot of times, the low speed limit came into play keeping the fan running at a minimum speed set by the need to maintain lubrication in the gear box, even though that much air flow was not required for cooling the condenser water.
So at a certain point, the control strategy could no longer optimize the fan speed to the load to capture the theoretical savings that were possible. In addition, since the fan speed was higher than needed, the fan had to cycle, which added some wear and tear to the system that would otherwise not have been there.
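The way the low speed limit caps the theoretical savings can be sketched as follows; the 40% minimum speed is a hypothetical value for illustration, not the actual setting from this project:

```python
def fan_power_with_floor(load_fraction: float, min_speed: float = 0.4) -> float:
    """Relative fan power when tower capacity is roughly linear with air
    flow (so required speed tracks load fraction) but the VFD cannot run
    below a minimum speed, e.g. to maintain gear box lubrication.

    Below the floor, the fan runs at the minimum speed (or cycles), so
    the cube-rule savings stop accruing.
    """
    speed = max(load_fraction, min_speed)
    return speed ** 3

# With a hypothetical 40% floor, a 20% load costs the same fan power as
# a 40% load: 0.064 of design power rather than the theoretical 0.008.
p20 = fan_power_with_floor(0.20)
p40 = fan_power_with_floor(0.40)
```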
The bottom line was that when the team took all of these things into consideration, the fan energy savings achieved by running two cells instead of one were more than offset by the added operation and maintenance costs, primarily the added cleaning costs required to minimize the build up of minerals on the tower fill. And as a result, their recommendation was that automatic control valves be added to allow one tower cell to be associated with one chiller.
This would increase the flow over the cell and shift the load to one tower fan. But operating in this manner would improve the over-all efficiency of the tower since its fill would run fully wet. And keeping the fill uniformly wet would eliminate the need for spending several thousand dollars on an annual cleaning process to remove the minerals that would accumulate as the result of the non-uniform flow distribution.
So by this point, hopefully, you understand how cooling tower flow distribution systems work and how at some point, the principles they are based on to provide uniform flow distribution run counter to our desire to save energy.
But, there are some steps that can be taken to modify the distribution systems so that the towers can accommodate a wider range of flow variations, at least for towers using gravity type distribution systems. In general, the manufacturers indicate that they can accommodate a 50% reduction in flow rate by using either weirs or cups.
Several manufacturers enhance the range of flow that their towers can accommodate by installing weirs in their basins. That was the case for this recently installed cross-flow, induced draft cooling tower, which is the source of the pictures that follow.
The air entered the tower from the left and right side in the context of the picture above and exited on top at the center, where the fan was located. There was a hot basin on each side of the fan that distributed water to the fill located below it.
Water was distributed from a piping connection that was made to a manifold in the center of each basin. The picture below, which I took while we were opening the basin covers, will give you a sense of that. You can see the connection to one manifold towards the top of the picture, where we are just getting the first basin cover open. I am standing on the basin covers for the second basin and you can see the connection to the center manifold in the bottom left corner of the picture.
The picture below is what I saw when the team I was working with opened the hot basin covers (what I was standing on when I took the preceding photo). The green pipe at the bottom of the picture below is the pipe in the lower left corner of the picture above. The basins all had weirs in them in addition to the flow distribution nozzles, just like the basin pictured below.
A weir is just a technical name for a dam and in the picture above, the weir is the metal fence in the basin that is forcing the water to the right side. That side of the tower is the entering face for the airflow associated with the fill below the basin in the picture and the fill below the basin I was standing over to take the picture.
That means that the weirs are acting to keep all of the fill on the entering face wet at low flow rates before allowing water to reach the nozzles serving the fill deeper into the tower. As the flow comes up, the water level on the entering side of the weir (the right side of the picture above) rises toward the top of the weir, wetting the fill immediately below it while the fill further into the tower is denied water.
This is an important element in ensuring tower efficiency because if part of the entering face of the fill is running dry, it will be much easier for the air to pass through it, which will reduce the air flow over the wet portion of the fill, which, of course, is where the evaporation that cools the condenser water is taking place.
If the flow rate continues to come up, the area blocked by the weir fills and then overflows, allowing the fill further into the tower to receive water. As a result, the entire entering face of the tower will have wet fill as long as the flow rate is high enough for the weir to have an impact. That in turn means that all of the air flowing through the tower will encounter wet fill.
At full flow, the water level is a fairly uniform 3-4 inches across the entire basin. The weir itself is about 2 inches tall.
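If you are curious about roughly how much water starts moving once a basin weir overflows, the classic rectangular weir relationship gives a ballpark number. The weir length below is hypothetical, and basin weirs are not calibrated measurement weirs, so treat this strictly as a sketch:

```python
def weir_overflow_gpm(weir_length_ft: float, head_over_weir_in: float) -> float:
    """Approximate overflow over a rectangular weir using the Francis
    formula, Q = 3.33 * L * H^1.5, with Q in ft^3/s and L, H in feet.

    A rough estimating tool only; the coefficient assumes a sharp-crested
    weir with free discharge, which a basin weir only approximates.
    """
    h_ft = head_over_weir_in / 12
    q_cfs = 3.33 * weir_length_ft * h_ft ** 1.5
    return q_cfs * 448.83  # ft^3/s to gpm

# Hypothetical example: 1 inch of head over a 6 ft long weir, i.e. the
# upstream level has risen about an inch above the 2 inch tall weir.
q = weir_overflow_gpm(6.0, 1.0)
```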
This cooling tower had variable flow condenser water and the control process was unstable at the time we were looking at it. So it was an opportunity to catch weirs in action and we shot a bunch of video. I am still working to put a narrated version of that together, but for now, I have uploaded the raw footage to our commissioning resources website if you are interested. The next few pictures are taken from the video and will illustrate in general what happened.
When I took the picture above, the flow rate had just reached the point where the water level on the entering side of the weir was going to over-flow and start directing water to the fill deeper into the tower. This is what it looked like when the flow had increased to the point where a significant amount of water was over-flowing and wetting the fill further into the tower. (Note that these images are from a different basin in the tower relative to the one in the picture above).
We had set the cover hold-down bolts upside down into the basin (center of the picture above) so we could use the threads as little level indicators. In the videos, if you pause them and count threads, you can tell that the levels are changing under different flow conditions.
In addition, because of the piping configuration, shown below …
… due to the dynamics of the flow through the tee, initially, as the flow came up, the level in the basins on the side of the tower served by the branch of the tee came up faster than the level in the basins served by the run of the tee, presumably because at relatively low flow rates, it took slightly more pressure to get the water to flow through the extra feet of pipe and the elbow on the run.
But as the flow increased and the dynamic losses through the tee started to have an impact, the level in the basins served by the run caught up with the level in the basins served by the branch and then eventually the level in the basins served by the run started to rise faster than the level in the basins served by branch, presumably due to the higher dynamic loss associated with flow through the branch of a tee vs the run of the tee.
The chart above, based on data from the ASHRAE Handbook, illustrates the tee pressure drop phenomenon.
Another approach to favoring one portion of the cooling tower fill over another is called a “cup”. Marley uses this approach and while I don’t have any pictures of my own of a tower that has been outfitted with cups (yet), Marley has a great YouTube video that illustrates what they look like and how they work.
The general principle is the same as for a weir; the cups are arranged so that water has to achieve a significant depth over the nozzles across the entering face of the fill before water is allowed to flow into nozzles serving fill deeper into the tower.
So, that was a lot of information I guess; I’m kind of prone to doing that as most of you know. But there are a couple of nuggets in there.
One nugget in terms of operations and commissioning is to always/regularly open up the basins on a tower and take a look at how well the flow is being distributed (or not). What you see may surprise you and be an indicator of an opportunity to improve things or a clue about why you are not achieving the fan energy savings you anticipated via your energy efficiency measure.
The other nugget is that for cooling towers in particular and open systems in general, it is a game of inches in terms of getting levels between the different basins to balance out. For instance, a pressure drop difference of 3 or 4 inches (about 0.15 psi) in the return piping leaving two cooling tower basins that are piped in parallel can mean one basin is making up water while the other is overflowing. If you are curious about that, I wrote an article in CSE magazine a while back that you might find useful; you can download it from our commissioning resources website.
So, I guess that ended up being a pretty long discussion for something that started out a while back as an answer to a question about air venting. But hopefully, the information is useful; if it is, you can thank Kam for asking the question.
Senior Engineer – Facility Dynamics Engineering
The picture above is a panorama I shot earlier this year in a recently renovated central chilled water plant serving a large high-rise office building in downtown San Francisco. The people in the picture are students in the Existing Building Commissioning Workshop Series that I help Ryan Stroupe of the Pacific Energy Center teach. The workshop is a year-long, hands-on class that is designed to allow the attendees to learn and apply existing building commissioning skills. We have been doing it for 13 years now; time sure flies when you are having fun. (Not to be confused with what frogs are known to say, which is “time sure is fun when you are having flies”.)
The point being that we are about to start the 14th year of the class, so if you are interested in a hands-on, field-based learning opportunity to help you develop existing building commissioning skills, then this may be something you want to consider.
The class is structured around the ten key commissioning skills. Our goal, as the instructors, is that by the end of the class, you have had a chance to try your hand at all of the skills by applying them to a project you work on over the course of the year in a building you have access to.
The work on your project is supplemented by hands-on lab sessions and field activities using the systems in the Pacific Energy Center, SketchUp models, spreadsheets and other software-based tools, and even an escape room.
If you are thinking of taking the class, it is important to realize that it is not a casual undertaking. In signing up, you are committing to:
So it is a significant commitment in terms of time and effort.
But most, if not all, students indicate the undertaking is well worth it, as you can see from the analysis below and the quotes in the updated flyer Ryan recently put together.
Don’t let the “Level of class’s fear of Ryan” data series scare you off. Much of the success of the class can be attributed to Ryan’s dedication to making it a learning experience of the highest quality. And experience has shown that for that to happen, we need a small group of dedicated people with a fairly strong basic skill set. Such a group allows us to focus on developing the more advanced skills with a low student-to-instructor ratio for the lab sessions.
For Class 8, the fear was really driven by the necessary process of winnowing down the group of 60 to 80 people who initially sign up for the class to about 20 to 25 people, a manageable size for the interactive, hands-on focused lab and field activities that commence in earnest around the third session. Ryan is an excellent judge of who is ready for the class and who is not, and he gently but firmly manages the task. So the fear was not so much of Ryan as it was of being winnowed out of the class.
The fact is that most of the time, we find ourselves at the appropriate number of students simply by attrition. During the first few sessions, we go through a number of exercises, including a basic skills quiz, an Excel skills quiz, and a mystery graph quiz, all of which are intended more as learning opportunities than tests. But the scores also help Ryan, and the students themselves, assess where they stand relative to what it will take to successfully complete the class. If you aren’t quite there yet in terms of your readiness, then it is in everyone’s best interest that you defer for a year and take advantage of some of the other learning opportunities at the Energy Center to become better prepared.
My point is that being winnowed once does not mean being banned forever. Ryan has put together a very comprehensive set of classes that are offered annually at the PEC and targeted at helping people prepare for the year-long class, in addition to providing general knowledge on various topics including Excel skills, HVAC basics, and common HVAC system types.
As a result, any time Ryan suggests that perhaps a student is not quite ready for the rigors of the class, he also suggests an appropriate course of study that the student can follow to be more fully prepared to succeed in the next class series. Several of our most successful students have followed this path; i.e., voluntarily or at Ryan’s suggestion, dropping out for a year, pursuing the suggested course of study, and re-enrolling the following year to deliver stellar project results.
The class is taught at the Pacific Energy Center, which is in San Francisco. And because of the hands-on, field experience-based approach taken for delivering the class, unlike some of the other classes I am involved with at the PEC, this is one you have to attend in person. So, obviously, living in the Bay Area would be a plus in terms of participating given San Francisco traffic. Sometimes, I think I get back to Portland faster than the folks who attend from Sacramento.
Having said that, two of our most enthusiastic participants last year took it upon themselves to travel all the way up from Southern California each month, never missing a session. A number of other students have done the same thing in other years.
Technically, the class is funded by public benefit money from the California Utility System rate payers. As a result, people living or working inside that system are considered first in terms of who can attend. But that does not mean that people from other areas are not considered. In past years, we have had students from Oregon, Illinois, and even New York City come and complete the class.
If you want to get a fuller sense of what the class will be like, consider attending the RCx 101 class on June 6, 2018, either in person or via webinar (select the “Internet” location when you register to take it as a webinar).
For one thing, the RCx 101 class is a prerequisite for taking the workshop series. And even if the year-long effort does not seem like something you are ready for or are willing to commit to, the RCx 101 class will get you up to speed on the existing building commissioning process and the basic skills you need if you want to work in that field.
In terms of gaining additional perspective on the class and existing building commissioning, once you have reviewed the Series 14 Class Flyer, you may also find some of the following resources to be of interest.
If you want to take it a step further, then you may even want to consider the following. For a number of years now, we have been working with 3D SketchUp models as a tool for providing a virtual field experience in the classroom and for self study. Pulling that off is taking some time, but in terms of self study, there are two offerings that you might want to explore.
In fact, my very first assignment when I entered the industry in 1976 was to draw a system diagram for a chilled hot water system serving a pharmacy school in St. Louis, Missouri. It was a great experience and I have been honing that skill ever since.
You can download the model, a Scene’s Guide, and an answer list from FDE’s commissioning resources web site and explore to your heart’s content. While not as much fun as an actual mechanical room, we hope that working with the model will make you more productive on your next visit to the real thing.
If you are new to SketchUp, the website also has a page where you will find instructions for obtaining the free SketchUp software you need to work with the model, along with downloads of legacy versions of SketchUp and links to tutorials that will expose you to the basics of working with it, which is all you need to do the exercise.
So, if you have found all of this intriguing, follow the link and register for the RCx 101 class. Worst case, you will have spent a day learning about the Existing Building Commissioning process and the skills it takes to work in the field. And you may just find yourself “hooked”, opening the door to a very interesting and rewarding career. At least that has been the case for me.
David Sellers, P.E., Senior Engineer
Facility Dynamics Engineering
Visit FDE’s commissioning resources website at http://www.av8rdas.com/
Visit my non-technical blog The Other Side of Life at https://av8rdaslife.wordpress.com/
As you probably have noticed if you follow the blog, I love finding old instruments in my travels. I have even been lucky enough to save a few of them from the dumpster, like this resonant frequency-based tachometer …
… or this Foxboro pneumatic proportional plus integral (PI) controller …
… or this 1970’s vintage central control panel (the state of the art about the time I entered the industry).
Just the other week, I was in a building down in San Francisco that had originally been built in the 1960’s by Bethlehem Steel as their headquarters on the West Coast.
That was of unique interest to me because my grandfather on my mother’s side was a welder for Bethlehem Steel in their Johnstown, Pennsylvania, plant around that time; who knows, maybe he made some of the welds in the steel for the building when it was being fabricated back then. (The picture is from one of the elevators; they feature different vintage photos related to the building’s history.)
The central plant in the facility had been recently upgraded from the original system. But when we got to the basement mechanical space, I was treated to a few more legacy control components, including this 2-pipe indicating temperature transmitter …
… and another central control panel.
The last picture is an interesting juxtaposition of technologies; the two monitors and the black box behind them (a PC) contain many orders of magnitude more information than the legacy control panel behind them. But I still have a soft spot for the legacy panel and was glad to see that it had been retained when the plant was upgraded.
My reason for bringing all of this up is that about a week ago, Steve Briggs, one of the other FDE engineers that I have the privilege of working and teaching with on occasion, sent me a picture from the field of an old seven day timeclock, the type of device we used to schedule equipment back in the “olden days”.
The device was simply an electrically driven clock with a dial that made 1 revolution every 7 days. Small, adjustable “trippers” were mounted on the perimeter of the wheel with little thumb screws and were shaped so that the side visible to you pointed to the time setting you desired and a little lever on the back of them would trip another lever (which is concealed behind the dial in this picture). The concealed lever, in turn, worked a mechanism that would open and close contacts, thus turning things on and off on a schedule.
There were typically two different types of “trippers” (some people called them “dogs” for some reason). On the visible side, they were different colors, usually black and silver so you could tell them apart.
On the back side, the shape of the lever was different, with the difference being that one type of tripper would move the concealed lever in a way that closed the contacts that the clock controlled, while the other type would move the concealed lever in the other direction, opening the contacts back up. The contacts, in turn, could be used to turn equipment on and off on a schedule. You can still find devices similar to this in the hardware store, targeted at controlling the lamp on your end table.
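For the software-minded, the tripper mechanism can be modeled as a toy program: each tripper is a point on the seven-day dial, and the contact state at any moment is whatever the most recently passed tripper set it to. Everything here is made up for illustration (a 6:00 start and an 18:00 stop each day); it is just a sketch of the logic, not any particular clock.

```python
# Toy model of the seven-day timeclock described above. Each tripper is an
# (hour-of-week, action) pair on the dial; the contact state at any time is
# whatever the most recently passed tripper set it to. Times are assumed.
TRIPPERS = sorted(
    [(24 * day + 6, "close") for day in range(7)]    # silver trippers: on at 6:00
    + [(24 * day + 18, "open") for day in range(7)]  # black trippers: off at 18:00
)

def contacts_closed(hour_of_week):
    """Return True if the contacts are closed at the given hour (0-167)."""
    state = False  # last tripper of the prior week was an "open"
    # Walk the dial from the start of the week; the last tripper passed wins.
    for tripper_hour, action in TRIPPERS:
        if tripper_hour <= hour_of_week:
            state = (action == "close")
    return state

print(contacts_closed(10))  # mid-morning on day 1: equipment running
print(contacts_closed(20))  # evening on day 1: equipment off
```

Note that, just as with the real clock, removing the trippers from the list leaves the contacts frozen in one state forever, which foreshadows the story below.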
The brass screws you see below the dial are one side of a number of contacts. In other words, if the picture were zoomed out a bit, you would actually see two rows of screws, with each vertical pair corresponding to an independent contact. In this case, I believe the last pair of screws on the far right would be the power connection where you landed the 120 VAC power to run the clock.
When I saw Steve’s e-mail and the picture, it immediately reminded me of my very first exposure to the concept of persistence of benefits. In other words, it’s one thing to intend to have a building or system do something like operate on a schedule by providing a time clock with a wiring diagram and control sequence that indicates that the clock should start or stop a piece of machinery or cause a certain function to happen at a certain time of day on a certain day of the week.
But it turns out that it is an entirely different thing to have that design actually work and remain in operation over time, something I really did not realize until I ran into my first time clock.
Specifically, in the fall of 1979 or so, Chuck McClure sent me down to do field work at Kent Library on the Southeast Missouri State campus.
Chuck founded McClure Engineering in 1953, the year before I was born. And, based on the recommendation of Dr. Al Black, a mentor and friend from my Park’s College days, Chuck had taken a chance on an Airframe and Power Plant mechanic with some engineering courses to his credit and hired me as an HVAC field technician, which is how I got my start in this industry.
The reason for the field work in Kent Library was that the University was interested in installing some sort of supervisory monitoring and control system to help them understand how their buildings were running from a central location, allow them to identify operating problems, and ultimately optimize the existing stand-alone control systems based on what they were observing. This, of course, was the forerunner of what we take for granted now in our Direct Digital Control (DDC) systems. But at the time, it was fairly cutting edge.
In those days, large buildings might have central control panels similar to the ones I illustrated above. (And sometimes, the gauges were even right). But very few if any sites with multiple buildings, like a college campus for instance, had all of the buildings networked together and visible from a central location. So, it was exciting to be involved in a project like this, even though at the time, I did not fully comprehend how big a deal it really was. But eventually, Kent Library would become my first design for what we now would call a DDC system (under the watchful eye of Chuck and Al of course).
At the time of the site visit, my goal was to develop field verified diagrams for the existing interlock wiring and pneumatic control systems serving the equipment in the library. Thus, I found myself opening up control panels, junction boxes, motor control centers and wireways tracing out colored wires and copper tubes and trying to figure out what they were connected to and what all of these funny, new to me, electrical relays, switches, and pneumatic gizmos did.
The original library was dedicated in 1939. But all of the equipment I was looking at had been installed in a 1968 project that had been done by Chuck himself. So, I had a pretty good resource at my disposal in terms of trying to understand the design intent of the facility.
One of the things that had attracted me to McClure Engineering when I interviewed there was that they had always had an interest in energy conservation and the responsible use of resources, even before the first energy crisis hit in 1973. In the course of the interview process, Bill Coad pretty much said to me what would eventually become his Energy Conservation is an Ethic paper and as a result, I left the interview inspired in a way that changed my life.
One of the reasons Chuck and the University had targeted Kent Library for the pilot for a supervisory control system was that it was fairly energy intensive due to the archival storage nature of the application. If you are playing the archival storage game, one of the things you are trying to do is hold very stable temperature and humidity levels and keep the air very, very clean. Avoiding damage by light and vibration are also important. It’s really pretty interesting (in a nerdy sort of way) and the ASHRAE Handbook of Applications contains an entire chapter dedicated to the topic.1
All of those requirements tend to mean that the HVAC systems in archival storage facilities need to run round the clock, especially in the rare book areas, even if nobody is in the facility. But if nobody is in the facility, then one thing you don’t have to do is ventilate; i.e. introduce outdoor air to manage the contaminates introduced into the built environment by human activity.
In climates like Cape Girardeau, Missouri, ventilation loads can be significant because it can be very cold and dry in the winter and very hot and humid in the summer, as illustrated by this bin data plot I created using the Pacific Energy Center psych chart tool.2
As a result, one of the things that Chuck had done in his 1968 control system design was include a time clock that would shut down the minimum outdoor air that was brought in to ventilate the building during the unoccupied hours. In other words, even though the systems could not be scheduled, the ventilation could, and Chuck designed the clock into the control system to perform that function.
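To get a feel for why that feature mattered, here is a back-of-envelope sketch using the common standard-air rule of thumb Q = 4.5 × cfm × Δh (Btu/hr) for the total load associated with conditioning outdoor air. The outdoor air flow, enthalpy difference, and hours are all numbers I made up for illustration, not Kent Library data:

```python
# Hypothetical estimate of the load avoided by shutting off minimum outdoor
# air during unoccupied hours, using the standard-air rule of thumb
# Q_total = 4.5 * cfm * delta_h (Btu/hr). All inputs are assumed values.
MIN_OA_CFM = 2000.0         # assumed minimum outdoor air flow, cfm
DELTA_H = 12.0              # assumed outdoor-vs-supply enthalpy difference, Btu/lb
UNOCC_HOURS_PER_DAY = 7.0   # roughly the 6 to 8 hours of shutdown mentioned below

q_btu_hr = 4.5 * MIN_OA_CFM * DELTA_H           # instantaneous avoided load
daily_ton_hours = q_btu_hr * UNOCC_HOURS_PER_DAY / 12000.0  # 12,000 Btu = 1 ton-hr
print(f"Avoided load: {q_btu_hr:,.0f} Btu/hr, "
      f"about {daily_ton_hours:,.1f} ton-hours per day")
```

Multiply a number like that by 365 days and 50-plus years of operation, and you can see why finding the trippers still in their envelope (as you will read in a moment) got everyone’s attention.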
Since I was using the original design documents and control submittals for the 1968 project to guide my field effort, one of the things I was looking for was that time clock because we planned to take over that function with the central monitoring and control system. Doing so would allow us to change the schedules by remote commands from the central location rather than by having to put out a work request to have one of the campus technicians visit the building and move the trippers around on the time clock every time the school was not in session or a schedule changed.
Having to do that every once-in-a-while doesn’t sound like a big thing until you consider the number of buildings on a college campus and that each building might have multiple time clocks in it. The overview above will give you a sense of that. Each of the little markers is a building. Kent Library is the yellow marker to the upper right of the clump of red markers at the lower left side of campus.
Eventually, one of the control panels I opened up contained the clock I was looking for. But the problem was that it looked just like the clock in the picture Steve sent to me; i.e. there were no trippers on it. That meant that currently, at the time of my visit, one of Chuck’s energy conservation features was not delivering the intended functionality.
But it was worse than that. There was a small manila envelope sitting in the bottom corner of the control panel. Even though it was not very large – maybe 1 inch by 2 inches – it was kind of heavy. I broke the seal and opened it up to discover that it contained the missing time clock trippers. There were 14 of them to be exact: 7 silver ones and 7 black ones.
That was enough to program one on and one off event for each day of the week, just as Chuck had specified. The problem was that, since they had never been installed on the time clock, the ventilation that Chuck had intended to be shut down for about 6 to 8 hours a day on weekdays (longer on the weekend, as I recall) had not happened, not even once, since 1968.
The good news there was that we had just found a significant opportunity to reduce the operating cost of the facility, which would definitely help justify our project. The bad news was that it should have been happening all along.
The incident certainly caught my attention, Chuck’s too. And as a result of the incident and other insights we were having as a company about how buildings were being operated, Chuck tapped into my A&P mechanic background and had me start developing checklists for some of our new projects.
We applied the lists as a tool to help us prevent problems like the one I had uncovered that day in Kent Library. We also made an effort to train the operators about the features of our designs, especially the ones that would help save energy.
And we worked with our clients to help them understand how to monitor the performance of the facility on a day to day basis by using average daily consumption analysis and supplementing their stand-alone control systems with remote monitoring systems like the one I was starting to work on for Kent Library.
As I look back on it now, I realize that a lot of the things we were doing to try to address the lack of persistence of the benefits of Chuck’s design are the same things that are suggested today in the commissioning industry to help ensure the persistence of the benefits of commissioning.
At the time, the commissioning industry was just starting to emerge in Canada and the United States. But since I had not heard about the commissioning industry yet, I thought that all we were trying to do was operate the building properly.
David Sellers, P.E., Senior Engineer
Facility Dynamics Engineering
Visit FDE’s commissioning resources website at http://www.av8rdas.com/
Visit my non-technical blog The Other Side of Life at https://av8rdaslife.wordpress.com/