The Perfect Economizer–Part 1–Laying Some Groundwork

An amazingly long time ago, I started a string of blog posts about economizers that included posts about:

All of this was leading up to a blog post about a diagnostic tool that I use that I call the “Perfect Economizer” concept.  And I almost got there, but not quite, until now.

Contents

For those who want to jump around, the following links will take you to the different topics.   The “Return to Contents” link at the end of each major section will bring you back here.

Introduction

As it turns out, the evolution of the ASHRAE Journal Engineers Notebook column that I help write led to an opportunity to do a column on the perfect economizer because it complements a column I wrote about a similar concept for assessing chilled water plant performance titled Modeling Perfection, which is illustrated below.

image_thumb11

In the case study associated with the Modeling Perfection column, I mentioned that the reason for the unnecessary chilled water use in the areas outlined in red and yellow above was dysfunction in the preheat and economizer processes and that the team I was working with used the “Perfect Economizer” concept to assess them.

The idea behind the concept is similar to the perfect chilled water plant concept: you create a chart that shows how you would expect a perfect economizer to function and then plot real data against it to see how closely reality matches perfection.  The lines of perfection are illustrated below.

image

That concept is the focus of my next column, which will run in May. 
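
For readers who like to see an idea expressed in code, here is a minimal sketch of how the “lines of perfection” might be computed.  The set points, return air temperature, and minimum outdoor air fraction below are placeholder assumptions for illustration only; a real system’s sequence of operation would supply its own values.

```python
def perfect_oa_fraction(oat, rat=75.0, lat_setpoint=55.0,
                        high_limit=70.0, min_oa=0.15):
    """Idealized outdoor air fraction vs. outdoor air temperature (deg F).

    Hypothetical logic for illustration:
      - Above the economizer high limit, fall back to the ventilation minimum.
      - Between the leaving air set point and the high limit, use 100% OA.
      - Below the set point, blend OA and RA so the mixed air temperature
        lands on the set point, but never go below the ventilation minimum.
    """
    if oat >= high_limit:
        return min_oa
    if oat >= lat_setpoint:
        return 1.0
    # Mixed air energy balance: mat = f*oat + (1 - f)*rat, solved for f
    f = (rat - lat_setpoint) / (rat - oat)
    return max(min_oa, min(1.0, f))

# Evaluating this function over a range of outdoor temperatures produces the
# "lines of perfection"; logged damper commands can then be scattered on top
# to see how closely reality matches.
```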

Defining Perfection

To be able to discuss the perfect economizer, one needs to define perfection.  Word count precluded me from doing that in the upcoming Journal column.  So I decided to do a few blog posts that will focus on defining perfection to complement the column.  I actually started down that road in the post titled Economizer Analysis via Scatter Plots–Linking Excel Chart Labels to Data in Cells.  I will build on some of the concepts I outlined there in what follows and in related subsequent posts.  This first post defines a few baselines so we are all “on the same page” for the discussion that will follow.

Not a New Idea

I am not at all asserting that I came up with this idea.  I believe you will find a version of it in the application software that Architectural Energy Corporation supplied for their data loggers in the mid-to-late 1990s.  And the (free) Universal Translator application (which has nothing to do with Star Trek but is still pretty cool) includes a module that uses this approach.

(Return to Contents)

The Relationship Between an Economizer Process and Building Pressure Control

As discussed in the Economizer Basics post I referenced above, economizer processes bring in outdoor air volumes that are above and beyond what is required to ventilate the building, blending this extra outdoor air (OA) with return air (RA) in order to minimize the need for mechanical cooling.  At its core, an economizer process is a cooling and temperature control process. 

Conservation of mass and energy dictates that to achieve success, we need to complement the economizer process with some sort of building pressure control process that provides a path for the extra outdoor air to exit the building.  That becomes the role of the relief system.  The obvious components in this system are the relief air dampers and, depending on the system configuration, the relief fan and/or the return fan.

The less obvious components are the imperfections in the building envelope, which can also become part of the relief system. Recognizing this can provide benefit in terms of comfort by managing infiltration, and in terms of energy, by minimizing the need for return or relief fan operation.

A Word about Return vs. Relief Fans

When I discuss this topic, I am frequently asked about the difference between a return and relief fan.  The images below are from a set of slides that I used in class to discuss the topic.

image

image

This link takes you to a bit more information in a previous blog post.

Economizers and Building Pressure Control Coordination in the Olden Days

In the olden days, for a simple, constant volume system that incorporated an economizer process, there was a fairly direct relationship between:

  • The position the outdoor air and return air dampers were driven to in order to control temperature, and
  • The position the relief dampers needed to be driven to in order to manage building pressure. 

Thus, it was not unusual for the same signal that was used for the outdoor air and return air dampers to be used to drive the relief air dampers, especially in pneumatic control systems.[i]

Those of us working in existing buildings can still encounter this approach.  Sometimes, a minimum relief position is also provided.  And sometimes, the modulation of the relief dampers is delayed to provide a bit of positive pressurization for the building. 

And for a simple constant volume system, it can be made to work, especially with the minimum relief and delay features mentioned above.  So if you have a very simple HVAC system, you can get away without a building pressure control process, even in modern times.

Economizers and Building Pressure Control Coordination in Modern Times

The variable air volume (VAV) systems we commonly use in modern times break the relationship between outdoor/return air damper position and relief air requirements.  Consider a VAV system with variable speed relief fans and a 58°F leaving air temperature (LAT) requirement, operating at part load on a day when the outdoor temperature is 58°F.

Let’s imagine the load in the building, and thus the supply flow rate, is 50% of the design value.  With it being 58°F outside, if everything is working properly, the outdoor air dampers will be commanded to the 100% outdoor air (0% return air) position.  But, since the load in the space is only 50% of the design load, the supply flow rate will be half of the design value.

If the relief fans are commanded to 100% speed because they are controlled by the same signal used by the outdoor air and return air dampers, they likely will cause the building pressure to become very negative because their full speed, design flow rate was likely set on the basis of the design supply flow rate.[ii]
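
The mass balance behind that statement is simple enough to sketch.  The flow rates below are hypothetical round numbers, not values from any particular project:

```python
def required_relief_cfm(supply_cfm, exhaust_cfm=0.0, pressurization_cfm=0.0):
    """Steady-state volume balance for a 100% outdoor air (economizer) hour:
    whatever comes in must leave via the exhaust fans, the relief path, or
    the allowance held back to keep the building slightly positive."""
    return max(0.0, supply_cfm - exhaust_cfm - pressurization_cfm)

design_supply = 40000.0   # cfm, hypothetical design supply flow
exhaust = 5000.0          # cfm, hypothetical toilet/hood exhaust

relief_at_design = required_relief_cfm(design_supply, exhaust)           # 35,000 cfm
relief_at_half_load = required_relief_cfm(0.5 * design_supply, exhaust)  # 15,000 cfm

# If the relief fan simply tracks the damper signal and runs at its design
# 35,000 cfm when only 15,000 cfm is needed, the extra 20,000 cfm has to
# come from somewhere -- and it comes from depressurizing the building.
```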

This was a common problem in the field when we started transitioning from pneumatics and constant volume systems to DDC and VAV systems. And it still shows up on occasion in our modern day world.

(Return to Contents)

ASHRAE Guideline 16

The final control elements in an economizer process are the OA and RA dampers and the sizing and configuration of them is critical to success. 

Similarly, the relief dampers are often the final control element for the building pressure control process, although variable speed relief fans that have simple back-draft dampers or are sequenced with modulating relief dampers can also come into play.

ASHRAE Guideline 16 – Selecting Outdoor, Return, and Relief Dampers for Air-Side Economizer Systems provides a lot of good information about how to select and configure these dampers. But it also specifically states that

this guideline does not cover air mixing

Thus, it’s important to recognize that using the guideline is a good first step in the economizer design process, but there are other things that also need to be addressed.

In addition, the guideline is focused on proper design, meaning that you are starting with a “clean sheet of paper”. If you are working with existing buildings, that “ship has already sailed” and the challenge is understanding what you have, how well it is functioning, and how to correct any deficiencies that you discover within the constraints of the existing equipment capabilities and the operating budget.

For example, all of the recommended control sequences in the guideline require that outdoor air flow be measured somehow. In my experience, this is surprisingly uncommon in existing building systems, especially in older facilities.

Still, understanding what constitutes a good design can help folks performing existing building commissioning, ongoing commissioning and facility operations understand the changes needed to improve performance and resolve any issues they identify.  And the Perfect Economizer concept is a useful way to identify the problems.

Ultimately, when we apply the “Perfect Economizer” technique to existing facilities, we need to be extra diligent when we start to work to improve the mixing process so that we do it in a way that still ensures the required ventilation rates are maintained.

(Return to Contents)

That’s it for now.  In my next post, I will get into damper sizing and configuration, which are part of the focus of Guideline 16 and which are key to achieving perfection for an economizer process.

David-Signature1_thumb_thumb_thumb

David Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/

[i]     And, since many legacy pneumatic systems were upgraded to DDC by handing three different control vendors a set of the building’s pneumatic control drawings and telling them to provide a bid for a DDC system just like it (and incidentally, we will be taking the low bid), you find DDC systems with a single pneumatic output driving the outdoor air, return air and relief air damper systems.

I am not at all advocating this design approach; there are obvious problems with it.  I am simply saying that just because you have a DDC system doesn’t mean you will not see this configuration and the potential challenges it can introduce.

[ii]   The relief flow would generally be set to the supply flow minus the ventilation air flow which will generally be removed by toilet and hood exhaust.  An allowance for building positive pressure may also be included, further reducing the relief air flow rate relative to the design supply flow rate.


Using a Formula to Adjust an Axis in Excel, Plus a Simultaneous Heating and Cooling Case Study

Author’s Note, 2022-02-01.  I discovered that earlier today, when I thought I had saved this post, planning to make some final additions and edits and add a table of contents when I got back from my walk, what I actually did was publish it.  So, if you read this before about 4:30 PM, there were some typos and the bottom line on the case study was not there yet.  My apologies; I will click more carefully next time.

Preface

I want to preface everything that follows by saying that while the case study I share is from my own experience, I did not develop the technique I will share.  Rather I discovered it as the result of an internet search in the form of a very generous and well written blog post by a guy named Mark on his Excel Off the Grid web site. 

I’ll be linking to some specific content there as I move through this post, in which I use a case study from a past project to illustrate applying Mark’s technique.

And thanks also to Thy, a student from one of my classes, who asked the question that led to the post and “commissioned it” by taking my first draft and using it successfully to implement the feature in a spreadsheet of his by following my suggested directions.

Contents

These links will jump you around in the content to a topic of interest.  The <Return to Contents> link at the end of each major section will bring you back here.

A Bit of Background

If you do existing building commissioning work, you spend quite a bit of your time looking at time series data.  Sometimes, you are interested in the overall pattern for a long period of time, like this.

Logger Data Full Period CC LAT

For the project behind the data above, I was using steam condensate pump cycles as a proxy for steam consumption (the red data stream), a technique Chuck McClure taught me years ago using an alarm clock.  I was comparing the pump cycles to the operation of a steam preheat coil in a large laboratory air handling system, using the leaving air temperature from the coil as a proxy for coil operation (the orange data stream).

The reason that the condensate pump line looks like a red band with occasional spikes vs. a fine red line is that relative to the range of the time axis, there were a zillion pump cycles.  In other words, if we were to zoom in, we would discover that the red band was actually many, many, many spikes spaced closely together, with each spike representing one pump cycle.  In fact, that is what I needed to do in order to assess the number of pump cycles relative to the leaving air temperature spike.
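
As an aside, the arithmetic behind the pump-cycle proxy is simple enough to sketch.  The constants below are round-number assumptions for illustration, not values from the project; in the field, the gallons-per-cycle figure comes from the receiver geometry and the float switch settings.

```python
LB_PER_GALLON = 8.33      # water
H_FG_BTU_PER_LB = 960.0   # approx. latent heat of steam near atmospheric pressure

def steam_load_mbh(cycles_per_hour, gallons_per_cycle):
    """Rough steam load (MBH, thousands of Btu/hr) implied by condensate
    pump cycling.  Each pump cycle returns a known condensate volume, and
    every pound of condensate represents a pound of steam condensed by the
    loads upstream of the receiver."""
    lb_per_hr = cycles_per_hour * gallons_per_cycle * LB_PER_GALLON
    return lb_per_hr * H_FG_BTU_PER_LB / 1000.0

# e.g., 12 cycles per hour at 20 gallons per cycle is roughly 2,000 lb/hr
# of steam, or on the order of 1,900 MBH
```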

<Return to Contents>

Diagnosing a Dysfunctional Preheat Process

There will be more on zooming in a minute, but before going there, I thought I would explain what was going on in the system behind the data.

My initial view of the data, shown above, revealed that I had in fact captured the dysfunctional operating pattern I suspected to exist based on my field observation when I walked the project several days prior.  More specifically, I suspected something was amok when I walked by the unit on a 60ish°F day and noticed that the preheat coil was active along with the cooling coil.  

As a result, I deployed a few data loggers the next day and the pattern above is what I found as Mother Nature performed a natural response test on the system [i]. Note how the preheat coil leaving air temperature seems to vary vs. hold a fixed set point and also how on occasion, it jumps up and runs at 90+°F for periods of time. 

This was an issue because the system was set up to hold a fixed 55°F leaving air temperature, and it was doing a very good job of that (the blue data stream).   But, since it was a 100% outdoor air system and since the preheat coil was ahead of the chilled water coil, the only time the preheat coil should have been active was if the outdoor temperature dropped below the desired 55°F leaving air temperature set point.  And then, it should have not heated things up any higher than the desired leaving air temperature.

Since the preheat coil was the major load on the steam system for the facility, I anticipated that the condensate pump cycles would be higher during the periods of time when the coil was delivering a leaving condition in the 90°F range, which would tend to validate my proposed approach for developing the system load profile since there was no steam meter.

But to verify that, I needed to zoom in on one of the dysfunctional cycles, which brings me to the point of this post.

<Return to Contents>

Changing the Range of a Time Series Axis in Excel

Excel and Dates

One of the things that is not immediately obvious when you start working with time series charts in Excel is how Excel represents a date and time; at least it wasn’t for me.  It turns out that Excel represents date and time as a serial number that increments by 1 each day, with the count starting at January 1, 1900.

That means that:

  • January 2, 1900 would be represented as “2”
  • January 1, 2022 would be represented as 44,562, since that is how far along it is in Excel’s day count.
  • One hour would be represented by 1/24 ≈ 0.0417.
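
For anyone who wants to check a serial number without opening Excel, the same arithmetic can be reproduced with Python’s standard datetime module.  The 1899-12-30 epoch below is the usual offset that makes the math agree with Excel for dates after March 1, 1900 (it absorbs both Excel’s 1-based count and its well-known phantom February 29, 1900):

```python
from datetime import datetime

# Offset that reproduces Excel's 1900 date system serial numbers
# for dates after March 1, 1900
EXCEL_EPOCH = datetime(1899, 12, 30)

def excel_serial(dt):
    """Serial number Excel displays for a given date and time."""
    delta = dt - EXCEL_EPOCH
    return delta.days + delta.seconds / 86400.0

print(excel_serial(datetime(2022, 1, 1)))        # 44562.0
print(excel_serial(datetime(2022, 1, 1, 6, 0)))  # 44562.25 (6 AM = 6/24 of a day)
print(1 / 24)                                     # one hour as a fraction of a day
print(1 / (24 * 60))                              # one minute: ~0.000694
```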

I go into more detail about that in a blog post titled Setting Time Axis Values in Excel.  But once I understood the way things worked, I made myself a little “sheet cheat” that allowed me to quickly come up with the values I needed to format a time series axis to the specific range I wanted to look at.

<Return to Contents>

Setting the Date Range in an Excel Chart

Since I wrote that post, I have discovered that if you type a date and time into the “Maximum” and “Minimum” fields in the axis format dialog box (the cells with the red arrows pointing to them in the image below) …

Format Axis r

… then Excel automatically makes the conversion for you.  I’m not sure if that was always there and I just missed it or if it’s a feature that showed up sometime after 2002 (when I built the first version of my cheat sheet).  

But so far, I have not figured out a way to set the major and minor units (the fields with the blue arrows pointing to them in the image above) without “doing the math” to figure out, for instance, the decimal value that represents 1 minute if the decimal value of 1.0 represents 1 day.

So, the little cheat sheet spreadsheet I built to help me come up with the values for the minimum and maximum dates and the major and minor units on my charts still comes in handy.

Time Values

If you want a copy of it, you can download it here.

<Return to Contents>

Zooming In the Old Fashioned Way

Having said that, if I wanted to zoom in on a portion of the chart to take a closer look at a pattern – for example, zoom in on one of the errant events above to see what the condensate pump cycles looked like during that period of time …

Four Hours 1

… then, up until I found Mark’s blog post, I would have to go into the axis format dialog and make the change.

In the image above, I zoomed in to show what was happening from 12 AM to 6 AM on October 10, 2009.  This revealed what I hoped I would see: that the condensate pump cycles did in fact increase as the steam load increased.  In fact, occasionally both of the pumps serving the receiver needed to run, which is what caused the occasional higher than typical spike.  All of this validated my proposed approach of using the pump cycles to come up with a load profile.[ii]

Since I often wanted both images for a report, I would typically make a copy of the chart and then change the axis so that I had both views available.  If you are doing this a lot, it can become somewhat tedious and time consuming [v].  And, the file size can start to become significant if there are a lot of data points in each chart.

As a result, I would occasionally find myself wondering if there was a way to change the maximum and minimum values for a chart’s axis based on parameters that you entered in cells in the spreadsheet that would then, somehow, magically perhaps, be referenced by the appropriate fields in the “format axis” dialog.

My more observant readers may have noticed that the dates and times I mention above show up in the yellow cells in the image and could be thinking:

I wonder if those cells have anything to do with where he is heading?

The answer is:

They do!

<Return to Contents>

Introducing User Defined Functions

It turns out that if you know how to program in Visual Basic, you can do just that.

Or, in my case, it turns out that if you know how to do an internet search for something like …

Excel change chart axis automatically from cell values

… you will discover generous people who are good writers with blog posts that explain how to do it and also share the code required to do it and tell you how to make it all happen.

The trick is that you create a thing called a User Defined Function, or UDF, that, when you execute it, calls some VBA (Visual Basic for Applications) code that causes the magic.  While I aspire to write VBA, I am in my infancy in that regard.  But thankfully, Mark does that for us in his Excel Off the Grid column titled Set chart axis min and max based on a cell value.

It really is well written, so I am not going to regurgitate it here; you can follow the link above to find all of the details and copy and paste the required code from there.

But I will provide some screen shots of my implementation of it in the spreadsheet we have been looking at to clarify its application in that context and to address a few things that were questions for me as I added the functionality to my copy of Excel.

<Return to Contents>

Using a UDF to Change the X Axis Minimum and Maximum

In the image below, I have clicked into the cell highlighted in orange and you can see the UDF in the formula bar, where it says =setChartAxis("Data","Chart 2","Min","X","Primary",H35).  (The red arrows point to the two spreadsheet locations I just mentioned.)

X Min

“SetChartAxis” is the UDF.  It acts just like any other Excel function once you create it.  For instance, if I open a spreadsheet, click in a cell, type an “equal” sign, and then “if(”, Excel kind of says:

O.K.  I have a formula that has that name and here it is along with the function arguments you need to provide as inputs if you want to use it.

=if

If I click on the little fx symbol by the function bar, a dialog box will open up so that I can enter the necessary function arguments into data fields.

=ifarguments

Of course, if I use the formula a lot, I probably can remember them and just type them into the formula bar in the correct order, separated by commas.  But the dialog box sure is handy for less often used formulas (and/or as you age and find your memory is not quite what it used to be).

Assuming you don’t have the code associated with the “setChartAxis” UDF installed on your computer (more on how to do that in a minute), then, if you were to click into a cell in a spreadsheet on your machine and start typing setChartAxis, you would get a list of built-in Excel functions that have the word “set” in the name, like “OFFSET” and others, depending on the plug-ins you have installed.  But “setChartAxis” would not be one of them.

In contrast, since I have added the code for the UDF “setChartAxis” to my copy of Excel, when I click on a cell and start typing “set …” it shows up as a function I can select along with all of the other functions installed on my machine that have “set” in their name.

=setchartaxis

Thus, I can pick it and provide the arguments it asks for …

clip_image008

… and the UDF does the “magic” for you.

Here’s what those arguments look like for the chart I am using as an example.  You will find a copy of it on the same webpage as the time value conversion spreadsheet tool if you want to download a copy to work with.

=setchartaxisexample

So basically, the formula says:

Set the minimum value for the primary, X axis, of Chart 2 on sheet Data to the value entered in cell H35.

The formula is looking for a numerical value (vs. a date), so, to make it easier to work with, I have cell H35 formatted to display the numerical value associated with a date and set it equal to the value in cell I35, which I have formatted as a date and time.  That allows me to enter the date and time in I35 and have it show up as the associated numerical value in cell H35, which is then referenced by the “setChartAxis” UDF.

<Return to Contents>

Not Just for the X Axis

You can use the UDF for the other axes on the chart as well.  For example, to really understand how well the control loop is tuned, it would be nice to zoom in on the burble in the blue line that happens when the preheat coil discharge temperature spikes.  To do that, I used the “setChartAxis” UDF but set it up to adjust the maximum and minimum on the secondary Y axis based on spreadsheet cell parameters.

 Secondary Y

And, as you can see, by zooming in, I can now tell that the control loop response exhibits the somewhat classic quarter decay ratio associated with a well tuned PID loop. [vi]

I can also quickly re-scale the axis again to let me contrast both the response and the upset itself. (Note that I hid the pump amps data series to allow me to focus on the other two data streams).

Upset2

You will also note that I provided similar functionality for the primary Y axis (the center cluster of orange and yellow cells) by simply copying and pasting the cell block then editing the UDF arguments as needed.

<Return to Contents>

Addressing a Few Questions that May Come Up

So, a couple of points.

  1. To find out the name of the chart, just click on it and it will show up in the cell name window next to the formula bar (“Chart 2” below next to the fx bar, right below the “snap to grid” quick access button on the left).

Chart Name

  2. The UDF is a Visual Basic module, so you need to have the “Developer” tab available in Excel to do this.  I think that sometimes, Excel can be installed without this enabled, but I believe it is a standard feature and you just need to turn it on, which is described here, in case you don’t see the “Developer” tab in your ribbon.[vii]
  3. The blog post I referenced above is (to my way of thinking at least) really well written, and I think that if you page down to the “Creating the User Defined Function” topic, you would have no trouble setting it up; the code you need is included, so it’s really just a matter of copying and pasting it into the right place in a VBA module you create.
  4. If you do that, it will only be available in the spreadsheet you created it in.  But you can make it available for all of your spreadsheets by installing it as an Add-In.  That is described further down in the post under the “Making the function available in all workbooks” topic, which links you to this page after telling you what you need to do first.

<Return to Contents>

Back to the Case Study

As I indicated in an endnote previously (see end note [iv]), the somewhat wild temperature excursions seemed to be a freeze protection strategy gone amok.  

But when they were not occurring, the preheat coil still did not hold a leaving air temperature at a fixed value, causing the chilled water coil to do unnecessary cooling.  The reason for this was that the face and bypass damper system that was intended to control the leaving air temperature was out of adjustment and was always allowing some air to flow through the heating elements, even if no additional preheat was required.

Integral Face and Bypass Coils

The slides below illustrate the type of face and bypass damper system that was in place in the system we are discussing. 

image

image

image

image

This type of assembly is technically called an “integral face and bypass” coil.  But it is also frequently referred to as a “Wing” coil, since one of the major manufacturers at one point in time was the Wing Company.  It’s kind of like calling every box of facial tissue “Kleenex”: facial tissue is a paper product produced by many manufacturers, and Kleenex is simply a common brand of it.

The pictures that follow are of the  actual hardware.  The assembly shown on the left uses hot water for the heat source.  The picture on the right uses steam and is the actual preheat coil associated with the case study.

image

image

image

<Return to Contents>

Why Integral Face and Bypass?

The design of this type of coil is intended to enhance its ability to resist freezing by:

  • Always keeping the heating elements active with the control valve wide open.  For water coils, this means design flow will always be moving through the coil (as long as the pump serving the system is running).  For steam coils, this means that the coil will be able to draw as much steam as needed and that the steam in the elements will be at or near the saturation pressure and temperature associated with the distribution system.[viii]
  • Orienting the heating elements vertically in steam-fired coils to ensure rapid condensate drainage via gravity.
  • Locating the supply and return headers outside of the air stream, which minimizes the potential for condensate (water) to be exposed to sub-freezing conditions.

<Return to Contents>

Things that Can Go Wrong (a.k.a. EBCx Opportunities)

So, the good news is that a coil of this type is less likely to freeze.  But there are a couple of down sides.

One is that the actuation mechanism for the clam-shell doors is somewhat complex.  Without regular maintenance and lubrication, it can fail, which, as we saw in the coil in the example, can cause a significant energy waste.

Another opportunity is related to the control of the steam valve.   Even if the clam-shell dampers are fully closed, there is significant heat transfer, primarily by radiation, from the live, saturated steam inside the tubes.  For instance, if the steam was at atmospheric pressure, the temperature would be 212°F. 

As a result, there can be a significant parasitic load associated with this type of coil.  To prevent that, it is desirable to close the steam valve when preheat is no longer required.  It is not uncommon for this contingency to go unrecognized.  For example:

  • A value engineer who is perhaps not totally familiar with HVAC processes and how this type of coil works may eliminate the control valve from the project as an unnecessary first cost, thinking it is not needed since there are dampers provided to control the leaving air temperature.
  • A control system designer who is not familiar with the specifics of how this type of coil operates may sequence the operation of the valve with the operation of the clam-shell dampers.  While this may tend to alleviate the parasitic load to some extent, it likely compromises the “freeze-proof(ish)” aspect of the design.

As a result, when I encounter this type of coil in the field, I just about always flag it as a target for further investigation.  Frequently, one or more of the opportunities I mention above exist and I can save some steam (and maybe a frozen coil or two). 

And frequently, as was the case for the coil in the example, savings show up at the cooling plant in addition to the steam plant because of the unnecessary simultaneous heating and cooling.

<Return to Contents>

How Come Nobody Noticed?

Some readers may wonder why nobody noticed this problem.  After all, it kind of jumps out at you when you look at the trends I have shared.  

A big part of the reason was that the control system was somewhat antiquated and unreliable.  Sensors had failed, graphics could take minutes – like 5 or more minutes – to update (assuming they didn’t “crash” in the process), and sampling speeds faster than once every 15 to 30 minutes were not possible due to the network configuration.  As you may surmise, those are the reasons I was using data loggers to assess the system instead of the trends.

Because the chilled water coil masked the preheat dysfunction and the lab zones were constant volume pneumatic reheat  zones with repairs undertaken when an occupant complained, a lot had to go wrong before it would show up as an actual comfort problem.

The operating team itself –  like most teams these days – was spread really thin, trying to operate and maintain a complex full of mission critical facilities with a handful of people.

<Return to Contents>

Leveraging the Savings Potential

The good news was that once the problem was recognized, it opened the door for improvements.   Due to …

  • The size of the system (nominally 70,000 cfm), and
  • The 24/7, constant volume, near 100% outdoor air operating cycle associated with the laboratories it served

… the savings potential associated with repairing the errant preheat process was very significant: tens of thousands of dollars annually.  The savings could be accrued by simply repairing the damper linkage system and ensuring that the steam valve fully closed when preheat was not needed.
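
To put a rough number on that kind of opportunity, a back-of-the-envelope calculation is sketched below.  The average overheat and the unit energy costs are placeholder assumptions, not the project’s actual figures; the 1.08 factor is the familiar sensible heat constant for air at nominal conditions.

```python
def annual_simultaneous_ht_clg_cost(cfm, avg_overheat_degf, hours=8760.0,
                                    steam_cost_per_mmbtu=12.0,
                                    cooling_cost_per_mmbtu=8.0):
    """Order-of-magnitude annual cost of unintended preheat in a constant
    volume, ~100% outdoor air system.  Every degree of unwanted preheat is
    paid for twice: once as steam, and again as chilled water removing the
    same heat.  Q = 1.08 * cfm * delta-T gives the sensible load in Btu/hr."""
    btu_per_hr = 1.08 * cfm * avg_overheat_degf
    mmbtu_per_year = btu_per_hr * hours / 1e6
    return mmbtu_per_year * (steam_cost_per_mmbtu + cooling_cost_per_mmbtu)

# 70,000 cfm with an average of just 2 degF of unnecessary preheat, 24/7:
# about 1,325 MMBtu/yr of steam (plus the matching cooling load), which is
# roughly $26,000 per year at the assumed unit costs
```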

Recognizing that there was more to the issue than the immediately obvious root causes, the Owner elected to leverage the savings to upgrade the control system to a current technology system, including:

  • The sensors necessary to perform diagnostics, not just control the system,
  • Trending and graphic capabilities that would deliver meaningful information to the operating team in a timely fashion, and
  • DDC controls at the zone level, which would allow the operating team to much more quickly identify operating issues that are typically masked by the insidious nature of HVAC processes.

And like most energy savings projects, the results of this project also moved the Owner down the road towards their long term carbon reduction goals.

So there you have it;  a cool little Excel trick generously shared by Mark on his Excel Off the Grid blog along with a little case study of a common existing building commissioning opportunity.


David Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/

[i]    If you want to know a bit more about natural response tests vs. forced response tests or functional testing in general, then you may find a series of video modules I recorded on the topic to be helpful.

[ii]   It also revealed that the control loop for the chilled water valve was pretty well tuned.  Notice that whatever caused the errant change in set point [iii], initially there is a big jump in steam flow and leaving air temperature, and then a continued increase until the process stabilizes.  The leaving water temperature from the chilled water coil hunts around a bit trying to “find itself”.   But then it settles in;  more on that a bit later in the post.

[iii]   Can you put an end note on an end note? [iv]

[iv]    Assuming you can;  we never really figured out why the program running the system was set up to cause the set point jump.  But the trends indicated it was very predictably tied to the outdoor temperature and was triggered when the outdoor temperature dropped below 38°F and released when the outdoor temperature went back above 40°F.  And it was not really a set point change;  rather, the valve was simply driven fully open.  Thus, our conclusion was that it was a freeze protection strategy gone amok.

[v]    But not as tedious and time consuming as in the olden days when we would have had to transcribe the data from a strip chart and manually plot it on graph paper.  So count your lucky stars you young people out there.

[vi]   The slide below illustrates what the term quarter decay ratio means.

image

The pattern was the result of the work of John G. Ziegler and Nathaniel B. Nichols, who developed a very common tuning technique for PID control loops.  If you want to know more about PID, this link will take you to a webpage that contains some resources, including the original paper they published and an interview with John Ziegler himself.

[vii]   I suppose that there may be some corporate IT policies that would prevent you from turning on the developer tab feature without someone from IT allowing you.  But I have not had that experience and only know about turning it on because I was helping someone once and it was not there and I poked around and found the link above.  It’s always been on in any copy of Excel I have had.

[viii] There is a very subtle thing that can go on in steam fired heat exchangers due to the fact that the steam side is a saturated system.  Depending on the operating conditions, it is possible that the pressure inside the heat exchanger will be sub-atmospheric unless vacuum breakers are installed on the heat exchanger. 

That means that for condensate to drain out of the heat exchanger, or more specifically, to an open return system that is above atmospheric pressure, condensate has to accumulate inside the coil to a depth that is high enough to create the head necessary to cause the condensate to flow out of the coil.  If the condensate accumulates in a portion of the coil that is exposed to the air stream, and the air stream is below freezing, then you can freeze the coil;  bottom line, steam coils can freeze.
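The depth of condensate involved is easy to estimate from the pressure difference, since a column of water produces about 2.31 feet of head per psi.  The 2 psi differential in this sketch is a hypothetical value for illustration.

```python
# Sketch of why a sub-atmospheric steam coil backs up condensate.  If the
# pressure inside the coil is below the pressure in the condensate return,
# a column of condensate has to stand in the coil to make up the difference.
# The 2 psi differential is a hypothetical value for illustration.

FT_WC_PER_PSI = 2.31          # feet of water column per psi

dp_psi = 2.0                  # coil pressure below the return (assumed)
required_head_ft = dp_psi * FT_WC_PER_PSI

print(f"Condensate must stand about {required_head_ft:.1f} ft deep "
      "before it will flow out")
```

A few feet of standing condensate is easily enough to put water into tubes that are exposed to a below-freezing air stream.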

By keeping the steam valve wide open on an integral face and bypass coil and relying on the damper system to control discharge temperature, it is significantly less likely that the conditions inside the heating elements will be sub-atmospheric.  This, combined with the vertical tube arrangement and locating the headers outside of the air flow path helps ensure that this type of coil is fairly freeze-proof.


Happy Solstice

2021-12-26 – Author’s Note:  Yesterday, I realized that I had not fully taken into account how a pin hole camera works when I developed the SolarCan pictures.  The image in a pinhole camera is upside down relative to reality.  

When I started working with my images, I simply rotated them 180°;  sort of an intuitive reaction I suppose, since I instinctively knew the sun should rise and then fall over the course of the day.  I was so excited about seeing the sun’s path that I did not initially realize that things were backwards;  on my backyard photo, my neighbor’s house is on the wrong side and in the Neskowin photo, Neskowin Creek disappears on the wrong side of the photo.

Rotating the image did in fact put the bottom at the top.  But it also put the left side of the image to the right, making it backwards relative to reality.  What I actually needed to do was flip the image along the horizontal axis, which makes the bottom the top, but keeps left to the left and right to the right. 
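The rotate-versus-flip distinction can be seen with a toy “image” array;  this is just a sketch to illustrate the geometry.  Rotating 180° flips both axes, so the left/right content ends up mirrored, while flipping about the horizontal axis only swaps top and bottom.

```python
# Toy 2x2 "image" to show rotate-180 vs flip-about-the-horizontal-axis.
img = [[1, 2],
       [3, 4]]                # 1 = top-left of the scanned image

# 180 degree rotation: reverse the order of the rows AND reverse each row
rotated = [row[::-1] for row in img[::-1]]

# Flip about the horizontal axis: reverse the order of the rows only
flipped = img[::-1]

print(rotated)   # [[4, 3], [2, 1]] -- upside down AND mirrored
print(flipped)   # [[3, 4], [1, 2]] -- upside down, left/right preserved
```

In most photo editors this is the difference between “Rotate 180°” and “Flip Vertical”;  only the latter puts the pinhole image right without mirroring it.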

So, I have uploaded correctly oriented images in this revised post.

A friend called me yesterday to wish us a happy solstice.  I had an appointment I needed to head out to, so we only talked briefly.  But in doing that, I mentioned a solstice related “toy” I had found and said I would e-mail him about it after I returned home with more information.  But as I was starting that process, I realized that it would be kind of a cool thing to share for my semi-traditional ”holiday post”.   So here we go, and thanks to Sabastian for inspiring this.

The Shortest and Longest Day of the Year

Tuesday was the winter solstice;  the shortest day of the year,  and the path of the sun was at its lowest point in the sky relative to the horizon.  As most, if not all of you likely know, there is also a summer solstice, which falls on or about June 21st.  That, as you might expect, corresponds with the longest day of the year and the path of the sun is at its highest point in the sky.

The Equinox

Between those two extremes lie the two equinox (equinoxes? equinoxi?  equineex?,  not sure about the plural, but the spell check favors equinoxes and the others sound like part of a Gallagher routine or something).  Anyway, each day, the path of the sun across the sky will shift between the two extremes set by the solstices and will be halfway between them on the equinoxes.
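You can put rough numbers on that “halfway between” statement with a back-of-the-envelope calculation of the sun’s altitude at solar noon;  at solar noon the altitude is approximately 90° minus your latitude plus the solar declination, which swings between ±23.44° at the solstices and is 0° at the equinoxes.  The Portland latitude below is an assumption I am using for illustration.

```python
# Approximate solar noon altitude for Portland, OR (latitude ~45.5 deg N),
# ignoring refraction and other small corrections.
LATITUDE = 45.5        # deg N (assumed, for illustration)
AXIAL_TILT = 23.44     # deg, Earth's axial tilt

def noon_altitude(declination_deg):
    """Approximate solar altitude (degrees) at solar noon."""
    return 90.0 - LATITUDE + declination_deg

summer = noon_altitude(+AXIAL_TILT)   # summer solstice
winter = noon_altitude(-AXIAL_TILT)   # winter solstice
equinox = noon_altitude(0.0)          # either equinox

print(f"Summer solstice noon sun: {summer:.1f} deg above the horizon")
print(f"Winter solstice noon sun: {winter:.1f} deg above the horizon")
print(f"Equinox noon sun:         {equinox:.1f} deg (exactly halfway between)")
```

The roughly 47° swing between the two solstice altitudes is the envelope the SolarCan images later in this post capture on film.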

A Major Driver

The daily shift in the pattern of the sun across our day is a fundamental reality in our lives, driving the seasonal changes we all experience, and for those in the buildings industry, driving the loads we try to address with our envelope and HVAC system designs.  Sadly, I think we may be less and less aware of the reality of it.

Most of us would readily acknowledge the impact that seasonal changes have on our lives and on the facilities we design and endeavor to operate.   But how many of us could, by virtue of our daily observations, point to exactly where – on the horizon – the sun rose and set on the solstice and equinox?

Some, I am sure, can do just that.   But I suspect that in general, we are much less aware of that than we were even a generation or two ago, let alone a century or two ago.

Buchananhenge

One of Kathy’s and my traditions is that we sit on our porch swing (or in our front room when it’s cold) and watch the sunset together, so I have developed a pretty good sense of where the sun will be in the evening in Portland or Neskowin, Oregon.  Neskowin is where we own a share in a fractional and thus, get to spend 4 weeks a year at the coast.

A couple of years ago, I realized that by some cosmic coincidence, the long axis of the sofa and/or deck we sit on in Neskowin to watch the sunset is probably aligned within 5° or less of the same axis as our porch swing.  Kind of cool;  same view, just a different distance from the ocean.

But it was not until about 15 years into our life here on Buchanan Avenue that I realized that the long axis of our shot-gun bungalow (which is perpendicular to the long axis of the porch swing) is lined up so that on the equinox, the sun (if it is shining) beams down the basement stairs and hits the back wall of the basement.

IMG_2258

I was walking down the stairs through the yet-to-be-completed remodeling project that occupies half of the basement to the fairly completed remodeling project called my office when I noticed something unusual, as shown in the photo to the left.

One unusual thing was that it was not overcast early in the morning, which it often is in March here in Portland.  The other was that the rays of the sun were hitting the back wall of the basement.

This was on March 7th, and as the morning progressed, the sun beam retreated across the floor as the sun rose in the sky.  And as the days progressed, the point of light (when it was visible) moved across the far wall until the path of the sun was cut off by the stairwell. 

Kind of cool.  It reminded us of Stonehenge so we officially termed it Buchananhenge.  Kathy plans to paint some sort of mural tied to the event on the back wall, and maybe the floor, once the (somewhat mythical) remodeling effort is completed.

Enter SolarCan

SolarCan is the “toy” I mentioned at the beginning of the post.  I discovered it thanks to the “Somewhat Occasional Newsletter” that I receive by virtue of my membership in the Cloud Appreciation Society.   SolarCan is a pin hole camera fabricated from a beer (or soda) (or, I have now discovered, wine) can.

Inside the can is a piece of really, really slow film facing the pin hole.  As a result, if you mount the “can” to some stationary, vertical object with the pin hole facing south, over time, you will generate a photograph that shows the path of the sun across the sky each day.  And, if you allow it to remain in place long enough, the background image will also burn itself into the film.

When your patience wears out, you open the can with a conventional can opener, pull out the film, and scan it, which generates a negative.  Then, you import it into some sort of photo processing software like Gimp or Photoshop or PaintShop and reverse the negative and start playing with it.

Upon discovering SolarCan, I procured several;  enough to send one to each of the grandkids, send one to my brother (who is an actual, for real graphic artist/producer) along with several to experiment with here on Buchanan Avenue and on the deck at Neskowin.

The View from Neskowin

Just to orient you, here are a couple of pictures from the deck at Neskowin with the SolarCan immediately behind me.  The were taken the day I took the can down and headed home to process the film.

2021-11-23 Neskowin Rainbow 03

2021-11-23 Neskowin Sunset

The large “rock” in both images is called “Proposal Rock” and appropriately enough, several proposals and weddings occur in its presence every year.  And probably about once a year, the coast guard has to come in with a helicopter and pull hikers off the top because they forgot to consider the tides when they planned their hike and were stranded as a result.

This next image is a panorama that I shot several years ago now.  But I include it because I was standing about where the SolarCan was mounted and because the field of view is comparable to the field of view captured by the SolarCan.

December at the Beach 2014

Here is the negative image from the SolarCan, which captures events from June 7, 2021 through November 23, 2021, so pre-solstice to almost equinox.

CCI_000120 cr

And here is what that looked like when I scanned it into PaintShop, rotated it  and reversed it.  Note that since it is rotated, not flipped, the image is backwards from reality.  More on that in a minute.

CCI_000119 - Copy

The blotches are there because, despite being under an eave and only having a pin hole exposed, the driving rain that is common at the coast managed to gain entry into the can and the film was wet.   I have played with the image some in Gimp and PaintShop (steep learning curve for me, so probably a lot more that I can do) and here is where it is currently.

CCI_000120 - Copy

So, some improvement, but a ways to go.  Initially, I was kind of disappointed, viewing the image as being damaged by the water.  But my perspective changed when Kathy looked at it, flashed her “come hither eyes” at me and said she thought I had achieved a very artistic effect.  So, I am thinking of leaving well enough alone.

Getting It Right

This paragraph did not exist in my initial post because I had not realized the error of my ways when I rotated vs. flipped the image.   But as I subsequently studied the two images I had, I realized things were backwards, as I mentioned in my note at the beginning.   So here is the SolarCan image flipped (vs. rotated), which puts everything into the proper orientation.

CCI_000120 - Copy Flipped

In the image below, I tried to overlay the panorama I took and the SolarCan image so you could kind of correlate things.  I played with the aspect ratios in the images to try to get things to correlate as closely as possible using the tree in the center of the picture and proposal rock (the flattened “bump” on the right side) as the frames of reference.

Combined Coast 2

The correlation is not perfect;  obviously the sun does not rise from inside the condo on the left.  That is primarily because I was not standing exactly where the solar can was located when I took the panorama among other things.  

For instance, the film in the can is curved because it lies on the inside wall of the can; i.e. it lies on the circumference of the circle represented by the can’s diameter.  This is in contrast to being on a plane perpendicular to the pin hole, extending across the diameter of the can.  But it will give you the general idea.

The View from Buchanan Avenue

I mounted the Buchanan camera on the pole supporting the rain gauge that is attached to the little deck on Kathy’s art studio in the back yard.  (The rain gauge in the foreground is now located on a pole just below the blue bird house in the background;  South is to the center right;  where the bright spot in the trees is).

2019-07-24 Art Studio View

We are blessed with a lot of trees and that is just about the only spot with a clear view to the South for a significant part of the day. 

The “can” went up right after the 4th of July and my patience ran out Thanksgiving week, so the image below does not cover the entire span from equinox to solstice, but almost.

Back Yard CCI_000117 - Copy 02 Flipped

In both images, the arching bands are the daily path of the sun.  Variations in intensity are (I suspect) due to clouds passing through. Gaps between the bands (I suspect) represent days of total overcast. 

I also suspect the intensity of the bands when the sun is lower in the sky is generally higher on a clear day than when the sun is higher in the sky due to the incident angle between a ray of light and the film in the can;  not totally sure about that but I think it is true.

Next Steps

Having done my initial experiments, I am already on to my next artistic effort.  I just deployed a new SolarCan on the rain gauge pole on the solstice and plan to leave it there until the June solstice, thereby capturing the full path of the sun from Winter to Summer.  I will replace it with another to capture the path the other way.

I plan a similar effort at Neskowin although the dates are constrained a bit by when we have our weeks in the rotation.  But I should be able to capture the full cycle and may try to find a way to keep the film dry (or maybe not, given the flashing of come hither eyes associated with perceived artistic efforts on my part.)

And I will shoot a panorama with my digital camera oriented as close as possible to the orientation of the SolarCan so I can better correlate the two images.

Conclusion

Hopefully, my adventures and experiments observing the sun’s path will inspire you to consider doing the same (obviously, don’t look directly at it).

For me, even though I had an intellectual awareness of it from a very young age, watching the minute by minute, hour by hour, day by day shift via Buchananhenge and SolarCan gave me a firmer grasp of it.   And it also made me feel a bit more connected with this amazing universe we are all a part of.

IMG_0075

In fact, if you find this to be interesting, then you may also enjoy one of my favorite books, Connecting with the Cosmos, by Donald Goldsmith.  The subtitle says it all in a way;  each of the 9 chapters is dedicated to exploring a different aspect of the sky, starting with sunrise and sunset, my topic here in a way, through observing the moon and various constellations, all with the unaided eye.

So here’s to happy sky-watching and a great holiday season.  And thanks to all of you who continue to visit the blog.

Holly

David Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/


Heat Pumps Don’t Create Energy, They Move Energy

Wow, it’s been a long time since I have written a blog post!

Times fun when you’re having flies …

as frogs are often heard to say.

I have been putting a lot of new content up on the Commissioning Resources website, so that has taken my time.   But fairly recently, I had a discussion with a friend who was having a hard time wrapping their head around the coefficient of performance of a heat pump/refrigeration process, and I came up with an analogy that – while not perfect – worked for them and which they found somewhat amusing. 

So I decided I would try to resurrect my blog posting activities by sharing it for what it’s worth.

The Question

The fundamental question was …

It seems like magic that you can get a COP = 4.  I’m having a hard time wrapping my head around the fact that you can get 4 units of energy OUT for putting in 1 unit of energy. 

The Somewhat Technical Answer

I started out by saying that  I thought maybe the key was to think about the compressor as doing work to move energy rather than creating the cooling effect.  

In other words, a refrigerant at a saturation temperature/pressure of “X”°F/Y psia will produce “Z” Btus of cooling via the phase change that occurs if heat is applied to the evaporator, causing the liquid refrigerant to boil and become a vapor.    It is the energy absorbed by the phase change process that produces the cooling. 

The amount of energy absorbed per pound of refrigerant as well as the saturation pressure associated with the temperature that the phase change occurs at will be a function of the physical properties of the refrigerant. 

In other words, you may need to move “U” pounds of refrigerant A at a saturation temperature/pressure of “X”°F/Y psia, but move “V” pounds of refrigerant B to produce the same refrigeration effect at a saturation temperature/pressure of “X”°F/Y psia.

Once the refrigerant has gone through the phase change, the problem becomes getting rid of the heat by condensing the refrigerant.  One way to do that is to move it to a higher saturation temperature and pressure so that you can use some other medium that is cool relative to this new, elevated pressure and temperature to reverse the process and condense the refrigerant. 

The compressor accomplishes this for us by compressing the cool vapor from the evaporator.   In doing this, it does work on the refrigerant (the pv/J part of the steady flow energy equation) …

… and the amount of work it does can be determined by plotting the cycle on a pressure enthalpy diagram[i].

The work includes the irreversibility losses, i.e. there is a change in entropy.  All of this will be specific to the refrigerant that is used, as will the evaporator saturation temperature and pressure relationships.

In addition, you will put more energy into the compressor motor than you get out as shaft power to the compressor because of the losses in the motor. If the motor is cooled by the refrigeration process, then these losses will also show up as heat to be rejected at the condenser.

At the evaporator coil and condenser coil, the energy transfer is 100% efficient; i.e. 100% of the energy removed from the fluid flowing through the evaporator shows up as vaporized refrigerant and 100% of the energy removed from the condenser by the air or water flowing through it shows up as an increase in air or water temperature.  But the amount of energy rejected is more than the cooling effect because the compressor energy is also being rejected. 
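The energy bookkeeping behind my friend’s original COP = 4 question can be sketched in a couple of lines.  The point is that the compressor does not create the 4 units of cooling;  it moves them, and the condenser then has to reject the cooling effect plus the compressor work.

```python
# First-law bookkeeping for a COP of 4, in arbitrary energy units.
cooling_effect = 4.0      # energy moved out of the evaporator
compressor_work = 1.0     # energy purchased to move it

cop = cooling_effect / compressor_work             # COP = what you got / what you paid
heat_rejected = cooling_effect + compressor_work   # condenser load

print(f"COP           = {cop:.1f}")
print(f"Heat rejected = {heat_rejected:.1f} units for every 1 unit of work")
```

Nothing is created from nothing;  the “extra” 3 units were always there in the fluid being cooled, and the condenser ultimately rejects 5 units, not 4.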

I think my friend kind of knew this all along;  he basically alluded to it in what he said when he initiated the discussion.  But somehow, my saying it back to him caused the dots to connect.  All I really did was mirror back what he already knew.  That is the power of having a discussion, I think.

But at that point, I was on a roll, so I continued with my analogy, which they patiently tolerated.   (You, of course, can just stop reading this and I will never know). 

The Analogy

Suppose you have a nice little cabin out in the Pacific Northwest woods next to a very pretty, deep lake that is fed by streams which are fed by melting glaciers.   Most of the time the cabin is quite comfortable, but there is the occasional hot summer day when it would be nice to have some sort of cooling system.  

One day, after going snorkeling to see the fish in the lake, you realize that the water towards the bottom of the lake is actually pretty cold, even though the surface water temperature is very pleasant.

That gives you an idea.  

You go buy an 800 cfm fan coil unit, install it in the basement of your cabin, and run a pipe from the inlet of the cooling coil out to just below the surface of the lake.  Then you add a vertical extension to it so that when you open the valve to the coil, the head produced by the water level in the lake will cause water to flow through your coil, but the flow will be from the bottom of the lake, where the water is coldest. 

You buy a kiddie pool to place under the outlet of the coil to catch the water so it doesn’t flood your cabin.  The good news is that you can make 76°F air with this arrangement, which will cool down your cabin;  the cabin air is at 90°F but very low RH (i.e. the coil is running dry). 

The bad news is that the 4.8 gpm it takes to do this adds up and the kiddie pool starts to overflow.  So you build a flume and reservoir that returns the water to the lake, and you keep up with the 4.8 gpm by filling a bucket, climbing up a ladder 15 feet, and dumping the water into the flume.

The bottom line is that the system is doing a ton of cooling, dropping the temperature of the 800 cfm air stream by 14°F (the 4.8 gpm of water flowing through the coil warms about 5°F in the process).
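The numbers in the story hang together if you run them through the standard sensible heat relationships (1.08 Btu/hr per cfm-°F for air at standard conditions and 500 Btu/hr per gpm-°F for fresh water).

```python
# Sanity check on the cabin numbers using the standard sensible heat
# constants at standard conditions.
cfm = 800
air_dt = 90 - 76                      # deg F drop across the coil

q_air = 1.08 * cfm * air_dt           # Btu/hr absorbed from the air

gpm = 4.8
water_dt = q_air / (500 * gpm)        # deg F rise in the lake water

print(f"Cooling effect: {q_air:,.0f} Btu/hr (~{q_air / 12_000:.2f} tons)")
print(f"Water temperature rise: {water_dt:.1f} deg F")
```

The 12,096 Btu/hr result is, for all practical purposes, one ton of refrigeration (12,000 Btu/hr), and the roughly 5°F water temperature rise becomes important later when the seawater version of the system shows up.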

Natural forces are producing the cooling effect; 

  • The head created by the difference in the level of the lake and the outlet of your pipe moves the water through the coil to the basement. 
  • The ability of the water to absorb heat by changing temperature provides the actual cooling effect. Basically, the lake water is your refrigerant;  its just doing the cooling with a sensible energy change vs. a phase change.

But to keep your cabin from flooding you need to do some extra work to move the water back to the lake, which involves carrying a bucket of water multiple times from the basement level to the flume level.   When you dump the water into the flume, you are above the level of the lake. 

This is a bigger elevation change than the difference between the water level in the lake and the water level in the Kiddie pool.  But to get the water to flow from your cabin back to the lake, you have to dump it into the flume at the higher elevation. 

Bottom line, to keep the system working and keep from flooding your cabin, on average you need to move 4.8 gallons of water through a 15 foot elevation change.

But, of course, the mass of the water is not the only thing you move up the ladder.  You also move your own mass and the weight of the bucket.  If you do the math with the water horsepower equation …

…   you discover that the water hp is about 0.018 hp. 

But if you convert the gallons of water in the bucket to pounds, add your weight and the bucket weight to it, and multiply by the 15 foot elevation change and the number of trips you need to make to keep the basement from flooding, you discover that you are doing about 0.087 hp of work, or roughly 220 Btu/hr. 

If your body was about 25% efficient, you would need to consume a lot of calories to keep this process going[ii].
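The arithmetic above can be sketched as follows.  The 150 lb body weight and 5 lb bucket are assumptions I am using for illustration;  the 0.087 hp figure in the text implies weights in this general range.

```python
# Water horsepower for the ladder exercise, plus a rough version of the
# "carry yourself and the bucket too" number.  Body and bucket weights
# are assumed values for illustration.
gpm = 4.8
head_ft = 15.0

water_hp = gpm * head_ft / 3960          # classic water horsepower equation

# Assume one trip per minute, moving the accumulated water plus you
# and the bucket up the 15 ft elevation change.
lb_per_gal = 8.34
water_lb_per_min = gpm * lb_per_gal
body_lb, bucket_lb = 150.0, 5.0          # assumptions

total_lb_per_min = water_lb_per_min + body_lb + bucket_lb
total_hp = total_lb_per_min * head_ft / 33_000   # ft-lb/min -> hp

print(f"Water hp: {water_hp:.3f} hp")
print(f"Total hp: {total_hp:.3f} hp")
```

The useful work on the water alone is tiny (about 0.018 hp), but carrying yourself up the ladder along with it multiplies the effort by a factor of four or five, which is the point of the analogy:  the work of moving the heat dwarfs the “refrigeration effect” itself.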

Since you find the free cooling to be quite desirable on the occasional hot day, but would rather not have to climb the ladder so much, you invent a device that can do that for you using solar cells as a source of power and begin to market your new product.  

Changing the Refrigerant

As a result of the success of your invention, you accumulate great wealth and decide to buy a place on the US Virgin Islands so you can spend some of your time there relaxing on the beach, snorkeling, and watching sunsets. 

Given the high temperatures and humidity levels, you decide to install your cooling system in one room of your beach house to provide a bit of relief from the heat and humidity, this time using seawater as the refrigerant.

When you commission your system, you discover a number of differences from the system in your cabin.  

For one thing, given the humidity in addition to the heat, as well as the available water temperature, you realize you probably will need a larger fan coil unit;  at one ton, your current model cannot dehumidify and only performs sensible cooling.  So while it helps, what is really needed is some relief from the humidity in addition to the heat.

But you decide that the sensible cooling is better than nothing, so you continue to commission the system while waiting for your new, larger fan coil unit to arrive.  In doing that, you discover that to create the ton of cooling, you need a bit more flow, specifically 4.9 gpm instead of 4.8 gpm.  

After investigating and determining that your flow measurement is in fact accurate, you realize that  the specific heat of seawater is lower than that of the pure fresh water in the lake by your cabin; 1.00 Btu/lb-°F for the fresh water vs. 0.96 Btu/lb-°F for the seawater.

In other words, it is a different refrigerant and because of its physical properties, you need to move more of it to produce the same refrigeration effect. 

You also realize that the reason you seem to float better when snorkeling in the Caribbean is that the density of the saltwater is higher than that of the fresh water in your lake back at the cabin;  62.29 lb/cu.ft. for the fresh water vs. 64.00 lb/cu.ft. for the saltwater.

That means that you have to do a bit more work to keep the system running.  More specifically, you find that you are moving 2,509 lb/hr of saltwater up the 15 foot ladder, or 0.0874 hp when you add your weight and the bucket into the mix.  This is in contrast with the 2,398 lb/hr you had to move up the ladder at your cabin, which took 0.0866 hp with the weight of you and the bucket added in.
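Those mass flow figures fall straight out of the specific heats.  For the same one ton (12,000 Btu/hr) effect and the same roughly 5°F water temperature rise, the lower specific heat of seawater means more mass has to move:

```python
# Fresh water vs seawater as the "refrigerant" for one ton of cooling
# at the same water-side temperature rise.
Q = 12_000          # Btu/hr, one ton of cooling
DT = 5.0            # deg F water temperature rise

cp_fresh = 1.00     # Btu/lb-degF, fresh water
cp_sea = 0.96       # Btu/lb-degF, seawater

m_fresh = Q / (cp_fresh * DT)    # lb/hr of fresh water
m_sea = Q / (cp_sea * DT)        # lb/hr of seawater

print(f"Fresh water: {m_fresh:,.0f} lb/hr")
print(f"Seawater:    {m_sea:,.0f} lb/hr ({m_sea / m_fresh - 1:.1%} more mass)")
```

The round numbers (2,400 vs 2,500 lb/hr) line up closely with the 2,398 and 2,509 lb/hr figures above;  the small remaining difference comes from the density and flow values used in the story.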

Ultimately, you conclude that with a bit of development, you can expand your product line to provide a product suitable for providing relief to owners of USVI beach houses.  And what better place to do the development than from the deck of your beach house, overlooking the Caribbean.

Thus Ends the Analogy

Hopefully that was more useful than silly. 

The idea was to illustrate that the actual refrigeration effect was provided by the refrigerant (the lake water or sea water) absorbing heat.  But to reject the heat, work had to be done to move the heat to a location where it could be rejected. 

In the case of the initial example, it was done by carrying a bucket up a ladder to an elevation that would allow the water to flow back to the lake, where natural forces (like deep sky effect and evaporative cooling) would cool it back down.

But if you change refrigerant (seawater instead of fresh water), because its physical properties are different (it’s not as good a refrigerant as pure water), you end up needing to move more mass to move the heat from the kiddie pool back to the ocean, where evaporative cooling and deep sky effect can cool it back down. 

David Sellers
Senior Engineer – Facility Dynamics Engineering    

Visit Our Commissioning Resources Website at http://www.av8rdas.com/

[i]  If you want an example of a pressure/enthalpy diagram, you will find one in this blog post.  If you want to understand how to use one in practical terms, Sporlan publishes a very well done technical guide that is well worth reading in my opinion.

[ii]  In working on the analogy, I found a really interesting blog post about the efficiency of the human body.  The author was looking at biking and walking. 

Here is a summary table from the post showing miles per gallon for different activities and energy sources.  The difference between food and gas/lard is the energy density of our average diet vs. the energy we would get if all we ate was lard, which was the closest he could come to the equivalent of gasoline in terms of energy density.


Satellites, Eclipses, and Happy Holidays

As some of you know, I am pretty interested in the weather.  So most days, while having coffee and settling into the office, I am poking around on-line, looking at things like the models that the University of Washington Department of Atmospheric Sciences makes available, looking at weather maps, downloading data and plotting soundings with RAOB, and trying to understand what they mean. 

Sometimes, I even load data into Digital Atmosphere and try my hand at plotting a front.   Still a long way to go there but I think it may be kind of like learning to use a psych chart;  you just have to do it and it will eventually come to you.

20203351640_GOES17-ABI-FD-GEOCOLOR-1808x1808

But my favorite part of the routine is the time I spend looking at satellite imagery.  I find myself mesmerized by the colored view of the earth and the clouds just hanging there in space.

The images update every 10 minutes and you can even create a little animated loop and watch the terminator and weather systems sweep across the globe, as shown below.

G17_fd_GEOCOLOR_36fr_20201219-1818

I was doing this earlier this week when my eye caught something.  At first, I didn’t realize what was happening.  But then, it dawned on me (and you probably have already figured it out from the title);  I had just seen the eclipse from the vantage point of GOES West.

I thought it was really cool.  So I created animations for GOES West and East, downloaded them and figured I would share them here.  This first one is from GOES West, which is what initially caught my eye. South America is in the lower right part of the image so watch that area to see the shadow show up.

This one is GOES East, which gives a better view of things since South America is front and center.  I don’t know exactly what the yellow bars that show up at the end of the sequence are, but I think they had something to do with the satellite data set not being fully complete.  Fortunately, the eclipse is in the first part of the sequence.

If you want to slow things down or pause, I made a little video that includes both of the animations with the yellow bars edited out.  You will find it at this link.

If you go to the GOES imagery page and pick a view, you will discover that there are all sorts of ways to look at the images that reveal all sorts of different things about the atmosphere.   But the one that I love the most is the GeoColor product, which is what was used for the images above.

The image is actually a combination of different satellite data streams to create a very vivid, realistic daytime image.  The nighttime image uses data from different infrared bands to show low liquid water clouds as differentiated from higher ice clouds.  The city lights are from a different, static data set and are provided to allow you to orient yourself.

To me, it is amazing to contemplate what you are seeing when you see that shadow pass over the surface of the earth; masses orbiting and interacting with each other in a perfect balance.   In the days leading up to Christmas this year, we will have the opportunity to see a different manifestation of that ballet as Saturn and Jupiter come into the closest conjunction they have been in for some 800 or so years.[i]

Saturn and Jupiter Conjunction

Some have even hypothesized that the star of Bethlehem may have been just such an event.

So now (if you are still reading this) you are thinking, O.K., there is the “Happy Holidays” part of the post title.  And that is in fact part of it.

But, the other part of it is to point out that we did not always have such a spectacular view of our home available to us at our finger tips.  Prior to this time of year in 1968 – specifically December 21 through 27, 1968 – the most remote vantage point had been what Pete Conrad and Richard Gordon had captured for us from 850 miles up on their Gemini 11 mission, which is shown below [ii].

850 miles up 7-s66-54706-b

But on Christmas Eve, 1968,  the crew of Apollo 8 – Frank Borman, James Lovell, and William Anders – captured an earth rise while orbiting the moon; the first time humans had done that.

apollo08_earthrise

The image [iii] is, of course, quite famous;  some have called it

the most influential environmental photograph ever taken[iv]

I tend to agree with that, having seen it with my own eyes that evening.  That image, the lunar surface rushing by, and the words the astronauts shared that evening[v] are burned into my memory.  It definitely is part of the reason I do what I do these days.

Later that evening – actually, I think in the early hours of Christmas day (EST), this sequence of transmissions occurred (I believe the time stamp is hours into the mission and liftoff was at 7:51 a.m. EST on December 21, 1968):

089:31:12 Mattingly: Apollo 8, Houston. [No answer.]

089:31:30 Mattingly: Apollo 8, Houston. [No answer.]

089:31:58 Mattingly: Apollo 8, Houston. [No answer.]

089:32:50 Mattingly: Apollo 8, Houston. [No answer.]

089:33:38 Mattingly: Apollo 8, Houston.

089:34:16 Lovell: Houston, Apollo 8, over.

089:34:19 Mattingly: Hello, Apollo 8. Loud and clear.

089:34:25 Lovell: Roger. Please be informed there is a Santa Claus.[vi]

If you followed the space program, the hours and minutes between the Christmas Eve broadcast and the transmissions above were pretty important, because that was when the Trans-Earth Injection burn would happen.  This event involved the (single) engine in the service module igniting and accelerating the spacecraft out of lunar orbit into a trajectory that would carry it back to earth.

If the engine failed for any reason, the crew was not coming back.

Thus, the acknowledgement of the existence of Santa Claus.

Bill Anders, who took the earthrise picture above, often said something along the lines of:

We came to explore the moon and what we discovered was the Earth

Ultimately, I think why I am writing this is to encourage you to take some time to contemplate and fully appreciate that discovery.   I think it’s easy to take for granted in the world we are in.  But I also think it is crucial that we appreciate it.

In her 1976 album Hejira,  in a song titled Refuge of the Roads, Joni Mitchell wrote:

In a highway service station
Over the month of June
Was a photograph of the earth
Taken coming back from the moon
And you couldn’t see a city
On that marbled bowling ball
Or a forest or a highway
Or me here least of all

These days, I think that is an important perspective to keep.   When you look at our pretty little home from the vantage point of space, all of the things that seem to trouble us and divide us become invisible.   And what becomes apparent is that we are all in this together on a beautiful but tiny little life boat.

David-Signature1_thumb_thumb_thumb

PowerPoint-Generated-White_thumb2_thDavid Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/

i          Image Credit: NASA/ Bill Ingall

ii         NASA/Dick Gordon; Sept. 14, 1966 – View From Gemini XI, 850 Miles Above the Earth | NASA

iii       Image Credit: NASA/Bill Anders; Apollo 8: Earthrise | NASA

iv       Nature photographer Galen Rowell

v        This link will take you to a recording.  There are religious overtones, so fair warning if you find that sort of thing offensive.   Me personally;  I am probably more spiritual than religious, but the moment was and still is very moving.

vi        Apollo 8 Flight Journal – Day 4: Final Orbit and Trans-Earth Injection (nasa.gov)

 

Posted in Uncategorized | 4 Comments

What is the Energy Content of a Pound of Condensed Steam? (Part 3)

or, It Depends …

This post is the last in a string of posts that started out as an e-mail answering a question from one of the folks taking the Existing Building Commissioning Workshop this year at the Pacific Energy Center.   The question was about the energy content of a pound of steam, which seems like a simple question but it turned out not to be.

In the first post we explored different ways to address the question including using published conversion factors, rules of thumb, and steam charts and tables.  In the second post, we took a closer look at how steam is procured, including on-site generation and district steam systems and how those approaches impact the amount of useful energy that is recovered from the steam.  We also looked at ways to maximize the amount of energy that you extract from a pound of steam for use in your HVAC processes.

In this post, we will look at some common energy saving opportunities associated with steam systems.  I should also mention that you will find a number of general resources about steam in this blog post.

Contents

I have included a table of contents that will allow you to jump to a topic of interest.  The “Return to Contents” link at the end of each section will bring you back here.

Maintaining The Benefits

Even if set points and processes have been optimized, there are things that you should look for in order to maintain the benefits, no matter where your steam comes from and where the condensate goes.  Typical issues (a.k.a. EBCx and ongoing commissioning opportunities) include the following items.

Failed Condensate Return Pumps

Just because local boiler plants and campus district steam systems are set up to return their condensate and recycle it does not mean they are actually doing it. Condensate return pump failures are not unusual. 

Typically, when this happens, the receiver drain valves are opened until repairs can be made.  As a result, the condensate is dumped to the sewer, something that would not happen if the return pumps were operational.  Unfortunately, the failed pumps and open drain valves are often forgotten.

A facilities director friend of mine at a large campus in the Midwest instituted a policy in his weekly meetings where each operator was required to report on the condition of the condensate return pumps in the facilities they were responsible for.  “Not working” was the “wrong answer”, and the policy quickly resolved what had been an ongoing problem with failed condensate pumps, saving a lot of energy, water, and water treatment chemicals at the boiler plant.

<Return to Contents>

Failed Insulation

Condensate is hot, and insulation preserves the energy it contains.  Repairing damaged insulation typically delivers a quick payback and can frequently be accomplished in-house.  All you need to do is measure the surface temperature with an infrared gun and look up the loss in a table or chart.

image_thumb12

There are a number of resources at this link that will help you get started.
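If you just want a ballpark number before digging into the tables, the arithmetic is simple.  The sketch below assumes a combined convection-plus-radiation surface coefficient of about 2 Btu/hr-ft²-°F for still indoor air, which is a rough rule-of-thumb value I am supplying for illustration; use the published tables and charts for real work.

```python
import math

# Rough heat loss from a bare (uninsulated) pipe, given the surface
# temperature you read with an infrared gun.  The combined convection +
# radiation coefficient of ~2 Btu/hr-ft2-F for still indoor air is an
# assumed rule-of-thumb value, not a substitute for the published tables.

def bare_pipe_loss_btuh_per_ft(surface_f, ambient_f, pipe_od_in, h_comb=2.0):
    surface_ft2_per_ft = math.pi * (pipe_od_in / 12.0)  # pipe surface area per foot
    return h_comb * surface_ft2_per_ft * (surface_f - ambient_f)

# Example: a 2 in. (2.375 in. OD) condensate line at 180F in a 75F space
print(round(bare_pipe_loss_btuh_per_ft(180, 75, 2.375)))  # Btu/hr per foot
```

Multiply a per-foot number like that by the length of damaged insulation and the hours it sees those conditions and you have the starting point for a payback calculation.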

<Return to Contents>

Steam Trap Failures

For a steam system to work properly, it is important to ensure that only condensate leaves the steam system.  Steam traps accomplish this function but can fail if they are not properly monitored and maintained.  If a trap fails, live steam enters the return system, wasting the energy it contains and potentially causing other issues on the return side.

The infrared thermometer shown above for checking out insulation savings will also help you find a failed steam trap.  If there is a temperature drop across the trap, with the leaving temperature at or below the saturation temperature for the pressure in the return, then the trap is probably doing just fine, like this one.

image_thumb81

But if the trap has failed, the temperature in the return line will be up near the saturation temperature of the steam, like this.

image_thumb10

It is important to realize that a high temperature downstream of the trap means that a trap in the area has failed, not necessarily the trap you took the temperature across.

In other words, the steam leaking by from a failed trap will raise the temperature of all of the pipe in its vicinity.  So to narrow things down, you may need to use an auto mechanic’s stethoscope to listen for the steam jetting through the outlet orifice in the trap.

There are resources at this link that can help you assess steam trap failures and the related savings.
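The temperature test described above boils down to comparing the downstream reading against the saturation temperature for the return pressure.  Here is a minimal sketch of that logic; the lookup values are rounded low-pressure steam table numbers, and the 5°F margin is an assumption of mine, not a published criterion.

```python
# A sketch of the trap test described above: if the line temperature
# downstream of a trap is at or below the saturation temperature for the
# return-side pressure, the trap is probably holding; if it is up near
# the steam temperature, a trap in that area has likely failed.
# Saturation temperatures are rounded steam table values.

SAT_TEMP_F = {0: 212, 5: 227, 10: 239, 15: 250}  # gauge psi -> Tsat, F

def trap_status(downstream_temp_f, return_psig=0, margin_f=5):
    tsat = SAT_TEMP_F[return_psig]
    if downstream_temp_f <= tsat + margin_f:
        return "probably OK"
    return "possible failed trap nearby"

print(trap_status(170, 0))  # well below atmospheric saturation temperature
print(trap_status(240, 0))  # up near steam temperature
```

Remember the caveat above: a “possible failed trap nearby” result points at an area, not a specific trap, so the stethoscope still has a job to do.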

<Return to Contents>

Piping Failures Due to Corrosion

Condensate tends to be corrosive because the carbonate and bicarbonate ions that enter the boiler with the feedwater break down due to the heat and pressure in the boiler.  One of the byproducts is carbon dioxide gas, which leaves the boiler with the steam and then reacts with the condensate to form carbonic acid.

image261_thumbimage_thumb1

There are water treatment strategies that can be used to control this, as well as piping materials that can minimize the potential for failure.  But my point here is that when a failure occurs, the condensate is lost along with the benefits of returning it to the plant.

<Return to Contents>

Long Pipe Runs to the Central Plant

As mentioned in the previous blog post under Paradoxes, long pipe runs to the central plant can result in parasitic losses, even if they are insulated.  As a result, a number of campuses I have been involved with include a heat exchanger in the condensate return system that is used to recover energy from the condensate for local use, perhaps preheating domestic hot water or serving other loads that can be served by low temperature hot water.

<Return to Contents>

Conclusion

Thus ends another string of somewhat long blog posts.  Hopefully, they have given you some insights into how much energy is associated with a pound of condensed steam, techniques that can be used to evaluate it, and ways that you can maximize the potential and maintain the benefits of a system that uses steam as a source of heat.

David-Signature1_thumb

David SellersPowerPoint-Generated-White_thumb2_th[2]
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/


Posted in Boilers, Hot Water Systems, and Steam Systems, HVAC Calculations, HVAC Fundamentals, Operations and Maintenance, Steam Systems | 1 Comment

What is the Energy Content of a Pound of Condensed Steam? (Part 2)

or, It Depends …

This post builds from the previous post, which started out as an e-mail answering a question from one of the folks taking the Existing Building Commissioning Workshop this year at the Pacific Energy Center.   The question was about the energy content of a pound of steam, which seems like a simple question but it turned out not to be.

In the previous post, we explored different techniques that could be used to assess the energy content of a pound of steam and looked at where the value used by ENERGYSTAR® for converting pounds of steam from a commercial district steam system to Btus came from.  It turned out to be associated with receiving saturated steam at a delivery pressure of 150 psig and then dumping the condensate to the sewer.

Dumping the condensate wastes quite a bit of energy, which is the reason the ENERGYSTAR® conversion factor seems high when you compare it to what you might expect based on rules of thumb or even an analysis that looked at the latent heat of vaporization for 150 psig saturated steam.   This approach also wastes water, another important resource with embedded energy implications. 

The good news is that there are other approaches that can be used to reduce the wasted resources.   This post looks at some of them as well as ways to maximize the amount of energy extracted from a pound of steam before it is recycled or dumped to the sewer.

Contents

Despite breaking up the original behind this into a string of posts, each post in the string is still somewhat long.  So, to minimize the pain for someone just wanting the bottom line, I have included a table of contents that will allow you to jump to a topic of interest.  The “Return to Contents” link at the end of each section will bring you back here.

Steam System Resources

I thought I would mention that there are several blog posts that will connect you with resources on steam and steam systems.

Steam Heating Resources will connect you with a really good book titled The Lost Art of Steam Heating.  It also connects you with some articles Bill Coad wrote on the topic and a number of other resources.

Assessing Steam Consumption with an Alarm Clock is the first in a series that looks at a way that you can develop a steam system flow profile by monitoring condensate pump and feed water pump operation.  It was something Chuck McClure taught me very early in my career, and I still use the technique to this day (but with data loggers instead of alarm clocks).

<Return to Contents>

District Steam vs. Onsite Generation

The Operating Cycle

In terms of how condensate is handled, what I described in the previous post for a typical commercial district steam system (dumping it to sewer)  is a totally different scenario from what would happen if you had boilers on site generating the steam.  In the latter situation, the condensate is collected and returned to the boilers and recycled.   Some fresh water is added to make up for any losses due to leaks or the use of steam in a process (direct injection humidification for instance) and to make up for the water that is intentionally drained from the system to manage total dissolved solids levels (typically termed blow down). 

But for most facilities with local boiler plants generating steam, returning the condensate minimizes the amount of energy needed in the boiler to create steam since it only needs to heat the feedwater from the condensate return temperature (typically in the 140-200°F range) vs. heating it from the ground water temperature, which can be in the 45-50°F range for some parts of the year.  This practice also minimizes the consumption of water, another valuable resource. 
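To put rough numbers on that, here is a back-of-the-envelope comparison using the ~1 Btu/lb-°F specific heat of water.  The 50°F make-up and 180°F condensate return temperatures are simply points within the typical ranges mentioned above, not data from a specific plant.

```python
# Sensible heat needed to bring feedwater up to boiling (212F at
# atmospheric pressure), using water's specific heat of ~1 Btu/lb-F.
# The 50F make-up and 180F return temperatures are illustrative values
# from the typical ranges cited in the text.

def heat_to_boiling_btu_per_lb(feed_temp_f, boiling_f=212):
    return 1.0 * (boiling_f - feed_temp_f)

cold_makeup = heat_to_boiling_btu_per_lb(50)   # ground water make-up
returned = heat_to_boiling_btu_per_lb(180)     # returned condensate
print(cold_makeup - returned)  # Btu/lb the boiler does not have to add
```

That difference applies to every pound of steam the plant generates, which is why the “not working” condensate pump policy described in the previous post pays off so quickly.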

For a steam system of this type, you would probably not be entering thermal energy into ENERGYSTAR® as pounds of steam.  Rather, you would be entering it based on the fuel you used to fire the boilers.  This would reflect the net energy input required to bring the returned condensate back up to boiling temperature along with converting it to steam.

That’s not to say you would not be interested in the pounds of steam produced because that would tell you about the efficiency of your generating process.  And you would also be interested in the net energy change that occurred as the steam was condensed and the condensate was cooled, either intentionally or via parasitic losses like leaks or poor insulation.  If you had energy recovery devices in your boiler flue, you would want to consider their contribution also.

<Return to Contents>

The Operating Cost

If you were to look at the cost of a million Btus in the form of gas, which you would then burn in a boiler to make steam, and the cost of a million Btus delivered by a third party supplier as steam, the million Btus as steam option would seem crazy expensive.  And it is, if the cost of a Btu is all you look at.

But, if an Owner elects to buy steam instead of gas, part of what they are electing to do is to not operate a boiler plant.   That has a number of implications including:

  • No need to purchase the boilers and related auxiliary equipment in the first place.
  • No need to operate the boiler plant, which may require operators with a different skill set from those needed to simply use steam rather than generate it.  It may also require a round-the-clock operator presence, depending on the pressure and temperature of the steam that is required.
  • Dealing with natural gas increases the level of risk associated with operations compared to dealing with just steam (which is not without risk).
  • The reliability of a central plant may be much higher than that of a local plant unless significant investments were made in machinery and systems to provide N+1 redundancy at the local level.
  • The ASHRAE Systems and Equipment handbook has a chapter dedicated to District Heating and Cooling systems that includes a discussion of the economic considerations and other issues if you want to learn more.

    <Return to Contents>

Campus District Steam Systems

It is not unusual at all for college, university, industrial and commercial building campuses (like the wafer fab I worked at) to use a central steam plant to serve multiple buildings on one site, basically a district steam system approach.  However, unlike the commercial district steam system we have been looking at, most of the systems I have been around are set up to return the condensate to the central plant.

Typically, this is accomplished by providing one or more condensate receivers for each building to capture the condensate for the facility.  The receivers are equipped with pumps that move the condensate from the receiver to a return system that collects it and returns it to a receiver in the central plant.

From there it is pumped to a feedwater system, where any necessary make-up water and water treatment chemicals are added and the water is often deaerated (heated to drive out dissolved oxygen).  Pumps then move the treated condensate (now called feedwater) into the boiler as required by the load conditions, usually based on boiler water level.  Thus, the energy and water associated with the distributed steam are recovered instead of being dumped to the sewer.

The picture below will give you a sense of what this might look like.  It is from the central plant at the wafer fab I worked at for a while.

Boilers

The cylinder in the lower left is one of the high pressure boilers.  We generated steam at 100 psig and distributed it to various locations on the site, where it was reduced to 5-10 psig for use in heat exchangers and coils.

The large elevated cylinder in the center of the picture is the deaerator and feedwater tank.  The feedwater pumps are located below it.  Condensate was returned to this tank by condensate pumps at the various points of use out in the facility.  The picture below will give you a visual on what a typical condensate pump looks like.

Condensate Pump

In the deaerator, the returned condensate was heated to 200°F+ to drive out the dissolved oxygen.  Then it was pumped to the boilers by the feedwater pumps when needed based on the water level in the boilers.

So, for a steam system of this type, you really would be justified in doing some sort of analysis similar to the example in the previous post to come up with the kBtus delivered to the facility from the pounds of steam that you consumed (including the parasitic losses), even if you are billed by the central plant based on pounds of steam.  That would allow you to enter your consumption using a multiplier of 1 instead of 1.194.  And that would be legitimate (in my estimation) because by recycling the condensate, you are returning the energy and water associated with it back to the process rather than throwing them down the drain.

<Return to Contents>

Why Not Return the Condensate?

You may be wondering why a commercial district steam system would not include a return system that allowed them to collect and recycle the condensate from the loads they serve.  I can’t say that I know the answer to that for sure.  But my guess is that it has to do with a number of economic and operational factors that make it financially more attractive for the business entity to not deal with a condensate return system.

There are a number of things that make dealing with a condensate return system challenging, especially a system that covers an extensive area.  The map below illustrates the piping network associated with Clearway Energy Thermal San Francisco; Clearway provides district steam in a number of cities across the country.

Clearway-SFO-Map_thumb

To give you a sense of scale, the map is probably in the range of 1-1/2 miles on a side. That is a pretty significant network to maintain; miles and miles of pipe running underground below streets and sidewalks.   Challenging enough for the steam piping, which is at high pressure and experiences significant thermal expansion and contraction.

While the pressures would be lower for a condensate return system, the thermal expansion and contraction issues will still exist.  And you would need to have multiple pumping stations to move the condensate back to the central plant location.  

Probably most significantly, condensate tends to be corrosive for a number of reasons.   And ensuring that the customers maintained the equipment necessary to return the condensate to the system can also be an issue.

So, those are some of the reasons that I suspect a commercial supplier finds it easier (more economical) to not deal with returning condensate.  Over time, as the value of energy and water increases, that could change.  After all, when we dump the condensate to drain, we are throwing away at least two resources (energy and water) and probably a third (boiler feedwater treatment chemicals).

<Return to Contents>

*Sigh*

All of this may lead to the question:

What can we do to make steam and condensate return systems as efficient as possible?

The answer (as you might guess) is:

It depends …

The first thing to consider is if you have maximized the extraction of energy from the steam and condensate that was delivered to you.  The other is to make sure you are maintaining the mechanisms that deliver those benefits.

<Return to Contents>

Maximizing the Benefits

One way to maximize the benefits of a high temperature resource like steam is to make sure that, as much as possible, you reduce its temperature in ways that provide useful heat to the facility.

Cooling the Condensate via a Separate Process

It is easy to think that the energy benefit of steam is associated with condensing it.  And in the context of Btus per pound extracted, a phase change beats sensible cooling hands-down.  But, given that the condensate coming off a process that is condensing steam at atmospheric pressure is still quite hot, there may be some significant benefit associated with subcooling it.

For the process we looked at in the previous post, when I illustrated how to use a p-h diagram, the condensate came off the process at 212°F.   If there are loads in the facility that can be served by a fluid that is at this temperature or lower, then it may be possible to serve them by cooling the condensate rather than by condensing steam. 

Examples include processes like preheating outdoor air, preheating or heating domestic hot water, heating swimming pools, heating spaces and/or loads with less stringent temperature requirements like parking garages, and snow melting systems.  The viability of these processes from an economic standpoint can vary a lot, depending on:

  • Whether you are considering this option during design or in the context of an existing building, and/or
  • The value of the resources, and/or
  • What happens to the condensate after it leaves your facility (i.e., is it dumped to the sewer or is it recycled).

But to illustrate the point, let’s consider what would happen if we took the condensate coming off the process I illustrated in the p-h diagram in the previous post and subcooled it to 160°F, perhaps by using a heat exchanger to preheat domestic hot water that is held at about 150°F in a storage tank.

image_thumb31

As you can see, this would recover about 30% of the energy that would otherwise have been thrown down the drain, based on the district steam conversion factor that ENERGYSTAR® would use for systems billed in terms of pounds of steam consumed.[i]
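You can sanity-check that figure with the rough approximation that the enthalpy of saturated liquid water is about (T − 32) Btu/lb; the baseline is the condensate enthalpy that would otherwise go down the drain.  This is my approximation for illustration, not a steam table lookup.

```python
# Sanity check on the ~30% figure, using the approximation that saturated
# liquid water has an enthalpy of roughly (T - 32) Btu/lb.  Good enough
# for rough accounting; use steam tables for anything serious.

def liquid_enthalpy(temp_f):
    return temp_f - 32.0  # Btu/lb, approximate

down_the_drain = liquid_enthalpy(212)              # condensate enthalpy lost
recovered = down_the_drain - liquid_enthalpy(160)  # subcool 212F -> 160F
print(round(recovered / down_the_drain * 100))     # percent of drain loss recovered
```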

An interesting paradox about this is that if you made this change in a facility where the domestic water heating was provided by electricity, you would see a drop in electrical consumption but no increase in the pounds of steam that were used.  That is because you would have been extracting more energy from the steam consumed for other purposes before discarding it to sewer. 

In contrast, if the domestic water had been provided by using steam in a heat exchanger directly, this change likely would have reduced the steam consumption because you would have been extracting more energy from the steam that was used by other processes, like preheat, heating, and reheat, before discarding the condensate.

Of course, for this to all work out, the loads generating the condensate would need to be concurrent with the domestic hot water load requirement.  If they weren’t, then alternative energy sources would need to be used to meet the load.

<Return to Contents>

Cooling the Condensate by Optimizing Process Set Points using a Reset Schedule

The Design Day is Not Everyday

If you study load profiles for a while, you will realize that the design condition is an anomaly.  In other words, equipment selected for the 99% ASHRAE heating design condition will be oversized for about 99% of the hours in the year.  The psych chart below illustrates this for Columbus, Ohio, a location that sees a wide range of outdoor conditions over the course of a year.

image_thumb[1]

The colored squares are a bin plot of the climate data;  warmer colors represent more hours at the conditions inside the square than cooler colors, as can be seen from the key at the lower left of the chart.  Notice how most of the data points lie between the different design values, not on them.

That means that if, for instance, you selected a reheat coil serving a perimeter zone where, on the design day, the coil needed to supply 94-95°F air to offset the losses occurring through the envelope, then as it warmed up outside, the coil would not need to supply air at that temperature, all other things being equal.

Heating and Reheating are Different Processes

In fact, once the outdoor air temperature rose above the balance point for the building (the point where the internal gains exactly offset the losses through the envelope), the coil would no longer need to provide heat; it would only need to provide reheat and, in the worst case, deliver air at the zone temperature (a.k.a. “neutral air”).  This is a very important point to understand.

Since this post is already very long, I will save a detailed discussion of this for a subsequent post.  But in a nutshell (perhaps a coconut shell), a coil that is doing heating is adding energy to the area it serves to offset losses (usually envelope losses) in order to maintain the desired space temperature.  Thus, it will need to deliver air that is warmer than the targeted space condition.

In contrast, a coil that is doing reheat is delivering air that is cooler than the space condition but warmer than the air that is coming from a central system serving multiple zones.  The reason for doing this is that the central system leaving air temperature was likely set based on a design day dehumidification requirement.  Then the flow rates to the zones were set based on the zone sensible load and the design day coil leaving air temperature.  

Because of the design process I just described, given a mix of zones, it is possible that an interior zone, say a server room, with a very constant load condition, will require the design day flow rate and temperature under all operating conditions.  In contrast, a perimeter zone likely will not, because the transmission and solar loads will change from hour to hour, day to day, and season to season.  Thus, the design day flow rate and temperature will tend to over-cool it much of the time.

For the perimeter zone, this could be mitigated up to a point by reducing the flow rate.  But there can come a point where the flow rate has been reduced to the minimum required for ventilation, and delivering air at that rate and at the design day supply temperature (which cannot be raised because the server room still needs it) will over-cool the zone.  Thus, reheat becomes necessary if we want to keep the zone clean, safe, comfortable, and productive, which are the basic goals of an HVAC process.

So, the reheat coil warms the air up slightly.  But since there is still a need for some cooling, the air is still delivered to the zone below the zone temperature.  In the limit, the highest temperature the reheat coil would need to provide, under conditions where there were no energy losses from the space, would be the space design temperature, which maintains the ventilation requirement without over- or under-cooling the space.
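The heating-vs-reheat distinction can be written down with the standard air-side sensible heat relationship, Q = 1.08 × cfm × ΔT, which holds for air near standard conditions.  The load and airflow numbers below are made-up illustrative values chosen to land near the 94-95°F design day example; a positive zone load (net heat loss) drives the supply temperature above the zone temperature, while a net gain keeps it below.

```python
# Heating vs. reheat via the standard air-side sensible heat equation,
# Q = 1.08 x cfm x (Tsupply - Tzone), valid for air near standard
# conditions.  The 8,000 Btu/hr loss and 300 cfm are illustrative
# assumptions, not data from a specific zone.

def required_supply_temp_f(zone_load_btuh, cfm, zone_temp_f=70):
    # zone_load_btuh > 0: zone losing heat (heating); < 0: net gain (reheat)
    return zone_temp_f + zone_load_btuh / (1.08 * cfm)

print(round(required_supply_temp_f(8000, 300), 1))   # design day: well above 70F
print(round(required_supply_temp_f(-2000, 300), 1))  # mild day: below 70F, i.e. reheat
```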

Real World Coil Performance and Performance Requirements

It turns out that a coil that is selected for the design heating condition using, for example, 180°F water can provide reheat with much cooler water.  The “dots connected” for me about the difference between reheat and heating one day early in my career.  Joe Cook (the lead operator at the facility I was working in at the time) then proved it by lowering the water temperature on the system until he got a cold call.

In other words, Joe “asked the building” and I attribute my belief in that process (note the words in the banner of the blog) to this event and Joe.  Tom Stewart and I eventually wrote a paper about it for ACEEE, which you can find here if you are interested.

You can also demonstrate this by modeling a coil, locking down the physical characteristics like the fin spacing, circuiting, face area, etc. and then playing with the entering water temperature and flow rate to see what happens.   Here is an example I developed using Greenheck’s free coil selection program.

Modeling a Coil on the Design Heating Day

I first modeled the coil to serve the heating load in a perimeter zone, which required 94-95°F air on the design heating day.  Here are the coil’s physical characteristics …

image_thumb33

… and here is the performance on the design day supplied with 180°F water and taking a 20°F temperature drop on the water side to match the heat exchanger selection I have been using as an example in this post.  The entering air condition is 53°F, the design day cooling coil discharge temperature that is required by a server room on the same air handling system, even though it is the design heating day.

image_thumb37

Modeling the Same Coil on a Day When Only Reheat Is Required

Here is the performance achieved with that same coil if I reduce the entering water temperature to 110°F and take a 20°F waterside temperature drop with 53°F entering air.

image_thumb39

Note that I am able to deliver 67.4°F air and only use 1.9 gpm to do it (35% of the design flow rate).  If I were to maintain the design flow rate of 5.5 gpm, I could deliver near neutral air.

image_thumb42

Heat Exchanger Performance at a Reduced Leaving Water Temperature and a Lower Flow Rate

If we look at how the heat exchanger I have been using in this example would perform if I reduced the water side flow rate by 50%[ii] and lowered the set point from 180°F to 110°F, it turns out that the condensate coming off of it would be at 141.4°F.  Here is what that looks like if you plot the process out on the p-h diagram.

image_thumb44

Here is that same diagram at a smaller scale and cropped to focus on the condensate condition (left image) next to the design day process (right image) so you can compare them.

image_thumb56

Notice how the condensate leaving the lower temperature heat exchanger process has an enthalpy of 109 Btu/lb compared to 181 Btu/lb for the design day process.  Thus, operating at a lower temperature allows us to recover more of the available energy from the steam that was delivered.

More specifically, by operating at a 110°F supply water temperature, we now recover 1,084 Btu/lb from the steam vs.  the 1,012 Btu/lb that we recovered operating at a 180°F supply water temperature set point. That’s a 6% improvement in making beneficial use of the 1,194 Btu/lb that the ENERGYSTAR® conversion factor would attribute to a district steam system where the condensate was dumped to sewer.
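As a sanity check, the arithmetic behind that 6% figure can be sketched in a few lines (the enthalpy values are the ones read from the p-h diagrams above):

```python
# Energy recovered per pound of steam, read from the p-h diagrams above
h_steam = 1194.0        # Btu/lb attributed to delivered steam by the ENERGYSTAR(R) factor
recovered_180 = 1012.0  # Btu/lb recovered at a 180 F supply water set point
recovered_110 = 1084.0  # Btu/lb recovered at a 110 F supply water set point

# Improvement expressed as a fraction of the total energy attributed to the steam
improvement = (recovered_110 - recovered_180) / h_steam
print(f"Additional recovery: {improvement:.1%} of the delivered steam energy")  # → 6.0%
```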

But Wait, There’s More!

There would also be savings due to lower parasitic losses in the piping network.  In other words, even with insulation meeting code requirements for piping operating at 180°F, there are still losses. 

You can get a sense of this by using 3EPlus, a free application from the North American Insulation Manufacturers Association.  Here are screen shots comparing a 4 inch line operating at 180°F with the code-required 2 inches of insulation in a 75°F ambient temperature to that same line operating at 110°F.

image_thumb58

The lower water temperature results in a 70% reduction in losses.  And while the Btu/hr/ft values are small, this is a situation where a little times a lot results in a big number.  In other words, there is an amazing amount of pipe in a typical building system, sometimes several miles.  So if you save 10-15 Btu/hr/ft over thousands of feet of length, it can add up.
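To put a number on "a little times a lot," here is a quick sketch.  The savings per foot is in the range cited above; the pipe length is a hypothetical figure I picked for illustration, not a measured value from any particular building:

```python
# A little times a lot: hypothetical numbers to illustrate scale.
savings_per_ft = 12.5   # Btu/hr/ft reduction in parasitic loss (in the 10-15 range above)
pipe_length_ft = 5_000  # assumed feet of distribution piping (hypothetical)
hours_per_year = 8_760  # continuous operation

annual_savings_btu = savings_per_ft * pipe_length_ft * hours_per_year
print(f"{annual_savings_btu / 1e6:.1f} MMBtu per year")  # → 547.5 MMBtu per year
```

Even with modest per-foot numbers, the annual total is the kind of quantity a utility bill would notice.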

Reset Schedule Bottom Lines

The bottom line is that implementing a reset schedule that adjusts the supply hot water temperature based on the outdoor air temperature will save resources for a number of reasons.

  1. More of the available energy that was delivered as steam is recovered before the condensate is discharged to the sewer.
  2. The parasitic losses associated with the distribution system are reduced.
  3. Because of items 1 and 2, the pounds of steam consumed will be reduced, improving the building’s benchmark.
  4. If the piping ran through places that contain conditioned air, like a ceiling return plenum, then the reduction in parasitic losses will also represent a reduction in cooling load.
  5. Because the building is using fewer pounds of steam, it will use fewer pounds of water, another important resource that we need to do our best to conserve.

All of this can be accomplished for a modest investment because in most situations, all that is required is a minor modification of the control system to add the reset schedule.  If the control system is a DDC system and was already monitoring outdoor air temperature, the improvement could be captured by making a relatively simple modification to the software.  The images below illustrate what this logic might look like before …

HHW-Logic---Basic_thumb1

… and after modification.

HHW-Logic---Reset_thumb1

Note that the “after” version includes some other enhancements like trending and graphic indication.   The diagrams were developed using an Excel based logic diagram tool that you can download here along with the actual logic diagrams.  If you want to dig in and understand it a bit, you will find an exercise here that uses a virtual EBCx project in a SketchUp model as a mechanism to present the opportunity and develop the logic.

<Return to Contents>

Flash Steam

It is not uncommon for the loads served by a steam system to use steam at a pressure significantly higher than atmospheric pressure.  The distribution systems we have been discussing for district steam systems are one example.  For these networks, because insulation is not perfect, energy is lost from the piping and some of the distributed steam condenses.  Condensation loads are even higher at start-up, when the piping is cold.

It is critical that this condensed steam be removed from the piping system to avoid significant operating problems and even catastrophic failures.  Towards this end, steam traps are provided at regular intervals and at elevation changes in the distribution system.  These traps are termed “drip traps” and the condensate coming off of them will be saturated liquid at the saturation temperature associated with the steam in the distribution system.

Steam fired sterilizers in labs and hospitals are another example of a load that must be served at a higher pressure, typically requiring steam at approximately 30 psig (often termed “medium pressure steam” in the industry).  The saturated condensate coming off of these loads is at a temperature above the 212°F saturation temperature associated with atmospheric pressure;  in this case, about 273°F.

As a result, if the condensate was dumped into a return system that is open to atmospheric pressure, some of the condensate will “flash” to steam.   In other words, the 273°F saturated condensate coming off a 30 psig (44.7 psia) process will have a lot more energy than saturated condensate at 212°F.  The temperature difference reflects some of the additional energy content at the higher saturation temperature. 

The enthalpy (total available energy) of the saturated 30 psig condensate is about 243 Btu/lb.   If you reduce the pressure that it experiences to atmospheric pressure, the condensate cannot exist at a saturated state and remain at 273°F;  it has too much energy to do that.

The condensate solves this problem by converting some of its liquid to steam;  exactly enough mass to absorb the excess energy.  You can use a steam table like the one I provided earlier to figure out exactly how much of the liquid will be converted to steam, either by reading the appropriate data directly or by interpolation.

image_thumb2

Or, you can plot the process out on a thermodynamic diagram like a p-h diagram where the process will look just like the throttling process we looked at previously and occur at a constant enthalpy.

image_thumb4

One thing that is more apparent from the p-h diagram plot, at least to me, is that the result of the process is not pure, saturated water vapor.  Rather, it is a mix of saturated liquid and saturated vapor, a.k.a wet steam.  This is what the thermodynamic term “quality” that I mentioned in the first post in the series is about.  

Note that the “Flashed Steam Condition” is at about the 6.4% quality point (the constant quality lines are the curved, dashed black lines that mirror the saturated liquid and vapor lines). What this is saying is that 6.4% of the mass of the mixture (which carries 242.9 Btu/lb of energy) is in the form of saturated vapor, where a significant portion of the available energy (1,151.1 Btu/lb) could be captured by condensing it, which would provide 970.8 Btu/lb (1,151.1 Btu/lb – 180.3 Btu/lb).  The bulk of the mass is saturated liquid (condensate) where the available energy (180.3 Btu/lb) could be captured by cooling it.
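You can reproduce that quality value from the steam table data directly: the flash fraction is just the excess sensible energy in the high pressure condensate divided by the latent heat at the lower pressure.

```python
# Flash fraction when saturated condensate drops to atmospheric pressure.
# Enthalpies (Btu/lb) are the values read from the steam table and p-h diagram above.
h_f_30psig = 242.9   # sat. liquid enthalpy at 30 psig (44.7 psia)
h_f_atm = 180.3      # sat. liquid enthalpy at atmospheric pressure (212 F)
h_fg_atm = 970.8     # latent heat of vaporization at atmospheric pressure

# Energy that cannot stay in the liquid is absorbed by vaporizing part of the mass
quality = (h_f_30psig - h_f_atm) / h_fg_atm
print(f"Quality of the flashed mixture: {quality:.1%}")  # → 6.4%
```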

Hopefully, in light of the preceding, you can see that if your high temperature condensate is going to end up at atmospheric pressure, then it will “flash”, although perhaps not in the way a non-thermodynamically oriented person would think of the term.

Stop-when-flashing_thumb

(I thought I would insert that as an amusing comic interlude and a reward for anyone who is still actually reading this.)

If you simply dump it into the low pressure return, a lot of problems can occur including condensation induced water hammer (which can be quite destructive), along with poor return system performance in terms of steadily removing condensed steam from the loads and returning it to the collection point.

This problem is addressed by providing flash tanks, which are sized to allow the flashing process to occur without causing problems.  Here are pictures of a couple.

Blow-Down-Flash-Tank_thumb1 AHU5-equipment-room-flash-tank_thumb

Flash-Tank_thumb1

A number of steam system vendors provide very useful information about flash tanks, including Sarco and Armstrong if you want to know more.  

My point here is to say that the 970.8 Btus/lb of energy in the low pressure steam coming off of a flash tank is just as useful as low pressure steam generated in a boiler.  Yet, you frequently find them vented to atmosphere.    This may represent an opportunity.  

One way of capturing the benefit is to vent the flash tank to the low pressure system header.  This will move the “Flash Steam Condition” line on the p-h diagram upward from atmospheric pressure. The lower the header pressure is, the more energy you recover.

<Return to Contents>

A Few Paradoxes

All of the opportunities we explored would extract more energy from a pound of steam relative to the process that occurs in the heat exchanger operating at the design supply water temperature.  As a result, they will reduce the pounds of steam consumed all other things being equal. 

In addition, the lower distribution temperatures associated with the reset schedule will save additional energy.  And using flash tanks to drop the temperature and pressure of medium and high temperature condensate will keep the condensate return system running more smoothly and quietly.

But, if the condensate is being recycled instead of dumped to sewer, the lower condensate return temperatures will mean that the boilers will need to add a bit more energy into the feedwater to get it to the steaming temperature as compared to what would be required if the condensate came back hotter.  So for systems that recycle their condensate, the impact of the lower temperature condensate on the cycle efficiency will be different from what it would be for a system where the condensate is dumped to sewer.

On the other hand, if the piping runs back to the central plant were long, there could be benefit to the lower temperature condensate because the energy would have gone into a useful process instead of being lost to the ambient environment on the way back to the plant. 

In other words, if the 200°F condensate leaving the heat exchanger has cooled to 140°F by the time it gets back to the central plant to be recycled due to the time it spent sitting around in condensate receivers and in long piping runs, then the boilers are going to have to heat it up from 140°F to the steaming temperature anyway.

In contrast, if it was cooled to 140°F to serve a domestic hot water load before being returned to the plant, the parasitic losses in the return system would be reduced and additional energy would have been extracted from the system for a useful purpose.

Extracting as much energy as possible for a useful purpose will improve the overall cycle efficiency and will lower the parasitic losses in the condensate return system since it will be operating at a lower temperature.

<Return to Contents>

Thus far, we have talked about how to maximize the amount of energy extracted from a pound of steam.  In the final post in this series, we will look at how to ensure peak efficiency for your steam system in the long term.

David-Signature1_thumb_thumb_thumb

PowerPoint-Generated-White_thumb2_thDavid Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/

[i]     The ENERGYSTAR® conversion factor implies that you would reduce the enthalpy of the incoming steam to 0 – which is about where the saturated liquid (dark blue) line crosses the enthalpy axis – if you recovered all of the energy represented by a pound of steam.

[ii]    This was an arbitrary selection on my part.  You will recall that the coil I modeled could do quite a bit of reheat with only 35% of its design flow rate and a lower entering water temperature.  And it could deliver near neutral air if supplied with its design flow rate at the lower water temperature.

It would be somewhat unusual for an occupied zone to require neutral air if the building was above the balance point;  basically, that would indicate that there was no load in it and that you were still moving air through it.  Thus, for the sake of discussion, I assumed that a variable flow hot water system serving multiple zones and operating with a reset schedule that lowered the supply temperature as the outdoor air temperature rose would operate at less than design flow and arbitrarily selected 50% of design flow.

Posted in Boilers, Hot Water Systems, and Steam Systems, HVAC Calculations, HVAC Fundamentals, Operations and Maintenance, Steam Systems

What is the Energy Content of a Pound of Condensed Steam? (Part 1)

or, It Depends …

This post started out as an e-mail answering a question from one of the folks taking the Existing Building Commissioning Workshop this year at the Pacific Energy Center.   But as I worked on it, I realized that the question had come up before and that the answer and related concepts might be useful to others. On the surface, it seems like a simple question.  But if you really want to understand it, the answer is fairly complex.  Thus, this blog post.

Contents

This ended up becoming quite a long post (surprise, surprise, surprise).  So, I broke it up into several posts, which are still somewhat long.  To minimize the pain for someone just wanting the bottom line, I have included a table of contents that will allow you to jump to a topic of interest.  The “Return to Contents” link at the end of each section will bring you back here.

Overview

Students participating in the workshop are required to have access to a building that they can use as a living laboratory to apply the EBCx skills we teach in the class.  One of the first things they do is benchmark their building in the LBNL Building Performance Database and ENERGYSTAR®.  To benchmark, you typically need to convert the annual energy consumption of a facility into some sort of index, typically an EUI (Energy Use Intensity or sometimes also called an Energy Utilization Index). 

EUIs can be stated in terms of site or source energy.  If you want to know more about the difference, this blog post will provide the details.  In the discussion that follows, I will be considering things in terms of site energy.

EUIs typically have engineering units in the form of energy use per unit area per year, such as kBtu/sq.ft. per year (kilo or thousands of British Thermal Units per square foot per year).   Energy is not always billed directly as Btus.  For instance, electricity is billed in terms of kWh or kilowatt-hours consumed.  District steam is often billed as pounds of steam consumed.  To create an EUI from the billing metrics, you need to convert the billing units to Btus.

In the industry, most people are pretty familiar with the conversion factor for kWh to Btus, which is 3,413 Btus per kWh and pretty invariable.   But there is less familiarity with how to convert a pound of steam to Btus, and there can be some variability related to exactly how the thermal energy is billed (kBtus, pounds of steam, thousands of pounds of steam, etc.) and the nature of the steam source (district steam, central plant, or boilers on site).  Bottom line, if you want an exact value, it can become more complex than the single factor used to make the electrical conversion.

<Return to Contents>

The Question

As you may have guessed by now, the question I was asked was how to go about converting pounds of steam to Btus.   The answer is:

It depends ….

One of our students has a facility that purchases steam from a district steam system[i] and their bill states consumption in the form of Mlbs.  For example,

Total usage invoiced in Mlbs –  301.3

Note the letter “M” which means the unit of measure is not simply pounds, it is some multiple of pounds.

So the first part of answering the question is to determine what the “M” stands for, because to correctly answer the question,

It depends on the units of measure.

Most of us (probably because of computers) would take the M to be the SI (System International; often referred to as metric) prefix denoting a factor of one million (1,000,000) as in the MBytes or MB associated with a file or hard drive size.  Thus we might conclude the bill is stating that the facility was being invoiced for 301.3 x 1,000,000 = 301,300,000 pounds of steam.

Unfortunately, that turned out not to be true in this case.

<Return to Contents>

Confusing Units

It turns out that there is another system of units that uses “M” for a multiplier;  the Roman Numeral System, where “M” is used to indicate thousands (1,000), not millions (1,000,000).  And to make things interesting, the industry uses both systems and (to me at least), seems to figure you will simply know which one applies. 

If you have been in the industry for a while, that is probably true.  But if you are new to it all (or suffer from aging brain cells like I seem to), then it can be confusing.  

For example, we have control systems that are moving and storing MB or megabytes of data (where mega is the SI prefix for millions, so millions of bytes).  These systems can be monitoring and managing air handling systems that are moving cfm of air (where the “c” stands for “cubic”, not the SI prefix “centi” or hundredths, nor does it mean hundred, which is what it would stand for if it was a capital letter in the Roman Numeral system).

The air is often being cooled using electricity, which is often billed as kWh (where the “k” means the metric prefix “kilo” or thousands of watt-hours), and heated, perhaps, with steam generated by a boiler that might be rated in terms of MBtu (where the M is the Roman Numeral M and means thousands of Btu), or MMBtu (still the Roman Numeral M, but two of them, meaning thousand thousand, or millions of Btu).

If the boiler is fired using natural gas, then the gas might be billed in terms of MCF (thousands of cubic feet, where the M stands for the Roman Numeral, but the C stands for cubic, not the Roman Numeral for 100, and F stands for feet), or in terms of therms (where a therm stands for 100,000 Btus).

Or the consumption could be billed in terms of Dth (which combines therm with the metric prefix “Deka” or 10 to stand for 10 therms), which is approximately the same amount of energy as an MCF of natural gas (see above) depending on the exact heat content of the gas, which varies with the source of the gas.

Other than nuances like that, we have a pretty straight-forward system of units in the industry. So there should be little confusion about what things mean.
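One way to keep the menagerie straight is to tabulate the Btu content of each billing unit.  The sketch below reflects the definitions discussed above; the MCF figure assumes a typical gas heat content of about 1,030 Btu per cubic foot, which, as noted, varies with the source of the gas.

```python
# Approximate Btus represented by one of each common billing unit
BTU_PER_UNIT = {
    "kWh":   3_413,      # electricity
    "therm": 100_000,    # natural gas
    "Dth":   1_000_000,  # dekatherm = 10 therms
    "MCF":   1_030_000,  # thousand cubic feet of gas at ~1,030 Btu/cu ft (varies)
    "MBtu":  1_000,      # Roman numeral M = thousand
    "MMBtu": 1_000_000,  # MM = thousand thousand = million
}

def to_btu(quantity, unit):
    """Convert a billed quantity to Btus."""
    return quantity * BTU_PER_UNIT[unit]

# 10 therms is the same energy as 1 Dth, and roughly 1 MCF of gas
print(to_btu(10, "therm"))  # → 1000000
```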

<Return to Contents>

Asking the Source

The student who asked the question went to the source (the utility representative) for clarification on the units on the bill.  And in this case, they were told that the M (Roman numeral) actually stands for k (SI prefix), meaning that their bill was for thousands of pounds of steam.

So it seems that all that is needed now is to figure out how many Btus are released when you condense a pound (or a thousand pounds) of steam.  Frequently, that is done by making an assumption about the amount of energy associated with the phase change.  But if you want a more exact answer, it is a bit more complex than a single number.

It is also an interesting (in a nerdy sort of way) saturated system physics exercise.  So I thought it would be worth looking at both techniques.

<Return to Contents>

Using a Simplifying Assumption

There is nothing at all wrong with using a simplifying assumption.  Being math-phobic and often pressed for time in terms of coming up with an answer, I do it all of the time. But if you do it, I think it is important to recognize the constraints that your assumption placed on the result so you don’t take yourself too seriously if the discussion becomes more precise.  And you need to understand if the assumption can actually be used in the context of a given discussion.

In this case, our simplifying assumption might be based on the fact that most condensate return systems are open to atmospheric pressure at some point, usually at the condensate receiver.  So, we could look at the amount of energy released if we were to condense 1 pound of steam at atmospheric pressure.

You can find this value in a steam table.   Steam tables contain empirically derived values for the various properties of water under different conditions of temperature and pressure.   You can find them in classic publications like Keenan and Keyes online, in the ASHRAE handbooks, or you can even build one yourself as a learning exercise using REFPROP, like I did to create the table below.

Steam-Table_thumb4

Note that the pressures in the second column are in absolute pressure units, not the gauge pressure units we are probably more accustomed to.  In other words, the pressures are referenced to a pure vacuum, 0 psia.   So atmospheric pressure is 14.71 psia or 0 psig.

The value we are interested in is the latent heat of vaporization at atmospheric pressure (highlighted in orange above) which is the difference between the enthalpy of the water vapor (steam) and the enthalpy of the liquid water at the condition we are interested in.  In this case, the value is 970.8 Btu/lb.

To estimate the amount of energy associated with a bill for 301.3 thousand pounds of steam based on the assumption that the steam was condensed at atmospheric pressure, we could do a bit of simple math, like this.

image_thumb101

If we needed to convert this to millions of Btu, we would just divide the result by 1,000,000, like this.

image_thumb8

We could even create a multiplier that we could directly apply to future bills to give us the answer.

image_thumb18
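The math in the images above can be wrapped up in a few lines of code, which also makes the assumption behind the multiplier explicit:

```python
# Convert a steam bill stated in Mlb (thousands of pounds) to MMBtu,
# assuming the steam is condensed at atmospheric pressure.
LATENT_HEAT_ATM = 970.8  # Btu/lb, latent heat of vaporization at 0 psig
LB_PER_MLB = 1_000       # here, "M" is the Roman numeral thousand

def steam_bill_to_mmbtu(mlb_billed):
    """Pounds condensed x latent heat, scaled to millions of Btu."""
    return mlb_billed * LB_PER_MLB * LATENT_HEAT_ATM / 1_000_000

# The 301.3 Mlb bill discussed above
print(f"{steam_bill_to_mmbtu(301.3):.1f} MMBtu")  # → 292.5 MMBtu
```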

In fact, the student who inspired this post was planning on using this multiplier.  All I have done up to this point is illustrate where it came from and that there is an assumption behind it. 

How much does that assumption impact the accuracy of the EUI and benchmark?  Well,

It depends on the magnitude of the difference between the assumed value for the enthalpy change that occurs when the steam is condensed relative to the actual value of the enthalpy change produced by the thermodynamic processes used to extract energy from the steam at the facility.

It also depends on what you do with the condensate.

<Return to Contents>

Seeking A More Exact Solution

Truth be told, in the olden days, folks (such as myself) would assume that condensing a pound of steam was worth about 1,000 Btus.  It made the math easier if you were using a slide rule or four function calculator.  And, if you contemplate the steam table above, you can see that it probably meant we were accurate to within 10% or better over a pretty broad range of conditions.

But, if you consider what is really going on in the context of the data in the steam table, you realize that assuming the latent heat of vaporization is 970.8 Btu/lb or 1,000 Btu/lb could be wrong because:

It depends on the saturation temperature that the steam condenses at.

For instance, most steam systems deliver the steam to the loads they serve at a pressure that is above atmospheric pressure;  pressures of 3-15 psig are common.  For district steam systems, the delivery pressure can be significantly higher, perhaps as high as 60-150 psig or more, which is subsequently reduced to the 3-15 psig range at the end use facility.

If you look at the Tariff that defines the rate structure and nature of the service for the utility supplying steam to the facility in question, you find that there are two potential delivery pressure ranges available from their distribution network, 5-10 psig and 20-120 psig, and that the company reserves the right to adjust the delivery pressure.

image_thumb21

Note that I have assumed the pressures are gauge pressures vs. absolute pressures. 

And, the term “quality” as used in the tariff is probably not the thermodynamic use of the term given the reference to chemical constituents.  In other words, in a pure thermodynamic sense, the “quality” of saturated steam is a measure of its wetness; i.e. how much of the steam is pure vapor and how much of it is water that has yet to change phase. More on this to follow.

It is also worth noting that some utilities will deliver the steam in a superheated state, not a saturated state.  All of these things have an impact on the energy content of the steam.

<Return to Contents>

Energy and Phase Changes;  Understanding the Process

If you perform the experiment I describe in this blog post, you will discover that it takes a whole lot more energy to change the state of water from a liquid to a vapor relative to what it takes to heat the liquid or vapor.  Here is an image from that blog post depicting the results of the experiment.  The paragraphs that follow describe the results.

image_thumb111

The red line in the picture is the temperature of the water in the tea kettle.  The green dashed line and blue solid line are the temperature of the space above the water.[ii]  Initially, this space is filled with a mix of air and water vapor.   But once boiling starts, with the lid on the kettle, all of the air will be driven out and it will fill with steam.

Heating the Water

If you observe what happens, when I turn on the heat (the purple line is the watts into the burner on the stove), the temperature of the water and the water vapor mix both start to rise.  Since the liquid water is at atmospheric pressure but below the boiling temperature (a.k.a. the saturation temperature) we say that it is subcooled.   During this phase of the experiment the burner was supplying 1 Btu to raise the temperature of one pound of water 1°F.

When the water temperature reaches 212°F, the water begins to boil, which creates steam, filling the area above the water with pure steam, and creating a saturated system where the temperature of both the water and the steam are the same (notice how the green and red lines converge). 

<Return to Contents>

Heating the Mixture of Water and Steam

Now, even though the burner is applying a steady amount of energy, the temperature of the water/steam mix holds constant.  That is because the energy from the burner is now being used to change the liquid water to steam (a.k.a. a phase change) and during a phase change the temperature remains constant at the saturation temperature. During this time, the  burner was supplying 970.8 Btus for every pound of water that was converted to steam.

When the last drop of water changed to steam, the burner was still supplying energy at a steady rate.  But since the mass of the steam contained inside the teapot at that point was quite low compared to the mass of water that was there when we started (most of that mass was now outside the teakettle condensing on the windows in the kitchen),  there was a lot of energy being supplied to a very small mass.  

<Return to Contents>

Heating the Steam

At this point, the phase change is complete so all of the energy from the burner is applied to changing the temperature of the steam inside the pot.  Since it only takes about 0.5 Btus to raise the temperature of a pound of steam 1°F at atmospheric pressure (and there was much, much less than a pound of steam contained in the pot) then the temperature spikes rapidly.  This elevation in temperature above  the saturation temperature is called superheat.
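The three regimes in the experiment can be summarized numerically.  Here is a sketch of the energy needed to take one pound of water from room temperature to slightly superheated steam at atmospheric pressure, using the approximate specific heats mentioned above (the 70°F starting point and 250°F ending point are arbitrary choices for illustration):

```python
# Energy to take 1 lb of water from 70 F to 250 F steam at atmospheric pressure
CP_WATER = 1.0  # Btu/(lb F), subcooled liquid
CP_STEAM = 0.5  # Btu/(lb F), approximate, for low pressure steam
H_FG = 970.8    # Btu/lb, latent heat of vaporization at 212 F
T_SAT = 212.0   # F, saturation temperature at atmospheric pressure

sensible_liquid = CP_WATER * (T_SAT - 70.0)  # heat the subcooled water
latent = H_FG                                # change the phase
superheat = CP_STEAM * (250.0 - T_SAT)       # superheat the steam

total = sensible_liquid + latent + superheat
print(f"{sensible_liquid:.0f} + {latent:.1f} + {superheat:.0f} = {total:.1f} Btu/lb")
```

Notice how the phase change dwarfs the two sensible heating steps, which is exactly what the flat portion of the temperature trend in the experiment was showing.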

<Return to Contents>

A Few New Terms

If you are new to thermodynamics, some of the terms that you observed in the steam table can be a little scary sounding.  After all, how many dinner conversations (with normal people) have you had where the words “enthalpy” and “entropy” were bandied about?

We are accustomed to concepts like temperature and pressure because we apply them directly in our day to day lives.  A weather forecaster may talk about a high pressure system moving into our area or that we can expect lower temperatures and humidity after a cold front moves through.   Or the recipe we select to prepare for dinner likely specifies a temperature that we should cook the food at, perhaps suggesting that we bring a pot of water to boil in preparation for making some pasta.

But in the course of day to day conversation, we seldom discuss enthalpy or entropy, even though those properties are also changing as we go about our daily lives.  For instance, the weather forecaster could have said that the enthalpy of the air is going to drop after the cold front passes.  And the recipe could have suggested that we increase the enthalpy of a pot of water until it reached saturation and then continue to add energy so that the water changes phase.

The point is that enthalpy, while an unfamiliar term in day to day life, is a property used to measure the total available energy in a substance at a given condition.   So, if we know the enthalpy change that a substance goes through in a given process, we know the energy change.[iii]  

Enthalpy is challenging to measure directly.  But since it is related to things that we can more readily measure, like temperature and pressure and moisture, some very dedicated individuals have been able to experimentally determine enthalpies for various substances and develop relationships that allow us to predict enthalpy based on other measurements and coefficients that are developed via the experiments. The thermodynamic diagrams that follow are simply graphical representations of these results.

<Return to Contents>

Enthalpy Depends on Temperature and Pressure

If you study the steam table I inserted previously,  you will discover that the latent heat of vaporization – i.e. the energy it takes to convert a pound of water to a pound of water vapor (a.k.a. steam) – varies as a function of the saturation temperature and pressure.  Stated another way, the enthalpy change associated with a phase change will vary with the temperature and pressure that the phase change occurs at.

For example, if the pressure is about 60 psig (or about 75 psia), then the latent heat of vaporization is more like 905 Btu/lb vs. the 970.8 Btu/lb we have discussed for water at atmospheric pressure.  Similar considerations apply for sub-atmospheric pressures.  And, as our experiment revealed, the amount of heat associated with changing the temperature of a subcooled liquid or a superheated vapor is different from the phase change value and will also vary a bit with temperature and pressure.

The steam table above is focused on water at saturation.   There are other tables that document the properties for water that is superheated or subcooled.
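When the pressure you care about falls between the rows of a steam table, linear interpolation is usually close enough for this kind of assessment.  Here is a minimal sketch using the two saturation points mentioned above as anchors; a real calculation would use closely spaced rows from a full table, so treat this two-point version as illustrative only:

```python
# Linear interpolation of latent heat of vaporization (h_fg) vs. absolute pressure.
# Anchor points are the two conditions discussed above; a real steam table
# would supply many more, closely spaced rows.
TABLE = [
    (14.7, 970.8),  # psia, Btu/lb  (atmospheric pressure)
    (75.0, 905.0),  # psia, Btu/lb  (about 60 psig)
]

def h_fg(psia):
    """Estimate latent heat at a pressure between the two anchor points."""
    (p1, h1), (p2, h2) = TABLE
    return h1 + (h2 - h1) * (psia - p1) / (p2 - p1)

print(f"h_fg at 30 psia is roughly {h_fg(30.0):.0f} Btu/lb")
```

The key takeaway is the trend: as the saturation pressure rises, the latent heat available from condensing a pound of steam drops.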

<Return to Contents>

Thermodynamic Diagrams

All of this can be quite complex to wrap your head around.  But a picture can be worth a thousand words, and in the context of our discussion, a thermodynamic diagram can be worth a thousand words.   Using one, you can plot a process and read all of the thermodynamic properties of water (or other substances) directly from the diagram.  And the process plot gives you a “visual” on what is going on.  

Psychrometric charts are a form of thermodynamic diagram that HVAC engineers use to assess an HVAC process. 

image_thumb11

Skew T log P diagrams are used by meteorologists to understand the atmosphere.

image_thumb51

To understand what happens to a substance as it goes through a process, encountering various conditions and states, we can use pressure-enthalpy (p-h) diagrams (what follows uses water as an example) …

image_thumb71

… temperature entropy (t-s) diagrams …

image_thumb131

… and enthalpy-entropy (h-s) diagrams (a.k.a Mollier diagrams)  ….

image_thumb14

These diagrams are extremely intimidating. 

But if you can stay calm and continue to breathe normally, they can be quite useful because if you can plot a process on them, you can read all of the properties for the various states directly from the chart. When you compare it to the other options, like playing with the equations of state, which can look like this …

Equations-of-State-for-Air_thumb1

…   or working through multiple tables like the one pictured below and interpolating values …

Keenan-and-Keyes-Table_thumb1

… they can become quite attractive and you may find yourself inspired to learn how to use them.
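If you do end up working with the tables, a bit of code can take the tedium out of the interpolation step. The sketch below is a minimal example; the two table rows are approximate saturated steam table values (pressure in psia, latent heat hfg in Btu/lb), and the 75 psia target corresponds roughly to the 60 psig condition mentioned earlier in this post.

```python
# Linear interpolation between two saturated steam table rows.
# The table values below are approximate and for illustration only.

def interpolate(x, x0, y0, x1, y1):
    """Linearly interpolate y at x between table rows (x0, y0) and (x1, y1)."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Approximate latent heat of vaporization (hfg, Btu/lb) vs. absolute pressure (psia)
p0, hfg0 = 60.0, 915.5
p1, hfg1 = 80.0, 901.1

# Estimate hfg at 75 psia (roughly the 60 psig condition discussed earlier)
hfg_75 = interpolate(75.0, p0, hfg0, p1, hfg1)
print(f"Estimated hfg at 75 psia: {hfg_75:.1f} Btu/lb")  # about 905
```

Saturation properties are not perfectly linear in pressure, so the closer together the two table rows bracketing your target are, the better the estimate.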

<Return to Contents>

The Spreadsheet Behind the Diagrams

If you are really curious about the diagrams above, you can find the spreadsheet behind them at this link.  Personally, I learned a lot by developing them.  And now that I have them, I can plot processes on them pretty precisely, which lends itself to using a graphical solution to solve and visualize complex thermodynamic processes.

<Return to Contents>

Focusing on p-h Diagrams

P-h diagrams are a very common way to look at thermodynamic processes like refrigeration cycles.

image_thumb16

They can give you a “visual” on a complex process and make it less intimidating for math-phobic folks like me.  If you want an example of how useful a diagram like the one above is, take a look at this engineering application guide from Sporlan.  

I don’t want to get too far afield here, but the point is that diagrams like these can make the analysis of cycles much easier to accomplish once you learn to work with them.   There was a point in my career when I was somewhat terrified of a psych chart.  But now, it is my “go to” tool for understanding air handling system processes. Similarly, I use the various thermodynamic diagrams illustrated above to help me understand different HVAC and building system processes.

<Return to Contents>

Applying the p-h Diagram For Water and Steam

To gain a deeper understanding of the amount of heat represented by a condensed pound of steam, I’m going to plot out a pressure reducing process on a p-h diagram.   I could plot it on any of the diagrams, but I chose the p-h diagram because we want to demonstrate what happens as steam is throttled to reduce its pressure, and a throttling process can be considered a constant enthalpy process.  So, the two things we are going to work with are represented by the primary axes of the chart.

Let’s look at what happens if the utility serving the facility we are considering is delivering saturated steam to it from their high pressure system at 120 psig.  And let’s assume:

  • The facility uses a pressure reducing valve to drop the pressure to 12 psig to serve an insulated pipe header that delivers the lower pressure steam to a heat exchanger, and
  • That the heat exchanger condenses the steam to make 180°F hot water, which is then distributed to the various loads in the facility, and
  • That the pressure reducing valve, heat exchanger, and its control valve are all in close proximity to each other so that there is no meaningful pressure drop between the pressure reducing valve and control valve nor is there any meaningful heat loss through the insulation between those points, and
  • That the design supply water temperature to the loads is 180°F and that the heat exchanger was selected for a 20°F temperature rise on the water side using saturated steam at atmospheric pressure (0 psig, 14.7 psia), and
  • As a result, the condensate leaving the heat exchanger is at 212°F, and
  • That the condensate is discharged to a system that is vented to atmospheric pressure.

The process is plotted out on the p-h diagram below.

image_thumb1011

Plotting the Initial Condition

The initial condition is on the saturation line at the delivery pressure of 120 psig or 134.7 psia.  Knowing that the steam is saturated (red saturated vapor curve) at a specific pressure (value on the vertical axis) allows us to plot the entering condition on the chart, and we can read the enthalpy of 1,193 Btu/lb at this condition from the p-h diagram.

Plotting the Condition Entering the Control Valve

The condition entering the control valve represents the result of the throttling process associated with the pressure reducing valve.   Throttling processes are constant enthalpy processes, so knowing that, along with the leaving pressure that the pressure reducing valve is controlling for (12 psig, 26.7 psia), we can plot this point on our chart.

Note that we assumed there was no meaningful pressure drop or heat loss in the piping header due to its short length.   Had there been a meaningful pressure drop and thermal loss in the piping system, that would have shifted the entering control valve point down and to the left slightly from where we plotted it.  

Plotting the Condition Entering the Heat Exchanger

The condition entering the heat exchanger represents the result of the throttling process associated with the control valve, which was selected based on an entering steam pressure of 12 psig and a pressure in the heat exchanger of 0 psig.   This results in an initial condition in the heat exchanger that is at the same enthalpy as the control valve entering condition (because throttling processes occur at constant enthalpy) but at the pressure used to select the heat exchanger (0 psig, 14.7 psia).  Thus, we can plot this point on the chart based on these two parameters. 

Note that the steam entering the heat exchanger is superheated as a result of the two throttling processes in the delivery chain.  As a result, it has a bit more energy content than it would if it was saturated steam at atmospheric pressure.

Plotting the Leaving Condition

Because the heat exchanger was selected to deliver the design performance requirement using steam at atmospheric pressure, the condensate coming off of the process will be at atmospheric pressure and 212°F, the saturation temperature associated with atmospheric pressure.  This is also the condition in the condensate return main.  As a result, we can plot this point on the chart, which allows us to read the enthalpy of the  condensed steam leaving the process.

<Return to Contents>

Enthalpy Change = Energy Change

If we know the enthalpy change between two conditions, then we know the energy change.   In this case, the change in enthalpy was from 1,193 Btu/lb to 181 Btu/lb, or 1,012 Btu/lb. 
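As a sanity check, the arithmetic behind that number is simple enough to script. The enthalpy values in the sketch below are the ones read from the p-h diagram above, compared against the 970.8 Btu/lb latent heat figure for water at atmospheric pressure.

```python
# Energy released per pound of steam for the process traced on the p-h diagram.
# Enthalpy values (Btu/lb) are read from the diagram / saturated steam tables.

h_steam_in = 1193.0       # saturated vapor at 120 psig (134.7 psia)
h_condensate_out = 181.0  # saturated liquid at 212 F (0 psig)

delta_h = h_steam_in - h_condensate_out
print(f"Energy change per pound of steam: {delta_h:.0f} Btu/lb")  # 1012

# Compare with the latent-heat-at-atmospheric-pressure rule of thumb
rule_of_thumb = 970.8
extra = delta_h - rule_of_thumb
print(f"Extra relative to the rule of thumb: {extra:.0f} Btu/lb "
      f"({extra / rule_of_thumb:.1%})")  # about 41 Btu/lb, or about 4%
```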

Good News and Bad News

Taking a closer look at the specifics of the process revealed that for every pound of steam that was condensed in this scenario, we received about 41 more Btus than our rule of thumb would have suggested, or about 4% more.  In the context of the Btus received for your dollar, that sounds like a good thing.  In other words, the pounds of steam you purchased delivered more Btus than the rule of thumb suggested.

But in the context of a benchmark, it means that you actually used more energy than the rule of thumb suggested.  Thus, in this case, if we were to calculate an EUI based on our more specific assessment of how the steam was actually used in the facility, the EUI will be higher and the benchmark score will be lower.

<Return to Contents>

ENERGYSTAR®, Conversion Factors, and Rules of Thumb

In an effort to try to create consistency, ENERGYSTAR® publishes conversion factors for various energy sources including district steam.

image_thumb5

If I understand it correctly (I don’t actually do a lot of ENERGYSTAR® benchmarks), when you are entering your data into ENERGYSTAR®, an “Add Meter Wizard” will guide you to the 1,194 Btu/lb conversion factor for a meter that was reporting KLbs (thousands of pounds) of steam. 

As you can see, this would result in a consumption value that is higher than the rule of thumb we developed based on an assumption of condensing steam at atmospheric pressure (1,194 vs. 970.8 Btu/lb) as well as the rule of thumb sometimes used by old engineers like myself (1,194 vs. 1,000 Btu/lb).  

It is also higher than reality for the situation we explored in the p-h diagram (1,194 vs. 1,012 Btu/lb).  So if you were to benchmark in ENERGYSTAR® using their metrics, the conversion factor would overstate the energy use of your facility if the steam delivery followed the process we traced out.

That means your EUI would be higher and your benchmark score would be lower than they would be if you could enter your actual energy use into the ENERGYSTAR® database in terms of the Btus released by the condensed steam rather than the thousands of pounds of steam you used. 
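To put the different Btu/lb figures discussed above side by side, here is a small comparison sketch; the 1,012 Btu/lb “actual” figure is the one from the p-h diagram example, and applies only to that particular delivery arrangement.

```python
# Comparing the conversion factors discussed above with the 1,012 Btu/lb
# figure derived from the p-h diagram example for this particular facility.

actual = 1012.0  # Btu/lb, from the p-h diagram example

factors = {
    "ENERGYSTAR district steam factor": 1194.0,
    "Old engineer's rule of thumb": 1000.0,
    "Latent heat at atmospheric pressure": 970.8,
}

for name, value in factors.items():
    pct = (value - actual) / actual
    print(f"{name}: {value:6.1f} Btu/lb ({pct:+.1%} vs. actual)")
```

For this facility, the ENERGYSTAR® factor overstates the delivered energy by about 18 percent, while the two rules of thumb slightly understate it.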

<Return to Contents>

Benchmarks are Approximations, not Exactamates[iv]

The preceding may make you want to cry “Foul”.  After all, you are trying to do a good job in terms of running your facility efficiently, and it seems unfair to have your score penalized by an arbitrary conversion factor.

But you need to remember that benchmarks are intended to provide a broad-brush comparison of similar facilities in similar climates serving similar occupancies with similar use patterns.  There are a lot of variables at play.  For example, the heat content of gas and other fuels will vary with the source and ENERGYSTAR® applies arbitrary conversion factors to them just like it does to district steam.

The endnotes in the referenced ENERGYSTAR® conversion factors document indicate the source for the conversion factors, with the International District Energy Association being the source for the district steam energy conversion factor.

<Return to Contents>

Why so High?

If you study the steam table, you may find yourself wondering why the International District Energy Association recommended a conversion factor of 1,194 Btu/lb.  After all, that is higher than the latent heat of vaporization at any point in the saturated steam table.

That is because there is more than the latent heat of vaporization to be recovered.   For instance, in the example I plotted out on the p-h diagram, the condensate left the process at 212°F.  There are quite a few things that you could do with a stream of water at that temperature.   For example, you could run it through a heat exchanger to recover sensible energy and preheat or even heat domestic hot water.

So, in a way, the answer to a modified version of the original question, perhaps along the lines of …

How can I go about capturing the energy that the  ENERGYSTAR® conversion factor for district steam metered as pounds of steam implies is available?

is …

It depends on what you do with the steam and condensate you receive from the utility.

<Return to Contents>

The Basis of the ENERGYSTAR® Conversion Factor

If you dig around a bit, you can discover the basis behind the ENERGYSTAR® conversion factor.  I found it in a footnote in a technical reference they provide about Greenhouse Gas Emissions.

image_thumb1311

What that is saying is that the ENERGYSTAR® conversion factor is equal to the enthalpy of saturated steam at 150 psig.   It is important to realize that this is different from saying it is equal to the latent heat of vaporization of 150 psig steam, which is the enthalpy change associated with condensing saturated vapor to saturated liquid, or about 858 Btu/lb.

In our field, we are typically interested in changes in enthalpy through a process rather than the specific enthalpy at a given state.  And, because enthalpy cannot be measured directly, we state the values of enthalpy for a substance relative to a reference state.  For instance, the specific enthalpy of water or steam is referenced to saturated liquid water at the triple point (0.01°C).

In the context of this discussion, that means that if we really wanted to capture all of the energy associated with the ENERGYSTAR® conversion factor for district steam metered as pounds, then we would not only need to condense the steam we receive, we would need to receive it as saturated steam at 150 psig and cool the condensate to just above freezing.
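A quick way to see this is to break the 1,194 Btu/lb figure into its latent and sensible pieces. The enthalpy values in the sketch below are approximate saturated steam table values for 150 psig; the small difference from the published 1,194 figure is just table rounding.

```python
# Breaking the ENERGYSTAR district steam factor into latent and sensible pieces.
# Enthalpy values (Btu/lb) are approximate saturated steam table values at
# 150 psig (164.7 psia, saturation temperature about 366 F).

hfg_150psig = 857.1   # latent heat of vaporization at 150 psig
hf_150psig = 338.5    # enthalpy of saturated liquid (condensate) at 150 psig
hf_reference = 0.0    # liquid water just above freezing (the table reference state)

latent = hfg_150psig                  # condense the saturated steam ...
sensible = hf_150psig - hf_reference  # ... then cool the condensate to near freezing

print(f"Latent heat recovered:   {latent:6.1f} Btu/lb")
print(f"Sensible heat recovered: {sensible:6.1f} Btu/lb")
print(f"Total:                   {latent + sensible:6.1f} Btu/lb")  # close to 1,194
```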

<Return to Contents>

So, the ENERGYSTAR® Folks are Crazy

You may be thinking at this point that the ENERGYSTAR® folks are nuts.  After all, your local utility may not deliver steam at 150 psig, with the delivery pressure of 120 psig in the utility tariff we looked at being an example of that.

But if you compare the enthalpy of 120 psig steam with 150 psig steam, you will find that it is only about 3 Btu/lb different; about a quarter of a percent.  So in the bigger picture, receiving steam at a lower delivery pressure would not make much difference in the factor that you would use.

You may think, O.K., I’ll buy that, but it just does not seem practical to cool the condensate to just above freezing in a way that delivers anything useful to the building.  After all, to provide heat, the source (in this case the condensate) needs to be warmer than what you are trying to heat. 

Given that we are trying to maintain space temperatures in the mid 60°F to mid 70°F range in most of our buildings, a fluid stream that is at or below that temperature range could not be used directly to heat.  Some sort of heat pump (and energy input) would be required to move the heat from the condensate to the place that needed it.

Actually, the ENERGYSTAR® Folks are Not Crazy

If you take the time to think it through, you will realize that the ENERGYSTAR® conversion factor is simply forcing us to take a hard look at what it means in terms of energy and resources if our facility uses steam as an energy source. 

There is a subtlety associated with how most (not all) commercial district steam systems work that we need to consider.  You get a clue about it if you closely read the tariff for the facility we have been discussing (note my highlight).

image_thumb311

What that is saying is that the condensate (condensed steam) delivered from the utility will not go back to the utility.  Rather, it will go to the sewer.   That means that all of the energy associated with the hot condensate is literally dumped down the drain and eventually dissipated to the environment without serving any useful purpose in the building that consumed the steam.  Note that this wastes two different resources: energy and water.

In fact, depending on the temperature of the condensate and the requirements of the local plumbing code and the material in your sanitary piping system, you may actually have to cool the condensate before discharging it.  Typically this is done using domestic cold water (directly or via a heat exchanger) which is then dumped to the sewer along with the cooled condensate.

Bottom line, if you received district steam at 150 psig, saturated, you actually did receive 1,194 Btus with every pound of steam (and a pound of water with every pound of steam).  The challenge is to understand how to capture as many of those Btus as possible before discarding the condensed fluid stream to the sewer, because whatever you don’t recover really is wasted energy (and water).

So, painful as it may be for this type of system, the 1,194 Btu/lb factor allows your steam consumption to be legitimately and fairly compared to that of the other types of steam systems I will describe in the next blog post.

David-Signature1_thumb_thumb_thumb

PowerPoint-Generated-White_thumb2_th

David Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/

[i]     A district steam system is a network of piping served by a central plant that provides steam to a large area like the downtown area of a city.

[ii]   The blue line is data from a very low mass thermocouple so that it would react quickly because I wanted to capture the very rapid increase in steam temperature that I anticipated once all of the liquid water had been converted to steam. (For more on how sensor mass can impact the data it produces, see this blog post). 

I had the logger set for a very rapid sampling rate and did not have enough memory to allow it to log data for the entire time it took to boil off all of the water.  So I did not start the logger associated with that sensor until nearly all of the water had evaporated, which is why the blue line only shows up towards the end of the graph.  

[iii]  Entropy is a bit more complicated to grasp; in fact, I almost flunked thermo because I struggled with it so much.   I think that is not unusual, and I often take comfort in something John von Neumann said (emphasis is mine):

You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one really knows what entropy really is, so in a debate you will always have the advantage.

The way I have come to think of it is that it’s basically nature’s way of saying:

There’s no such thing as a free lunch

When we turned on the burner to boil the water, energy flowed from it to the water because the burner was hotter than the water.   But, without some sort of process that involves doing work, we cannot get the energy that flowed into the water back out as useful work or electricity.  Heat does not flow from cold to hot, only from hot to cold.

If you want a bit more detail about all of this, you may want to review a string of blog posts I did that look at saturated multiphase systems.  The experiment I mention and use to illustrate what happens when water boils is part of one of the posts.

You may also find the chapters in Roy Dossat’s book Principles of Refrigeration titled Internal Properties of Matter and Properties of Vapors to be insightful.  He writes about thermodynamic concepts in a very understandable way.  When I found the book early in my career, my first thought was: where were you when I took thermodynamics?

[iv]   When I worked for Murphy Company, Mechanical Contractors, more than once I heard Pat Murphy, our chief estimator, mentor some of the younger estimators, saying

we were doing estimates, not exactamates.  

When I first heard him say it, I felt it was really insightful.  And I also think the same is true for a benchmark.


Lags, the Two-Thirds Rule, and the Big Bang, Part 5

In Part 4 of this series, we explored the complex transportation lag that was the key challenge in terms of using a remote duct pressure sensor to control the large VAV air handling system in the case study building. In this post I will show you the solution that grew out of that understanding and discuss a few reasons why not every VAV system will exhibit this behavior. I’ll close out the post with what I have found to be a very useful and  interesting insight that can be gleaned from the apparent dead time that you observe when you upset a control process in a system that is in operation.

Not Every System Will React This Way (Thank Goodness) Reprise

In the first article, I mentioned that this issue obviously does not happen in every VAV system out there. I think one of the main reasons is that many systems are small enough that the transportation dynamic I focused on in the previous article is not significant enough to cause a problem. But I think there are also some other reasons that people may not run into it very often, or maybe have never run into it.

You Learn A Lot the First Time You Start Up a System

My experience at the MCI building occurred during the very first start-up of the system. At the time, I was in the dual role of control system designer and start-up technician. There was no formal commissioning process, so my start-up activities were the commissioning process.

On a current project, depending on the exact design of the commissioning plan, it is possible that the official commissioning provider would not be on site for the very first start-up of the system. They would only come on site after the contractor had taken the system through the start-up process and identified and corrected any obvious deficiencies.

You could say that Ray (the service fitter I was working with) and I discovered an obvious deficiency when we blew up the duct, and then corrected it. That means that had there been a commissioning provider, when they came into the process, they may have found some issues, but they would not have observed the system blowing up a duct or having nuisance static safety trips. That could create the impression that the lag issue did not exist, simply because it had already been addressed.

But, evidence in the field, like:

  • Ductwork with wrinkles in it, or
  • Ductwork with extra reinforcement angles, or
  • An obvious patch in the duct insulation, or
  • Pressure relief doors that have been added by change-order

… could suggest that just because the system seems to start smoothly now, that may not have always been the case.

Variable Speed Drives are Very Common

When the MCI Building came online, variable speed drives were not an option for most systems, even large ones, because of the cost and size. That is not the case for a modern project.

As a result, it would be unusual for a VAV system these days to not have a variable speed drive of some sort. So, when faced with nuisance safety trips (or worse), it is common practice to address the problem by using the acceleration and deceleration settings in the drive to slow the system down. This approach is similar to the one I tried when I added restrictors to the pneumatic lines feeding the actuators to slow them down.

As you may recall, I concluded that in doing that, I had traded one problem (safety trips and blown ducts) for a different problem (an unresponsive system that could not deal with a large step change). I believe that improperly applied acceleration and deceleration ramps are likely doing the same thing. But since an unresponsive system may appear to operate reasonably well unless you analyze the trends, this may not be generally recognized. More on this later in the article.

Solving the Problem

Back in the MCI Building days, with my significant emotional event fresh in my mind, I went about re-reading what David St. Clair had written about lags in Controller Tuning and Control Loop Performance.  As you may recall from the first post in the series, I had totally missed his point on the topic of lags when I read his book the first time, despite him having it in all capitals, in a large shaded box at the end of the chapter.

All About the Lags

Truth be told, it wasn’t so much that I missed the point.  Rather, I simply did not understand the concept at all.

But what became clear almost immediately as I re-read the section on lags (motivated by my significant emotional event) was that my problem was the result of lags in the system and that I needed a control process that would be impervious to them. David’s chapter on cascaded control suggested a strategy that would offer a solution.

Modifying the Control-System Design

As you may recall, our initial solution to the problem was to move the remote sensor back to the fan discharge and control for that pressure. In doing that, we circumvented two major lags: the sensor lag and the transportation lag.

But after re-reading David St. Clair’s primer, I realized that if:

  • We added a remote sensor, and
  • Added a second controller for it to work with, and
  • Created a remote duct static pressure control process,

… then we could use the output of that process to adjust (or reset) the discharge static pressure control process set point. In other words, the output of the remote process would cascade into the discharge pressure control process to optimize its set point. The result was a control system configured as illustrated below.

Pneumatic Control v2

Bear in mind that there probably are several other design solutions that could have worked, especially in this modern era of fully programmable DDC systems.

Developing a Reset Strategy

To implement the solution, we needed to come up with a relationship that defined how the discharge-static-pressure set point would be adjusted as pressure at the remote point in the duct increased above the design target when the terminal units closed their dampers in response to decreasing load. This “reset schedule” is graphically depicted in the chart in the illustration above.

Pneumatic control system operating characteristics generally are defined by a 3 to 15 psi span. As a result, to fully define our reset schedule, we needed to specify the discharge-static-pressure set points associated with outputs of 3 psig and 15 psig from our remote static-pressure-control process. Once we identified those outputs, we could set them up in the controller by making physical adjustments with knobs and dials.

Knobs and Dials

In current technology DDC systems, all of the parameters I will discuss below are set up via the software in the system, either using sliders and knobs in a graphic screen or by setting the value of a point in the system via keyboard commands.  But in the olden days, they were set up using the knobs, dials, and sliders that were provided on the controller.  The controllers in the image below illustrate this and are similar to the controllers we were working with at the MCI building.

RC-195

For the MCC Powers RC-195 controllers illustrated above, the authority adjustment slide is what sets up the reset schedule.  If you want to know more about the details, you will find the instruction manual for it on the pneumatic control resources page of our commissioning resources website.

Controller Action—The General Case

As a first step in figuring out our strategy, we had to determine the “action” of our controller:

DA Lrg

Direct Action

With a direct-acting controller, an increase in the difference between the set point and the process variable (often called the error) will cause an increase in control-process output.  A decrease in the difference between the set point and the process variable will cause a decrease in the control-process output.

RA Lrg

Reverse Action

With a reverse-acting controller, an increase in the difference between the set point and the process variable will cause a decrease in control-process output.  A decrease in the difference between the set point and the process variable will cause an increase in the control-process output.
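The two definitions can be summarized in a few lines of code. This is a generic proportional-only sketch, not the MCI building logic; the gain, bias, and 3 to 15 psig clamp are illustrative values.

```python
# A generic proportional-only controller illustrating direct vs. reverse action.
# Gain and bias values are illustrative, not from the MCI building system.

def p_controller(setpoint, process_variable, gain, bias, direct_acting=True):
    """Return the controller output (psig) for a proportional-only process."""
    error = process_variable - setpoint
    if not direct_acting:
        error = -error  # reverse action inverts the response
    # Clamp to the 3-15 psig span typical of pneumatic controllers
    return max(3.0, min(15.0, bias + gain * error))

# With the process variable above set point, a direct-acting controller's
# output rises, while a reverse-acting controller's output falls:
print(p_controller(2.0, 2.5, gain=4.0, bias=9.0, direct_acting=True))   # 11.0
print(p_controller(2.0, 2.5, gain=4.0, bias=9.0, direct_acting=False))  # 7.0
```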

Controller Action Bottom Line

The bottom-line regarding controller action is that a designer determines the failure mode for the final control element (in the case of the MCI building, the inlet guide vanes) as a first step. That information combined with how the system will react when the final control element is moved in response to an increase or decrease in the process variable (in this case, duct static pressure) determines the controller action.

Controller Action for the MCI Building Static-Control Processes

For the MCI Building, because we had selected the IGV actuator to fail closed on a loss of air pressure, a reverse acting discharge static pressure controller was required. In other words,  if discharge static pressure dropped below set point, we needed the output pressure from the controller to increase, causing the inlet guide vanes to open.  If discharge static pressure increased above set point, we needed the output pressure from the controller to decrease, causing the inlet guide vanes to close.

A reverse-acting process allowed us to start the system with the inlet guide vanes closed and the fan at minimum capacity, meaning the fan started unloaded and the potential for immediate over pressurization upon system startup was minimized.

Interlocking the Control Process with Fan Operation

To ensure that the system started this way, we provided a three-way air valve (often called an Electro-Pneumatic switch or EP switch), shown in the illustration. The equivalent in a DDC system is the proof-of-operation interlock.

When de-energized, the three-way valve blocked the control signal and vented the pressure in the actuator to atmosphere.  When energized, it closed the vent and connected the control signal to the output serving the actuator, allowing the control system to modulate the inlet guide vanes through the positioning relay. The three-way valve was wired in parallel with the fan-motor starter so that, when the starter was energized, the valve was energized.  

This was a fairly common approach for doing this sort of interlock at the time.  But there is an assumption behind it: that if the motor is spinning, air is moving.  That may or may not be a good assumption for several reasons; for instance, if the belts had broken, the motor would in fact be spinning but no air would be moving. But to keep from making this even longer, I will set that discussion aside for now.

Reset-Line Points

We knew we needed 3 in. w.c. of pressure at the discharge of the fan to deliver 0.75 in. w.c. of pressure at the remote location on a design day. That requirement established one point on our straight-line reset schedule.

More specifically, we adjusted the knobs and dials on the controller so that, when the signal from the remote static-pressure controller was 15 psig, the set point of the controller was 3 in. w.c. In a DDC system, this would be accomplished by relationships set up in the controlling logic rather than by physical adjustments to a piece of hardware.

To determine the other point on our reset schedule, we considered what would happen on a weekend with only workers on the second floor in the building. Under those conditions, the system would run and the terminal units on the floor with people would follow the load. The terminal units on all the other floors would probably be at or near minimum flow depending on the solar load and thermostat set points.

In the worst-case scenario, we would need to deliver the design flow for the second floor and the minimum flow for the other floors. The calculated pressure drop to the remote-sensor location on the second floor at this flow condition was approximately 0.25 in. w.c. because, at this relatively low flow compared to the design flow rate, the distribution duct system was quite oversized.

Adding this pressure drop to the 0.75 in. w.c. required to deliver design air flow from the remote sensor location to the zones on the second floor told us that we would need to deliver 1.0 in. w.c. at the supply fan discharge (0.25 in. w.c. + 0.75 in. w.c.) under this low load condition.  This value became the other point on the reset schedule line.

More specifically, we adjusted the controller so that, when the signal from the remote static-pressure controller was 3 psig, the set point of the controller was 1 in. w.c.  We would fine-tune both reset values based on operating experience during commissioning and the first year of operation.

Considering an Extreme Condition

Once we had made our adjustments, the remote sensor would adjust the discharge set point linearly over the range established for the reset schedule. But, because the output of the remote controller could drop as low as 0 psig and rise to whatever the pneumatic-system supply pressure was (typically 20 to 25 psig), in day-to-day operation, the set point of the controller could potentially be adjusted beyond the bounds of the reset schedule based on the nominal 3 to 15 psig span that was the de facto standard in the industry.

A set point lower than 1.0 in. w.c. would not be cause for much concern. A set point above the 3.0 in. w.c. maximum target, however, could cause nuisance safety trips or worse.

For example, at startup, when duct pressure at the remote location was 0.0 in. w.c., the reverse action of the remote static-pressure controller would cause the controller’s output to drive toward its maximum value. Depending on the throttling range/proportional-band setting of the controller, the output under this condition could be the maximum available main air pressure.

If you extrapolate the straight line associated with the reset schedule to 20 psig, you will discover that the remote controller would have commanded a set point of about 3.8 in. w.c. for the fan discharge pressure controller.   If the fan were to achieve this value, it would have tripped the high-static-pressure limit. 

To prevent that problem, we added a high-limit relay, which limited the signal to the reset input of the discharge controller at 15 psig even if the output from the remote controller drove above that value.   Thus, we limited the maximum reset command to the discharge controller to a set point of 3 in. w.c. In a DDC system, this would be achieved with the control logic rather than by a physical piece of hardware.
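In DDC logic, the reset schedule and the high-limit relay's clamp reduce to a few lines of code. Here is a minimal sketch in Python (the function and argument names are mine, not from any particular control system; in our pneumatic system only the high end was clamped by hardware, but in software it costs nothing to clamp both ends):

```python
def discharge_setpoint(remote_psig,
                       sig_min=3.0, sig_max=15.0,   # nominal 3-15 psig span
                       sp_min=1.0, sp_max=3.0):     # reset range, in. w.c.
    """Map the remote controller's output to a discharge static pressure
    set point.  Clamping the signal to the 3-15 psig span before
    interpolating plays the role of the high-limit relay."""
    sig = min(max(remote_psig, sig_min), sig_max)
    frac = (sig - sig_min) / (sig_max - sig_min)
    return sp_min + frac * (sp_max - sp_min)
```

With the clamp in place, a 20 psig signal at start-up yields the 3.0 in. w.c. maximum; extrapolating the unclamped line to 20 psig instead gives about 3.8 in. w.c., the value that would have tripped the high-static-pressure limit.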

Reset Strategy in Operation

The reset strategy allowed us to have our proverbial cake and eat it too, meaning the control process would never allow fan-discharge static pressure to exceed the 3.0-in.-w.c. design target because it was controlling for discharge static pressure directly and the system hardware would allow only a maximum set point of that magnitude, even at startup, when the pressure at the remote point in the system was 0.0 in. w.c.

If, as the system came up to speed, delivering 3.0 in. w.c. at the discharge of the fan created more pressure than the 0.75 in. w.c. we targeted at the remote location, then the output of the remote controller would drop.

This would lower the set point of the discharge controller, causing the inlet guide vanes to close and deliver less air, which would lower the system pressure. If the terminal units opened their dampers to meet an increase in load, the reduction in pressure at the remote location would cause the set point of the control process to again be adjusted upward, but never above the design value.

One Final Thought About Lags

What follows is one of the most useful lessons gleaned from my experience at the MCI building (aside from how to not blow up ducts).

Comparing the Response of a Process to an Upset with Different Levels of Tuning Implemented

The figure below illustrates the response of a system with a proportional-only (P) control process to an upset[i] as the proportional band is reduced gradually from:

  1. No control (manual, top black line).
  2. Loosely tuned control—a very large proportional band (red line).
  3. Tightly tuned control—the proportional band is as tight as it can be without the risk of hunting (blue line).
  4. Near-resonance, or hunting (gray line).
  5. Over tuned/approaching instability—the proportional band is too narrow, given the characteristics of the system (bottom wavy black line).

Response Tune @

The system the controller is applied to is fixed in terms of lags, dead time, system gain, and other factors that dictate how the process will respond.

When you tune a control loop, you start with a very large proportional band (the red line) and sneak up on the gray line, which is the point at which the system is starting to go unstable.  Then you back off a bit (back towards the red line) so you run on the safe side of stable (the dark blue line).

The reason you sneak up on the gray line is that it reveals the natural period for the control process and system. You can use that parameter to come up with a pretty good set of initial tuning parameters for the control loop.

In the illustration, the upset occurred at t=0 on the x axis.  Notice how there is a period of time after the upset during which nothing seems to happen based on the response of the system (the y axis on both charts).  The purple line with an arrow at both ends illustrates this, and it is called the “apparent dead time” for the process.  It represents the sum of all of the lags in the system.

My purpose in bringing that up is to focus your attention on three facts:

  • The natural period of the near-resonance control loop (the gray line) is approximately equal to four times the apparent dead time (compare the light blue double-arrow-head line with the red, orange, green, and dark blue double-arrow-head lines).
  • No matter how loosely or tightly tuned a control process is, the response for about the first half of the natural period (about twice the apparent dead time) will be nearly identical, whether the control process is over tuned, under tuned, or non-existent (manual control); contrast the 5 different response curves in the enlarged circle for half the natural period, which is indicated by the red plus orange arrows.
  • The tightly tuned control process (blue line) is stable by about the end of twice the natural period.

Once you recognize and embrace these facts, they are very useful in the context of what we are trying to do when we tune a P, PI or PID control loop.
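These rules of thumb reduce to simple arithmetic, which can be handy when you are staring at a trend and trying to decide whether a loop has had enough time to settle. A sketch:

```python
def loop_timing_estimates(apparent_dead_time):
    """Rules of thumb from the response curves: the natural period is
    roughly four times the apparent dead time, and a tightly tuned loop
    is stable by about two natural periods after the upset.  Time units
    are whatever you measured the dead time in."""
    natural_period = 4.0 * apparent_dead_time
    settling_time = 2.0 * natural_period
    return natural_period, settling_time
```

For example, a loop with a 30-second apparent dead time would have a natural period of about 2 minutes and should be settled about 4 minutes after an upset if it is tightly tuned.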

The Quarter Decay Ratio

Technically speaking, for most of our systems, our goal is to achieve a quarter-decay-ratio response to a process upset, as illustrated below.

Quarter Decay 0

“Quarter decay ratio” is a fancy way of saying the peak of the spike during the second cycle of the response cycle will be one quarter of the peak during the first cycle of the response.  

It has its roots in the work John Ziegler and Nathan Nichols published in Optimum Settings for Automatic Controllers in 1941.  If you would like to read it, you will find a copy of it in part 1 of the Control Engineering Reference Guide to PID.  There is also an interview in there with John Ziegler, which is kind of cool.
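The ultimate gain and natural period you find by sneaking up on instability plug directly into the rules Ziegler and Nichols published, which target roughly this quarter-decay response. A minimal sketch in Python, using the gain form of the rules (many controllers use proportional band instead, which is 100 divided by the gain):

```python
def ziegler_nichols(Ku, Pu):
    """Classic Ziegler-Nichols closed-loop tuning rules.  Ku is the
    ultimate controller gain (the gain at which the loop hunts steadily)
    and Pu is the natural period observed at that gain.  Ti and Td are
    in the same time units as Pu."""
    return {
        "P":   {"Kp": 0.50 * Ku},
        "PI":  {"Kp": 0.45 * Ku, "Ti": Pu / 1.2},
        "PID": {"Kp": 0.60 * Ku, "Ti": Pu / 2.0, "Td": Pu / 8.0},
    }
```

Treat the result as a starting point, not gospel; as discussed elsewhere in this post, the lags in the system will change with season and load.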

Twice the Apparent Dead Time;  A Very Important Parameter

If you go out and start playing with loop tuning, you will discover that there are multiple versions of this response pattern or something very close to it, depending on the exact combination of proportional, integral and derivative gain you set up for the process.  In fact, you could probably spend hours changing the settings and observing the different patterns.

I speak from experience because when I first tried tuning loops, I did just that.  But at one point, I realized a couple of things, specifically:

If the first spike doesn’t trip a safety or, worse yet, break something (for instance, blow up a duct), and

If the process settles within a reasonable time frame for the application you are working with

… then you probably have a winner, at least for the time being.[ii] 

Quarter Decay

But if you keep tripping safeties (or worse), and that is happening within less than twice the apparent dead time after you observe the system starting to respond, then you are going to need to eliminate some lags.  That is what the second bullet point in the opening part of this section was about.

Similarly, if you have managed to find a setting that does not cause a safety trip (or worse) but the system is still trying to find itself hours (or even two natural periods) after the upset, then you are going to need to eliminate some lags.

To quote David St.Clair:

It All Depends On The Lags

Eliminating Lags

The table below contrasts lags that are relatively easy and relatively difficult to eliminate.

Lags Table

Eliminating lags to solve a startup/loop-tuning problem can be counterintuitive.

For instance, when I was having trouble getting the MCI Building VAV system online, it seemed things were happening too fast at the inlet guide vanes;  they were opening up way too quickly.  So I slowed them down by adding restrictors. In reality, things were not happening fast enough in terms of the control system realizing the fan had started but that it would be some time before there was meaningful pressure at the remote sensor location.

When I added the restrictors, I was able to get the fan running without tripping the safety, but not able to achieve my set point in a reasonable time or respond to step changes in the system (zone level scheduling or a set point change for instance), so I had simply traded problems.

Ramps vs. Acceleration and Deceleration Settings

In modern times, it can be tempting to try to solve a startup problem like the one I experienced using the acceleration and deceleration settings on a VSD to slow the drive’s reaction to changes commanded by the control system. And, while you may be able to resolve the over-pressurization problem in this manner, you will have added a lag to the system. That means that for even a modest upset or step change in the system, you will have limited how quickly the control process can react to it to recover the set point and resume steady state operation.

Ramp logic is a way around this.  A true ramp limits the rate of change only until the process variable is inside a window around the set point established during startup and commissioning. Once the process variable is inside the window, the limiting function is eliminated from the control process, meaning the control process is unconstrained in terms of how quickly it can make a change.

Many VFDs have a ramp function built into them.  But just to make things interesting, some manufacturers call their acceleration and deceleration settings “ramps”.  Having said that, if the drive does not have the setting built into it, you can simply implement it in the control logic that is managing the drive.
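If you end up implementing the ramp yourself, the logic is short. Here is a minimal sketch in Python (the window size, step limit, and names are all illustrative, not from any particular drive or DDC product):

```python
def ramp_limited_output(command, last_output, pv, sp,
                        window=0.25, max_step=0.02):
    """While the process variable is outside the window around set point
    (e.g. during start-up), limit how far the output can move per scan.
    Once inside the window, the ramp no longer constrains the loop and
    the controller's command passes through untouched."""
    if abs(pv - sp) > window:
        # Still ramping: clamp the change in output to +/- max_step.
        step = max(-max_step, min(max_step, command - last_output))
        return last_output + step
    # Inside the window: unconstrained response to upsets.
    return command
```

Because the limit disappears once the loop is near set point, a later upset can be answered at full speed, which is exactly what the acceleration/deceleration settings on a drive cannot do.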

Conclusion

While I illustrated the solution to the MCI building problem using the pneumatic control technology we were working with at the time, many of the issues the solution addressed are independent of the control technology because they were about the physics of the system that was being controlled. Thus, they are somewhat timeless in nature and perhaps things you will find useful in the modern world with its DDC technology.  Maybe they are even something you can pass on in your role as mentor, just as the MCI building, David St. Clair, and Tom Lillie did for me.

David-Signature1_thumb1_thumb                                                        

PowerPoint-Generated-White_thumb2_thDavid Sellers
Senior Engineer – Facility Dynamics Engineering                                Visit Our Commissioning Resources Website at http://www.av8rdas.com/

[i]     The term “upset” means a sudden change in the process;  something like a major set point change or a major load change.  Sometimes, the term “step change” is used as a synonym for “upset”.  A start-up is an example of an event that introduces an upset into nearly every control loop in the system that is started up (and often into the systems that support it).

[ii]     I say for the time being because things that affect the lags in a system can change over time.  For instance, in a brand new system the day that you tune the discharge temperature control loop for the very first time may be a design cooling day.  

The system may (probably will) exhibit a totally different response pattern 6 months later on the design heating day since it will be using different heat transfer elements to deliver a similar discharge temperature.   And things will be different during the swing season when the economizer has a role in the process.

And after you finally have tweaked and fine tuned the loop over the course of the first year and found the perfect, year round solution, you may discover it no longer works two years down the road because wear in the linkage system changed the hysteresis or the coils are not as pristine as they were when they were new or the occupancy pattern in the building and related load profile has changed.

Bottom line, loop tuning, just like commissioning, is not a one time event.


Lags, the Two-Thirds Rule, and the Big Bang, Part 4

In the previous blog post,  we looked at common lags that you might encounter in building systems in the general case. In this post, we will look at the particularly complex transportation lag that I ran into in the MCI Building VAV system, which was the root cause behind my significant emotional event.

Some Housekeeping

Before getting into the post, I wanted to do a bit of housekeeping.  You may have noticed that all of the links that were previously on the right side of the blog home page under the “Categories” drop-down menu went away.   That is because all of them and more now exist on our Commissioning Resources website (the place you will go if you click on the little picture of the Pittsburgh skyline on the right side of the home page).

That said, let me know if there is something missing that you are looking for.  I will direct you to its new home or make sure it is available on the Commissioning Resources website if it is not already there.

Lags and the MCI Building VAV System

The VAV system in the MCI Building that is behind this case study had many of the lags described in the previous post. But thermal lags were not an issue since we were dealing with a pressure control process. What’s more, the linkage and valve-plug lags were in the form of the linkage system[i] and blade-rotation mechanism for the inlet guide vanes.

With my pneumatic pressure transmitter located on the second floor and the controller it served located on the roof, the sensor lag was fairly significant because of the long run of quarter-inch pneumatic tubing from the main air source in the control panel to the transmitter and then back up to the control panel: probably in the range of 300 feet or so each way.

In addition, the transportation lag was quite significant and complex and was something I had clearly not considered in my control system design. But it was probably the biggest contributor to the problem I experienced.

An Analogy

In trying to understand this phenomenon initially and then subsequently explain it over the years, I have developed an analogy that is based on pumping water to fill a series of interconnected tanks.

The first tank, which is directly served by the pump, fills three other tanks through lines of different lengths. The 3rd and 4th tanks have two-way valves that drain water back into a reservoir for recirculation to the pump.  

The sketch below illustrates the arrangement under steady-state conditions.

Tanks Start-up v1

Note that if you click on the image, an enlarged version of it will open up.  Clicking the back-arrow will bring you back to the post.  You can also right click on the image and select “Open image in new tab” as illustrated below.

Enlarge

Granted, water is incompressible and the air in the MCI building system was compressible. But bear with me;  in my experience, explaining this phenomenon using a water and pump analogy will get the basics of the phenomenon we are discussing established.  Having established that, we can then qualify it regarding the differences between air and water to fully explain what happened in the MCI building.  That lesson can then be applied to other large, complex distribution systems.

A Bit about Pump Physics

To understand the analogy, you need to understand how pumps work.  So, while I am not going to go into a full-blown explanation of pump physics, I wanted to highlight a few things that will matter in terms of understanding how the pump will interact with the tank.  If you are comfortable with pump and system curves, then you may want to just jump on down to the next section (The MCI Building System Arrangement).[i]

To get you up to speed on the pump physics that matter for this analogy, I will use a simplified version of our diagram, limited to a reservoir, one tank with a pump moving water into it from the reservoir and two valves that let water out of it back into the reservoir.

Steady State Operation at Design Conditions

image

Under this condition, the pump delivers design flow to the tank and each of two control valves allows 50% of the design flow to return to the reservoir.  The depth of water in the tank creates the pressure required to move the design flow rate through the wide open control valves.  Thus, if the tank level is maintained at the level shown above, there will always be sufficient head to deliver design flow through either or both valves.

The total flow rate is the sum of the flow through the two control valves and the head delivered by the pump is the head required to lift the water over the top of the tank and the head required to overcome the resistance due to flow in the piping network.

As a result, for a fixed speed with a fixed impeller size, the pump will operate at a fixed point on the impeller line (the green line on the pump curve) associated with the design head and flow.   The system curve (the orange line) is a parabola that passes through the operating point (the red dot).  Its 0 gpm point is associated with the lift the pump sees; i.e. how much head or pressure it needs to create to lift water over the top of the tank and initiate flow.
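The operating point described above is simply the intersection of the pump curve and the system curve. A sketch with made-up quadratic coefficients (real pump curves come from the manufacturer's data; the quadratic form here is just a convenient approximation):

```python
import math

def operating_point(h_shutoff, a, h_lift, k):
    """Intersect a simplified quadratic pump curve, H = h_shutoff - a*Q**2,
    with the system curve, H = h_lift + k*Q**2, where h_lift is the static
    lift over the top of the tank and k*Q**2 is the pressure drop due to
    flow.  Returns the flow and head at the intersection."""
    q = math.sqrt((h_shutoff - h_lift) / (a + k))
    return q, h_lift + k * q ** 2
```

For example, a pump with 100 ft of shutoff head and a system with 20 ft of lift would land wherever the friction losses make the two curves cross; move either curve and the red dot moves with it.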

Note that from the perspective of the pump, it is serving a fixed system because there is nothing in the piping circuit that it serves directly that can move.  The control valves can move, but they are decoupled from the pump circuit by the air gap between the point where the pump dumps water into the tank and the air gap between the outlet of the valves and the reservoir.

Steady State Operation at 50% Design Conditions

image

If we close one control valve but keep the other fully open so it delivers design flow, we will have cut the flow in half since each valve was selected to deliver half of the total flow rate.   But since the pressure set by the water level is what drives flow through the valve, to deliver design flow, we still need to maintain the design water level in the tank, even though the flow leaving it has been reduced by 50%.

Since the depth of water and the pressure it creates at the bottom of the tank is what drives the design flow rate through the wide open valve, we could control the pump by measuring the pressure at the bottom of the tank and varying the speed as needed to increase or reduce the flow into the tank.  And since, for a fixed system, the pump speed and flow rate are directly related, a reduction in demand of 50% from the design value would mean that the pump only needed to run at 50% of the design speed to meet the new, lower flow requirement.

The head required to overcome the resistance due to flow for a given flow rate in a fixed system varies as the square of the flow (i.e. the Square Law).  As a result, when we reduced the flow by 50%, the head required to overcome the resistance to flow will drop to 25% of what it was at the design condition.  Since the height of the tank and the discharge pipe did not change, the lift did not change.

The bottom line is that if we were controlling for a fixed pressure at the bottom of the tank, a reduction in flow out of the tank by 50% would cause the pump to slow down to 50% of its design speed.  The operating point would shift down the system curve to 50% of the design flow rate at a head equal to 25% of the pressure drop due to flow plus the static lift over the top of the tank.
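The Square Law arithmetic behind that statement can be sketched in a couple of lines (the numbers in the example are illustrative):

```python
def pump_head_required(flow_fraction, design_friction_head, static_lift):
    """Square Law: the friction portion of the head varies with the
    square of flow, while the static lift over the top of the tank is
    unchanged.  flow_fraction is the fraction of design flow; heads are
    in consistent units (ft or in. w.c.)."""
    return static_lift + design_friction_head * flow_fraction ** 2
```

So a system with 40 ft of friction head and 20 ft of lift at design needs 60 ft total at full flow, but only 20 + 40 × 0.25 = 30 ft at 50% flow.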

Start-up at 50% Design Conditions

image

The diagram above shows the tank immediately after a start-up at 50% load.   Since the water level is below set point, the pump ramps up to full speed. As the water level rises, the pump slows down and follows the system curve illustrated previously until it stabilizes at the design water level and 50% of design flow.

The shape of the system curve is not impacted by tank water level.  This is a subtle difference from the situation we will discuss next.

Steady State Again but with a Subtly Different Configuration

image

If you study the diagram above, you will realize there is a subtle difference between it and the previous diagrams;  the pump discharges into the bottom of the tank instead of the top of the tank.

Now, the lift that the pump needs to provide will be a function of the level of water in the tank.   When the tank is totally empty – at start-up, for instance – the pump will require less lift than when the tank is at the design operating level.  The system curve will shift down from the design operating point, and the operating point itself will shift out the pump impeller line.

image

As a result, the pump will move more than design flow initially.  But as the tank fills, the pump head will increase because the static head imposed by the water level in the tank increases and the flow drops off.

The bottom line is that in this configuration, the water level in the tank impacts the system curve.

The MCI Building System Arrangement

To fully understand the phenomenon we are about to discuss, you will need a general understanding of the physical arrangement of the MCI building air handling system in question.  Thanks to Google Earth and the internet, even though I no longer have the documentation for the facility, I was able to put something together.  The result is the images below. 

This first image is of the rooftop air handling equipment;  note the large, identical fan systems with symmetrical supply (towards the bottom of the picture) and return (towards the top of the picture) duct connections.

image

This image illustrates a typical floor plan as well as an overview of the building.   The left side of the floor plan would be towards the top of the image above.   The view of the building is from street level towards the bottom left of the image above.

image

The supply and return ducts from the air handling units in the first image come together into a common supply and return duct riser in the two shafts highlighted on the floor plans.

MCI Building Analogous Components

The analogous components in the context of the tank and pipe network relative to the building are as follows.

  • The fans inside the two AHUs are analogous to the pump filling the 1st tank.
  • The 1st tank is analogous to the discharge duct from the AHU, which is coupled to the distribution duct riser through a string of fittings that represent a significant portion of the system pressure drop due to their configuration and the high velocities that they operate at.[ii]
  • The 2nd tank represents the distribution riser, which is a straight run of duct and thus free of fitting pressure drops. However, it is long (the height of the building) and the implication of this is discussed subsequently.
  • The 3rd and 4th tanks represent the floor-level distribution duct systems. In the actual building, there are distribution systems for each of the 12 floors served by the air handling system. But for the sake of illustration, I am only representing the top floor and the bottom floor in the analogy.
  • The two-way valves that allow water to leave the 3rd and 4th tanks and recirculate to the pump represent the VAV terminal units associated with the zones in the building.
  • The reservoir represents the return duct system.

The Floor Level Distribution Systems and Their Tank and Pipe Analogy

The distribution systems serving each floor in the facility are fed from the duct riser. Because it is a long duct, running the full height of the building, there is a pressure drop across its length, even though it is essentially a straight duct running down a vertical shaft.

As a result, the pressure at the fitting that taps the riser at the bottom to serve the 2nd floor distribution system will be lower than the pressure at a similar fitting serving the 11th floor distribution duct system. This difference in available pressure to deliver air to the different floors is represented by the short vs. long pipe connecting the tank representing the duct riser to the tank representing the 11th floor distribution system (the short pipe) and tank representing the 2nd floor distribution system (the long pipe).

A Bit More about the Reservoir

For the purposes of the discussion that follows, the reservoir from which the pump draws its water is assumed to be large enough so that there is no meaningful change in level between what exists at design flow and what exists when the system is off, when all of the water drains back to the reservoir. In other words, the pump performance is independent of the level of the water in the reservoir and is only a function of the elevation of the tank it serves, the water level in the tank it serves, and the speed it is operating at.

Pump and Tank System Control

In the analogy, the pump’s role is to move water from the reservoir to the first tank in the network.  The depth of water in the first tank, which represents the pressure created by the supply fan in the analogy, is what causes the water to flow to the other tanks, through the control valves and back to the reservoir.

The pump speed is controlled by the pressure at the bottom of the tank representing the lower floor of the building.  This is analogous to the remote pressure sensor I used to control the IGV’s on the supply fan initially as described in the first blog post in this series.

The pressure at the bottom of the tank is a function of the water level in the tank.   That means that if the water level in the tank is low relative to the desired level, the pump speed will increase, moving more water directly into the first tank and indirectly through the network of tanks and piping to the last tank.  There will be a time lag associated with this process and understanding that lag is the goal of the analogy.

The pump fills the 1st tank by pumping water into it from the bottom. As a result, the head the pump sees will vary with the level of water in the tank. In turn, this will cause the pump’s operating point to vary with the level of water in the tank. This is analogous to how the supply fans in the AHU will perform as the duct system becomes pressurized.[iv]

The other tanks in the system are fed from the bottom of the tank ahead of them. As a result, the flow rate to the downstream tanks will vary with the pressure (water level) in the tank that is feeding them. This is analogous to how flow to the various floor level distribution systems will vary as a function of the pressure in the duct riser feeding them.

Finally, overflowing a tank is analogous to over-pressurizing a duct and causing it to fail.

Tank System Operation

Steady State Operation at Design Conditions

The illustration below (a repeat of the first illustration)  represents the system in steady state operation under design conditions.

Tanks Start-up v1

All the control valves (VAV terminals) are wide open. The pressure sensor in the 4th tank has the pump running at full speed because that is what is required at design to establish the level in the tank required to deliver design flow to the loads.

Notice that:

  • The level in the 1st tank is higher than the level in the 2nd tank, and
  • The level in the 2nd tank is higher than the level in the 3rd tank, and
  • The level in the 3rd tank is higher than the level in the 4th tank.  

This is because it is the level difference between the tanks that causes the water to flow from one to the other.   In other words, the level difference represents the pressure drop due to flow in the pipe connecting the tanks.  Specifically, for the illustration above, it represents the pressure drop due to flow at design conditions.

These levels are not directly controlled.  Rather, they are established by the pressure in the 4th tank (which is directly controlled) feeding back to the other tanks through the piping network.

Response to a Load Reduction at a Load Served by the 4th Tank

If one of the loads served by the 4th tank dropped (required less water), it would trigger a  chain of events:

  1. The control valve would start to close, then
  2. The water level in the 4th tank would start to rise, and
  3. The pressure at the bottom of the tank would increase (due to the higher water level), and
  4. The control system would start to slow the pump down to re-establish the targeted operating level in the last tank.

Those four events are only the beginning of a very dynamic, interactive string of events that will ripple out through the system.

Initially, when one of the 4th tank loads dropped and caused its associated valve to close, the higher pressure (deeper water) in the 4th tank would reduce the pressure difference between the 3rd and 4th tank, causing the flow from the 3rd to 4th tank to drop off, which would cause the level (pressure) in the 3rd tank to rise.

The deeper water in the 3rd tank would tend to drive the flow out of it to the 4th tank back up again.  But it would cause more than the design flow to leave the tank through the wide-open control valves, which in turn, would cause them to throttle (modulate towards the closed position) to try to maintain set point.

In the early moments of this event, since the control system is just starting to slow the pump down and the correct level has yet to be established in the 4th tank, the amount of water coming into the 3rd tank is likely more than required by the loads it serves directly and the loads it serves via the water it delivers to the 4th tank. The combination of excess flow and the throttled valves on the 3rd tank will cause the tank water level to rise, which will tend to increase the pressure difference between the 3rd and 4th tank all other things being equal.

This increased pressure difference will tend to increase flow to the 4th tank, causing its level to rise and the 3rd tanks level to drop, all other things being equal. As a result, the water level (pressure) in the 4th tank would tend to increase, further slowing down the pump to try to bring the system back into balance at the set point.

Response to Other Load Changes

A similar but slightly different dynamic would be set up if a control valve leaving the 3rd tank was to modulate closed instead of a control valve in the 4th tank. And yet another similar but slightly different dynamic would be set up if either of those valves modulated back open again.

The point is that this is a very dynamic process with a lot of interactions between different elements of the system, some of which have no direct impact on the speed of the pump. One of the tricks in tuning a system like this is to try to find a tuning solution that will deliver stable performance under all the operating conditions that the system will see, including modest, gradual changes in load. But the process also needs to be able to react quickly enough to a major load change to prevent overflowing a tank (blowing up a duct).

System Dynamics at a Full Load Start-up

For most systems, a start-up is the largest load change the system will see, especially if the conditions at the loads are out of control. For example, a VAV system that is starting up on a warm morning after a long, hot weekend is likely starting with all the terminal units fully open and demanding their maximum flow.

Due to system diversity, this demand could actually be in excess of the design flow requirement.  As a result, the system will ramp up to full speed but will not be able to achieve its design static pressure set point until some of the zones start to cool off and close their dampers.

The illustration below shows the conditions immediately after start-up on a design day for our tank system.

Start-up

Immediately prior to this point in time, the tanks were all empty. Since there is no water (pressure) in the 4th tank, at start-up, the sensor that is located there to control pump speed commands the pump to full speed and will keep it at full speed until the water level in the 4th tank approaches the targeted set point (the red line next to the tank in the figure).

The pump was selected to deliver design flow to the system at the head established by the design water level in the 1st tank along with the elevation change required to get water to the tank in the first place and the pressure drop due to flow through the suction and discharge piping. But when the pump starts with no water in the tank and no flow in the system, the only head it sees initially will be what is required to lift water to the open tank.

As soon as it starts, the pressure drop due to flow will show up in the piping circuit. But depending on the volume of the tank relative to the pump’s flow capacity, it could be a while before the head associated with the design water level in the tank is established. Thus, for a while at least, the pump will see less than the design head. 

And, since the level control system is asking it to run at full speed, its operating point will shift out its curve (impeller line) from the design point.  As a result, it will initially deliver more than the design flow to the tank.

As the tank fills, the head the pump sees increases and the operating point will move up its curve. If the pump was being controlled for the pressure at the bottom of the 1st tank instead of the pressure at the bottom of the 4th tank, as soon as the water level in the 1st tank approached the design level (the red line next to the tank in the figure), the pump would start to slow down in an effort to come into balance at the design level.
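This shift can be sketched numerically. The coefficients below are hypothetical (the post gives no numbers for the tank system); the idea is simply that the full-speed operating point is the intersection of the pump curve and the system curve, so a lower-than-design static head produces a higher-than-design flow:

```python
import math

# Hypothetical pump and system coefficients; the post gives no numbers
# for the tank system, so these are chosen only to illustrate the idea.
H_SHUTOFF = 80.0   # ft, pump head at zero flow when running at full speed
K_PUMP = 0.002     # ft/gpm^2, how fast the pump curve droops with flow
K_SYS = 0.002      # ft/gpm^2, friction coefficient of the piping circuit

def flow_at_full_speed(static_head_ft):
    """Full-speed flow where the pump curve H = H_SHUTOFF - K_PUMP*Q^2
    meets the system curve H = static + K_SYS*Q^2."""
    return math.sqrt((H_SHUTOFF - static_head_ft) / (K_PUMP + K_SYS))

design = flow_at_full_speed(40.0)   # lift plus design tank level: 100 gpm
startup = flow_at_full_speed(30.0)  # tank empty, lift only: ~112 gpm
print(f"design flow:   {design:.0f} gpm")
print(f"start-up flow: {startup:.0f} gpm (above design)")
```

With the design tank level in place, the curves intersect at the 100 gpm design flow; with the tank empty, the same pump at the same speed delivers roughly 12% more, just as described above.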

But, until water flows through the series of tanks and starts to fill up the 4th tank, there is nothing to tell the pump to reduce speed.

Thus, it will continue running at full speed for the time required to establish a level near the design level in the 4th tank. This time lag will be a function of several variables which are discussed subsequently. But for this entire time interval, the pump will remain at full speed, although the flow rate will continue to drop as the additional depth of water in the tank increases the head it sees and pushes it up its curve.

Of course, as the water level in the 1st tank increases, water will start to flow out of it to the other tanks. However, if you consider a special case – a situation where there was a valve in the line connecting the 1st tank to the 2nd tank and that valve was closed –  I think you can see that the pump would continue to run at full speed until it overflowed the 1st tank (ruptured the duct) simply because the signal controlling it was disconnected from what was going on in the tank due to the closed valve.

Returning to our case – where there is not a closed valve – the resistance due to flow and the volume associated with the network of tanks and pipes causes the first tank to initially fill up faster than the other tanks.

For one thing, the rate at which water is transferred from tank to tank is controlled purely by the levels in the tanks relative to each other and the pressure drop created in the interconnecting piping by the resulting flow.  Increasing the level difference will tend to increase the flow rate.

But at the same time, the resistance due to flow will also increase as a result of the higher flow rate.  As a result, doubling the level difference will not double the flow rate;  it will only increase it by a factor of about 1.41 (the square root of 2), which you can predict by applying the square law to the situation.
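A quick sketch of that square law (the pipe conductance value is made up; only the ratio matters):

```python
import math

# Transfer between tanks follows a square-root law: Q = C * sqrt(dH).
# C is a hypothetical pipe conductance; only the flow ratio matters here.
C = 10.0  # gpm per sqrt(ft)

def transfer_flow(level_diff_ft):
    return C * math.sqrt(level_diff_ft)

ratio = transfer_flow(2.0) / transfer_flow(1.0)
print(f"doubling the level difference scales flow by {ratio:.2f}")  # 1.41
```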

The bottom line is that until the design level is achieved in a given tank, the tanks downstream from it will not be able to deliver design flow. More specifically in the context of our example, that means that until the design level is achieved in the 2nd tank, the 3rd tank will not be able to deliver design flow to its loads and to the 4th tank.

And only after the design level is achieved in the 3rd tank will it be able to deliver design flow to the 4th tank. During this entire time, the pump will have been running at full speed, potentially over-filling the first tank.

The duration of this transient state will have a lot to do with the volumes of the tanks relative to the flow rate the pump can produce at full speed and the resistance to flow created by the piping interconnecting the tanks. If the volume of the tanks is small relative to the pump’s rated flow and/or the flow required by the loads (imagine tall, thin tanks), then the required operating levels will be achieved much more quickly than if the volume of the tanks is large relative to the pump’s rated flow and/or the flow required by the loads (imagine tall, wide tanks).

Similarly, if the piping is small relative to the flow it needs to carry at design conditions (visualize soda straws interconnecting the tanks), it will take more time and/or a larger level difference between the tanks to move a given volume of water from one tank to another. In contrast, if the piping is large compared to the design flow (visualize a subway tunnel interconnecting the tanks), then it will take much less time and/or much less of a level difference to move a given volume of water between the tanks.

It is also important to recognize that during this start-up process, there is water leaving the tanks via the wide-open control valves serving the loads. In other words, some of the water that is transferred from the 2nd tank to the 3rd tank leaves the 3rd tank to go to the loads and is not available to increase the water level in the tank and/or be transferred to the 4th tank.

This further delays the time required to establish the desired operating level in the 4th tank, as does the fact that some of the water entering the 4th tank leaves to go to the loads and thus is not available to increase tank level and ultimately bring the system under control.
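The lags described above can be illustrated with a crude time-step model of the four-tank chain. Every number below (tank areas, pipe conductance, pump flow, load draws) is invented for illustration; the behavior, not the values, is the point: the pump holds full flow until the 4th tank reaches set point, and the 1st tank climbs well above the level the 4th tank is being controlled to.

```python
import math

# Crude time-step model of the four-tank chain. All values are
# hypothetical; the lag behavior, not the numbers, is the point.
DT = 1.0                 # s, time step
AREA = [2.0] * 4         # ft^2, tank cross-sectional areas
C_PIPE = 0.5             # cfs per sqrt(ft), conductance of the tank-to-tank pipes
PUMP_FLOW = 1.0          # cfs, pump output while held at full speed
LOAD = 0.1               # cfs drawn by the loads from each of tanks 2-4
SETPOINT = 3.0           # ft, the level the 4th tank is controlled to

levels = [0.0, 0.0, 0.0, 0.0]
t = 0.0
while levels[3] < SETPOINT and t < 3600.0:
    # Tank-to-tank transfer follows the square-root law Q = C * sqrt(dH)
    flows = [C_PIPE * math.sqrt(max(levels[i] - levels[i + 1], 0.0))
             for i in range(3)]
    net = [0.0] * 4
    net[0] = PUMP_FLOW - flows[0]
    for i in range(1, 4):
        net[i] = flows[i - 1] - (flows[i] if i < 3 else 0.0)
        if levels[i] > 0.0:
            net[i] -= LOAD  # water leaving to the loads never reaches tank 4
    for i in range(4):
        levels[i] = max(levels[i] + net[i] * DT / AREA[i], 0.0)
    t += DT

print(f"4th tank reached set point after ~{t:.0f} s")
print("tank levels at that moment:", [round(x, 2) for x in levels])
```

By the time the 4th tank finally reaches its set point, the 1st tank has climbed well above that level to drive the required transfer flows, which is the over-flow (blown duct) risk in miniature.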

System Dynamics at a Part Load Start-up

When the system starts at part load, all of the dynamics outlined above come into play. But in addition, when the pump is running at full speed, it is over-sized for the current load condition.

For the sake of discussion, let’s assume that the two-way valves representing the loads are all 50% open at start-up. On the plus side, this means the water level required in the 1st tank to deliver design flow to the downstream tanks will be established more quickly. This is because the partially open valves will reduce the flow rate out of the tanks for a given water level compared to what happened when they were wide open.

But, if the water cannot get out of the 1st tank or the downstream tanks fast enough, it is possible that the 1st tank will still overflow (the duct will fail) before the required operating level is established at the 4th tank. In fact, this could happen more quickly than it did during a start-up at full load (visualize starting up with the valves all closed).

Analogy Bottom Lines

Hopefully, at this point, you can see that there could easily be a combination of system dynamics that would cause the 1st tank to overflow before the desired operating level was achieved in the 4th tank.  And if you can see that, then you probably can understand what I believe to be the root cause behind my blowing up the duct in the MCI building.

Connecting the Dots

More specifically, when we went to start up the system for the first time using the remote sensor to control the inlet vanes on the supply fan (analogous to the pressure sensor on the 4th tank controlling the pump speed), it was a mild day.  Since the building was generally at the ambient temperature because we were just starting up the HVAC systems, many of the terminal units were partially closed (analogous to the valves on the tanks being partially closed).

Since the fan was off, the duct system was not pressurized (analogous to all of the tanks being empty).  When we started the fan, for it to pressurize the remote portion of the system where the controlling sensor was located, it also needed to pressurize the duct system leading to that remote location (analogous to the upstream tanks needing to fill before the 4th tank, where the pressure sensor was located, could start to fill).

The geometry of the fittings on the discharge of the fan caused the static pressure to build up fairly rapidly at that location and, at the same time, delayed the pressurization of the downstream ductwork (analogous to the size and length of the piping interconnecting the tanks impacting how quickly they can be filled by water coming from a tank upstream of them).

All of this time, because the pressure at the remote location in the ductwork was below set point (the level in the 4th tank was below the design water level), the inlet guide vanes at the fan were held wide open (the pump ran at full speed).

As a result, the fan was able to generate a pressure that exceeded the pressure rating of the discharge duct even though the pressure at the remote location had not come up to set point (the pump completely filled up the first tank and caused it to over-flow before the 4th tank was at the targeted operating level).

And while there are some differences between the tank system and the MCI VAV system behind this string of blog posts, I am hoping you can see that what happened in the MCI VAV system on the day of my significant emotional event was very similar to what happens in the tank analogy when the pump can fill and over-flow the 1st tank before the required operating level is achieved in the 4th tank.

Differences Between the Pump and Tank Analogy and the MCI Building Air Handling System

As I mentioned at the start of the post, there are some differences between my tank analogy and the air handling system in the MCI building that will come into play.  The primary differences are:

  • Air is compressible and water isn’t.
  • For all practical purposes, the fan does not have to lift the air to the top of the system, whereas the pump had to lift water to the tank level.
  • As a result of the preceding, the system curve[v] for any given operating condition will always pass through 0 cfm at 0 in.w.c. But the operating curve for a VAV system will not do that as the load drops off if it is being controlled to a fixed pressure someplace in the system.
  • The pumping analogy is about filling volumes; the fan system is about pressurizing volumes. In the fan system at start-up, the volumes represented by the duct system are already full of air at the ambient pressure; the fan simply adds more air to the volume to elevate the pressure to the targeted design static pressure.
  • If the 40 or so feet of straight duct on the discharge of the fan at the MCI building were a closed volume, the ideal gas equation says it would only take about 14 extra standard cubic feet of air to pressurize it to 4 in.w.c. But if it were open-ended, then the fan that was in place, operating at the design speed, could never reach 4 in.w.c. because of how much air was exiting at the other end of the duct.
  • The reality for a large VAV system will be between the two extremes described in the previous bullet and will be a function of the size of the volumes and the nature of the resistance between the various volumes in the system.
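The closed-volume figure in the bullets above can be reproduced from the ideal gas law at constant temperature. The 6 ft × 6 ft duct cross-section below is an assumption (the post does not state the duct dimensions):

```python
# Sanity check of the closed-duct figure using the ideal gas law at
# constant temperature. The 6 ft x 6 ft cross-section is an assumption;
# the post does not state the duct dimensions.
P_ATM_INWC = 406.8             # 14.7 psia expressed in inches of water column
DELTA_P_INWC = 4.0             # target gauge pressure, in. w.c.
VOLUME_FT3 = 6.0 * 6.0 * 40.0  # 1,440 ft^3 of closed discharge duct

# Extra standard volume needed is the duct volume times the fractional
# pressure rise above absolute ambient pressure.
extra_scf = VOLUME_FT3 * DELTA_P_INWC / P_ATM_INWC
print(f"extra air required: {extra_scf:.1f} standard cubic feet")  # ~14.2
```

A 4 in.w.c. rise is only about 1% of absolute atmospheric pressure, which is why so little additional air is needed to pressurize a sealed volume.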

So there you have it:  my theory about why the lags introduced by the configuration of a large distribution system can make the system challenging to bring online and tune.

In the final post of this series, I will touch on some of the reasons that I think not every system will exhibit the problem I experienced at the MCI Building. And I will look at how we solved the problem in the MCI building, a solution which is also applicable in the general case if you are dealing with a large, complex system.


David Sellers
Senior Engineer – Facility Dynamics Engineering     Visit Our Commissioning Resources Website at http://www.av8rdas.com/

[i] If you want more details on pump physics, you can probably get them by exploring the Energy Design Resources Design Brief titled Pump Optimization and Assessment, which can be found on the Energy Design Resources page of our commissioning resources website.

[ii] For more on linkage systems kinematics, visit Economizers–The Physics of Linkage Systems at https://av8rdas.wordpress.com/2015/10/04/economizersthe-physics-of-linkage-systems-2/.

[iii] One of the interesting things about large ducts (in a nerdy sort of way) is that while they may be operating at a fairly low friction rate due to the large cross-section they contain relative to the perimeter, the velocities at the low friction rate can be quite high. As a result, the velocity pressure will also be quite high. Since duct fitting pressure drops are a direct function of velocity pressure, a string of closely coupled (interactive) fittings like those that existed at the MCI building to get from the roof, into the building and over to the distribution shaft can represent a significant pressure drop, even though the friction rate of the duct they are serving is fairly low.

[iv] The Howden/Buffalo Fan Engineering Manual includes a discussion of fan system start-up characteristics, including performance curves in Chapter 15. That chapter also illustrates how inlet guide vanes impact fan performance. You will find a link that will allow you to obtain a free electronic copy of the manual at https://av8rdas.wordpress.com/2017/11/15/howden-buffalos-fan-engineering-handbook/.

[v] It is important to remember that VAV systems operate over a family of system curves with the steepest one generally associated with the condition created by all terminal units operating at minimum flow and the shallowest one created by all terminal units operating at maximum flow. If, for either of these curves, or any one in between, I were to slow the fan down and nothing in the system moved, then the operating point would go through 0 in.w.c. and 0 cfm at 0 rpm. This is different from the operating curve that a VAV system follows as the load drops off while it attempts to maintain a fixed pressure at some point in the system. You will find more information about this at http://www.av8rdas.com/affinity-laws.html#Profile.

Posted in Air Handling Systems, Controls, HVAC Fundamentals