In this post, I will look at how thermal mass impacts a temperature reading. You can see the effect in this video clip, where I apply heat to a sensor inside a well while a meter displays the current flow in the current loop served by the temperature sensor. The current flow is driven by the sensor and changes as the sensor experiences a temperature change.
This post is a continuation of a series I am doing that looks at real world 4-20 mA current loop applications. Having said that, the reality is that the fundamental physical principles behind what we are looking at in these posts apply across the board.
For instance, a Dwyer Magnesense that has a 0-5 VDC or 0-10 VDC output will show the same position sensitivity issues as the 4-20 mA version I discussed in the previous post. In fact, a pneumatic static pressure transmitter that uses a diaphragm as the sensing mechanism could do the same thing; after all, gravity is gravity and the vector in our frame of reference is down.
In the context of the current post, the mass of a temperature sensor and the thermal system it is a part of will impact the response characteristic of the system it serves, no matter if the sensor is a 4-20 mA sensor like the one I use in the first part of the post, a data logger sensor like the one I use later on, or a pneumatic sensor like the ones you might encounter in legacy control systems out in the field.
The following links will jump you to different places in the content of this post in case you don’t want to read the entire thing.
- What You See at the OWS May Not Be What is Going On
- A Closer Look at the System in the Video
- Well, Well, Well, What Have We Here?
- The Sensor Reaction Without A Well
- Sensor Mass vs. Thermal Response Characteristics
- Experimenting with Sensor Response Characteristics
- The Experimental Results
- The Bottom Line; What Does This Mean in a Working Control System?
What You See at the OWS May Not Be What is Going On
If you watch the opening part of the video, you will see that there is a noticeable lag between the time when heat is applied to the thermometer well and when the current flow in the current loop (which is driven by the temperature of the sensor inside the well) starts to go up. If you let the video run to the end, you will likely notice that:
- The current continues to rise for a significant amount of time after the source of heat is removed, and
- It takes a very long time for the current to return to the value it started at.
Taking a Look at the Data
If you pull data from the logger that the current loop was connected to and graph it, the result looks like this.
As you can see from the graph, the temperature at the sensor inside the well continues to rise for nearly a minute after the hair dryer is turned off. And after nearly 4 minutes, the temperature at the sensor has not returned to its starting point.
The Real World Implication
That means that if this were a data point in a control system and you were watching it from the Operator's Work Station (the OWS, as they say out in the field), you might have the impression that heat was being added to the system for 5 or 6 minutes when, in fact, the heat was only added during the first 13 seconds of the event. That is a result of the fact that there are a number of things between you and the sensor in most control systems, as illustrated in this slide from a recent class.
- The transmitter that converts the low level signal from the sensor to something that can be accurately sent over a distance can introduce errors and lags.
- The input section of the controller makes an analog to digital conversion (A to D conversion) that impacts the resolution that can be achieved.
- The controller may be set up to only send data further up the network if the input changes by an amount termed the Change of Value (COV) limit, a parameter that is typically set by the installer/programmer as a part of the control system set up.
- The network controller may only ask for the data if some other device on the system requests it.
- The digital data that is transmitted around the network needs to be converted back to analog data (D to A conversion) for display at the work station.
- The operator has to interpret and react to the data appropriately (more than once, my bifocals have caused me to see a 3 as an 8 or something like that and I have reacted based on the wrong information).
- The command from the operator goes through an A to D conversion so it can be transmitted back down the network.
- When the command gets to the controller, it has to go through a D to A conversion in order to be able to modulate the valve.
- The signal to the valve actuator has to change enough for the actuator to want to move. For instance, in a pneumatic system, a volume of air has to flow through a pipe and the volume has to be large enough to generate a meaningful movement of a diaphragm or piston. For a large actuator fed by a small tube, this can take some time.
- The motion of the actuator has to be transmitted to the valve plug. If the linkage system is loose, a movement at one end may not cause an immediate movement at the other end.
- The valve plug has to move enough to allow the flow to change.
- The flow has to change enough to cause a change in the temperature in the system, a phenomenon that can have its own convoluted chain of events associated with it.
- The change in the system has to manifest itself at the point where the sensor is located before there is any chance of the control system “knowing” that anything happened.
- The mass of the sensor has to absorb or give up enough energy to allow the sensor to detect a change.
The thermal lag introduced by the well and the mass of the sensor itself is only one element in the chain of events I just described. But as you might infer from the video, it is a significant one. And it is one that you might be able to change if you needed to, as we will discover as we move through the post. That can be a handy thing to know if you are working with a recalcitrant control loop. More on why that is true will show up towards the end of the post.
A Closer Look at the System in the Video
To understand all of this a little better, let's take a look at the components in the system in the video. As a frame of reference, here is what a typical 4-20 mA current loop looks like schematically.
If you need a refresher on how a typical current loop works, you will find the details in a series of blog posts starting with one titled 4-20 Milliamp Current Loops: Why Use Them?, including a description of the basic operating principle in the second post of the series.
The circuit we used in the experiment behind the video was a variation on the theme above that looked like this.
Physically, the components looked like this (although they are moved around a bit in the video from what is shown here).
Let's take a look at each of the components in a bit more detail.
The DC Power Supply Panel
This is the panel I show you how to build as a part of the series of blog posts on current loops. The green box is the actual power supply and the rest of the items are pieces of electrical hardware like terminal strips, cord and cable connectors, and DIN mounting rail. There is more detail on the various parts in the posts about how to build the panel if you are interested. Its function is to provide the electrical power we need for the current loop to work.
The Temperature Transmitter
The temperature transmitter is the heart of the system at the measurement end. If you look “under the hood” you will find a number of components, one of which is the actual 1,000 ohm platinum Resistance Temperature Detector (RTD) …
… which is inside the stainless steel tube at the tip and is connected to the red and white wires. The other component is the circuit board …
… which measures the resistance change that occurs as the temperature at the RTD changes and converts it to a 4-20 mA signal that is directly proportional to the temperature.
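As a rough sketch of that conversion, here is what the transmitter's job looks like in code. The linear Pt1000 approximation, the common IEC 60751 coefficient, and the 0-100°C span are illustrative assumptions, not the settings of this particular transmitter:

```python
# Illustrative sketch of what the circuit board does: resistance in,
# loop current out. The span and coefficient below are assumptions.

R0 = 1000.0        # Pt1000 RTD: 1,000 ohms at 0 degrees C
ALPHA = 0.00385    # common IEC 60751 coefficient, per degree C
SPAN_LO, SPAN_HI = 0.0, 100.0   # assumed transmitter range, degrees C

def rtd_temperature(resistance_ohms):
    """Linear approximation of temperature from Pt1000 resistance."""
    return (resistance_ohms / R0 - 1.0) / ALPHA

def loop_current_ma(temp_c):
    """Map the temperature span linearly onto 4-20 mA, clamped at the rails."""
    fraction = (temp_c - SPAN_LO) / (SPAN_HI - SPAN_LO)
    return 4.0 + 16.0 * max(0.0, min(1.0, fraction))

# A Pt1000 reading 1,192.5 ohms is at mid-span (50 degrees C), so 12 mA
t = rtd_temperature(1192.5)
print(round(t, 1), round(loop_current_ma(t), 1))
```

The real transmitter also linearizes the RTD curve and handles lead-wire compensation, so treat this as the concept rather than the circuit.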
This particular transmitter can have its range set to use a number of different zero points and spans via the DIP switches in the middle of the circuit board. Not all transmitters have this type of flexibility.
The two potentiometers located at the bottom of the circuit board (the blue boxes at each end of the terminal strip) are zero and span adjustments that allow you to do a field calibration if you have an accurate enough instrument to use as a reference. Most transmitters will have this type of adjustment available.
You should think carefully before you actually try to make a field calibration as it is trickier than you might imagine (this from someone who didn’t think carefully before doing it one time). I’ll cover that in a different blog post sometime. But for the time being, this link will take you to an instruction sheet for the transmitter if you are really interested in knowing more about it.
The circuit board normally is mounted into some slots inside the box.
The red and yellow jumpers that you see attached to the transmitter are jumpers we installed to facilitate a couple of experiments, including the one we are discussing here. Basically, they allow us to quickly put meters and other items in series with the RTD and the current loop without having to loosen a screw in a terminal strip. In a real world field application, they would not be there.
The Fluke Multimeter
The multimeter is simply an electrical test instrument that allows various electrical parameters like voltage, resistance, and current flow to be measured.
The model I have allows me to measure temperatures with a thermocouple in addition to electrical parameters and it is capable of logging data. For the experiment, I have it set to measure DC milliamps (the knob is pointed at the letters A and mA, which stand for Amps and milliAmps respectively and the little straight line over them means DC or direct current, in contrast with AC or alternating current, which is represented by the wavy line on some of the other settings).
In addition, to measure amperage, I needed to hook the leads to a set of input terminals dedicated to amperage measurements instead of the input terminals used for the other parameters. This is typical of most of the multimeters I have been around.
The Data Logger 4-20 mA Input Cable
This looks like a couple of pieces of wire with a jack attached, but it's actually more than that. Inside the black shrink-wrap is a precision resistor that has been selected to convert the 4-20 mA current flow into the 0.5 to 2.5 VDC input that the logger is capable of reading.
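The resistor value follows from Ohm's law; a 125 ohm shunt (a value I am inferring from the 4-20 mA to 0.5-2.5 VDC mapping rather than quoting from a spec sheet) does the conversion:

```python
# Ohm's law sketch of the input cable: V = I * R, so a 125 ohm shunt
# turns 4-20 mA into 0.5-2.5 VDC. The value is inferred, not measured.

R_SHUNT = 125.0  # ohms

def logger_voltage(current_ma):
    """Voltage the logger sees across the shunt for a given loop current."""
    return (current_ma / 1000.0) * R_SHUNT

print(logger_voltage(4.0), logger_voltage(20.0))   # 0.5 2.5
```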
The Data Logger
This is the heart of the system at the data gathering end of things. This particular logger is an Onset Hobo U12, capable of monitoring 4 separate channels of data. Later in the post, you will see two different Hobo loggers in use; they are the latest product offering and feature a display and more memory in addition to the logging capabilities of the U12.
Well, Well, Well, What Have We Here?
Folks, I’ve gotta million of em.
Returning to the serious nature of our discussion, the thermometer well that shows up in the video is not illustrated in the picture of the system set-up; the middle well in this picture is the one in the video clip.
And, in the context of our experimental system, it introduces by far the biggest lag into the measurement process. With the exception of the sensor itself (more on that in a minute), the reaction time of the other components in our current loop is likely fractions of a second when they are stimulated by something.
Not so for the thermometer well, at least for the inside of the well relative to the outside when heat from the hair dryer is introduced. The well introduces a lag into the system because it is relatively massive. Thus, the hair dryer needs to warm the mass of metal from the outside in before the sensor inside the well can "notice" a change.
How much of a lag is a function of a number of factors, including the mass of the well and the power of the hair dryer. But no matter what, the reaction time of the sensor inside the well will be different from what it would be if the well were not there.
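That intuition can be put into a simple lumped-capacitance model, where the time constant is the sensor's heat capacity divided by how fast the surroundings can move heat into it. Every mass, area, and heat transfer coefficient below is a made-up illustrative number, not a measurement of the actual well:

```python
import math

# Lumped-capacitance sketch: tau = (mass * specific heat) / (h * area).
# A bigger mass means a bigger tau, which means a slower response.
# All of the numbers below are illustrative assumptions.

def time_constant(mass_kg, c_j_per_kg_k, h_w_per_m2_k, area_m2):
    return (mass_kg * c_j_per_kg_k) / (h_w_per_m2_k * area_m2)

def temperature_at(t_s, t_start, t_env, tau_s):
    """First-order response: T(t) = T_env + (T_start - T_env) * exp(-t / tau)."""
    return t_env + (t_start - t_env) * math.exp(-t_s / tau_s)

bare_tau = time_constant(0.002, 500.0, 50.0, 0.001)   # small bare sensor
well_tau = time_constant(0.150, 500.0, 50.0, 0.004)   # sensor plus well

# After 60 seconds in 120 degree F air, starting from 70 degrees F, the
# bare sensor has nearly caught up while the well-mounted one has not.
print(round(temperature_at(60.0, 70.0, 120.0, bare_tau), 1))
print(round(temperature_at(60.0, 70.0, 120.0, well_tau), 1))
```

The model ignores the temperature gradient through the well wall, but it captures the basic point: more mass, more lag.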
The Sensor Reaction Without A Well
To demonstrate this, I repeated my experiment but this time with the thermometer well removed.
Here is what the data looked like contrasted with the data from the experiment when the well was in place.
Clearly, the mass of the well makes a big difference in the response characteristic of the system. But even without the well, the mass of the sensor itself introduces a lag and shifts the information delivered from what is actually happening. In fact, in our test set-up, the sensor itself is the second most significant lag.
Sensor Mass vs. Thermal Response Characteristics
It turns out that there are many, many temperature sensor designs out there, all of which have their advantages and disadvantages. Typically, all other things being equal, the more massive sensors are more durable. But the more fragile, less massive sensors will respond more quickly and accurately to a change.
For that reason, I actually carry two Type K thermocouples with me in the field which are illustrated below (the picture on the right is a close-up of the tip of the probes).
The thermocouple on the left is a fairly durable, sheathed element that I use most of the time. But if I want to see or log how quickly something changes in response to a stimulus of some sort, I’ll grab the one on the right because it has so much less thermal mass.
Experimenting with Sensor Response Characteristics
To give you a sense of the difference between the probes in terms of how they react to a change, I did another experiment, this time contrasting how two probes reacted to sudden temperature changes. As a clarifying point, I only have one logger that will interface with a thermocouple. So, I compared the low mass thermocouple with a thermistor encased in a chrome-plated copper sheath, an arrangement similar to the Type K thermocouple on the left in the image above.
Performing a Relative Calibration
Since I wanted to compare the reaction of two different probes with each other, the first thing I did after launching the loggers was to place them in close proximity where they would not be influenced by anything and thus would both see the same stable temperature.
The tube of paper towels in the picture is my little steady state environment and I try to make a habit of doing something like this any time I am going to log data because most of the time, I will be concerned with the difference between the values I am logging rather than their absolute value.
Here is where the two probes ended up after being inside the tube of paper towels for about 45 minutes (Note that the active channel on the 4 channel logger on the left is the one in the upper-left corner. The data on the other channels is meaningless; it’s just what the logger reads with nothing plugged into the inputs for those channels.)
By logging the same temperature with both sensors for a while, I can develop a correction factor that I can then apply to correct the data from one sensor relative to the other. Here is a screen shot of how I did that for the data from this experiment.
In general terms, the formulas in the blue area calculate the indicated statistics based on the data in the orange area, which is the difference between what the two probes read while they were in the tube of paper towels. (Note that the screen shot only shows the last couple of readings; the calibration period was actually about 45 minutes of once-per-second data.)
Then, in the cells highlighted in red, the correction factor is added to the data read from the thermistor to correct it to the thermocouple.
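The spreadsheet arithmetic reduces to a few lines of code. The readings below are made-up example values, not the actual logged data, but the mechanics are the same:

```python
from statistics import mean, stdev

# Sketch of the relative-calibration math: average the difference between
# the probes over the stable period, then add that offset to the thermistor
# data. The readings here are made-up examples, not the real logged data.

thermocouple = [69.82, 69.85, 69.83, 69.84, 69.83]   # reference probe, deg F
thermistor   = [69.72, 69.74, 69.73, 69.75, 69.73]   # probe being corrected

diffs = [tc - th for tc, th in zip(thermocouple, thermistor)]
offset = mean(diffs)                          # the correction factor
corrected = [th + offset for th in thermistor]

print(round(offset, 2))         # about 0.10 deg F in this made-up example
print(round(stdev(diffs), 3))   # small spread = stable calibration period
```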
Deciding which sensor to use as the reference and which to correct was an arbitrary decision and either sensor would have worked for either role in this case. In the field, I would likely decide which sensor was the key sensor and calibrate the other sensors to the key sensor.
For instance, when I was going through this process for one of the make up air handling systems as a facilities engineer when I worked at the Komatsu wafer fab in Hillsboro, Oregon, I calibrated all of the sensors in the AHU to the discharge temperature sensor since that was the condition we had to “nail” to meet the clean room quality control criteria in terms of temperature and relative humidity. If you want some perspective on why this is important in a working system, I did a blog post that illustrates that a while back titled Relative Accuracy.
Synchronizing the Data Time Stamps
Another thing that was important in terms of allowing me to compare data was to make sure I had all of the devices that would be doing things during the experiment synchronized in terms of what they thought the time was. This included me since I would be manipulating the sensors during the experiment. So, I decided to use the World Clock feature in my iPhone because my computer also was referenced to that time source and the logger clocks are set to the computer clock when I launch them.
Just to make sure I had everything in sync, I disconnected the probe from each logger briefly at a specific point in time based on my phone’s clock. I made a note of when I did this for each logger so I could compare the time stamp on the break in the data that was generated by disconnecting the probe with the time that I actually disconnected the probe based on the iPhone clock.
The Experimental Procedure
The actual procedure I planned to use was fairly simple. Specifically, I had a thermos full of ice water and a tea kettle full of boiling water, and I planned to first immerse the probes in the ice water and let the temperatures stabilize. Then, I would quickly move the sensors into the steam from the tea kettle and let them stabilize there. I would manually note the time that I made the change and try to do it as quickly and consistently as possible. Here is a picture of my set-up with the probes in the ice bath.
Next, I would cycle the probes back and forth between the ice water and the steam to see how thermal mass impacted the way a periodic temperature swing was portrayed in the data stream. I planned to do a once-a-minute cycle and a once-every-15-seconds cycle.
Finally, I would expose the sensors to ambient air and allow them to re-stabilize at that condition.
Note that my ice bath was not intended to be a perfect 32°F reference; for instance, I did not use distilled water to make it or the ice in it and I did not go to extreme lengths to isolate it from ambient impacts. I just tried to create a steady state cold reference at about the melting point of ice by filling a thermos with ice water and figuring that as long as there was both ice and water in the thermos, it was a saturated system going through a phase change at a steady temperature.
The Experimental Results
Here are the results of my experiment after I pulled the data and loaded it into Excel and corrected one sensor relative to the other. This first image is the total data set to give you an overview. I will then focus on different portions of the experiment so you can better see what actually happened.
This next graph is a close up of the response of the sensors when I immersed them in the ice bath.
The gray band represents the window during which I was moving the sensors and pushing them into the ice water. I was trying to do it exactly on the minute (in this case, 9:21) and my observation was that I was probably “accurate” in doing it plus or minus about 1 second. So, the gray bar is centered on the time I had targeted and the width of the bar represents the window of time during which the event likely happened.
As you can see, the low mass sensor adjusted to the lower temperature in about one second (that was as fast as the loggers could sample, so it may have actually reacted even faster, I just could not pick it up with the instruments I was using). In contrast, the higher mass sensor took 10-15 seconds to accurately reflect the temperature in the ice bath.
Here is the graph that shows what happened when I moved the sensors from ice water to steam.
Again, the low mass sensor responded more quickly than the high mass sensor. But, because the temperature change was larger, it took the low mass sensor about 8-10 seconds to start to stabilize while it took the high mass sensor more like 20-30 seconds.
This next graph provides an overview of the data during the time I was creating the temperature cycles by shuffling the probes back and forth, first once a minute and then, once every 15 seconds and then finally back to ambient air.
I will take a close look at each cycle, but first I wanted to point out something that is revealed in this graph.
Specifically, notice how both sensors tend to react to immersion in water much more quickly than to immersion in air or steam. That's because the thermal conductivity of each of those substances is different, as illustrated in the table below.
As you can see from the table, water is about 24 times better at conducting heat than air, and steam is about 2/3 as good as air. So, part of the reason for the lag in the response characteristic when I move the probes between water, steam, and air is simply the ability of the fluid the sensor is immersed in to conduct heat to or from the mass of the sensor.
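Since the table is an image, here are approximate conductivity values consistent with those ratios; the exact figures vary with temperature, so treat them as rough assumed numbers rather than handbook data:

```python
# Approximate thermal conductivities in W/(m*K); rough assumed values
# chosen to be consistent with the ratios quoted in the text.
conductivity = {"water": 0.58, "air": 0.024, "steam": 0.016}

water_vs_air = conductivity["water"] / conductivity["air"]
steam_vs_air = conductivity["steam"] / conductivity["air"]

print(round(water_vs_air, 1))   # roughly 24 times better than air
print(round(steam_vs_air, 2))   # roughly 2/3 as good as air
```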
There are likely other heat transfer phenomena impacting this too, like radiant and convective energy transfer. But my point is that while we have been talking about the lag we are observing in the context of the mass of the sensor, and that is a big factor, the characteristics of the fluid the sensor is immersed in will also come into play.
That means that all other things being equal a given sensor will probably show a longer response time in air than it will in water.
The Once a Minute Cycle
If we take a close look at a typical 1 minute cycle, this is what we see.
At this point, it's probably not a huge surprise to see that the combination of low sensor mass and the high thermal conductivity of the water causes the less massive sensor to react much more quickly than the more massive one.
I am not totally sure why the small sensor's temperature rises and then falls again, but I suspect it is because I had the two sensors tied together, and the energy given up by the more massive sensor as it cooled off probably interacted with the smaller sensor. If you look at what happens over time back in the graph that shows the response to immersion in an ice bath, the two lines tend to converge around 32°F, as illustrated in the photo below, which was taken at the end of the ice bath time interval.
The results after a significant period of time in steam were similar.
Offsets vs. True Calibration
The primary reason the two loggers show different values in the pictures above is probably related to the sensor accuracy and maybe the accuracy of the A to D conversion process in the loggers. My data sets are calibrated by applying a fixed offset to the data from the thermistor based on the string of data I took with the sensors inside the roll of paper towels.
But the reality is that the offset probably varies with temperature, meaning my 0.10 °F correction factor for the thermistor used by the 4 channel logger on the left is about right at 69-70°F (the temperature in the roll of paper towels). But it probably is not the right number at the ice point or at the boiling point.
That is why a true calibration effort will involve checking data at multiple points and making both a zero and a span correction, not the simple offset I used for this experiment. More on that in another blog post at some point.
The Every 15 Seconds Cycle
This is what the first cycle looked like as I switched from once a minute to once every 15 seconds.
That’s probably about what you were expecting given what we have seen so far. But as I repeated the more rapid cycle, another phenomenon emerged as can be seen from this graph of the data from that portion of the experiment.
If you look closely, you will notice that the valley, or low point, achieved by the more massive sensor tends to shift up. I suspect this has something to do with how much energy the sensor can pick up during the hot part of the cycle versus how much it can get rid of during the cold part in the amount of time it spends at each condition. The same thing is probably true at the peaks, but it is not as obvious because of the difference in conductivity between the steam and the water. I also suspect that the point where things come into equilibrium varies with the cycle frequency; fodder for another experiment sometime, maybe.
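A quick first-order simulation reproduces that valley drift. The time constants below are assumptions chosen to make the effect visible, with a longer time constant in steam than in water since the steam moves heat into the sensor less effectively than the water pulls it out:

```python
import math

# Model the high-mass sensor as a first-order lag with a different time
# constant in each bath. The tau values are illustrative assumptions.
TAU_STEAM, TAU_WATER = 60.0, 20.0   # seconds
HALF_CYCLE = 15.0                   # seconds spent in each bath

def soak(t_sensor, t_bath, tau):
    """Sensor temperature after HALF_CYCLE seconds in a bath."""
    return t_bath + (t_sensor - t_bath) * math.exp(-HALF_CYCLE / tau)

t = 32.0          # start stabilized in the ice water, deg F
valleys = []
for _ in range(4):
    t = soak(t, 212.0, TAU_STEAM)   # into the steam
    t = soak(t, 32.0, TAU_WATER)    # back into the ice water
    valleys.append(round(t, 1))

print(valleys)   # each valley sits a little higher than the one before
```

With these assumed numbers, the sensor picks up more energy in each steam soak than it can shed in the following ice water soak, so the low points creep upward until the cycle reaches a steady oscillation.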
The Bottom Line; What Does This Mean in a Working Control System?
At this point (assuming you are still actually reading this), you are probably thinking something like: this is all well and good and very interesting, but what does it have to do with reality? So before I close, I thought I would share what I think the implications are.
What You See is Probably Not What You Are Getting
I mentioned this before, and in some ways, it is probably the most important lesson. And it is easy to lose sight of in the middle of a troubleshooting session.
Different Sensors React in Different Ways
Not all temperature sensors are created equal. Not only does the technology used to measure temperature differ from sensor to sensor, there are also differences in physical characteristics like mass and in application details like thermowells. All of these items will impact the response characteristic generated when a sensor is subjected to a change.
The Same Sensor Will React in Different Ways in Different Mediums
Since energy transfer to and from the sensor is a function of a number of factors, like the thermal conductivity of the fluid it is in and the temperature difference between it and its surroundings, the same sensor applied in a different medium, say water instead of air, will show a different response characteristic in one medium vs. the other.
Sensor Reaction Times = Apparent Dead Time in a Control Loop
If you observe a control process, you will come to realize that there is a measurable time lag between when you make a change and when the change shows up as a variation in one of the parameters you are watching. For instance, if you change the set point for the control loop regulating a steam valve that controls the hot water supply temperature in a heating system, the indicated supply temperature doesn’t instantly increase. Rather, some time has to elapse before you see the results of the change you made.
The reaction time of the sensor is one of the reasons for this lag, along with a long list of other things like the one I made early on in this blog post. The accumulation of all of these lags is termed the apparent dead time and, to quote David St. Clair on the topic, it's all about the lags. Here is why David says that.
The image below was generated using David's Tuning 101 software, which he developed to let you try out the ideas he discusses in his Controller Tuning and Control Loop Performance book (the link takes you to the blog post on both items, which has links to where you can order copies; they are really worth getting if you are in this business).
The image compares the response generated by an upset at time = 0 for:
- An open loop (red line) i.e. a loop that has no control process running to react to the upset,
- A poorly tuned loop (the blue line),
- A well tuned loop (the grey line), and
- A loop that is starting to go unstable, thereby revealing its natural frequency (the black line)
The apparent dead time is the time interval between when the upset was initiated (time = 0) and the time when the process starts to react (about time = 0.5 to 0.6).
The natural frequency of the loop is the period of time for one oscillation in the loop that is starting to go unstable.
If you get into this a bit, a couple of interesting and useful things can be derived based on the apparent dead time. For one thing, the natural period will be about 2 to 4 times the apparent dead time.
You can prove that mathematically or with a little thought experiment; David does both in the appendix of his book.
The natural period is a useful piece of information to have because it can be used to set the initial tuning parameters for a PID control process based on some rules developed by John Ziegler and Nathaniel Nichols.
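For reference, the classic Ziegler-Nichols closed-loop rules reduce to a few lines; the ultimate gain and period in the example call are made-up numbers, not values from this post:

```python
# Classic Ziegler-Nichols "ultimate cycle" PID tuning: find the gain Ku
# that makes the loop oscillate steadily and the period Pu of that
# oscillation, then derive starting settings from them.

def zn_pid(ku, pu_s):
    return {
        "Kp": 0.6 * ku,      # proportional gain
        "Ti": pu_s / 2.0,    # integral time, seconds
        "Td": pu_s / 8.0,    # derivative time, seconds
    }

# Example with assumed values: Ku = 4.0, Pu = 120 seconds
print(zn_pid(4.0, 120.0))
```

These are starting points to be refined in the field, not final settings.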
But the really fascinating thing about apparent dead time (at least to me) is this. If you look at the responses of all four of the control loops in the example, they are all virtually identical for about 1/4 to 1/2 the natural period.
And, if you look at the well tuned control loop, the settling time is about twice the natural period.
That means that by taking the time to observe the apparent dead time in a control process, which is fairly easy to do and is probably revealed in the trends you have from a system when it starts up, you know two very important things.
- You know how far the process is going to deviate from set point no matter how well things are tuned (it’s the deviation you see at about one to two times the apparent dead time).
- You know how quickly the process will settle in the best of all situations (about eight times the apparent dead time, or two cycle times at the natural frequency).
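Those rules of thumb reduce to simple arithmetic. As a sketch, using an assumed 30 second apparent dead time:

```python
# Rules of thumb from the discussion above: the natural period is about
# 2 to 4 times the apparent dead time, and the best-case settling time is
# about 8 times the apparent dead time (roughly two natural periods).

def loop_estimates(dead_time_s):
    return {
        "natural_period_low": 2.0 * dead_time_s,
        "natural_period_high": 4.0 * dead_time_s,
        "best_settling_time": 8.0 * dead_time_s,
    }

print(loop_estimates(30.0))   # the 30 s dead time is an assumed example
```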
That means that if you are having trouble with a loop deviating too far from set point before it reacts or taking too long to settle out, i.e.:
- You would like less deviation than you are currently seeing in about one to two times the apparent dead time, and/or
- You would like the loop to settle at its new operating condition faster than about eight times the apparent dead time,
then you probably are not going to solve the problem by adjusting the loop tuning parameters.
Rather, you are either going to have to lower your expectations or you are going to have to do something to reduce the amount of apparent dead time in the system. To address the apparent dead time, you will need to reduce the lags and that is easier to do for some lags relative to others.
For instance, reducing transportation delays might mean you need to make equipment, ducts, and pipes shorter, which is probably not possible if the system is already in place. Or, it may mean moving a sensor so that it is closer to whatever is producing the change in the system, which may be possible but expensive, and may require a system outage if the sensor is in a pipe full of fluid.
In contrast, adding a positioner to a valve to make it react faster or, in the context of this blog post, eliminating a thermometer well or reducing the mass of a sensor to give it a faster response characteristic might be relatively easy. Personally, I have done all of these things to solve problems in the field.
And while I didn't like the fact that removing the thermometer well meant a system outage (as would the need to replace the sensor once it was installed directly in the pipe), there was a lot to be said for having the system deliver the required level of performance. A critical insight in terms of delivering a solution was recognizing the role that thermal mass played in the overall performance of the system I was working on. Hopefully, the information in this post has given you some insight into that as well.
Senior Engineer – Facility Dynamics Engineering