4-20 Milliamp Current Loops: Interpreting Current Loop Information

This is the third in a series of posts on 4-20 milliamp current loops. So far, we’ve looked at why we use 4-20 milliamp current loops in the first place and how they work. In this post, I’ll take a look at interpreting data from them and some of the associated calibration issues. Note that the calibration issues I discuss also apply to other signal technologies, not just current loops.

At their core, current loops transmit data as a function of current flow, so you have to convert the current to the appropriate units of measure to understand the information they convey. The images below illustrate this with a spreadsheet I made to do the math for me and draw a picture of the result.
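In spreadsheet or code terms, the conversion is just a linear interpolation between the bottom of the transmitter’s range at 4 mA and the top at 20 mA. Here is a minimal sketch in Python (the 0-100°F range is a made-up example, not a value from my spreadsheet):

```python
def current_to_units(milliamps, low, high):
    """Convert a 4-20 mA loop signal to engineering units.

    The mapping is linear: 4 mA corresponds to the bottom of the
    transmitter's range (low) and 20 mA to the top (high).
    """
    return low + (milliamps - 4.0) / (20.0 - 4.0) * (high - low)

# Hypothetical temperature transmitter spanned 0-100 degF
print(current_to_units(4.0, 0.0, 100.0))   # bottom of range -> 0.0
print(current_to_units(12.0, 0.0, 100.0))  # midpoint -> 50.0
print(current_to_units(20.0, 0.0, 100.0))  # top of range -> 100.0
```

The same formula, rearranged, is what a spreadsheet cell would contain; only the span endpoints change from transmitter to transmitter.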




Here is an image of the formulas behind the calculations for those who are interested in making their own spreadsheet.


Mine is fairly automated and updates the axis titles and the values in the formulas automatically based on the entries I make in the input and output cells.

I have also added a tab that lets me put in the tolerance for the transmitter and show a trend line comparing how it’s actually working vs. the specified performance. This gives me a quick visual on the measured field performance relative to the ideal and the actual capabilities of the transmitter, as illustrated below. (The first image is the overall screen and the second one zooms in on the formula section of the spreadsheet.)



Coming up with the tolerance band may or may not be straightforward. Some manufacturers state their tolerance in terms of overall accuracy, which takes all of the different variables into account. For instance, the sensing element itself probably has a tolerance which must be combined with the tolerances for the transmitter it is connected to. Transmitters typically have a number of things that can impact their accuracy, like temperature, vibration, mounting position, and drift, to name a few. If there is no overall accuracy statement, then all of these factors must be taken into account. One of the most common ways to do this is to take the square root of the sum of the squares of all of the tolerances.
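The root-sum-square combination is a one-liner; here is a small sketch with made-up tolerance values (the individual error figures below are illustrative, not from any particular transmitter data sheet):

```python
import math

def rss_tolerance(*tolerances):
    """Combine independent error sources by root-sum-square."""
    return math.sqrt(sum(t * t for t in tolerances))

# Hypothetical error budget, all in degF:
#   sensing element +/-0.5, transmitter +/-0.3, temperature effect +/-0.4
combined = rss_tolerance(0.5, 0.3, 0.4)
print(round(combined, 2))  # about 0.71 degF
```

Note that the result is smaller than simply adding the tolerances (1.2°F here), which reflects the assumption that the individual errors are independent and unlikely to all hit their worst case at once.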

It may be hard to believe that mounting position could introduce an error. But if you are skeptical, I captured the impact of mounting position on a diaphragm type pressure transmitter in one of my previous posts if you want to see what that looks like. The Control Design Guide includes a more detailed discussion of the variables that can come into play and impact accuracy between an operator reading a parameter at the operator workstation and the sensor in the field if you want to know more.

This next image is the graph my spreadsheet generates to illustrate the ideal operating curve (solid red line), the window inside of which the assembly should be operating if it meets the factory specs (between the light red dashed lines), and the trend I have measured in the field by measuring the process and milliamp output with a reference standard at two different operating conditions.


Usually (but not always) the performance is better than I have depicted in the illustration; I exaggerated both the tolerance band and the field errors to make the contrast more visible in the small scale image.
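The ideal line and the tolerance window in a graph like this are easy to compute directly. Here is a sketch, with a hypothetical 30-80°F span and a ±0.2 mA tolerance chosen purely for illustration:

```python
def ideal_milliamps(value, low, high):
    """Ideal 4-20 mA output for a process value over the span low..high."""
    return 4.0 + (value - low) / (high - low) * 16.0

def tolerance_band(value, low, high, tol_ma):
    """Upper and lower mA limits given a +/- tolerance expressed in mA."""
    ideal = ideal_milliamps(value, low, high)
    return ideal - tol_ma, ideal + tol_ma

# Hypothetical 30-80 degF transmitter with a +/-0.2 mA tolerance
low_ma, high_ma = tolerance_band(55.0, 30.0, 80.0, 0.2)
print(low_ma, high_ma)  # ideal is 12.0 mA, so roughly 11.8 and 12.2
```

Evaluating these functions over a range of process values gives the three curves in the graph: the ideal line and the two dashed tolerance limits.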

For me, the picture can be worth a thousand words in terms of deciding whether I am going to accept performance as is, make a single point calibration adjustment using an offset in the control system, or truly calibrate the device by manipulating the transmitter zero and span adjustments.

Trend Line Inside the Tolerance Window

If my trend line lies within the tolerance window, then the transmitter is doing about as well as can be expected in terms of meeting the specs. I still may decide to make an adjustment to make the reading consistent relative to other transmitters in the system, but if I do that, I may actually be decalibrating the transmitter relative to an absolute reference in favor of making things read consistently from an operating standpoint. For example, two transmitters on opposite sides of a coil that meet their specs can show a temperature rise that is not there if one transmitter is reading on the high side of its tolerance while the other is reading on the low side.

Trend Line Outside the Tolerance Window

If my trend line lies outside the tolerance band, but the transmitter only operates over a narrow range, I may simply decide to put in an offset that shifts my operating line up or down relative to the ideal line so that, for the process variable range I am concerned with, the indication will be within the tolerance that I need.

For example, if we zoom in on the illustration above, you can see that the reading from the transmitter will be within the tolerance band between about 46°F and 62°F.

By adding an offset, we can get the window of accuracy to shift down into the 45-61°F range, which may be just fine for a sensor that is measuring something that runs in a narrow range like the discharge air temperature from an air handling unit. But it would just shift the problem to another location if the sensor was measuring something with a broad operating range like outdoor air temperature.

The reality is that a single point offset is probably the most common type of calibration procedure in use out in the field. And while it can provide satisfactory results over a limited range, it’s important to understand that it does not really calibrate the sensor over its entire span. In essence, it makes the sensor accurate at one point, but as you move away from that point, the error will tend to increase one way or the other.
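To make that concrete, here is a small sketch (with made-up numbers) of a transmitter that has both a zero shift and a span error, corrected with a single-point offset. The offset zeroes out the error at the calibration point, but the span error remains everywhere else:

```python
def transmitter_reading(true_value):
    # Hypothetical drifted transmitter: a 4% span (slope) error
    # plus a 3 degF zero shift relative to the ideal reading.
    return 0.96 * true_value + 3.0

# Single-point offset determined at one reference condition (55 degF)
reference = 55.0
offset = reference - transmitter_reading(reference)

for t in (45.0, 55.0, 65.0):
    corrected = transmitter_reading(t) + offset
    # Residual error is zero at 55 degF and grows as we move away
    print(t, round(corrected - t, 2))
```

Running this shows a residual of about +0.4°F at 45°F, exactly zero at the 55°F calibration point, and about −0.4°F at 65°F, which is the behavior described above.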

True Calibration

True calibration requires adjusting both the zero point of the transmitter (analogous to the y intercept or “b” in the y = m * x + b equation) and the span (analogous to the slope or “m” in the equation). To do this, you need to make measurements at two points, preferably at or near the upper and lower limits of the transmitter’s operating range.
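The two measurements give you two equations in two unknowns, which solve directly for the slope and intercept. Here is a sketch with hypothetical field data (reference standard vs. transmitter reading, both in °F):

```python
def two_point_calibration(x1, y1, x2, y2):
    """Solve for span (m) and zero (b) from two reference measurements.

    x = reference-standard value, y = transmitter reading; the result
    characterizes the line y = m * x + b the transmitter is actually on.
    """
    m = (y2 - y1) / (x2 - x1)   # span (slope)
    b = y1 - m * x1             # zero (y-intercept)
    return m, b

# Hypothetical data: reference reads 40.0 and 80.0, transmitter
# reads 41.2 and 82.0 at those same two conditions
m, b = two_point_calibration(40.0, 41.2, 80.0, 82.0)

# Inverting the line removes both the zero and span error:
corrected = (82.0 - b) / m
print(round(m, 3), round(b, 2), round(corrected, 1))
```

With these numbers the fit comes out to a span of 1.02 and a zero offset of 0.4, and the corrected reading lands back on 80.0, matching the reference standard at both ends of the span rather than at a single point.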

If you think about that last point for a minute, you are probably gaining some insight into how difficult it would be to actually do a true calibration of a sensor in the field. You either have to force the system into an operating mode that creates a condition at one end and then the other end of the span, or remove the sensor from the system and use some sort of device like a constant temperature bath to simulate the desired condition.

In the former case, you could easily subject a system to damaging conditions. For instance, waiting for an extreme day and driving an air handling system to 100% outdoor air to expose a sensor to near 0°F temperatures for the purposes of calibration has obvious risks in terms of the potential to freeze a coil or quickly over-cool the area served.

In the latter case, removing a sensor from the system it serves is not as easy as it sounds. First, you have to get to it. Then you have to disconnect the wires, physically remove it from the system, install it in the calibration standard, and reconnect the wires, which probably need to be extended to the location where the calibration standard is sitting. Then you have to reverse that process after you make any necessary adjustments.

To gain some insight into the preceding discussion, here is a picture of the discharge side of a preheat coil serving a large AHU in the Midwest.

The unit has 8 freezestats (the short copper tubes zigzagging over the coil face area) and one averaging temperature sensor (the long copper tube stretching diagonally across the coil face from the upper left to the lower right of the picture).  The actual freezestat mechanisms are the little gray boxes you can make out on the wall at the right side of the picture.  Here is a close-up of one.

The temperature transmitter associated with the averaging sensor is on the other side of the wall.  Here is a picture of it along with the transmitter for the sensor on the entering side of the coil (incidentally, the coil should have been off on the day I took this picture, so this is a picture of a retrocommissioning opportunity).

In light of the pictures, consider the task of removing the sensors to place them in a reference standard.  Things like division of labor rules can make the process even more complicated.

Now that we have discussed the hows and whys of current loops, in the next post we can start to take a look at how you can build up a power supply panel that allows you to use a 4-20 milliamp transmitter in a field deployment of a typical data logger.

David Sellers
Senior Engineer – Facility Dynamics Engineering

Click here for an index to previous posts

