This article provides background, use cases, and technical information
about the implementation of the T chart in Minitab Statistical Software.
The T chart is a control chart used to monitor the amount of time between
adverse events, where time is measured on a continuous scale. The T chart is an
extension of the G chart, which typically plots the number of days between
events or the number of opportunities between events, where either value is
measured on a discrete scale. Like the G chart, the T chart is used to detect
changes in the rate at which the adverse event occurs.

When reading the T
chart, keep in mind that points above the upper control limit indicate that the
amount of time between the events has increased and thus the rate of the events
has decreased. Points below the lower control limit indicate that the rate of
adverse events has increased.
The T chart is included in other software packages, all of which transform
the data for time between events to make it more normally distributed. The
transformed data are used to determine the control limits, which are then
converted back to the original data scale and plotted with the original data.
The problem with this approach is that the tails of the transformed data do
not fit a normal distribution very well. With the transformation approach, the
probability of a point being outside the control limits is only 0.0007546. In
contrast, with a standard control chart based on a normal distribution (such as
an I chart or an Xbar chart), the probability of a point being outside the
control limits is much higher, 0.00269. The transformation method for a T chart
results in an unusually low probability of out-of-control points and thus an
inflated Average Run Length (ARL).
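Because the in-control ARL is simply the reciprocal of the probability that a point falls outside the limits, the two rates quoted above translate directly into ARLs. A quick stdlib-only sketch (the 0.0007546 figure is the one cited above):

```python
import math

# two-sided probability of exceeding +/- 3 sigma on a normal-based chart
p_normal = math.erfc(3.0 / math.sqrt(2.0))  # ~0.0027
p_transform = 0.0007546                     # transformation-based T chart, cited above

arl_normal = 1.0 / p_normal        # ~370 points between false alarms
arl_transform = 1.0 / p_transform  # ~1325 points: the inflated ARL
print(round(arl_normal), round(arl_transform))
```

An in-control false alarm only every ~1325 points, instead of every ~370, is what makes the transformation-based chart so slow to signal.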
Simulations (see Table 3 below) show that the false alarm rate increases
exponentially for extremely skewed data and decreases to almost 0 for data that
are less skewed. In general, a T chart implemented with the transformation
approach has very low detection capability, especially at the lower control
limit. The low power at the lower control limit means that the chart has
virtually no ability to detect increases in the adverse event rate.
Another approach to the T chart is to model the time between events using an
exponential distribution. The basis for this model is that, if adverse events
occur according to a Poisson model, then the time between events should follow
an exponential distribution. This approach uses percentiles of the exponential
distribution corresponding to the ± 1, 2, and 3 sigma zones in a standard chart
based on the normal distribution. These percentiles are sometimes called
“probability limits”. The use of probability limits means two things: the
probability of a point falling outside the limits matches the nominal value for
a chart based on the normal distribution, and the resulting limits are generally
not symmetric around the center line.
The issue with the exponential distribution is that, although it is the
theoretically correct distribution for time between Poisson events, the data in
practice often follow a slightly different model. The data may appear to be
exponentially distributed, but may actually deviate enough to seriously impact
the ARL and false alarm rate. If the data come from a distribution that is more
skewed than an exponential distribution, the false alarm rate can be extremely
high at the lower limit, meaning that there would be a high incidence of falsely
concluding that the adverse event rate had increased. On the other hand, if the
data come from a distribution that is less skewed than an exponential
distribution, the power to detect increases in the adverse event rate goes to 0.
The exponential distribution has a skewness value of 2 and a kurtosis value
of 6. Simulations (see Tables 1 to 3 below) show that, as the skewness and
kurtosis of the data increase from these values, the false alarm rate associated
with the lower control limit increases exponentially. The false alarm rate
associated with the upper control limit increases more slowly. As the skewness
and kurtosis of the distribution decrease from the exponential values of 2 and
6, the false alarm rate associated with the upper control limit increases, while
the false alarm rate associated with the lower control limit goes to 0.
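The baseline values of 2 and 6 (where 6 is excess kurtosis; a normal distribution scores 0 on that scale, and the exponential's ordinary kurtosis is 9) can be checked with a quick moment calculation on simulated exponential data. The sample size and seed below are arbitrary choices:

```python
import math
import random

# moment check: an Exp(1) distribution has skewness 2 and excess kurtosis 6
random.seed(7)
n = 200_000
x = [random.expovariate(1.0) for _ in range(n)]
mean = sum(x) / n
m2 = sum((v - mean) ** 2 for v in x) / n
m3 = sum((v - mean) ** 3 for v in x) / n
m4 = sum((v - mean) ** 4 for v in x) / n
skew = m3 / m2 ** 1.5            # theory: 2
excess_kurt = m4 / m2 ** 2 - 3.0  # theory: 6
print(skew, excess_kurt)
```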
In order to increase the robustness of the chart, Minitab uses a Weibull
distribution rather than an exponential distribution to model the time between
events. The Weibull distribution has 2 parameters, shape and scale. If the shape
parameter is equal to 1, the Weibull distribution is the same as an exponential
distribution with the same scale parameter as the Weibull distribution.
Varying the shape parameter around 1 allows the Weibull distribution to take
on many different shapes, from extremely peaked and extremely right skewed (for
a shape parameter of less than 1), to symmetric (for a shape parameter of about
3), to left skewed (typically for shape parameter greater than 5). It is
expected that the shape parameter will typically be between 0.5 and 2, because
the distribution would then be close to the expected exponential distribution.
Although using probability limits from a Weibull distribution still means that
the expected ARL and false alarm rate would only apply if the data are in fact
from a Weibull distribution, this broader family of distributions will increase
the chances of obtaining a good fit.
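The shape-1 equivalence follows directly from the Weibull CDF, F(x) = 1 − exp(−(x/scale)^shape), which reduces to the exponential CDF with mean equal to the scale when shape = 1. A minimal check (the scale value 2.0 is arbitrary):

```python
import math

def weibull_cdf(x, shape, scale):
    # F(x) = 1 - exp(-(x/scale)**shape)
    return 1.0 - math.exp(-((x / scale) ** shape))

def exponential_cdf(x, mean):
    return 1.0 - math.exp(-x / mean)

# with shape = 1, Weibull(1, scale) matches an exponential with mean = scale
for x in (0.1, 1.0, 5.0, 20.0):
    assert math.isclose(weibull_cdf(x, 1.0, 2.0), exponential_cdf(x, 2.0))
```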
For the following tables, 100 random samples of 10,000 data points each were
simulated from the specified distribution. The proportion of points outside the
control limits is shown in the table. For a standard chart based on the normal
distribution, such as an Xbar chart, the expected proportion of points outside
the limits is 0.00269.
The simulations use the Weibull and chi-square distributions. A chi-square
distribution with 2 degrees of freedom is the same as an exponential
distribution with a mean of 2. Varying the degrees of freedom around 2 makes the
chi-square more or less skewed than an exponential. See Figure 1. A Weibull
distribution with a shape parameter of 1 is the same as an exponential
distribution with a mean equal to the scale parameter from the Weibull
distribution. Varying the shape parameter around 1 makes the Weibull more or
less skewed than an exponential. See Figure 2.
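A scaled-down sketch of this kind of simulation can be written with the standard library. This is not the study that produced the tables: here the limits come from the true mean rather than from estimates, and the sample size and seed are arbitrary. It draws chi-square data and counts points outside exponential-based probability limits:

```python
import math
import random

def normal_cdf(z):
    # standard normal CDF
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

def exponential_limits(mean):
    # probability limits matching the two 3-sigma tail areas of a normal chart
    p_lo, p_hi = normal_cdf(-3.0), normal_cdf(3.0)
    return -mean * math.log(1.0 - p_lo), -mean * math.log(1.0 - p_hi)

def chi_square(df):
    # sum of df squared standard normals
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))

random.seed(1)
n = 100_000
rates = {}
for df in (1, 2, 4):                   # df = 2 is exactly exponential
    lcl, ucl = exponential_limits(df)  # a chi-square(df) has mean df
    out = sum(1 for _ in range(n) if not (lcl <= chi_square(df) <= ucl))
    rates[df] = out / n
print(rates)
```

With df = 2 the proportion lands near the nominal 0.0027; the more-skewed df = 1 case signals far more often, mostly at the lower limit, while the less-skewed df = 4 case signals almost never, in line with the discussion above.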
Table 1a: Exponential-based T chart with chi-square data (sampling from a chi-square distribution)
Table 1b: Exponential-based T chart with Weibull data (sampling from a Weibull distribution)
Table 2a: Weibull-based T chart with chi-square data (sampling from a chi-square distribution)
Table 2b: Weibull-based T chart with Weibull data (sampling from a Weibull distribution)
Table 3: Transformation-based T chart with chi-square data (sampling from a chi-square distribution; columns: degrees of freedom, % of expected points outside the limits)
Figure 1: Comparing chi-square and exponential distributions
Figure 2: Comparing Weibull and exponential distributions
The difference between a G chart and a T chart is the scale used to measure
distance between events. The G chart uses a discrete scale (counts of days
between events or opportunities between events recorded as integers). The T
chart uses a continuous scale (usually the dates and times that the events
occurred). Most uses of the T chart discussed in research are about monitoring
infection rates in healthcare settings. Other examples include monitoring
medication errors, patient falls and slips, surgical complications, and other adverse events.
Note that it is not necessary to have both dates and times. In fact, it is
expected that a prominent use case will be having date-only data. If the number
of opportunities per day is not relatively constant, then a T chart may be a
better choice than a G chart.
Like other control charts, the T chart has a center line and upper and lower
control limits. There are also zones corresponding to the ± 1, 2, 3 sigma zones
in an Xbar chart or an I chart. These zones are not displayed in the chart, but
they are used in the tests for special causes. The control limits and zones are
all based on percentiles of the Weibull distribution. They are not multiples of
the standard deviation above and below the center line, as in other charts. As a
result, the control limits and zones are not symmetric around the center line,
except in the rare case where the Weibull distribution itself is symmetric.
The data that are plotted on the chart are the number of days or hours
between events. This makes interpreting the chart unusual. For example, if the
infection rate increases, the time or number of intervals between infections
would decrease and could even be as low as 0. If the rate decreases, the time or
number of intervals between infections would increase. Thus, a point beyond the
upper control limit would indicate an unusually long period of time between
infections—in other words, that the rate was unusually low.
One negative property of the chart is that, if the control limits are fixed
and only Test 1 is used, the Average Run Length (ARL) will increase if the rate
increases. If the rate increases by 25%, and the control limits are fixed, the
ARL will increase by approximately 40%. Therefore, the T chart will be slow to
detect increases in the event rate. To compensate, Minitab uses by default both
Test 1 and Test 2. Adding Test 2 increases the ARL by only a very small amount
for decreases in the average time of around 10% and decreases the ARL for larger
changes in the average time.
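The 40% figure can be reproduced analytically for the exponential special case (shape = 1) with fixed probability limits and Test 1 only, since the ARL is the reciprocal of the probability of a point falling outside the limits. A sketch with the in-control mean scaled to 1:

```python
import math

def normal_cdf(z):
    # standard normal CDF
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

p1 = normal_cdf(-3.0)
# probability limits for an exponential with an in-control mean of 1
LCL = -math.log(1.0 - p1)  # lower percentile, ~0.00135
UCL = -math.log(p1)        # upper percentile, ~6.61

def out_prob(mean):
    # chance a single point falls outside the fixed limits (Test 1 only)
    return (1.0 - math.exp(-LCL / mean)) + math.exp(-UCL / mean)

arl_base = 1.0 / out_prob(1.0)          # ~370 in control
arl_shift = 1.0 / out_prob(1.0 / 1.25)  # rate up 25% -> mean time down to 0.8
print(arl_shift / arl_base)             # ~1.39: roughly 40% longer
```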
There are 3 types of data that a T chart can be used for:
Xi = plot points, as explained above. If there are no 0’s in the Xi data, the
MLE estimates of the shape (KAPPA) and scale (LAMBDA) parameters are calculated
from the data and used to obtain the percentiles of the Weibull distribution.
If there are 0’s in the Xi data, the following alternative method for
obtaining parameters is used:
Let p1, p2, p3, p4, p5, p6, p7 be the cdf values from a Normal(0, 1)
distribution for –3, –2, –1, 0, +1, +2, +3.
Let w1, w2, w3, w4, w5, w6, w7 be the invcdf values for p1, p2, p3, p4, p5,
p6, p7 using a Weibull(KAPPA, LAMBDA) distribution.
Then, get CL, UCL, and LCL as follows:
CL = w4
UCL = w7
LCL = w1
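As a minimal sketch of this computation (the shape and scale values below are hypothetical stand-ins for the estimated KAPPA and LAMBDA):

```python
import math

def normal_cdf(z):
    # standard normal CDF
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

def weibull_invcdf(p, shape, scale):
    # invert F(x) = 1 - exp(-(x/scale)**shape)
    return scale * (-math.log(1.0 - p)) ** (1.0 / shape)

shape, scale = 1.2, 14.0  # hypothetical stand-ins for KAPPA and LAMBDA
p = [normal_cdf(z) for z in (-3, -2, -1, 0, 1, 2, 3)]  # p1 ... p7
w = [weibull_invcdf(pi, shape, scale) for pi in p]     # w1 ... w7
CL, UCL, LCL = w[3], w[6], w[0]
print(LCL, CL, UCL)
```

Note that the center line is the Weibull median (the invcdf at 0.5), not the mean, and the limits are asymmetric around it.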
If historical parameters are specified, the chart is based on the shape and
scale parameters of the Weibull distribution, much like other charts use the
mean and standard deviation. One difference is that the user must enter
historical values for both parameters (in charts like the I Chart or Xbar
Chart, users can enter one or both parameters).
The shape parameter must be > 0, and in most cases it should be between
0.5 and 2, although these limits are used primarily for practical reasons. Shape
parameters < 0.5 imply a distribution that is extremely skewed and can have a
kurtosis value that exceeds 2000. (An exponential distribution has a kurtosis
value of only 6.) Shape parameters higher than 2 imply a distribution that is
approaching symmetry, or even left skewed. Both extremes are quite unrealistic,
because data for the time between events are usually highly skewed to the right.
The scale parameter must be > 0 and should be somewhat greater than the
mean of the data. If the scale parameter is less than the mean of the data or
too much greater than the mean, the limits on the chart will not reflect the
process accurately and could lead to many false alarms.Note: The historical
values entered replace the KAPPA and LAMBDA used in the equations above to
obtain the control limits, center line, etc.
Test 1 – 1 point outside percentiles corresponding to K standard deviations away from the center line in a chart based on the normal distribution (plot point < w1 or > w7 if K = 3; see below if K <> 3)
Test 2 – K points in a row on one side of the center line
Test 3 – K points in a row, all increasing or decreasing
Test 4 – K points in a row, alternating up and down
Test 5 – K out of K + 1 points > w6, or K out of K + 1 points < w2
Test 6 – K out of K + 1 points > w5, or K out of K + 1 points < w3
Test 7 – K points in a row >= w3 and <= w5
Test 8 – K points in a row < w3 or > w5
For Test 1, if the argument K is 3, then the w1 and w7 values used for the
control limits are used to define Test 1 failures (i.e., points that are < w1
or > w7). If the argument K is not equal to 3, then define p1' and p7' as the
cdf values of Normal(0,1) for –K and +K. Then define w1' and w7' as the invcdf
values from Weibull(KAPPA, LAMBDA) corresponding to p1' and p7'. The definition
of a Test 1 failure is then a point < w1' or > w7'.
In the tests above, w1, w2, w3, w4, w5, w6, w7 are as defined earlier (i.e.,
invcdf values from the Weibull distribution corresponding to p1, p2, p3, p4,
p5, p6, p7, the cdf values of Normal(0,1) for –3, –2, –1, 0, +1, +2, +3).
However, if the Test 1 argument is <> 3, we replace only w1 and w7 with w1'
and w7'.
Prepared by Dr. Terry Ziemer, SIXSIGMA Intelligence