Two figures of merit in control charting are (1) the expected length of time the process will appear to remain in control when in fact it is; and (2) the expected length of time it takes for an OOC condition to be detected after the process first moves out of control.
Under the usual assumptions--iid normally distributed values, no serial correlation, etc.--we can reduce the first case to analyses of correlated coin flipping experiments. An accurate solution takes some work; people usually run simulations. However, each rule by itself has a simple interpretation:
Rule 1 characterizes each measurement by whether it lies beyond the interval $[-3\sigma, 3\sigma]$ (with 0.27% probability) or not. It corresponds, then, to flipping a coin with $\Pr(\text{heads}) = 0.0027$, and we want to know the expected number of flips before a "heads" (OOC condition) is observed.
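Concretely, the expected wait is the mean of a geometric distribution, $1/p \approx 370$ observations. A quick check in R (my own sketch, assuming the iid normal model above):

```r
# Probability that a single in-control observation falls outside +/- 3 sigma
p <- 2 * (1 - pnorm(3))   # about 0.0027

# Expected number of observations until the first Rule 1 signal
# (the mean of a geometric distribution)
1 / p                     # about 370
```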
Rule 2 characterizes each measurement by whether it exceeds $2\sigma$ or falls below $-2\sigma$. This is like a "3-sided" coin: a multinomial distribution. One face says "above $2 \sigma$" and occurs with probability 2.28%. Let's call this "heads 1". Another face says "below $-2 \sigma$" and also occurs with probability 2.28%. Call this "heads 2". The third face says "between $-2 \sigma$ and $2 \sigma$" and occurs with probability 95.45%. The analogous question concerns the expected number of flips with this coin before a sequence of two heads of the same type is observed. The calculation might not be easy, but it's easy to see this event is fairly rare: the chance of either head appearing is just 4.55%, but given that it just appeared, the chance that a head of the same type immediately follows it is only 2.28%. Thus, if we only had a pair of throws to consider, an OOC event of this type would occur with probability 4.55% * 2.28% * 2 (multiply by 2 to account for both types of "heads") = 0.21%.
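If you want a number for the expected waiting time, a quick Monte Carlo simulation (my own sketch, not part of the original argument) estimates it directly; if I have done the algebra correctly, a first-step (Markov chain) analysis gives about $(1+p)/(2p^2) \approx 990$ observations when $p = \Pr(\text{above } 2\sigma) \approx 2.28\%$.

```r
# Rough Monte Carlo estimate of the in-control waiting time until two
# consecutive observations fall beyond 2 sigma on the same side.
set.seed(1)

run_length <- function() {
  prev <- 0L    # +1 = above 2 sigma, -1 = below -2 sigma, 0 = neither
  n <- 0L
  repeat {
    n <- n + 1L
    z <- rnorm(1)
    cur <- if (z > 2) 1L else if (z < -2) -1L else 0L
    if (cur != 0L && cur == prev) return(n)   # two same-sided extremes in a row
    prev <- cur
  }
}

mean(replicate(2000, run_length()))   # close to (1 + p) / (2 p^2), about 990
```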
Rule 3 can be analyzed in a similar manner (but is more complicated).
Note that the rules are interrelated: a single observation can violate two or even all three rules, even though all preceding observations were in control. However, this has fairly low probability of occurring, so to a good approximation we can assume the rules are mutually exclusive (allowing us to sum their probabilities).
The purpose of rules 2 and 3 is to reduce the expected time needed to detect an OOC condition caused by a systematic change in the mean. How they accomplish this is intuitively clear: a small increase in the mean, for example, only slightly increases the chance of triggering rule 1, but greatly increases the chance of triggering rule 2. A one-sd increase in the mean, for instance, raises the chance of a rule 1 violation to $1 - \Phi(3-1) + \Phi(-3-1) \approx 2.28\%$, which is expected to take about $1/0.0228 \approx 44$ time steps to detect, but a rule 3 violation (four in a row above 1 sd) now has slightly greater than a $(1/2)^4 = 6.25\%$ chance of occurring, which will be detected almost three times as quickly (around 16 time steps).
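This arithmetic is easy to check; here is a quick verification (my own sketch, under the same assumptions as above):

```r
# After the mean shifts up by one sd, probability of a Rule 1 violation
p1 <- 1 - pnorm(3 - 1) + pnorm(-3 - 1)   # about 0.0228
1 / p1                                   # about 44 time steps on average

# Probability that four consecutive observations exceed the old +1 sd line
p3 <- (1 - pnorm(1 - 1))^4               # (1/2)^4 = 0.0625
1 / p3                                   # about 16 time steps
```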
In summary, these rules can be understood by analyzing sequences of coin flips (or die rolls); each one corresponds to an event (or sequence of events) that is sufficiently rare that a process in control will go for a long time without triggering an OOC signal; and the combined set of rules is formulated to be able to detect relatively small shifts of the mean as quickly as possible.
The purpose of a control chart is to identify, as quickly as possible, when something fixable is going wrong. For it to work well, it must not identify random or uncontrollable changes as being "out of control."
The problems with the procedure described are manifold. They include
The "stable" section of the graph is not typical. By definition, it is less variable than usual. By underestimating variability of the in-control situation, it will cause the chart incorrectly to identify many changes as out of control.
Using standard errors is simply mistaken. A standard error estimates the sampling variability of the mean weekly call rate, not the variability of the call rates themselves (a short numerical illustration follows below, after these points).
Setting the limits at $\pm 3$ standard deviations might or might not be effective. It is based on a rule of thumb applicable to normally distributed data that are not serially correlated. Call rates will not be approximately normally distributed unless they are moderately large (roughly 100 or more per week). They might or might not be serially correlated.
The procedure assumes the underlying process has an unvarying rate over time. But you're not making widgets; you're responding to a market that--hopefully--is (a) increasing in size yet (b) decreasing its call rate over time. Temporal trends are expected. Sooner or later any trends will cause the data to look consistently out of control.
People tend to undergo annual cycles of activity corresponding to seasons, the academic calendar, holidays, and so on. These cycles act like trends to cause predictable (but meaningless) out-of-control events.
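To make the standard-error point concrete, here is a small numerical illustration (the counts are invented for the example):

```r
# Hypothetical 26 weeks of call counts
set.seed(2)
calls <- rpois(26, lambda = 100)

sd(calls)                         # variability of individual weekly counts, about 10
sd(calls) / sqrt(length(calls))   # standard error of the mean, about 2

# Control limits built from the standard error are roughly five times too narrow,
# so ordinary week-to-week variation will constantly signal "out of control".
```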
A simulated dataset illustrates these principles and problems.
The simulation procedure creates a realistic series of data that are in control: relative to a predictable underlying pattern, it includes no out-of-control excursions that can be assigned a cause. This plot is a typical outcome of the simulation.
These data are drawn from Poisson distributions, a reasonable model for call rates. They start at a baseline of 100 per week and trend upward linearly by 13 per week per year. Superimposed on this trend is a sinusoidal annual cycle with an amplitude of eight calls per week (traced by the dashed gray curve). This is a modest trend and a relatively small seasonality, I believe.
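For reference, here is one way such a series can be generated in R (a sketch using the parameters just described; the phase of the seasonal cycle and the random seed are arbitrary choices of mine, so the details will differ from the plotted example):

```r
# Two years of weekly call counts: Poisson around a linear trend plus an annual cycle
set.seed(17)
weeks  <- 1:104
lambda <- 100 + 13 * weeks / 52 + 8 * sin(2 * pi * weeks / 52)
calls  <- rpois(length(weeks), lambda)

plot(weeks, calls, type = "b", xlab = "Week", ylab = "Calls per week")
lines(weeks, lambda, lty = 2, col = "gray")   # the underlying expected pattern
```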
The red dots (around weeks 12 - 37) were identified as the 26-week period of lowest standard deviation encountered during the first 1.5 years of this two year chart. The thin red and blue lines are set at $\pm 3$ standard errors around this period's mean. (Obviously they are useless.) The thick gold and green lines are set at $\pm 3$ standard deviations around the mean.
(One doesn't usually project control lines backwards in time, but I have done that here for visual reference. It's usually meaningless to apply controls retroactively: they're intended to identify future changes.)
Note how the secular trend and the seasonal variations drive the system into apparent out-of-control conditions between weeks 40-65 (an annual high) and after week 85 (an annual high plus over one year's cumulative trend). Anybody attempting to use this as a control chart would be mistakenly looking for nonexistent causes most of the time. In practice, this system would be hated and soon ignored by everyone. (I have seen companies where every office door and all the hallway walls were covered in control charts that nobody bothered to read, because they all knew better.)
The right way to proceed begins by asking the basic questions, such as how do you measure quality? What influences can you have over it? How, despite your best efforts, are these measures likely to fluctuate? What would extreme fluctuations tell you (what could their controllable causes be)? Then, you need to perform a statistical analysis of the past data. What is their distribution? Are they temporally correlated? Are there trends? Seasonal components? Evidence of past excursions that might have indicated out of control situations?
Having done all this, it may then be possible to create an effective control chart (or other statistical monitoring) system. The literature is large, so if this company is serious about using quantitative methods to improve quality, there is ample information about how to do so. But ignoring these statistical principles (whether through lack of time or lack of knowledge) practically guarantees that the effort will fail.
Best Answer
The R statistical program along with the qcc package fits your specified needs (and it is hard to beat the price). Several packages connect R to databases, RODBC being a common and useful one on MS Windows. The mail and sendmailR packages were designed to automatically send e-mails.
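As a minimal sketch of what this looks like (the weekly counts below are hypothetical; see the qcc documentation for the chart types and options appropriate to your data):

```r
library(qcc)

# Hypothetical weekly call counts
calls <- c(102, 98, 110, 95, 104, 99, 108, 101, 97, 105, 112, 96)

# A c-chart is one conventional choice for count data; the caveats about trend,
# seasonality, and distributional assumptions discussed above still apply.
qcc(calls, type = "c")
```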