
Explaining Quality Statistics So Your Boss Will Understand: Weighted Pareto Charts

Is this machine properly calibrated?

Failure to properly calibrate this machine will result in defective rock and roll. 

In my last post, I imagined using the example of a rock and roll band -- the Zero Sigmas -- to explain Pareto charts to my music-loving but statistically-challenged boss. I showed him how easy it was to use a Pareto chart to visualize defects or problems that occur most often, using the example of various incidents that occurred on the Zero Sigmas' last tour.  

The Pareto chart revealed that starting performances late was far and away the Zero Sigmas' most frequent "defect," one that occurred every single night of the band's 100-day tour.

This is the point at which my boss would say, "I get it!  We just need to make sure the Zero Sigmas hit the stage on time, and everything will be swell!" 

"Not so fast there, sir," I would have to reply. "There's a question that this Pareto chart of frequency doesn't answer." 

Pareto Chart of Rock and Roll Tour Incidents by Frequency


Are the Most Frequent Defects the Most Important? 

We know the Zero Sigmas started every show late, making that the defect that occurred most often, and this information is valuable. It's also useful to see how frequently singer Hy P. Value forgot the words to his songs and greeted the wrong city when he hit the stage. ("Hello, Albuquerque!" was correct on only one night of the tour.)

All of these are incidents we'd like to happen much less frequently. But are they equal? Looking at just the raw counts of the incidents assumes all problems or defects are the same in terms of their consequences.

You can see why this is problematic if you think about defects that might occur in manufacturing a car: a scuff mark on the carpet is undesirable, but it's not on par with a disconnected brake cable. Similarly, if a shirt is sewn with thread that's just slightly off color, the defect is so small the garment might still be usable; a shirt with mismatched fasteners will need to be reworked or discarded. 

In the world of rock and roll, the Zero Sigmas starting a performance late probably has fewer consequences than their getting caught lip-syncing during a performance does. How is that reflected in the Pareto chart above? It's not. 

When different defects have different impacts, a Pareto chart based only on number of occurrences doesn't provide enough information to tell you which issues are the most important. 

Are You Counting the Right Thing? 

You might be able to learn more by looking at a different measurement. In many situations, you do want to know the number of defects. But that's not always what you want to measure. For example, the Zero Sigmas' public relations manager gathered all the coverage about the recent tour, and she wants to know how it corresponds to things that happened while the band was on the road so she can be ready to handle things that might happen on the next tour! 

We can add a column of data to our worksheet that tallies the number of news reports, online reviews, and social media mentions about the various incidents that took place on tour.

Bad PR Data for Pareto Chart

This gives us insight into how the different types of incidents played out in the media. Here's how that data looks in a Pareto chart: 

Pareto Chart of Bad Press

This is very important information for the PR manager, because it shows which types of incidents resulted in the biggest number of negative mentions. These results are quite different from the raw counts of defects. For example, even though it was the most frequent defect, the band starting late was barely mentioned in negative reports. 

However, this is really just a different type of frequency data: in effect, we're counting the number of complaints rather than the raw number of defects.

There's another approach to getting more insight from a Pareto chart:  we can look at the data in conjunction with another factor, like a cost, to create a weighted Pareto chart. Because the most common problems aren't always the most important ones, a weighted Pareto chart can give extra emphasis to the most important factors.

Setting Up Data for a Weighted Pareto Chart

A weighted Pareto chart doesn't just look at how often defects occur, but also considers how important they are. A weighted Pareto chart accounts for the severity of the defects, their cost, or almost anything else you want to track. And as we saw when we looked at bad PR instead of incident counts, a weighted Pareto chart may change how we see the priority for improvement projects. 

Weighting requires a valuation: you weight the frequency counts by assigning attributes, such as cost, severity, or detectability, to each defect type. This attribute could be objective, such as the dollar amount it costs to fix each type of defect. For example, a garment manufacturer might know that wrinkles cost $.10 to fix, while dirt specks cost $.50.

Other attributes may be harder to quantify. For instance, a manufacturer might want to place a value on the potential effect of different defects on the company's reputation, a much more difficult thing to assess. Precise measures may not be available, but to get a sense of the possibilities, the manufacturer might ask a corporate counsel or communications officer to rate the damage potential of each type of defect on a scale, or even conduct a small survey to assign values. 

In looking at the tour data for the Zero Sigmas, we'll assign a number from 1 to 100 for the amount of embarrassment, or "lameness," associated with each type of incident that took place, as shown below: 

Data for Weighted Pareto Chart

Now let's use these weights to create a weighted Pareto chart with Minitab Statistical Software. To do it, we'll first need to create a new column of data with Minitab's calculator (Calc > Calculator) by multiplying the degree of embarrassment by the frequencies for each type of incident. We'll store that in a column titled "Lame-o."  

Selecting Stat > Quality Tools > Pareto Chart and entering "Incidents" as the defects and "Lame-o" as the frequencies produces the following chart:   

Pareto Chart of Lameness

The weighted Pareto chart above uses the same incident count data, except that now the defects have been weighted by the degree of lameness involved in each type of incident. Here, you can see that Hy P. Value's forgetting the lyrics to his own songs accounted for 46% of the tour's lameness. Combine that with the guitarists' failure to tune their instruments and we've accounted for 67.5% of the total lameness from the last tour. 

If the next Zero Sigmas tour is going to rock harder, we need to focus on tuning the instruments and making sure Hy P. Value remembers the words. Starting the show late may happen every night, but it doesn't even register in making the tour lame!  
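By the way, the weighting arithmetic itself is easy to check outside Minitab. Here's a minimal Python sketch of the same idea -- multiply counts by weights, sort, and accumulate percentages -- using hypothetical counts and lameness scores rather than the actual tour data:

  import pandas as pd

  # Hypothetical incident counts and 1-100 lameness weights (not the real tour data)
  df = pd.DataFrame({
      "Incident": ["Forgot lyrics", "Out-of-tune guitars", "Started late", "Lip-syncing"],
      "Count": [40, 30, 100, 2],
      "Lameness": [90, 70, 5, 95],
  })
  df["Lame-o"] = df["Count"] * df["Lameness"]                    # weighted frequency
  df = df.sort_values("Lame-o", ascending=False)
  df["CumPct"] = 100 * df["Lame-o"].cumsum() / df["Lame-o"].sum()
  print(df)

Sorting by the weighted column and computing the cumulative percentage is exactly what the weighted Pareto chart displays as bars and a cumulative line.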

What Would You Like to Know? 

All of this just goes to demonstrate that the same data can lead to different conclusions, depending on how we frame the question. If we're  concerned with the frequency of defects, we focus on getting the band to start their shows on time. If we're concerned with minimizing bad PR, we want to make sure that Zero Sigmas don't get caught lip-syncing again. And if we want to make the band's next tour less lame, figuring out why the singer forgets the lyrics is where we'll want to start. 

That's three ways the same data can give us three different insights into different aspects of quality. That's why we need to be careful about what we're actually measuring, and what we hope to achieve by measuring it. 



Will the Weibull Distribution Be on the Demonstration Test?


Over on the Indium Corporation's blog, Dr. Ron Lasky has been sharing some interesting ideas about using the Weibull distribution in electronics manufacturing. For instance, check out this discussion of how dramatically an early first-failure can affect an analysis of a part or component (in this case, an alloy used to solder components to a circuit board). 

This got me thinking again about all the different situations in which the Weibull distribution can help us make good decisions. The main reason Weibull is so useful is that it's very flexible in fitting different types of data, because it can take on the characteristics of other types of distributions. For example, if you have skewed data, the Weibull is an alternative to the normal distribution. 

Developing a Demonstration Test Plan with the Weibull Distribution

demonstration test plan example with turbine engine combustor

In business and industry, the Weibull distribution is frequently used to model time-to-failure data. In other words, it can help us assess the reliability of a component or part by estimating how long it will take to fail. If you work for the company supplying those parts, it can help you check the quality of the components you're making, and prove to your customers that your products meet their requirements.

A good way to do this is to follow a Demonstration Test Plan, and Minitab Statistical Software can help you create test plans using the Weibull distribution (or some other distribution, if you know it's more appropriate). Minitab's test planning commands make it easy to determine the sample size and testing time required to show that you have met reliability specifications.

Your test plan will include:

  • The number of units or parts you need to test
  • The stopping point, which is either the amount of time you must test each part or the number of failures that must occur
  • The measure of success, which is the number of failures allowed in a passing test (for example, every unit runs for the specified amount of time and there are no failures)

You can use Minitab to create demonstration, estimation, and accelerated life test plans, but let's focus on demonstration test plans here.

What Types of Demonstration Test Plans Are There? 

There are two types of demonstration tests:

Substantiation Tests

A substantiation test provides statistical evidence that a redesigned system has suppressed or significantly reduced a known cause of failure. This test aims to show that the redesigned system is better than the old system.

Reliability Tests

A reliability test provides statistical evidence that a reliability specification has been achieved. This test aims to show that the system's reliability exceeds a goal value. 

You can tailor these tests to demonstrate a scale (Weibull or exponential distribution) or location (other distributions), a percentile, the reliability at a particular time, or the mean time to failure (MTTF). For example, you can test whether or not the MTTF for a redesigned system is greater than the MTTF for the old system.

An Example of a Demonstration Test Plan

Let's say we work for a company that makes turbine engines. The reliability goal for a new turbine engine combustor is a 1st percentile of at least 2000 cycles. We know that the number of cycles to failure tends to follow a Weibull distribution with shape = 3, and that we can accumulate up to 8000 test cycles on each combustor. We need to determine the number of combustors it takes to demonstrate the reliability goal using a 1-failure test plan.

Here's how to do it in Minitab (if you're not already using it, download the free 30-day trial of Minitab and play along): 

  1. Choose Stat > Reliability/Survival > Test Plans > Demonstration   
  2. Choose Percentile, then enter 2000. In Percent, enter 1.
  3. In Maximum number of failures allowed, enter 1.
  4. Choose Testing times for each unit, then enter 8000.
  5. From Distribution, choose Weibull. In Shape (Weibull) or scale (other dists), enter 3. Click OK.

Your completed dialog box should look like this: 

Demonstration Test Dialog Box

Interpreting the Demonstration Test Plan Results

When you click OK, Minitab will create the following output:

Demonstration Test Plan Output

Looking at the Sample Size column in the output above, we can see we'll need to test 8 combustors for 8000 cycles to demonstrate with 95.2% confidence that the first percentile is at least 2000 cycles.  
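If you're curious where that 95.2% comes from, the underlying calculation is essentially binomial: assume the goal is only just met, compute each unit's probability of failing during the test under the Weibull model, and find the smallest sample size whose chance of passing drops below 5%. Here's a minimal sketch of that logic in Python (the variable names are mine, not Minitab's); it reproduces the numbers in this example:

  from math import exp, log
  from scipy.stats import binom

  shape = 3.0               # known Weibull shape
  goal = 2000.0             # 1st percentile to demonstrate, in cycles
  percent = 0.01
  test_time = 8000.0        # cycles accumulated on each combustor
  max_failures = 1          # 1-failure test plan

  # Weibull scale implied by the goal: F(goal) = percent
  scale = goal / (-log(1 - percent)) ** (1 / shape)

  # Probability that one unit fails before the test ends if the goal is barely met
  p_fail = 1 - exp(-(test_time / scale) ** shape)

  # Smallest n whose pass probability is at most 5% when the goal is barely met
  for n in range(1, 100):
      if binom.cdf(max_failures, n, p_fail) <= 0.05:
          print(n, 1 - binom.cdf(max_failures, n, p_fail))  # prints 8 and ~0.952
          break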

When it generates your Demonstration Test Plan, Minitab also creates this graph: 

Likelihood of Passing for Weibull Model Demonstration Test

The graph gives us a visual representation of the likelihood of actually passing the demonstration test. Here,

  • The very sharp rise between 0 and 2 indicates that the probability of your 1-failure test passing increases rapidly as the improvement ratio increases from zero to two.
     
  • If the improvement ratio is greater than about two, the test has an almost certain chance of passing.

Based on this information, if the (unknown) true first percentile were 4000, then the improvement ratio = 4000/2000 = 2, and the probability of passing the test would be about 0.88. If you reduced the value to be demonstrated to 1600, the improvement ratio would increase to 2.5 and the probability of passing the test would increase to around 0.96. Reducing the value to be demonstrated increases the probability of passing the test. However, it also makes a less powerful statement about the reliability of the turbine engine combustor.

What's the Right Demonstration Test Plan? 

The right demonstration test for your situation will depend on many factors. Fortunately, it's easy to adjust different parts of a proposed test and see how they affect the resulting plan. Each combination of maximum number of failures allowed and sample size or testing time yields one test plan, so you can use Minitab to generate several test plans and compare the results.


No Matter How Strong, Correlation Still Doesn't Imply Causation


a drop of rain correlating with or causing ripples...

There's been a really interesting conversation about correlation and causation going on in the LinkedIn Statistics and Analytics Consultants group. 

This is a group with a pretty advanced appreciation of statistical nuances and data analysis, and they've been focusing on how the understanding of causation and correlation can be very field-dependent. For instance, evidence supporting causation might be very different if we're looking at data from a clinical trial conducted under controlled conditions as opposed to observational economic data.

Contributors also have been citing some pretty fascinating ideas and approaches, including the application of Granger Causality to time series data; Hill's Causation Criteria in epidemiology and other medical-related fields; and even a very compelling paper which posits that most published research findings are false.  

All of this is great food for thought, but it underscores again what must be the most common misunderstanding in the statistical world: correlation does not equal causation. This seems like a simple enough idea, but how often do we see the media breathlessly reporting on a study that has found an associative relationship between some factor (like eating potato chips) and a response (like having a heart attack) as if it established direct, a + b = c inevitability?  

What Is Correlation? 

Correlation is just a linear association between two variables, meaning that as one variable changes, the other tends to change with it in a consistent way. This association may be positive, in which case the variables rise and fall together, or negative, in which case one variable consistently decreases as the other rises. 

An easy way to see if two variables might be correlated is to create a scatterplot.  Sometimes a scatterplot will immediately indicate correlation exists; for instance, in this data set, if we choose Graph > Scatterplot > Simple, and enter Score1 and Score2, Minitab creates the following graph: 

scatterplot showing correlation between factors

(If you want to play along and you don't already have it, please download the free 30-day trial of Minitab Statistical Software!)

In the scatterplot above, we can clearly see that as Score1 values rise, so do the values for Score2.  There's definitely correlation there!  But sometimes a scatterplot isn't so clear.  From the same data set, let's create a scatterplot using "Verbal" as the X variable and "GPA" as the Y variable: 

Scatterplot of Verbal Scores and GPA

Well, it looks like there might be a correlation there...but there's a lot of scatter in that data, so it isn't as clear as it was in the first graph. Is it worth exploring this further (for instance, by proceeding to a regression analysis to learn more about the association)?  Fortunately, we can look at a statistic that tells us more about the strength of an association between these variables.  

The Correlation Coefficient

To find the Pearson correlation coefficient for these two variables, go to Stat > Basic Statistics > Correlation... in Minitab and enter Verbal and GPA in the dialog box. Minitab provides the following output: 

Pearson's correlation coefficient

The correlation coefficient can range in value from -1 to +1, and tells you two things about the linear association between two variables:

  • Strength - The larger the absolute value of the coefficient, the stronger the linear relationship between the variables. A value of one indicates a perfect linear relationship (the variables in the first scatterplot had a correlation coefficient of 0.978), and a value of zero indicates the complete absence of a linear relationship.
     
  • Direction - The sign of the coefficient indicates the direction of the relationship. If both variables tend to increase or decrease together, the coefficient is positive. If one variable tends to increase as the other decreases, it's negative.

The correlation coefficient for Verbal and GPA in our data set is 0.322, indicating that there is a positive association between the two. Comparing the 0.978 of the first two variables to this, we see the variability visible in the second scatterplot reflected in the lower correlation coefficient: there's a relationship there, but it is not as obvious or clear.  
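If you ever want to double-check a Pearson coefficient outside Minitab, it only takes a couple of lines. Here's a minimal Python sketch using made-up scores (stand-ins for the worksheet columns, not the actual data set above):

  import numpy as np

  # Hypothetical verbal scores and GPAs, stand-ins for the worksheet columns
  verbal = np.array([450, 510, 480, 600, 550, 530, 620, 490])
  gpa = np.array([2.7, 3.1, 2.6, 3.4, 3.3, 2.9, 3.5, 3.0])

  r = np.corrcoef(verbal, gpa)[0, 1]   # Pearson correlation coefficient
  print(round(r, 3))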

So, does the connection between Verbal and GPA merit further scrutiny?  Maybe...with real data sets, it's rare to see a correlation coefficient as high as that between Score1 and Score2.  Whether you should interpret an intermediate value for the Pearson correlation coefficient as a weak, moderate, or strong correlation depends on your objectives and requirements.

Even STRONG Correlation Still Does Not Imply Causation

But even if your data have a correlation coefficient of +1 or -1, it is important to note that correlation still does not imply causality. For instance, a scatterplot of popsicle sales and skateboard accidents in a neighborhood may look like a straight line and give you a correlation coefficient of 0.9999...but buying popsicles clearly doesn't cause skateboard accidents. However, more people ride skateboards and more people buy popsicles in hot weather, which is the reason these two factors are correlated.

It is also important to note that the correlation coefficient only measures linear relationships. A meaningful nonlinear relationship may exist even if the correlation coefficient is 0.

Only properly controlled experiments let you determine whether a relationship is causal, and as that recent LinkedIn conversation has indicated, the "requirements" for determining causality can vary greatly depending on what you're studying.  

So, in the end, what can we say about the relationship between correlation and causation? This comic from xkcd.com, also referenced in the recent LinkedIn conversation, sums it up nicely: 

xkcd correlation comic

Comic licensed under a Creative Commons Attribution-NonCommercial 2.5 license. Photo credit to robin_24, http://www.flickr.com/photos/robin24/5554306438/, used under a Creative Commons attribution license. 


Studying Old Dogs with New Statistical Tricks: Bone-Cracking Hypercarnivores and 3D Surface Plots


A while back my colleague Jim Frost wrote about applying statistics to decisions typically left to expert judgment; I was reminded of his post this week when I came across a new research study that takes a statistical technique commonly used in one discipline, and applies it in a new way. 

Hyena skulls: optimized for cracking bones!

The study, by paleontologist Zhijie Jack Tseng, looked at how the skulls of bone-cracking carnivores--modern-day hyenas--evolved. They may look like dogs, but hyenas in fact are more closely related to cats. However, some extinct dog species had skulls much like a hyena's. 

Tseng analyzed data from 3D computer models of theoretical skulls, along with those of existing species, to test the hypotheses that specialized bone-cracking hyenas and dogs evolved similar skulls with similar biting capabilities, and that the adaptations are optimized from an engineering perspective. 

This paper is well worth reading, and if you're into statistics and/or quality, you might notice how Tseng uses 3D surface plots and contour plots to explore his data and explain his findings. That struck me because I usually see these two types of graphs used in the analysis of Design of Experiments (DoE) data, when quality practitioners are trying to optimize a process or product.

Two other factors make this even more cool: Tseng used Minitab to create the surface plots (sweet!), and  his paper and data are available to everyone who would like to work with them. When I contacted him to ask if he'd mind us using his data to demonstrate how to create a surface plot, he graciously assented and added, "In the spirit of open science and PLoS ONE's mission, the data are meant for uses exactly like the one you are planning for your blog."

So let's make (and manipulate) a surface plot in Minitab using the data from these theoretical bone-cracking skulls. If you don't already have it, download our 30-day trial of Minitab Statistical Software and follow along!

Creating a 3D Surface Plot

Three-dimensional surface plots help us see the potential relationship between three variables. Predictor variables are mapped on the x- and y-scales, and the response variable (z) is represented by a smooth surface (surface plot) or a grid (wireframe plot). Skull deepening and widening are major evolutionary patterns in convergent bone-cracking dogs and hyaenas, so Tseng used skull width-to-length and depth-to-length ratios as variables to examine optimized shapes for two functional properties: mechanical advantage (MA) and strain energy (SE). 

So, here's the step-by-step breakdown of creating a 3D surface plot in Minitab. We're going to use it to look at the relationship between the ratio of skull depth to length (D:L), width to length (W:L), and skull-strain energy (SE), a measure of work efficiency.

  1. Download and open the worksheet containing the data.
  2. Choose Graph > 3D Surface Plot.
  3. Choose Surface, then click OK.
  4. In Z variable, enter SE (J). In Y variable, enter D:L. In X variable, enter W:L.
  5. Click Scale, then click the Gridlines tab.
  6. I'm going to leave them off, but if you like, you can use Show gridlines for, then check Z major ticks, Y major ticks, and X major ticks. Adding the gridlines helps you visualize the peaks and valleys of the surface and determine the corresponding x- and y-values. 
  7. Click OK in each dialog box.

Minitab produces the following graph: 

Surface plot of skull-strain energy to depth/length and width/length

The "landscape" of the 3D surface plot is illuminated in places so that you can better see surface features, and you can change the position, color, and brightness of these lights to better display the data. You also can change the pattern and color of the surface. You can open the "Edit Surface" dialog box simply by double-clicking on the landscape. Here, I've tweaked the colors and lighting a bit to give more contrast: 

surface plot with alternate colors
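If you'd like to experiment with this kind of graph outside Minitab as well, here's a minimal matplotlib sketch of a surface over a grid of skull ratios. The grid and the response function are invented for illustration -- they are not Tseng's data:

  import numpy as np
  import matplotlib.pyplot as plt

  # Invented response surface: strain energy (z) over width-to-length (x)
  # and depth-to-length (y) skull ratios
  x = np.linspace(0.5, 1.1, 40)                  # W:L
  y = np.linspace(0.3, 0.9, 40)                  # D:L
  X, Y = np.meshgrid(x, y)
  Z = 2.0 - 1.5 * X - 1.0 * Y + 0.8 * (X - 0.8) ** 2 + 1.2 * (Y - 0.6) ** 2

  ax = plt.figure().add_subplot(projection="3d")
  ax.plot_surface(X, Y, Z, cmap="viridis")
  ax.set_xlabel("W:L")
  ax.set_ylabel("D:L")
  ax.set_zlabel("SE (J)")
  plt.show()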

Turn the Landscape Upside-Down

You may not want to go so far as to flip it, but rotating the graph to view the surface from different angles can help you visualize the peaks and valleys of the surface. You can rotate the graph around the X, Y, and Z axes, rotate the lights, and even zoom in with the 3D Graph Tools toolbar. (If you don't already see it,  just choose Tools > Toolbars > 3D Graph Tools to make it appear.)

3D Graph Tools toolbar in statistical software

By rotating 3D surface and wireframe plots, you can view them from different angles, which often reveals interesting information. Changing these factors can help reveal different features of the data surface and dramatically impact what features are highlighted:

Rotated and illuminated surface plot

Off-Label Use of the Surface Plot? 

Tseng notes that combining biomechanical analysis of the theoretical skulls and functional landscapes like the 3D surface plot is a novel approach to the study of convergent evolution, one that permits fossil species to be used in biomechanical simulations, and also provides comparative data about hypothesized form-function relationships. What did he find?  He explained it this way in an interview:

What I found, using models of theoretical skulls and those from actual species, was that increasingly specialized dogs and hyenas did evolve stronger and more efficient skulls, but those skulls are only optimal in a rather limited range of possible variations in form. This indicates there are other factors restricting skull shape diversity, even in lineages with highly directional evolution towards biomechanically-demanding lifestyles...although the range of theoretical skull shapes I generated included forms that resemble real carnivore skulls, the actual distribution of carnivoran species in this theoretical space is quite restricted. It shows how seemingly plausible skull shapes nevertheless do not exist in nature (at least among the carnivores that I studied).

In addition to 3D surface plots, Tseng used contour plots to help visualize his theoretical landscapes. In my next post, I'll show how to create and manipulate those types of graphs in Minitab. Meanwhile, please be sure to check out his paper for the full details on Tseng's research: 

Tseng ZJ (2013) Testing Adaptive Hypotheses of Convergence with Functional Landscapes: A Case Study of Bone-Cracking Hypercarnivores. PLoS ONE 8(5): e65305. doi:10.1371/journal.pone.0065305

Studying Old Dogs with New Statistical Tricks Part II: Contour Plots and Cracking Bones


A skull made for cracking some bones!

Yesterday I wrote about how paleontologist Zhijie Jack Tseng used 3D surface plots created in Minitab Statistical Software to look at how the skulls of hyenas and some extinct dogs with similar dining habits fit into a spectrum of possible skull forms that had been created with 3D modelling techniques.

What's interesting about this from a data analysis perspective is how Tseng took tools commonly used in quality improvement and engineering and applied them to his research into evolutionary morphology.

We used Tseng's data to demonstrate how to create and explore 3D surface plots yesterday, so let's turn our attention to contour plots. 

How to Create a Contour Plot 

Like a surface plot, we can use a contour plot to look at the relationships between three variables on a single plot. We take two predictor variables (x and y) and use the contour plot to see how they influence a response variable (z).  

A contour plot is like a topographical map in which x-, y-, and z-values substitute for longitude, latitude, and elevation. Values for the x- and y-factors (predictors) are plotted on the x- and y-axes, while contour lines and colored bands represent the values for the z-factor (response). Contour lines connect points with the same response value.

Since skull deepening and widening are major evolutionary trends in bone-cracking dogs and hyaenas, Tseng used skull width-to-length and depth-to-length ratios as variables to examine optimized shapes for two functional properties: mechanical advantage (MA) and strain energy (SE). 

Here's how to use Minitab to create a contour plot like those in Tseng's paper: 

  1. Download and open the worksheet containing the data.
  2. Choose Graph > Contour Plot.
  3. In Z variable, enter SE (J). In Y variable, enter D:L. In X variable, enter W:L. 
  4. Click OK in the dialog box.

Minitab creates the following graph: 

Contour Plot of Skull Strain Energy

Now, that looks pretty cool...but notice how close the gray and light green bands in the center are?  It would be easier to distinguish them if we had clear dividing lines between the contours.  Let's add them. We'll recreate the graph, but this time we'll click on Data View in the dialog box, and check the option for Contour Lines:

adding contour lines to contour plot

Click OK > OK, and Minitab gives us this plot, which is much easier to scan: 

contour plot with contour lines
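For comparison, the same sort of filled contour plot, complete with dividing lines, takes only a few lines in Python. The grid below is the same invented skull-ratio surface used in the earlier surface-plot sketch, not Tseng's data:

  import numpy as np
  import matplotlib.pyplot as plt

  # Same invented skull-ratio surface as the surface-plot sketch
  x = np.linspace(0.5, 1.1, 40)                  # W:L
  y = np.linspace(0.3, 0.9, 40)                  # D:L
  X, Y = np.meshgrid(x, y)
  Z = 2.0 - 1.5 * X - 1.0 * Y + 0.8 * (X - 0.8) ** 2 + 1.2 * (Y - 0.6) ** 2

  bands = plt.contourf(X, Y, Z, levels=9, cmap="viridis")          # colored bands
  plt.contour(X, Y, Z, levels=9, colors="black", linewidths=0.5)   # dividing lines
  plt.colorbar(bands, label="SE (J)")
  plt.xlabel("W:L")
  plt.ylabel("D:L")
  plt.show()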

Refining and Customizing the Contour Plot

Now, suppose you've created this plot, as we did, with 9 contour levels for the response variable, but you don't really need that much detail. You can double-click on the graph to bring up the Edit Area dialog box, from which you can adjust the number of levels from 2 through 11.  Here's what the graph looks like reduced to 5 contour levels: 

Contour plot with five levels

Alternatively, we can specify which contour values to display. And if your boss (or funding agency) doesn't like green or blue, it's very easy to change the contour plot's palette. You can also adjust the type of fill used in specific contours:

Contour plot with custom palette and shading

Whoa. 

Reading the Contour Plot 

As noted earlier, we read the contour plot as if it were a topographical map: the contours indicate the "steepness" of the response variable, so we can look for: 

  • X-Y "coordinates" that produce maximal or minimal responses in Z
  • Ridges" of high values or "valleys" of low values

It's easy to see from this demonstration why the contour plot is such a popular tool for optimizing processes: it drastically simplifies the task of identifying which values of two predictors lead to the desired values for a response, which would be a bit of a pain to do using just the raw data.

To see how Tseng used contour plots, check out his study: 

Tseng ZJ (2013) Testing Adaptive Hypotheses of Convergence with Functional Landscapes: A Case Study of Bone-Cracking Hypercarnivores. PLoS ONE 8(5): e65305. doi:10.1371/journal.pone.0065305

How to Create and Read an I-MR Control Chart


When it comes to creating control charts, it's generally good to collect data in subgroups, if possible. But sometimes gathering subgroups of measurements isn't an option. Measurements may be too expensive. Production volume may be too low. Products may have a long cycle time.

In many of those cases, you can use an I-MR chart. Like all control charts, the I-MR chart has three main uses: 

  1. Monitoring the stability of a process.
    Even very stable processes have some variation, and when you try to fix minor fluctuations in a process you can actually cause instability. An I-MR chart can alert you to changes that reveal a problem you should address.
     
  2. Determining whether a process is stable and ready to be improved.
    When you change an unstable process, you can't accurately assess the effect of the changes. An I-MR chart can confirm (or deny) the stability of your process before you implement a change. 
     
  3. Demonstrating improved process performance.
    Need to show that a process has been improved? Before-and-after I-MR charts can provide that proof. 

The I-MR is really two charts in one. At the top of the graph is an Individuals (I) chart, which plots the values of each individual observation, and provides a means to assess process center.

I chart

The bottom part of the graph is a Moving Range (MR) chart, which plots process variation as calculated from the ranges of two or more successive observations. 

MR Chart

The green line on each chart represents the mean, while the red lines show the upper and lower control limits. An in-control process shows only random variation within the control limits. An out-of-control process has unusual variation, which may be due to the presence of special causes.
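Those limits aren't mysterious: for an I-MR chart they come from the average moving range. Here's a sketch of the standard calculation in Python, using hypothetical pH values rather than the data set discussed below; the constants d2 = 1.128 and D4 = 3.267 are the usual control chart constants for ranges of size 2:

  import numpy as np

  # Hypothetical pH measurements, one per batch, in collection order
  ph = np.array([6.0, 5.9, 6.1, 5.8, 6.0, 6.2, 5.9, 6.0, 5.7, 6.1])

  mr = np.abs(np.diff(ph))            # moving ranges of consecutive points
  mr_bar = mr.mean()

  # Sigma is estimated as MR-bar / d2, with d2 = 1.128 for ranges of size 2
  sigma = mr_bar / 1.128
  i_center = ph.mean()
  i_ucl, i_lcl = i_center + 3 * sigma, i_center - 3 * sigma
  mr_ucl = 3.267 * mr_bar             # D4 = 3.267 for n = 2; the MR LCL is 0

  print(i_lcl, i_center, i_ucl, mr_ucl)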

Creating the I-MR Chart

Let's say you work for a chemical company, and you need to assess whether the pH value for a custom solution is within  acceptable limits. The solution is made in batches, so you can only take one pH measurement per batch and the data cannot be subgrouped. This is an ideal situation for an I-MR chart. 

pH data

So you measure pH for 25 consecutive batches. Preparing this data for the I-MR chart couldn't be easier: just  list your measurements in a single column, in the order you collected them. (To follow along, please download this data set and, if you don't already have it, the free trial of our statistical software.) 

Choose Stat > Control Charts > Variables Charts for Individuals > I-MR and select pH as the Variable. If you enter more than one column in Variables, no problem -- Minitab will simply produce multiple I-MR charts. Dialog box options let you add labels, split the chart into stages, subset the data, and more.

You'll want to catch any possible special causes of variation, so click I-MR Options, and then choose Tests. Choose "Perform all tests for special causes," and then click OK in each dialog box.

tests for special causes

The tests for special causes detect points beyond the control limits and specific patterns in the data.

  • When an observation fails a test, Minitab reports it in the Session window and marks it on the I chart. A failed point indicates a nonrandom pattern in the data that should be investigated.
     
  • When no points are displayed under the test results, no observations failed the tests for special causes.

Interpreting the I-MR Chart, part 1: The MR Chart

Here's the I-MR chart for your pH data: 

I-MR Chart of pH

First examine the MR chart, which tells you whether the process variation is in control. If the MR chart is out of control, the control limits on the I chart will be inaccurate. That means any lack of control in the I chart may be due to unstable variation, not actual changes in the process center. If the MR chart is in control, you can be sure that an out-of-control I chart is due to changes in the process center.

Points that fail Minitab's tests are marked with a red symbol on the MR chart. In this MR chart, the lower and upper control limits are 0 and 0.4983, and none of the individual observations fall outside those limits. The points also display a random pattern. So the process variation is in control, and it is appropriate to examine the I Chart.

Interpreting the I-MR Chart, part 2: The I Chart

The individuals (I) chart assesses whether the process center is in control. Unfortunately, this I chart doesn't look as good as the MR chart did: 

I chart of pH

Minitab conducts up to eight special-cause variation tests for the I chart, and marks problem observations with a red symbol and the number of the failed test. The graph tells you three observations failed two tests. The Minitab Session Window tells you why each point was flagged: 

Test Results for I Chart

Observation 8 failed Test 1, which tests for points more than 3 standard deviations from the center line -- the strongest evidence that a process is out of control. Observations 20 and 21 failed Test 5, which flags two out of three consecutive points that fall more than two standard deviations from the center line on the same side. Test 5 provides additional sensitivity for detecting smaller shifts in the process mean.

This I-MR chart indicates that the process average is unstable and the process is out of control, possibly due to the presence of special causes.

Now What? 

The I-MR chart for pH may not be what you wanted to see, but now you know there may be a problem that needs to be addressed. That's the whole purpose of the control chart!  Next, you can try to identify and correct the factors contributing to this special-cause variation. Until these causes are eliminated, the process cannot achieve a state of statistical control.


Minitab's LinkedIn Group: A Great Place to Talk Stats


LinkedIn

If you've got questions about quality improvement and statistics, I've got a resource for you: the Minitab Network on LinkedIn. I'm privileged to serve as the moderator of this group, which lets people who use Minitab products communicate and network with like-minded people from around the world.

LinkedIn is the leading social networking site for professionals, and the Minitab Network on LinkedIn has become an excellent way for Minitab users to share ideas and learn from each other. Since we launched the group in August 2008, it's become a very active community of people who share an interest in data and statistics.

It's an honor to help facilitate these discussions, although to be honest not very much facilitation is needed: the group's 6,000+ participants are very cordial and helpful. Every day people post questions and respond to each other with professional courtesy and unwavering support. It’s been great to see it grow. 

The success of this group isn't accidental. We strive to keep the signal-to-noise ratio on the Minitab Network very high. As a rule, we don't permit ads and promotions in the discussion area (not even our own!), and we keep discussions focused on data analysis and/or quality improvement. We also encourage goodwill and civility, so the group isn't plagued by the bickering and flame wars that mar so many Internet discussion groups. 

The Minitab Network has become a de facto "user group" for Minitab products. Early in the personal computing era, user groups sprang up around different products, including Minitab. Group members would meet regularly and help each other get the most benefit from their shared interests.

Frequently, though, these user groups were very small and centered around a particular university or a company, which placed geographic and other constraints on the number and range of people who could participate. The LinkedIn group brings together Minitab users from around the world to share their insights and expertise, and it’s really exciting to see how enthusiastically people have seized onto it.

Recent topics of conversation have included:

  •     Creating macros in Minitab 16
  •     3 Sigma vs. 6 Sigma
  •     Sample selection for Measurement System Analysis
  •     Interpreting the results of a 1-sample t-test
  •     Identifying what kind of regression analysis to use
  •     Customizing Minitab graphs

Many Minitab employees are members of the group, but we don't try to direct conversations or limit what people can say. Whether a participant shares kudos, questions, or complaints, we want to listen and learn along with the other members. 

The LinkedIn group also gives people who use our software a chance to interact directly with representatives of Minitab and to provide valuable feedback and comments. For example, one of our documentation specialists recently asked group members to share the kinds of questions they have when performing capability analysis; the responses are being used to improve future Help content.

However, the Minitab Network is primarily a discussion group and does not replace existing methods for contacting us for product support. If you have a question about your license, or if you’re having trouble using a Minitab product and you want someone from Minitab to respond first, you should contact us directly by visiting http://www.minitab.com/contacts 

In addition to joining the Minitab Network on LinkedIn, enthusiasts of statistics, quality improvement and social media can follow us on Twitter and befriend us on Facebook.

Creating a Chart to Compare Month-to-Month Change


One member of Minitab's LinkedIn group recently asked this question:

I am trying to create a chart that can monitor change by month. I have 2012 data and want to compare it to 2013 data...what chart should I use, and can I auto-update it? Thank you. 

As usual when a question is asked, the Minitab user community responded with some great information and helpful suggestions. Participants frequently go above and beyond, answering not just the question being asked, but raising issues that the question implies.  For instance, one of our regular commenters responded thus: 

There are two ways to answer this inquiry...by showing you a solution to the specific question you asked or by applying statistical thinking arguments such as described by Donald Wheeler et al and applying a solution that gives the most instructive interpretation to the data.

In this and subsequent posts, I'd like to take a closer look at the various suggestions group members made, because each has merits. First up: a simple individuals chart of differences, with some cool tricks for instant updating as new data becomes available. 

Individuals Chart of Differences

An easy way to monitor change month-by-month is to use an individuals chart. Here's how to do it in Minitab Statistical Software, and if you'd like to play along, here's the data set I'm using. If you don't already have Minitab, download the free 30-day trial version.

I need four columns in the data sheet: month name, this year's data, last year's data, and one for the difference between this year and last. I'm going to right-click on the Diff column, and then select Formulas > Assign Formula to Column..., which gives me the dialog box below. I'll complete it with a simple subtraction formula, but depending on your situation a different formula might be called for:

 assign formula to column

With this formula assigned, as I enter the data for this year and last year, the difference between them will be calculated on the fly. 

data set

Now I can create an Individuals Chart, or I Chart, of the differences. I choose Stat > Control Charts > Variables Charts for Individuals > Individuals... and simply choose the Diff column as my variable. Minitab creates the following graph of the differences between last year's data and this year's data: 

Individuals Chart
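If you're scripting rather than working in the worksheet, the same difference column and I-chart limits take only a few lines of Python. The monthly values here are invented stand-ins for the data set above, and the limits use the standard moving-range estimate of sigma (d2 = 1.128 for ranges of size 2):

  import pandas as pd

  # Invented monthly values standing in for the worksheet columns
  df = pd.DataFrame({
      "Month": ["Jan", "Feb", "Mar", "Apr"],
      "ThisYear": [102, 98, 105, 101],
      "LastYear": [97, 99, 101, 100],
  })
  df["Diff"] = df["ThisYear"] - df["LastYear"]   # the assigned-formula column

  # I-chart center line and limits for the differences
  mr_bar = df["Diff"].diff().abs().mean()        # average moving range
  center = df["Diff"].mean()
  ucl = center + 3 * mr_bar / 1.128
  lcl = center - 3 * mr_bar / 1.128
  print(df, center, lcl, ucl)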

Updating the Individuals Chart Automatically

Now, you'll notice that when I started, I only had this year's data through September. What happens when I need to update it for the whole year?  Easy -- I can return to the data sheet in January to add in the data from the last quarter. As I do, my Diff column uses its assigned formula (indicated by the little green cross in the column header) to calculate the differences: 

auto-updated worksheet


Now if I look at the I-chart I created earlier, I see a big yellow dot in the top-left corner.

automatic update for an individuals chart

When I right-click on that yellow dot and choose "Automatic Updates," as shown in the image above, Minitab automatically updates my Individuals chart with the information from the final three months of the year: 

automatically updated i chart

Whoa!  It looks like we might have some special-cause variation happening in that last month of the year...but at least I can use the time I've saved by automatically updating this chart to start investigating that! 

In my next post, we'll try another way to look at monthly differences, again following the suggestions offered by the good people on Minitab's LinkedIn group. 



Creating Charts to Compare Month-to-Month Change, part 2


A member of Minitab's LinkedIn group recently asked how to create a chart to monitor change by month, specifically comparing last year's data to this year's data. My last post showed how to do this using an Individuals Chart of the differences between this year's and last year's data.  Here's another approach suggested by a participant in the group. 

Applying Statistical Thinking

An individuals chart of the differences between this year's data and last year's might not be our best approach. Another approach is to look at all of the data together.  We'll put this year's and last year's data into a single column and see how it looks in an individuals chart. (Want to play along? Here's my data set, and if  you don't already have Minitab, download the free 30-day trial version.)

We'll choose Stat > Control Charts > Variables Charts for Individuals > Individuals... and choose the "2 years" column in my datasheet as the variable. Minitab creates the following I chart: 

i chart of two years

Now we can examine all of the data sequentially and ask some questions about it. Are there outliers? The data seem remarkably consistent, but those points in December (observations 12 and 24) warrant more investigation as potential sources of special cause variation. If investigation revealed a special cause behind these points and justified disregarding them, the outliers could be removed from the calculations for the center line and control limits, or removed from the chart altogether.

What about seasonality, or a trend over the sequence? Neither issue affects this data set, but if they did, we could detrend or deseasonalize the data and chart the residuals to gain more insight into how the data are changing month-to-month.  

I-MR Chart

Instead of an Individuals chart, one participant in the group suggested using an I-MR chart, which provides both the individuals chart and a moving-range chart.  We can use the same single column of data, then examine the resulting I-MR chart for indications of special cause variation. "If not, there's no real reason to believe one year was different than another," this participant suggests. 

Another thing you can do with most of the control charts in Minitab is establish stages.  For example, if we want to look for differences between years, we can add a column of data (call it "Year") to our worksheet that labels each data point by year (2012 or 2013).  Now when we select Stat > Control Charts > Variables Charts for Individuals > I-MR..., we will go into the Options dialog and select the Stages tab.  

I-MR Chart stage dialog

As shown above, we'll enter the "Year" column to define the stages. Minitab produces the following I-MR chart:

I-MR Chart with Stages  

This I-MR chart displays the data in two distinct phases by year, so we can easily see if there are any points from 2013 that are outside the limits for 2012. That would indicate a significant difference. In this case, it looks like the only point outside the control limits for 2012 is that for December 2013, and we already know there's something we need to investigate for the December data.

Time Series Plot 

For the purposes of visual comparison, some members of the Minitab group on LinkedIn advocate the use of a time series plot. To create this graph, we'll need two columns in the data sheet, one for this year's data and one for last year's.  Then we'll choose Graph > Time Series Plot > Multiple and select the "Last Year" and "This Year" columns for our series. Minitab gives us the following plot: 

Time Series Plot

Because the plot of this year's and last year's data are shown in parallel, it's very easy to see where and by how much they differ over time.

Most of the months appear to be quite close for these data, but once again this graph gives us a dramatic visual representation of the difference between the December data points, not just as compared to the rest of the year, but compared to each other from last year to this. 

Oh, and here's a neat Minitab trick: what if you'd rather have the Index values of 1, 2, 3...12 in the graph above appear as the names of the months?  Very easy!  Just double-click on the X axis, which brings up the Edit Scale dialog box. Click on the Time tab and fill it out as follows: 

Edit the time scale of your graph

(Note that our data start with January, so we use 1 for our starting value. If your data started with the month of February, you'd choose to start with 2, etc.)  Now we just click OK, and Minitab automatically updates the graph to include the names of the months:  

Time Series Plot with Months
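The same side-by-side view is easy to sketch in Python, too. The monthly values below are invented for illustration:

  import matplotlib.pyplot as plt

  months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
            "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
  last_year = [20, 21, 19, 22, 20, 21, 23, 22, 21, 20, 19, 35]   # invented
  this_year = [21, 20, 20, 21, 22, 20, 22, 23, 20, 21, 20, 24]   # invented

  plt.plot(months, last_year, marker="o", label="Last Year")
  plt.plot(months, this_year, marker="s", label="This Year")
  plt.legend()
  plt.ylabel("Value")
  plt.show()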

The Value of Different Angles

One thing I see again and again on the Minitab LinkedIn group is how a simple question -- how can I look at change from month to month between years? -- can be approached from many different angles.  

What's nice about using statistical software is that we have speed and power to quickly and easily  follow up on all of these angles, and see what different things each approach can tell us about our data. 


Making a Difference in How People Use Data


Amstat News

A colleague of mine at Minitab, Cheryl Pammer, was recently featured in "A Statistician's Journey," a monthly feature that appears in the print and online versions of the American Statistical Association's Amstat News magazine.  

Each month, the magazine asks ASA members to talk about the paths they took to get to where they are today. Cheryl is a "user experience designer" at Minitab. In other words, she's one of the people who help determine how our statistical software does what it does, and tries to make it as helpful, useful, and beneficial as possible. Cheryl is always looking for ways to make our software better so that the people who use it can get more out of their data. 

It's exciting that one of Minitab's statisticians was selected to be profiled, and it's always great when someone whom you know does great work receives some public recognition. But I was particularly interested to see what Cheryl had to say about her work at Minitab -- you know, what it is that motivates her to come into work every day.  Here's how she answered that question: 

A tremendous amount of data exists out there, most of it being analyzed without the help of a degreed or accredited statistician. As a designer of statistical software, my main goal is to promote good statistical practices by presenting appropriate choices to the software user and displaying results in a meaningful way. It is exciting to know that the work I do makes a difference in how thousands of people will use and interpret the data they have.

This really struck home with me. Before I joined Minitab, I worked in higher education as an editor, and I oversaw a magazine that covered the work of scientists from a wide variety of fields. I needed to keep track of circulation and many other metrics, and I did not have a clue how to do it properly. I muddled through it using spreadsheets, and even pencil and paper, but I never had confidence in my conclusions and always had a nagging suspicion that I'd probably missed something critical...something that would either invalidate any good news I'd seemed to find, or would make data that already didn't look so good even worse. 

I Needed an Assistant for Data Analysis When I Had No Idea How to Do It

Since then, I've come a long way in terms of analyzing data, even completing a graduate degree in applied statistics.  But I remember vividly how it felt to look at a collection of numbers and not have the vaguest idea how to start making sense of it. And I remember seeing research results and analyses and wishing they'd been expressed in some way that was easier to understand if, like me back then, you didn't have a good background in statistics.  

And that's where the Assistant comes in. People like Cheryl designed the Assistant in Minitab Statistical Software to help people like me understand data analysis.  When you select the type of analysis you want to do in the Assistant -- like graphical analysis, hypothesis testing, or regression -- the Assistant guides you through it by asking you questions.  

For example, if you're doing a hypothesis test, the Assistant will ask you whether you have continuous or categorical data. Don't know the difference?  Click on a button and the Assistant will give you a crystal-clear explanation so you can make the right choice. Back when I was trying to figure out how our science magazine was performing, this would have saved me a lot of wasted time.  It also would have made me a lot more sure about the conclusions I reached.  

Describing it doesn't really do it justice, though.  Here's a video that provides a quick overview of how the Assistant works:  

An Assistant for Data Analysis Is Great Even When You Know How

Even though I've learned a lot about analyzing data since my magazine days, I still find the Assistant tremendously helpful because:

A.  I'm usually sharing the results of an analysis with people who don't know statistics, and
B. The Assistant explains those results in very clear language that anyone can understand. 

For ANOVA, capability analysis, measurement systems analysis, and control charts, the Assistant's output includes not only graphs and bottom-line results, but also report cards and summaries that tell you how well your data meet the statistical assumptions for the analysis, and whether there are trouble spots or specific data points you should take a look at.  So if you're explaining the results to your boss, your colleagues, or a group of potential clients, you can present the information and provide assurance that the analysis has followed good statistical practice.  

We've heard the same thing from consultants, Six Sigma black belts, researchers, and other people who know how to wrangle a data set:  these experts certainly can do their analysis without the Assistant, but the Assistant makes it easier to communicate what the analysis means and how reliable it is, both of which are critical. 

Which brings us back to Cheryl -- and her colleagues in Minitab's software development teams -- who work so hard to make data analysis accessible to more people. H. G. Wells famously said "Statistical thinking will one day be as necessary a qualification for efficient citizenship as the ability to read and write."  In a world where so much data is so readily available to all of us, it's an honor to be part of a team working to make statistical thinking and the ability to make better use of that data more available. 

The Value Stream Map: It's Been Around Longer than You Think


value stream map

In looking for the answer to an unrelated quality improvement question the other day, I ran across a blog post that answers a question I'd had for a while: what's the origin of the value stream map? 

A value stream map (VSM) is a key tool in many quality improvement projects, especially those using Lean. The value stream is the collection of all of the activities, both value-added and non-value added, that generate a product or service that meets customer needs. The VSM shows how both materials and information flow as a product or service moves through the process value stream, helping teams visualize where improvements might be made in both flows.

In his post, Michel Baudin traces the history of a process map that accounts for both materials and information back to a text published in 1918, and provides examples and documentation of how this idea has been applied, transformed, and popularized since then.

There are two kinds of value stream maps, a current state map and a future state map. A current state value stream map shows what the actual process looks like at the beginning of a project. It identifies waste and helps you envision an improved future state. The future state map shows what the process should look like at the end of the project. Then, as the future state map becomes the current state map, a new future state map can be created and a plan implemented to achieve it.

Tools for Creating Value Stream Maps

You can use many tools to create a value stream map. At the most basic level, you just need paper and pencil. A group facilitator might use a whiteboard or cover a wall with paper, then give each work team involved in the process color-coded post-it notes. The team members put their tasks on the notes, place them in sequence, then draw lines between steps to show how the work flows. The group adds new steps and adjusts the map until it captures the process in its current state. 

More sophisticated value stream maps can be created with easy-to-use software. There are stand-alone VSM creation tools, and also VSM tools that are part of more comprehensive process improvement software packages.

Minitab offers value stream map tools in Qeystone, our project portfolio management platform for Lean and Six Sigma deployments, as well as Quality Companion, our collection of soft tools for quality improvement projects. This video provides a great overview of how the VSM tool works in Companion:

If you'd like to try this yourself, you can download a free 30-day trial of Quality Companion. We also offer the full PDF of our Quality Companion training manual's lesson on value stream mapping.

A Value Stream Map by Any Other Name...

I won't reiterate all the details on the history of the value stream map here. But I will share a theory about the name "value stream map" that I found particularly interesting. 

The idea of mapping the flow of information and materials through a process clearly predates the modern "value stream map." There are many potential terms that could be (and have been) applied to this tool; so what made the VSM term stick? Baudin attributes it to savvy marketing:

“Process Mapping,” “Materials and Information Flow Analysis,” are all terms that, at best, appeal to engineers. Any phrase with “value” in it, on the other hand, resonates with executives and MBAs... They readily latch on to a concept called “Value Stream Mapping,” even though their eyes would glaze over at the sight of an actual map. While this confusingly abstract vocabulary can be frustrating to engineers, it does serve the vital purpose of getting top management on board. The trick is to know when to use it — in the board room — and when not to — on the shop floor.

Makes you wonder what other quality tools might be renamed for greater appeal in the boardroom...

If you're interested in VSM, I encourage you to read Baudin's full post as well as the comments that follow it; you'll find some great insight, history, and practical information about when and where to use this tool. 

These blog posts also share some helpful tips:

Five Guidelines You Need to Follow to Create an Effective Value Stream Map

Four More Tips for Making the Most of Value Stream Maps

 

Applying Six Sigma to a Small Operation


Using data analysis and statistics to improve business quality has a long history. But it often seems like most of that history involves huge operations. After all, Six Sigma originated with Motorola, and spread to thousands of other businesses after a little-known outfit called General Electric adopted it.

There are many case studies and examples of how big companies used Six Sigma methods to save millions of dollars, slash expenses, and improve quality...but when they read about the big dogs getting those kinds of results, a lot of folks hear a little voice in their heads saying, "Sure, but could it work in my small business?"  

Can Six Sigma Help a Small Business? 

That's why I was so intrigued to find this article published in the TQM Journal in 2012: it shows exactly how Six Sigma methods can be used to benefit a small manufacturing business. The authors of this paper profile a small manufacturing company in India that was plagued with declining productivity. This operation made bicycle chains using plates, pins, bushings, and rollers.

The bushings, which need a diameter between 5.23 and 5.27 mm, had a very high rejection rate: variation in the diameter caused 8 percent of them to be rejected. So the company applied Six Sigma methods to reduce defects in the bushing manufacturing process.

The company used the DMAIC methodology--which divides a project into Define, Measure, Analyze, Improve, and Control phases--to attack the problem. Each step the authors describe in their process can be performed using Minitab Statistical Software and Quality Companion, our collection of "soft tools" for quality projects.

The Define Phase

The Define phase is self-explanatory: you investigate and specify the problem, and detail the requirements that are not being met. In this phase, the project team created a process map (reproduced below in Quality Companion) and a SIPOC (Supplier, Input, Process, Output, Customer) diagram for the bushing manufacturing process.

Process Map Created in Quality Companion by Minitab

The Measure Phase

In the Measure phase, you gather data about the process. This isn't always as straightforward as it seems, though.  First, you need to make sure you can trust your data by conducting a measurement system analysis.

The team in this case study did Gage repeatability and reproducibility (Gage R&R) studies to confirm that their measurement system produced accurate and reliable data. This is a critical step, but it needn't be long and involved: the chain manufacturer's study involved two operators, who took two readings apiece on 10 sample bushings with a micrometer. The 40 data points they generated were sufficient to confirm the micrometer's accuracy and consistency, so they moved on to gathering data about the chain-making process itself.
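
The underlying math is a variance-components analysis. As a rough sketch of that logic (in Python, which is my own substitution; the team used Minitab, and the measurements below are simulated, not their micrometer data), a crossed Gage R&R splits the variation like this:

```python
# Rough sketch of the variance-component logic behind a crossed Gage R&R:
# 10 parts x 2 operators x 2 readings, with simulated measurements.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
rows = []
for part in range(10):
    true_diameter = rng.normal(5.25, 0.01)        # part-to-part variation
    for oper in ("A", "B"):
        for _ in range(2):                        # two readings apiece
            rows.append({"part": part, "oper": oper,
                         "y": true_diameter + rng.normal(0, 0.002)})
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction gives the mean squares we need
aov = anova_lm(smf.ols("y ~ C(part) * C(oper)", data=df).fit())
ms_error = aov.loc["Residual", "mean_sq"]          # repeatability (gage)
ms_po    = aov.loc["C(part):C(oper)", "mean_sq"]
ms_oper  = aov.loc["C(oper)", "mean_sq"]

repeatability   = ms_error
reproducibility = max(ms_oper - ms_po, 0) / (10 * 2)  # operator component
print(f"repeatability: {repeatability:.2e}  reproducibility: {reproducibility:.2e}")
```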

The Analyze Phase

The team then applied a variety of data analysis tools, using Minitab Statistical Software. First they conducted a process capability analysis, taking 20 subgroups of 5 samples each, produced under similar conditions.  The graph shown below uses simulated data with extremely similar, though not completely identical, results to those shown in the TQM Journal article.

process capability curve

One of the key items to look at here is the PPM Total, which equates to the commonly heard DPMO, or defects per million opportunities. In this case, the DPMO is nearly 80,000, or 8 percent.

Another measure of process capability is the Z.bench score, which reports the process's sigma capability. In general terms, a 6 sigma process is one that has just 3.4 defects per million opportunities. Adding the conventional 1.5 Z-shift, this appears to be about a 3-sigma process, or a little over 66,000 defects per million opportunities.
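
To make that arithmetic concrete, here's a minimal sketch (in Python rather than Minitab, with the case study's 8 percent rate hard-coded) of how a defect rate maps to DPMO and an approximate sigma level:

```python
# Minimal sketch: defect rate -> DPMO and sigma level, assuming a
# normally distributed process; the 8% rate comes from the case study.
from scipy.stats import norm

defect_rate = 0.08
dpmo = defect_rate * 1_000_000         # ~80,000 defects per million

z_bench = norm.ppf(1 - defect_rate)    # ~1.41, the long-term Z.bench
sigma_level = z_bench + 1.5            # conventional 1.5 Z-shift: ~2.9 sigma

print(f"DPMO = {dpmo:.0f}, Z.bench = {z_bench:.2f}, sigma = {sigma_level:.2f}")
```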

Clearly, there's a lot of room for improvement, and this preliminary analysis gives the team a measure against which to assess improvements they make to the process. 

At this point, the project team looked carefully at the process to identify possible causes for rejecting bushings. They drew a fishbone diagram that helped them identify four potential factors to analyze: whether the operator was skilled or unskilled, how long rods were used (15 or 25 hours), how frequently the curl tool was reground (after 20 or 30 hours), and whether the rod-holding mechanism was new or old. 

The team then used Minitab Statistical Software to do 2-sample t-tests on each of these factors. For each factor they studied, they collected 50 samples under each condition.  For instance, they looked at 50 bushings made by skilled operators, and 50 made by unskilled operators. They also looked at 50 bushings made with rods that were replaced after 15 hours, and 50 made with rods replaced after 25 hours.

The t-tests revealed whether or not there was a statistically significant difference between the two conditions for each factor; if no significant difference existed, team members could conclude that factor didn't have a large impact on bushing rejection.

This team's hypothesis tests indicated that operator skill level and curl-tool regrinding did not have a significant effect on bushing rejection; however, 15-hour vs. 25-hour rod replacement and new vs. old rod-holding mechanisms did.  Thus, a fairly simple analysis helped them identify which factors they should focus their improvement efforts on.
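
If you want to try the same kind of comparison outside Minitab, here's a minimal 2-sample t-test sketch in Python; the diameters are simulated stand-ins for the team's 50-samples-per-condition data:

```python
# Minimal 2-sample t-test sketch; the diameters below are simulated,
# not the case study's actual measurements.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
rod_15h = rng.normal(5.250, 0.010, 50)   # rods replaced after 15 hours
rod_25h = rng.normal(5.258, 0.010, 50)   # rods replaced after 25 hours

t_stat, p_value = ttest_ind(rod_15h, rod_25h)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 suggests the replacement interval matters.
```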

In my next post, I'll review how the team used Minitab to apply what they learned in the Define, Measure, and Analyze phases of their project to the final two phases, Improve and Control, and the benefits they saw from the project.

Applying Six Sigma to a Small Operation, Part 2


In my previous post, I shared a case study of how a small bicycle-chain manufacturing company in India used the DMAIC approach to Six Sigma to reverse declining productivity.

After completing the Define, Measure, and Analyze phases, the team had identified the important factors in the bushing creation process. Armed with this knowledge, they were now ready to make some improvements.

The Improve Phase

In the Improve phase, the team applied a statistical method called Design of Experiments (DOE) to optimize the important factors they'd identified in the initial phases.

Most of us learn in school that to study the effects of a factor on a response, you hold all other factors constant and change the one you're interested in. But DOE lets you change more than a single variable at a time. This minimizes the number of experimental runs necessary to get meaningful results, so you can reach conclusions about multiple factors efficiently and cost-effectively.

DOE has a reputation for being difficult, but statistical software makes it very accessible. In Minitab, you just select Stat > DOE > Create Factorial Design..., select the number of factors you want to study, then choose from available designs based on your time and budget constraints.

In this case, the project team used Minitab to design a 2x2 experiment, one with two levels for each of the two factors under examination. They did two replicates of the experiment, for a total of eight runs. The experimental design and the measured diameter (the response) for each run are shown in the data sheet below:

DOE worksheet

Once they'd collected the data, the team used Minitab to create plots of the main effects of both factors.

main effects plot for diameter

The slope of the lines on a main effects plot indicates how large an effect the factor has on the response: the steeper the slope, the greater the impact. The plots above indicate that replacing the rod at 15 hours has a minor effect, while using a new rod-holding mechanism has a greater effect.

The team also created an interaction plot that showed how both factors worked together on the response variable:

Interaction plot for Diameter

Parallel lines on an interaction plot indicate that no interaction between factors is present. Since the lines in this plot intersect, there is an interaction. As the research team put it in their paper, this means "the change in the response mean from the low to the high level of rod replacement depends on the level of rod-holding mechanism."

These analyses enabled the team to identify the important factors in creating bushings that fit inside the required limits, and indicated where they could adjust those factors to improve the manufacturing process.
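
As a rough illustration of the model behind those plots, here's how you might fit a replicated 2x2 factorial in Python; the run data are invented, not the team's worksheet:

```python
# Sketch: fit a replicated 2x2 factorial with an interaction term.
# Factor levels match the case study; the diameters are invented.
import pandas as pd
import statsmodels.formula.api as smf

runs = pd.DataFrame({
    "rod_hours": [15, 15, 25, 25, 15, 15, 25, 25],
    "mechanism": ["new", "old", "new", "old", "new", "old", "new", "old"],
    "diameter":  [5.25, 5.27, 5.26, 5.29, 5.25, 5.28, 5.26, 5.30],
})

model = smf.ols("diameter ~ C(rod_hours) * C(mechanism)", data=runs).fit()
# Treatment-coded coefficients: the interaction term shows whether the
# effect of rod replacement depends on the rod-holding mechanism.
print(model.params)
```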

The Control Phase

Once the team's recommended improvements had been implemented, it was time to gather data about the new process and assess whether it had made a difference in the bushing rejection rate.

The team again collected 20 subgroups of 5 samples each (n=100) from bushings created using the improved process.  (Once again, we have used simulated data that match the parameters of the team's actual data, so the results are extremely similar but not completely identical to those shown in the original report.) The results of the capability analysis are shown below:

Capability Analysis of New Process

The PPM -- the number of defects per million opportunities -- fell to 0.02, while the Z.bench (sigma capability) score reached 5.52. That's a tremendous improvement over the original process's 8% rejection rate and 1.4 Z.bench score!

That's not quite the end of the story, though: the Control phase doesn't really end, because the owner of the process that's been improved needs to ensure that the improvements are sustained. To do this, the organization used X-bar R control charts to ensure that the improved process remained on track. 

Six Sigma Project Results

So, did this project have a positive impact on the bottom line of this small manufacturing enterprise?  You bet.  Implementing the team's recommendations brought the sigma level up to about 5.5 and reduced the monthly bushing rejection rate by more than 80,000 PPM.

That worked out to a cost savings of about $120,000 per year. For a business of any size, that's a significant result. 

Visit our case studies page for more examples of how different types of organizations have benefited from quality improvement projects that involve data analysis.

Say "I Love You" with Data on Valentine's Day


When we think about jobs with a romantic edge to them, most of us probably think of professions that involve action or danger.  Spies, soldiers, cops, criminals -- these are the kinds of professions romantic leads have. Along with your occasional musician, reporter, or artist, who don't have the action but at least bring drama.

But you know who never shows up as a romantic lead?  Quality improvement professionals, that's who.  Can you name just one movie that features a dedicated data analyst or quality practitioner as the love interest...just one?  No, you can't.  Doesn't exist.

Love of Quality: The Greatest Love of All?

I guess screenwriters think statisticians and people in the quality industry have no love lives at all, but those of us who work in the sector know the passion and romance involved in optimizing a process, and the beauty inherent in a control chart free of special-cause variation.

Since it's Valentine's Day tomorrow, here's a fun little diversion that lets you share your love with a little data. Grab this data set, open it in Minitab Statistical Software, and select  Graph > Scatterplot.  Click the option for Simple scatterplot, and select "Passion" as your Y variable, and "Devotion" as your X variable.

Then send the resulting scatterplot to your sweetie:

Valentine's Day Scatterplot

You can probably expect to receive an e-mail or phone call from the recipient, asking you just what this data is supposed to mean. 

Explain that you thought the pattern in the data was clear, but you'll send them a revised graph that draws the connections.  Then send 'em a second graph of the data, which you've adjusted with Minitab's graph editing tools to connect the dots strategically:

Be Mine scatterplot

If you're already a Minitab user, you probably know that these graphs are very easy to customize, so you can tailor the graph just the way you -- or your beloved -- like it.  For instance, if you know she's crazy about script fonts and the color pink, something like this might work:

pink be mine

Of course, you would do well to celebrate your love in other ways, too...flowers or dinner, for instance. 

A few years ago my colleague Carly came up with this scatterplot.  The data for this heart is included in the data set linked above, if you prefer this more streamlined approach:

Minitab Scatterplot
 

And she even threw in a time-series plot of her heartbeat -- now that's romantic! 

Minitab Time Series Plot
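
If you'd like to script a similar valentine outside Minitab, a classic parametric heart curve takes only a few lines. This Python sketch is my own stand-in, not the data set linked above:

```python
# Playful sketch: generate heart-shaped (x, y) points with the classic
# parametric heart curve, then scatter-plot them.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2 * np.pi, 200)
x = 16 * np.sin(t) ** 3
y = 13 * np.cos(t) - 5 * np.cos(2 * t) - 2 * np.cos(3 * t) - np.cos(4 * t)

plt.scatter(x, y, color="crimson", s=12)
plt.title("Be Mine")
plt.axis("off")
plt.show()
```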

If you've never edited Minitab graphs before, it's easy. To change the colors and fonts of your graphs, just double-click the graph attributes you'd like to edit. Clicking Custom on the various tabs lets you customize fill patterns and colors, borders and fill line colors, etc.

You can also change the default color and font styles Minitab uses in the Tools menu:

1. Select Tools > Options. Click Graphics and the + sign to see more options:



2. Click Regions, then choose the graph elements to customize.

3. Change the font used in your graph labels using Frame Elements (also under Graphics).

Happy Valentine’s Day!
 

(We Just Got Rid of) Three Reasons to Fear Data Analysis

Today our company is introducing Minitab 17 Statistical Software, the newest version of the leading software used for quality improvement and statistics education. 
 
So, why should you care? Because important people in your life -- your co-workers, your students, your kids, your boss, maybe even you -- are afraid to analyze data. 
 
There's no shame in that. In fact, there are pretty good reasons for people to feel some trepidation (or even outright panic) at the prospect of making sense of a set of data.

I know how it feels to be intimidated by statistics. Not long ago, I would do almost anything to avoid analyzing data. I wanted to know what the data said -- I just didn't believe I was capable of analyzing it myself. 

So to celebrate the release of our new software, I'm going to share my three top fears about analyzing data.  And I'll talk about how Minitab 17 can help people who are struggling with dataphobia.
Fear #3:  I Don't Even Know Where to Start Analyzing this Data.
Writers confront a lurking terror each time they touch the keyboard. It's called "The Blank Page," or maybe "The Blank Screen," and it can be summed up in a simple question: "Where do I start?"  I know that terror well...but at least when confronting the blank page, I always had confidence that I could write.
 
When it came to analyzing data, not only was I not sure where to start, I also had no confidence that I'd be able to do it. I always envisioned getting off on the wrong foot with my analysis, then promptly stumbling straight off some statistical cliff to plunge into an abyss of meaningless numbers.
 
You can understand why I tried to avoid this.
 
We want to help people overcome those kinds of qualms. Minitab 17 does this by expanding the reach of the Assistant, a menu that guides you through your analysis and helps you interpret your results with confidence.
 
Man, I wish the Assistant had been there when I started my career.
 
The Assistant can guide you through 9 types of analysis. But what if you don't remember what any of those analyses do?  No problem. The Assistant's tool tips  explain exactly what each analysis is used for, in plain language.
 
If I had data about the durability of four kinds of paper, the explanation of Hypothesis Tests would grab my attention:
 
Hypothesis Test - Assistant Menu
 
Of course, if you already know a thing or two about statistics, you know there's more than one kind of hypothesis test. The Assistant guides you through a decision tree so you can identify the one that's right for your situation, based on the kind of data you have and your objectives. If you can't answer a question, the Assistant provides  information so you can respond correctly, such as illustrated examples that help you understand how the question relates to your own data.
 
The Assistant leads me to One-way ANOVA to compare my paper samples.
 
Now I know where to start my analysis.  But I still face....
Fear #2:  I Don't Know Enough about Statistics to Get All the Way Through this Analysis.

Getting started is great, but what if you're not sure how to continue?

Fortunately, after you've chosen the right tool, the Assistant comes right out and tells you how to ensure your analysis is accurate. For example, it offers you this checklist for doing a one-way ANOVA:

ANOVA guidelines
 
The Assistant provides clear guidelines, including how to set up, collect, and enter your data, and more.
 
What's more, the Assistant's dialogs are simple to complete. No need to guess about what you should enter, and even relatively straightforward concepts like Alpha value are phrased as common-sense questions: "How much risk are you willing to accept of concluding there are differences when there are none?"
 
ANOVA dialog
 
The Assistant will help you finish the analysis you start. But my biggest fear about data is still waiting...
Fear #1:  If I Reach the Wrong Conclusion, I'll Make a Fool of Myself!

Once you finish your analysis, you must interpret what it means, and then you usually need to explain it to other people. 

This is where the Assistant really shines, by providing a series of reports that help you understand your analysis.

Take a look at the summary report for my ANOVA below and tell me if the means of my four paper samples differed.

ANOVA summary report
 
The bar graph in the left corner explicitly tells me YES, the means differ, and it gives me the p-value, too...but I don't need to interpret that p-value to draw a conclusion. I don't even need to know what a p-value is. I do know what's important: that the means are different. 
 
This summary report also tells me which means are different from each other.  With this report, I could tell my boss that we should avoid paper #2, which has a low durability compared to the others, but that there's not a statistically significant difference in durability between papers 1, 3, and 4, so we could select the least expensive option.
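
For the curious, here's roughly what the same test looks like in code: a minimal one-way ANOVA sketch in Python with four invented paper samples. The Assistant's report layers the assumption checks and plain-language interpretation on top of numbers like these:

```python
# Minimal one-way ANOVA sketch; the four durability samples are invented.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(7)
paper1 = rng.normal(20, 2, 8)
paper2 = rng.normal(15, 2, 8)    # the low-durability paper
paper3 = rng.normal(21, 2, 8)
paper4 = rng.normal(20, 2, 8)

f_stat, p_value = f_oneway(paper1, paper2, paper3, paper4)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value says at least one paper's mean durability differs.
```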
 
In my early career, a tool like this would have made all the difference when questions about data came up. I wouldn't have needed to avoid it.
What If I'm Already an Experienced Data Analyst?
Today I know enough about data analysis that I could easily run the ANOVA without the Assistant, but I still like to use it.  Why?  Because the simplicity and clarity of the Assistant's output and reports are perfect for communicating the results of my analysis to people who fear statistics the same way I used to.
 
And as you probably know, there are lots of us out there.
 
I hope you'll give the 30-day trial version of Minitab 17 a try, and let us know how you like it.  We've even put together a series of fun statistical exercises you can do with the Assistant to get started.  

Got Good Judgment? Prove It with Attribute Agreement Analysis


Many Six Sigma and quality improvement tools could be applied in other areas. For example, I wonder whether my son's teachers could benefit from a little attribute agreement analysis. 

He seemed frustrated the other day when I picked him up at school. He'd been working on a presentation that needed to be approved by his teachers. (My son attends a charter school, and each class is taught by a two-person teaching team.)

"What's wrong?" I asked when he clambered into the car with a big sigh.

My son explained that he'd given the presentation to teacher Jennifer that morning. A few minor suggestions aside, she thought it was fine. Jennifer told him the presentation was ready to deliver.

But when he gave the presentation to Jeff in the afternoon, the feedback was very different. Jeff felt the content of the presentation was too vague, and that my son needed to do more research and add more information. Jeff told him the presentation wasn't acceptable.

And because Jennifer had already left for the day, there wasn't a chance to reconcile these very different opinions.

No wonder my son felt frustrated.

The Challenge of Judging Attributes Consistently

We all need to make judgments every day. Some are fairly inconsequential, such as whether you think a given song on the radio is good or bad. But judgments we make at work can have profound impacts on customers, coworkers, clients, employees...or students.

Inspectors classify parts as good or bad. Employment screeners select applicants they think are worth interviewing. And instructors decide whether a student's work is acceptable. In each case, judgments are made about one or more attributes that can't easily be measured objectively.

That's where the problems start. One synonym for "judgment" is "opinion," and peoples' opinions don't always match. That's not always a problem: If I like a song and you don't, it's not a big deal. But when two or more people have contradictory assessments of critical things, disagreement can cause real problems. The quality of a business' parts or service can vary from day to day, or even from shift to shift. Customers' experiences can be very inconsistent from one day to the next.

As if different judgments from different people aren't problematic enough, we also have a great capacity for disagreeing with ourselves. And in many cases we're inconsistent without even recognizing it: if you're inspecting parts that all look the same, are you sure you'd judge the same part the same way every time? And can you be sure you're inspecting parts consistently with your fellow inspectors?

Or, in the case of my son's teachers, how can you be sure your assessment of a student's work is consistent with your own judgments, and with those of your fellow instructors?

Benefits of Attribute Agreement Analysis

These situations can be illuminated by Attribute Agreement Analysis. Attributes are difficult to measure -- that's why we rely on judgments instead of objective measurements to assess them -- but we can collect data that reveals whether different people assign attributes to an item consistently, and whether an individual makes the same judgment when assessing the same item at different times.

Attribute Agreement Analysis can tell you whether and where you're getting it wrong. Knowing this helps everyone in the process to make better and more consistent judgments.

The results of an Attribute Agreement Analysis may indicate that your team judges attributes very consistently, and that you can be confident in how you're evaluating items. Alternatively, you may find that one or two team members make very different judgments than others, or that you don't always rate the same item the same way.

Identifying those issues gives you the opportunity to make improvements, through training, developing clearer standards, or other actions.

If my son's teachers did an Attribute Agreement Analysis, they might find they're not on the same page about what makes a good presentation. If they knew that was the case, they could then develop clearer and more consistent standards so they could more fairly assess their students' work.

How to Do an Attribute Agreement Analysis

There are two main steps in an Attribute Agreement Analysis:

  1. Set up your experiment and collect the data
  2. Analyze the data and interpret the results

You can use the Assistant in Minitab Statistical Software to do both. If you're not already using it, you can try Minitab free for 30 days.  

The Assistant gives you an easy-to-follow Attribute Agreement Analysis worksheet creation tool and even lets you print out data collection forms for each participant and each trial:

Attribute Agreement Analysis Worksheet Creation

Collect your data, then use the Assistant to analyze it and give you clear interpretations of what your results mean.

See a step-by-step breakdown of how it's done in this QuickStart exercise for Minitab 17, in which a family uses Attribute Agreement Analysis to discover the source of their disagreements about dinner. The example includes instructions, a quick video summary, and a downloadable data set so you can try the analysis yourself.
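
For a quick feel for the arithmetic involved, here's a minimal sketch of between-appraiser agreement using Cohen's kappa in Python; the Jennifer/Jeff ratings are invented, and a full Attribute Agreement Analysis goes well beyond this single statistic:

```python
# Minimal between-appraiser agreement sketch: Cohen's kappa for two
# appraisers rating the same 12 items (ratings invented).
from sklearn.metrics import cohen_kappa_score

jennifer = ["pass", "pass", "fail", "pass", "fail", "pass",
            "pass", "fail", "pass", "pass", "fail", "pass"]
jeff     = ["pass", "fail", "fail", "pass", "fail", "fail",
            "pass", "fail", "pass", "fail", "fail", "pass"]

kappa = cohen_kappa_score(jennifer, jeff)
print(f"kappa = {kappa:.2f}")   # 1.0 = perfect agreement, 0 = chance level
```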

Where could you use Attribute Agreement Analysis in your work or personal life?

Control Chart Tutorials and Examples


The other day I was talking with a friend about control charts, and I wanted to share an example one of my colleagues wrote on the Minitab Blog.  Looking back through the index for "control charts" reminded me just how much material we've published on this topic.

Whether you're just getting started with control charts, or you're an old hand at statistical process control, you'll find some valuable information and food for thought in our control-chart related posts. 

Different Types of Control Charts

One of the first things you learn in statistics is that when it comes to data, there's no one-size-fits-all approach. To get the most useful and reliable information from your analysis, you need to select the type of method that best suits the type of data you have.

The same is true with control charts. While there are a few charts that are used very frequently, a wide range of options is available, and selecting the right chart can make the difference between actionable information and false (or missed) alarms.

What Control Chart Should I Use? offers a brief overview of the most common charts and a discussion of how to use the Assistant to help you choose the right one for your situation. And if you're a control chart neophyte who wants more background on why we use them, the blog's introductory posts are a good place to start.

Joel Smith extols the virtues of a less commonly used chart, while Greg Fox talks about using control charts to track rare events.

Dawn Keller discusses the distinction between P' charts and their cousins, a topic Tammy Serensits explores as well.

And it's good to remember that things aren't always as complicated as they seem: sometimes a simple solution can be just as effective as a more complicated approach.

Control Chart Tutorials

Many of our Minitab bloggers have talked about the process of choosing, creating, and interpreting control charts under specific conditions. If you have data that can't be collected in subgroups, you may want to learn about charts for individual observations.

If you do have data collected in subgroups, you'll want to understand how subgrouping affects your chart.

It's often useful to look at control chart data in calendar-based increments, and several posts discuss taking the monthly approach.

If you want to see the difference your process improvements have made, there are posts covering that, too.

While the basic idea of control charting is very simple, interpreting real-world control charts can be a little tricky. If you're using Minitab 17, be sure to check out the post about a great new feature in the Assistant.

Finally, one of our expert statistical trainers offers his own suggestions for working with control charts.

Control Chart Examples

Control charts are most frequently used for quality improvement and assurance, but they can be applied to almost any situation that involves variation.

My favorite example of applying the lessons of quality improvement in business to your personal life involves Bill Howell, who applied his Six Sigma expertise to the (successful) management of his diabetes. Find out how he uses control charts to help manage the condition.

Some of our bloggers have applied control charts to their personal passions, including holiday candies and bicycling.

If you're into sports, see how Jim Colton used control charts to reveal insights there, too. Or look to the cosmos for a more celestial application. And finally, compulsive readers like myself might be interested to see how relevant control charts are to literature, as Cody Stevens illustrates.

How are you using control charts?

When Will I Ever See This Statistics Software Again?


Minitab Statistical Software was born out of a desire to make statistics easier to learn: by making the calculations faster and easier with computers, the trio of educators who created the first version of Minitab sought to free students from intensive computations to focus on learning key statistical concepts. That approach resonated with statistics instructors, and today Minitab is the standard for teaching and learning statistics at more than 4,000 universities all over the world.

But many students seem to believe Minitab is used only in education. Search Twitter for "Minitab," and you're likely to find a few students grousing that nobody uses Minitab Statistical Software in the "real world."

Those students are in for a big shock after they graduate. Organizations like Boeing, Dell, General Electric, Microsoft, Walt Disney, and thousands more worldwide rely on Minitab software to help them improve the quality of their products and services.

Savvy instructors already know learning with Minitab can give students an advantage in the job market.

Stories of How Data Analysis Made a Real-World Difference

In my job, I get to talk with professionals about how they use our software in their work. I've interviewed scientists, engineers, miners, shop stewards, foresters, Six Sigma experts, service managers, bankers, utility executives, soldiers, civil servants, and dozens of others.

The statistical methods they use vary widely, but a common thread running through all of their experiences reveals a critical link between Minitab's popularity in the academic world and its widespread application in so many different businesses and industries. Virtually every person I talk to about our software mentions something about "ease of use."  

That makes a lot of sense: Minitab wasn't the first statistical software package, but it was the first statistical software package designed with the express goal of being easy to use. That led to its quick adoption by instructors and students, and those students brought Minitab with them into the workplace. And for more than 40 years, professionals have been using Minitab to solve challenges in the real world.

In case you're looking for examples, here are several of our favorite stories about how people have used Minitab:   

  • U.S. Army (Military): Pareto, Before/After Capability
  • Rode Kruis and CWZ (Hospital): Boxplot, Pareto Chart
  • Belgian Red Cross (Healthcare): Histogram, Probability Plot
  • BetFair (Sports Betting): Interaction Plot, Capability Analysis, I-MR Chart
  • Ford Motor Company (Automotive): Design of Experiments (DOE)
  • U.S. Bowling Congress (Sports and Leisure): Scatterplot
  • Six Sigma Ranch (Wine): Attribute Agreement Analysis, I-MR Chart
  • Newcrest Mining (Mining): Individual Value Plot
  • NASCAR (Car Racing): Design of Experiments (DOE)

Have you used Minitab software on the job?  We'd love to hear your story!

"Hidden Helpers" in Minitab Statistical Software


Minitab Statistical Software offers many features that can save you time and effort when you’re learning statistics or analyzing data. However, when we demonstrate many of these short cuts, tools, and capabilities at shows and events, we find that even some longtime users aren’t aware of them.

I asked members of our sales team and technical support staff to list some of Minitab’s most helpful, yet frequently overlooked features. How many do you use—or want to start using?

Can You Repeat That?

Frequently, you’ll need to modify or re-run some part of an analysis you conducted. You can easily return to your last dialog box by pressing CTRL+E.  

What if you need more than one version of a graph?  Maybe you're presenting your results to two different audiences, and you'd like to highlight different factors for each. Use Editor > Duplicate Graph to create an identical copy of the original graph, which you can then tailor to suit each audience.

duplicate graphs in minitab

It’s also easy to create new graphs using different variables while retaining all of your graph edits. With a graph or control chart active, choose Editor > Make Similar Graph to make a graph that retains all properties of your original graph but uses different columns.

Have It Your Way

To customize menus and toolbars, choose Tools > Customize. You can add, delete, move, or edit menus and toolbars; add buttons to Minitab that you can simply click on to run macros; and set keystrokes for commands.

You Can Take It With You

You can specify default settings using Tools > Options. Then store all your personalized settings and customizations in a profile (using Tools > Manage Profiles) that you can use whenever you choose and share with colleagues.

Manipulating Data

Need to change the format of a column? For example, do you need to convert a text column to numeric format for your analysis?  Just choose Data > Change Data Type and select the appropriate option.

May I Take Your Order?

Have you ever created a graph and wished you could switch the order of the results? For instance, you might want to change “High, Medium, Low” to “Low, Medium, High”.  To display your results in a specific order, right-click on the column used to generate the output and choose Column > Value Order. This lets you set the value order for a text column using an order you define. The value order lets you control the order of groups on bar charts and other graphs, as well as tables and other Session Window output.

Help Is Just a Click Away

If you’ve never clicked on Minitab’s Help menu, you’re missing a tremendous collection of resources. Of course you’ll find guidance about how to use Minitab software there, including step-by-step tutorials. You’ll also find:

  • Minitab’s Statistical Glossary.  This comprehensive, illustrated glossary covers all areas of Minitab statistics. Each definition contains practical, easy-to-understand information.
  • StatGuide. You’ve run an analysis, but what does it mean? StatGuide explains how to interpret Minitab results, using preselected examples to explain your output.
  • A list of Methods and Formulas
  • Links to helpful Internet resources, including our extensive Answers Knowledgebase.

And if you don’t find the answers you need, you can contact Minitab’s free Technical Support for assistance from highly-skilled specialists with expertise in both computing and statistics.

Do you have any favorite "hidden helpers" in Minitab? 

 

I Think I Can, I Know I Can: A High-Level Overview of Process Capability Analysis


Remember "The Little Engine That Could," the children's story about self-confidence in the face of huge challenges? In it, a train engine keeps telling itself "I think I can" while carrying a very heavy load up a big mountain. Next thing you know, the little engine has done it...but until that moment, the outcome was uncertain.

It's a wonderful story for teaching kids about self-confidence. But from a quality and customer service viewpoint, it's a horror story: if your business depends on taking the load up the hill, you want to know you can do it.

That's where capability analysis comes in. 

When customers ask if you're able to meet their requirements, process capability analysis lets you reply, "I know we can."

How Do You Prove Your Process Is Capable?

You want to determine if your part-making process can meet a customer's specification limits—in other words, can you produce good parts?  Statistically speaking, we assess the capability to make good parts by comparing the width of the variation in your process with the width of the specification limits.

The first step in capability analysis is to make sure your process is in statistical control, or producing consistently. If it's not, any estimates of process capability you make won't be reliable.

The results of a capability analysis usually include capability histograms and capability plots that help you visually assess the distribution of your data and verify that the process is in control.

It also includes capability indices, which are ratios of the specification tolerance to the natural process variation. Once you understand them, capability indices are a simple way of assessing process capability. Because they reduce process information to a single number, you can also use capability indices to compare the capability of one process with another.
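
As a simplified illustration of what those ratios look like (this Python sketch uses the overall standard deviation and skips the within/overall distinction that Minitab handles for you; the spec limits and data are invented):

```python
# Simplified capability-index sketch: Cp and Cpk from simulated data.
import numpy as np

diameters = np.random.default_rng(3).normal(5.25, 0.008, 100)
lsl, usl = 5.23, 5.27                    # invented specification limits

sigma = diameters.std(ddof=1)            # overall standard deviation
cp  = (usl - lsl) / (6 * sigma)          # potential capability: spread only
cpk = min((usl - diameters.mean()) / (3 * sigma),
          (diameters.mean() - lsl) / (3 * sigma))   # penalizes off-center
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```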

This video offers a quick demonstration of a simple capability analysis:

Selecting the Right Type of Capability Analysis

You need to select the right capability analysis for your data based on its distribution. Depending on the nature and the distribution of your process data, you can perform capability analysis for:

  • normal or nonnormal probability models (for measurement data)
  • normal data that might have a strong source of between-subgroup variation
  • binomial or Poisson probability models (for attributes or count data)

Minitab Statistical Software will help you identify the distribution that fits your data, or transform your data to follow a normal distribution, before a capability analysis.

Capability analysis using a normal probability model provides a more complete set of statistics, but it assumes that the data follow an approximately normal distribution, and come from a stable process.

If you apply normal capability analysis to badly skewed data, you may drastically over- or underestimate the defects a process will produce. In this case, it's better to select a probability model based on a nonnormal distribution that best fits your data.

Alternatively, you might transform the data to better approximate the normal distribution. Minitab can transform your data using the Johnson transformation or Box-Cox power transformation.
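
Here's a minimal sketch of the Box-Cox idea using scipy (my substitution; Minitab performs the equivalent, plus the Johnson transformation, for you), with invented skewed data:

```python
# Minimal Box-Cox sketch: transform skewed (positive) data toward
# normality; the data here are invented.
import numpy as np
from scipy.stats import boxcox

skewed = np.random.default_rng(5).lognormal(mean=0, sigma=0.6, size=200)
transformed, lam = boxcox(skewed)        # lam = estimated lambda

print(f"estimated lambda: {lam:.2f}")
```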

The important thing to keep in mind is that in both normal and nonnormal capability analysis, the validity of the results depends on the validity of the assumed distribution.

Additional Considerations in Capability Analysis

Typically, data for a capability analysis consists of groups of samples, produced over a short period, that are representative of the output from the process. Collecting small subgroups of samples under the same conditions, and then analyzing the variation within these subgroups, lets you estimate natural variation in the process. You can also use individual item data to assess capability, as long as it's been collected over a long enough time to account for different sources of variation.

Guidelines typically recommend getting at least 100 total data points—such as 25 subgroups of size 4—to obtain reasonably precise capability estimates.
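
To illustrate how the within-subgroup estimate works, here's a sketch of the classic Rbar/d2 calculation for 25 subgroups of size 4 (simulated data; Minitab offers this and other estimation methods):

```python
# Sketch: estimate short-term (within-subgroup) sigma via Rbar/d2,
# using 25 simulated subgroups of size 4.
import numpy as np

rng = np.random.default_rng(11)
subgroups = rng.normal(5.25, 0.008, size=(25, 4))

r_bar = np.mean(subgroups.max(axis=1) - subgroups.min(axis=1))
d2 = 2.059                    # unbiasing constant for subgroup size n = 4
sigma_within = r_bar / d2

print(f"estimated within-subgroup sigma: {sigma_within:.4f}")
```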

Process data also may have random variation between subgroups. If you think strong between-subgroup variation exists in your process, use Minitab's Capability Analysis (Between/Within) or Capability Sixpack (Between/Within) options, which calculate both within- and between-subgroup standard deviations, then pool them to calculate the total standard deviation. Accounting for both sources of subgroup variation can give you a more complete estimate of your process's potential capability.

If you have attribute (count) data, you can perform capability analyses based on the binomial and Poisson probability models. For example, with Capability Analysis (Binomial) you can compare products against a standard and classify them as defective or not. Capability Analysis (Poisson) lets you classify products based on the number of defects.

Accessing Capability Analysis Tools

The full range of capability tools in Minitab are found in the Stat > Quality Tools > Capability Analysis menu, including:

  • Normal and Non-normal Capability Analysis
  • Between/Within Capability Analysis
  • Normal and Nonnormal Capability Analysis with Multiple Variables
  • Binomial Capability Analysis
  • Poisson Capability Analysis

You should also check out the Capability Sixpack™ for Normal, Nonnormal, or Between/Within capability analyses, which combines the following charts into a single display, with a subset of the capability statistics:

  • Chart to verify that the process is in control.
  • Capability histogram and probability plot to verify the data follow the specified distribution.
  • Capability plot that displays process variability compared to the specifications.
Take Guesswork Out of Capability Analysis with the Assistant

If capability analysis seems complicated, there's no denying that it can be. However, the Assistant in Minitab Statistical Software can take a lot of the labor and uncertainty out of doing capability analysis, especially if it's a method you're new to. I'll cover how to use the Assistant for capability analysis in detail in my next post.
