The Emergence and Role of Simple Quality Control Tools: The Seven Basic Tools of Product Quality Control

The purpose of the "Seven Basic Tools of Quality Control" method is to identify the problems that should be addressed first, based on monitoring the current process and on collecting, processing, and analyzing the facts obtained (the statistical material), for the subsequent improvement of process quality.

The essence of the method: quality control (comparing a planned quality indicator with its actual value) is one of the main functions of the quality management process, and the collection, processing, and analysis of facts is the most important stage of that process.

Of the many statistical methods, only seven have been selected for wide application: they are easy to understand and can be applied by specialists in various fields. They make it possible to identify and display problems in time, to establish the main factors with which corrective action should begin, and to distribute effort so that these problems are resolved effectively.

The expected result is the resolution of up to 95% of all problems that arise in production.

The seven basic quality control tools are a set of instruments that make it easier to control ongoing processes and that provide facts of various kinds for analyzing, adjusting, and improving the quality of those processes.

1. Checklist (check sheet) - a tool for collecting data and ordering them automatically, to make further use of the collected information easier.

2. Histogram - a tool for visually assessing the distribution of statistical data, grouped by the frequency with which the data fall into certain (preset) intervals.

3. Pareto chart - a tool that objectively presents and identifies the main factors influencing the problem under study, so that effort can be distributed to resolve it effectively.

4. Stratification method (data stratification) - a tool for dividing data into subgroups according to a certain attribute.

5. Scatter diagram (scatter plot) - a tool for determining the type and closeness of the relationship between pairs of related variables.

6. Ishikawa diagram (cause-and-effect diagram) - a tool for identifying the most significant factors (causes) affecting the final result (effect).

7. Control chart - a tool for tracking the course of a process and influencing it (through appropriate feedback), preventing its deviation from the requirements imposed on the process.

Checklists (data collection sheets) are special forms for collecting data. They simplify the collection process, improve its accuracy, and automatically suggest certain conclusions, which is very convenient for quick analysis. The results are easily converted into a histogram or a Pareto chart. Checklists can be used for both qualitative and quantitative control. The form of a checklist may differ depending on its purpose.


To find the right way to achieve a goal or solve a problem, the first step is to collect the information that will serve as the basis for further analysis. It is desirable that the collected data be presented in a structured, easy-to-process form. For this purpose, and also to reduce the likelihood of data-collection errors, a checklist is used.

Checklist - a form designed to collect data and organize them automatically, which makes further use of the collected information easier.

At its core, a checklist is a paper form on which the controlled parameters are pre-printed, so that the necessary and sufficient data can be entered with marks or simple symbols. In other words, a checklist is a means of recording data.
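The automatic ordering a checklist provides can be illustrated in a few lines of code. Below is a minimal sketch in Python; the defect categories and observations are hypothetical examples, not data from the source:

```python
from collections import Counter

# Hypothetical defect categories recorded during an inspection shift.
observations = ["scratch", "dent", "scratch", "misalignment",
                "scratch", "dent", "scratch"]

# Counter performs the "automatic ordering" a paper checklist provides.
tally = Counter(observations)
for defect, count in tally.most_common():
    print(f"{defect:14s} {'|' * count}  ({count})")
```

As noted above, such a tally converts directly into a histogram or a Pareto chart.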

The form of a checklist depends on the task at hand and can vary widely, but in any case it is recommended to indicate in it:

The topic or object of the study (usually indicated in the title of the checklist);

The period of data registration;

The data source;

The position and name of the employee recording the data;

The symbols used to record the data;

A data-recording table.

When preparing checklists, ensure that the simplest ways of filling them in are used (numbers, simple symbols), that the number of controlled parameters is as small as possible (but sufficient to analyze and solve the problem), and that the form of the sheet is clear and convenient to fill in even for unqualified personnel. The recommended sequence for organizing data collection with a checklist is as follows:

1. Formulate the purpose and objectives for which the information is collected.

2. Select the quality control methods by which the collected data will be further analyzed and processed.

3. Determine the time period during which the research will be conducted.

4. Develop measures (create conditions) for conscientious and timely entry of data into the checklist.

5. Designate who is responsible for data collection.

6. Develop the form of the checklist.

7. Prepare instructions for performing data collection.

8. Train and instruct workers in collecting data and entering it in the checklist.

9. Organize periodic reviews of data collection.

The most acute issue that arises when solving a problem is the reliability of the information collected by the staff. Finding a solution based on distorted data is difficult, if not impossible. Taking measures (creating conditions) that encourage employees to record true data is a necessary condition for achieving the goal.

Fig. Checklist examples

Electronic forms can also be used.

At the same time, the disadvantages of the electronic form of the checklist, compared with the paper one, include:

- greater difficulty of use;

- more time needed to enter the data.

Among the advantages:

- convenience of data processing and analysis;

- high speed of obtaining the necessary information;

- the possibility of simultaneous access to the information by many people.

However, most of the collected data still has to be duplicated on paper. The problem is that this reduces productivity: the time saved on analyzing, storing, and retrieving information is largely offset by the double work of recording data.

Histogram - a tool that visually depicts, and makes it easy to identify, the structure and nature of changes in the collected data (i.e., to estimate their distribution), which are difficult to notice in a tabular presentation.

After analyzing the shape of the obtained histogram and its location relative to the tolerance interval, one can draw a conclusion about the quality of the product under consideration or the state of the process under study. Based on the conclusion, measures are developed to eliminate deviations in product quality or the state of the process from the norm.

Depending on how the initial data are presented (collected), there are two ways of constructing a histogram:

Option I. Checklists for the product or process indicator are developed to collect the statistical data. When designing the checklist form, the number and size of the intervals into which the data will be grouped (and from which the histogram will then be built) must be fixed in advance. This is necessary because, once the checklist has been filled in, it is practically impossible to recalculate the indicator values for different intervals. The most that can be done is to ignore intervals into which no value falls and to merge intervals two or three at a time, without fear of distorting the data. With such restrictions it is, for example, almost impossible to turn 11 intervals into 7.

Construction technique:

1. Determine the number and width of intervals for the control sheet.

The exact number and width of the intervals should be chosen for convenience of use or according to statistical rules. If tolerances exist for the measured indicator, aim for 6-12 intervals within the tolerance and 2-3 intervals outside it. If there are no tolerances, estimate the possible spread of the indicator values and divide that range into 6-12 intervals. In either case, the width of the intervals must be the same.

2. Develop checklists and use them to collect the necessary data.

3. Using the completed checklists, count the frequency (i.e., the number of times) with which the obtained indicator values fall into each interval.

Usually, a separate column is allocated for this, located at the end of the data registration table.

If an indicator value falls exactly on the boundary of an interval, add one half to the count of each of the two intervals that share that boundary.
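A minimal sketch of this counting rule, assuming the interval boundaries are already fixed (the function name and the sample data are illustrative, not from the source):

```python
def interval_frequencies(values, edges):
    """Count values per interval; a value that falls exactly on an
    interior boundary adds 0.5 to each of the two adjacent intervals."""
    n = len(edges) - 1
    freq = [0.0] * n
    for x in values:
        if x == edges[0]:          # exact lower edge of the first interval
            freq[0] += 1.0
            continue
        if x == edges[-1]:         # exact upper edge of the last interval
            freq[-1] += 1.0
            continue
        for j in range(n):
            if edges[j] < x < edges[j + 1]:
                freq[j] += 1.0
                break
            if x == edges[j + 1]:  # interior boundary: split the count
                freq[j] += 0.5
                freq[j + 1] += 0.5
                break
    return freq

# Hypothetical values and boundaries: 3.0 sits on an interior boundary.
print(interval_frequencies([2.0, 2.5, 3.0, 3.2], [2.0, 3.0, 4.0]))
# -> [2.5, 1.5]
```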

4. To build the histogram, use only those intervals that contain at least one indicator value.

If empty intervals lie between intervals into which indicator values fall, those empty intervals must also be plotted on the histogram.

5. Calculate the average of the observation results.

On the histogram, it is necessary to plot the arithmetic mean of the obtained sample.

The standard formula used for the calculation:

$\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$

where $x_i$ are the obtained values of the indicator, and $N$ is the total number of data points in the sample.

How to apply this formula when the exact indicator values $x_1$, $x_2$, etc. are unavailable is not explained anywhere. In our case, for an approximate estimate of the arithmetic mean, I can suggest the following approach of my own:

a) determine the midpoint of each interval using the formula:

$x_j = \frac{x_j^{\max} + x_j^{\min}}{2}$

where $j$ indexes the intervals selected for constructing the histogram, $x_j^{\max}$ is the value of the upper boundary of interval $j$, and $x_j^{\min}$ is the value of its lower boundary.

b) determine the arithmetic mean of the sample using the formula:

$\bar{x} \approx \frac{1}{N}\sum_{j=1}^{n} v_j x_j, \qquad N = \sum_{j=1}^{n} v_j$

where $n$ is the number of intervals selected for building the histogram and $v_j$ is the frequency of sample results falling into interval $j$.
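A minimal sketch combining formulas a) and b); the helper name is hypothetical:

```python
def approx_mean(edges, freq):
    """Approximate the sample mean from interval frequencies:
    each interval is represented by its midpoint (formula a),
    weighted by its frequency (formula b)."""
    midpoints = [(edges[j] + edges[j + 1]) / 2 for j in range(len(freq))]
    n_total = sum(freq)
    return sum(m * v for m, v in zip(midpoints, freq)) / n_total

# Frequencies taken from the previous sketch:
print(approx_mean([2.0, 3.0, 4.0], [2.5, 1.5]))   # -> 2.875
```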

6. Construct the horizontal and vertical axes.

7. Draw the boundaries of the selected intervals on the horizontal axis.

If it is planned later to compare histograms describing similar factors or characteristics, then, when marking the scale on the abscissa axis, be guided by data units rather than by intervals.

8. Scale the values on the vertical axis according to the selected scale and range.

9. For each selected interval, draw a bar whose width equals the interval and whose height equals the frequency with which the observations fall into that interval (the frequencies were calculated earlier).

Draw a line on the graph corresponding to the arithmetic mean of the indicator under study. If there is a tolerance field, also draw lines corresponding to the boundaries and the center of the tolerance interval.
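Steps 6-9, together with the mean line and the tolerance lines, can be sketched with a plotting library. The following assumes matplotlib is available and that the interval boundaries, frequencies, and mean have already been computed; the function name is illustrative:

```python
import matplotlib.pyplot as plt

def plot_histogram(edges, freq, mean, lsl=None, usl=None):
    """Steps 6-9: bars over the interval boundaries, plus the mean
    line and, when a tolerance is given, its limits and center."""
    widths = [edges[j + 1] - edges[j] for j in range(len(freq))]
    plt.bar(edges[:-1], freq, width=widths, align="edge", edgecolor="black")
    plt.axvline(mean, linestyle="--", label="mean")
    if lsl is not None and usl is not None:
        plt.axvline(lsl, color="red", label="tolerance limits")
        plt.axvline(usl, color="red")
        plt.axvline((lsl + usl) / 2, color="green", label="tolerance center")
    plt.xlabel("indicator value")
    plt.ylabel("frequency")
    plt.legend()
    plt.show()
```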

Option II. The statistical data have already been collected (for example, recorded in log books) or will be collected as precisely measured values. In this case we are not limited by any initial conditions, so the number and width of the intervals can be chosen, and changed at any time, according to current needs.

Construction technique:

1. Bring the received data into one document in a form convenient for further processing (for example, in the form of a table).

2. Calculate the range of the indicator values (the sample range) using the formula:

$R = x_{\max} - x_{\min}$

where $x_{\max}$ is the largest value obtained and $x_{\min}$ is the smallest value obtained.

3. Determine the number of histogram bins.

To do this, you can use a table calculated from the Sturges formula:

$n = 1 + 3.322\,\lg N$

You can also use a table based on an alternative empirical rule (for example, $n \approx \sqrt{N}$).

4. Determine the width (size) of the intervals using the formula:

$H = \frac{R}{n}$

5. Round the result up to a convenient value.

Note that the entire sample must be divided into intervals of the same size.

6. Define the boundaries of the intervals. First set the lower boundary of the first interval so that it is less than $x_{\min}$. Add the interval width $H$ to it to obtain the boundary between the first and second intervals. Then keep adding the interval width $H$ to the previous value to obtain the second boundary, the third, and so on.

After these steps, make sure that the upper boundary of the last interval is greater than $x_{\max}$.

7. For the chosen intervals, count the frequency with which the values of the studied indicator fall into each interval.

If an indicator value falls exactly on the boundary of an interval, add one half to the count of each of the two intervals that share that boundary.
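Steps 2 through 6 of Option II condense into a short sketch; the rounding of the width in step 5 is left to judgment, the function name is illustrative, and step 7 can reuse the interval_frequencies sketch from Option I:

```python
import math

def build_intervals(values):
    """Steps 2-6 of Option II: sample range, Sturges' formula for the
    number of intervals, equal width, and the interval boundaries."""
    x_min, x_max = min(values), max(values)
    sample_range = x_max - x_min                    # step 2
    n = 1 + round(3.322 * math.log10(len(values)))  # step 3 (Sturges)
    width = sample_range / n                        # step 4
    # Step 5 (rounding the width to a convenient value) is left to judgment.
    lower = x_min - width / 2                       # step 6: start below x_min
    edges = [lower + j * width for j in range(n + 1)]
    while edges[-1] <= x_max:                       # ensure last edge > x_max
        edges.append(edges[-1] + width)
    return edges

# Hypothetical measurements:
data = [9.8, 10.1, 10.0, 10.4, 9.9, 10.2, 10.3, 9.7, 10.0, 10.1]
print(build_intervals(data))
```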

8. Calculate the average value of the studied indicator using the formula:

$\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$

(here the exact values $x_i$ are available, so the standard formula applies directly).

Then follow the histogram-plotting procedure of Option I, starting from step 6 (the mean has already been calculated in step 8).

Histogram analysis likewise falls into two cases, depending on whether a technological tolerance is available.

Option I. No tolerances are set for the indicator. In this case, we analyze the shape of the histogram:

The usual (symmetrical, bell-shaped) shape. The average value of the histogram corresponds to the middle of the data range. The maximum frequency also falls in the middle and gradually decreases towards both ends. The shape is symmetrical.

This form of the histogram is the most common. It indicates the stability of the process.

Negatively skewed (positively skewed). The average value of the histogram is located to the right (to the left) of the middle of the data range. Frequencies decrease sharply when moving from the center of the histogram to the right (left) and slowly to the left (right). The shape is asymmetric.

This shape arises either when the upper (lower) limit is constrained theoretically or by a tolerance value, or when the right-hand (left-hand) value cannot be exceeded.

Distribution with a break on the right (distribution with a break on the left). The average value of the histogram is located far to the right (to the left) of the middle of the data range. Frequencies decrease very sharply when moving from the center of the histogram to the right (left) and slowly to the left (right). The shape is asymmetric.

This shape is often found where 100% inspection of products is carried out because of poor process capability.

Comb (multimodal) type. Every second or third interval has a lower (or higher) frequency.

This shape arises either when the number of unit observations falling into an interval fluctuates from interval to interval or when a particular data-rounding rule is applied.

A histogram that does not have a high central part (plateau). The frequencies in the middle of the histogram are approximately the same (for a plateau, all frequencies are approximately equal).

This form occurs when several distributions are combined with means close to each other. For further analysis, it is recommended to apply the stratification method.

Two-peak type (bimodal type). In the vicinity of the middle of the histogram, the frequency is low, but there is a frequency peak on each side.

This form occurs when two distributions with mean values ​​that are far apart are combined. For further analysis, it is recommended to apply the stratification method.

Histogram with a dip (with a "pulled out tooth"). The shape of the histogram is close to the distribution of the usual type, but there is an interval with a frequency lower than in both adjacent intervals.

This form occurs if the interval width is not a multiple of the unit of measurement, if the scale readings are incorrectly read, etc.

Distribution with an isolated peak. Together with the usual shape of the histogram, a small isolated peak appears.

This form is formed when a small amount of data from another distribution is included, for example, if process control is impaired, measurement errors occur, or data from another process is included.

Option II. A technological tolerance exists for the studied indicator. In this case, both the shape of the histogram and its location relative to the tolerance field are analyzed. The possible cases:

The histogram looks like a regular distribution. The average value of the histogram coincides with the center of the tolerance field. The width of the histogram is less than the width of the tolerance field with a margin.

In this situation, the process does not need to be adjusted.

The histogram looks like a regular distribution. The average value of the histogram coincides with the center of the tolerance field. The width of the histogram is equal to the width of the tolerance interval, which raises fears that nonconforming parts may appear at both the upper and the lower tolerance limits.

In this case, it is necessary either to consider changing the technological process so as to reduce the width of the histogram (for example, by increasing equipment accuracy, using better materials, or changing the processing conditions), or to widen the tolerance field, because the requirements for part quality are otherwise difficult to meet.

The histogram looks like a regular distribution. The average value of the histogram coincides with the center of the tolerance field. The width of the histogram is greater than the width of the tolerance interval, so nonconforming parts are found beyond both the upper and the lower tolerance limits.

In this case, the measures described in case 2 must be implemented.

The histogram looks like a regular distribution. The width of the histogram is less than the width of the tolerance field, with a margin. The average value of the histogram is shifted to the left (right) relative to the center of the tolerance interval, so there are fears that nonconforming parts may lie near the lower (upper) boundary of the tolerance field.

In this situation, it is necessary to check whether the applied measuring instruments introduce a systematic error. If the measuring instruments are in good order, the process should be adjusted so that the center of the histogram coincides with the center of the tolerance field.

The histogram looks like a regular distribution. The width of the histogram is approximately equal to the width of the tolerance field. The average value of the histogram is shifted to the left (right) relative to the center of the tolerance interval, and one or more intervals extend beyond the tolerance field, indicating the presence of defective parts.

In this case, first adjust the technological operations so that the center of the histogram coincides with the center of the tolerance field, and then take action to reduce the range of the histogram or to widen the tolerance interval.
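The comparisons in cases 1 through 5 can be given rough numerical form. The following is a minimal sketch with an assumed helper name (not part of the classical method): it reports the share of values outside the tolerance, the offset of the mean from the tolerance center, and the ratio of the data spread to the tolerance width:

```python
def tolerance_summary(values, lsl, usl):
    """Share of values outside the tolerance field, offset of the mean
    from the tolerance center, and data spread vs. tolerance width."""
    n = len(values)
    outside = sum(1 for x in values if x < lsl or x > usl)
    mean = sum(values) / n
    return {
        "fraction_out": outside / n,
        "mean_offset": mean - (lsl + usl) / 2,   # > 0: shifted upward
        "span_ratio": (max(values) - min(values)) / (usl - lsl),
    }

# Hypothetical data against a tolerance field of 9.8 ... 10.6:
print(tolerance_summary([9.8, 10.1, 10.0, 10.4, 9.9, 10.7], 9.8, 10.6))
```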

The center of the histogram is shifted to the upper (lower) tolerance limit, and the right (left) side of the histogram near the upper (lower) tolerance limit has a sharp break.

In this case, it can be concluded that products whose indicator values lay outside the tolerance were removed from the batch or were deliberately passed off as conforming. It is therefore necessary to identify the cause of this phenomenon.

The center of the histogram is shifted to the upper (lower) tolerance limit, and the right (left) side of the histogram near the upper (lower) tolerance limit has a sharp break. In addition, one or more intervals are out of tolerance.

The case is similar to case 6, but here the histogram intervals that extend beyond the tolerance field indicate that the measuring tool may have been faulty. The measuring instruments should therefore be verified, and the employees re-instructed in the rules for performing measurements.

The histogram has two peaks, although the indicator values were measured on products from a single batch.

In this case, it can be concluded that the products were obtained under different conditions (for example, materials of different grades were used, equipment settings changed, products were produced on different machines, etc.). In this regard, it is recommended to apply the stratification method for further analysis.

The main characteristics of the histogram are in order (corresponding to case 1), yet there are defective products whose indicator values fall outside the tolerance field, forming a separate "island" (an isolated peak).

This situation could have arisen as a result of negligence, in which defective parts were mixed with good ones. In this case, it is necessary to identify the causes and circumstances leading to the occurrence of this situation, as well as take measures to eliminate them.


Polkhovskaya T., Adler Yu., Shper V.

In the modern world, the problem of product quality is extremely important. The well-being of any company, any supplier largely depends on its successful solution. Higher quality products significantly increase the supplier's chances to compete for markets and, most importantly, better meet the needs of consumers. Product quality is the most important indicator of the company's competitiveness.

The quality of products is established during scientific research, design, and technological development; it is ensured by good organization of production; and, finally, it is maintained during operation or consumption. At all these stages, it is important to carry out timely control and obtain a reliable assessment of product quality.

To reduce costs and achieve a quality level that satisfies the consumer, methods are needed that are not aimed at eliminating defects (inconsistencies) in the finished product, but at preventing the causes of their occurrence in the production process.

What are the reasons for the appearance of various defects in products and what are the possibilities to reduce their number?

Many people believe that defective products are inevitable because products must meet stringent quality standards and the factors that lead to defects are numerous. However, despite differences in product types and types of technological processes, the causes of defective products are universal. In part, defects are caused by the physical and chemical processes of creating products themselves, and in part they are associated with the variability (variability) of materials, processes, working methods, control methods, etc. If there were no variability, then all products would be identical, i.e. their quality would be exactly the same for all of them.

What will happen, for example, if you make products from materials of the same quality on the same machines, using the same methods and check these products in exactly the same way? No matter how many items are produced, they must all be identical as long as the four conditions mentioned are identical, i.e. either all products will meet the requirements, or they will not meet them. All products will be found to be defective if materials, machine tools, manufacturing or inspection methods deviate from specified requirements. In this case, the appearance of identical defective products is inevitable. If there are no deviations in the listed four production conditions, then all products must be "identical" - defect-free.

But it is almost impossible for all products to be defective. Of the total output, only some will be defective, while the rest will be defect-free.

Consider, for example, the process of bending steel sheets. At first glance, all the sheets seem to have the same thickness, but accurate measurement shows that the thickness differs from sheet to sheet, and even between different parts of the same sheet. If we examine the crystal structure of different parts of a sheet, slight variations appear in the form of the crystals made up of iron, carbon, and other atoms. These differences naturally affect the quality indicators. Even if the same bending method is used, the sheets will not bend in the same way, and some may crack.

Another example is metal machining. As the number of machined parts increases, the cutter becomes blunt. The consistency of the cutting fluid also changes with temperature. As a result, the dimensions of the products depend on whether the cutter is sharpened and whether it is installed correctly. Although it may seem that both operations are performed under the same conditions, in fact there are many changes or variations that go unnoticed, but they affect the quality of the product.

Consider another example: heat treatment. The temperature in the furnace constantly changes with the voltage (if the process runs in an electric furnace) or the gas pressure (if a gas furnace is used). Within the furnace itself, the areas at the damper, near the hearth, at the arch, at the side walls, and in the central part are all in different conditions. When products are placed in the furnace, the amount of heat they receive varies with their position, which affects a quality characteristic such as the hardness of the product.

The physical abilities and skill of the workers also affect product quality. There are tall and short, thin and stout, weak and strong people, left-handed people and right-handed people. Workers may think they all work the same way, but there are individual differences. Even the same person works differently depending on how he feels on a given day and on his state and degree of fatigue. Sometimes he makes mistakes through inattention.

Errors can also be made by inspectors when measuring product parameters. Measurement variation may result from a faulty measuring tool or an imperfect measurement method. Thus, in the case of organoleptic (visual) inspection, changes in the criteria the inspector applies can lead to an erroneous assessment of product quality and affect the objectivity of decisions about product conformity.

Looking at the problem in this way, it can be seen that many factors affect the quality indicators of a product during its manufacture. Evaluated from the standpoint of quality variation, the production process can be regarded as a certain set of causes of variability. These causes explain the changes in the quality indicators of products that lead to their division into defective and defect-free. A product is considered defect-free if its quality indicators meet a certain standard; otherwise the product is classified as defective. Moreover, even defective products differ from one another when compared with the standard, i.e., there are no "absolutely identical" products. One of the reasons for the release of defective products, as already mentioned, is variability. If variability is reduced, the number of defective products will undoubtedly decrease. This is a simple and sound principle, equally valid regardless of the type of product or technological process.

The methods of control that existed for a long time were reduced, as a rule, to analyzing defects through 100% inspection of manufactured products. In mass production such control is very expensive: calculations show that, to ensure product quality by screening, an enterprise's inspection staff would have to be five to six times the number of production workers.

On the other hand, 100% inspection in mass production does not guarantee the absence of defective items among the accepted products. Experience shows that inspectors quickly tire, so some good products are mistaken for defective and vice versa. Practice also shows that wherever 100% inspection is relied upon, losses from defects rise sharply.

These reasons confronted production with the need to move to sampling inspection. The spread of sampling inspection was facilitated by research in probability theory and mathematical statistics showing that, in most cases, a reliable quality assessment does not require checking every manufactured item. These studies (above all by the American statisticians Dodge, Romig, and Shewhart) made it possible to put technical inspection on a new scientific and methodological basis. However, the transition to sampling inspection is effective only when technological processes, operating in a stable state, have an accuracy and stability that automatically guarantees manufacture with a minimal number of defects.

Why should sampling be statistical? Let's consider two typical examples.

Today, in-process monitoring of the state of the technological process is often carried out as follows. At random times, one unit of product is taken from current production, and the state of the process is judged from it: if the unit proves conforming, the process is considered to be in order; otherwise a decision is made to suspend production and adjust the process.

What is the effectiveness of such actions? This monitoring procedure follows the traditional logic: if the process is in order, there are no defects; if the process is out of order, all manufactured products will be defective.

In production, however, other patterns operate, called stochastic or random. When the process goes out of order, the share of defects produced increases only slightly: to 1%, 2%, or 10%, and only extremely rarely to 100%, depending on the specific technology and the specific cause of the disorder. Imagine that, as a result of a process disorder, the share of defects produced has risen to 5%. On average, one in twenty manufactured units will then be defective. What is the probability of drawing precisely that one defective unit in twenty and thus making the right decision? The probability of detecting the process violation equals the probability of drawing a defective unit from the disordered process: in our case, 5%.

The current practice of monitoring the state of the technological process therefore cannot, in principle, solve the problem of defect prevention. Nor does it help to select two or three units for verification instead of one. With statistical quality control, the same inspection results, processed by the methods of mathematical statistics, make it possible to assess the true state of the process with a high degree of reliability. Statistical methods can reasonably detect a process disorder even when the two or three units selected for control all prove conforming, because these methods are highly sensitive to changes in the state of technological processes.
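The arithmetic behind this sensitivity argument is simple: if each sampled unit is independently defective with probability p, the chance that a sample of n units contains at least one defective unit is 1 - (1 - p)^n. A small sketch using the 5% example from above:

```python
def detection_probability(defect_share, sample_size):
    """Probability that at least one sampled unit is defective,
    assuming units are independently defective with the given share."""
    return 1 - (1 - defect_share) ** sample_size

for n in (1, 2, 3):
    print(n, round(detection_probability(0.05, n), 3))
# -> 1 0.05 / 2 0.098 / 3 0.143
```

Even with three units sampled, the chance of catching the disorder is only about 14%, which is why raw spot checks remain insensitive and statistical processing of the results is needed.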

Over years of hard work, specialists have extracted from world experience, bit by bit, techniques and approaches that can be understood and used effectively without special training, and this has been done in such a way as to ensure real gains in solving the vast majority of the problems that arise in actual production.

As a result, a system of practical methods designed for mass application was developed. These are the so-called seven simple methods:

1) Pareto chart;

2) Ishikawa scheme;

3) stratification (layering);

4) checklists;

5) histograms;

6) graphs (on the plane);

7) control charts (Shewhart).

Sometimes these methods are listed in a different order; this is unimportant, since they are meant to be treated both as individual tools and as a system of methods in which, in each specific case, the composition and structure of the working set of tools is determined specifically.

Statistical methods of quality management are a philosophy, policy, system, methodology, as well as technical means of quality management based on the results of measurements, analysis, testing, control, operation data, expert assessments and any other information that allows you to make reliable, reasonable, evidence-based decisions.

The use of statistical methods is a very effective way to develop new technology and control the quality of production processes. Many leading firms seek to actively use them, and some of them spend more than a hundred hours annually on in-house training in these methods. Although knowledge of statistical methods is part of the normal education of an engineer, knowledge itself does not mean the ability to apply it. The ability to consider events in terms of statistics is more important than knowledge of the methods themselves. In addition, one must be able to honestly recognize shortcomings and changes that have occurred and collect objective information.

The Union of Japanese Scientists and Engineers identified seven main tools for operational quality management (Fig. 2.38):

  • 1) affinity diagram;
  • 2) link diagram (interrelationship diagram);
  • 3) tree diagram;
  • 4) matrix diagram, or quality table;
  • 5) arrow diagram;
  • 6) process decision program chart (PDPC);
  • 7) priority matrix (matrix data analysis).

Fig. 2.38.

Sometimes these seven instruments are called the new quality management tools, N7. They are used in the operational quality management of a project and are general in nature. At the strategic level, seven strategic quality management methods, S7, can be distinguished. These include:

  • 1) assessment of the attractiveness of the business;
  • 2) benchmarking;
  • 3) market segmentation analysis;
  • 4) evaluation of the market position;
  • 5) project portfolio management;
  • 6) strategic analysis of development factors;
  • 7) resource optimization.

From a strategic point of view, TQM becomes a concept of enterprise management that determines both the current efficiency of the business and the prospects for its development.

The scheme for the joint use of the quality tools is shown in Fig. 2.39. At the stage of preliminary analysis and problem definition, management tools such as the affinity diagram and the link diagram are used; at the stage of deploying the means, the matrix diagram and the tree diagram; at the stage of systematizing the means, the arrow diagram and the process decision program chart. The final step is the priority matrix, which identifies the priority market segments receptive to the improved product. The scheme also shows where the existing quality control tools can be connected as needed and where, in complex situations, additional tools such as multivariate analysis may be required.

Let us consider a practical solution to the problem of increasing the warranty period of a product (a table mill), for which a "house of quality" was built (see Fig. 2.16).

Affinity diagram - a means of structuring a large amount of diverse data on the problem under consideration according to the affinity of the various data; it illustrates associative rather than logical connections. This tool is sometimes referred to as the KJ method.

Fig. 2.39. Joint use of the quality tools

The method originated in the early work of the Japanese scientist Jiro Kawakita in the 1950s. The techniques he then developed for collecting and analyzing data grew into the problem-solving approach designated by his initials. In 1967, Kawakita described his method and developed a system for teaching it. The information often arrives as linguistic data from many sources: customer responses to surveys, transcribed records of customer visits, or results synthesized from several KJ charts. Any of these can yield tens or hundreds of statements. The multi-pickup method (MPM) is a methodology for sifting these statements down to a manageable number; Kawakita created it alongside the KJ method. Like the KJ method, MPM works with facts or ideas. There are two principles of data reduction: 1) strengthening strengths and 2) eliminating weaknesses. MPM follows the first principle: focus on the importance of the data relevant to the topic. The idea of MPM has some similarities with the theory of W. McGregor.

It is preferable to create an affinity diagram as a group effort. Experience shows that a group of 6-8 people with prior experience of working together is best for this purpose. The procedure can be organized as follows. First, the subject (topic) that will be the basis for data collection is determined. The MPM method is implemented in several stages:

  • preparation, which includes a warm-up and a discussion of the topic;
  • collection of data on the topic by brainstorming. Team members mark the statements that look like likely candidates for the final review; each member marks everything that seems important to him. Unmarked statements are not discussed. Selection proceeds in several passes, with the set gradually narrowing. By repeatedly reviewing and marking the list of statements, team members reach agreement on the most important ones without wasting time on discussion;
  • focused selection: 20 to 30% of the remaining material is discarded in the final selection. Each participant has a limited number of marks for the final conclusions. By this time everyone has reviewed the remaining statements several times and is ready to concentrate on the most important ones.

When building an MPM diagram and affinity diagrams, quality control tools such as control charts, scatter diagrams, stratification, and Pareto charts are widely used. Fig. 2.40 shows an MPM diagram for the problem under consideration: increasing the service life (durability) of the product.

Analysis of the initial data showed consumer dissatisfaction with the guaranteed service life of the product (one year). This is due to the frequent repairs needed to restore the product's performance. At the same time, the functionality of well-made samples largely determines the market attractiveness of the product; this is confirmed by products that work for a long time without additional repairs. The essence of the problem can thus be made specific: increase the warranty period to four years in order to ensure stable durability and thereby customer satisfaction.

Fig. 2.40. MPM diagram

To construct an affinity diagram, related data are grouped into directions at different levels. The work is considered complete when all the data have been put in order, i.e., collected into preliminary groups of related data, and most disagreements have been resolved. The remaining questions are usually removed during discussion.

First, one should try to determine the direction of each group of data in terms of the affinity of the data within the group. This can be done in different ways: choose one card from the group and set it at the head, or formulate a new direction. The procedure can be repeated to summarize the leading directions and thus create a hierarchy. The analysis ends when the data have been grouped under leading directions. Fig. 2.41 shows an affinity diagram for the problem of increasing the warranty period of the product. It can be seen that the data are grouped into three areas, each of which contains substantive statements on the problem; a tool such as a stratification diagram is used for this grouping.

Fig. 2.41. Affinity diagram

Link diagram - a tool for identifying the logical connections between a main idea, a problem, or various data. The purpose of this management tool is to match the root causes of process failure, identified with the affinity diagram, to the problems that need to be solved (which explains a certain similarity between the link diagram and the cause-and-effect Ishikawa diagram). The classification of these causes by importance is carried out taking into account the resources used in the company, as well as numerical data characterizing the causes.

The data used here can, for example, be generated by applying an affinity diagram. The link diagram is primarily a logical tool, as opposed to the affinity diagram, which is a creative one. A link diagram can be useful in the following situations:

  • when the subject (topic) is so complex that the connections between different ideas cannot be established by ordinary discussion;
  • the time sequence according to which the steps are taken is decisive;
  • there is a suspicion that the problem raised in the question is merely a symptom of a more fundamental, untouched problem. As with the affinity diagram, the link diagram should be built by an appropriate team, and the subject under study (the result) must be defined first. Root causes can be determined from an affinity diagram or with a quality control tool such as the Ishikawa diagram.

Fig. 2.42 shows the link diagram for solving the problem of increasing the product's warranty period to four years. The main tasks of the problem are determined, the persons responsible for their implementation are designated, and the logical connections between them are established. Underestimating the market importance of the problem had held back its solution. Awareness of the enterprise's dependence on market conditions may reveal future threats; to reduce this risk, the production model is being revised. An important factor is also the need for an updated technological base, which is connected not only with the acquisition of modern equipment but also with its effective use. The personnel must have the necessary competencies for this. To guarantee quality, metrological support is required, which will resolve the issues of measurement accuracy.

Fig. 2.42.

Tree diagram (systematic diagram) - a tool that provides a systematic way of solving the problem under consideration (here, increasing consumer satisfaction) by representing it at various levels.

Unlike the affinity diagram and the link diagram, this tool is more targeted. A tree diagram is constructed as a multi-stage structure, the elements of which are various means and methods for solving the problem. The principle of constructing a tree diagram is shown in fig. 2.43.

The tree diagram can be used in the following cases:

Fig. 2.43.

  • when vaguely formed consumer wishes for a product are translated into consumer wishes at a manageable level;
  • it is necessary to investigate all possible aspects of the problem;
  • short-term goals must be achieved before the completion of all work, i.e. at the design stage.

Matrix diagram - a tool for establishing the importance of various connections; it is the central link of the seven management tools and of the house of quality.

A matrix diagram allows you to structure a large amount of data so that the logical relationships between various elements receive a graphical display. Its purpose is to outline the links and correlations between tasks, functions, and characteristics, highlighting their relative importance. In its final form, the matrix diagram shows the correspondence of certain factors and phenomena to the various causes of their occurrence and the means of eliminating their consequences, as well as the degree to which these factors depend on the causes of their occurrence and on the measures to eliminate them. Matrix diagrams, also called connection matrices, are shown in Fig. 2.44. In the case under consideration, they determine the presence and closeness of the links between the factors: T - technical factors, P - market factors, K - competence factors. The connections between them are shown using special symbols:

● - strong connection (scored as 9 points);

○ - average connection (scored as 3 points);

△ - weak connection (scored as 1 point).
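A minimal sketch of how such a connection matrix can be scored programmatically; the factor names and the symbol assignments are invented for illustration:

```python
# 9/3/1 weights for strong / average / weak connections.
WEIGHTS = {"strong": 9, "average": 3, "weak": 1, None: 0}

# Hypothetical links between technical factors (T) and market factors (P).
matrix = {
    ("T1", "P1"): "strong",
    ("T1", "P2"): "weak",
    ("T2", "P1"): "average",
    ("T2", "P2"): None,
}

row_scores = {}
for (t, _p), link in matrix.items():
    row_scores[t] = row_scores.get(t, 0) + WEIGHTS[link]
print(row_scores)   # -> {'T1': 10, 'T2': 3}: T1 matters most to the market
```

Summing the weights along a row gives the relative importance of each factor, which is how the priority of the links can be established.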

We determine the importance of the relationships among the main factors in solving the problem of increasing the warranty period of the manufactured product (the table mill). The matrix diagram consists of three linear (simple) diagrams formed by combinations of factors: diagram a) combines T and P; diagram b) combines T and K; diagram c) combines K and P.

We now formulate the main factors influencing the process of improving the product in order to increase its warranty period:

1. Technical factors, specified as a set T = {…}
