Metrics for evaluating predictive performance of quality control

Persian article title: Quality control optimization part I: Metrics for evaluating predictive performance of quality control
English article title: Quality control optimization part I: Metrics for evaluating predictive performance of quality control
Journal/Conference: Clinica Chimica Acta
Related fields of study: Industrial Engineering
Related specializations: Systems Optimization; Systems Planning and Analysis
Persian keywords: Quality control, Statistics, Error, Positive predictive value, Negative predictive value
English keywords: Quality control, Statistics, Error, Positive predictive value, Negative predictive value
Article type: Research Article
Indexed in: Scopus, Master Journals List, MEDLINE, JCR
Digital Object Identifier (DOI): https://doi.org/10.1016/j.cca.2019.04.053
Affiliation: The Department of Pathology, University of Utah, Salt Lake City, UT, United States of America
Pages (English PDF): 11
Publisher: Elsevier
Presentation type: Journal
Paper type: ISI
Publication year: 2019
Impact factor: 2.762 (2018)
H-index: 127 (2019)
SJR: 1.027 (2018)
ISSN: 0009-8981
Quartile: Q1 (2018)
English file format: PDF
Translation status: Not translated
Price of the English article: Free
Base article (for thesis work): No
Conceptual model: None
Questionnaire: None
Variables: Yes
Product code: E12646
References: Cited in the text and listed at the end of the article
Table of contents (English)

Abstract

1- Introduction

2- Theoretical development

3- Methods

4- Results

5- Example calculations

6- Discussion

References

Excerpt from the article (English)

Abstract

Background: Quality control (QC) policies are usually designed using power curves. This type of analysis reasons from a cause (a shift in the assay results) to an effect (a signal from the QC monitoring process). End users face a different problem: they must reason from an effect (QC signal) to a cause. It would be helpful to have metrics that evaluate QC policies from an end-user perspective.

Methods: We developed a simple dichotomous model based on classification of assay errors. Errors are classified as important or unimportant based on a critical shift size, defined as Sc. Using this scheme, we show how QC policies can be analyzed using common accuracy metrics such as sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). We explore the impact of design choices (QC limits, number of repeats) on these performance measures in a number of different contexts.
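
To make the dichotomous model concrete, here is a minimal Python sketch; the function name and the event tallies are hypothetical, not taken from the paper. Each QC event is classified by whether the true shift exceeds the critical size Sc and whether the QC rule signaled, and the four accuracy metrics follow from the resulting 2x2 table.

```python
# Dichotomous model: cross-classify QC events by error importance
# (true shift >= Sc or not) and QC outcome (signal or no signal),
# then compute the standard accuracy metrics from the 2x2 table.

def accuracy_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, and NPV for a 2x2 classification table."""
    sensitivity = tp / (tp + fn)  # P(signal | important error)
    specificity = tn / (tn + fp)  # P(no signal | no important error)
    ppv = tp / (tp + fp)          # P(important error | signal)
    npv = tn / (tn + fn)          # P(no important error | no signal)
    return sensitivity, specificity, ppv, npv

# Hypothetical tallies (e.g., from simulating QC events):
# tp = signal with shift >= Sc, fp = signal with shift < Sc, and so on.
sens, spec, ppv, npv = accuracy_metrics(tp=45, fp=30, tn=900, fn=25)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, PPV={ppv:.2f}, NPV={npv:.2f}")
```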

Results: PPV varies widely (1% to 100%) depending on context. NPV also varies (40% to 100%) but is less sensitive to context than PPV. There are many contexts in which QC policies have low predictive values. In such cases, performance (PPV, NPV) can be improved by adjusting the QC limits or the number of repeats at each QC event.
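
The wide range in PPV can be reproduced with Bayes' rule: hold a QC rule's per-event sensitivity and specificity fixed and vary the prevalence of important errors across contexts. A minimal sketch, assuming illustrative values (sensitivity 0.90, specificity 0.95) rather than figures from the paper:

```python
# Context dependence of predictive values: for a fixed QC rule, PPV and
# NPV are functions of the prevalence of important errors (Bayes' rule).

def predictive_values(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence              # signal, important error
    fp = (1 - specificity) * (1 - prevalence)  # signal, no important error
    fn = (1 - sensitivity) * prevalence        # no signal, important error
    tn = specificity * (1 - prevalence)        # no signal, no important error
    return tp / (tp + fp), tn / (tn + fn)

for prevalence in (0.001, 0.01, 0.1, 0.5):
    ppv, npv = predictive_values(0.90, 0.95, prevalence)
    print(f"prevalence={prevalence:<5}: PPV={ppv:.3f}, NPV={npv:.3f}")
```

With these values, PPV climbs from about 2% when important errors are rare to about 95% when they are common, while NPV stays high throughout, mirroring the pattern the authors report.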

Conclusion: The effectiveness of QC can be improved by considering the context in which the QC policy will be applied. Using simple assumptions, common accuracy metrics can be used to evaluate QC policy performance.

Introduction

Laboratories are under increasing pressure to improve performance. Quality control (QC) ensures the reliability of results and is therefore a key component of laboratory performance. Laboratories direct considerable resources to QC and assay improvement and, given the importance of QC, it would be useful to have metrics to evaluate the performance of a QC plan. Unfortunately, few metrics are available.

The performance of a QC plan is generally analyzed in terms of the number of events before false rejection and the number of events before error detection [1]. These quantities are also known as the average run length (ARL) and time to signal (TTS) [2]. Run lengths are determined by the statistical power of a QC plan. Statistical power is the probability that a QC plan will produce a signal (i.e., a rule violation) when a change in the process occurs (e.g., a shift in the mean) [2–4]. QC plans with greater statistical power are considered superior.

Such analyses fail to consider the magnitude of the error; all errors are treated as equal. In reality, this assumption is unlikely to hold because larger errors may have more potential for harm (and may be costlier) than smaller errors. A more accurate model might place more weight on larger errors. In particular, power curve analysis considers an event a false rejection only when the QC monitoring system produces a signal and there has been no shift in the mean. This practice overstates the specificity of the QC plan because there may be inconsequential events (i.e., small shifts) that can be safely ignored. Responding to such events wastes resources. A more realistic model might classify errors into categories (e.g., important/unimportant) and use this information to evaluate the performance of a QC plan.

The design of QC plans is rarely considered from a user perspective. The typical design perspective asks, “Given a shift of a given size, what is the probability of detecting the change if I use a particular QC plan?” The reasoning runs from cause to effect. The end-user perspective is different. The end user is confronted with a QC result and asks, “Given this signal, what is the probability that a significant problem has occurred? Is it worth the time to troubleshoot?” Conversely, “Given no signal, what is the probability that no change has occurred?” End users need to reason from an effect (a signal from QC monitoring) to a cause.
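
The run-length quantities above follow directly from the per-event signal probability: for a rule with fixed limits, ARL is simply 1/p, where p is the probability that a single QC event produces a signal. Below is a minimal Python sketch of this arithmetic, assuming a single control measurement against a Shewhart-style 3 SD limit; the rule and the shift sizes are illustrative choices, not taken from the paper.

```python
# Run-length arithmetic behind power-curve analysis: for a Shewhart-style
# rule with fixed control limits, the per-event signal probability p
# gives an average run length of 1/p QC events.
from scipy.stats import norm

def signal_probability(shift_sd, limit_sd=3.0):
    """P(a QC result falls outside +/- limit_sd) given a mean shift in SD units."""
    return norm.sf(limit_sd - shift_sd) + norm.cdf(-limit_sd - shift_sd)

for shift in (0.0, 1.0, 2.0, 4.0):  # 0.0 = in-control process
    p = signal_probability(shift)
    label = "ARL0 (false rejection)" if shift == 0 else f"ARL at {shift} SD shift"
    print(f"{label}: {1 / p:.1f} QC events")
```

At zero shift this reproduces the familiar in-control ARL of roughly 370 events for 3 SD limits; as the shift grows, the expected number of events before a signal falls toward 1.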