Receiver operating characteristic
From Wikipedia, the free encyclopedia
ROC curve of three epitope predictors.
In signal detection theory, a receiver operating characteristic (ROC), or simply ROC curve, is a graphical plot of the sensitivity vs. (1 − specificity) for a binary classifier system as its discrimination threshold is varied. The ROC can also be represented equivalently by plotting the fraction of true positives (TPR = true positive rate) vs. the fraction of false positives (FPR = false positive rate). It is also known as a relative operating characteristic curve, because it compares two operating characteristics (TPR and FPR) as the criterion changes.[1]
ROC analysis provides tools to select possibly optimal models and to discard suboptimal ones independently from (and prior to specifying) the cost context or the class distribution. ROC analysis is related in a direct and natural way to cost/benefit analysis of diagnostic decision making.
The ROC curve was first developed by electrical engineers and radar engineers during World War II for detecting enemy objects on battlefields, as part of what became known as signal detection theory. ROC analysis has since been used in medicine, radiology, psychology, and other areas for many decades, and it has been introduced relatively recently in areas such as machine learning and data mining.
Basic concept
Terminology and derivations from a confusion matrix
See also: Type I and type II errors
A classification model (classifier or diagnosis) is a mapping of instances onto classes or groups. The classifier or diagnosis result can be a real value (continuous output), in which case the boundary between classes must be determined by a threshold value (for instance, to determine whether a person has hypertension based on a blood pressure measurement), or it can be a discrete class label indicating one of the classes.
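The thresholding step can be sketched in a few lines of Python; the 140 mmHg cutoff here is purely a hypothetical example value, not a clinical recommendation:

# Minimal sketch: converting a classifier's continuous output into a class
# label with a threshold. The 140.0 cutoff is a hypothetical example value.

def classify(score, threshold=140.0):
    """Return 'p' (positive) if the score meets the threshold, else 'n'."""
    return "p" if score >= threshold else "n"

blood_pressures = [118, 135, 142, 160]
print([classify(bp) for bp in blood_pressures])  # ['n', 'n', 'p', 'p']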
Let us consider a two-class prediction problem (binary classification), in which the outcomes are labeled either as positive (p) or negative (n). There are four possible outcomes from a binary classifier. If the outcome from a prediction is p and the actual value is also p, then it is called a true positive (TP); however, if the actual value is n, then it is said to be a false positive (FP). Conversely, a true negative (TN) has occurred when both the prediction outcome and the actual value are n, and a false negative (FN) occurs when the prediction outcome is n while the actual value is p.
For a real-world example, consider a diagnostic test that seeks to determine whether a person has a certain disease. A false positive in this case occurs when the person tests positive but does not actually have the disease. A false negative, on the other hand, occurs when the person tests negative, suggesting they are healthy, when they actually do have the disease.
Let us define an experiment with P positive instances and N negative instances. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, as follows:
                                actual value
                          p                 n
prediction    p'    True Positive     False Positive     P'
outcome       n'    False Negative    True Negative      N'
              total       P                 N

The terminology and derivations from this confusion matrix are as follows (source: Fawcett (2004)):

true positive (TP): eqv. with hit
true negative (TN): eqv. with correct rejection
false positive (FP): eqv. with false alarm, Type I error
false negative (FN): eqv. with miss, Type II error
true positive rate (TPR): eqv. with hit rate, recall, sensitivity. TPR = TP / P = TP / (TP + FN)
false positive rate (FPR): eqv. with false alarm rate, fall-out. FPR = FP / N = FP / (FP + TN)
accuracy (ACC): ACC = (TP + TN) / (P + N)
specificity (SPC) or true negative rate: SPC = TN / N = TN / (FP + TN) = 1 − FPR
positive predictive value (PPV): eqv. with precision. PPV = TP / (TP + FP)
negative predictive value (NPV): NPV = TN / (TN + FN)
false discovery rate (FDR): FDR = FP / (FP + TP)
Matthews correlation coefficient (MCC)
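A minimal Python sketch of how these four counts are tallied from paired actual/predicted labels (the example data is made up for illustration):

# Minimal sketch: tallying the four outcomes of a binary classifier into
# the cells of the 2x2 confusion matrix above. Labels are 'p' and 'n'.

def confusion_matrix(actual, predicted):
    tp = fp = fn = tn = 0
    for a, pred in zip(actual, predicted):
        if pred == "p" and a == "p":
            tp += 1  # true positive: predicted p, actually p
        elif pred == "p" and a == "n":
            fp += 1  # false positive: predicted p, actually n
        elif pred == "n" and a == "p":
            fn += 1  # false negative: predicted n, actually p
        else:
            tn += 1  # true negative: predicted n, actually n
    return tp, fp, fn, tn

actual    = ["p", "p", "n", "n", "p", "n"]
predicted = ["p", "n", "p", "n", "p", "n"]
print(confusion_matrix(actual, predicted))  # (2, 1, 1, 2)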
ROC space
The ROC space and plots of the four prediction examples.
Several evaluation metrics can be derived from the contingency table (see the terminology and derivations above). To draw a ROC curve, only the true positive rate (TPR) and false positive rate (FPR) are needed. TPR measures how well a classifier or diagnostic test classifies positive instances correctly among all positive samples available during the test. FPR, on the other hand, measures how many incorrect positive results occur among all negative samples available during the test.
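These two rates (plus accuracy) can be sketched in Python. The counts below are for classifier A from the example further down, reconstructed from its TPR = 0.63 and FPR = 0.28 with P = N = 100:

# Minimal sketch: TPR, FPR, and ACC from the four confusion-matrix counts,
# following the derivations listed above.

def rates(tp, fp, fn, tn):
    p = tp + fn                # total actual positives, P
    n = fp + tn                # total actual negatives, N
    tpr = tp / p               # true positive rate (sensitivity)
    fpr = fp / n               # false positive rate (1 - specificity)
    acc = (tp + tn) / (p + n)  # accuracy
    return tpr, fpr, acc

# Classifier A from the example below (P = N = 100):
print(rates(tp=63, fp=28, fn=37, tn=72))  # (0.63, 0.28, 0.675); ACC rounds to 0.68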
A ROC space is defined by FPR and TPR as its x and y axes, respectively, depicting the relative trade-off between true positives (benefits) and false positives (costs). Since TPR is equivalent to sensitivity and FPR is equal to 1 − specificity, the ROC graph is sometimes called the sensitivity vs (1 − specificity) plot. Each prediction result, or one instance of a confusion matrix, represents one point in the ROC space.
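The following sketch traces the points of a ROC curve by sweeping the decision threshold over a set of continuous scores; the scores and labels are invented for illustration:

# Minimal sketch: one ROC point per candidate threshold, obtained by
# sweeping the threshold from "nothing positive" to "everything positive".

def roc_points(scores, labels):
    """Return (FPR, TPR) pairs, one per candidate threshold."""
    p = sum(1 for y in labels if y == "p")  # total positives
    n = len(labels) - p                     # total negatives
    points = []
    for t in [max(scores) + 1] + sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == "p")
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == "n")
        points.append((fp / n, tp / p))
    return points

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]  # classifier scores (made up)
labels = ["p", "p", "n", "p", "n", "n"]  # actual classes
print(roc_points(scores, labels))
# [(0.0, 0.0), (0.0, 0.33), (0.0, 0.67), (0.33, 0.67),
#  (0.33, 1.0), (0.67, 1.0), (1.0, 1.0)]  (values rounded)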
The best possible prediction method would yield a point in the upper left corner, at coordinate (0,1) of the ROC space, representing 100% sensitivity (no false negatives) and 100% specificity (no false positives). The (0,1) point is also called a perfect classification. A completely random guess would give a point along the diagonal line (the so-called line of no discrimination) from the bottom left to the top right corner. An intuitive example of random guessing is deciding by flipping a coin (heads or tails).
The diagonal line divides the ROC space into areas of good and bad classification/diagnosis. Points above the diagonal indicate good classification results, while points below the line indicate wrong results (although the prediction method can simply be inverted to obtain points above the line). Let us look at four prediction results from 100 positive and 100 negative instances:

         A        B        C        C'
TPR      0.63     0.77     0.24     0.76
FPR      0.28     0.77     0.88     0.12
ACC      0.68     0.50     0.18     0.82
Plots of the four results above in the ROC space are given in the figure. Result A is clearly the best of A, B, and C. Result B lies on the random-guess line (the diagonal), and it can be seen in the table that the accuracy of B is 50%. However, when C is mirrored across the center point (0.5, 0.5), the resulting method C' performs even better than A.
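As a quick consistency check on the table, accuracy can be recovered from TPR and FPR alone here, since TP = TPR × P and TN = (1 − FPR) × N with P = N = 100:

ACC = (TP + TN) / (P + N) = (TPR × 100 + (1 − FPR) × 100) / 200

For B this gives (77 + 23) / 200 = 0.50, matching its 50% accuracy.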
Since this mirrored method C' simply reverses the predictions of whatever method or test produced the C contingency table (whenever C predicts p, C' predicts n, and vice versa), the C method has real predictive power that is unlocked by reversing all of its decisions. In this manner, the C' test performs the best of the four. While the closer a result from a contingency table is to the upper left corner, the better it predicts, the distance from the random-guess line in either direction is the best indicator of how much predictive power a method has. If a result is below the line, however, all of its predictions (which are wrong more often than right) must be reversed in order to utilize the method's power.
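A minimal sketch of this reversal in Python: the inverted method's true positives are exactly the original method's false negatives (and likewise for the negatives), so its ROC point is (1 − FPR, 1 − TPR):

# Minimal sketch: reversing every p/n decision of a below-diagonal method
# maps its ROC point (FPR, TPR) to (1 - FPR, 1 - TPR), i.e. mirrors it
# through the center point (0.5, 0.5) to land above the diagonal.

def invert(tpr, fpr):
    """ROC point of the method that reverses every decision."""
    return 1 - tpr, 1 - fpr

# Method C from the table above:
tpr_c, fpr_c = 0.24, 0.88
print(invert(tpr_c, fpr_c))  # roughly (0.76, 0.12): the point C'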