Public Health Forum

A Forum to discuss Public Health Issues in Pakistan

Welcome to the most comprehensive portal on Community Medicine/Public Health in Pakistan. This website contains content-rich information for medical students, postgraduates, researchers, and fellows in Public Health, and encompasses all super-specialties of Public Health. The site is maintained by Dr Nayyar R. Kazmi.




    Receiver Operator Curve

    The Saint
    Admin

    Receiver Operator Curve

    Post by The Saint Thu Jun 04, 2009 12:45 pm

    Receiver operating characteristic

    From Wikipedia, the free encyclopedia

    [Figure: ROC curves of three epitope predictors.]





    In signal detection theory, a receiver operating characteristic (ROC), or simply ROC curve, is a graphical plot of sensitivity vs. (1 − specificity) for a binary classifier system as its discrimination threshold is varied. Equivalently, the ROC can be represented by plotting the fraction of true positives (TPR = true positive rate) vs. the fraction of false positives (FPR = false positive rate). It is also known as a relative operating characteristic curve, because it compares two operating characteristics (TPR and FPR) as the criterion changes.[1]
    ROC analysis provides tools to select possibly optimal models and to discard suboptimal ones independently of (and prior to specifying) the cost context or the class distribution. ROC analysis is related in a direct and natural way to cost/benefit analysis of diagnostic decision making.
    The ROC curve was first developed by electrical and radar engineers during World War II for detecting enemy objects in battlefields; this work gave rise to signal detection theory. ROC analysis has since been used for decades in medicine, radiology, psychology, and other areas, and it has been introduced relatively recently in fields such as machine learning and data mining.

    Basic concept


    Terminology and derivations from a confusion matrix

    true positive (TP): eqv. with hit
    true negative (TN): eqv. with correct rejection
    false positive (FP): eqv. with false alarm, Type I error
    false negative (FN): eqv. with miss, Type II error
    true positive rate (TPR): eqv. with hit rate, recall, sensitivity; TPR = TP / P = TP / (TP + FN)
    false positive rate (FPR): eqv. with false alarm rate, fall-out; FPR = FP / N = FP / (FP + TN)
    accuracy (ACC): ACC = (TP + TN) / (P + N)
    specificity (SPC) or true negative rate: SPC = TN / N = TN / (FP + TN) = 1 − FPR
    positive predictive value (PPV): eqv. with precision; PPV = TP / (TP + FP)
    negative predictive value (NPV): NPV = TN / (TN + FN)
    false discovery rate (FDR): FDR = FP / (FP + TP)
    Matthews correlation coefficient (MCC): MCC = (TP·TN − FP·FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))

    Source: Fawcett (2004).
    See also: Type I and type II errors
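
    To make these definitions concrete, here is a minimal Python sketch (added for illustration; it is not part of the original article, and the function name is arbitrary) that derives each metric from the four cell counts:

    import math

    def confusion_metrics(tp, fp, fn, tn):
        """Evaluation metrics derived from a 2x2 confusion matrix."""
        p = tp + fn  # actual positives
        n = fp + tn  # actual negatives
        return {
            "TPR": tp / p,               # sensitivity / recall / hit rate
            "FPR": fp / n,               # fall-out / false alarm rate
            "ACC": (tp + tn) / (p + n),  # accuracy
            "SPC": tn / n,               # specificity = 1 - FPR
            "PPV": tp / (tp + fp),       # precision
            "NPV": tn / (tn + fn),
            "FDR": fp / (fp + tp),
            "MCC": (tp * tn - fp * fn) / math.sqrt(
                (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),
        }

    # Example: result A discussed later in the post (TP=63, FP=28, FN=37, TN=72).
    print(confusion_metrics(63, 28, 37, 72))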

    A classification model (classifier or diagnosis) is a mapping of instances onto classes/groups. The classifier or diagnosis result can be a real value (continuous output), in which case the boundary between classes must be determined by a threshold value (for instance, to determine whether a person has hypertension based on a blood pressure measurement), or it can be a discrete class label indicating one of the classes.
    Let us consider a two-class prediction problem (binary classification), in which the outcomes are labeled either positive (p) or negative (n). There are four possible outcomes from a binary classifier. If the outcome from a prediction is p and the actual value is also p, then it is called a true positive (TP); however, if the actual value is n, then it is said to be a false positive (FP). Conversely, a true negative (TN) has occurred when both the prediction outcome and the actual value are n, and a false negative (FN) when the prediction outcome is n while the actual value is p.
    For a real-world example, consider a diagnostic test that seeks to determine whether a person has a certain disease. A false positive occurs when the person tests positive but does not actually have the disease. A false negative, on the other hand, occurs when the person tests negative, suggesting they are healthy, when they actually do have the disease.
    Let us define an experiment with P positive instances and N negative instances. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, as follows:
                                actual value
                                p                  n

    prediction      p'          True Positive      False Positive     P'
    outcome         n'          False Negative     True Negative      N'

                    total       P                  N
    ROC space

    [Figure: the ROC space and plots of the four prediction examples A, B, C, and C'.]

    Several evaluation metrics can be derived from the contingency table (see the terminology box above). To draw a ROC curve, only the true positive rate (TPR) and false positive rate (FPR) are needed. TPR measures how well a classifier or diagnostic test classifies positive instances correctly among all positive samples available during the test. FPR, on the other hand, defines how many incorrect positive results occur among all negative samples available during the test.
    A ROC space is defined by FPR and TPR as the x and y axes respectively, and depicts the relative trade-off between true positives (benefits) and false positives (costs). Since TPR is equivalent to sensitivity and FPR is equal to 1 − specificity, the ROC graph is sometimes called the sensitivity vs. (1 − specificity) plot. Each prediction result, or one instance of a confusion matrix, represents one point in the ROC space.
    The best possible prediction method would yield a point in the upper left corner, at coordinate (0,1) of the ROC space, representing 100% sensitivity (no false negatives) and 100% specificity (no false positives). The (0,1) point is also called a perfect classification. A completely random guess would give a point along the diagonal line (the so-called line of no discrimination) from the bottom left to the top right corner. An intuitive example of random guessing is deciding by flipping a coin (heads or tails).
    The diagonal line divides the ROC space into areas of good and bad classification/diagnosis. Points above the diagonal indicate good classification results, while points below the line indicate wrong results (although the prediction method can simply be inverted to get points above the line). Let us look at four prediction results from 100 positive and 100 negative instances:
    A:          TP = 63    FP = 28     91
                FN = 37    TN = 72    109
                    100        100    200

    B:          TP = 77    FP = 77    154
                FN = 23    TN = 23     46
                    100        100    200

    C:          TP = 24    FP = 88    112
                FN = 76    TN = 12     88
                    100        100    200

    C':         TP = 88    FP = 24    112
                FN = 12    TN = 76     88
                    100        100    200

    A:  TPR = 0.63, FPR = 0.28, ACC = 0.68
    B:  TPR = 0.77, FPR = 0.77, ACC = 0.50
    C:  TPR = 0.24, FPR = 0.88, ACC = 0.18
    C': TPR = 0.88, FPR = 0.24, ACC = 0.82
    Plots of the four results above in the ROC space are given in the figure. Result A is clearly the best of A, B, and C. Result B lies on the random-guess line (the diagonal), and it can be seen in the table that its accuracy is 50%. However, when C is mirrored across the diagonal line, the resulting method C' is even better than A.
    This mirrored method C' simply reverses the predictions of whatever method or test produced contingency table C, so the C method has predictive power that can be recovered by reversing all of its decisions: when the C method predicts p or n, the C' method predicts n or p, respectively. In this manner, the C' test performs best. While the closer a result is to the upper left corner, the better it predicts, the distance from the random-guess line in either direction is the best indicator of how much predictive power a method has; if it lies below the line, all of its predictions (including its more often wrong predictions) must be reversed in order to utilize the method's power.
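
    As an illustrative Python sketch (not part of the original article), the four results can be checked numerically, with anything below the diagonal flagged for "flipping":

    # (TP, FP, FN, TN) for the four example results above.
    results = {"A": (63, 28, 37, 72), "B": (77, 77, 23, 23),
               "C": (24, 88, 76, 12), "C'": (88, 24, 12, 76)}

    for name, (tp, fp, fn, tn) in results.items():
        tpr = tp / (tp + fn)
        fpr = fp / (fp + tn)
        acc = (tp + tn) / 200
        side = "above" if tpr > fpr else ("on" if tpr == fpr else "below")
        print(f"{name}: TPR={tpr:.2f} FPR={fpr:.2f} ACC={acc:.2f} "
              f"({side} the diagonal)")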
    The Saint
    Admin

    Re: Receiver Operator Curve

    Post by The Saint Thu Jun 04, 2009 12:45 pm

    Curves in ROC space

    Discrete classifiers, such as decision trees or rule sets, yield a single class label. When a test set is given to such a classifier, the result is a single point in the ROC space. Other classifiers, such as naive Bayes classifiers and neural networks, produce probability values representing the degree to which an instance belongs to a class. For these methods, setting a threshold value determines a point in the ROC space. For instance, if probability values below or equal to a threshold of 0.8 are assigned to the positive class, and all other values to the negative class, then a confusion matrix can be calculated. Plotting the ROC point for each possible threshold value results in a curve.
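
    A hedged sketch of this thresholding procedure (the function name and the scores/labels below are made-up illustrative data, not from the article; here, scores at or above the threshold are assigned to the positive class, one common convention):

    def roc_points(scores, labels):
        """Sweep a decision threshold; return one (FPR, TPR) point per threshold."""
        p = sum(labels)       # number of actual positives (labels are 0/1)
        n = len(labels) - p   # number of actual negatives
        points = [(0.0, 0.0)]
        for thr in sorted(set(scores), reverse=True):
            tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
            fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
            points.append((fp / n, tp / p))
        return points

    # Illustrative classifier scores and true labels (1 = positive).
    scores = [0.9, 0.8, 0.7, 0.55, 0.5, 0.4, 0.3, 0.2]
    labels = [1,   1,   0,   1,    0,   1,   0,   0  ]
    print(roc_points(scores, labels))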

    Further interpretations
    [Figure: how a ROC curve can be interpreted.]

    Sometimes, the ROC is used to generate a summary statistic. Common versions are:

    • the intercept of the ROC curve with the line at 90 degrees to the no-discrimination line
    • the area between the ROC curve and the no-discrimination line
    • the area under the ROC curve, or "AUC", or A' (pronounced "a-prime")[2]
    • d' (pronounced "d-prime"), the distance between the mean of the distribution of activity in the system under noise-alone conditions and its distribution under signal-alone conditions, divided by their standard deviation, under the assumption that both these distributions are normal with the same standard deviation; under these assumptions, it can be proved that the shape of the ROC depends only on d' (see the formula sketch after this list)
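
    Under the stated equal-variance normal assumption, d' can be written compactly (a standard formulation added here for clarity, in the plain notation used elsewhere in this post; μ_S and μ_N denote the signal and noise means, σ their common standard deviation):

    d' = (μ_S − μ_N) / σ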

    The AUC is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one.[3] It can be shown that the area under the ROC curve is equivalent to the Mann-Whitney U, which tests for the median difference between scores obtained in the two groups considered, if the groups consist of continuous data. It is also equivalent to the Wilcoxon test of ranks. The AUC has been found to be related to the Gini coefficient (G_1) by the formula[4] G_1 + 1 = 2 × AUC, where:

    G_1 = 1 − Σ_k (X_k − X_{k−1})(Y_k + Y_{k−1})

    with (X_k, Y_k) the successive (FPR, TPR) points of the curve.
    In this way, it is possible to calculate the AUC by using an average of a number of trapezoidal approximations. However, any attempt to summarize the ROC curve into a single number loses information about the pattern of trade-offs of the particular discriminator algorithm. The machine learning community most often uses the ROC AUC statistic for model comparison.[5] This measure can be interpreted as the probability that, when we randomly pick one positive and one negative example, the classifier will assign a higher score to the positive example than to the negative one. In engineering, the area between the ROC curve and the no-discrimination line is often preferred, because of its useful mathematical properties as a non-parametric statistic. This area is often known simply as the discrimination. In psychophysics, d' is the most commonly used measure.
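
    Both views of the AUC can be demonstrated in a short Python sketch (illustrative only, not from the article): the first function integrates the threshold-swept curve with trapezoids, while the second counts correctly ranked positive/negative pairs per the Mann-Whitney U interpretation; on tie-free data the two agree exactly.

    def auc_trapezoid(scores, labels):
        """AUC by sweeping thresholds and integrating (FPR, TPR) with trapezoids."""
        p = sum(labels)
        n = len(labels) - p
        pts = [(0.0, 0.0)]
        for thr in sorted(set(scores), reverse=True):
            tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
            fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
            pts.append((fp / n, tp / p))
        return sum((x2 - x1) * (y1 + y2) / 2
                   for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

    def auc_rank(scores, labels):
        """AUC as the fraction of (positive, negative) pairs ranked correctly;
        ties count as half, matching the Mann-Whitney U statistic."""
        pos = [s for s, y in zip(scores, labels) if y == 1]
        neg = [s for s, y in zip(scores, labels) if y == 0]
        wins = sum(1.0 if ps > ns else 0.5 if ps == ns else 0.0
                   for ps in pos for ns in neg)
        return wins / (len(pos) * len(neg))

    scores = [0.9, 0.8, 0.7, 0.55, 0.5, 0.4, 0.3, 0.2]
    labels = [1, 1, 0, 1, 0, 1, 0, 0]
    print(auc_trapezoid(scores, labels), auc_rank(scores, labels))  # both 0.8125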
    The illustration at the top of the page shows the use of ROC graphs for discriminating between the quality of different epitope-predicting algorithms. If you wish to discover at least 60% of the epitopes in a virus protein, you can read off the graph that about one third of the output would be falsely marked as an epitope. What is not visible in this graph is that the person using the algorithms must know which threshold settings give a certain point in the ROC graph.
    Sometimes it can be more useful to look at a specific region of the ROC curve rather than at the whole curve, and it is possible to compute a partial AUC.[6] For example, one could focus on the region of the curve with a low false positive rate, which is often of prime interest for population screening tests.[7]
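
    A hedged Python sketch of a partial AUC restricted to a false-positive-rate cap (the cap value and point list are illustrative assumptions; the segment crossing the cap is linearly interpolated):

    def partial_auc(points, max_fpr):
        """Trapezoidal area under sorted (FPR, TPR) points, restricted to FPR <= max_fpr."""
        pts = sorted(points)
        area = 0.0
        for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
            if x1 >= max_fpr:
                break
            if x2 > max_fpr:  # cut the segment off exactly at the cap
                y2 = y1 + (y2 - y1) * (max_fpr - x1) / (x2 - x1)
                x2 = max_fpr
            area += (x2 - x1) * (y1 + y2) / 2
        return area

    # Screening-style question: area under the curve for FPR <= 0.2 only.
    print(partial_auc([(0.0, 0.0), (0.1, 0.6), (0.3, 0.8), (1.0, 1.0)], 0.2))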

    History

    The ROC curve was first used during World War II for the analysis of radar signals, before it was employed in signal detection theory.[8] Following the attack on Pearl Harbor in 1941, the United States Army began new research to improve the prediction of correctly detected Japanese aircraft from their radar signals.
    In the 1950s, ROC curves were employed in psychophysics to assess human (and occasionally non-human animal) detection of weak signals.[8] In medicine, ROC analysis has been extensively used in the evaluation of diagnostic tests.[9][10] ROC curves are also used extensively in epidemiology and medical research, and are frequently mentioned in conjunction with evidence-based medicine. In radiology, ROC analysis is a common technique for evaluating new radiology techniques.[11]
    In the social sciences, ROC analysis is often called the ROC Accuracy Ratio, a common technique for judging the accuracy of default probability models. ROC curves have also proved useful for the evaluation of machine learning techniques. The first application of ROC in machine learning was by Spackman, who demonstrated the value of ROC curves in comparing and evaluating different classification algorithms.[12]
