Public Health Forum


A Forum to discuss Public Health Issues in Pakistan

Welcome to the most comprehensive portal on Community Medicine/Public Health in Pakistan. This website contains content-rich information for medical students, postgraduates, researchers, and fellows in Public Health, and encompasses all super-specialties of Public Health. The site is maintained by Dr Nayyar R. Kazmi.




4 posters

    T Test

    Dr Abdul Aziz Awan

    Number of posts : 685
    Age : 56
    Location : WHO Country Office Islamabad
    Job : National Coordinator for Polio Surveillance
    Registration date : 2007-02-23

    T Test

    Post by Dr Abdul Aziz Awan Fri May 18, 2007 12:43 pm

    As it includes several important graphs and pictures, please visit the website given below:

    http://www.socialresearchmethods.net/kb/stat_t.php
    The Saint
    Admin

    Number of posts : 2444
    Age : 51
    Location : In the Fifth Dimension
    Job : Consultant in Paediatric Emergency Medicine, NHS, Kent, England, UK
    Registration date : 2007-02-22

    T Test

    Post by The Saint Wed May 07, 2008 11:18 am

    The T-Test



    The t-test assesses whether the means of two groups are statistically different from each other. This analysis is appropriate whenever you want to compare the means of two groups, and especially appropriate as the analysis for the posttest-only two-group randomized experimental design.



    Figure 1. Idealized distributions for treated and comparison group posttest values.


    Figure 1 shows the distributions for the treated (blue) and control (green) groups in a study. Actually, the figure shows the idealized distribution -- the actual distribution would usually be depicted with a histogram or bar graph. The figure indicates where the control and treatment group means are located. The question the t-test addresses is whether the means are statistically different.

    What does it mean to say that the averages for two groups are statistically different?
    Consider the three situations shown in Figure 2. The first thing to notice is that the difference between the means is the same in all three. But you should also notice that the three situations don't look the same -- they tell very different stories. The top example shows a case with moderate variability of scores within each group. The second shows the high-variability case, and the third shows the case with low variability. Clearly, we would conclude that the two groups appear most different or distinct in the bottom, low-variability case. Why? Because there is relatively little overlap between the two bell-shaped curves. In the high-variability case, the group difference appears least striking because the two bell-shaped distributions overlap so much.



    Figure 2. Three scenarios for differences between means.


    This leads us to a very important conclusion: when we are looking at the differences between scores for two groups, we have to judge the difference between their means relative to the spread or variability of their scores. The t-test does just this.
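
    To make this concrete, here is a minimal Python sketch (SciPy and NumPy assumed to be installed; the data are simulated, not from any real study). The mean difference is held fixed while the spread changes, and the resulting t ratio changes with it.

    Code:
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    mean_diff = 5.0  # the same difference between group means in every scenario

    for label, sd in [("low variability", 2.0),
                      ("moderate variability", 5.0),
                      ("high variability", 15.0)]:
        treated = rng.normal(50.0 + mean_diff, sd, size=30)  # hypothetical treated scores
        control = rng.normal(50.0, sd, size=30)              # hypothetical control scores
        t, p = stats.ttest_ind(treated, control)
        print(f"{label}: t = {t:.2f}, p = {p:.4f}")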

    Statistical Analysis of the t-test



    The formula for the t-test is a ratio. The top part of the ratio is just the difference between the two means or averages. The bottom part is a measure of the variability or dispersion of the scores. The formula is essentially another example of the signal-to-noise metaphor in research: the difference between the means is the signal that, in this case, we think our program or treatment introduced into the data; the bottom part of the formula is a measure of variability that is essentially noise that may make it harder to see the group difference. Figure 3 shows the formula for the t-test and how the numerator and denominator are related to the distributions.



    Figure 3. Formula for the t-test: the difference between the group means divided by the standard error of the difference.


    The top part of the formula is easy to compute -- just find the difference between the means. The bottom part is called the standard error of the difference. To compute it, we take the variance for each group and divide it by the number of people in that group. We add these two values and then take their square root. The specific formula is given in Figure 4:



    Figure 4. Formula for the standard error of the difference between the means: SE(difference) = sqrt(var_T/n_T + var_C/n_C).


    Remember that the variance is simply the square of the standard deviation.
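
    A minimal sketch of this calculation in Python, using made-up group variances and sample sizes (not taken from any real study):

    Code:
    import math

    var_t, n_t = 4.2 ** 2, 30   # treatment group: variance (sd squared) and size, hypothetical
    var_c, n_c = 3.9 ** 2, 32   # control group: variance and size, hypothetical

    se_diff = math.sqrt(var_t / n_t + var_c / n_c)
    print(f"standard error of the difference = {se_diff:.3f}")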

    The final formula for the t-test is shown in Figure 5:



    Figure 5. Formula for the t-test: t = (mean_T - mean_C) / SE(difference).


    The t-value will be positive if the first mean is larger than the second and negative if it is smaller. Once you compute the t-value, you have to look it up in a table of significance to test whether the ratio is large enough to say that the difference between the groups is not likely to have been a chance finding. To test the significance, you need to set a risk level (called the alpha level). In most social research, the rule of thumb is to set the alpha level at .05. This means that five times out of a hundred you would find a statistically significant difference between the means even if there was none (i.e., by "chance"). You also need to determine the degrees of freedom (df) for the test. In the t-test, the degrees of freedom is the sum of the persons in both groups minus 2. Given the alpha level, the df, and the t-value, you can look the t-value up in a standard table of significance (available as an appendix in the back of most statistics texts) to determine whether the t-value is large enough to be significant. If it is, you can conclude that the means of the two groups are different (even given the variability). Fortunately, statistical computer programs routinely print the significance test results and save you the trouble of looking them up in a table.
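
    For readers who prefer software to the printed table, here is a rough Python/SciPy sketch with hypothetical scores: it computes t, the df, and the two-sided p value, and then checks the result against SciPy's built-in test.

    Code:
    import numpy as np
    from scipy import stats

    treated = np.array([52.1, 48.3, 55.0, 50.7, 53.2, 49.8, 51.5, 54.1])  # hypothetical scores
    control = np.array([47.2, 45.9, 50.1, 46.8, 48.4, 44.7, 49.0, 47.5])

    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / len(treated) + control.var(ddof=1) / len(control))
    t = diff / se
    df = len(treated) + len(control) - 2
    p = 2 * stats.t.sf(abs(t), df)   # two-sided p value, to compare against alpha = .05
    print(f"t = {t:.2f}, df = {df}, p = {p:.4f}")

    # SciPy's built-in two-sample test gives the same result here (equal group sizes):
    print(stats.ttest_ind(treated, control))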

    The t-test, one-way Analysis of Variance (ANOVA), and a form of regression analysis are mathematically equivalent (see the statistical analysis of the posttest-only randomized experimental design) and would yield identical results.
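
    A quick way to see this equivalence for two groups is the check below (Python/SciPy assumed, simulated data): the one-way ANOVA F statistic equals the square of the t statistic, and the p values match.

    Code:
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    a = rng.normal(50, 10, 25)   # hypothetical group A
    b = rng.normal(55, 10, 25)   # hypothetical group B

    t, p_t = stats.ttest_ind(a, b)
    f, p_f = stats.f_oneway(a, b)
    print(round(t ** 2, 6), round(f, 6))   # identical: F = t squared for two groups
    print(round(p_t, 6), round(p_f, 6))    # identical p values
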
    Re: T Test

    Post by The Saint Mon Mar 30, 2009 6:46 pm

    Re: T Test

    Post by The Saint Mon Mar 30, 2009 6:47 pm

    Take a printout of the sheet given above. I bet it is going to help you in the future.
    Re: T Test

    Post by The Saint Thu Feb 28, 2013 7:54 pm



    Parametric Test

    A parametric test is a statistical test that assumes an underlying distribution of the observed data. The t test is one of the most common parametric tests and can be categorized as follows.


    One-Sample t Test
    The one-sample t test is used to test whether the population mean of the variable of interest has a specific value (the hypothetical mean), against the alternative that it does not have this value, or is greater or less than this value. A p value is computed from the t ratio (which equals the difference between the sample mean and the hypothetical mean divided by the standard error of the mean) and the number of degrees of freedom (which equals the sample size minus 1). If the p value is small, the data provide stronger evidence that the population mean differs from the hypothetical value.
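
    A minimal Python/SciPy sketch of a one-sample t test, using hypothetical values and a hypothetical mean of 120:

    Code:
    from scipy import stats

    sample = [118, 125, 130, 121, 117, 128, 133, 122, 126, 119]  # hypothetical measurements
    t, p = stats.ttest_1samp(sample, popmean=120)
    print(f"t = {t:.2f}, df = {len(sample) - 1}, p = {p:.4f}")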


    Two-Sample t Test
    The two-sample t test is used to determine if the means of the variable of interest from two populations are equal. A common application of this is to test if the outcome of a new process or treatment is superior to a current process or treatment.


    t Test for Independent Samples
    An independent samples t test is used when a researcher wants to compare the means of a normally distributed variable of interest for two independent groups, such as the heights of two gender groups. The t ratio is the difference between the two group sample means divided by the standard error of the difference, which is calculated by pooling the standard errors of the means of the two groups.
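
    A minimal sketch of an independent-samples t test in Python/SciPy, with hypothetical height data; equal_var=True corresponds to the pooled standard error described above.

    Code:
    from scipy import stats

    group_a = [172, 168, 175, 180, 169, 177, 174, 171]   # hypothetical heights (cm)
    group_b = [165, 162, 170, 168, 164, 167, 161, 166]
    t, p = stats.ttest_ind(group_a, group_b, equal_var=True)  # pooled-variance t test
    print(f"t = {t:.2f}, p = {p:.4f}")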


    t Test for Dependent Samples
    If two groups of observations of the variable of interest (that are to be compared) are based on the same sample of subjects who were tested twice (e.g., before and after a treatment); or if the subjects are recruited as pairs, matched for variables such as age and ethnic group, with one receiving one treatment and the other an alternative treatment; or if twins or child-parent pairs are being measured, researchers can look only at the differences between the two measures in each subject. Subtracting the first score from the second for each subject and then analyzing only those "pure" (paired) differences is precisely what is done in the t test for dependent samples; compared with the t test for independent samples, this is typically more sensitive when the paired measurements are positively correlated. The t ratio for a paired t test is the mean of these differences divided by the standard error of the differences.
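
    A minimal sketch of a dependent (paired) t test in Python/SciPy on hypothetical before/after scores; it is equivalent to a one-sample test on the paired differences.

    Code:
    from scipy import stats

    before = [82, 75, 90, 68, 77, 85, 79, 88]   # hypothetical scores before treatment
    after  = [86, 80, 92, 74, 80, 88, 83, 91]   # the same subjects after treatment
    t, p = stats.ttest_rel(after, before)
    print(f"t = {t:.2f}, p = {p:.4f}")

    # Equivalent to a one-sample t test on the differences against zero:
    diffs = [a - b for a, b in zip(after, before)]
    print(stats.ttest_1samp(diffs, 0))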


    Assumptions
    Theoretically, the t test can be used even if the sample sizes are very small (e.g., as small as 10) so long as the variables of interest are normally distributed within each group, and the variation of scores in the two groups is not reliably different. The normality assumption can be evaluated by looking at the distribution of the data (via histograms) or by performing a normality test. The equality of variances assumption can be verified with the F test, or the researcher can use the more robust Levene's test.
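
    A minimal sketch of these assumption checks in Python/SciPy (hypothetical data): Shapiro-Wilk for normality within each group and Levene's test for equal variances.

    Code:
    from scipy import stats

    group_a = [52.1, 48.3, 55.0, 50.7, 53.2, 49.8, 51.5, 54.1]   # hypothetical scores
    group_b = [47.2, 45.9, 50.1, 46.8, 48.4, 44.7, 49.0, 47.5]

    print("normality, group A:", stats.shapiro(group_a))
    print("normality, group B:", stats.shapiro(group_b))
    print("equal variances:", stats.levene(group_a, group_b))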


    Analysis of Variance
    Analysis of variance (ANOVA) is a statistical test that makes a single, overall decision as to whether a significant difference is present among three or more sample means of the variable of interest (outcome). An ANOVA is similar to a t test; however, it can also test multiple groups to see if they differ on one or more explanatory variables. ANOVA can be used to test between-groups and within-groups differences. There are two types of ANOVAs: one-way ANOVA and multiple ANOVA.


    One-Way ANOVA
    A one-way ANOVA is used when there are a normally distributed interval outcome and a categorical explanatory variable (with two or more categories), and the researcher wishes to test for differences in the means of the outcome broken down by the levels of the explanatory variable. For instance, a one-way ANOVA could determine whether class levels (explanatory variable), for example, freshmen, sophomores, juniors, and seniors, differed in their reading ability (outcome).
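
    A minimal one-way ANOVA sketch in Python/SciPy, using hypothetical reading scores for the four class levels in the example:

    Code:
    from scipy import stats

    freshmen   = [61, 58, 65, 60, 63]   # hypothetical reading scores
    sophomores = [66, 62, 68, 64, 67]
    juniors    = [70, 68, 73, 69, 71]
    seniors    = [74, 72, 77, 73, 76]

    f, p = stats.f_oneway(freshmen, sophomores, juniors, seniors)
    print(f"F = {f:.2f}, p = {p:.4f}")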


    Multiple ANOVA (Two-Way ANOVA, N-Way ANOVA)
    This test is used to determine whether the outcome differs across the levels of two or more explanatory variables. For instance, a two-way ANOVA could determine whether the class levels differed in reading ability and whether those differences varied by gender. In this case, a researcher could determine (a) whether reading ability differed across class levels, (b) whether reading ability differed across gender, and (c) whether there was an interaction between class level and gender.
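
    A rough sketch of such a two-way ANOVA in Python, assuming pandas and statsmodels are available; the data set and variable names are hypothetical.

    Code:
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    df = pd.DataFrame({
        "reading": [61, 63, 66, 68, 70, 72, 74, 76,
                    60, 64, 65, 69, 71, 73, 75, 77],            # hypothetical scores
        "level":   ["fresh", "fresh", "soph", "soph",
                    "junior", "junior", "senior", "senior"] * 2,
        "gender":  ["F"] * 8 + ["M"] * 8,
    })
    model = ols("reading ~ C(level) * C(gender)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))   # main effects and the interaction
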
    Nonparametric Test

    Nonparametric methods were developed for cases in which the researcher knows nothing about the parameters of the variable of interest in the population. Nonparametric methods do not rely on the estimation of parameters (such as the mean or the standard deviation) describing the distribution of the variable of interest in the population. Nonparametric methods are most appropriate when the sample sizes are small: when the samples become very large, the sample means follow the normal distribution even if the respective variable is not normally distributed in the population or is not measured very well, so parametric tests remain usable.
    Basically, there is at least one nonparametric equivalent for each general type of parametric test. In general, these tests fall into the following categories.


    One-Sample Test
    A Wilcoxon signed-rank test compares the median of a single column of numbers against a hypothetical median that the researcher enters. If the data really were sampled from a population with the hypothetical median, one would expect the sum of signed ranks to be near zero.
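
    A minimal sketch in Python/SciPy: subtract the hypothetical median and run the signed-rank test on the differences (the data are made up).

    Code:
    from scipy import stats

    sample = [118, 125, 130, 121, 117, 128, 133, 122, 126, 119]  # hypothetical values
    hypothetical_median = 120
    diffs = [x - hypothetical_median for x in sample]
    w, p = stats.wilcoxon(diffs)
    print(f"W = {w}, p = {p:.4f}")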


    Differences Between Independent Groups
    Nonparametric alternatives to the t test for independent samples are the Mann-Whitney U test, the Wald-Wolfowitz runs test, and the Kolmogorov-Smirnov two-sample test. The Mann-Whitney U test, also called the rank sum test, is a nonparametric test assessing whether two samples of observations come from the same distribution. It is virtually identical to performing an ordinary parametric two-sample t test on the data after ranking over the combined samples. The Wald-Wolfowitz runs test is a nonparametric test of the identity of the distribution functions of two continuous populations against general alternative hypotheses. The Kolmogorov-Smirnov two-sample test is one of the most useful and general nonparametric methods for comparing two samples, as it is sensitive to differences in both location and shape of the empirical cumulative distribution functions of the two samples.
    An appropriate nonparametric alternative to the one-way independent-samples ANOVA is the Kruskal-Wallis test, which is applicable when the researcher has an ordinal (or continuous) outcome and a categorical explanatory variable with two or more levels. It is a generalized form of the Mann-Whitney test, since it permits two or more groups.
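
    A minimal sketch of the two most commonly used alternatives above, in Python/SciPy with made-up scores: Mann-Whitney U for two independent groups and Kruskal-Wallis for three or more.

    Code:
    from scipy import stats

    group_a = [12, 15, 14, 10, 18, 20, 11]   # hypothetical scores
    group_b = [22, 25, 19, 24, 28, 21, 26]
    group_c = [30, 27, 32, 29, 35, 31, 33]

    print(stats.mannwhitneyu(group_a, group_b, alternative="two-sided"))
    print(stats.kruskal(group_a, group_b, group_c))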


    Differences Between Dependent Groups
    For the t test for dependent samples, the nonparametric alternatives are the sign test and Wilcoxon's matched pairs test. The sign test can be used to test that there is "no difference" between the continuous distributions of two random samples. The Wilcoxon test is a nonparametric test that compares two paired groups by calculating the difference between each set of pairs and analyzing that list of differences. If the variables of interest are dichotomous in nature (i.e., "pass" vs. "no pass"), then McNemar's chi-square test is appropriate. If there are more than two variables that were measured in the same sample, then the researcher would customarily use repeated measures ANOVA. Nonparametric alternatives to this method are Friedman's two-way ANOVA and the Cochran Q test. Cochran Q is an extension of the McNemar test and is particularly useful for measuring changes in frequencies (proportions) across time, which leads to a chi-square test.
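
    A minimal sketch of the dependent-samples alternatives in Python/SciPy (hypothetical repeated measures): Wilcoxon matched pairs for two measures and Friedman for three or more.

    Code:
    from scipy import stats

    before = [82, 75, 90, 68, 77, 85, 79, 88]   # hypothetical repeated measures
    after  = [86, 80, 92, 74, 80, 88, 83, 91]
    later  = [84, 78, 95, 72, 82, 90, 81, 94]

    print(stats.wilcoxon(before, after))                    # two paired measures
    print(stats.friedmanchisquare(before, after, later))    # three repeated measures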


    Relationships Between Variables
    Spearman R, Kendall tau, and coefficient gamma are the nonparametric equivalents of the standard correlation coefficient for evaluating a relationship between two variables. The appropriate nonparametric statistics for testing the relationship between two categorical variables are the chi-square test, the phi coefficient, and the Fisher exact test. In addition, the Kendall coefficient of concordance is a simultaneous test for relationships between multiple cases, which is often applicable for expressing interrater agreement among independent judges who are rating (ranking) the same stimuli.
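
    A minimal sketch of these association measures in Python/SciPy, with made-up ranks and a made-up 2x2 table:

    Code:
    import numpy as np
    from scipy import stats

    x = [1, 2, 3, 4, 5, 6, 7, 8]
    y = [2, 1, 4, 3, 6, 5, 8, 7]              # hypothetical paired ranks

    print(stats.spearmanr(x, y))
    print(stats.kendalltau(x, y))

    table = np.array([[12, 5],                # hypothetical 2x2 contingency table
                      [4, 14]])
    chi2, p, dof, expected = stats.chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
    print(stats.fisher_exact(table))
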
    Naeem Durrani


    Number of posts : 144
    Location : University Town Peshawar
    Job : Program Management
    Registration date : 2011-05-06

    Re: T Test

    Post by Naeem Durrani Fri Mar 01, 2013 8:22 am

    Thank you so much Sir
    Dr Abu Zar Taizai

    Number of posts : 1163
    Age : 58
    Location : Pabbi Nowshera
    Job : Co-ordinator DHIS, District Nowshera, and Coordinator Public Health
    Registration date : 2008-03-09

    Re: T Test

    Post by Dr Abu Zar Taizai Sun Mar 03, 2013 5:32 am

    Excellent
