Dissertation India - UK Dissertations, Dissertation Help, Dissertation Help India, SPSS Help, MBA Projects


Interpretation of Inferential Statistics

When you are working on a research paper, statistics that support your hypothesis are very important. Once data collection is complete, the next crucial step is accurate interpretation of that data. Inferential statistics allow conclusions to be drawn about similarities or differences between the sample and the population, or between samples or subsets of a sample. The Statistics Help and dissertation consulting services from Dissertation India guide you through this task, saving you time and effort:


Means » ‘t’ and ‘z’ tests.

Variance » ANOVA.

Distribution » Chi Square.

Correlations » Spearman Rank correlation coefficient.


This work is an indispensable part of PhD Dissertation Consulting. Our statisticians prepare detailed reports that explain every aspect of the analysis, making the inference clear. This is what a sample report looks like:

Factor analysis


The factor analysis is carried out in SPSS and the output is given below:

Total Variance Explained


Component   --- Initial Eigenvalues ---          --- Rotation Sums of Squared Loadings ---
            Total   % of Variance  Cumulative %   Total   % of Variance  Cumulative %
1           2.508   22.796         22.796         2.036   18.512         18.512
2           1.531   13.923         36.718         1.613   14.663         33.175
3           1.357   12.338         49.057         1.464   13.306         46.480
4           1.140   10.359         59.416         1.406   12.783         59.263
5           1.014    9.217         68.633         1.031    9.369         68.633
6            .732    6.655         75.287
7            .710    6.452         81.739
8            .628    5.713         87.452
9            .559    5.085         92.537
10           .434    3.943         96.481
11           .387    3.519        100.000

Extraction Method: Principal Component Analysis.


The table above lists the eigenvalues associated with each linear component before extraction, after extraction, and after rotation. Before extraction there are 11 linear components: Factor 1 explains 22.796% of the total variance, Factors 1 and 2 jointly explain 36.718%, and so on. Subsequent factors explain progressively smaller shares of the variance, so factors whose eigenvalues are less than 1 are excluded from the model. The factors retained for the model are listed under the heading 'Rotation Sums of Squared Loadings'.
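As a quick sketch in Python, the eigenvalue-greater-than-one retention rule can be checked directly against the eigenvalues copied from the table above:

```python
# Eigenvalues copied from the 'Total Variance Explained' table above.
eigenvalues = [2.508, 1.531, 1.357, 1.140, 1.014,
               0.732, 0.710, 0.628, 0.559, 0.434, 0.387]

# Retention rule: keep only components with eigenvalue > 1.
retained = [ev for ev in eigenvalues if ev > 1]
print(len(retained))               # 5 components are retained
print(round(sum(eigenvalues), 2))  # eigenvalues sum to the number of variables (11)
```

The five retained components match the five columns that carry rotation sums of squared loadings in the table.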


Scree Plot


The scree plot indicates the point of inflexion on the curve. The curve is difficult to interpret because it begins to tail off after two factors, but there is another drop after the fifth factor before a stable plateau is reached. Therefore, we could probably justify retaining either two or five factors.


Rotated Component Matrix

                                                      Component
                                                  1      2      3      4      5
Total Family Income                             -.307  -.292   .382   .441  -.072
Are you veg or non veg?                         -.027  -.046  -.011   .020   .985
RTE foods are nutritionally balanced             .260   .099   .057   .731  -.052
RTE foods can replace a full meal               -.104   .051  -.110   .785   .080
RTE foods are cheaper considering the time saved .144   .846   .046   .169  -.140
RTE foods save on time                           .709   .173  -.118   .132   .083
RTE foods need lesser preparation time           .827   .120   .120  -.093  -.004
RTE foods are cheaper than frozen foods          .147   .845   .098  -.049   .070
RTE foods can be easily warmed in microwave      .793   .052   .068   .027  -.112
RTE foods taste good                             .012   .190   .786  -.052   .026
RTE foods are enjoyable                          .062  -.023   .803   .018  -.032

Extraction Method: Principal Component Analysis.

Rotation Method: Varimax with Kaiser Normalization.


Rotation converged in 5 iterations.



Component Transformation Matrix


Component      1      2      3      4      5
1            .796   .584   .119   .074  -.073
2           -.275   .135   .791   .520  -.103
3            .011   .000  -.535   .839   .099
4            .538  -.798   .232   .140   .003
5            .027   .059   .145  -.025   .987

Extraction Method: Principal Component Analysis.


Rotation Method: Varimax with Kaiser Normalization.



Discriminant Analysis


Summary of Canonical Discriminant Functions


Eigenvalues



Function Eigenvalue % of Variance Cumulative % Canonical Correlation
1 .023a 100.0 100.0 .150

First 1 canonical discriminant functions were used in the analysis.


An eigenvalue indicates the proportion of variance explained (between-groups sum of squares divided by within-groups sum of squares). A large eigenvalue is associated with a strong function.

The canonical correlation is a correlation between the discriminant scores and the levels of the dependent variable. A high correlation indicates a function that discriminates well. The present correlation of 0.150 is quite low (1.00 is perfect), suggesting the function discriminates only weakly.
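For a single discriminant function, the canonical correlation reported by SPSS can be recovered from the eigenvalue as r = sqrt(λ / (1 + λ)); a minimal Python check using the value from the table above:

```python
import math

# Eigenvalue of the single discriminant function, from the table above.
eigenvalue = 0.023

# Canonical correlation: r = sqrt(lambda / (1 + lambda)).
canonical_corr = math.sqrt(eigenvalue / (1 + eigenvalue))
print(round(canonical_corr, 3))  # 0.15, matching the reported .150
```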


Wilks' Lambda


Test of Function(s) Wilks' Lambda Chi-square df Sig.
1 .977 4.470 5 .484

Wilks' Lambda is the ratio of the within-groups sum of squares to the total sum of squares. It is the proportion of the total variance in the discriminant scores not explained by differences among groups. A lambda of 1.00 occurs when observed group means are equal (all the variance is explained by factors other than the difference between those means), while a small lambda occurs when within-groups variability is small compared to the total variability. A small lambda indicates that group means appear to differ, and the associated significance value indicates whether the difference is significant. Here, the lambda of 0.977 has a non-significant value (Sig. = 0.484); thus, the group means do not appear to differ.
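With a single discriminant function, Wilks' Lambda is related to the eigenvalue by Λ = 1 / (1 + λ); a quick sketch (the small discrepancy arises because the eigenvalue is reported to only three decimals):

```python
# Eigenvalue from the discriminant analysis table above.
eigenvalue = 0.023

# Wilks' Lambda for a single function: 1 / (1 + eigenvalue).
wilks_lambda = 1 / (1 + eigenvalue)
print(round(wilks_lambda, 3))  # ~0.978, close to the reported .977
```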


Canonical Discriminant Function Coefficients


                                       Function 1
REGR factor score 1 for analysis 1        .322
REGR factor score 2 for analysis 1        .509
REGR factor score 3 for analysis 1        .669
REGR factor score 4 for analysis 1       -.432
REGR factor score 5 for analysis 1       -.085
(Constant)                               -.041

Unstandardized coefficients


A canonical discriminant function coefficient indicates the unstandardized weight given to each independent variable in the discriminant equation. Each subject's discriminant score is computed by entering his or her values for each variable into the equation. The canonical discriminant function coefficients for REGR factor score 2 and REGR factor score 3 are greater than 0.5.
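As an illustration of how a subject's discriminant score is computed, here is a sketch using the coefficients from the table above; the factor-score values for the subject are hypothetical, not taken from the data set:

```python
# Unstandardized coefficients from the table above (factor scores 1-5, constant).
coeffs = [0.322, 0.509, 0.669, -0.432, -0.085]
constant = -0.041

# Hypothetical factor scores for one subject (illustrative values only).
subject_scores = [0.5, -0.2, 1.1, 0.3, -0.8]

# Discriminant score = constant + sum of (coefficient * factor score).
score = constant + sum(c * x for c, x in zip(coeffs, subject_scores))
print(round(score, 3))
```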


Functions at Group Centroids


Where do you shop for RTE     Function 1
Local Convenience Store          .122
Super Markets                   -.187

Unstandardized canonical Discriminant functions evaluated at group means


'Functions at Group Centroids' indicates the average discriminant score for subjects in the two groups; more specifically, the discriminant scores for each group when the variable means are entered into the discriminant equation. The score for Local Convenience Store (0.122) is greater than the Super Markets score (-0.187).


Classification Statistics


Classification Processing Summary


Processed                                               510
Excluded   Missing or out-of-range group codes            0
           At least one missing discriminating variable 280
Used in Output                                          230

Prior Probabilities for Groups

                             Cases Used in Analysis
Where do you shop for RTE    Prior    Unweighted    Weighted
Local Convenience Store       .500        121        121.000
Super Markets                 .500         79         79.000
Total                        1.000        200        200.000

Classification Results(a)

                                           Predicted Group Membership
                                           Local Convenience   Super
Where do you shop for RTE                  Store               Markets    Total
Original  Count  Local Convenience Store        70                51        121
                 Super Markets                  30                49         79
                 Ungrouped cases                13                17         30
          %      Local Convenience Store      57.9              42.1      100.0
                 Super Markets                38.0              62.0      100.0
                 Ungrouped cases              43.3              56.7      100.0

59.5% of original grouped cases correctly classified.


Classification results are a simple summary of the number and percentage of subjects classified correctly and incorrectly. From the classification table above, we see that 59.5% of the original grouped cases are classified correctly. The leave-one-out classification, a cross-validation method, would normally be reported alongside these results.
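The 59.5% figure can be reproduced directly from the counts in the classification table; a minimal check:

```python
# Counts from the classification table above (original grouped cases only).
correct = 70 + 49     # correctly classified: diagonal of the count table
total = 121 + 79      # all grouped cases (ungrouped cases are excluded)

accuracy = correct / total
print(round(accuracy * 100, 1))  # 59.5
```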


Another Example report


Hypothesis testing:


In order to test whether there is a significant mean difference between Fiscal Incentives and the cost of utilities, we carry out an independent-samples t-test. The null and alternate hypotheses are given below:

Null Hypothesis: H0: μ1 = μ2


That is, there is no significant mean difference between the Fiscal Incentives and the cost of utilities


Alternate Hypothesis: H1: μ1 ≠ μ2


That is, there is a significant mean difference between the Fiscal Incentives and the cost of utilities


The output of the independent-samples t-test is given below:


t-Test: Two-Sample Assuming Equal Variances



                               Fiscal_incentives    Cost_of_utilities
Mean                               3.95                 3.316666667
Variance                           0.73697479           1.714005602
Observations                       120                  120
Pooled Variance                    1.225490196
Correlation Coefficient            0.687126
Hypothesized Mean Difference       0
df                                 238
t Stat                             4.43152344
P(T<=t) one-tail                   7.14202E-06
t Critical one-tail                1.651281164
P(T<=t) two-tail                   1.4284E-05
t Critical two-tail                1.969981476

From the output table above, we see that the value of the t statistic is 4.43152344 and its corresponding two-tailed p-value is 1.4284E-05.

Since the p-value of the test statistic is less than 0.05, there is sufficient evidence to reject the null hypothesis and conclude that there is a significant mean difference between the Fiscal Incentives and the cost of utilities.
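The t statistic in the output can be reproduced from the summary rows of the table (means, variances, and sample sizes) using the pooled-variance formula; a minimal Python sketch:

```python
import math

# Summary statistics from the t-test output above.
mean1, var1, n1 = 3.95, 0.73697479, 120          # Fiscal_incentives
mean2, var2, n2 = 3.316666667, 1.714005602, 120  # Cost_of_utilities

# Pooled variance: ((n1-1)*s1^2 + (n2-1)*s2^2) / (n1 + n2 - 2).
pooled_var = ((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)

# t statistic for the difference in means, assuming equal variances.
t_stat = (mean1 - mean2) / math.sqrt(pooled_var * (1 / n1 + 1 / n2))

print(round(pooled_var, 6))  # ~1.225490, matching the output
print(round(t_stat, 4))      # ~4.4315, matching the output
```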

Also, we see that the correlation coefficient between Fiscal Incentives and the cost of utilities is positive and equal to 0.687126, indicating a moderate positive linear relationship between the two variables.