Wednesday, October 26, 2011

Factor analysis

Factor analysis is a form of exploratory multivariate analysis that is used to either reduce the number of variables in a model or to detect relationships among variables. All variables involved in the factor analysis need to be interval and are assumed to be normally distributed. The goal of the analysis is to try to identify factors which underlie the variables. There may be fewer factors than variables, but there may not be more factors than variables.

The main applications of factor analytic techniques are:

(1) to reduce the number of variables and

(2) to detect structure in the relationships between variables, that is, to classify variables.

Therefore, factor analysis is applied as a data reduction or structure detection method (the term factor analysis was first introduced by Thurstone, 1931). The topics listed below will describe the principles of factor analysis, and how it can be applied towards these two purposes.
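
As a rough illustration of the data-reduction use, the sketch below fits a two-factor model to simulated interval data with scikit-learn's FactorAnalysis. The number of factors, the number of variables, and the data itself are assumptions made purely for this example.

    # A minimal sketch of factor analysis as a data-reduction step,
    # using scikit-learn's FactorAnalysis on synthetic data.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)

    # Simulate 6 observed interval variables driven by 2 latent factors.
    n_samples, n_factors, n_vars = 300, 2, 6
    latent = rng.normal(size=(n_samples, n_factors))
    loadings = rng.normal(size=(n_factors, n_vars))
    X = latent @ loadings + 0.5 * rng.normal(size=(n_samples, n_vars))

    fa = FactorAnalysis(n_components=n_factors, random_state=0)
    scores = fa.fit_transform(X)          # factor scores: the reduced representation
    print("Estimated loadings (factors x variables):")
    print(np.round(fa.components_, 2))    # which variables load on which factor

The loadings matrix is what would be inspected to "classify" variables: variables that load heavily on the same factor are treated as measuring a common underlying dimension.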

Multiple regression

Multiple regression is very similar to simple regression, except that in multiple regression you have more than one predictor variable in the equation.
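
A minimal sketch of the idea, assuming one outcome and two simulated predictors and using the statsmodels OLS routine; the variable names and coefficients are invented for illustration.

    # Multiple regression: one outcome, two predictors.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 200
    x1 = rng.normal(size=n)
    x2 = rng.normal(size=n)
    y = 2.0 + 1.5 * x1 - 0.8 * x2 + rng.normal(scale=1.0, size=n)

    X = sm.add_constant(np.column_stack([x1, x2]))  # intercept + two predictors
    model = sm.OLS(y, X).fit()
    print(model.params)    # estimated intercept and slopes
    print(model.rsquared)  # proportion of variance explained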

Non-parametric correlation

A Spearman correlation is used when one or both of the variables are not assumed to be normally distributed and interval (but are assumed to be ordinal). The values of the variables are converted into ranks and then correlated.
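
For illustration only, the sketch below runs a Spearman correlation with SciPy's spearmanr on two small made-up ordinal variables; the ranking step is handled internally by the function.

    # Spearman rank correlation with SciPy.
    from scipy import stats

    hours_studied = [2, 5, 1, 3, 8, 4, 7, 6]
    exam_rank     = [7, 3, 8, 6, 1, 5, 2, 4]

    rho, p_value = stats.spearmanr(hours_studied, exam_rank)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")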

Simple linear regression

Simple linear regression allows us to look at the linear relationship between one normally distributed interval predictor and one normally distributed interval outcome variable.
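
A minimal sketch, assuming simulated predictor and outcome variables, using SciPy's linregress to estimate the slope, intercept, and correlation.

    # Simple linear regression with SciPy.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    x = rng.normal(loc=50, scale=10, size=100)          # predictor
    y = 3.0 + 0.5 * x + rng.normal(scale=5, size=100)   # outcome

    result = stats.linregress(x, y)
    print(f"slope = {result.slope:.2f}, intercept = {result.intercept:.2f}")
    print(f"r = {result.rvalue:.2f}, p = {result.pvalue:.3g}")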

Correlation

A correlation is useful when you want to see the relationship between two (or more) normally distributed interval variables.
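
As an illustrative sketch, the snippet below computes a Pearson correlation with SciPy on a small invented height/weight dataset.

    # Pearson correlation between two interval variables.
    from scipy import stats

    height_cm = [160, 165, 170, 175, 180, 185, 190]
    weight_kg = [55, 60, 66, 70, 76, 82, 88]

    r, p_value = stats.pearsonr(height_cm, weight_kg)
    print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")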

Normal Distribution

[Figure: the standard normal curve, with mean 0 and standard deviation 1]
The normal distribution is a pattern for the distribution of a set of data that follows a bell-shaped curve. In many natural processes, random variation conforms to a particular probability distribution known as the normal distribution, which is the most commonly observed probability distribution. It is also known as the Gaussian distribution in the scientific community. A normal distribution is described by its mean μ and standard deviation σ. The density curve is symmetrical, centered about its mean, with its spread determined by its standard deviation.
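
For reference, the bell-shaped density curve described above can be written in the standard textbook form, with mean μ and standard deviation σ:

    f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left( -\frac{(x-\mu)^2}{2\sigma^2} \right), \qquad -\infty < x < \infty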

The shape of the curve is described as bell-shaped, with the graph falling off evenly on either side of the mean: 50% of the distribution lies to the left of the mean and 50% lies to the right. The spread of a normal distribution is controlled by the standard deviation; the smaller the standard deviation, the more concentrated the data. The mean and the median are the same in a normal distribution.


The standard normal curve, shown above, has mean 0 and standard deviation 1. If a dataset follows a normal distribution, then about 68% of the observations will fall within one standard deviation of the mean, which in this case is the interval (-1, 1). About 95% of the observations will fall within 2 standard deviations of the mean, which is the interval (-2, 2) for the standard normal, and about 99.7% of the observations will fall within 3 standard deviations of the mean, which corresponds to the interval (-3, 3) in this case. Although it may appear as if a normal distribution does not include any values beyond a certain interval, the density is actually positive for every real value of x. Data from any normal distribution may be transformed into data following the standard normal distribution by subtracting the mean μ and dividing by the standard deviation σ.
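
A small sketch that checks the 68-95-99.7 rule and the standardization step on simulated data; the chosen mean of 100 and standard deviation of 15 are arbitrary assumptions for the example.

    # Verify the empirical rule and the z-score transform on simulated data.
    import numpy as np

    rng = np.random.default_rng(3)
    data = rng.normal(loc=100, scale=15, size=100_000)

    z = (data - data.mean()) / data.std()   # transform to (approx.) standard normal

    for k in (1, 2, 3):
        share = np.mean(np.abs(z) < k)
        print(f"within {k} sd of the mean: {share:.1%}")  # ~68%, ~95%, ~99.7%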

The bell shaped curve has several characteristics:

  • The curve is concentrated in the center and decreases on either side. This means that the data have less of a tendency to produce unusually extreme values, compared to some other distributions.
  • The bell-shaped curve is symmetric. This tells you that the probability of a deviation from the mean is comparable in either direction.

Sunday, October 23, 2011

RESEARCH METHODOLOGY

Research methodology is the systematic study of the methods that are, can be, or have been applied within a discipline. It is a way to solve the research problem systematically. The term comprises two words, research and methodology. Research is defined as a human activity based on intellectual application in the investigation of matter. The primary purpose of applied research is discovering, interpreting, and developing methods and systems for the advancement of human knowledge on a wide variety of scientific matters of our world and the universe. Research may also be defined as a careful investigation or inquiry, especially through the search for new facts in any branch of knowledge. In short, it comprises defining and refining problems; formulating hypotheses or suggested solutions; collecting, organizing, and evaluating data; making deductions and reaching conclusions; and, lastly, carefully testing the conclusions to determine whether they fit the hypotheses.

Friday, October 21, 2011

Null hypothesis

The null hypothesis, H0, is an essential part of any research design and is always tested, at least indirectly. The simplistic definition of the null hypothesis is as the opposite of the alternative hypothesis, H1, although the principle is a little more complex than that. The null hypothesis is the hypothesis that the researcher tries to disprove, reject, or nullify. The null often represents the default or commonly accepted view of something, while the alternative hypothesis is what the researcher really thinks is the cause of a phenomenon. An experiment's conclusion always refers to the null: we reject or fail to reject H0 rather than H1.

t-test

A t-test is a very standard statistical test used to compare the means of two groups. The two-sample t-test simply tests whether or not two independent populations have different mean values on some measure. The null hypothesis, which is assumed to be true until proven wrong, is that there is really no difference between the two populations. In simple terms, the t-test compares the actual difference between two means in relation to the variation in the data (expressed as the standard error of the difference between the means).
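
A minimal sketch of a two-sample t-test with SciPy's ttest_ind on simulated groups; the group sizes, group means, and the 0.05 significance level are assumptions made for the example.

    # Independent two-sample t-test with SciPy.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    group_a = rng.normal(loc=50, scale=10, size=40)
    group_b = rng.normal(loc=55, scale=10, size=40)

    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    if p_value < 0.05:
        print("Reject H0: the group means appear to differ.")
    else:
        print("Fail to reject H0: no evidence of a difference in means.")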

Statistics Solutions

I provide statistical solutions for students, lecturers, and professors completing their dissertations, Ph.D. research, and projects, especially in Education and Psychology.

My Education: Ph.D. Economics, Ph.D. Education

Current Post: Asst. Professor of Economics

If you want statistics solutions, you can contact me at anu0562@gmail.com.


ANOVA ~ Analysis of Variance

ANCOVA ~ Analysis of Covariance