# The Hull, Comparison Data, and Parallel Analysis Engineering Essay

Correctly identifying the number of factors in an exploratory factor analysis is a crucial step that has been the subject of considerable controversy and misunderstanding. Various methods exist for determining the number of factors, each of which has been studied extensively. The result of most of these studies is that parallel analysis is the most accurate technique. More recently, however, two newer methods have been developed that have shown promise: the Hull method and the Comparison Data technique. The current study expanded on the previous studies of these methods to explore the accuracy of all three techniques as well as the factors that may cause bias in each. Overall, the Comparison Data technique and parallel analysis performed best. Factors affecting which method is the most accurate include the true number of factors, the number of variables per factor, and the interfactor correlations. Given that these factors are not typically known beforehand, it is recommended that the Comparison Data technique and parallel analysis be used in conjunction with each other to determine the number of factors in an exploratory factor analysis.

Keywords: exploratory factor analysis, number of factors, parallel analysis, Hull method, comparison data

## An Investigation of the Hull, Comparison Data, and Parallel Analysis Methods for Determining the Number of Factors

Correctly identifying the number of factors in an exploratory factor analysis is a crucial step, and one that remains an area of controversy and misunderstanding. Both under- and overfactoring can have severe effects on the results. Underfactoring can result in artificially complex factor loadings and difficult-to-interpret factors, whereas overfactoring may result in factors with little substantive and theoretical meaning. Determining the correct number of factors is critical to avoid these problems.

Various methods exist for determining the number of factors, such as the eigenvalue-greater-than-one rule, the scree plot, Bartlett's test for equality of eigenvalues, the maximum likelihood test, the Akaike information criterion, the Bayesian information criterion, Velicer's minimum average partial (MAP) procedure, and parallel analysis. To date, parallel analysis is regarded as the most accurate method. Recently, however, two newer methods were derived: the Hull method (Lorenzo-Seva, Timmerman, & Kiers, 2011) and the Comparison Data (CD) method (Ruscio & Roche, 2012).

## Parallel Analysis

Horn (1965) derived a method for determining the number of factors whereby the researcher compares the real eigenvalues with ones computed from principal components analyses of k random correlation matrices. These k matrices are identical to the real data with respect to the number of observations and the number of variables. This procedure is known as parallel analysis (PA). The logic behind it is that the real eigenvalues are inflated by sampling error. This sampling error is reflected in the eigenvalues computed from the random correlation matrices, which can be interpreted as the null distribution for the eigenvalues. Those eigenvalues in the real data that exceed the generated ones are likely due to a real effect as opposed to chance sampling variability. PA has been found to be insensitive to correlation matrices based on different distributional characteristics (Glorfeld, 1995).
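The comparison at the heart of PA can be sketched in a few lines. The following Python function is a minimal illustration only, not the SAS program used later in this study; it works with principal components eigenvalues, and the optional `percentile` argument anticipates the 95th percentile variant discussed below:

```python
import numpy as np

def parallel_analysis(data, k=100, percentile=None, seed=0):
    """Horn-style parallel analysis on an n x p data matrix.

    Retains eigenvalues of the real correlation matrix that exceed the
    mean (or a given percentile) of eigenvalues from k random data sets
    of the same size.
    """
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the real correlation matrix, sorted descending.
    real_eigs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    # Eigenvalues from k random correlation matrices of the same size.
    rand_eigs = np.empty((k, p))
    for i in range(k):
        r = rng.standard_normal((n, p))
        rand_eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    # Threshold: mean random eigenvalue, or e.g. the 95th percentile.
    if percentile is None:
        thresh = rand_eigs.mean(axis=0)
    else:
        thresh = np.percentile(rand_eigs, percentile, axis=0)
    # Count leading real eigenvalues that exceed their random counterpart.
    n_factors = 0
    for real, rand in zip(real_eigs, thresh):
        if real > rand:
            n_factors += 1
        else:
            break
    return n_factors
```

A one-factor data set with strong loadings, for example, should yield a count of 1 under both the mean and 95th percentile rules.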

PA generally yields the expected number of factors. Depending on the conditions, such as whether principal components or principal axis factoring is used, the interfactor correlations, cross-loadings, and sample size, PA may either under- or overextract (Cho, Li, & Bandalos, 2009; Crawford et al., 2010; Glorfeld, 1995; Humphreys & Montanelli, 1975; Timmerman & Lorenzo-Seva, 2011; Turner, 1998; Zwick & Velicer, 1986).

Several variations of the PA procedure exist. One such variation uses the 95th percentile, as opposed to the mean, with the aim of compensating for PA's tendency to overextract (Glorfeld, 1995). This is based on the logic that the mean eigenvalue is analogous to using an α = .50, whereas using the 95th percentile would be analogous to using an α = .05. It was believed that this was the reason PA tended to overextract when incorrect. This variation of the PA procedure involves generating a very large number of random correlation matrices in order to obtain the complete distribution for each eigenvalue extracted. Whether or not the 95th percentile outperforms the mean depends on the conditions of the data (Cho et al., 2009; Crawford et al., 2010; Timmerman & Lorenzo-Seva, 2011). Despite this, the 95th percentile has shown some positive results (Crawford et al., 2010; Green, Levy, Thompson, Lu, & Lo, 2012; Timmerman & Lorenzo-Seva, 2011).

Because Pearson correlations underestimate the strength of relationships among dichotomous and ordinal variables, which are used frequently in social science research, several studies have investigated the use of PA with tetrachoric and polychoric correlations (e.g., Cho et al., 2009). The findings of these studies are interesting in that they suggest that analyses of Pearson correlations on dichotomous and ordinal variables perform at least as well as those obtained from tetrachoric and polychoric correlations (Cho et al., 2009). A separate study, however, proposed a PA based on minimum rank factor analysis and suggested that using the 95th percentile with polychoric correlations had an advantage over Pearson-based PA, provided that convergence was reached (Timmerman & Lorenzo-Seva, 2011).

Turner (1998) found that the size of noise eigenvalues is affected by sample size, percentage of common variance, and the pattern of structure coefficients of the items across factors. In other words, the distribution of eigenvalues beyond the first one in a traditional PA is conditional on the presence of the preceding real factors. Therefore, a variation on PA has been suggested whereby the size and structure pattern of known real factors is taken into account (Green et al., 2012). This procedure involves a series of simulations, with each simulation testing the hypothesis that the next eigenvalue is due to chance. The first step in this procedure is to generate k comparison data sets with the same number of variables and subjects as the real data set. Next, a factor analysis is conducted on each of the comparison data sets to determine the kth + 1 factor for each data set. Third, a summary statistic is computed for this factor (e.g., the mean or 95th percentile). If this summary statistic is greater than that for the same factor in the real data set, the number of factors is inferred to be k. Otherwise, a fifth step is conducted whereby k is incremented by 1 and the first four steps are repeated, under the condition that the eigenvalue for the kth + 1 factor in the real data set is not less than 0 if principal axis factoring is used and not less than 1 if principal components is used. The overall results of this variation were positive. No method was consistently found to be the best across all conditions, but this new procedure was found to produce relatively high accuracy when used with principal axis factoring and the 95th percentile eigenvalue rule (Green et al., 2012).

## The Comparison Data (CD) Method

Recently, a technique was introduced by which one creates and analyzes comparison data with known factorial structure to determine the number of factors to retain (Ruscio & Roche, 2012). This CD technique attempts to expand upon and improve the performance of PA. The logic is similar, except that, instead of generating random data sets that only take sampling error into account, random data sets are generated with both sampling error and known factorial structures. These data sets are then used to determine which structure best reproduces the eigenvalue profile of the real data.

To carry out the CD technique, one needs to generate a finite population of comparison data. The GenData program can be used to this end; it reproduces the item correlation matrix for the target data set as well as the multivariate distribution of item responses (for more information, see Ruscio & Roche, 2012). The result is a set of factor loadings that reproduces the item correlation matrix for the target data set as well as possible. Next, a random sample of N cases is drawn from this population. Eigenvalues of the item correlation matrix for this sample are computed, and the root-mean-square residual (RMSR) is calculated by comparing these eigenvalues with those of the target data set. This is done k times to obtain a distribution of RMSR fit values. After this distribution is obtained, the number of factors in the structural model is increased by one and the process is repeated. Finally, the RMSR fit values for this new model are compared with those from the previous model using the Mann-Whitney U test with an alpha level of .30. If the new model provides significantly lower RMSR values, the number of factors in the structural model is again increased by one and the process repeated. The CD technique ends when the new model does not have significantly lower RMSR values.
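The stopping rule in the final step can be illustrated in isolation. This is a sketch only: it assumes the RMSR distributions have already been produced (generating proper comparison data requires the full GenData program), with `rmsr_samples[0]` holding fit values for a one-factor structure, `rmsr_samples[1]` for two factors, and so on:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def rmsr(eigs_a, eigs_b):
    """Root-mean-square residual between two eigenvalue profiles."""
    a, b = np.asarray(eigs_a, float), np.asarray(eigs_b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def cd_decision(rmsr_samples, alpha=0.30):
    """Mann-Whitney U stopping rule of the CD technique.

    Starting from one factor, move to k + 1 factors only while the
    (k + 1)-factor model's RMSR fit values are significantly lower
    (one-sided test at the technique's alpha = .30); otherwise stop.
    """
    k = 1
    while k < len(rmsr_samples):
        _, p = mannwhitneyu(rmsr_samples[k], rmsr_samples[k - 1],
                            alternative="less")
        if p < alpha:
            k += 1  # richer model fits significantly better; keep going
        else:
            break   # no significant improvement; retain k factors
    return k
```

The lenient alpha of .30 reflects the technique's preference for moving to a richer model on fairly weak evidence rather than underextracting.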

The CD technique was found to have the highest accuracy when compared to the Kaiser criterion, PA, the optimal coordinates and acceleration factor techniques for identifying the elbow of a scree plot, the MAP procedure, the Akaike information criterion, the Bayesian information criterion, and chi-square tests (Ruscio & Roche, 2012). Further, the CD technique rarely under- or overextracted by more than one factor when it erred (about one percent of the time). Overall, the CD technique's accuracy improved with fewer factors, more items, and larger sample sizes.

## The Hull Method

The Hull method is a newer technique that balances goodness-of-fit and degrees of freedom (Lorenzo-Seva et al., 2011). Goodness-of-fit values are plotted against the number of parameters (i.e., degrees of freedom) in a two-dimensional graph: the y-axis contains the goodness-of-fit values and the x-axis the number of parameters. The resulting curve is monotonically increasing. The method is named the Hull method because the idea is to find the upper convex hull of this curve. The number of factors is indicated by the sharpest elbow (i.e., a large jump followed by a small jump) in this convex hull. The method is essentially an extension of the scree test.

The Hull method is conducted in four main steps (Lorenzo-Seva et al., 2011). The first is to determine the range of factors to be considered, since the elbow of the convex hull depends on this range. For example, in order to detect a three-factor solution, two- and four-factor solutions need to be compared with the three-factor solution. This step also guards against under- and overextraction. Note that an elbow cannot exist at the first or last position of the convex hull; thus the lowest and highest numbers of factors in the range cannot be detected as solutions. Lorenzo-Seva et al. (2011) proposed that the number of factors from PA (with the 95th percentile rule) plus one be used as the highest number of factors to consider.

The second step is to pick a goodness-of-fit measure and evaluate the series of factor solutions with it. Any goodness-of-fit index, such as the CFI, RMSEA, SRMR, or the newer common part accounted for (CAF) index, can be used for this purpose, although the CFI has been the most successful so far (Lorenzo-Seva et al., 2011). The CAF index has also performed well and has the advantage that it can be applied in situations where the CFI cannot, such as with factor extraction methods other than ML and ULS. Third, the degrees of freedom are computed for the series of factor solutions.

Finally, the hull plot is created using the goodness-of-fit values and degrees of freedom for the series of factor solutions, and the elbow is located on the upper boundary of the convex hull. This requires a heuristic for locating the elbow. The only heuristic assessed with the Hull method is one that maximizes

$$ st_i = \frac{(f_i - f_{i-1}) \,/\, (df_i - df_{i-1})}{(f_{i+1} - f_i) \,/\, (df_{i+1} - df_i)} \qquad (1) $$

where f is the goodness-of-fit value and df the degrees of freedom, with the ith solution at the candidate elbow of the convex hull. The maximum value of st indicates that allowing for df_i degrees of freedom increases the fit of the model substantially, whereas allowing for more than df_i barely increases the fit at all.
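In other words, Equation 1 compares the fit gained per degree of freedom just before solution i with the fit gained just after it, and picks the solution where that ratio peaks. A minimal sketch, assuming the candidate solutions on the hull are given as parallel lists of fit values and degrees of freedom:

```python
import numpy as np

def hull_elbow(fit, df):
    """Index of the sharpest elbow among candidate hull solutions.

    st_i = ((f_i - f_{i-1}) / (df_i - df_{i-1})) /
           ((f_{i+1} - f_i) / (df_{i+1} - df_i))
    """
    fit, df = np.asarray(fit, float), np.asarray(df, float)
    st = np.full(len(fit), -np.inf)  # endpoints can never be elbows
    for i in range(1, len(fit) - 1):
        gain_before = (fit[i] - fit[i - 1]) / (df[i] - df[i - 1])
        gain_after = (fit[i + 1] - fit[i]) / (df[i + 1] - df[i])
        st[i] = gain_before / gain_after
    return int(np.argmax(st))

# CFI jumps sharply up to the second solution, then levels off, so
# hull_elbow([0.50, 0.90, 0.92, 0.93], [10, 20, 30, 40]) picks index 1.
```

Fixing the endpoints at negative infinity enforces the rule above that the lowest and highest numbers of factors in the range cannot be selected.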

So far, the results for the Hull method show promise. The Hull method has performed well when compared to the MAP test, Horn's PA, the AIC, and the BIC, although no method was consistently superior across the simulation conditions (i.e., number of major factors, number of measured variables per factor, sample size, level of interfactor correlation, and level of common variance). Compared to PA, the Hull method was more successful with larger sample sizes, when the number of variables per factor was large, when correlations among factors were low or moderate, and when there was a larger number of major factors. Lorenzo-Seva et al. (2011) recommended that, when the number of observed variables per factor is not small (i.e., 5 or less), the Hull-CFI method be applied with ML and ULS extraction methods and the Hull-CAF method with any other extraction method; otherwise, PA should be used. Unfortunately, the number of factors and, as a result, the number of variables per factor, is not typically known in advance.

## The Current Study

Because the Hull method and CD technique are new and have not been tested thoroughly, the current study aims to expand on the previous studies and investigate the performance of both of these methods as well as PA, which to date has been regarded as the most accurate method for determining the number of factors. Because the CD technique and Hull method have not been compared to each other, this study is exploratory. The purpose is to investigate which method may be most appropriate for determining the number of factors. The results should help researchers conducting exploratory factor analyses and inform future research on these methods.

## Method

A Monte Carlo simulation study was conducted to compare the performance of PA, the CD technique, and the Hull method. Because methods for determining the number of factors are influenced by the number of factors, the number of variables, the level of correlation among the factors, the sample size, and the magnitude of the factor coefficients (Cho et al., 2009; Crawford et al., 2010; Horn, 1965; Humphreys & Montanelli, 1975; Zwick & Velicer, 1986), the current study manipulated those factors. Following the range of parameters of previous studies (see Cho et al., 2009; Crawford et al., 2010; Glorfeld, 1995; Green et al., 2012; Lorenzo-Seva et al., 2011; Ruscio & Roche, 2012; Timmerman & Lorenzo-Seva, 2011), the number of factors was set to 1, 2, 4, or 8, with the number of variables per factor specified as 4, 6, or 8. Interfactor correlations were specified as 0, .3, and .5. Sample size was set to 250, 500, and 750. Finally, low factor loadings were randomly generated from a uniform distribution ranging from .3 to .5 (M = .4) and high factor loadings from a uniform distribution ranging from .7 to .9 (M = .8). The design was fully crossed, resulting in 180 total conditions. For each combination of conditions, 100 sample correlation matrices were generated.
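The population correlation matrix implied by one of these design cells can be sketched directly. The study itself generated raw data with Mplus; the hypothetical `population_corr` helper below only illustrates the Sigma = Lambda Phi Lambda' + Psi structure behind each cell:

```python
import numpy as np

def population_corr(n_factors, vars_per_factor, interfactor_r, load_range, seed=0):
    """Population correlation matrix for a simple-structure factor model."""
    rng = np.random.default_rng(seed)
    p = n_factors * vars_per_factor
    # Simple structure: each variable loads on exactly one factor, with
    # loadings drawn uniformly from load_range (e.g., (.3, .5) or (.7, .9)).
    lam = np.zeros((p, n_factors))
    for j in range(n_factors):
        rows = slice(j * vars_per_factor, (j + 1) * vars_per_factor)
        lam[rows, j] = rng.uniform(*load_range, vars_per_factor)
    # Interfactor correlation matrix Phi (0, .3, or .5 off the diagonal).
    phi = np.full((n_factors, n_factors), float(interfactor_r))
    np.fill_diagonal(phi, 1.0)
    sigma = lam @ phi @ lam.T
    # Unique variances bring the diagonal to 1 (standardized variables).
    np.fill_diagonal(sigma, 1.0)
    return sigma

# One of the 180 cells: 2 factors, 4 variables each, r = .3, low loadings.
sigma = population_corr(2, 4, 0.3, (0.3, 0.5))
```

Sample correlation matrices for a cell would then be obtained by drawing N multivariate-normal cases from such a population matrix.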

The data for each of these conditions were generated using the Mplus 6.11 program (Muthén & Muthén, 1998). The previously specified parameter values were used to generate the raw data, which were then transformed into matrices of Pearson correlations. Three methods were used to determine the number of factors: parallel analysis, the Comparison Data technique, and the Hull method. O'Connor's (2000) SAS program was used to generate random eigenvalues for PA based on the number of variables and sample size of each condition. These random eigenvalues were compared to those extracted from the data sets using unweighted least squares extraction with SMCs as the communalities. For the CD technique, a SAS version of the GenData program was used (Ruscio & Roche, 2012). Finally, a SAS program was created for the Hull method (see Appendix). This program utilized unweighted least squares extraction and the CFI index, which was found to have the highest accuracy.

Three statistics were computed to evaluate each method. The first was the accuracy rate, or the percentage of data sets for which the correct number of factors was determined. Second, bias was estimated by taking the mean deviation score (predicted number of factors minus actual number of factors). This quantifies the extent to which a method under- or overestimated the number of factors (0 represents no bias). Finally, precision was calculated in the same manner as bias, except using the absolute value of the deviation score (0 represents perfect precision). This was used to quantify the magnitude of each method's error.
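All three statistics reduce to simple functions of the deviation scores. A minimal sketch (the `evaluate` helper is illustrative, not the study's SAS code):

```python
import numpy as np

def evaluate(predicted, true):
    """Accuracy (% exactly correct), bias (mean deviation), and
    precision (mean absolute deviation) of estimated factor counts."""
    dev = np.asarray(predicted) - np.asarray(true)
    return {
        "accuracy": float(np.mean(dev == 0)),
        "bias": float(np.mean(dev)),               # 0 = no systematic error
        "precision": float(np.mean(np.abs(dev))),  # 0 = perfect precision
    }

# E.g., predicting [2, 3, 4, 4] when the truth is [2, 4, 4, 4] gives
# accuracy .75, bias -.25 (slight underextraction), precision .25.
```

Note that bias and precision can diverge: a method that over- and underextracts equally often has near-zero bias but poor precision.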

## Accuracy, Precision, and Bias

The accuracy, precision, and bias rates are presented in Tables 1, 2, 3, and 4 for the conditions where the true number of factors was 1, 2, 4, and 8, respectively. In general, the CD technique appears to be the most accurate and most precise, which follows the results of Ruscio and Roche (2012). The exceptions occur when there is more than one true underlying factor, the interfactor correlation is high, the number of variables per factor is low, and the magnitude of the pattern loadings is low. In these cases PA performs better, although the difference is often negligible. We found that precision was better for PA when PA outperformed the CD technique, which does not follow previous findings (Ruscio & Roche, 2012). Accuracy rates for PA were generally high and similar to those found in previous studies, although it should be noted that previous studies may have used different versions of PA or different computer software (Crawford et al., 2010; Green et al., 2012; Lorenzo-Seva et al., 2011; Ruscio & Roche, 2012; Timmerman & Lorenzo-Seva, 2011).

The Hull method consistently performed the worst. This does not agree with previous findings (Lorenzo-Seva et al., 2011). Hull method accuracy tended to increase with larger sample sizes and larger numbers of variables per factor. Its accuracy was better with high-magnitude pattern loadings when both the sample size and the number of variables per factor were large, but better with low-magnitude pattern loadings when both were small. These findings follow those of Lorenzo-Seva and colleagues (2011), where the accuracy rates for the Hull method were highest when the communalities, the number of variables per factor, and the sample size were higher. The Hull method does not consistently over- or underextract, although overextraction tends to be more likely as the number of true factors increases and underextraction tends to be more likely with smaller sample sizes. The Hull method had much lower accuracy rates when the true number of factors was more than one, and tended to have decent accuracy rates when there was a larger number of variables per factor. It should be noted that its accuracy is lower in the current study than previously found (Lorenzo-Seva et al., 2011).

On average, the CD technique has the best precision and the least bias. When incorrect, the Hull method and CD technique tended to underextract and PA tended to overextract. Ruscio and Roche (2012) also found the CD technique to underextract when incorrect. Overall, increases in sample size, magnitude of pattern loadings, and number of variables per factor tended to increase the accuracy of all three methods, whereas increases in the true number of factors and interfactor correlation decreased the accuracy of all three methods. Differences between the accuracy rates of the methods became more apparent as the true number of factors increased.

## Factors Influencing Each Method

ANOVAs were conducted to determine the conditions affecting each method for determining the number of factors. The dependent variable for each ANOVA was the difference between the number of factors obtained and the true number of factors (i.e., bias). Eta-squared (η²) was used, instead of statistical significance, for partitioning the explained variance, because it was anticipated that most effects would be significant due to the large sample sizes. Values of .01, .06, and .14 have been suggested to represent small, medium, and large effects, respectively (Cohen, 1988). Therefore, focus was on main effects and interactions explaining at least 1% of the variance in the model. Because main effects are not interpretable in the presence of higher-order interactions, only the interactions that are present will be interpreted. The main effects in each model were the true number of factors, number of variables, sample size, interfactor correlation, and magnitude of factor loadings. All interactions were included, up to the five-way interaction among all main effects. Table 5 contains the effects that explain at least 1% of the variance for each model.
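For a single design factor, this variance partitioning reduces to eta-squared as the ratio of the between-levels sum of squares to the total sum of squares. A one-way sketch (the full models above also partitioned variance across interaction terms):

```python
import numpy as np

def eta_squared(groups):
    """Eta-squared for a one-way effect: SS_effect / SS_total.

    `groups` holds the dependent variable (here, bias scores) observed
    at each level of the design factor.
    """
    values = np.concatenate([np.asarray(g, float) for g in groups])
    grand_mean = values.mean()
    ss_total = float(np.sum((values - grand_mean) ** 2))
    ss_effect = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    return float(ss_effect / ss_total)
```

A value of 1 means the factor's level means account for all variation in bias; 0 means the levels do not differ at all, which maps onto the small/medium/large benchmarks of .01, .06, and .14 cited above.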

The model for the Hull method explained 37% of the variance in bias. The effects explaining most of the bias can be seen in two three-way interactions: between true number of factors, interfactor correlation, and magnitude of pattern loadings, and between true number of factors, number of variables per factor, and magnitude of pattern loadings (see Figure 1). The differences in bias tend to become larger across conditions as the true number of factors increases. Magnitude of pattern loadings appears to have the biggest impact on this difference; larger magnitudes of pattern loadings tend to reduce overall bias. As interfactor correlation increases, bias becomes larger in the negative direction. Increases in the number of variables per factor increase bias in the positive direction. It has been found that accuracy is lowest for the Hull method when the interfactor correlation and the number of factors increase (Lorenzo-Seva et al., 2011), thus these results are not surprising. There was also a two-way interaction between true number of factors and sample size: smaller sample sizes appear to have a larger impact on bias (i.e., raising bias in the positive direction) when there is a larger number of true factors.

The model for the CD technique explained 79% of the variance in bias. All of the independent variables had an effect in this model, which can be explained by two four-way interactions: between true number of factors, number of variables per factor, interfactor correlation, and magnitude of loadings (see Figure 2), and between true number of factors, sample size, interfactor correlation, and magnitude of loadings (see Figure 3). As the true number of factors increases (beyond two), the effects of interfactor correlation, magnitude of pattern loadings, number of variables per factor, and sample size on bias become larger. Interfactor correlation tends to increase the bias in the negative direction. This is magnified when the magnitude of pattern loadings is small, the number of variables per factor decreases, and the sample size decreases. This fits with previous findings, which suggest that CD accuracy is lower when there are more factors, fewer variables per factor, and smaller sample sizes (Ruscio & Roche, 2012).

Finally, the model for PA explained 19% of the variance in bias. One four-way interaction sums up all of the effects: between true number of factors, sample size, interfactor correlation, and magnitude of loadings (see Figure 4). Bias for PA was more positive than for the other two methods. Most conditions did not affect the bias of PA very much until the true number of factors became large (i.e., 8 factors). In that case interfactor correlation tended to make the bias more negative, especially as sample size and magnitude of pattern loadings decreased. This fits with previous studies, which found that accuracy increases as sample size increases (Crawford et al., 2010; Green et al., 2012; Ruscio & Roche, 2012), as the number of variables per factor increases (Cho et al., 2009; Ruscio & Roche, 2012), and as the magnitude of loadings increases (Cho et al., 2009; Crawford et al., 2010), and decreases as interfactor correlation increases (Cho et al., 2009; Crawford et al., 2010; Green et al., 2012) and as the number of factors increases (Ruscio & Roche, 2012).

## Discussion

Several limitations should be noted. First, the current study was a simulation study. Even though conditions were chosen to mimic those that may occur in practice, caution should be taken when generalizing these results. Second, only one version of PA was investigated. Several different versions and programs exist for PA and may perform differently. Third, the generated data were continuous in nature. Data obtained in practice may have different distributions and may be ordinal in nature, both of which can potentially affect the performance of each method.

Much research in the social sciences involves ordinal data, which may have an impact on the performance of each method. Future research should replicate these findings and expand this study to investigate each of these limitations.

That being said, the results of the current study parallel those of previous studies. For instance, similar to Ruscio and Roche (2012), we found that the CD technique generally outperforms PA, and that when it does not, the differences are often negligible. It is difficult to say why, as the methods and software used in their study may have differed from ours. Differences between PA accuracy rates across the conditions very closely resembled those of previous studies (Crawford et al., 2010; Green et al., 2012; Lorenzo-Seva et al., 2011; Ruscio & Roche, 2012; Timmerman & Lorenzo-Seva, 2011). The Hull method generally performed worse than PA in our study, which is the opposite of previous research (Lorenzo-Seva et al., 2011). Accuracy of the Hull method was also found to be lower in the current study. It should be noted that its accuracy rates were similar to previous findings (Lorenzo-Seva et al., 2011) when the number of variables per factor was at least six and the magnitude of pattern loadings was high; accuracy dropped greatly otherwise. This too may be a function of simulation conditions and software. Interestingly, some of their sample sizes and numbers of variables per factor were much larger and their interfactor correlations were smaller. Future research should attempt to explore this further.

Many conditions, similar to those in previous studies, impacted the accuracy rates of each method: interfactor correlation and number of factors for the Hull method (Lorenzo-Seva et al., 2011); number of factors, number of variables per factor, and sample size for the CD technique (Ruscio & Roche, 2012); and interfactor correlation, number of factors, number of variables per factor, sample size, and magnitude of pattern loadings for PA (Cho et al., 2009; Crawford et al., 2010; Green et al., 2012; Ruscio & Roche, 2012). Some of these make sense. For instance, as interfactor correlation increases, the number of variables per factor decreases, and the magnitude of pattern loadings decreases, it becomes harder to extract well-defined factors with simple structure. Larger samples result in less sampling error, so accuracy should increase. The effect that does not make sense, and remains speculative, is that the number of true factors tends to influence how biased each method is. Ideally we would want a technique that can infer the true number of factors accurately regardless of how many there are. It is possible, however, that this is a function of how much variance is explained by the last factor. When extracting factors, each subsequent factor explains a lower percentage of the overall variance (before rotation). Therefore, the last factors may be harder for these methods to detect.

It appears that the CD technique and PA perform the best in most conditions. However, in an exploratory factor analysis the researcher does not know the true number of factors, the number of variables per factor, or the interfactor correlations, which are precisely the conditions that can affect which method is the most accurate. Therefore, it is recommended that both the CD technique and PA be used to determine the number of factors. This is not to say that scale development should be neglected: writing several well-performing items to measure specific constructs, obtaining large samples, and pilot testing remain the best ways to ensure that the number of factors extracted is accurate.

## References

Cho, S., Li, G., & Bandalos, D. (2009). Accuracy of the parallel analysis procedure with polychoric correlations. Educational and Psychological Measurement, 69(5), 748-759. doi:10.1177/0013164409332229

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.

Crawford, A. V., Green, S. B., Levy, R., Lo, W-J., Scott, L., Svetina, D., & Thompson, M. S. (2010). Evaluation of parallel analysis methods for determining the number of factors. Educational and Psychological Measurement, 70(6), 885-901. doi:10.1177/0013164410379332

Glorfeld, L. W. (1995). An improvement on Horn's parallel analysis methodology for selecting the correct number of factors to retain. Educational and Psychological Measurement, 55(3), 377-393. doi:10.1177/0013164495055003002

Green, S. B., Levy, R., Thompson, M. S., Lu, M., & Lo, W-J. (2012). A proposed solution to the problem with using completely random data to evaluate the number of factors with parallel analysis. Educational and Psychological Measurement, 72(3), 357-374. doi:10.1177/0013164411422252

Horn, J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30(2), 179-185. doi:10.1007/BF02289447

Humphreys, L. G., & Montanelli, R. G., Jr. (1975). An investigation of the parallel analysis criterion for determining the number of common factors. Multivariate Behavioral Research, 10(2), 193-205. doi:10.1207/s15327906mbr1002_5

Lorenzo-Seva, U., Timmerman, M. E., & Kiers, H. A. L. (2011). The Hull method for selecting the number of common factors. Multivariate Behavioral Research, 46(2), 340-364. doi:10.1080/00273171.2011.564527

Muthen, L. K., & Muthen, B. O. (1998). Mplus: The comprehensive modeling program for applied researchers: User's guide. Los Angeles: Author.

O'Connor, B. P. (2000). SPSS and SAS programs for determining the number of components using parallel analysis and Velicer's MAP test. Behavior Research Methods, Instrumentation, and Computers, 32, 396-402. doi:10.3758/BF03200807

Ruscio, J., & Roche, B. (2012). Determining the number of factors to retain in an exploratory factor analysis using comparison data of known factorial structure. Psychological Assessment, 24(2), 282-292. doi:10.1037/a0025697

Timmerman, M. E., & Lorenzo-Seva, U. (2011). Dimensionality assessment of ordered polytomous items with parallel analysis. Psychological Methods, 16(2), 209-220. doi:10.1037/a0023353

Turner, N. E. (1998). The effect of common variance and structure pattern on random data eigenvalues: Implications for the accuracy of parallel analysis. Educational and Psychological Measurement, 58(4), 541-568. doi:10.1177/0013164498058004001

Zwick, W. R., & Velicer, W. F. (1986). Comparison of five rules for determining the number of components to retain. Psychological Bulletin, 99(3), 432-442. doi:10.1037/0033-2909.99.3.432

## Table 1

Accuracy/Bias/Precision Rates when the Number of True Factors is One

| Condition | Hull | PA | CD |
| --- | --- | --- | --- |
| **Sample Size = 250** | 98.50% / 0.02 / 0.02 | 84.67% / 0.21 / 0.21 | 97.83% / 0.03 / 0.03 |
| *# Variables = 4* | 98.00% / 0.02 / 0.02 | 100.00% / 0.00 / 0.00 | 98.50% / 0.02 / 0.02 |
| Low Factor Saturation | 96.00% / 0.04 / 0.04 | 100.00% / 0.00 / 0.00 | 97.00% / 0.03 / 0.03 |
| High Factor Saturation | 100.00% / 0.00 / 0.00 | 100.00% / 0.00 / 0.00 | 100.00% / 0.00 / 0.00 |
| *# Variables = 6* | 98.50% / 0.02 / 0.02 | 77.50% / 0.23 / 0.23 | 97.50% / 0.03 / 0.03 |
| Low Factor Saturation | 97.00% / 0.03 / 0.03 | 75.00% / 0.25 / 0.25 | 95.00% / 0.06 / 0.06 |
| High Factor Saturation | 100.00% / 0.00 / 0.00 | 80.00% / 0.20 / 0.20 | 100.00% / 0.00 / 0.00 |
| *# Variables = 8* | 99.00% / 0.01 / 0.01 | 76.50% / 0.40 / 0.40 | 97.50% / 0.03 / 0.03 |
| Low Factor Saturation | 98.00% / 0.02 / 0.02 | 77.00% / 0.37 / 0.37 | 95.00% / 0.06 / 0.06 |
| High Factor Saturation | 100.00% / 0.00 / 0.00 | 76.00% / 0.42 / 0.42 | 100.00% / 0.00 / 0.00 |
| **Sample Size = 500** | 100.00% / 0.00 / 0.00 | 84.00% / 0.20 / 0.20 | 99.67% / 0.01 / 0.01 |
| *# Variables = 4* | 100.00% / 0.00 / 0.00 | 100.00% / 0.00 / 0.00 | 100.00% / 0.00 / 0.00 |
| Low Factor Saturation | 100.00% / 0.00 / 0.00 | 100.00% / 0.00 / 0.00 | 100.00% / 0.00 / 0.00 |
| High Factor Saturation | 100.00% / 0.00 / 0.00 | 100.00% / 0.00 / 0.00 | 100.00% / 0.00 / 0.00 |
| *# Variables = 6* | 100.00% / 0.00 / 0.00 | 69.50% / 0.31 / 0.31 | 99.50% / 0.01 / 0.01 |
| Low Factor Saturation | 100.00% / 0.00 / 0.00 | 65.00% / 0.35 / 0.35 | 99.00% / 0.02 / 0.02 |
| High Factor Saturation | 100.00% / 0.00 / 0.00 | 74.00% / 0.26 / 0.26 | 100.00% / 0.00 / 0.00 |
| *# Variables = 8* | 100.00% / 0.00 / 0.00 | 82.50% / 0.30 / 0.30 | 99.50% / 0.01 / 0.01 |
| Low Factor Saturation | 100.00% / 0.00 / 0.00 | 83.00% / 0.32 / 0.32 | 99.00% / 0.01 / 0.01 |
| High Factor Saturation | 100.00% / 0.00 / 0.00 | 82.00% / 0.28 / 0.28 | 100.00% / 0.00 / 0.00 |
| **Sample Size = 750** | 100.00% / 0.00 / 0.00 | 84.83% / 0.20 / 0.20 | 100.00% / 0.00 / 0.00 |
| *# Variables = 4* | 100.00% / 0.00 / 0.00 | 100.00% / 0.00 / 0.00 | 100.00% / 0.00 / 0.00 |
| Low Factor Saturation | 100.00% / 0.00 / 0.00 | 100.00% / 0.00 / 0.00 | 100.00% / 0.00 / 0.00 |
| High Factor Saturation | 100.00% / 0.00 / 0.00 | 100.00% / 0.00 / 0.00 | 100.00% / 0.00 / 0.00 |
| *# Variables = 6* | 100.00% / 0.00 / 0.00 | 75.50% / 0.25 / 0.25 | 100.00% / 0.00 / 0.00 |
| Low Factor Saturation | 100.00% / 0.00 / 0.00 | 78.00% / 0.22 / 0.22 | 100.00% / 0.00 / 0.00 |
| High Factor Saturation | 100.00% / 0.00 / 0.00 | 73.00% / 0.27 / 0.27 | 100.00% / 0.00 / 0.00 |
| *# Variables = 8* | 100.00% / 0.00 / 0.00 | 79.00% / 0.37 / 0.37 | 100.00% / 0.00 / 0.00 |
| Low Factor Saturation | 100.00% / 0.00 / 0.00 | 80.00% / 0.33 / 0.33 | 100.00% / 0.00 / 0.00 |
| High Factor Saturation | 100.00% / 0.00 / 0.00 | 78.00% / 0.40 / 0.40 | 100.00% / 0.00 / 0.00 |

## Table 2

Accuracy/Bias/Precision Rates when the Number of True Factors is Two

| Condition | Hull | PA | CD |
| --- | --- | --- | --- |
| **Sample Size = 250** | 90.75% / -0.01 / 0.09 | 65.13% / -0.07 / 0.45 | 88.50% / 0.02 / 0.15 |
| *# Variables = 4* | 87.75% / -0.05 / 0.12 | 53.50% / -0.40 / 0.47 | 85.25% / -0.07 / 0.16 |
| Interfactor Correlation = 0 | 96.00% / 0.04 / 0.04 | 61.50% / -0.30 / 0.39 | 95.50% / 0.04 / 0.06 |
| · Low Factor Saturation | 92.00% / 0.08 / 0.08 | 59.00% / -0.33 / 0.41 | 91.00% / 0.07 / 0.11 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 64.00% / -0.26 / 0.36 | 100.00% / 0.00 / 0.00 |
| Interfactor Correlation = .5 | 79.50% / -0.15 / 0.21 | 45.50% / -0.51 / 0.55 | 75.00% / -0.17 / 0.27 |
| · Low Factor Saturation | 59.00% / -0.29 / 0.41 | 27.00% / -0.65 / 0.73 | 50.00% / -0.34 / 0.54 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 64.00% / -0.36 / 0.36 | 100.00% / 0.00 / 0.00 |
| *# Variables = 8* | 93.75% / 0.04 / 0.06 | 76.75% / 0.26 / 0.43 | 91.75% / 0.12 / 0.14 |
| Interfactor Correlation = 0 | 94.50% / 0.06 / 0.06 | 87.50% / 0.29 / 0.32 | 92.50% / 0.13 / 0.13 |
| · Low Factor Saturation | 89.00% / 0.11 / 0.11 | 82.00% / 0.28 / 0.34 | 85.00% / 0.25 / 0.25 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 93.00% / 0.29 / 0.29 | 100.00% / 0.00 / 0.00 |
| Interfactor Correlation = .5 | 93.00% / 0.03 / 0.07 | 66.00% / 0.23 / 0.55 | 91.00% / 0.11 / 0.15 |
| · Low Factor Saturation | 86.00% / 0.06 / 0.14 | 39.00% / 0.40 / 0.98 | 82.00% / 0.21 / 0.29 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 93.00% / 0.06 / 0.12 | 100.00% / 0.00 / 0.00 |
| **Sample Size = 750** | 99.00% / -0.01 / 0.01 | 76.75% / -0.14 / 0.26 | 99.25% / 0.00 / 0.01 |
| *# Variables = 4* | 98.00% / -0.02 / 0.02 | 60.75% / -0.34 / 0.39 | 99.00% / 0.00 / 0.01 |
| Interfactor Correlation = 0 | 100.00% / 0.00 / 0.00 | 67.50% / -0.28 / 0.33 | 99.50% / 0.01 / 0.01 |
| · Low Factor Saturation | 100.00% / 0.00 / 0.00 | 68.00% / -0.26 / 0.32 | 99.00% / 0.01 / 0.01 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 67.00% / -0.29 / 0.33 | 100.00% / 0.00 / 0.00 |
| Interfactor Correlation = .5 | 96.00% / -0.04 / 0.04 | 54.00% / -0.41 / 0.46 | 98.50% / -0.01 / 0.02 |
| · Low Factor Saturation | 92.00% / -0.08 / 0.08 | 40.00% / -0.52 / 0.60 | 97.00% / -0.01 / 0.03 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 68.00% / -0.30 / 0.32 | 100.00% / 0.00 / 0.00 |
| *# Variables = 8* | 100.00% / 0.00 / 0.00 | 92.75% / 0.07 / 0.13 | 99.50% / 0.01 / 0.01 |
| Interfactor Correlation = 0 | 100.00% / 0.00 / 0.00 | 95.50% / 0.07 / 0.09 | 99.50% / 0.01 / 0.01 |
| · Low Factor Saturation | 100.00% / 0.00 / 0.00 | 94.00% / 0.05 / 0.09 | 99.00% / 0.01 / 0.01 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 97.00% / 0.08 / 0.08 | 100.00% / 0.00 / 0.00 |
| Interfactor Correlation = .5 | 100.00% / 0.00 / 0.00 | 90.00% / 0.07 / 0.17 | 99.50% / 0.01 / 0.01 |
| · Low Factor Saturation | 100.00% / 0.00 / 0.00 | 83.00% / 0.14 / 0.30 | 99.00% / 0.01 / 0.01 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 97.00% / 0.00 / 0.04 | 100.00% / 0.00 / 0.00 |

## Table 3

Accuracy/Bias/Precision Rates when the Number of True Factors is Four

| Condition | Hull | PA | CD |
| --- | --- | --- | --- |
| **Sample Size = 250** | 76.50% / -0.17 / 0.34 | 59.50% / 0.11 / 1.32 | 76.88% / -0.05 / 0.39 |
| *# Variables = 4* | 70.75% / -0.30 / 0.47 | 57.75% / -0.50 / 0.96 | 73.50% / -0.16 / 0.48 |
| Interfactor Correlation = 0 | 85.50% / 0.15 / 0.16 | 68.00% / 0.14 / 0.52 | 86.00% / 0.20 / 0.23 |
| · Low Factor Saturation | 71.00% / 0.30 / 0.32 | 49.00% / 0.21 / 0.75 | 72.00% / 0.39 / 0.45 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 87.00% / 0.07 / 0.29 | 100.00% / 0.00 / 0.00 |
| Interfactor Correlation = .5 | 56.00% / -0.76 / 0.78 | 47.50% / -1.14 / 1.39 | 61.00% / -0.52 / 0.74 |
| · Low Factor Saturation | 12.00% / -1.51 / 1.55 | 9.00% / -2.21 / 2.43 | 22.00% / -1.04 / 1.48 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 86.00% / -0.07 / 0.35 | 100.00% / 0.00 / 0.00 |
| *# Variables = 8* | 82.25% / -0.05 / 0.22 | 61.25% / 0.71 / 1.69 | 80.25% / 0.07 / 0.31 |
| Interfactor Correlation = 0 | 89.50% / 0.13 / 0.13 | 76.50% / 1.21 / 1.21 | 87.50% / 0.22 / 0.22 |
| · Low Factor Saturation | 79.00% / 0.25 / 0.25 | 55.00% / 2.31 / 2.31 | 75.00% / 0.44 / 0.44 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 98.00% / 0.11 / 0.11 | 100.00% / 0.00 / 0.00 |
| Interfactor Correlation = .5 | 75.00% / -0.22 / 0.32 | 46.00% / 0.21 / 2.16 | 73.00% / -0.08 / 0.39 |
| · Low Factor Saturation | 50.00% / -0.43 / 0.63 | 0.00% / -0.01 / 3.89 | 46.00% / -0.16 / 0.78 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 92.00% / 0.43 / 0.43 | 100.00% / 0.00 / 0.00 |
| **Sample Size = 750** | 92.88% / -0.09 / 0.09 | 70.88% / -0.22 / 0.92 | 96.00% / 0.00 / 0.04 |
| *# Variables = 4* | 85.75% / -0.19 / 0.19 | 69.50% / -0.48 / 0.78 | 95.25% / -0.04 / 0.05 |
| Interfactor Correlation = 0 | 100.00% / 0.00 / 0.00 | 83.50% / 0.08 / 0.30 | 99.50% / 0.01 / 0.01 |
| · Low Factor Saturation | 100.00% / 0.00 / 0.00 | 74.00% / 0.19 / 0.45 | 99.00% / 0.01 / 0.01 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 93.00% / -0.03 / 0.15 | 100.00% / 0.00 / 0.00 |
| Interfactor Correlation = .5 | 71.50% / -0.37 / 0.37 | 55.50% / -1.03 / 1.25 | 91.00% / -0.08 / 0.10 |
| · Low Factor Saturation | 43.00% / -0.74 / 0.74 | 17.00% / -1.93 / 2.33 | 82.00% / -0.15 / 0.19 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 94.00% / -0.13 / 0.17 | 100.00% / 0.00 / 0.00 |
| *# Variables = 8* | 100.00% / 0.00 / 0.00 | 72.25% / 0.04 / 1.07 | 96.75% / 0.03 / 0.03 |
| Interfactor Correlation = 0 | 100.00% / 0.00 / 0.00 | 92.00% / 0.44 / 0.44 | 97.00% / 0.03 / 0.03 |
| · Low Factor Saturation | 100.00% / 0.00 / 0.00 | 85.00% / 0.82 / 0.82 | 94.00% / 0.06 / 0.06 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 99.00% / 0.06 / 0.06 | 100.00% / 0.00 / 0.00 |
| Interfactor Correlation = .5 | 100.00% / 0.00 / 0.00 | 52.50% / -0.37 / 1.71 | 96.50% / 0.04 / 0.04 |
| · Low Factor Saturation | 100.00% / 0.00 / 0.00 | 8.00% / -0.94 / 3.20 | 93.00% / 0.07 / 0.07 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 97.00% / 0.21 / 0.21 | 100.00% / 0.00 / 0.00 |

## Table 4

Accuracy/Bias/Precision Rates when the Number of True Factors is Eight

| Condition | Hull | PA | CD |
| --- | --- | --- | --- |
| **Sample Size = 250** | 52.25% / -1.00 / 1.43 | 45.25% / 2.35 / 5.12 | 57.00% / -0.15 / 1.57 |
| *# Variables = 4* | 43.50% / -1.33 / 1.80 | 47.75% / -0.56 / 2.67 | 55.50% / -0.19 / 1.90 |
| Interfactor Correlation = 0 | 64.00% / 0.38 / 0.57 | 53.50% / 1.44 / 1.69 | 60.50% / 1.12 / 1.28 |
| · Low Factor Saturation | 28.00% / 0.75 / 1.13 | 12.00% / 2.63 / 3.13 | 21.00% / 2.23 / 2.55 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 95.00% / 0.25 / 0.25 | 100.00% / 0.00 / 0.00 |
| Interfactor Correlation = .5 | 23.00% / -3.03 / 3.03 | 42.00% / -2.56 / 3.64 | 50.50% / -1.49 / 2.53 |
| · Low Factor Saturation | 0.00% / -5.39 / 5.39 | 3.00% / -4.82 / 6.18 | 1.00% / -2.97 / 5.05 |
| · High Factor Saturation | 46.00% / -0.67 / 0.67 | 81.00% / -0.30 / 1.10 | 100.00% / 0.00 / 0.00 |
| *# Variables = 8* | 61.00% / -0.66 / 1.07 | 42.75% / 5.27 / 7.58 | 58.50% / -0.12 / 1.24 |
| Interfactor Correlation = 0 | 71.00% / 0.40 / 0.41 | 44.00% / 8.74 / 8.81 | 65.50% / 0.87 / 0.88 |
| · Low Factor Saturation | 42.00% / 0.79 / 0.81 | 0.00% / 15.46 / 15.60 | 31.00% / 1.74 / 1.76 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 88.00% / 2.02 / 2.02 | 100.00% / 0.00 / 0.00 |
| Interfactor Correlation = .5 | 51.00% / -1.72 / 1.73 | 41.50% / 1.80 / 6.35 | 51.50% / -1.11 / 1.60 |
| · Low Factor Saturation | 2.00% / -3.44 / 3.46 | 0.00% / 0.74 / 9.84 | 3.00% / -2.22 / 3.20 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 83.00% / 2.85 / 2.85 | 100.00% / 0.00 / 0.00 |
| **Sample Size = 750** | 83.88% / -0.55 / 0.57 | 60.13% / 0.10 / 3.17 | 73.25% / 0.11 / 0.37 |
| *# Variables = 4* | 73.00% / -1.04 / 1.08 | 58.50% / -0.92 / 2.26 | 71.50% / -0.08 / 0.44 |
| Interfactor Correlation = 0 | 96.00% / 0.04 / 0.04 | 68.50% / 1.16 / 1.16 | 83.00% / 0.23 / 0.23 |
| · Low Factor Saturation | 92.00% / 0.08 / 0.08 | 38.00% / 2.30 / 2.30 | 66.00% / 0.45 / 0.45 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 99.00% / 0.02 / 0.02 | 100.00% / 0.00 / 0.00 |
| Interfactor Correlation = .5 | 50.00% / -2.11 / 2.11 | 48.50% / -3.01 / 3.37 | 60.00% / -0.38 / 0.66 |
| · Low Factor Saturation | 0.00% / -4.22 / 4.22 | 1.00% / -6.10 / 6.64 | 20.00% / -0.75 / 1.31 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 96.00% / 0.09 / 0.09 | 100.00% / 0.00 / 0.00 |
| *# Variables = 8* | 94.75% / -0.06 / 0.06 | 61.75% / 1.13 / 4.07 | 75.00% / 0.29 / 0.29 |
| Interfactor Correlation = 0 | 100.00% / 0.00 / 0.00 | 75.50% / 3.65 / 3.65 | 74.00% / 0.31 / 0.31 |
| · Low Factor Saturation | 100.00% / 0.00 / 0.00 | 53.00% / 6.87 / 6.87 | 48.00% / 0.62 / 0.62 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 98.00% / 0.42 / 0.42 | 100.00% / 0.00 / 0.00 |
| Interfactor Correlation = .5 | 89.50% / -0.11 / 0.11 | 48.00% / -1.39 / 4.50 | 76.00% / 0.27 / 0.27 |
| · Low Factor Saturation | 79.00% / -0.22 / 0.22 | 0.00% / -3.51 / 8.25 | 52.00% / 0.54 / 0.54 |
| · High Factor Saturation | 100.00% / 0.00 / 0.00 | 96.00% / 0.74 / 0.74 | 100.00% / 0.00 / 0.00 |

## Table 5

ANOVA Effect Sizes (eta-squared)

| Effect | Hull Bias | PA Bias | CD Bias |
| --- | --- | --- | --- |
| factors | 0.0181 | 0.0641 | 0.0084 |
| variables | 0.0267 | 0.0175 | 0.0047 |
| factors * variables | 0.0283 | 0.0192 | 0.0040 |
| sample | 0.0055 | 0.0043 | 0.0000 |
| factors * sample | 0.0120 | 0.0076 | 0.0015 |
| variables * sample | 0.0038 | 0.0001 | 0.0012 |
| factors * variables * sample | 0.0060 | 0.0001 | 0.0025 |
| correlation | 0.0463 | 0.1010 | 0.0200 |
| factors * correlation | 0.0462 | 0.1099 | 0.0221 |
| variables * correlation | 0.0014 | 0.0208 | 0.0021 |
| factors * variables * correlation | 0.0028 | 0.0164 | 0.0013 |
| sample * correlation | 0.0004 | 0.0194 | 0.0140 |
| factors * sample * correlation | 0.0005 | 0.0158 | 0.0129 |
| variables * sample * correlation | 0.0002 | 0.0014 | 0.0006 |
| factors * variables * sample * correlation | 0.0005 | 0.0026 | 0.0010 |
| magnitude | 0.0003 | 0.0365 | 0.0019 |
| factors * magnitude | 0.0073 | 0.0580 | 0.0084 |
| variables * magnitude | 0.0124 | 0.0147 | 0.0047 |
| factors * variables * magnitude | 0.0175 | 0.0150 | 0.0040 |
| sample * magnitude | 0.0021 | 0.0030 | 0.0000 |
| factors * sample * magnitude | 0.0049 | 0.0050 | 0.0015 |
| variables * sample * magnitude | 0.0022 | 0.0003 | 0.0012 |
| factors * variables * sample * magnitude | 0.0036 | 0.0011 | 0.0025 |
| correlation * magnitude | 0.0522 | 0.0935 | 0.0200 |
| factors * correlation * magnitude | 0.0571 | 0.0991 | 0.0221 |
| variables * correlation * magnitude | 0.0037 | 0.0166 | 0.0021 |
| factors * variables * correlation * magnitude | 0.0054 | 0.0117 | 0.0013 |
| sample * correlation * magnitude | 0.0006 | 0.0156 | 0.0140 |
| factors * sample * correlation * magnitude | 0.0009 | 0.0117 | 0.0129 |
| variables * sample * correlation * magnitude | 0.0013 | 0.0034 | 0.0006 |
| factors * variables * sample * correlation * magnitude | 0.0018 | 0.0065 | 0.0010 |

Note. factors = true number of factors; variables = number of variables per factor; correlation = interfactor correlation; magnitude = magnitude of pattern loadings; sample = sample size.
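The eta-squared values in Table 5 are the ratio of between-condition to total sum of squares in each method's bias across simulated datasets. A generic sketch of the computation for a single design factor (the function and variable names are illustrative, not from the study's code):

```python
import numpy as np

def eta_squared(outcome, groups):
    """Eta-squared for one design factor: SS_between / SS_total, where
    `groups` assigns each observation (e.g., one simulated dataset's
    bias) to a level of that factor."""
    outcome = np.asarray(outcome, dtype=float)
    groups = np.asarray(groups)
    grand_mean = outcome.mean()
    ss_total = ((outcome - grand_mean) ** 2).sum()
    ss_between = 0.0
    for level in np.unique(groups):
        vals = outcome[groups == level]
        ss_between += len(vals) * (vals.mean() - grand_mean) ** 2
    return ss_between / ss_total
```

An eta-squared of 1 means the condition fully determines the outcome; a value such as 0.1099 above indicates that roughly 11% of the variance in PA's bias is associated with the factors-by-correlation interaction.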