The Test of Performance Strategies (TOPS 2): Development and Validation of TOPS 2 Short Form (8 June 2020)


The Test of Performance Strategies (TOPS 2): Development and validation of TOPS 2 short
form

Vijay Kumar, Herbert W. Marsh, Chris Lonsdale and Jiesi Guo

Australian Catholic University

 
 

 

Abstract

Purpose: The Test of Performance Strategies, TOPS 2 (Hardy, Roberts, Thomas, & Murphy, 2010), may be perceived as too long, especially when used in conjunction with a battery of other instruments (Marsh, Martin, & Jackson, 2010). Therefore, the purpose of this study was to examine the psychometric properties of the refined TOPS 2 and to develop a robust, reliable, and valid short form of this instrument.

Design and Method: The recommended criteria (see Marsh et al., 2010) were applied in selecting items for the TOPS 2 short form (TOPS 2-S). A minimum of three items per factor, as recommended by Kline (2005) and Marsh et al. (2010), was adopted. Confirmatory factor analysis (CFA) and exploratory structural equation modelling (ESEM) were conducted with Mplus using maximum likelihood estimation to investigate the factor structures of the TOPS 2 and the TOPS 2-S.

Results and conclusion: The TOPS 2 and TOPS 2-S reliability estimates were consistently high for all factors. The CFA and ESEM fit indices met the cut-off criteria advocated by Hu and Bentler (1999). The multitrait-multimethod (MTMM) analysis showed that the TOPS 2-S met the criteria for convergent and discriminant validity. The Test of Performance Strategies Short Form (TOPS 2-S) is available for research purposes from the author, Dr Vijay Kumar, via email: [email protected] or [email protected].

 
 

 

The present study critically examined the psychometric properties of the Test of Performance

Strategies (TOPS 2), a popular measure of the mental skills and strategies used by athletes in

competition and during practice. The main aim of this research was to develop a robust,

reliable, and valid short form of TOPS 2.

Test of Performance Strategies (TOPS)

TOPS 2 (version 3) is a 68-item self-report questionnaire designed to measure athletes’ use of a range of psychological skills during practice and in competition (Hardy et al., 2010). It has 17 factors, with items designed to measure eight practice and nine competition skills. The practice skills are goal setting, imagery, attention control, self-talk, activation, emotional control, automaticity, and relaxation; the competition scale measures the same eight skills plus an additional negative thinking subscale. All subscales have four items each.

Lane, Harwood, Terry, and Karageorghis (2004) investigated the original TOPS and found it lacking in psychometric quality (see supplementary notes). This prompted Hardy et al. (2010) to develop TOPS 2. Hardy and colleagues examined the factor structure of the questionnaire with 220 Australian, 120 North American, and 225 British athletes from 48 different sports across a wide range of ability levels. They found the initial fit for both the nine-factor competition and eight-factor practice subscales was acceptable (CFI, TLI ≥ .95; RMSEA ≤ .06). Further refinement of TOPS 2 (version 3) improved the fit of the competition model, while the improvement for the practice model was substantial (see Hardy et al., 2010). Cronbach’s alphas for competition ranged from .62 (Automaticity) to .89 (Emotional Control), while those for practice ranged from .71 (Activation) to .85 (Relaxation).



 

The present investigation

The purpose of the present study was to develop a robust, reliable, and valid short form of TOPS 2. The study followed the construct validity approach suggested by Marsh et al. (2010). Four basic guidelines were adopted to evaluate the TOPS 2 and its short form (TOPS 2-S): (a) a strong original instrument is a requirement for developing a short form; (b) the short form should retain the content coverage of each factor; (c) each factor on the short form must be adequately reliable; and (d) the short form must retain the factor structure of the original instrument.

This study used confirmatory factor analysis (CFA) and exploratory structural equation modelling (ESEM) to test the structural integrity of the TOPS 2 and TOPS 2-S. The main focus was on the application of CFA; however, some CFA results were compared with ESEM, a newer and evolving statistical procedure. ESEM provides confirmatory tests of a priori factor structures, relations between latent factors, and multigroup/multioccasion tests of full measurement invariance (Marsh, Liem, Martin, Morin, & Nagengast, 2011; Marsh, Morin, Parker, & Kaur, 2014). ESEM integrates the best features of exploratory factor analysis (EFA), CFA, and structural equation modelling (SEM) (Marsh, Nagengast, Morin, Parada, Craven, & Hamilton, 2011). Marsh and colleagues (2014) emphasized that, when there is a well-defined a priori factor structure, ESEM can be used as a confirmatory tool.

Method

Sample.

Existing data from the TOPS 2 instrument (version 3) were obtained from the original authors of TOPS (Thomas, Murphy, & Hardy, 1999). The data came from a sample of 538 participants, 286 males (53.2%) and 252 females (46.8%), from diverse sports groups.

 

Criteria applied in the selection of items for the TOPS 2-S.

Five goals were established for selecting items for the TOPS 2-S (see Marsh et al., 2010). The items selected for each subscale were based on the confirmatory factor analysis, following the guidelines listed below (see supplementary material for further explanation). Items retained:

1. Had the highest factor loadings within each subscale in the CFA (this also matched the corrected item-total correlations from the reliability procedure).

2. Had low cross-loadings, as indicated by Mplus’ modification indices. Cross-loadings indicate how much fit would improve if an item were allowed to load onto a factor other than the one it is intended to measure.

3. Had low correlated uniquenesses (CUs) with other items in the same subscale, as indicated by Mplus’ modification indices. If more than one item had high correlated uniqueness, the item with the higher CU was dropped.

4. Maintained a coefficient α estimate of reliability of at least .70.

5. Maintained the breadth of content of the construct (based on the subjective judgement of the researchers).
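Criterion 1 above (retaining the highest-loading items within each subscale) can be sketched in a few lines. The item names and loadings below are hypothetical, not the actual TOPS 2 estimates:

```python
import pandas as pd

# Hypothetical standardized CFA loadings for one subscale (illustrative values,
# not the actual TOPS 2 estimates).
loadings = pd.DataFrame({
    "item": ["gs1", "gs2", "gs3", "gs4"],
    "factor": ["goal_setting"] * 4,
    "loading": [0.81, 0.74, 0.68, 0.59],
})

def pick_items(df: pd.DataFrame, n_keep: int = 3) -> pd.DataFrame:
    """Keep the n_keep highest-loading items within each factor (criterion 1)."""
    return (df.sort_values("loading", ascending=False)
              .groupby("factor", as_index=False)
              .head(n_keep))

kept = pick_items(loadings)
print(kept["item"].tolist())  # the three strongest items: ['gs1', 'gs2', 'gs3']
```

In practice the remaining criteria (cross-loadings, correlated uniquenesses, alpha, content breadth) would be checked against the Mplus output before an item is finally dropped.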

Statistical analysis to refine the number of items

Researchers (Hu & Bentler, 1999; Marsh, Ellis, Parada, Richards, & Heubeck, 2005; Marsh et al., 2011) have suggested a number of fit statistics derived from the minimised discrepancy function. In this study, emphasis was placed on the RMSEA and on the incremental fit indices, the Tucker-Lewis index (TLI) and the comparative fit index (CFI), to evaluate goodness of fit. RMSEA values of .05 or less reflect a close-fitting model, while values between .05 and .08 indicate reasonable fit (Marsh et al., 2011). The TLI and CFI lie between zero and one; values exceeding .95 are typically taken to reflect excellent fit, and values greater than .90 acceptable fit (Marsh et al., 2010). Marsh, Hau, and Wen (2004) cautioned that such cut-off values should not be treated as golden rules but rather as the basis of preliminary interpretations that must be considered in relation to the specific details of the research.
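The indices discussed above can be computed directly from the chi-square statistics using common maximum likelihood formulas, as sketched below. The rmsea call uses the TOPS 2-S competition CFA values from Table 2 (chi-square = 531, df = 288, N = 538); the null-model values passed to cfi and tli are illustrative only, as the independence-model chi-squares are not reported here:

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation (one common ML formula)."""
    return math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))

def cfi(chi2, df, chi2_null, df_null):
    """Comparative fit index, relative to the independence (null) model."""
    d_model = max(chi2 - df, 0)
    d_null = max(chi2_null - df_null, d_model)
    return 1 - d_model / d_null

def tli(chi2, df, chi2_null, df_null):
    """Tucker-Lewis (non-normed) fit index."""
    return ((chi2_null / df_null) - (chi2 / df)) / ((chi2_null / df_null) - 1)

# TOPS 2-S competition CFA values from Table 2; reproduces the reported .040.
print(round(rmsea(531.0, 288, 538), 3))  # prints 0.04
```

Note that software packages differ slightly in these formulas (e.g., N versus N - 1 in the RMSEA denominator), so small discrepancies from printed output are expected.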

The CFA and ESEM were conducted with Mplus (version 7.11) using maximum likelihood estimation to investigate the factor structure of the 51-item TOPS 2-S and to compare it with the factor structure of the 68-item TOPS 2. CFA was the primary method used to develop the TOPS 2 short form. Marsh et al. (2014) argued that current CFA standards are too restrictive, and hence many psychological instruments used in applied research do not meet the minimum criteria of acceptable fit. They recommend using ESEM in conjunction with CFA to obtain more definitive results.

In ESEM, multiple sets of factors can be specified as either ESEM or CFA factors, with the assignment of items usually determined on the basis of a priori theoretical expectations (Marsh et al., 2014). Marsh and colleagues argue that a stronger a priori model is facilitated through target rotation in ESEM; see Marsh et al. (2014) for further explanation.
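As a simplified illustration of the target rotation idea, the sketch below rotates an unrotated loading matrix toward a partially specified target pattern using an orthogonal Procrustes rotation. This is a stand-in for illustration only: ESEM in Mplus typically uses oblique target rotation, and the loading matrix here is hypothetical:

```python
import numpy as np

def procrustes_target_rotation(L, target):
    """Orthogonal Procrustes rotation of loadings L toward a target pattern.
    Finds the orthogonal T minimising ||L @ T - target|| via the SVD of
    L.T @ target. (ESEM itself typically uses oblique target rotation; this
    orthogonal version is a simplified illustration of the same idea.)"""
    u, _, vt = np.linalg.svd(L.T @ target)
    return L @ (u @ vt)

# Hypothetical 4-item, 2-factor unrotated loadings; the target specifies the
# expected simple structure (each item loading on one factor, zeros elsewhere).
L = np.array([[0.80, -0.30],
              [0.70, -0.40],
              [0.75,  0.35],
              [0.65,  0.45]])
target = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)

rotated = procrustes_target_rotation(L, target)
print(np.round(rotated, 2))  # loadings now approximate the target pattern
```

The rotated solution is guaranteed to be at least as close to the target (in the least-squares sense) as the unrotated loadings.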

Multitrait-multimethod (MTMM) analysis. The MTMM analysis provides a strong approach to evaluating the construct validity of a multidimensional instrument (Campbell & Fiske, 1959; Marsh, Asci, & Tomas-Marco, 2002) and was one of the standard criteria used here for evaluating instruments such as TOPS 2. Marsh, Morin, Parker, and Kaur (2014) established that, compared to CFA models, which can yield inflated correlations among factors, ESEM is well suited to the construction of latent MTMM correlation matrices that can be evaluated with the Campbell-Fiske guidelines. Marsh and colleagues argued that several MTMM ESEM studies provide strong approaches to evaluating discriminant validity. This study therefore applied both the ESEM and CFA approaches to MTMM analysis of the competition versus practice subscales to determine the convergent and discriminant validity of the TOPS 2-S.



 

Correlations among the sixteen latent constructs representing the eight subscales common to practice and competition form the MTMM matrix. The correlation matrices relating the TOPS 2-S factors can be seen in Tables 3 and 4.

The four validation processes advocated by Campbell and Fiske (1959) were: (a) entries in the validity diagonal (the monotrait-heteromethod values) are examined, and evidence of convergent validity is shown when these values are significantly different from zero and sufficiently large; (b) discriminant validity is suggested when each validity diagonal value is higher than the values lying in its column and row of the heterotrait-heteromethod triangles; (c) the third process compares each variable's value in the validity diagonal with its values in the heterotrait-monomethod triangles; a variable should correlate more highly with the same trait measured by a different method than with different traits measured by the same method; and (d) the same pattern of trait interrelationships should appear in all heterotrait triangles, monomethod and heteromethod alike. The last two processes also provide evidence for discriminant validity.
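The first two of these criteria can be checked mechanically on any correlation matrix ordered by method. The 2-trait x 2-method matrix below is hypothetical, purely to illustrate the checks:

```python
import numpy as np

# Hypothetical 2-trait (goal setting, imagery) x 2-method (practice,
# competition) correlation matrix, ordered [PG, PI, CG, CI]; values are
# illustrative only, not taken from Tables 3 or 4.
R = np.array([
    [1.00, 0.55, 0.70, 0.40],
    [0.55, 1.00, 0.45, 0.80],
    [0.70, 0.45, 1.00, 0.50],
    [0.40, 0.80, 0.50, 1.00],
])
k = 2  # number of traits

def validity_diagonal(R, k):
    """Criterion (a): monotrait-heteromethod values (same trait, other method)."""
    return np.array([R[i, i + k] for i in range(k)])

def exceeds_hthm(R, k):
    """Criterion (b): each validity value beats the other heterotrait-
    heteromethod entries in its row and column of the cross-method block."""
    block = R[:k, k:]  # practice x competition block
    ok = []
    for i in range(k):
        v = block[i, i]
        others = np.concatenate([np.delete(block[i, :], i),
                                 np.delete(block[:, i], i)])
        ok.append(bool((v > others).all()))
    return ok

print(validity_diagonal(R, k))  # [0.7 0.8]
print(exceeds_hthm(R, k))       # [True, True]
```

The same logic extends directly to the 8-trait blocks of Table 3 by setting k = 8 and ordering the rows practice-first.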

Results

Reliability

TOPS 2 reliability estimates were consistently high for each factor (Table 1); preliminary investigation showed good reliability across all 17 factors. The TOPS 2-S reliability estimates were also consistently high for each subscale (Table 1), again showing good reliability across all 17 factors. Cronbach's alpha values, calculated to show the internal consistency of the TOPS 2 and TOPS 2-S questionnaires, were above .80 for both overall scales. For the TOPS 2-S, all competition alphas were .80 or above except automaticity (.77), and all practice alphas were .70 or above except activation (.69); several subscale alphas fell below .80, but all remained at .69 or above. The lowest alpha, .69 for the 3-item Activation subscale, is close to the alpha obtained for the 4-item (TOPS 2) Activation scale (α = .72). Given the low 4-item Activation alpha, it was anticipated that the alpha for the 3-item subscale would be lower but very close to .70 (see Table 1 for the internal consistency estimates for all competition and practice subscales).
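For reference, the coefficient alpha reported throughout can be computed from raw item scores as follows; the response matrix below is hypothetical:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point responses from five athletes to a 3-item subscale
# (illustrative only, not TOPS 2 data):
X = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
    [4, 4, 4],
])
print(round(cronbach_alpha(X), 2))  # prints 0.9 for this illustrative data
```

As the Results note, alpha tends to shrink as items are removed, which is why a 3-item subscale close to .70 was considered acceptable here.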

Comparing CFA and ESEM of the TOPS 2

The confirmatory fit indices met the traditional criteria advocated by Hu and Bentler (1999). The RMSEA, CFI, and TLI all showed adequate to good fit for the TOPS 2 and TOPS 2-S (see Table 2). Overall, ESEM showed better model fit than CFA.

Competition subscales. The TOPS 2-S competition scale, containing 27 items, showed good model fit: RMSEA = .04; CFI = .97; TLI = .97. There was a noticeable improvement in the RMSEA, CFI, and TLI for the short form in comparison to the CFA indices obtained from the TOPS 2 analysis. The CFA for the TOPS 2 nine-factor competition scale, containing 36 items, also showed good support for the model (Table 2). The ESEM results for the TOPS 2-S competition scale were: RMSEA = .03; CFI = .99; TLI = .98, an improvement over CFA on all fit indices. The ESEM for the 4-item TOPS 2 likewise showed clearly improved model fit compared to the CFA (see Table 2).

Practice subscales. The practice scale, with three items per subscale and 24 items in total, showed adequate fit: RMSEA = .05; CFI = .95; TLI = .94. This result is similar to the TOPS 2 practice scale, which also had adequate model fit (see Table 2). There was a slight improvement in the fit indices (CFI and TLI) for the TOPS 2-S practice analysis compared to the TOPS 2 analysis. The decrease in the RMSEA value supported three items per subscale as a more parsimonious model, as did the increase in the incremental fit indices CFI and TLI. Compared to CFA, the ESEM for the TOPS 2-S practice scale also showed some improvement in model fit (see Table 2): RMSEA = .05; CFI = .97; TLI = .98, with only a marginal improvement in the CFI. The ESEM for the 4-item TOPS 2 practice scale also showed clearly improved model fit compared to the CFA results (see Table 2). Overall, these results indicate that ESEM provides better model fit than CFA.

Factor loadings. The target factor loadings (see table) for the sixteen 3-item competition and practice subscales ranged from .68 to .90. The 3-item model had a slightly better factor loading range for both competition and practice subscales than the 4-item model (see supplementary material for a detailed analysis).

MTMM.

The MTMM results obtained for the TOPS 2-S (Table 3) show that the eight convergent validities (shaded in grey) were consistently high (Mean = .75, Range = .586 to .915); only the coefficients for attentional control (.59), goal setting (.67), and activation (.67) were less than .70. A similar pattern of results was obtained when an ESEM MTMM analysis (Table 4) was carried out between the competition and practice scales (Mean = .75, Range = .564 to .991). The results also show that the validity diagonal values for both the ESEM and CFA analyses were higher than the values in their columns and rows of the heterotrait-heteromethod (HTHM) triangles. The validity diagonal values indicate that the TOPS 2-S met the criteria for convergent validity.

The correlations between different traits measured by different methods in the heterotrait-heteromethod (HTHM) sub-matrix (Mean = .37; Range = .124 to .628) are lower than the values in the validity diagonals (see Table 3). This can be taken as evidence of discriminant validity for the TOPS 2-S. Similar results were obtained for the TOPS 2 (see supplementary materials).

The CFA correlations between different traits measured by the same method, the heterotrait-monomethod (HTMM) correlations in the diagonal sub-matrices (Mean = .49, Range = .120 to .871), are only slightly larger than the HTHM sub-triangle correlations and, in most cases, substantially lower than the convergent validities. All convergent validity values are higher than the HTMM correlations except for the following: CA-PA (.667) had ten out of 14 higher HTMM triangle values, CAC-PAC (.586) also had ten out of 14 higher HTMM triangle values, while for CG-PG (.667) only two out of 14 HTMM triangle values were higher. Matching competition and practice scales reflect convergent validity.

The comparison of the validity diagonal values with the 56 HTHM and 56 HTMM correlations provides a further validation process for discriminant validity. The results show that most variables met these requirements, and the diagonal correlations provided evidence for discriminant validity. The goal setting, imagery, attention control, self-talk, emotional control, automaticity, and relaxation correlations all met the requirements for discriminant validity.

The fourth aspect of validity is supported when the same pattern of trait interrelationships is apparent in both the HTMM and HTHM blocks of the correlation matrix (Campbell & Fiske, 1959). The profile similarity indexes are similar across all heterotrait triangles (Table 5).
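A profile similarity index of the kind summarised in Table 5 can be obtained by correlating the corresponding heterotrait correlations of two triangles; values near 1 indicate the same pattern of trait interrelationships. The 3-trait triangles below are hypothetical:

```python
import numpy as np

def profile_similarity(R1, R2):
    """Correlate the corresponding below-diagonal entries of two heterotrait
    correlation blocks; similar trait profiles (criterion d) give values
    near 1."""
    idx = np.tril_indices_from(R1, k=-1)
    return np.corrcoef(R1[idx], R2[idx])[0, 1]

# Hypothetical 3-trait heterotrait-monomethod triangles for the practice and
# competition methods (illustrative values only, not Table 3 entries):
htmm_practice = np.array([
    [1.00, 0.53, 0.70],
    [0.53, 1.00, 0.37],
    [0.70, 0.37, 1.00],
])
htmm_competition = np.array([
    [1.00, 0.48, 0.65],
    [0.48, 1.00, 0.30],
    [0.65, 0.30, 1.00],
])
print(round(profile_similarity(htmm_practice, htmm_competition), 2))
```

Here the two triangles rank the trait pairs identically, so the index is close to 1, mirroring the high PSI values reported in Table 5.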

The ESEM MTMM comparison showed that the correlations among different factors were substantially smaller than the corresponding CFA factor correlations (Tables 3, 4, 5). ESEM and CFA both provided good fit to the TOPS 2-S. Marsh et al. (2014), however, state that ESEM routinely yields more accurate estimates of factor correlations and thus recommend that both ESEM and CFA be applied to the same data for a thorough analysis.

Discussion

The present study examined the appropriateness of a short form of the popular and widely used TOPS instrument. The psychometric properties of the TOPS 2 and TOPS 2-S, such as reliability, factor structure, correlated uniquenesses, and cross-loadings, were thoroughly examined. The results also demonstrated that ESEM indicated better model fit and can be used in psychometric evaluations of psychological assessment instruments.

The evaluation guidelines proposed by Marsh et al. (2010) were adopted to develop the TOPS 2-S. The four relevant guidelines used in the present study are discussed below:

(a) A strong original instrument was a fundamental requirement for developing the short form. The present study started with a strong TOPS 2 instrument. The conceptual problems of the original TOPS raised by Lane et al. (2004) were addressed by Hardy et al. (2010) in their refinement of TOPS. The CFAs reported in Hardy et al. (2010) and the present study provided strong support for the use of TOPS 2 to measure the use of psychological skills and strategies in competition and practice environments;

(b) The short form should retain the content coverage of each factor. The MTMM analysis of the TOPS 2 and TOPS 2-S was used to test this assumption, and both demonstrate that the content coverage of the two instruments is invariant (see Marsh et al., 2010). The MTMM analysis, in which multiple traits are assessed by multiple methods and the short and long forms are analysed in parallel, is a strong approach and provides strong support for the construct validity of the TOPS 2-S and its equivalence with the TOPS 2 (see Marsh et al., 2010).

(c) Each factor on the short form must be adequately reliable. Marsh et al. (2010) recommended that reliability estimates should ideally be .80 or higher, whereas Smith et al. (2000) accepted reliability coefficients of .70 as adequate. The alphas obtained for the TOPS 2 short form subscales in the present study were close to the alphas of the TOPS 2 long form. Cronbach's alphas indicated that the TOPS 2-S compared very well to the TOPS 2 and showed good reliability across all seventeen factors. There was only a very small loss of reliability (mean reliabilities for the TOPS 2 and TOPS 2-S competition scales were .86 and .83, respectively; mean reliabilities for the TOPS 2 and TOPS 2-S practice scales were .81 and .78, respectively). The TOPS 2-S results differ only marginally from the TOPS 2 long form. Although an alpha of .7 is accepted as a good indicator of internal reliability, .6 is acceptable for factors with fewer items, as in this case where each factor has three items (see Loewenthal, 2001). Moreover, .69 is only marginally below .7 and is accepted as adequate reliability given the smaller number of items in the subscales (see Loewenthal, 2001; Tabachnick & Fidell, 2013).

(d) The short form must retain the factor structure of the original form. The CFA and ESEM results indicated good to adequate fit for the seventeen-factor TOPS 2 and TOPS 2-S. Examining the competition and practice scales in separate confirmatory factor analyses is consistent with previous TOPS studies (see Hardy et al., 2010; Lane et al., 2004). The seventeen-factor structure of both the TOPS 2 and TOPS 2-S provided a good fit to the data. The fit for the nine-factor competition scales was good for both the TOPS 2 and the short form. The eight-factor practice scales for both the TOPS 2 and the short form did not fit as well as the competition scales, but the fit was adequate and acceptable.

Conclusion

A robust, reliable, and valid short form of TOPS 2 was developed. CFA and ESEM results indicated that the TOPS 2-S maintained high reliability and construct validity. Moreover, the TOPS 2-S factors relate closely to the established TOPS 2 constructs. The resulting 51-item Test of Performance Strategies retains the qualities of its original form: the items retained maintain the breadth and depth of each factor, and preserving the full seventeen-factor TOPS 2 structure maintains its reliability and validity. It is anticipated that this refinement of TOPS 2 will help reduce administration time (especially when the measure is supplemented with other questionnaires) and further encourage researchers to use it in applied research. Applied researchers may find the TOPS 2-S a practical replacement for the TOPS 2, and information obtained through the instrument can enable them to plan, build, and implement future psychological skills programs for their athletes.



Table 1. Comparison of reliability coefficients based on short and long forms

Subscale            4-item comp   3-item comp   4-item prac   3-item prac
Goal setting            .863          .865          .869          .855
Self-Talk               .839          .828          .826          .788
Imagery                 .864          .856          .789          .748
Attention Control       .858          .823          .817          .766
Activation              .857          .825          .718          .686
Emotional Control       .893          .890          .804          .764
Automaticity            .826          .768          .782          .801
Relaxation              .888          .907          .873          .866
Negative Thinking       .834          .813
Median                  .858          .828          .811          .777
Mean                    .858          .842          .810          .784
PSI                             .928                        .919

Note. 4-item comp = competition items from the TOPS 2; 3-item comp = competition items from the TOPS 2 short form; 4-item prac = practice items from the TOPS 2; 3-item prac = practice items from the TOPS 2 short form.

Table 2. Confirmatory factor analysis and ESEM of the TOPS 2 (4-item) and TOPS 2-S (3-item)

                        TOPS 2                            TOPS 2-S
             Competition        Practice        Competition        Practice
Fit index    CFA      ESEM      CFA      ESEM   CFA      ESEM      CFA      ESEM
Chi-square   1261.70  527.97    973      439.09 531      203.62    482.80   245
df           588      342       436      268    288      144       224      144
RMSEA        .048     .032      .048     .034   .040     .028      .046     .047
CFI          .943     .982      .931     .974   .973     .992      .952     .970
TLI          .936     .967      .921     .951   .967     .981      .941     .927

Note: df = degrees of freedom, RMSEA = root mean square error of approximation, CFI = comparative fit index, TLI = Tucker-Lewis index, CFA = confirmatory factor analysis, ESEM = exploratory structural equation modelling.



Table 3. Correlation matrix for CFA 3-item subscales

PG PI PAC PST PA PEC PAU PR CG CI CAC CST CA CEC CAU CR CNT

PG 1.000
PI .707 1.000
PAC .530 .366 1.000
PST .705 .670 .460 1.000
PA .653 .568 .704 .669 1.000
PEC .214 .167 .495 .354 .562 1.000
PAU .223 .280 .367 .273 .496 .350 1.000
PR .449 .600 .195 .625 .428 .175 .120 1.000
CG .667 .490 .364 .488 .485 .155 .171 .265 1.000
CI .517 .870 .191 .537 .507 .201 .167 .453 .550 1.000
CAC .423 .444 .586 .454 .540 .501 .328 .203 .522 .469 1.000
CST .539 .628 .339 .915 .549 .330 .199 .567 .556 .669 .592 1.000
CA .453 .471 .415 .536 .667 .474 .357 .279 .527 .583 .871 .720 1.000
CEC .216 .267 .336 .334 .358 .744 .220 .124 .295 .310 .705 .474 .729 1.000
CAU .315 .380 .383 .364 .540 .427 .729 .238 .436 .428 .746 .522 .848 .563 1.000
CR .362 .498 .162 .571 .422 .201 .129 .854 .301 .486 .303 .661 .408 .233 .347 1.000
CNT .391 .421 .431 .528 .500 .585 .266 .335 .436 .461 .739 .722 .773 .757 .661 .436 1.000
Note. Subscale labels beginning with P denote practice subscales and those beginning with C denote competition subscales: G = goal setting, I = imagery, AC = attention control, ST = self-talk, A = activation, EC = emotional control, AU = automaticity, R = relaxation, and NT = negative thinking.

Table 4. Correlation matrix for ESEM 3-item subscales

PG PI PAC PST PA PEC PAU PR CG CI CAC CST CA CEC CAU CR CNT


PG 1.000
PI .572 1.000
PAC .427 .261 1.000
PST .541 .525 .252 1.000
PA .578 .399 .532 .499 1.000
PEC .155 .090 .446 .227 .435 1.000
PAU .218 .201 .287 .198 .450 .291 1.000
PR .404 .534 .145 .535 .361 .179 .107 1.000
CG .661 .448 .312 .430 .451 .139 .175 .262 1.000
CI .471 .873 .148 .421 .469 .171 .152 .446 .543 1.000
CAC .342 .387 .564 .299 .385 .438 .277 .128 .457 .393 1.000
CST .397 .468 .180 .991 .395 .208 .127 .473 .473 .540 .331 1.000
CA .349 .379 .248 .389 .630 .335 .257 .206 .432 .506 .655 .452 1.000
CEC .150 .216 .291 .268 .261 .735 .186 .088 .272 .267 .638 .303 .521 1.000
CAU .261 .288 .320 .250 .448 .389 .732 .203 .410 .393 .655 .322 .662 .502 1.000
CR .318 .438 .104 .495 .394 .188 .125 .849 .299 .484 .230 .556 .331 .192 .311 1.000
CNT .301 .318 .376 .339 .382 .556 .200 .281 .354 .360 .582 .475 .553 .641 .531 .348 1.000
Note. Subscale labels beginning with P denote practice subscales and those beginning with C denote competition subscales: G = goal setting, I = imagery, AC = attention control, ST = self-talk, A = activation, EC = emotional control, AU = automaticity, R = relaxation, and NT = negative thinking.



Table 5. Comparison of MTMM correlations based on short and long forms

Description            Mean   High   Low    PSI
CFA HTMM Short         .488   .871   .120   .986
CFA HTMM Long          .470   .832   .096    -
ESEM HTMM Short        .392   .655   .090   .944
ESEM HTMM Long         .354   .624   .014    -
CFA HTHM Short         .373   .613   .124   .975
CFA HTHM Long          .359   .667   .108    -
ESEM HTHM Short        .301   .495   .088   .899
ESEM HTHM Long         .269   .511   .080    -
CFA MTHM Short         .754   .915   .586   .995
CFA MTHM Long          .760   .891   .615    -
ESEM MTHM Short        .75    .991   .564   .981
ESEM MTHM Long         .728   .912   .483    -

Note: ESEM = exploratory structural equation modelling, HTMM = heterotrait-monomethod, HTHM = heterotrait-heteromethod, MTHM = monotrait-heteromethod, PSI = profile similarity indexes.



References

Campbell, D. T. & Fiske, D. W. (1959). Convergent and discriminant validation by the

multitrait-multimethod matrix. Psychological Bulletin, 56, 81-105.

Hardy, L., Roberts, R., Thomas, P. R. & Murphy. S. (2010). Test of Performance Strategies

(TOPS): Instrument refinement using confirmatory factor analysis. Psychology of

Sport and Exercise, 11, 27–35.

Hu, L. & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1-55.

Kline, R. B. (2005). Principles and practice of structural equation modelling (2nd ed.). New

York: Guildford.

Lane, A. M., Harwood, C., Terry, P. C. & Karageorghis, C. I. (2004). Confirmatory factor analysis of the Test of Performance Strategies (TOPS) among adolescent athletes. Journal of Sports Sciences, 22(9), 803-812.

Loewenthal, K. M. (2001). An Introduction to Psychological Tests and Scales, (2nd ed.).

Hove, East Sussex: Psychology Press Ltd.

Marsh, H. W. (1989). Confirmatory factor analyses of multitrait-multimethod data: Many

problems and a few solutions. Applied Psychological Measurement, 13(4), 335-360.

Marsh, H. W., Asci, F. H. & Tomas-Marco, I. (2002). Multitrait-multimethod analysis of two physical self-concept instruments: A cross-cultural perspective. Journal of Sport and Exercise Psychology, 24, 99-119.



Marsh, H. W., Ellis, L. A., Parada, R. H., Richards, G. & Heubeck, B. G. (2005). A short

version of the self-description questionnaire II: Operationalizing criteria for short-

form evaluation with new applications of confirmatory factor analysis.

Psychological Assessment, 17 (1), 81-102.

Marsh, H. W., Hau, K. T. & Wen, Z. (2004). In search of golden rules: Comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler's (1999) findings. Structural Equation Modeling: A Multidisciplinary Journal, 11(3), 320-341.

Marsh, H. W., Liem, G. A. D., Martin, A. J., Morin, A. J. S. & Nagengast, B. (2011). Methodological measurement fruitfulness of exploratory structural equation modeling (ESEM): New approaches to key substantive issues in motivation and engagement. Journal of Psychoeducational Assessment, 29(4), 322-346.

Marsh, H. W., Martin, A. & Jackson, S. (2010). Introducing a short version of the

physical self-description questionnaire: New strategies, short-form evaluative

criteria, and applications of factor analyses. Journal of Sport & Exercise Psychology,

32, 438-482.

Marsh, H. W., Morin, A. J. S., Parker, P. D. & Kaur, G. (2014). Exploratory structural

equation modelling: An integration of the best features of exploratory and

confirmatory factor analysis. Annual Review of Clinical Psychology, 10, 85-110.

Marsh, H. W., Nagengast, B., Morin, A. J. S., Parada, R. H., Craven, R. G., Hamilton, L. R.

(2011). Construct validity of the multidimensional structure of bullying and

victimization: An application of exploratory structural equation modelling. Journal

of Educational Psychology, 103(3), 701-732.



Smith, G. T., McCarthy, D. M. & Anderson, K. G. (2000). On the sins of short-form

development. Psychological Assessment, 12, (1), 102-111.

Tabachnick, B. G. & Fidell, L. S. (2013). Using multivariate statistics (6th ed.) Boston:

Pearson.

Thomas, P. R., Murphy, S. M. & Hardy, L. (1999). Test of performance strategies: Development and preliminary validation of a comprehensive measure of athletes' psychological skills. Journal of Sports Sciences, 17(9), 697-711.
