
Computers & Education 197 (2023) 104742


What is “technology integration” and how is it measured in K-12 education? A systematic review of survey instruments from 2010 to 2021
Tessa Consoli a,*, Juliette Désiron a, Alberto Cattaneo b

a Institute of Education, University of Zurich, Kantonsschulstrasse 3, CH-8001 Zurich, Switzerland
b Swiss Federal University for Vocational Education and Training SFUVET, Via Besso 84/86, 6900 Lugano-Massagno, Switzerland

* Corresponding author. Kantonsschulstrasse 3, 8001 Zürich.
E-mail addresses: [email protected] (T. Consoli), [email protected] (J. Désiron), [email protected] (A. Cattaneo).

https://doi.org/10.1016/j.compedu.2023.104742
Received 22 June 2022; Received in revised form 10 January 2023; Accepted 17 January 2023; Available online 21 January 2023.
0360-1315/© 2023 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

ARTICLE INFO

Keywords:
Primary education
Secondary education
Improving classroom teaching

ABSTRACT

This systematic review provides an overview and analysis of the survey instruments measuring technology integration (TI) in educational settings from 2010 to 2021. Given the importance of addressing aspects related to the quality of TI (e.g., the use of technology to support pedagogical goals), we paid particular attention to them. Search results from the PsycINFO, ERIC, Web of Science, and Scopus databases yielded 695 records. Thirty-five different survey instruments used in 36 studies met our eligibility criteria and were then analyzed by applying content analysis. Our results indicate that the diversity of operationalizations is very high and that several instruments have no explicit conceptual or theoretical underpinnings. Most of the instruments measure aspects related to classroom practices and measure TI from the teachers’ point of view. The most frequently measured pedagogical aspects of TI in instructional practices include the use of technology to (1) enhance students’ cognitive engagement, (2) promote collaboration between students, and (3) allow students to conduct research online. The study concludes with some perspectives for future research, an attempt to formulate a definition of TI, and some more general recommendations to ensure terminological unambiguity in TI research.

1. Introduction

Technology integration (TI) in education has been one of the most significant areas of educational technology research in recent
decades (Valtonen et al., 2022). Nevertheless, several scholars have claimed that a common understanding of the term “TI” has not yet
been reached (Belland, 2009; Davies & West, 2014; Hew & Brush, 2007; Hsu, 2010; Peeraer & van Petegem, 2012; Proctor et al., 2003).
They pointed out that whereas some researchers use the expression as a synonym for technology use (e.g., Pittman & Gaines, 2015),
others refer to a far more complex phenomenon, such as the transformation or enhancement of teaching and learning practices through
the use of technology (e.g., Davies & West, 2014), a school development challenge (e.g., Vanderlinde & van Braak, 2010), or even
educational reform (e.g., Jonassen, 1995). To complicate matters even further, the expression “TI” is also used to refer to an outcome, a
process, a condition, or a state (Lloyd, 2006).
Furthermore, some scholars have highlighted that TI has often been poorly defined, which has led researchers to measure the
concept inappropriately. Proctor et al. (2003), for example, criticized several studies for measuring TI in an overly simplistic way (e.g., by measuring the student–computer ratio or the frequency of computer use) and argued that “unless more sophisticated notions of defining ICT curriculum integration are developed, researchers run the risk of promulgating severely restrictive ways of measuring it” (p. 69).
The lack of an elaborate and shared definition of TI may be due to the fact that educational technology research is an under-theorized research field. Hew et al. (2019) conducted a systematic literature review of 305 empirical studies published in
education-technology-related, highly ranked peer-reviewed journals and found that many of them have vague (N = 120) or no (N =
209) theoretical underpinnings and that very few articles contribute to theory advancement (N = 77). One reason for this poor
engagement with theory may be that educational technology research is often practice-oriented and context-specific (Bennett & Oliver,
2011).
The above-mentioned problems are also reflected in the absence of a unified measure, with researchers developing a variety of
instruments that target different aspects of TI. However, regardless of whether these instruments measure a single or several aspects of TI, their developers often present them as overall measures of TI (Hsu, 2010, 2017). For this reason, Hsu (2010)
argued that it is difficult to compare TI rates across studies and to interpret the results obtained from such different instruments.
The current situation makes it very difficult for researchers to orient themselves in TI research. Nevertheless, in recent years, there
has been a renewed interest in TI research, even with an additional shift in its focus. More and more researchers have argued that what
matters is how technologies are integrated into instructional practices, not the frequency of technology use (OECD, 2015). Consequently, new instruments for measuring aspects related to the quality of TI (e.g., the use of technology to support pedagogical goals)
are emerging (see Antonietti et al., 2023; Backfisch et al., 2021; Fütterer et al., 2022; Juuti et al., 2022). By providing an overview of
the survey instruments measuring TI used in educational research from 2010 to 2021, critically assessing the current research practices, and identifying directions for future research, this study aims to provide guidance and orientation to researchers who, despite the
different issues highlighted above, intend to develop and advance TI research.
Before introducing our systematic review, we discuss some views on defining and conceptualizing TI, present examples of survey
instruments measuring the quality and quantity of TI, and provide an overview of previous studies on TI measures.

1.1. A multiplicity of views on TI

In this section, we present and discuss different existing definitions and conceptualizations of TI, with a focus on those explicitly
referring to the term “TI.” In doing so, we aim to provide initial insight into different directions of TI research and clarify which TI concept
guided our review.
One of the most comprehensive definitions of TI we found is that given by the Education Resources Information Center (ERIC)
thesaurus:
The process of making technological tools and services, such as computer systems and the Internet, a part of the educational
environment – includes changes made to the curriculum as well as to educational facilities. (Education Resources Information Center,
2022).
In this definition, TI is conceived of as a process rather than a state, and the focus seems to be primarily on technological tools and
services. The mentioned changes to the curriculum do not seem to be required, and it is unclear whether they also include changes
regarding pedagogy and teaching methods. Thus, it can be considered a rather “technocentric” understanding of TI (Duran, 2022).
Especially in the early days of the introduction of digital technologies in schools, there was a tendency to focus on digital tools and
skills; more recently, the focus shifted toward the pedagogical use of technology to achieve specific learning goals (Davies & West,
2014; Duran, 2022). The definition provided by the ERIC thesaurus mainly reflects the first tendency. It is important to keep in mind
that the definition has been formulated with the purpose of categorizing studies rather than contributing to the theoretical discourse on
TI. However, precisely for this reason, we believe that it provides us with insight into how TI has been conceived and operationalized in
research.
In a handbook chapter on TI in schools, Davies and West (2014) defined TI as the “effective implementation of educational
technology to accomplish intended learning outcomes” (p. 843). This definition focuses on effectiveness and learning outcomes and
thus implies that TI is more than the mere uptake of technological devices into classroom practices. This definition reflects the more
recent trend of focusing on the pedagogically sound use of technology to enhance teaching and learning practices and adds an
emphasis on effectiveness. However, it does not yet explicitly refer to the need to transform pedagogies and teaching practices.
The way scholars conceptualize TI is sometimes grounded on frameworks and models rather than on definitions. One example is the
TI concept underlying the Technological Pedagogical and Content Knowledge (TPACK) framework (Mishra & Koehler, 2006).
Referring to this framework, Wentworth et al. (2009) defined TI as follows: “Ways in which technology is integrated into teaching
methods [pedagogy] and curriculum [content].” In this case, too, TI refers to a concept beyond the mere use of technology in classroom practices, namely to the result of a complex interplay among three fundamental components of learning environments:
technology, pedagogy, and content. Mishra and Koehler (2006) emphasized that teachers must develop sophisticated knowledge that
enables them to use technology by taking into account the features of the teaching content, adequate teaching methods, and the
educational context. What is distinctive about this conceptualization is that TI cannot follow universally valid principles but must adapt to the content to be taught and the context.
Another conceptualization of TI that focuses on pedagogy and instructional practices is the Technology Integration Matrix (TIM)
(Welsh et al., 2011). This framework combines the dimensions of “good teaching” (active, constructive, goal-directed, authentic, and
collaborative) with five levels of TI (entry, adoption, adaptation, infusion, and transformation). These are combined to form a matrix of
25 cells, which should help researchers and practitioners evaluate TI practices. Furthermore, the TIM considers TI from different
perspectives by formulating descriptors to assess TI from the viewpoints of students, teachers, and learning environments. This


conceptualization of TI considers TI both as a stepwise process to be gradually achieved and as a challenge related to pedagogy and
teaching quality. It also implies that technology should be used to transform teaching and learning practices by making them more
student-centered and constructivist. This is one of the most sophisticated conceptualizations of TI that we found.
Apart from frameworks and models of TI focusing on pedagogy and instructional practices, many other models focus on teachers’
professional development and investigate the interplay of factors influencing teachers’ uptake of technologies in educational practices
(Niederhauser & Lindstrom, 2017; Tondeur et al., 2021). The Technology Acceptance Model by Davis (1985), the Unified Theory of
Acceptance and Use of Technology Model by Venkatesh et al. (2003), and the Will Skill Tool model by Christensen and Knezek (2001) are
some of the most frequently used models. Studies based on these models often conceptualize TI as the entire process leading to
teachers’ adoption of technologies in classroom practices, as the output variable of this process, or as both process and outcome.
Following Duran (2022) and most of the conceptualizations discussed above, we believe that TI should always consider aspects
related to pedagogy and teaching quality rather than focusing solely on aspects related to the quantity of TI. Indeed, the results of
large-scale studies stress that the mere frequency of technology use at school does not lead to better learning outcomes (OECD, 2015),
whereas experimental studies focusing on the use of technology to foster meaningful learning goals have found an overall positive
impact of TI on learning (Chien et al., 2016; Stegmann, 2020). Consequently, several scholars have started focusing increasingly on the
quality instead of the quantity of TI (see Antonietti et al., 2023; Backfisch et al., 2021; Fütterer et al., 2022; Juuti et al., 2022; Parker
et al., 2019; Petko et al., 2017). The following section provides some examples of how researchers operationalize the quality and
quantity of TI to measure it.

1.2. Quantitative measures of the quality and quantity of TI

In this section, we present some ways in which the quantity and quality of TI have been operationalized and measured in research.
These examples are intended to provide a better understanding of what is meant by the terms “quantity” and “quality of TI.”
As stated by several scholars, the expression “quantity of TI” is commonly used to refer to quantitative indicators, such as the
frequency of technology use at school, the student–computer ratio, or the amount of time spent using digital devices (Backfisch et al.,
2021; Petko et al., 2017; Proctor et al., 2003). While there is a fair consensus among researchers on the operationalization of the “quantity of TI,” this is much less the case for the “quality of TI.” As an illustrative example, Proctor et al.
(2003) developed an instrument to “measure the quantity and quality of ICT curriculum integration experienced by students in
classroom” (p. 69). The authors operationalized the quality of TI as the use of technology to support the dimensions of “productive
pedagogies” (e.g., connectedness to the world and inclusivity). By contrast, Bowman et al. (2020) developed the Bloom’s Digital Taxonomy scale to measure the “quality of TI.” As the name of the scale suggests, the instrument is based on Bloom’s (1956) taxonomy of learning goals and measures classroom technology use to facilitate lower- and higher-order critical thinking tasks.
For the purpose of this study, we consider “quality of TI” as all concepts, operationalizations, and aspects related to the instructional
use of digital technologies, with considerations beyond the mere frequency of use of different technological devices (e.g., the use of
technology to support pedagogical goals).

1.3. Previous studies on TI measures

In a previous systematic review, Lai and Bower (2019) collected instruments used to evaluate the use of technology in education.
The review has a very broad focus and includes all types of studies (e.g., quantitative, qualitative, experiments, surveys), contexts, and
perspectives. The analysis shows that studies evaluating technology use focus mostly on the following aspects: learning outcomes, affective elements, behaviours, design, and technology elements. The analysis also found that the diversity of instruments
and methods used to assess technology use is high, that 66.6% of the studies used an established instrument to evaluate the use of
technology, that only 1.4% of the studies considered the institutional environment, and that only 7% of the studies assessed the use of
the technology holistically by considering four or more aspects of education. However, this review focuses on the effects and perceptions of technology use rather than TI, and it only considers studies published between 2015 and 2017 in Computers & Education, thus running the risk of leaving out studies published in other scientific journals and in other years.
In addition to this systematic review, in a handbook chapter on TI measurements, Christensen and Knezek (2018) listed many
“respectable instruments” to measure “attitudes, skills (competencies), and pedagogical practices, in the general realm of technology
integration” (p. 370). The authors conceptualize TI as a complex phenomenon with at least three components and distinguish between
attitudinal measures, competency and self-efficacy measures, pedagogy measures, and strictly TI measures that directly assess TI.
According to the authors, the following instruments belong to the latter category:
The Apple Classrooms of Tomorrow (ACOT) survey (Dwyer et al., 1990).
Stages of Adoption of Technology (Stages) (Christensen, 2002).
The Concerns-Based Adoption Model Level of Use (CBAM LoU) (Hall et al., 1975).
Technology Integration Scale (TIS) (Agyei & Voogt, 2011; Hancock et al., 2007).
The TPACK Assessment Instrument for Pre-Service Teachers (Schmidt et al., 2009).
The fact that the authors first describe TI as a complex phenomenon composed of at least three dimensions and then list one-dimensional instruments to measure TI “directly” highlights the terminological ambiguity concerning the term “TI”. In their
conclusion, the authors recommended using different types of instruments to assess all components of TI.
Christensen and Knezek (2018) did not follow a systematic selection process for the instruments. Further, both studies presented in
this section gave no particular attention to the aspects related to the quality of TI in instructional practices and omitted consideration of


the conceptual or theoretical underpinnings of the instruments analyzed.

1.4. Research questions

As previously noted, the lack of a shared and sophisticated definition of TI and the absence of a unified view of how to measure TI make it difficult for researchers to rely on a coherent theoretical discourse and sound research practices. By clarifying which operationalizations of TI have been used in survey studies from 2010 to 2021 and critically evaluating current research practices, this study
ationalizations of TI have been used in survey studies from 2010 to 2021 and critically evaluating current research practices, this study
aims to contribute to the development of a coherent theoretical discourse on TI and sound research practices. Given the current
research interest in operationalizing aspects regarding the quality of TI, and considering that these aspects have been ignored by
previous studies on TI measures, we focus particularly on them.
We focused on quantitative survey instruments because they are widespread and suitable for comparison. Furthermore, a particular
focus on K–12 education was chosen by including only studies that considered at least one K–12 level.
The following research questions guided our study.

a) Which quantitative survey instruments have researchers used to measure TI in K–12 education from 2010 to 2021?
b) What are the main properties of the instruments (e.g., number of items, number of dimensions measured, and information on
validity and reliability)?
c) Are the instruments based on concepts or theories?
d) What dimensions/aspects of TI are measured?
e) Which aspects related to the quality of TI in instructional practices are measured?

2. Methodology

To answer the first research question, we conducted a systematic review (Gough et al., 2017; Zawacki-Richter et al., 2020) of
quantitative survey studies measuring TI using questionnaires. We used the Preferred Reporting Items for Systematic Reviews and
Meta-Analyses (PRISMA) (Page et al., 2021) as guidelines to report our review process. We retrieved 695 records published between
January 2010 and March 2021. Thirty-six articles that used 35 different survey instruments met our eligibility criteria. We then
analyzed the instruments collected in the systematic review by applying content analysis to address our four other research questions.

2.1. Systematic review

2.1.1. Search
To locate studies focusing on TI, we used the following expressions: “technology integration,” “ICT integration,” “ICTs integration,”
“computer integration.” Due to the lack of an unambiguous and commonly accepted definition of TI, and thus the impossibility of determining synonyms for this concept, we focused on the manifest language level and searched for studies containing the term “integration.” Therefore, we consciously ignored studies that exclusively use related expressions, such as “technology use,” “technology implementation,” or “technology adoption.” This was also necessary to narrow the corpus of our review and to ensure a clear focus on TI. Given that we intended to focus on studies measuring TI generally, terms referring to specific digital tools and devices were not included. The terms survey*, scale*, questionnaire*, and measur* were used to locate quantitative survey studies, and the terms education*, school*, teach*, learn*, instruction*, and student* were used to narrow our review to the field of educational research. We combined all those terms with Boolean operators, field tags, and parentheses (see Table 1) and conducted our search in four comprehensive digital databases (PsycINFO, ERIC, Web of Science, and Scopus) to ensure that the review included all the relevant studies.

Table 1
Databases, search strings, search options, and number of articles found.

Database | Search string and search options | N.
WoS | (TI=(“technology integration” OR “ICT integration” OR “ICTs integration” OR “computer integration”) OR AB=(“technology integration” OR “ICT integration” OR “ICTs integration” OR “computer integration”)) AND (TI=(survey* OR scale* OR questionnaire* OR measur*) OR AB=(survey* OR scale* OR questionnaire* OR measur*)) AND TS=(education* OR school* OR teach* OR learn* OR instruction* OR student*). Refined by: Databases: (WOS) AND DOCUMENT TYPES: (ARTICLE). Timespan: 2010–2021. Search language = Auto. | 370
ERIC via EBSCO | (TI (“technology integration” OR “ICT integration” OR “ICTs integration” OR “computer integration”) OR AB (“technology integration” OR “ICT integration” OR “ICTs integration” OR “computer integration”)) AND (TI (survey* OR scale* OR questionnaire* OR measur*) OR AB (survey* OR scale* OR questionnaire* OR measur*)). Language: English; peer reviewed; 2010–2021. | 409
Scopus | TITLE-ABS (“technology integration” OR “ICT integration” OR “ICTs integration” OR “computer integration”) AND TITLE-ABS (survey* OR scale* OR questionnaire* OR measur*) AND TITLE-ABS-KEY (education* OR school* OR teach* OR learn* OR instruction* OR student*) AND (LIMIT-TO (PUBYEAR, 2021) OR LIMIT-TO (PUBYEAR, 2020) OR LIMIT-TO (PUBYEAR, 2019) OR LIMIT-TO (PUBYEAR, 2018) OR LIMIT-TO (PUBYEAR, 2017) OR LIMIT-TO (PUBYEAR, 2016) OR LIMIT-TO (PUBYEAR, 2015) OR LIMIT-TO (PUBYEAR, 2014) OR LIMIT-TO (PUBYEAR, 2013) OR LIMIT-TO (PUBYEAR, 2012) OR LIMIT-TO (PUBYEAR, 2011) OR LIMIT-TO (PUBYEAR, 2010)) AND (LIMIT-TO (LANGUAGE, “English”)) AND (LIMIT-TO (SRCTYPE, “j”)). | 479
PsycINFO | (TI (“technology integration” OR “ICT integration” OR “ICTs integration” OR “computer integration”) OR AB (“technology integration” OR “ICT integration” OR “ICTs integration” OR “computer integration”)) AND (TI (survey* OR scale* OR questionnaire* OR measur*) OR AB (survey* OR scale* OR questionnaire* OR measur*)) AND (education* OR school* OR teach* OR learn* OR instruction* OR student*). Language: English; peer reviewed; 2010–2021. | 157
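For readers who want to replicate the search strategy, the following is a minimal sketch (our illustration, not the authors’ script; the helper names are ours) of how the Scopus-style Boolean string can be assembled from the three term groups in Table 1:

```python
# Minimal sketch (not the authors' code): assembling a Table 1 search
# string from the three term groups used in the review.
TI_TERMS = ['"technology integration"', '"ICT integration"',
            '"ICTs integration"', '"computer integration"']
SURVEY_TERMS = ["survey*", "scale*", "questionnaire*", "measur*"]
EDU_TERMS = ["education*", "school*", "teach*", "learn*",
             "instruction*", "student*"]

def or_block(terms):
    """Join one term group with OR and wrap it in parentheses."""
    return "(" + " OR ".join(terms) + ")"

def scopus_query():
    """Approximate the Scopus string; year and language limits omitted."""
    return (f"TITLE-ABS {or_block(TI_TERMS)} "
            f"AND TITLE-ABS {or_block(SURVEY_TERMS)} "
            f"AND TITLE-ABS-KEY {or_block(EDU_TERMS)}")

print(scopus_query())
```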
To guarantee that only studies meeting the standards of the scientific community were identified, we focused exclusively on peer-
reviewed articles and excluded gray literature.
Given our interest in analyzing and assessing current research practices, we opted to consider studies published since 2010. The
year 2010 is also pivotal in terms of digital technologies, for example, with the release of the first electronic tablet (Apple iPad), which
opened the door to new ways of using technology in teaching and learning practices.
We searched only for studies published in English.
We conducted our search on February 5, 2021, and obtained 1415 records (see Table 1). After downloading the bibliographic
references and importing them into the literature management software Citavi, we deleted the duplicate articles (automatically and manually) and obtained 695 unique articles.
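The deduplication itself was performed in Citavi; purely to illustrate the underlying idea, here is a hedged sketch in which the matching rule (DOI first, then a normalized title) is our simplifying assumption, not Citavi’s actual algorithm:

```python
# Illustrative sketch only: collapsing duplicate bibliographic records
# by DOI, falling back to a normalized title. The matching rule is an
# assumption for demonstration, not Citavi's algorithm.
import re

def dedup_key(record):
    """Prefer the DOI; otherwise use a lowercased, punctuation-free title."""
    doi = record.get("doi")
    if doi:
        return ("doi", doi.lower())
    title = re.sub(r"[^a-z0-9 ]", "", record["title"].lower())
    return ("title", " ".join(title.split()))

def deduplicate(records):
    """Keep the first occurrence of every distinct key."""
    seen, unique = set(), []
    for record in records:
        k = dedup_key(record)
        if k not in seen:
            seen.add(k)
            unique.append(record)
    return unique

sample = [
    {"title": "Technology integration in K-12", "doi": "10.1000/x1"},
    {"title": "Technology Integration in K-12.", "doi": "10.1000/x1"},
]
print(len(deduplicate(sample)))  # -> 1
```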

2.1.2. Screening and selection


According to the PRISMA flow diagram (Page et al., 2021), we divided our screening and selection process into three main steps
(see Fig. 1). In the first screening step, we considered only the titles and abstracts of the articles. We included all studies that: have an
abstract written in English, mention the expression “technology integration” (or “ICT integration,” “ICTs integration,” “computer
integration”), and report the results of an empirical survey study. We excluded studies using methodologies other than surveys (e.g.,
meta-analysis, observation studies, purely qualitative studies, etc.), studies not published in peer-reviewed academic journals (e.g.,
master’s theses, dissertations, books, conference proceedings, etc.), studies from research fields other than educational research,
studies that do not consider any K–12 level, and studies focusing on TI-related constructs without measuring TI itself. Based on the
information in the title and abstract alone, it was not always possible to determine whether to include the articles. All articles for which

doubt occurred were coded as such and included in the subsequent screening and selection step. During this first screening step, 379
documents were excluded.
In the second screening step, we checked the availability of the studies. If the studies were not fully available online, or if we could
not access the article by contacting university libraries and authors, we excluded them (n = 4).
In the last step, we considered the full articles and applied our exclusion and inclusion criteria. Again, we excluded studies that
were not in English (n = 6), that did not consider a K–12 level (n = 3), and that were not published in scientific journals (n = 1). We also
excluded studies that did not use fully documented survey instruments in English (available inside the study or in another study) (n =
131). We included only studies in which TI was measured quantitatively within a survey study and that explicitly stated an intention to
measure TI (or “ICT integration,” “ICTs integration,” “computer integration”) or an aspect of it. We excluded all studies that exclusively intended to measure related constructs, such as attitude or self-efficacy toward TI, without measuring TI directly. However, we included studies that explicitly stated that they operationalized TI by measuring related constructs, such as attitude or self-efficacy toward TI. In this last step, we also excluded articles with 10 or fewer participants (n = 1) and articles in which the term TI was used in an ambiguous way (n = 5). During this phase, we excluded 276 articles and selected 36 articles, which used 35 different instruments, for inclusion in our content analysis.

Fig. 1. PRISMA 2020 flow diagram for new systematic reviews, which included searches of databases and registers only.
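For readers who want to check the funnel, the counts reported in Sections 2.1.1 and 2.1.2 add up as follows (plain arithmetic; no assumptions beyond the numbers in the text):

```python
# The screening funnel from Sections 2.1.1-2.1.2, as plain arithmetic.
records = 1415        # hits across the four databases (Table 1)
records -= 720        # duplicates removed (1415 - 695)
assert records == 695 # unique articles entering screening
records -= 379        # step 1: excluded on title and abstract
records -= 4          # step 2: full text not retrievable
records -= 276        # step 3: full-text eligibility criteria applied
assert records == 36  # included studies, using 35 distinct instruments
```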

2.2. Data extraction and content analysis

After concluding the systematic review and identifying the 35 available survey instruments used by researchers to measure TI from
2010 to 2021, we analyzed them by applying content analysis focusing on the aspects addressed in the research questions.
As a first step, we collected general information about the instruments. If the studies themselves contained the complete documentation of the instrument, we relied on them; if they did not, we considered either the most recent publication containing the full instrument available online or the original one (e.g., CBAM LoU, Christensen et al., 2001). In both cases, the publication referenced was retrieved and coded. We assessed only the parts of the instruments referred to in the studies identified in the systematic review. For each instrument, we collected the following information: the name of the instrument (when available), references of studies (found with the systematic review) using the instrument, and the reference of the publication used for data extraction. We also assigned an identification number to each instrument (see Table 2).

Table 2
Survey instruments found with the systematic review.

Id. | Name | Publication for data extraction | Studies using the instrument | Notes
1 | Implementation of Technology in Class | Almalki (2020) | Almalki (2020) |
2 | No Name | Apostolou (2020) | Apostolou (2020) |
3 | No Name | Blau and Shamir-Inbal (2017) | Blau and Shamir-Inbal (2017) |
4 | SoCQ | Chen and Jang (2014) | Chen and Jang (2014) |
5 | TPACK | Chen and Jang (2014) | Chen and Jang (2014) | Modified from Schmidt et al. (2009)
6 | Concerns-Based Adoption Model-Level of Use (CBAM LoU) | Christensen et al. (2001) | Bauer (2013), Christensen and Knezek (2017a) |
7 | Stage of Adoption of Technology (SoA) | Christensen and Knezek (2017a) | Christensen and Knezek (2017a), Christensen and Knezek (2017b), Cifuentes et al. (2011), Agyei and Voogt (2011), Farjon et al. (2019) |
8 | Mobile Learning Readiness Survey (MLRS) | Christensen and Knezek (2017b) | Christensen and Knezek (2017b) |
9 | TI-ICT tools, TI-Social media, TI-Graphic and dynamic visualizations | Chuang et al. (2015) | Chuang et al. (2015) |
10 | ICT Integration in Teaching and Learning Process | Ghavifekr et al. (2014) | Ghavifekr et al. (2014) |
11 | No Name | Hechter and Vermette (2014) | Hechter and Vermette (2014) |
12 | Integration of Technology in the Classroom Scale | Hogarty et al. (2003) | Besnoy et al. (2012) |
13 | Teacher Technology Practices (TTP) | Howard et al. (2015) | Howard et al. (2015) |
14 | No Name | Howley et al. (2011) | Howley et al. (2011) |
15 | ICT integration proficiency scale | Hsu (2010) | Hsu (2010) |
16 | ICT integration proficiency scale | Hsu (2017) | Hsu (2017) | Updated and revisited from Hsu (2010)
17 | No Name | Shamir-Inbal and Blau (2017) | Shamir-Inbal and Blau (2017) |
18 | Technology Integration Scale | Karaca et al. (2013) | Karaca et al. (2013) |
19 | Smart Classroom Inventory | Li et al. (2015) | Li et al. (2015) |
20 | Technology Integration Implementation | Liu (2012) | Liu (2012) |
21 | IMPLE implementation of technology integration | Liu (2016) | Liu (2016) |
22 | Level of ICT integration in L2 teaching | Men and Noordin (2019) | Men and Noordin (2019) |
23 | No Name | Mwila (2018) | Mwila (2018) |
24 | No Name | Olugbara and Letseka (2020) | Olugbara and Letseka (2020) |
25 | ICT integration in the classroom | Padayachee (2017) | Padayachee (2017) |
26 | Web 2.0 tools integration instrument (WTII) | Pan and Franklin (2011) | Pan and Franklin (2011) |
27 | Technology use by students | Pittman and Gaines (2015) | Pittman and Gaines (2015) |
28 | TPACK | Schmidt et al. (2009) | Farjon et al. (2019) |
29 | TEDDICS | Siddiq and Scherer (2016) | Siddiq and Scherer (2016) |
30 | Learning with ICT | Thannimalai and Raman (2018) | Thannimalai and Raman (2018) | Adaptation of Proctor et al. (2003)
31 | ICT Class Use Scale: PC2 | van Braak et al. (2004) | Sinatra et al. (2017) |
32 | E-capacity Measurement Scales | Vanderlinde and van Braak (2010) | Aydin et al. (2016) |
33 | SoA Revisited | Wang (2013) | Wang (2013) | Revisited version of Christensen and Knezek (2017a)
34 | Actual behaviour | Wang et al. (2019) | Muhaimin et al. (2020) |
35 | No Name | Williams et al. (2014) | Williams et al. (2014) |
In the second step, we analyzed the instruments by applying content analysis (following the steps taken by Tang et al., 2020). We first familiarized ourselves with the collected material (the journal articles found with the systematic review and the instruments’ documentation). Second, we developed a coding scheme to systematically extract information from our material, taking into account our research questions and the specificity of our data. Third, we applied the coding scheme to all the collected material. During this process, we improved the initial coding scheme (by adding, removing, and clarifying categories) and started over again with the information extraction procedure. Fourth, after reaching a stable coding scheme, two coders individually coded the same randomly selected 8 instruments (22.86%) and reached an overall inter-rater agreement of 96.54%. Any discrepancies were discussed and resolved. Finally, one coder coded the remaining instruments.
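As an illustration of the agreement statistic (our sketch with hypothetical coding decisions; the review reports only the overall figure of 96.54%), simple percentage agreement can be computed as follows:

```python
# Illustrative sketch: simple percentage agreement between two coders.
# The coding decisions below are hypothetical, not the review's data.
def percent_agreement(codes_a, codes_b):
    """Share of coding decisions on which both coders agree."""
    assert len(codes_a) == len(codes_b)
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100 * matches / len(codes_a)

coder_a = ["yes", "no", "vague", "yes", "no"]
coder_b = ["yes", "no", "yes",   "yes", "no"]
print(f"{percent_agreement(coder_a, coder_b):.2f}%")  # -> 80.00%
print(f"{8 / 35:.2%} of instruments double-coded")    # -> 22.86%
```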
For each instrument, we extracted a) general information (including information about the instruments’ theoretical and conceptual
underpinnings); b) dimensions of TI measured (teacher-related aspects, student-related aspects, and school-related aspects); and c)
information about aspects related to the quality of TI in instructional practices.

2.2.1. General information


For each instrument, we collected the following information: number of items; number of dimensions measured; whether the instrument measures TI holistically; whether the instrument measures TI as a process with different stages; whether information about validity is available; whether information about reliability is available; whether the instrument is based on a theory/concept of TI; whether the instrument is based on another theory/concept (e.g., technology use, diffusion of innovations); whether the instrument is based on an educational document, program, or curriculum; whether the instrument is based on other instruments; school level; type of participants; and number of participants. For almost every aspect, we used the following answer options/codes: yes, no, vague/not clear. Table 3 indicates the meaning of the different codes in cases in which it is not intuitive.

Table 3
Coding scheme for general information.

Aspect | Code | Explanation
Holistic instrument | Yes | The instrument considers both context and people factors and more than two dimensions/aspects of technology integration.
 | No | The instrument considers only contextual factors or people factors, or the instrument considers fewer than three aspects/dimensions of technology integration.
Technology integration as a process | Yes | The instrument understands technology integration as a process and measures in which part of the process (or in which step/level) the participants are.
 | No | The instrument understands technology integration as something static (a state of being, a result, or a condition).
Information about validity | Yes | Factor analysis, correlations, or other validation procedures are reported and documented.
 | No | No validation procedure is reported.
 | Vague | The instrument was only validated by experts, or no details about the validation procedure are given.
Information about reliability | Yes | Cronbach’s alpha, McDonald’s omega, or other consistency coefficients are reported.
 | No | No reliability/consistency coefficients are reported.
 | Vague | No details, no clarity on how the values were calculated, only an overall Cronbach’s alpha, or unclear which consistency/reliability measure was used.
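For reference, the most commonly reported of these consistency coefficients, Cronbach’s alpha, takes the standard textbook form (a general formula, not tied to any of the reviewed instruments):

```latex
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
```

where $k$ is the number of items, $\sigma^{2}_{Y_i}$ the variance of item $i$, and $\sigma^{2}_{X}$ the variance of the total score.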

2.2.2. Dimensions of TI

2.2.2.1. Teacher-related aspects. For each instrument, we examined whether the instrument considers teachers’ attitudes or beliefs
toward TI-related practices, teachers’ actual technology practices for teaching-related purposes, teachers’ TI-related knowledge, skills,
competencies or self-efficacy, teachers’ professional development related to TI, and teachers’ reflectiveness toward their TI practices.

2.2.2.2. Student-related aspects. For each instrument, we examined whether the instrument considers students’ attitudes or beliefs toward TI-related practices, students’ actual technology practices for school-related purposes, students’ TI-related knowledge, skills, competencies or self-efficacy, and students’ motivation or engagement.

2.2.2.3. School-related aspects. For each instrument, we examined whether the instrument considers the school ICT infrastructure,
leadership-related factors such as school vision and policy, and school ICT culture (e.g., aspects such as teachers’ support and collaboration among teachers).

2.2.3. Aspects related to the quality of TI in instructional practices


For each instrument, we examined a) whether the instrument considers the general quality of TI, b) the effectiveness of teaching and learning practices, c) the use of technology to support different instructional activities, d) the use of technology to implement and emphasize constructivist/student-centered pedagogies, e) students’ activities requiring cognitive activation/higher-order thinking skills, f) students’ support, g) classroom management, h) differentiation of teaching practices, i) students’ technology use for investigations and (re)searches online, j) students’ technology use for self-regulated learning, k) students’ technology use for collaboration and cooperation, l) the classroom use of simulations, animations, and visualizations, and m) the emphasis on media literacy, safety, and ethics. We coded these aspects according to the coding scheme presented in Table 4.

Table 4
Coding scheme for aspects related to the quality of technology integration. For every aspect, the code “No” means that no item addresses the aspect, and “Vague” means that it is unclear whether the instrument takes the aspect into account; the “Yes” codes are defined below.

Aspect | “Yes” is coded when
General quality of technology integration | At least one item asks about the general quality of TI.
Effectiveness/teaching and learning enhancement | At least one item asks whether teaching or learning practices are enhanced/more effective through the use of technology.
Pedagogy/teaching methods | At least one item asks about the use of technology to support different instructional activities (group work, being creative, individual work, exercises, drill & practice, frontal teaching, etc.), or at least one item asks whether technology is integrated/used to achieve pedagogical goals.
Emphasis on constructivist/student-centered pedagogies | Most of the pedagogy items are about constructivist/student-centered pedagogies (“No” here means that no item or only a few items address this aspect; “Vague” means that it is unclear whether the instrument puts a particular emphasis on such pedagogies).
Cognitive activation/higher-order thinking skills | At least one item asks whether technology is used to foster metacognition, to carry out activities requiring cognitive activation or higher-order thinking skills, or to enhance cognitive activation in a general way.
Students’ support | At least one item asks whether technology is used to support students.
Classroom management | At least one item asks about the quality of classroom management with technology.
Differentiation/adaptive teaching | At least one item asks whether technology is used to differentiate teaching practices/make teaching practices more adaptive.
Investigations/(re)search | At least one item asks whether students use technology to carry out investigations/online (re)searches.
Independent/self-regulated learning | At least one item asks whether technology is used to foster/enable self-regulated learning.
Students’ collaboration/cooperation | At least one item asks whether students use technology to collaborate.
Simulations, animations and visualizations | At least one item asks whether technology is used for visualizations and simulations.
Emphasis on media literacy, safety and ethics | At least one item asks about the emphasis on media education/digital literacy/ethical or health aspects of TI in classroom practices (e.g., impact of media on society, ability to recognize fake news, responsible and healthy use of media, etc.).
The categories cognitive activation, students’ support, and classroom management were deductively inferred and are based on the German framework of the Three Basic Dimensions of Teaching Quality (Praetorius et al., 2018); all the other categories were inductively inferred
from the analyzed material.
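To make the coding output concrete, here is a minimal sketch (hypothetical data; the aspect keys loosely follow Table 4’s labels) of how per-aspect counts such as those reported in Section 3 can be tallied from the Yes/No/Vague codes:

```python
# Illustrative sketch: tallying "yes" codes per quality aspect across
# instruments. The codings below are invented for demonstration only.
codings = [
    {"pedagogy": "yes", "cognitive_activation": "yes",   "collaboration": "no"},
    {"pedagogy": "yes", "cognitive_activation": "vague", "collaboration": "yes"},
    {"pedagogy": "no",  "cognitive_activation": "no",    "collaboration": "yes"},
]

def count_yes(codings, aspect):
    """Number of instruments coded 'yes' for one aspect."""
    return sum(coding.get(aspect) == "yes" for coding in codings)

for aspect in ("pedagogy", "cognitive_activation", "collaboration"):
    print(aspect, count_yes(codings, aspect))
```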

3. Results

In this section, we present the results of the study, organized by research question.

a) Which quantitative instruments have researchers used to measure TI in K–12 education from 2010 to 2021?

Table 2 contains the survey instruments that were identified in the systematic literature review. Apart from the Stages of Adoption of Technology (SoA) (Christensen, 1997; Christensen and Knezek, 2017a), which has been used five times, and the CBAM-LoU (Christensen et al., 2001), which has been used twice, all instruments were used only once in the included studies.

b) What are their main properties?

Table 5 shows that the instruments are composed of 1–70 items and measure from 1 to 10 dimensions of TI. Eight of the instruments
measure TI as a holistic construct that includes both school-related and people-related factors. Four instruments measure TI as a
process, while 31 measure TI as a state or result. Most instruments were developed for primary and secondary education levels.
Twenty-six instruments were designed for in-service teachers, five for pre-service teachers, two for students, and two for school
principals or ICT coordinators. Only a few instruments have been developed to measure TI in upper secondary schools or kindergartens. Furthermore, 10 survey instruments have vague or no indications of validity, and 11 have vague or no indications of
reliability.

Table 5
General information about the instruments (Ids 1–35; X = Yes; ? = Vague/Not clear).

N. of items (Ids 1–35, in order): 7, 8, 18, 22, 35, 1, 1, 28, 14, 19, 11, 12, 27, 15, 40, 28, 10, 10, 36, 4, 6, 13, 8, 3, 32, 6, 17, 8, 12, 20, 8, 45, 1, 10, 70.
N. of dimensions: 1, 8, 3, 5, 7, 1, 1, 4, 3, 1, 3, 1, 5, 1, 6, 6, 5, 1, 10, 1, 3, 1, 1, 1, 4, 1, 2, 1, 3, 1, 1, 8, 1, 1, 5.
N. of participants: 38, 89, 392, 605, 605, 566, 1430, 1430, 320, 61, 433, 1850, 3624, 500, 3729, 5938, 358, 1030, 409, 401, 362, 72, 100, 100, 114, 461, 75, 124, 1072, 213, 468, 471, 275, 462, 420.
Further rows mark, per instrument: holistic instrument; TI as process; information about validity; information about reliability; conceptual and theoretical underpinnings (based on a theory/concept about TI, on another theory/concept, on a pedagogical document/program/curriculum, or on previous instruments); educational level (kindergarten, primary school, secondary school, upper secondary school); and participants (in-service teachers, pre-service teachers, students, school principals, ICT coordinators and facilitators). [Per-instrument X/? marks are not recoverable from the flattened layout.]

c) Are the instruments based on concepts or theories?

According to Table 5, 10 instruments are based on TI-specific concepts or theories (e.g., TPACK), 7 are based on other more general
theories (e.g., diffusion of innovations), and 4 are based on specific educational programs, documents, or curricula. Nine instruments
have no theoretical or conceptual underpinnings but are based on previous instruments, and the other nine instruments are neither
based on concepts or documents nor on previous instruments.

d) What dimensions/aspects of TI are measured?

A clear trend emerges from the data reported in Table 6: most instruments (n = 32) relate to classroom practices (i.e., teachers’ and students’ technology practices for teaching and learning purposes). All instruments but one consider at least one
teacher-related factor. Ten instruments do not include student-related aspects. Only eight instruments comprise factors related to the
school context.

Table 6
Dimensions of TI measured by the instruments (Ids 1–35; X = Yes; ? = Vague/Not clear).
Rows cover teacher-related factors (attitudes and beliefs; technology use for teaching-related purposes; knowledge/skills/competencies/self-efficacy; reflection about TI practices; professional development), student-related factors (attitudes and beliefs; technology use for school learning; knowledge/competencies/skills/self-efficacy; motivation/engagement), and school-related factors (ICT infrastructure; school vision, policy and leadership; school support and teachers’ collaboration). [Per-instrument X/? marks are not recoverable from the flattened layout.]

e. Which aspects related to the quality of TI in instructional practices are measured?

Most of the instruments analyzed measure at least one aspect related to the quality of TI in instructional practices (n = 31). Two ask about the perceived general quality of TI, each with a single item; ten instruments ask the participants whether TI makes teaching or learning more effective. Twenty-nine instruments measure (with at least one item) some pedagogical aspects, and six of them focus particularly
on constructivist pedagogies.
Among the most considered pedagogical aspects of TI are using technology to carry out activities requiring students’ cognitive
activation and higher-order thinking skills (n = 16), using technology to promote collaboration and cooperation between students (n =
14), and using technology to allow students to carry out investigations and (re)searches online (n = 13). From the perspective of the
three basic dimensions of teaching quality (Praetorius et al., 2018), many of the instruments measure the use of technology to foster
cognitive activation or higher order thinking skills. However, only eight instruments measure whether teachers use technology to
support students in some way, and only one instrument considers the use of technology in relation to classroom management.
In addition, eight instruments probe whether technologies are used to increase teaching differentiation, three ask whether they are used to promote independent/self-regulated learning among students, and eight ask whether digital technologies are used for visualizations and
simulations. Finally, five instruments measure whether issues such as the ethics and safety of technology use, the evaluation of the
reliability of online sources, or the impact of technology on society are addressed in teaching.

4. Discussion

In this study, we conducted a systematic review to identify survey instruments used to measure TI from 2010 to 2021 in K–12 educational
research. Our review yielded 35 complete instruments available online in English. The instruments were analyzed following a content
analysis methodology. The results highlight several underlying problems in TI research, which will be addressed in response to each of
our research questions.

a) Which quantitative instruments have researchers used to measure TI in K–12 education from 2010 to 2021?

The results of our systematic review (see Table 2) revealed that only the SoA (Christensen and Knezek, 2017a) and CBAM-LoU
(Christensen et al., 2001) were used in more than one study. Therefore, it seems that these two instruments are the most
well-known and established instruments for measuring TI. Nevertheless, considering the recent shift of research focus toward the quality of TI in instructional practices and the importance of addressing aspects related to the quality of TI, their use may be problematic, as they
do not properly measure pedagogical or other aspects related to the quality of instructional practices. For example, in the CBAM-LoU
instrument, which was developed for teachers, the only answer option that might correspond to the consideration of pedagogical or
instructional design aspects is the following:
“I vary the use of information technology in education to increase the expected benefits within the classroom. I am working on using information technology to maximize the effects with my students.” (Christensen et al., 2001)
However, no details are asked about the concrete integration of technologies for teaching or learning practices. Furthermore, it is
not clear what is meant by “benefits” and “effects.” Another problem with these two instruments is that they both measure the attitudes
and self-efficacy of teachers more than their actual behaviors or practices. For example, even if a teacher recognizes themselves in the
last stage of the SoA instrument (item wording of Stage 6: “I can apply what I know about technology in the classroom. I am able to use it as an instructional tool and integrate it into the curriculum”), there is no certainty that the teacher is actually integrating technology
into their lessons, only that they feel able to do so. Petko (2012) has already raised similar issues regarding the SoA instrument. He
pointed out that the use of the SoA instrument in the will, skill, and tool model can lead to invalid results, as the presence of measures
related to teacher attitude and competence in both the independent and the dependent variables may lead to issues of autocorrelation.
Further, it must also be highlighted that the SoA instrument is based on a theory about the learning stages of adults starting to use e-mail (Russell, 1995), and the CBAM-LoU instrument is based on a general theory about innovation adoption (Hall et al., 1975). Thus,
both instruments are based on general theories and not on theories specific to teaching or TI in a formal educational context.
Lastly, both instruments conceive of TI as the process teachers go through when they start integrating technology into their teaching practices. For this reason, we believe that the two instruments are only usable in studies with teachers who began their careers
without integrating digital technologies in their teaching practices. Therefore, it might be inappropriate to apply these measures in
studies with teachers who have learned to integrate digital technologies from the beginning of their careers or in contexts where the
use of technology is mandatory, ubiquitous, or taken for granted.
The fact that all the other instruments from the systematic review were used only once indicates that, regardless of the many different instruments developed and available, none of them has established itself as a reference or standard instrument for measuring TI and that several studies do not consider it necessary to use established instruments. This fact could further be interpreted as an indicator of
the great heterogeneity of research on TI.

b) What are their main properties?

Our analysis showed that some instruments measure TI as a state and others as a process and that some studies measure TI as a
holistic construct, whereas others capture it as a particular construct. This highlights the lack of a shared understanding of how to
define and measure TI. As explained regarding the SoA instrument, TI is sometimes conceptualized as the process that teachers go
through as they begin to use technologies in their teaching practices. In other instruments, however, TI is the frequency of use of
technologies in classroom practices (e.g., Id. 34 – Table 2), and for many others, it is a complex and holistic phenomenon that involves
factors related to teachers, students, and school culture (e.g., Id. 32). The results presented in Tables 6 and 7 reveal that TI can be
conceptualized with yet additional meanings (e.g., the use of technology to enhance learning and teaching practices or the transformative use of technology to make teaching and learning practices more constructivist). We believe that the existence of different
instruments that measure such diverse aspects of TI is not problematic. What is problematic is that all these instruments claim to
measure TI. The problem thus lies mainly in the lack of precision in the use of the term “TI."
As teachers are regarded as playing a key role in the TI process (Niederhauser & Lindstrom, 2017), it is understandable that most of
the instruments have been developed for them. The scarcity of instruments targeting school leaders and ICT coordinators (Id. 3 and 17)
probably results from the fact that the school factors of TI are often overlooked (Tondeur et al., 2008).
The few instruments targeting kindergarten and upper secondary school may indicate that the research focused primarily on
primary and secondary school, leaving other school levels under-researched.
About one-third of the studies lacked indications of the validity or reliability of the instruments. In some cases (e.g., Id. 23), this deficiency went hand in hand with the poor quality of the studies, and in others (e.g., Id. 1, 2, 3, 11), it seems that the authors of the studies did not intend to develop statistically valid instruments but simply to describe a specific situation.
Our results confirm the findings of Lai and Bower (2019), who found a huge diversity of instruments but only a few that consider the
school level and assess the use of technologies holistically.

c) Are the instruments based on concepts or theories?

The lack of conceptual or theoretical basis in many instruments is in line with the findings of Hew et al. (2019), who found a general
lack of theoretical underpinnings in the field of educational technology. In the screening procedure, we found that several studies used
the term “integration” as a synonym for “use” (e.g., Id. 27), and it seems that they consider its meaning to be something self-evident
that does not need any definition or explanation. Consequently, many studies do not adequately define and operationalize the term TI; instead, they formulate the instruments’ items intuitively or apply existing instruments without reflecting on their appropriateness and validity.
The examined studies and instruments often do not refer to theories. When they do, they are often not specific to educational and
learning science but are more general (e.g., the theory of planned behavior or the theory of diffusion of innovation). Moreover, in some
cases, the studies are based on frameworks and models (e.g., the TPACK framework and the Will Skill Tool model) rather than on
elaborated explanatory theories (for a more detailed discussion of the difference between theories, frameworks, and models, see
Tondeur et al., 2021). Thus, the contribution of the examined studies to the development of educational theories is often minimal or
completely absent.
Interestingly, four of the instruments are based on curricula or educational programs (Id. 3, 13, 15, 17). These instruments measure
whether technology is used to achieve both technology-specific and more general learning goals set by a program or curriculum. With
such assessments, it is possible to measure whether teachers integrate technology to ensure good teaching as well as effective teaching
(see Berliner, 2005). However, these instruments can be used only where certain programs and curricula are being implemented, and
the results obtained are, therefore, context-specific and not generalizable.

d) What dimensions/aspects of TI are measured?

Although the instruments are quite different, they all tend to focus mainly on factors related to teachers and classroom practices,
leaving other aspects (e.g., school-related factors) under-investigated. The lack of attention to school-level aspects of TI was previously
observed by Tondeur et al. (2008). The fact that 10 instruments do not consider any aspect related to students could be an indicator
that the scholars who developed them hold a teacher-centered view of teaching. However, several researchers argue that digital technologies are highly suitable for student-centered and constructivist teaching and learning practices (Jonassen et al., 1999; Keengwe et al.,
2008; Mustafa & Fatma, 2013; Pourhosein Gilakjani et al., 2013). These practices require students to use technology themselves. Thus,
assessing students’ technology-related activities in the classroom is often seen as crucial. Furthermore, having items that also consider
students’ and not just teachers’ TI practices is important to reduce the risk of social desirability bias that might occur when using
self-reporting instruments in which teachers have to assess their own TI practices (Howley et al., 2011; Mueller et al., 2008; Pittman &
Gaines, 2015; Retelsdorf et al., 2010).
The lack of consideration given to students’ digital competencies and 21st-century skills within measures of TI might be prob­
lematic, as several scholars have stated that they are of primary importance and the ultimate goal of integrating digital technologies
into teaching and learning practices (Duran, 2022; Fraillon et al., 2014, 2019; Voogt et al., 2013).

e. Which aspects related to the quality of TI in classroom practices are measured?

Contrary to what the current debate around the quantity and quality of TI might suggest, our systematic review shows
that instruments measuring pedagogical or other aspects related to the quality of TI have been used for several years (e.g., Id. 3, 9, 15,
or 30, which is based on Proctor et al., 2003). However, these instruments are sometimes based on specific pedagogical programs and
curricula, and thus the results obtained are not generalizable. In other cases (e.g., Id. 1, 10, 12), studies using instruments measuring
aspects related to the quality of TI do not contain any conceptual or theoretical reflections on the quality of TI and, for this reason, have
probably not been considered in the current debate about the quantity and quality of TI.
It should also be mentioned that, often, only a small proportion of an instrument’s items measure aspects related to the quality of TI in instructional practices. For example, in the instrument developed by Men and Noordin (2019), only 2 items (of 13) measure overtly
pedagogical aspects (“Drill and Practice” and “Game-based learning”); all other items ask about the frequency of use of different
devices and applications.
Interestingly, our review shows that six instruments focus on using technology to implement constructivist or student-centered pedagogies (Table 7).

Table 7
Aspects related to the quality of TI in instructional practices, by instrument (Id. 9–35): general quality; effectiveness/teaching and learning enhancement; pedagogy/teaching methods; emphasis on constructivist/student-centered pedagogies; cognitive activation/higher-order thinking skills; students’ support; classroom management; differentiation/adaptive teaching; investigations/(re)search; independent/self-regulated learning; students’ collaboration/cooperation; simulations, animations, and visualizations; emphasis on media literacy, safety, and ethics. X = Yes; ? = Vague/Not clear.

Given that digital technologies are highly suitable for student-centered and constructivist teaching and learning practices
(Jonassen et al., 1999; Keengwe et al., 2008; Mustafa & Fatma, 2013; Pourhosein Gilakjani et al., 2013), this focus reflects some of the
trends and consensus we also find in TI research.
Regarding the three basic dimensions of teaching quality (Praetorius et al., 2018), the use of technology to foster students’ cognitive activation seems to be the dimension most frequently considered by the instruments. This aspect is also reflected in the instruments’ focus on constructivist
pedagogies, which imply students’ active engagement in constructing their knowledge, and in the presence of items that assess specific
activities with a higher potential for cognitive activation, such as students’ technology use to conduct online (re)searches and in­
vestigations or to collaborate online. In contrast, the other two dimensions of the framework are seldom
considered. This is problematic because classroom management and student support are crucial dimensions in ensuring quality
teaching and good learning outcomes (Klieme, 2018; Praetorius et al., 2018).
Among the aspects related to the quality of TI measured by the instruments, we found some related to the affordances of digital
technologies for teaching and learning practices. The term “digital affordances” in the context of learning theories refers to the specific
potential of digital technology to change the way we learn and teach (Berthelsen & Tannert, 2020). Previous research has identified
numerous digital affordances in the school context (see, for example, Cope & Kalantzis, 2017). In our content analysis, we found the
following aspects regarding digital affordances: the use of technologies to promote connectivity and collaboration among students; the
use of technologies to enable simulations, animations, and visualizations; the use of technologies to increase differentiation in teaching practices; and the use of technology to promote self-regulated and independent learning. The presence of these affordances is important because it highlights that, for many scholars, TI means not only using technologies to reproduce or enhance traditional teaching and learning practices but also extending and developing those practices by considering the new potentials of digital
technologies.
Digital technologies not only have the potential to change the way we teach and learn, but they are also radically changing our
society, bringing with them both new opportunities and new risks. Therefore, some educational technology experts have suggested that digital media education should not only help learners understand how technology works but also teach them how to use it and make them aware of its effects on society (Lee, 2010; Missomelius, 2016). Furthermore, competencies such as being able to handle digital information (i.e., to
access, evaluate, share, and communicate it) are crucial for actively participating in 21st-century society (Fraillon et al., 2019). The
fact that only five instruments consider aspects related to the ethical, societal, and security implications of TI is thus problematic.
Finally, our analysis highlights different ways to operationalize aspects related to the quality of TI in instructional practices. To
measure the quality of TI, survey instruments can, for example, ask whether digital technologies are used to achieve specific learning
goals, to make teaching and learning practices more effective, to sustain dimensions of teaching quality (e.g., cognitive activation, classroom management, and student support), to support specific pedagogies (e.g., constructivist pedagogies), to exploit the affordances of digital technologies, to address ethical and safety aspects related to the use of digital technologies, or to foster students’ digital and media
literacies. This variety highlights the lack of a shared concept of the quality of TI, but on the other hand, it may inspire scholars who
intend to develop survey instruments to measure aspects related to the quality of TI.

5. Limitations

The present study has several limitations. One of these is that only studies and instruments published in English were considered.
This likely leads to an underrepresentation of countries in which English is not, or is only rarely, the language of research publication. Another limitation is that we focused only on peer-reviewed journal articles. This choice was made with the intent to examine only scientifically sound studies and instruments, but it led to filtering out the gray literature and probably also studies that report non-significant results (publication bias). Another limitation lies in the accessibility of the instruments; as per our screening
present non-significant results (publication bias). Another limitation lies in the accessibility of the instruments; as per our screening
process, only those fully available online were considered. However, this also raises the question of research transparency and the need
for authors to share their material to allow for further analysis or replication.
The lack of a shared definition of TI highlighted by this study is also an important limitation of the study itself, for which screening had to be conducted at a linguistic level (i.e., based on the use of the term “technology integration”) rather than on conceptual or theoretical aspects. As a result, studies and instruments that researchers may consider as investigating TI but that do not explicitly
refer to the term “TI” in their title or abstract were filtered out.
Finally, we opted to focus on quantitative survey instruments, but other types of instruments (e.g., rubrics or qualitative measures)
are also used (see, for example, Parker et al., 2019). Therefore, we believe that future research should also consider these different
types of instruments and examine whether there are differences between different research methods and approaches.

6. Conclusions and recommendations

This study illustrates several underlying problems of TI research and may help researchers to select appropriate instruments
measuring TI or to develop new instruments measuring the quantity and quality of TI. Furthermore, by highlighting several ways to
operationalize aspects related to the quality of TI in instructional practices, our study also contributes to shifting the focus of TI
research from purely technological aspects (such as the frequency of use of different devices) to more qualitative ones. Focusing more
on pedagogical and other aspects related to the quality of TI can also be useful for practitioners seeking to improve teaching and
learning practices through technology. Several recent studies have empirically demonstrated that TI quality positively impacts student
learning outcomes (Fütterer et al., 2022; Juuti et al., 2022; Stegmann, 2020).
One of the biggest issues highlighted by the study is the lack of a shared conceptualization of TI and of the quality of TI. To provide greater clarity, we recommend that researchers precisely explain what they mean by the term “TI”—the use of
technology? The process of teachers starting to use technology in teaching? The degree to which technology is integrated into
pedagogy and vice versa? The use of technology to enhance teaching and learning practices? The digital transformation of school
culture? The use of technology to improve students’ media literacy? All of these constructs are different, but, as we expounded earlier,
they are so far still indiscriminately labeled as TI.
Based on the definitions reported at the beginning of the article and the results of the present content analysis, we propose to define
TI as the use of technology in educational contexts to support educational goals, or as the process leading to such use. Our results pointed out that, even if it is often conceptualized as a process, TI is typically measured as a state at a given stage or as a result. This definition is broad
enough to be appropriate for most of the instruments analyzed and implies considering aspects related to the quality of TI. As an
extension of the definition provided by Davies and West (2014), it stresses the role of the alignment of technology use to the
educational context, such as teaching methodologies, pedagogy, teachers’ and students’ skills and beliefs, students’ characteristics and
learning outcomes, school culture and infrastructure, educational environment, curriculum, and assessment. Whether technology use
succeeds in effectively supporting educational goals might depend on how well it is integrated and aligned with important aspects of
the educational context. As the term “technology integration” implies, attention to the educational context might be the defining feature of research on TI in comparison to other strands of research on educational technology and technology use in
general. To avoid ambiguity, we also find it important to always state whether TI is conceptualized as a holistic or as a particular
phenomenon. Finally, we recommend that the instruments used in the studies be fully published and documented within the study so
that it is possible to determine what exactly is measured and how.
Similar to Hew et al. (2019), our study also showed that many of the studies analyzed had little to no theoretical underpinning. To
contribute to theory development about TI, researchers should develop valid and reliable instruments based on theories, frameworks,
models, or concepts. Further, if the purpose of a study is to develop a theory, the study should seek to obtain generalizable results and
not just to describe a particular situation.
Overall, our study showed that in most cases, TI instruments focus on classroom practices and measure TI from the teachers’
perspective. Future studies, including the development of new TI instruments, could therefore consider aspects of the concept that have
so far received little attention (e.g., school-level aspects or student digital competencies). The PISA 2021 ICT framework (Lorenceau
et al., 2019), which was developed to assess TI in an educational context, conceptualizes TI as a holistic phenomenon and offers an
overview of additional aspects of TI that can be studied. Regarding the quality of TI in instructional practice, future studies should
more closely consider the dimensions of classroom management and student support. More instruments could also be developed to
measure TI from the perspectives of students and school leaders.
Another perspective derived from this study is the combined use of different instruments. In this way, it would be possible to show
the complexity of TI while cross-validating the instruments.

Credit author statement

Tessa Consoli: Conceptualization, Methodology, Formal analysis, Investigation, Writing – original draft, Writing – review &
editing, Visualization. Juliette Désiron: Validation, Formal analysis, Writing – review & editing, Supervision. Alberto Cattaneo:
Supervision, Validation, Writing – review & editing, Funding acquisition.

Data availability

Data will be made available on request.

Acknowledgements

This work was supported by the Swiss National Science Foundation (grant number 187277).

Appendix A. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.compedu.2023.104742.

References

Agyei, D. D., & Voogt, J. M. (2011). Exploring the potential of the will, skill, tool model in Ghana: Predicting prospective and practicing teachers’ use of technology.
Computers & Education, 56(1), 91–100. https://doi.org/10.1016/j.compedu.2010.08.017
Almalki, A. (2020). Integration of technology among Saudi EFL teachers. English Language Teaching, 13(8), 160–167.
Antonietti, C., Schmitz, M. L., Consoli, T., Cattaneo, A., Gonon, P., & Petko, D. (2023). Development and validation of the ICAP Technology Scale to measure how
teachers integrate technology into learning activities. Computers & Education, 192, Article 104648.
Apostolou, C. (2020). The level of ICT infrastructure as a factor of ICT integration in Greek high school science teaching. Electronic Journal of E-Learning, 18(6),
562–574. https://doi.org/10.34190/JEL.18.6.008


Aydin, M. K., Gürol, M., & Vanderlinde, R. (2016). Evaluating ICT integration in Turkish K-12 schools through teachers’ views. Eurasia Journal of Mathematics, Science
and Technology Education, 12(4), 747–766. https://doi.org/10.12973/eurasia.2016.1227a
Backfisch, I., Lachner, A., Stürmer, K., & Scheiter, K. (2021). Variability of teachers’ technology integration in the classroom: A matter of utility. Computers &
Education, 166, Article 104159. https://doi.org/10.1016/j.compedu.2021.104159
Bauer, W. I. (2013). The acquisition of musical technological pedagogical and content knowledge. Journal of Music Teacher Education, 22(2), 51–64. https://doi.org/
10.1177/1057083712457881
Belland, B. R. (2009). Using the theory of habitus to move beyond the study of barriers to technology integration. Computers & Education, 52(2), 353–364. https://doi.
org/10.1016/j.compedu.2008.09.004
Bennett, S., & Oliver, M. (2011). Talking back to theory: The missed opportunities in learning technology research. Research in Learning Technology, 19(3), 179–189.
Berliner, D. C. (2005). The near impossibility of testing for teacher quality. Journal of Teacher Education, 56(3), 205–213. https://doi.org/10.1177/
0022487105275904
Berthelsen, U. D., & Tannert, M. (2020). Utilizing the affordances of digital learning materials. L1: Educational Studies in Language and Literature, 20, 1–23. https://doi.
org/10.17239/L1ESLL-2020.20.02.03
Besnoy, K. D., Dantzler, J. A., & Siders, J. A. (2012). Creating a digital ecosystem for the gifted education classroom. Journal of Advanced Academics, 23(4), 305–325.
https://doi.org/10.1177/1932202X12461005
Blau, I., & Shamir-Inbal, T. (2017). Digital competences and long-term ICT integration in school culture: The perspective of elementary school leaders. Education and
Information Technologies, 22(3), 769–787. https://doi.org/10.1007/s10639-015-9456-7
Bloom, B. S. (Ed.). (1956). Taxonomy of educational objectives: The classification of educational goals. New York: Longmans, Green.
Bowman, M. A., Vongkulluksn, V. W., Jiang, Z., & Xie, K. (2020). Teachers’ exposure to professional development and the quality of their instructional technology use:
The mediating role of teachers’ value and ability beliefs. Journal of Research on Technology in Education, 54(2), 188–204. https://doi.org/10.1080/
15391523.2020.1830895
Chen, Y.-H., & Jang, S.-J. (2014). Interrelationship between stages of concern and technological, pedagogical, and content knowledge: A study on Taiwanese senior
high school in-service teachers. Computers in Human Behavior, 32, 79–91. https://doi.org/10.1016/j.chb.2013.11.011
Chien, Y. T., Chang, Y. H., & Chang, C. Y. (2016). Do we click in the right way? A meta-analytic review of clicker-integrated instruction. Educational Research Review,
17, 1–18.
Christensen, R. (1997). Effect of technology integration education on the attitudes of teachers and their students. Doctoral dissertation. University of North Texas.
Christensen, R. (2002). Effects of technology integration education on the attitudes of teachers and students. Journal of Research on Technology in Education, 34(4),
411–433. https://doi.org/10.1080/15391523.2002.10782359
Christensen, R., Griffin, D. A., & Knezek, G. (2001). Measures of teacher stages of technology integration and their correlates with student achievement. Paper presented at the 53rd Annual Meeting of the American Association of Colleges for Teacher Education, Dallas, TX, March 1–4.
Christensen, R., & Knezek, G. (2001). Instruments for assessing the impact of technology in education. Computers in the Schools, 18(2–3), 5–25. https://doi.org/
10.1300/J025v18n02_02
Christensen, R., & Knezek, G. (2017a). Validating the technology proficiency self-assessment questionnaire for 21st century learning (TPSA C-21). Journal of Digital
Learning in Teacher Education, 33(1), 20–31. https://doi.org/10.1080/21532974.2016.1242391
Christensen, R., & Knezek, G. (2017b). Readiness for integrating mobile learning in the classroom: Challenges, preferences and possibilities. Computers in Human
Behavior, 76, 112–121. https://doi.org/10.1016/j.chb.2017.07.014
Christensen, R., & Knezek, G. (2018). Measuring teacher attitudes, competencies, and pedagogical practices in support of student learning and classroom technology
integration. In J. Voogt, G. Knezek, R. Christensen, & K.-W. Lai (Eds.), Springer international handbooks of education. Second handbook of information technology in
primary and secondary education (pp. 357–374). Springer International Publishing. https://doi.org/10.1007/978-3-319-71054-9_21.
Chuang, H.-H., Weng, C.-Y., & Huang, F.-C. (2015). A structure equation model among factors of teachers’ technology integration practice and their TPCK. Computers
& Education, 86, 182–191. https://doi.org/10.1016/j.compedu.2015.03.016
Cifuentes, L., Maxwell, G., & Bulu, S. (2011). Technology integration through professional learning community. Journal of Educational Computing Research, 44(1),
59–82. https://doi.org/10.2190/EC.44.1.d
Cope, B., & Kalantzis, M. (Eds.). (2017). E-learning ecologies: Principles for new learning and assessment. Routledge. https://doi.org/10.4324/9781315639215.
Davies, R. S., & West, R. E. (2014). Technology integration in schools. In Handbook of research on educational communications and technology (pp. 841–853). Springer.
https://doi.org/10.1007/978-1-4614-3185-5_68
Davis, F. D. (1985). A technology acceptance model for empirically testing new end-user information systems: Theory and results. Massachusetts Institute of Technology.
https://dspace.mit.edu/bitstream/handle/1721.1/15192/14927137-mit.pdf.
Duran, M. (2022). Technology integration. In Learning technologies (pp. 11–33). Springer, Cham.
Dwyer, D., Ringstaff, C., & Sandholtz, J. (1990). The evolution of teachers’ instructional beliefs and practices in high-access-to-technology classrooms. ACOT Report.
Cupertino, CA: Apple Computer.
Education Resources Information Center. (2022). Thesaurus. Retrieved May 23, 2022, from https://eric.ed.gov/?ti=all
Farjon, D., Smits, A., & Voogt, J. (2019). Technology integration of pre-service teachers explained by attitudes and beliefs, competency, access, and experience.
Computers & Education, 130, 81–93. https://doi.org/10.1016/j.compedu.2018.11.010
Fraillon, J., Ainley, J., Schulz, W., Duckworth, D., & Friedman, T. (2019). IEA international computer and information literacy study 2018 assessment framework. Springer International Publishing. https://doi.org/10.1007/978-3-030-19389-8. https://library.oapen.org/handle/20.500.12657/22874
Fraillon, J., Ainley, J., Schulz, W., Friedman, T., & Gebhardt, E. (2014). Preparing for life in a digital age: The IEA international computer and information literacy study
international report. Springer. https://doi.org/10.1007/978-3-319-14222-7
Fütterer, T., Scheiter, K., Cheng, X., & Stürmer, K. (2022). Quality beats frequency? Investigating students’ effort in learning when introducing technology in
classrooms. Contemporary Educational Psychology, 69, Article 102042. https://doi.org/10.1016/j.cedpsych.2022.102042
Ghavifekr, S., Razak, A. Z. A., Ghani, M. F. A., Ran, N. Y., Meixi, Y., & Tengyue, Z. (2014). ICT integration in education: Incorporation for teaching & learning
improvement. Malaysian Online Journal of Educational Technology, 2(2), 24–45.
Gough, D., Oliver, S., & Thomas, J. (2017). An introduction to systematic reviews (2nd ed.). SAGE.
Hall, G. E., Loucks, S. F., Rutherford, W. L., & Newlove, B. W. (1975). Levels of use of the innovation: A framework for analyzing innovation adoption. Journal of
Teacher Education, 26(1), 52–56. https://doi.org/10.1177/002248717502600114
Hancock, R., Knezek, G. G., & Christensen, R. R. (2007). Cross-validating measures of technology integration. Journal of Computing in Teacher Education, 24(1), 15–21.
https://doi.org/10.1080/10402454.2007.10784579
Hechter, R., & Vermette, L. A. (2014). Tech-savvy science education? Understanding teacher pedagogical practices for integrating technology in K-12 classrooms.
Journal of Computers in Mathematics and Science Teaching, 33(1), 27–47.
Hew, K. F., & Brush, T. (2007). Integrating technology into K-12 teaching and learning: Current knowledge gaps and recommendations for future research. Educational
Technology Research & Development, 55(3), 223–252. https://doi.org/10.1007/s11423-006-9022-5
Hew, K. F., Lan, M., Tang, Y., Jia, C., & Lo, C. K. (2019). Where is the "theory" within the field of educational technology research? British Journal of Educational
Technology, 50(3), 956–971. https://doi.org/10.1111/bjet.12770
Hogarty, K. Y., Lang, T. R., & Kromrey, J. D. (2003). Another look at technology use in classrooms: The development and validation of an instrument to measure
teachers’ perceptions. Educational and Psychological Measurement, 63(1), 139–162. https://doi.org/10.1177/0013164402239322
Howard, S. K., Chan, A., Mozejko, A., & Caputi, P. (2015). Technology practices: Confirmatory factor analysis and exploration of teachers’ technology integration in
subject areas. Computers & Education, 90, 24–35. https://doi.org/10.1016/j.compedu.2015.09.008
Howley, A., Wood, L., & Hough, B. (2011). Rural elementary school teachers’ technology integration. Journal of Research in Rural Education, 26(9).


Hsu, S. (2010). Developing a scale for teacher integration of information and communication technology in grades 1-9. Journal of Computer Assisted Learning, 26(3),
175–189. https://doi.org/10.1111/j.1365-2729.2010.00348.x
Hsu, S. (2017). Developing and validating a scale for measuring changes in teachers’ ICT integration proficiency over time. Computers & Education, 111, 18–30.
https://doi.org/10.1016/j.compedu.2017.04.001
Jonassen, D. H. (1995). Supporting communities of learners with technology: A vision for integrating technology with learning in schools. Educational Technology, 35
(4), 60–63.
Jonassen, D. H., Peck, K. L., & Wilson, B. G. (1999). Learning with technology: A constructivist perspective. Upper Saddle River, NJ: Merrill.
Juuti, K., Kervinen, A., & Loukomies, A. (2022). Quality over frequency in using digital technology: Measuring the experienced functional use. Computers & Education,
176, Article 104361. https://doi.org/10.1016/j.compedu.2021.104361
Karaca, F., Can, G., & Yildirim, S. (2013). A path model for technology integration into elementary school settings in Turkey. Computers & Education, 68, 353–365.
https://doi.org/10.1016/j.compedu.2013.05.017
Keengwe, J., Onchwari, G., & Wachira, P. (2008). Computer technology integration and student learning: Barriers and promise. Journal of Science Education and
Technology, 17(6), 560–565. https://doi.org/10.1007/s10956-008-9123-5
Klieme, E. (2018). Unterrichtsqualität. In M. Harring, C. Rohlfs, & M. Gläser-Zikuda (Eds.), Handbuch Schulpädagogik (utb-studi-e-book: Vol. 8698, pp. 393–408). Stuttgart, Deutschland: utb GmbH.
Lai, J. W., & Bower, M. (2019). How is the use of technology in education evaluated? A systematic review. Computers & Education, 133, 27–42. https://doi.org/
10.1016/j.compedu.2019.01.010
Lee, A. Y. L. (2010). Media education: Definitions, approaches and development around the globe. New Horizons in Education, 58(3), 1–13.
Li, B., Kong, S. C., & Chen, G. (2015). Development and validation of the smart classroom inventory. Smart Learning Environments, 2(1). https://doi.org/10.1186/
s40561-015-0012-0
Liu, S.-H. (2012). A multivariate model of factors influencing technology use by preservice teachers during practice teaching. Educational Technology & Society, 15(4),
137–149.
Liu, S.-H. (2016). Teacher education programs, field-based practicums, and psychological factors of the implementation of technology by pre-service teachers.
Australasian Journal of Educational Technology, 32(3), 65–79. https://doi.org/10.14742/ajet.2139
Lloyd, M. (2005). Towards a definition of the integration of ICT in the classroom. In P. Jeffrey (Ed.), AARE ’05 Education Research: Creative Dissent, Constructive Solutions (pp. 1–18). Australia: Australian Association for Research in Education.
Lorenceau, A., Marec, C., & Mostafa, T. (2019). Upgrading the ICT questionnaire items in PISA 2021 (OECD Education Working Papers, No. 202). OECD Publishing. https://doi.org/10.1787/d0f94dc7-en. https://www.oecd-ilibrary.org/content/paper/d0f94dc7-en
Men, L.-K., & Noordin, N. (2019). Senior meets junior digital natives: Teslians integrating technology in their teaching practice. International Journal of Education and
Literacy Studies, 7(4), 150–157.
Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.
Missomelius, P. (2016). Die Dagstuhl-Erklärung: Erklärung zur Relevanz von Medienbildung. Medienimpulse, 54(1).
Mueller, J., Wood, E., Willoughby, T., Ross, C., & Specht, J. (2008). Identifying discriminating variables between teachers who fully integrate computers and teachers
with limited integration. Computers & Education, 51(4), 1523–1537. https://doi.org/10.1016/j.compedu.2008.02.003
Muhaimin, A., Habibi, A., Mukminin, A., & Hadisaputra, P. (2020). Science teachers’ integration of digital resources in education: A survey in rural areas of one
Indonesian province. Heliyon, 6(8). https://doi.org/10.1016/j.heliyon.2020.e04631
Mustafa, E., & Fatma, E. N. (2013). Instructional technology as a tool in creating constructivist classrooms. Procedia - Social and Behavioral Sciences, 93, 1441–1445.
https://doi.org/10.1016/j.sbspro.2013.10.060
Mwila, P. (2018). Assessing the attitudes of secondary school teachers towards the integration of ICT in the teaching process in Kilimanjaro, Tanzania. International
Journal of Education and Development Using Information and Communication Technology, 14(3), 223–238. Available online at: http://search.ebscohost.com/login.
aspx?direct=true&db=eric&AN=EJ1201563&site=ehost-live.
Niederhauser, D. S., & Lindstrom, D. L. (2017). Instructional technology integration models and frameworks: Diffusion, competencies, attitudes, and dispositions. In
R. Latiner Raby, & E. J. Valeau (Eds.), Springer international handbooks of education. handbook of comparative studies on community colleges and global counterparts
(pp. 1–21). Springer International Publishing. https://doi.org/10.1007/978-3-319-53803-7_23-2.
OECD. (2015). Students, computers and learning: Making the connection. PISA, OECD Publishing. https://doi.org/10.1787/9789264239555-en
Olugbara, C. T., & Letseka, M. (2020). Factors predicting integration of e-learning by preservice science teachers: Structural model development and testing. Electronic
Journal of E-Learning, 18(5), 421–435. https://doi.org/10.34190/JEL.18.5.005
Padayachee, K. (2017). A snapshot survey of ICT integration in South African schools. South African Computer Journal, 29(2), 36–65. https://doi.org/10.18489/sacj.
v29i2.463
Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for
reporting systematic reviews. Systematic Reviews, 10(1), 1–11.
Pan, S.-C., & Franklin, T. (2011). In-service teachers’ self-efficacy, professional development, and Web 2.0 tools for integration. New Horizons in Education, 59(3).
Parker, C. E., Stylinski, C. D., Bonney, C. R., DeLisi, J., Wong, J., & Doty, C. (2019). Measuring quality technology integration in science classrooms. Journal of Science
Education and Technology, 28(5), 567–578. https://doi.org/10.1007/s10956-019-09787-7
Peeraer, J., & van Petegem, P. (2012). Measuring integration of information and communication technology in education: An item response modeling approach.
Computers & Education, 58(4), 1247–1259. https://doi.org/10.1016/j.compedu.2011.12.015
Petko, D. (2012). Teachers’ pedagogical beliefs and their use of digital media in classrooms: Sharpening the focus of the ’will, skill, tool’ model and integrating
teachers’ constructivist orientations. Computers & Education, 58(4), 1351–1359. https://doi.org/10.1016/j.compedu.2011.12.013
Petko, D., Cantieni, A., & Prasse, D. (2017). Perceived quality of educational technology matters. Journal of Educational Computing Research, 54(8), 1070–1091.
https://doi.org/10.1177/0735633116649373
Pittman, T., & Gaines, T. (2015). Technology integration in third, fourth and fifth grade classrooms in a Florida school district. Educational Technology Research &
Development, 63(4), 539–554. https://doi.org/10.1007/s11423-015-9391-8
Pourhosein Gilakjani, A., Mei Leong, L., & Nizam Ismail, H. (2013). Teachers’ use of technology and constructivism. International Journal of Modern Education and
Computer Science, 5(4), 49–63. https://doi.org/10.5815/ijmecs.2013.04.07
Praetorius, A.-K., Klieme, E., Herbert, B., & Pinger, P. (2018). Generic dimensions of teaching quality: The German framework of Three Basic Dimensions. ZDM, 50(3), 407–426. https://doi.org/10.1007/s11858-018-0918-4
Proctor, R. M. J., Watson, G., & Finger, G. (2003). Measuring information and communication technology (ICT) curriculum integration. Computers in the Schools, 20
(4), 67–87. https://doi.org/10.1300/J025v20n04_06
Retelsdorf, J., Butler, R., Streblow, L., & Schiefele, U. (2010). Teachers’ goal orientations for teaching: Associations with instructional practices, interest in teaching,
and burnout. Learning and Instruction, 20(1), 30–46. https://doi.org/10.1016/j.learninstruc.2009.01.001
Russell, A. L. (1995). Stages in learning new technology: Naive adult email users. Computers & Education, 25(4), 173–178.
Schmidt, D. A., Baran, E., Thompson, A. D., Mishra, P., Koehler, M. J., & Shin, T. S. (2009). Technological pedagogical content knowledge (TPACK) the development
and validation of an assessment instrument for preservice teachers. Journal of Research on Technology in Education, 42(2), 123–149.
Shamir-Inbal, T., & Blau, I. (2017). Which pedagogical parameters predict the general quality of ICT integration from the perspective of elementary school leaders?
Computers in the Schools, 34(3), 168–191. https://doi.org/10.1080/07380569.2017.1347427
Siddiq, F., & Scherer, R. (2016). The relation between teachers’ emphasis on the development of students’ digital information and communication skills and computer
self-efficacy: The moderating roles of age and gender. Large-Scale Assessments in Education, 4(1). https://doi.org/10.1186/s40536-016-0032-4
Sinatra, M., Limone, P., & Contini, P. (2017). Determinants of ICT integration: A path model of teacher-related factors. Turkish Online Journal of Educational Technology, 2017(Special Issue), 771–779.


Stegmann, K. (2020). Effekte digitalen Lernens auf den Wissens- und Kompetenzerwerb in der Schule. Zeitschrift für Pädagogik, 66(2), 174–190.
Tang, X., Yin, Y., Lin, Q., Hadad, R., & Zhai, X. (2020). Assessing computational thinking: A systematic review of empirical studies. Computers & Education, 148, Article
103798. https://doi.org/10.1016/j.compedu.2019.103798
Thannimalai, R., & Raman, A. (2018). The influence of principals’ technology leadership and professional development on teachers’ technology integration in
secondary schools. Malaysian Journal of Learning and Instruction, 15(1), 203–228. https://doi.org/10.32890/mjli2018.15.1.8
Tondeur, J., Petko, D., Christensen, R. R., Drossel, K., Starkey, L., Knezek, G., & Schmidt-Crawford, D. A. (2021). Quality criteria for conceptual technology integration
models in education: Bridging research and practice. Educational Technology Research & Development, 69(4), 2187–2208. https://doi.org/10.1007/s11423-020-
09911-0
Tondeur, J., van Keer, H., van Braak, J., & Valcke, M. (2008). ICT integration in the classroom: Challenging the potential of a school policy. Computers & Education, 51
(1), 212–223. https://doi.org/10.1016/j.compedu.2007.05.003
Valtonen, T., López-Pernas, S., Saqr, M., Vartiainen, H., Sointu, E. T., & Tedre, M. (2022). The nature and building blocks of educational technology research.
Computers in Human Behavior, 128, Article 107123. https://doi.org/10.1016/j.chb.2021.107123
Van Braak, J., Tondeur, J., & Valcke, M. (2004). Explaining different types of computer use among primary school teachers. European Journal of Psychology of
Education, 19(4), 407–422.
Vanderlinde, R., & van Braak, J. (2010). The e-capacity of primary schools: Development of a conceptual model and scale construction from a school improvement
perspective. Computers & Education, 55(2), 541–553.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478.
Voogt, J., Erstad, O., Dede, C., & Mishra, P. P. (2013). Challenges to learning and schooling in the digital networked world of the 21st century. Journal of Computer
Assisted Learning, 29(5), 403–413. https://doi.org/10.1111/jcal.12029
Wang, P.-Y. (2013). Examining the digital divide between rural and urban schools: Technology availability, teachers’ integration level and students’ perception.
Journal of Curriculum and Teaching, 2(2), 127–139.
Wang, J., Tigelaar, D. E., & Admiraal, W. (2019). Connecting rural schools to quality education: Rural teachers’ use of digital educational resources. Computers in
Human Behavior, 101, 68–76.
Welsh, J., Harmes, J. C., & Winkelman, R. (2011). Florida’s technology integration matrix. Principal Leadership, 12(2), 7–71.
Wentworth, N., Graham, C. R., & Monroe, E. E. (2009). TPACK development in a teacher education program. In Handbook of research on new media literacy at the K-12
level: Issues and challenges (pp. 823–838). IGI Global.
Williams, M. R., Warner, W. J., Flowers, J. L., & Croom, D. B. (2014). Accessibility and usage of technology by North Carolina agriculture teachers. Journal of
Agricultural Education, 55(4), 191–205.
Zawacki-Richter, O., Kerres, M., Bedenlier, S., Bond, M., & Buntins, K. (Eds.). (2020). Systematic reviews in educational research. Springer Fachmedien Wiesbaden.
https://doi.org/10.1007/978-3-658-27602-7.
