The Individual's Role in Cybercrime Prevention
Purpose
As a well-known social institution, crime prevention has traditionally been in the purview of
public authorities. However, the ceaseless increase in the use of online resources and
governments’ responsibilisation approach to cybercrime prevention have created an
ecosystem that necessitates the empowerment of individuals. By introducing the concept
of internal spheres of protection, I will show how traditionally public responsibilities require
increased facilitation by individuals in order to adequately safeguard what they value.
Design/methodology/approach
This is a brief conceptual article, which recasts the individual’s role in cybercrime prevention
through a critique of the routine activity theory (Cohen and Felson, 1979) and explains how
responsibilisation (Garland, 1996) has created an unavoidable need to provide individuals
with the knowledge and tools required to avoid online victimisation.
Findings
In the context of cybercrime, criminological theories that are limited in considering the
individual both as the target of crime and the person likeliest to prevent it are quickly
becoming outdated. Public authorities either cannot intervene or are moving away from
intervening on citizens’ behalf to effectively stymie the pressure from cybercriminals. Thus,
we need an approach that starts with individuals and their value-based motivations.
Originality
Introduction
The emergence of commercial Internet in the 1990s (Naughton, 2016) brought about a
momentous shift in how society constructs and views ‘community’ and the public.
Technological developments relying on the Internet also altered the way individual agents
function within various spheres of (online) social being (Ryan et al., 2017). A study by
Ofcom (2018) found that people have come to expect a constant Internet connection
wherever they go, check their smartphones every 12 minutes on average, and that one in five
people spend more than 40 hours online every week. Furthermore, the recent spread of
COVID-19 saw sharp increases in the time people spent online as they abided by
government-imposed lockdowns (Koeze and Popper, 2020). Where social isolation and stress
relating to the pandemic negatively impacted the mental well-being of many (Panchal et al.,
2020), there were some who excelled under the circumstances, i.e. “if there’s one group of
people rubbing their hands with glee, rather than sanitiser, it’s cybercriminals” (Woollacott,
2020). In other words, an increased number of people spending more time online constitutes a
larger and more accessible pool of potential victims for cybercriminals. Changes in people’s
daily routines regarding time spent online are therefore also bound to bring about adaptations
in the conduct of those seeking to exploit and prey upon such routines within the same
‘community’.
In his study of human ecology, Hawley (1950) distinguished three important temporal facets
of community life: rhythm, tempo, and timing. Rhythm referred to the regular periodicity
with which events occurred, tempo noted the number of events occurring per unit of time,
and timing marked the coordination and interdependency of such events. In the pre-Internet
era, Cohen and Felson (1979) picked up what Hawley suggested about rhythms, tempo and
timing, and applied these notions to the study of crime. Since crime events depend on the
convergence of offenders and targets in time and space, the everyday (legitimate) routine
activities of people are, from the perspective of the criminal actor’s survival, a
necessity. Taking into account the possibilities offered by the Internet, we have come to
realise that the human is no longer bound by geographical barriers with respect to effortlessly
effecting impacts on others as well as receiving impacts from them. Put differently, the
Internet made it possible for criminals, among others, to ‘think locally’ but manifest their
intentions on a global scale.
Due to the rapidly increasing prevalence of online activities, the performance of tasks that
have traditionally been in the sole purview of public entities, e.g. law enforcement, has
severely decreased in efficiency with respect to keeping members of the public free
from harm (United Nations, 2013). From the perspective of an individual, representatives of
public authorities can be considered as ‘someone else’, who may or may not be able to aid the
individual in avoiding online victimisation. In the routine activity theory (Cohen and Felson,
1979), this notion of someone else is represented by the concept of a ‘capable guardian’, i.e. a
person whose presence could interrupt the commission of a criminal offence. However, the
idea and its applications have remained unclear in general (Hollis et al., 2013), undecided
within the context of cybercrime (Leukfeldt and Yar, 2016) and tenuous at best in assessing
actual crime risk (Pratt and Turanovic, 2016). Since no certainty exists about the usefulness
of relying on others to prevent online victimisation, the only remaining solution is to focus on
the individuals’ own abilities in safeguarding themselves and their values. Therefore, to
address the need for re-conceptualising the role of the individual in personally facilitating
what has conventionally been in the purview of public authorities, I will introduce the
concept of internal spheres of protection. In the context of cybercrime, I will explain how the
true ability to safeguard these internal spheres of protection starts with the individual, i.e.
how cyber-knowledge can prompt individuals to take risk-decreasing actions and use relevant
security-enhancing tools.
The routine activity theory (Cohen and Felson, 1979) puts forward that a crime event occurs
when three elements converge: a motivated offender, a suitable target, and the absence of a
capable guardian. Guardianship has been defined as “someone whose mere presence serves
as a gentle reminder that someone is looking” (Felson and Boba, 2010, p. 37). The guardian
and guardianship concepts have been strongly contested aspects of the theory, requiring
review and reappraisal (Hollis-Peel et al., 2011; Hollis et al., 2013). Hollis et al. (2013)
suggest two important distinctions for what guardianship is not. Namely, guardianship is
distinct from both formal and informal social control, and guardianship is not synonymous
with target-hardening activities (Eck and Clarke, 2019). The first aspect was addressed by
Hollis et al. (2013), who suggest that individuals, most often persons other than police officers
or security guards (Felson and Boba, 2010, p. 28), deter crime by their mere presence. It is
further elaborated that the actual presence as well as the illusion of someone watching is
enough to deter criminal activity. Hollis and colleagues reject the use of personal and
physical dimensions in the construction of the reappraised concept of guardianship
(Vakhitova et al., 2016). Hence, the reappraised guardianship concept necessarily involves a
human presence, but specifically the presence of someone else; that is, an individual cannot
simultaneously be the target of a crime and his or her own guardian.
What is more, even if one were to accept the reappraised concept of guardianship insofar as it
concerns the requirement that a guardian must be someone other than the target (in cases
where the target is human, to be precise), the question of intent on the part of the guardian
remains. In a separate work, Hollis and Wilson (2014) utilise the third stage of guardianship
as described from the perspective of “guardianship-in-action” (Reynald, 2009). The first
stage is the availability of guardians, the second adds the activity of monitoring to the
guardian and the third stage further involves the element of intervention by a guardian, e.g.
“the nosy neighbour”. As pointed out previously, Hollis et al. (2013) consider guardianship as
distinct from both formal and informal social control, as well as from collective efficacy,
because all these imply intent on the part of those engaged in attempts to control untoward or
criminal activities. As far as intent is concerned, drawing a viable distinction between the
actions of our “nosy neighbour”, as the representative of the third stage of guardianship
(Reynald, 2009), and the general description of informal social control is of little utility.
The same applies to formal social control, where the difference lies merely in who is doing
the controlling. Hence, even if a difference exists between the third stage of guardianship
and informal social control, it is a semantic one.
In the context of cybercrime, it has become increasingly difficult to suggest there exists a
‘someone else’ who can intervene and prevent the criminal act on behalf of the target. For
example, when a person is opening a malicious attachment included in an email, any
inadvertent protective effects from the mere presence of someone else are difficult if not
impossible to conceive. If we developed personal cybersecurity based on the (reappraised)
guardian concept from Hollis et al. (2013), we would immediately run into problems
regarding the observable space available to guardians of specific targets such as personal
devices, email accounts, or the data contained therein. In such common scenarios, there is no
guardian who keeps “an eye on the potential target of crime” (Felson, 2006, p. 80) aside from
the person using the device and reading their email.
Nevertheless, the lack of a third-person capable guardian does not necessarily mean that the
target is helpless before the offender, or that nothing can be done to improve the chances of
preventing victimisation in the absence of a ‘guardian’. Multiple studies emphasise
individuals’ capacity to protect themselves (Hutchings and Hayes, 2009) or implement ‘technical capable
guardianship’ (Leukfeldt and Yar, 2016). This means that a person’s use of technology
constitutes guardianship, or the technology itself becomes the guardian. According to the
reappraised guardian concept, these formulations are not in line with the initial intentions of
the original authors (Hollis et al., 2013). Regardless, self-protective measures have long been
operationalised as proxies of guardianship (Tewksbury and Mustaine, 2003; Garofalo and
Clark, 1992; Tseloni et al., 2004) in routine activity theory. Following the introduction of
situational crime prevention (Clarke, 1983), self-protective measures are often referred to as
target-hardening practices. However, individuals carry multiple roles and can therefore be
both targets and guardians. The latter applies to people’s interactions with other community
members, and to themselves as individuals.
Pratt and Turanovic (2016) have put forward an important criticism regarding lifestyle-
routine activities theories, wherein “risky” behaviour is not separated from “problematic”
behaviour. For instance, shopping or sharing personal images online always carries some risk
of subsequent online victimisation. Yet, it is the “risky” behaviours of people that are likely
to increase their probability of being victimised, e.g. purchasing items from users without a
track record on popular online marketplaces or leaving the status of shared images as public
(Shadel et al., 2014). Alongside the risky and problematic behaviour, we ought to also
introduce safeguarding into the discussion as an analytically distinct concept. Safeguarding
can be viewed as actions people take, or measures that people implement, which either
decrease the probability of victimisation that is always present (“problematic behaviour”) or
counteract and mitigate previously elevated levels of risk (“risky behaviour”).
While actions and implemented measures are ways of decreasing victimisation risk, there
remains the question of why individuals ought to consider such protective actions and
measures in the first place. This requires us to consider the way in which the advent of the
Internet altered our participation in the public sphere, and take into account that the efficacy
of public authorities in (cyber)crime prevention has decreased relative to the increases in the
size of our ‘communities’. Public crime prevention efforts have come to rely on
responsibilisation (Garland 1996), i.e. rendering individual subjects responsible for what was
previously the task of another, e.g. a state agency, or was not a responsibility at all. In the
following section, I will explain how the lapse in the ability of public authorities builds a
motivated foundation for individuals to increase their own ability to safeguard what they
value.
2. Responsibilisation: from the Public to the Personal
As our ‘communities’ grew in size alongside the use of online resources, the effectiveness of
our traditional protectors, i.e. law enforcement institutions, has been put under severe
pressure. For example, while instances of and damages from cybercrime are soaring
(Morgan, 2019), law enforcement agencies have been unable to keep pace (United Nations,
2013). Furthermore, cybercrimes are every (wo)man’s offences (Kikerpill, 2019), i.e.
cybercrime offences are not solely perpetrated by career criminals or criminal groups,
because regular people abuse online opportunities in a similar manner. Before the pressure
becomes a matter for law enforcement, however, individuals targeted by offenders face an
onslaught of crime attempts that are not reflected in official crime statistics (Kikerpill and
Siibak, 2019). Additional indications of the short reach of public authorities are the increased
attention to cyber hygiene, i.e. practices and behaviours meant to keep a person safe from
online harm (Cain et al., 2018), and the growing importance of implementing such practices
for the purposes of obtaining private insurance coverage (Camillo, 2017). Nevertheless, as
protecting the community remains a responsibility of public authorities, the direct effects of
responsibilisation are more visible with respect to private companies and online platforms,
because these are bigger targets with respect to policy changes.
Organisations, whose continued and secure operation is critical to the functioning of our
information society, face increasing pressure from state authorities to seal up common
vulnerabilities to cybercrime (European Commission, 2017a). This is witnessed by the
adoption of numerous legal instruments requiring risk-based approaches to
security and the implementation of ‘appropriate security measures’ (Porcedda, 2018).
Moreover, this specific manifestation of responsibilisation is by no means local and can also
be evinced from the mass promulgation of risk-based instruments across the globe (Kosseff,
2018). While it is possible to view such state efforts as targeting large companies and critical
infrastructure operators, the individuals employed by these companies receive and carry the
brunt of the actual responsibility (MacEwan, 2017). What is more, shortcomings in security
knowledge and experience can cost regular employees their jobs (Matthews, 2019). The
shortcomings of responsibilisation, and the lack of willingness of actors above the category
of the individual, are apparent in the hotly debated issue of online governance and platform
accountability (Castillo, 2018).
Private actors are now able to create open ‘spaces’ that can facilitate public discussions,
although it remains unclear who, if anyone, ought to take responsibility for protecting the
individual in such instances. The continuously growing importance of the Internet has
spurred various views on its role in society, in particular with respect to freedom of
expression (Pfaffenberger, 1996). This places online platforms in the double-role of
facilitators and moderators who can face liability (Kuczerawy, 2015; Peters, 2017), even
though enforcing freedom of expression provisions remains undecided (Brannon, 2019).
Oftentimes, the factual responsibility of moderating online content lies with volunteer
moderators (Matias, 2019), which can result in arbitrary enforcement of community rules and
the creation of a toxic environment (Massanari, 2017). Such lack of willingness, or
capability, has lately even made possible the sharing of involuntary pornographic materials
on openly accessible online platforms, i.e. allowing for severe infringements on personal
privacy and values (Chesney and Citron, 2019; Kikerpill, 2020). Given that the livelihood
and personal security of individuals have come under immense pressure due to the inability
of public authorities, or large private corporations with the factual ability to govern, it is up to
the individuals to effect change in terms of safeguarding from online criminal impacts what
they hold valuable.
Provided that third persons and the state are not capable of directly protecting the individual
in many online situations, the issue of personal responsibility rises to the foreground. Thus,
discussing personal responsibility requires us to move past the public and private sphere, and
focus on an individual’s true ability - what people know, what they do, and what measures
they use - to
protect what is personally valuable to them. Although individuals’ values remain consistent
across cultures, their value priorities vary (Schwartz, 2012). Thus, the following section will
use the value of security, i.e. safety, harmony, and stability of society, relationships, and the
self (Schwartz, 2012, p. 6), as the guiding example. The COVID-19 pandemic, which
required people to physically distance themselves from others, and further increased reliance
on online resources, has been the biggest shock to individuals’ security in recent years. True
to their opportunistic tendencies, cybercriminals released a ‘pandemic’ of their own making
(Fontanilla, 2020). Criminals exploited the sense of fear and the need for information
experienced by most people under such trying circumstances (Grad, 2020). To stay secure,
including from online victimisation, people had to safely navigate a plethora of social-
engineering attacks unleashed during the virus pandemic (Cohen, 2020). This made
knowledge of such threats and their impacts, i.e. cyber-knowledge, as well as the appropriate
actions necessary for avoiding falling victim to the attacks paramount in adequately
safeguarding what people value.
Everyone shares a similar set of basic values (Schwartz, 2012), but the priority of these
values varies individually. For many, the COVID-19 pandemic pushed to the forefront the
value of security, in particular with respect to staying physically safe from infection. What is
valued, is also worth protecting. Seizing on that notion, i.e. an individual’s internal
deliberations on value priorities that resulted in security surfacing to the top, criminals
devised ways of exploiting it via online means. Hence, due to external circumstances such as
the virus pandemic, and the respective deliberations on value priorities influenced by these
(and other) circumstances, we create internal ‘spaces’ that we personally consider worthy of
protecting from any potentially harmful impacts. I refer to these internal ‘spaces’ as our
internal spheres of protection.
The ‘spaces’ present as planes or spheres, because they need to hold multiple manifestations
emanating from the same basic value. In our case and with respect to the value of security,
both staying safe from infection and not falling victim to cybercrime occupy the same sphere.
As people are knowledgeable about the negative consequences of illness, they are inclined to
seek information about avoiding it and follow through with appropriate action. However,
where knowledge regarding a connection between an external threat and its potential impact
on our internal spheres of protection is absent, the underlying value is at risk of being
compromised. Hence, while people always value security to one extent or another, they are
not always adequately knowledgeable with respect to what is putting the value at risk, and
thus fail to protect it. Previous research has shown that 95% of cybersecurity incidents
involve human error (IBM, 2014), which suggests a lack of knowledge regarding
the connection between threats and the risk these threats pose to the value that underpins our
sphere of protection. In terms of the value of security, socially-engineered online attacks such
as phishing (Kikerpill and Siibak, 2019) are specifically designed to take advantage of this
discrepancy present for a person’s sphere of protection. In other words, people are
knowledgeable about and willing to protect one manifestation of security, e.g. protecting
health by avoiding disease, yet unknowledgeable of another coinciding way the value can be
compromised, e.g. unable to protect their resources from cybercriminals.
The aforementioned issues are further exacerbated when people seeking to safeguard their
spheres of protection need to rely on others. Of course, reliance on others is often the case,
especially in severe instances. For example, people must turn to doctors and medicine to get
relief from severe illnesses and proper authorities must be notified in cases of serious
cybersecurity breaches. The difference, however, comes at the level of recognising the
threat, whether great or small, in the first place, and that is precisely where impacts from
health issues and cybercrime attempts diverge with respect to individuals’ value of security.
Thus, where a person lacks knowledge about the risk presented by phishing (Leukfeldt, 2015)
or is unable to apply existing knowledge due to successful deception (Norris and Brookes,
2020), victimisation is more likely to occur. Surveys have shown that people do not consider
themselves well-informed about cybercrime threats (European Commission, 2017b).
However, a lack of knowledge or passivity regarding personal cybersecurity is hardly a
luxury the individuals can afford nowadays (Morgan, 2019).
Returning to the beginning, the key prerequisite in the construction of internal spheres of
protection is knowledge. A person’s intent regarding a specific act of using a computer
system can diverge from the events that actually unfold, e.g. entering personal
information on a fake website as the outcome of a successful phishing attack results in stolen
data instead of a changed password. However, the underlying reason for victimisation likely
stems from a lack of knowledge about common phishing email characteristics (Jakobsson,
2007) or the inability to apply existing knowledge due to emotional distress resulting from
the socially-engineered message (Norris and Brookes, 2020, p. 2). Nevertheless, other
individuals play no role in that moment of deciding, i.e. it falls on the improvement of
knowledge and its utilisation, not the presence of other people, to decrease the likelihood of
victimisation. Theoretically, it would be possible to bring the absolute exposure to phishing
risk (Gottfredson, 1981, p. 715) down to zero by simply not having an email account (or a
phone that can receive text messages and calls). However, given the prevalence of email for
personal and business communication (Kikerpill and Siibak, 2019), blanket exclusion of this
kind is unlikely for many, and would be extremely difficult to propose as policy in the digital
era. Therefore, knowledge regarding threats to an individual’s sphere of protection and the
ability to utilise such knowledge when necessary are primary in deciding on the course of
action or required measures. Based on the previous discussion, the emergence and
safeguarding of an internal sphere of protection can be briefly presented as follows:
1. There exists some underlying value. Individuals seek to protect and preserve this value
to a certain extent, depending on their value priorities. Since the value can be threatened
in various ways, it creates an internal, value-based sphere, which always carries a base risk
of being compromised (> 0), i.e. no risk would mean no value. The value-based internal
sphere is subsequently at risk from the different sources of impact, all of which carry a
likelihood of success.
2. Thus, the first instance in safeguarding the internal sphere is recognising that something or
someone is a threat to the preservation of a value. Here, knowledge about cybercrime, e.g.
common techniques used or red flags to be noticed, and the ability to utilise such
knowledge obtain their significance. The unattainable ideal is the condition of complete
information, whereas people function under conditions of incomplete information about
threats. An improvement in knowledge has the potential to entail improvements in the
ability to safeguard the internal sphere.
3. The second instance comprises the actions, and the measures that support these actions.
Knowledge, actions informed by knowledge, and protective measures supporting such
actions constitute an individual’s true ability to safeguard their internal spheres.
Knowledge is required for recognising the threat as well as considering effective actions
and measures. Actions (or omissions) based on prior knowledge deal with the threat as it
impacts the individual’s sphere of protection. Measures and tools support actions, e.g.
spam filters or watermarking, or decrease the probability of victimisation independently,
e.g. anti-virus software.
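To make the three instances above concrete, the sketch below expresses them as a toy risk calculation. It is purely illustrative and not part of the conceptual model itself: the function name and all probability values are hypothetical assumptions, chosen only to show how recognition (knowledge), knowledge-informed actions, and supporting measures jointly reduce, but do not eliminate, the ever-present base risk.

```python
# Toy illustration of the sphere-of-protection reasoning described above.
# All names and numbers are hypothetical; the model itself is conceptual.

def residual_risk(base_risk: float, recognised: bool,
                  action_effect: float, measure_effect: float) -> float:
    """Estimate the residual probability of victimisation.

    base_risk      -- ever-present base risk of the value being compromised (> 0)
    recognised     -- whether the threat was recognised (instance 1: cyber-knowledge)
    action_effect  -- share of risk removed by knowledge-informed actions (0..1)
    measure_effect -- share of risk removed by supporting measures, e.g.
                      spam filters or anti-virus software (0..1)
    """
    if not recognised:
        # Without recognising the threat, deliberate actions and measures
        # cannot be brought to bear; the base risk remains untouched.
        return base_risk
    # Actions and measures each remove a share of the remaining risk; as long
    # as neither is perfect, some base risk always remains.
    return base_risk * (1 - action_effect) * (1 - measure_effect)

# A user who recognises a phishing attempt, acts on that knowledge, and is
# supported by a spam filter faces a far lower residual risk:
print(residual_risk(0.10, recognised=False, action_effect=0.8, measure_effect=0.5))
print(residual_risk(0.10, recognised=True, action_effect=0.8, measure_effect=0.5))
```

Under these assumed numbers, recognition combined with action and a supporting measure cuts the residual risk to a tenth of the base risk, whereas without recognition the base risk is carried in full, mirroring the claim that knowledge is the key prerequisite.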
Conclusion
The changes entailed by the emergence of the commercial Internet, specifically the increased
size of our ‘communities’, have put pressure on law enforcement and revealed shortcomings
in its ability to protect the community from harm. As a response, states have chosen the use of
responsibilisation - delegating tasks that have conventionally been in the sole purview of
public authorities to private entities. Ultimately, these responsibilities come to rest on the
shoulders of private individuals who directly experience the impact from the shortcomings of
public authorities, e.g. law enforcement, or large private companies, e.g. online platforms.
Thus, the individual’s role in cybercrime prevention has to be recast as significant and
capable. While routine activity theory requires the presence of others who can intervene and
foil the commission of crimes, this notion is largely irrelevant with respect to cybercrime.
Hence, constructing the individual’s role through a value-based internal sphere worth
protecting, and including knowledge, actions, and various measures available to the
individual in assessing a person’s true ability to safeguard what they value, is a small step
forward in addressing current cybercrime prevention needs. The recognition of such needs by
the public is also a practicality, given how crime prevention tasks have trickled down from
the public to the private, and finally to the personal. Providing individuals with cyber-
knowledge, i.e. knowledge on the connection between an individual’s sphere of protection
and the negative impacts threatening it, is perhaps the only way for public authorities to
regain some control over their conventional domain.
References
Brannon, V. C. (2019), “Free speech and the regulation of social media content”,
Congressional Research Service, available at https://fas.org/sgp/crs/misc/R45650.pdf
(Accessed 5 September 2020).
Cain A.A., Edwards M.E. and Still J.D. (2018), “An exploratory study of cyber hygiene
behaviors and knowledge”, Journal of Information Security and Application, Vol. 42, pp. 36-
45.
Camillo, M. (2017), “Cyber risk and the changing role of insurance”, Journal of Cyber
Policy, Vol. 2 No. 1, pp. 53-63.
Chesney, R. and Citron, D. K. (2019), “Deep fakes: a looming challenge for privacy,
democracy, and national security”, California Law Review, Vol. 107 No. 6, pp. 1753-1820.
Clarke, R. V. (1983), “Situational crime prevention: its theoretical basis and practical scope”,
Crime and Justice, Vol. 4, pp. 225-256.
Cohen, J. (2020), “Phishing attacks increase 350 percent amid COVID-19 quarantine”, PCMag, 20
March, available at https://www.pcmag.com/news/phishing-attacks-increase-350-percent-
amid-covid-19-quarantine (Accessed 1 June 2020).
Cohen, L.E. and Felson, M. (1979), “Social change and crime rate trends: a routine activity
approach”, American Sociological Review, Vol. 44 No. 4, pp. 588–608.
Eck, J.E. and Clarke, R.V. (2019), “Situational crime prevention: theory, practice and
evidence”, in Kohn, M. D., Hendrix, N., Penly Hall, G. and Lizotte, A. J. (Ed.s.), Handbook
on Crime and Deviance, Springer, pp. 355-376.
Felson, M. and Boba, R. (2010), Crime and Everyday Life: Insight and Implications for
Society, Pine Forge, Thousand Oaks.
Fontanilla, M.V. (2020), “Cybercrime pandemic”, Eubios Journal of Asian and International
Bioethics, Vol. 30 No. 4, pp. 161-165.
Garland, D. (1996), “The limits of the sovereign state: strategies of crime control in
contemporary society”, British Journal of Criminology, Vol. 36 No. 4, pp. 445–71.
Garofalo, J. and Clark, D. (1992), “Guardianship and residential burglary”, Justice Quarterly,
Vol. 9 No. 3, pp.443–463.
Gottfredson, M.R. (1981), “On the etiology of criminal victimization”, Journal of Criminal
Law and Criminology, Vol. 72 No. 2, pp.714-726.
Grad, P. (2020), “Router phishing scam targets global fear over coronavirus”, TechXplore, 27
March, available at https://techxplore.com/news/2020-03-router-phishing-scam-global-
coronavirus.html (Accessed 5 September 2020).
Hawley, A. (1950), Human ecology: a theory of community structure, Ronald, New York.
Hollis M.E., Felson, M. and Welsh B.C. (2013), “The capable guardian in routine activities
theory: a theoretical and conceptual reappraisal”, Crime Prevention and Community Safety,
Vol. 15, pp. 65-79.
Hollis, M.E. and Wilson, J. (2014), “Who are the guardians in product counterfeiting? A
theoretical application of routine activities theory”, Crime Prevention and Community Safety,
Vol. 16, pp. 169-188.
Hollis-Peel, M.E., Reynald D. M., van Bavel, M., Elffers, H. and Welsh, B.C. (2011),
“Guardianship for crime prevention: a critical review of the literature”, Crime, Law and
Social Change, Vol. 56, pp. 53-70.
Hutchings, A. and Hayes, H. (2009), “Routine activity theory and phishing victimization:
who gets caught in the ‘net’?”, Current Issues in Criminal Justice, Vol. 20 No. 3.
IBM Global Technology Services. (2014), “IBM Security Services 2014 Cyber Security
Intelligence Index”, Research Report.
Jakobsson, M. (2007), “The human factor in phishing”, Privacy & Security of Consumer
Information, Indiana University, Bloomington, Indiana, USA.
Junger, M., Montoya, L., Hartel, P.H. and Heydari, M. (2017), “Towards the normalization of
cybercrime victimization: a routine activities analysis of cybercrime in Europe”, in
International Conference On Cyber Situational Awareness, Data Analytics And Assessment
(Cyber SA), London, pp. 1-8, doi: 10.1109/CyberSA.2017.8073391.
Kikerpill, K. (2019), “Work, prey, love: a critical analysis of Estonian cybercrime case law
2014-2019”, Proceedings Estonian Academy of Security Sciences, Vol. 18, pp. 109-138.
Kikerpill, K. (2020), “Choose your stars and studs: the rise of deepfake designer porn”, Porn
Studies, doi:10.1080/23268743.2020.1765851.
Kikerpill, K. and Siibak, A. (2019), “Living in a spamster’s paradise: deceit and threats in
phishing emails”, Masaryk University Journal of Law and Technology, Vol. 13 No. 1, pp. 45-
66.
Koeze, E. and Popper, N. (2020), “The virus changed the way we Internet”, The New York
Times, April 7, available at:
https://www.nytimes.com/interactive/2020/04/07/technology/coronavirus-internet-use.html
(Accessed 5 September 2020).
Kosseff, J. (2018), “Defining cybersecurity law”, Iowa Law Review, Vol. 103, pp. 985-1031.
Leukfeldt, E.R. (2015), “Comparing victims of phishing and malware attacks: unraveling risk
factors and possibilities for situational crime prevention”, International Journal of Advanced
Studies in Computer Science and Engineering, Vol. 4 No. 5, pp. 26–32.
Leukfeldt, E.R. and Yar, M. (2016), “Applying routine activity theory to cybercrime: a
theoretical and empirical analysis”, Deviant Behavior, Vol. 37 No. 3, pp. 263-280.
Massanari, A. (2017), “#Gamergate and the Fappening: how Reddit’s algorithm, governance,
and culture support toxic technocultures”, New Media & Society, Vol. 19 No. 3, pp. 329-346.
Matias, J.N. (2019), “The civic labor of volunteer moderators online”, Social Media +
Society, Vol. 5 No. 2, doi: 10.1177/2056305119836778.
Matthews, K. (2019), “Can victims of phishing scams be fired for falling prey?”, Security
Boulevard, 28 March, available at: https://securityboulevard.com/2019/03/can-victims-of-
phishing-scams-be-fired-for-falling-prey/ (Accessed 1 June 2020).
Morgan, S. (2019), “2019 cybersecurity almanac: 100 facts, figures, predictions and
statistics”, Cybercrime Magazine, 6 February, available at:
https://cybersecurityventures.com/cybersecurity-almanac-2019/ (Accessed 1 June 2020).
Naughton, J. (2016), “The evolution of the Internet: from military experiment to general
purpose technology”, Journal of Cyber Policy, Vol. 1 No. 1, pp. 5-28.
Panchal, N., Kamal, R., Orgera, K., Cox, C., Garfield, R., Hamel, L., Muñana, C. and
Chidambaram, P. (2020), “The implications of COVID-19 for mental health and substance
use”, KFF, 21 August, available at: https://www.kff.org/coronavirus-covid-19/issue-
brief/the-implications-of-covid-19-for-mental-health-and-substance-use/ (Accessed 5
September 2020).
Peters, J. (2017), “The ‘sovereigns of cyberspace’ and state action: the First Amendment’s
application – or lack thereof – to third-party platforms”, Berkeley Technology Law Journal,
Vol. 32 No. 2, pp. 989-1026.
Pfaffenberger, B. (1996), “‘If I want it, it’s OK’: Usenet and the (outer) limits of free
speech”, The Information Society, Vol. 12 No. 4, pp. 365–386.
Porcedda, M.A. (2018), “Patching the patchwork: appraising the EU regulatory framework
on cyber security breaches”, Computer Law and Security Review, Vol. 34 No. 5, pp. 1077-
1098.
Pratt, T.C. and Turanovic, J.J. (2016), “Lifestyle and routine activity theories revisited: the
importance of ‘risk’ to the study of victimization”, Victims & Offenders, Vol. 11 No. 3, pp.
335-354.
Reynald, D.M. (2009), “Guardianship in action: developing a new tool for measurement”,
Crime Prevention and Community Safety, Vol. 11 No. 1, pp. 1-20.
Ryan, T., Allen, K.A., Gray, D.L. and McInerney, D.M. (2017), “How social are social
media? A review of online social behaviour and connectedness”, Journal of Relationships
Research, Vol. 8, doi: 10.1017/jrr.2017.13.
Schwartz, S.H. (2012), “An overview of the Schwartz theory of basic values”, Online
Readings in Psychology and Culture, Vol. 2 No. 1, doi: 10.9707/2307-0919.1116.
Shadel, D., Pak, K. and Sauer, J.H. (2014), “Caught in the scammer’s net: risk factors that
may lead to becoming an Internet fraud victim”, AARP Survey of American Adults Age 18
and Older, available at:
https://www.aarp.org/content/dam/aarp/research/surveys_statistics/econ/2014/Caught-
Scammer-Net-Risk-Factors-Internet-Fraud-Victims.doi.10.26419%252Fres.00076.001.pdf
(Accessed 1 June 2020).
Tewksbury, R. and Mustaine, E.E. (2003), “College students’ lifestyles and self-protective
behaviors: further considerations of the guardianship concept in routine activity theory”,
Criminal Justice and Behavior, Vol. 30 No. 3, pp. 302-327.
Tseloni, A., Wittebrood, K., Farrell, G. and Pease, K. (2004), “Burglary victimization in
England and Wales, the United States, and the Netherlands: a cross-national comparative test
of routine activities and lifestyle theories”, British Journal of Criminology, Vol. 44 No. 1, pp.
61-91.
United Nations Office on Drugs and Crime. (2013), “Draft Comprehensive Study on
Cybercrime”, available at: https://www.unodc.org/documents/organized-
crime/UNODC_CCPCJ_EG.4_2013/CYBERCRIME_STUDY_210213.pdf (Accessed 1 June
2020).
Vakhitova, Z.I., Reynald, D.M. and Townsley, M. (2016), “Toward the adaptation of routine
activity and lifestyle exposure theories to account for cyber abuse victimization”, Journal of
Contemporary Criminal Justice, Vol. 32 No. 2, pp. 169-188.
Williams, E.J., Hinds, J. and Joinson, A.N. (2018), “Exploring susceptibility to phishing in
the workplace”, International Journal of Human-Computer Studies, Vol. 120, pp. 1-13.