Article

Artificial intelligence and counseling: Four levels of implementation

Russell Fulmer
Northwestern University
Abstract
Artificial Intelligence (AI) is increasingly prominent in public, academic, and clinical spheres. A widening research base is extending AI's reach into the counseling profession. This article defines AI and its relevant subfields, provides a brief history of psychological AI, and suggests four levels of implementation in counseling, corresponding to time orientation and influence. Implications of AI apply to counseling ethics, existentialism, clinical practice, and public policy.
Keywords
artificial intelligence, artificial intelligence and ethics, artificial intelligence and existentialism,
chatbots, psychological artificial intelligence
Artificial Intelligence (AI) is expected to play an influential role in the mental health care
of the future (Luxton, 2014, 2016). Many theorists and researchers predict that AI will shape the existential future of life on earth (Barrat, 2015; Bostrom, 2014; Kurzweil, 2014; Müller, 2016), with special implications for jobs and careers (Ross, 2017). The late physicist
Stephen Hawking discussed AI potentially bringing about the end of humanity, stressing
the importance of enacting safety measures including raising awareness and a deepened
understanding of the risks, challenges, and short- and long-term impacts of AI develop-
ment (Hawking, Russell, Tegmark, & Wilczek, 2014). In 2016, some of the world’s larg-
est companies formed an alliance to help ensure that AI develops in a beneficent manner.
Amazon, Apple, DeepMind, Google, Facebook, IBM, and Microsoft are founding part-
ners in the “Partnership on Artificial Intelligence To Benefit People and Society,” a col-
laboration that promotes interdisciplinary inclusiveness in AI and its societal impact
Corresponding author:
Russell Fulmer, Northwestern University, 618 Library Place, Evanston, IL 60201, USA.
Email: [email protected]
(Gaggioli, 2017a). This partnership aims to bring together activists and experts in other fields, including psychology, to discuss AI's current and future role and impact on society.
Efforts are thus being made to approach AI as a societal shift with multidisciplinary
implications. Specifically, the developers of AI are prudently seeking input from mental
health professionals, as the psychological sciences have played a central role in AI devel-
opment since its formal inception (Frankish & Ramsey, 2014).
Counselors have long forecast that AI would infiltrate their profession (Illovsky, 1994; Sharf, 1985), but only within the past decade have improvements in computer
processing power and natural language processing ability—along with advancements in
artificial neural networks—brought about a new wave of AI ability (Hirschberg &
Manning, 2015; Kurzweil, 2006; Russell & Norvig, 2003). These advancements have
positioned AI in the spotlight. The Artificial Intelligence Index (2017) Annual Report
states, “Artificial Intelligence has leapt to the forefront of global discourse, garnering
increased attention from practitioners, industry leaders, policymakers, and the general
public” (p. 5). AI research is advancing so quickly that, the same report notes, “even experts have a hard time understanding and tracking progress across the field” (p. 5). AI applications already assist health-care professionals with clinical train-
ing, treatment, assessment, and clinical decision-making (Hamet & Tremblay, 2017;
Luxton, 2014). AI has become a vast, interdisciplinary field that often intersects with
counseling. One purpose of this article is to review AI progress in domains relevant to
clinical counseling.
What AI actually is remains a deceptively complex question, largely because defining intelligence itself is challenging (Gardner, 2017; Monnier, 2015). Before explaining cur-
rent implementations and future implications for the counseling profession, I will define
and explain relevant terms and concepts associated with AI. Next, I will review the past,
present, and future of AI in relation to counseling. Finally, I will propose four metalevels of AI implementation in the counseling profession: one historical, one current, one possible in the near future, and one conceivable in the long term. Each theoretical level reflects increasing relevance, capability, and influence of AI on the counseling profession.
A useful starting point comes from intelligence researcher Max Tegmark (2017), who states that intelligence is the “ability to accomplish complex goals” (p. 39). Building on this, I offer the following definition of AI: the ability of non-biological mechanisms to accomplish goals. The qualifier “complex” is dropped from Tegmark's definition because intelligence is not a dichotomous concept; rather, both simple and complex goals can be attained. Intelligence in its rudimentary or advanced states occupies different points on a single continuum, differing quantitatively rather than in kind. AI is akin to an operating system, like
the human brain. Indeed, neuroscience has informed a substantial portion of prevailing
AI research (Hassabis, Kumaran, Summerfield, & Botvinick, 2017; Lecun, Bengio, &
Hinton, 2015). The embodiment of AI can take various forms, from a computer screen
avatar to a robot.
If AI one day advances to the level of competent counseling practice, it will be through algorithms, the underlying mechanisms that drive machine learning. What culmi-
nates in a computer program besting a world champion Go player or, potentially, an AI
employing a counseling technique, begins with a set of logic-driven instructions detail-
ing how a task should be performed. The notion of an algorithm does not lend itself well
to a rigorous definition (Gurevich, 2012); however, Pedro Domingos (2015) provides a
constitutive explanation of an algorithm as “a sequence of instructions telling a computer
what to do” (p. 1). AI is a broad area, machine learning is a subfield, and algorithms are
specific operations—like written communications that can both therapeutically inform
and give conversational voice to the AI.
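By way of illustration only (this example is invented here, not drawn from the AI-counseling literature), the short Python sketch below takes Domingos' definition literally: a sequence of instructions that lets a computer learn a simple decision rule from labeled examples, the elementary pattern underlying machine learning.

```python
# An algorithm in Domingos' (2015) sense: "a sequence of instructions
# telling a computer what to do." Here the instructions implement a
# simple machine-learning rule: a perceptron that learns to separate
# two classes of points. Illustrative sketch only.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights w and bias b from (features, label) pairs."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            prediction = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = label - prediction      # -1, 0, or 1
            w[0] += lr * error * x1         # nudge the rule toward
            w[1] += lr * error * x2         # the correct answer
            b += lr * error
    return w, b

# Toy data: points above the line x2 = x1 are labeled 1.
data = [((0, 1), 1), ((1, 2), 1), ((2, 1), 0), ((1, 0), 0)]
weights, bias = train_perceptron(data)
print(weights, bias)
```

The point of the sketch is the hierarchy described above: the program as a whole is AI in the broad sense, the learning loop is machine learning, and each line belongs to the algorithm.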
Because counseling is a field heavily invested in human conversation, the Turing test may prove pivotal when considering counseling implementation, ethics, working conditions, and accessibility.
Perception is reality to many people, and counselors would be well served to monitor public perception of psychological artificial intelligence. In doing so, counselors could decide that using psychological AI as a supplement to traditional counseling may benefit clients and the profession alike. To a small degree, chatbots like ELIZA have mimicked counseling skills for some time; counselors themselves may disagree that such mimicry constitutes counseling. However, if or when the public views psychological AI as roughly synonymous with counseling, counselors would be wise to pay heed.
Level 1: Historical
Historical AI implementations in counseling did not establish a professional relationship
and likely neither empowered nor helped people accomplish their goals to any signifi-
cant degree. Traditionally, counselors have made little use of artificial intelligence.
Connections drawn between the two fields are indistinct and indirect. First-level interac-
tion involved chatbots showcasing rudimentary applications of natural language process-
ing (NLP), a field of AI concerned with understanding and modeling human language
(Tanana, Hallgren, Imel, Atkins, & Srikumar, 2016). Since its inception in the 1960s, NLP has advanced to the point that complex statistical models, run on powerful processors, can assess the probabilities of word sequences, inflection, and semantics in large samples of natural language (Tanana et al., 2016). These progressions have led to AI-assisted programs designed for therapeutic use, in which AIs have been programmed, for example, to simulate mental health patients. Although imperfect, these programs show some therapeutic efficacy and warrant further research (D'Alfonso et al., 2017; Luxton, 2014).
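To illustrate how rudimentary this first-level NLP was, the following Python sketch reconstructs the core idea of an ELIZA-style responder: keyword patterns and scripted reflections, with no model of meaning. The rules are invented for this illustration and are not Weizenbaum's (1966) original script.

```python
import re

# ELIZA-style responder: rudimentary NLP via keyword patterns and
# scripted reflections (illustrative rules, not Weizenbaum's script).
RULES = [
    (r"i feel (.+)", "Why do you feel {0}?"),
    (r"i am (.+)", "How long have you been {0}?"),
    (r"my (.+)", "Tell me more about your {0}."),
]

def respond(utterance):
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback when no pattern matches

print(respond("I feel anxious about work"))
# -> Why do you feel anxious about work?
```

Such a program neither understands the client nor remembers the conversation; it merely reflects surface patterns, which is why first-level implementations established no professional relationship.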
Level 2: Contemporary
Modern AI implementations in counseling do not establish a professional relationship and empower clients to an unknown degree, but they likely help clients accomplish their goals to some extent. Level two is marked by AI-assisted implementations in counseling backed by
research. Contemporary implementations take two major forms. The first is text-based agents like Woebot, which employs Cognitive Behavioral Therapy (CBT) by conveying CBT self-help techniques in conversation-like interactions with users.
Woebot has been shown to alleviate symptoms of depression and anxiety in young adults
(Fitzpatrick, Darcy, & Vierhile, 2017). Another example is Tess, a psychological AI with an integrative theoretical orientation that includes conversational, informational, and CBT-like approaches. Research suggests that Tess can reduce depression and anxiety in college
students by providing interventions applicable to real life through AI-generated conversations
(Fulmer, Joerin, Gentile, Lakerink, & Rauws, 2018). The second form is through virtual real-
ity. Ellie, termed a virtual human interviewer, combines virtual reality with affective comput-
ing (Gaggioli, 2017b). Appearing on a screen as a virtual human, Ellie is capable of analyzing
a client’s verbal responses, facial expressions, and vocal intonations (Darcy, Louie, & Roberts,
2016). In many respects, Ellie represents the higher end of today’s therapeutic AI applica-
tions. Noteworthy are Ellie’s abilities in assessment, as her capacity to identify distress indi-
cators may prove beneficial in the diagnosis and treatment of Posttraumatic Stress Disorder
(PTSD), in addition to depression and anxiety (DeVault et al., 2014).
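As a toy illustration of the conversational pattern such text-based agents follow (emphatically not Woebot's or Tess's actual implementation), the Python sketch below elicits a thought, flags a possible cognitive distortion with simple keyword cues, and prompts a reframe:

```python
# Toy sketch of a text-based CBT exchange: detect a possible cognitive
# distortion by keyword and prompt a reframe. Illustrative only; not
# the implementation of any named agent.
DISTORTION_CUES = {
    "always": "all-or-nothing thinking",
    "never": "all-or-nothing thinking",
    "should": "a 'should' statement",
    "everyone": "overgeneralization",
}

def cbt_turn(thought):
    """One conversation-like CBT self-help exchange."""
    for cue, label in DISTORTION_CUES.items():
        if cue in thought.lower():
            return (f"That sounds like {label}. What would you "
                    "tell a friend who had this thought?")
    return "What evidence do you have for and against that thought?"

print(cbt_turn("I always mess things up"))
# -> That sounds like all-or-nothing thinking. ...
```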
Today’s AI implementations show the utility of a wide range of counseling theories,
with CBT being most prominent. There is movement beyond strictly text-based com-
munication into visual and auditory domains as well as AI-based assessments that may
lead to greater reliability in diagnosis (DeVault et al., 2014; Hahn, Nierenberg, &
Whitfield-Gabrieli, 2016). Research is improving data sensors, NLP, and general machine learning, both by applying more complex models to communicative and behavioral input and output and by further elucidating human sensory, perceptual, and learning processes so that they may be implemented in computers. Coupled with research attesting to therapeutic-
AI efficacy, AI may play a greater role in the counseling of the future. Levels three and
four represent how that future may come to fruition.
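The statistical models mentioned above can be illustrated in their simplest form: a bigram model that estimates the probability of the next word from counts in a sample of natural language. The Python sketch below is a toy version, far simpler than the models compared by Tanana et al. (2016):

```python
from collections import Counter, defaultdict

# Toy bigram model: estimate P(next_word | word) from counts in a
# small sample of natural language. Illustrative only.
sample = "i feel sad today and i feel alone".split()

counts = defaultdict(Counter)
for prev, nxt in zip(sample, sample[1:]):
    counts[prev][nxt] += 1

def next_word_prob(prev, nxt):
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

print(next_word_prob("i", "feel"))    # 1.0 in this tiny sample
print(next_word_prob("feel", "sad"))  # 0.5
```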
Level 3: The medium-to-distant future and the dawn of artificial general intelligence
Level three is characterized by the onset of Artificial General Intelligence (AGI). AIs
at this level may possess the expertise necessary to form professional relationships
with clients. Additionally, an AGI would have the capability of empowering and help-
ing clients accomplish their goals. Modern AI is known for having narrow intelligence. Surveying experts on when high-level machine intelligence might arrive, Müller and Bostrom (2016) report:
The median estimate of respondents was for a one in two chance that high-level machine
intelligence will be developed around 2040–2050, rising to a nine in ten chance by 2075.
Experts expect that systems will move on to superintelligence in less than 30 years thereafter.
They estimate the chance is about one in three that this development turns out to be “bad” or
“extremely bad” for humanity. (p. 555)
Summary
Each implementation level sees AI woven more deeply into the fabric of counseling (see Table 1). The past saw nominal AI implementation in the counseling field, but the present has seen an AI resurgence. There are strong indications of more AI research in
the future as the European Commission, U.S., and China devote billions of dollars to
funding such endeavors (Cath, Wachter, Mittelstadt, Taddeo, & Floridi, 2018; Kelly,
2018; Larson, 2018). Whether the research surge brings about levels three and four
remains to be seen.
Discussion
This article has sought to define and explain AI concepts, to discuss how AI pertains to
clinical counseling, and to present AI-in-counseling implementation levels from a theo-
retical viewpoint. Four metalevels of implementation were presented. The levels corre-
spond to time orientation, with level one relating to historical and level four to future
implementations affecting humanity in the long term. I acknowledge that the future is unknowable to some degree, but just as climate scientists forecast a warming world from patterns in data, AI prognostications can be grounded in current research (Hulme, 2016).
Artificial intelligence and counseling already interface. In the future, the extent to
which they interweave will depend largely on AI’s rate of growth, which, if current
trends continue, will fall somewhere between sequential and exponential. With exponen-
tial growth, for example, an AI capable only of posing elementary questions one day
could learn advanced assessment, diagnosis, and ways to embody the ethical, cognitive,
emotional, and relational characteristics of expert therapists (Jennings, Sovereign,
Bottorff, Mussell, & Vye, 2005; Skovholt & Jennings, 2004) essentially overnight.
Exponential growth is not certain, but explosive growth is certainly plausible (Pratt,
2015; see Kurzweil, 2006, for a technical explanation of how this might occur).
The presence of AI and high technology in counseling looks set to continue, and even current-level AI implementations raise a host of practice-oriented and ethical questions: how and when AI use is appropriate or effective; to what degree it can be used in place of a human counselor; how it may affect a person seeking human connection via counseling; whether data produced during AI use can be stored in a hacker-proof manner; and whether counselors and clients are adequately trained and informed about AI practices.
At present, the counseling literature contains a paucity of articles addressing AI from a descriptive, correlational, or experimental standpoint. More research could inform clinical
practice if clinicians employ AI-assisted supplements, such as the psychological AI Tess,
to help their clients. Research could also inform thought-leadership if a need arises for
the ACA to address AI at a public policy level. Perhaps the most immediate need for
research is in counseling ethics.
Using Green’s (2018) outline of ethical concerns surrounding AI as a guide, research
must focus on the ways in which AI counseling services can avoid negative side effects,
overgeneralizations, and potentially harmful exploration of strategies and techniques.
Further, attention must be dedicated to ensuring AI functional transparency, or ensuring
that AI actions can be understood by those designing, manufacturing, implementing, and
interacting with it. Another ethical concern revolves around data security and privacy
practices when implementing AI services. Finally, investigations should seek to deter-
mine the extent to which both counselors and clients need to be versed in AI technology
and implementation to ensure fairness, beneficence, and non-maleficence in practice and
counselor and client safety and wellbeing (Green, 2018).
The counseling community needs further information about the effect AI services
could have on people specifically seeking out human interactions because they feel
unheard, unseen, and unworthy of the care of others. The shift from human to human-like
interactions in counseling, as well as in other fields, may bring about a plethora of uncharted existential questions. Coupled with the potential onslaught of AI-induced unemployment, socioeconomic inequality, growing technological dependency, and human de-skilling, these
existential questions may warrant closer attention and preparation by researchers and
those who specialize in human emotion and crisis, such as counselors (Green, 2018). AI
brings power and influence that can be abused. Research helps prepare the profession to
address ethical questions when they arise.
More research is needed on psychological artificial intelligence. Considering the field's rapid growth, there is a dearth of research on the topic, and the absence of literature on its ethical ramifications is especially noteworthy. This article fills a research gap at the theoretical
level, offering a taxonomy with the proposed levels of implementation and providing
structure for forthcoming literature. For example, the nature of a clinical ethical dilemma
will look different at level one compared to level four. Theoretical pieces carry inherent
advantages and limitations. Advantages include providing constitutive definitions to
guide future inquiry and high-level context to frame AI implementation and influence on
the field. A limitation is the lack of specificity and clinical examples inherent in an abstract, theoretical treatment.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this
article.
ORCID iD
Russell Fulmer https://orcid.org/0000-0002-4582-5167
References
Abdul-Kader, S. A., & Woods, J. (2015). Survey on chatbot design techniques in speech conversa-
tion systems. International Journal of Advanced Computer Science and Applications, 6(7),
72–80.
Agar, N. (2016). Don’t worry about superintelligence. Journal of Evolution & Technology, 26(1),
73–82.
Arel, I., Rose, D. C., & Karnowski, T. P. (2010). Deep machine learning—A new frontier in arti-
ficial intelligence research [research frontier]. IEEE Computational Intelligence Magazine,
5(4), 13–18. doi: 10.1109/mci.2010.938364
Artificial Intelligence Index. (2017). 2017 Annual Report. Stanford, CA: Author.
Barrat, J. (2015). Our final invention: Artificial intelligence and the end of the human era. New
York, NY: Thomas Dunne Books.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford, UK: Oxford University
Press.
Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2018). Artificial intelligence and
the “good society”: The US, EU, and UK approach. Science and Engineering Ethics, 24(2),
505–528.
Cherniss, C., Extein, M., Goleman, D., & Weissberg, R. P. (2006). Emotional intelligence: What
does the research really indicate? Educational Psychologist, 41(4), 239–245. doi: 10.1207/
s15326985ep4104_4
Copeland, B. J. (1998). Artificial intelligence: A philosophical introduction. Malden, MA:
Blackwell.
D'Alfonso, S., Santesteban-Echarri, O., Rice, S., Wadley, G., Lederman, R., Miles, C., . . . Alvarez-Jimenez, M. (2017). Artificial intelligence-assisted online social therapy for youth mental health. Frontiers in Psychology, 8, 796. doi: 10.3389/fpsyg.2017.00796
Darcy, A. M., Louie, A. K., & Roberts, L. W. (2016). Machine learning and the profession of
medicine. JAMA, 315(6), 551–552. doi: 10.1001/jama.2015.18421
Davies, P. H. (2002). Ideas of intelligence. Harvard International Review, 24(3), 62–66.
Hawking, S., Russell, S., Tegmark, M., & Wilczek, F. (2014, May 1). Stephen Hawking: “Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?” The Independent. Retrieved from https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html
Hirschberg, J., & Manning, C. D. (2015). Advances in natural language processing. Science,
349(6245), 261–266.
Hulme, M. (2016). 1.5 °C and climate research after the Paris Agreement. Nature Climate Change,
6(3), 222–224.
Illovsky, M. E. (1994). Counseling, artificial intelligence, and expert systems. Simulation &
Gaming, 25(1), 88–98. doi: 10.1177/1046878194251009
Jennings, L., Sovereign, A., Bottorff, N., Mussell, M. P., & Vye, C. (2005). Nine ethical values of
master therapists. Journal of Mental Health Counseling, 27(1), 32–47.
Kaplan, D. M., Tarvydas, V. M., & Gladding, S. T. (2014). 20/20: A vision for the future of coun-
seling: The new consensus definition of counseling. Journal of Counseling & Development,
92(3), 366–372. doi: 10.1002/j.1556-6676.2014.00164.x
Kaplan, J. (2015). Humans need not apply: A guide to wealth and work in the age of artificial intel-
ligence. New Haven, CT: Yale University Press.
Kelly, É. (2018, April 26). EU to boost artificial intelligence research spend to €1.5B. Science
Business. Retrieved from https://sciencebusiness.net/framework-programmes/news/eu-
boost-artificial-intelligence-research-spend-eu15b
Kurzweil, R. (2006). The singularity is near: When humans transcend biology. London, UK: Penguin.
Kurzweil, R. (2014). How to create a mind: The secret of human thought revealed. New York,
NY: Penguin Books.
Larson, C. (2018, February 8). China’s massive investment in artificial intelligence has an insidi-
ous downside. Science. Retrieved from http://www.sciencemag.org/news/2018/02/china-s-
massive-investment-artificial-intelligence-has-insidious-downside
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. doi:
10.1038/nature14539
Legg, S., & Hutter, M. (2007). Universal intelligence: A definition of machine intelligence. Minds
and Machines, 17(4), 391–444. doi: 10.1007/s11023-007-9079-x
Lin, P., Abney, K., & Bekey, G. A. (2014). Robot ethics: The ethical and social implications of
robotics. Cambridge, MA: MIT Press.
Luxton, D. D. (2014). Artificial intelligence in psychological practice: Current and future applica-
tions and implications. Professional Psychology: Research and Practice, 45(5), 332–339.
Luxton, D. D. (2016). Artificial intelligence in behavioral and mental health care. Amsterdam, the
Netherlands: Elsevier.
MacDorman, K. F., & Kahn, P. J. (2007). Introduction to the special issue on psychological bench-
marks of human-robot interaction. Interaction Studies: Social Behaviour and Communication
in Biological and Artificial Systems, 8(3), 359–362. doi: 10.1075/is.8.3.02mac
Malle, B. F. (2015). Integrating robot ethics and machine morality: The study and design of moral
competence in robots. Ethics and Information Technology, 18(4), 243–256. doi: 10.1007/
s10676-015-9367-8
Mauldin, M. L. (1994, August). ChatterBots, TinyMuds, and the Turing test: Entering the Loebner
prize competition. Proceedings of the twelfth national conference on artificial intelligence
(AAAI-94) (pp. 16–21). Menlo Park, CA: AAAI Press. Retrieved from https://pdfs.semanticscholar.org/bdd4/9b4a0b7de03b00412e3b807a855504e1d3af.pdf
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth
summer research project on artificial intelligence, August 31, 1955. AI Magazine, 27(4), 12.
doi: 10.1609/aimag.v27i4.1904
Monnier, M. (2015). Difficulties in defining social-emotional intelligence, competences and
skills—A theoretical analysis and structural suggestion. International Journal of Research
for Vocational Education and Training, 2(1), 59–84.
Müller, V. C. (2016). Risks of artificial intelligence. Boca Raton, FL: Chapman & Hall.
Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert
opinion. Fundamental Issues of Artificial Intelligence, Synthese Library 376, 555–572. doi: 10.1007/978-3-319-26485-1_33
Pratt, G. A. (2015). Is a Cambrian explosion coming for robotics? Journal of Economic
Perspectives, 29(3), 51–60. doi: 10.1257/jep.29.3.51
Reese, B. (2018). The fourth age: Smart robots, conscious computers, and the future of humanity.
New York, NY: Atria Books.
Ross, A. (2017). The industries of the future. London, UK: Simon & Schuster.
Russell, S. J., & Norvig, P. (2003). Artificial intelligence: A modern approach (2nd ed.). Upper
Saddle River, NJ: Prentice Hall.
Santos-Lang, C. C. (2015). Moral ecology approaches to machine ethics. In S. P. van Rysewyk
& M. Pontier (Eds.), Machine medical ethics (pp. 111–127). Cham, Switzerland: Springer
International. doi: 10.1007/978-3-319-08108-3_8
Saygin, A. P., Cicekli, I., & Akman, V. (2000). Turing test: 50 years later. Minds and Machines,
10(4), 463–518.
Schroeder, M. J. (2017). The case of artificial vs. natural intelligence: Philosophy of information as a
witness, prosecutor, attorney, or judge? Proceedings, 1(3), 111. doi: 10.3390/is4si-2017-03972
Sharf, R. S. (1985). Artificial intelligence: Implications for the future of counseling. Journal of
Counseling & Development, 64(1), 34–37. doi: 10.1002/j.1556-6676.1985.tb00999.x
Skovholt, T. M., & Jennings, L. (2004). Master therapists: Exploring expertise in therapy and
counseling. Boston, MA: Pearson/Allyn & Bacon.
Sternberg, R. J. (1985). Implicit theories of intelligence, creativity, and wisdom. Journal of
Personality and Social Psychology, 49(3), 607–627. doi: 10.1037/0022-3514.49.3.607
Tanana, M., Hallgren, K. A., Imel, Z. E., Atkins, D. C., & Srikumar, V. (2016). A comparison
of natural language processing methods for automated coding of motivational interviewing.
Journal of Substance Abuse Treatment, 65, 43–50. doi: 10.1016/j.jsat.2016.01.006
Tavani, H. (2018). Can social robots qualify for moral consideration? Reframing the question
about robot rights. Information, 9(4), 73. doi: 10.3390/info9040073
Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. New York, NY:
Random House.
Turing, A. (1950). Computing machinery and intelligence. Mind, 59, 433–460.
Wallach, W., & Allen, C. (2010). Moral machines: Teaching robots right from wrong. Oxford,
UK: Oxford University Press.
Warwick, K., & Shah, H. (2014). Good machine performance in Turing’s imitation game. IEEE
Transactions on Computational Intelligence and AI in Games, 6(3), 289–299.
Weizenbaum, J. (1966). ELIZA—A computer program for the study of natural language commu-
nication between man and machine. Communications of the ACM, 9(1), 36–45.
Yampolskiy, R. V., & Fox, J. (2012). Artificial general intelligence and the human mental model.
In A. H. Eden, J. H. Soraker, & E. Steinhart (Eds.), Singularity hypotheses (pp. 129–145).
Berlin, Germany: Springer.
Author biography
Russell Fulmer is a faculty member with the Counseling@Northwestern program through The
Family Institute at Northwestern University. His central research interests involve psychological
artificial intelligence (AI) and the psychodynamic system. He recently published a randomized
controlled trial that showed the efficacy of an AI mental health support agent (Tess) to help college
students battle anxiety and depression. His current work examines ethical issues faced by clini-
cians when using psychological AI in practice.