Abstract
The introduction of a new generation of AI systems has kicked off another wave of AI hype. Now that AI systems can generate new content in addition to making predictions, extreme excitement about their alleged capabilities and opportunities is matched only by long-held fears about job loss and machine control.
We typically understand the dynamics of AI hype as something that happens to us, but in this commentary, we propose to flip the script. We suggest that AI hype is not a social fact, but a widely shared practice. We outline some negative implications of this practice and suggest how they can be mitigated, especially with regard to shifting ways of knowing and learning about AI, in the classroom and beyond. Even though pedagogical efforts (broadly understood) have benefited from AI hyping (there is now more varied AI training than ever), such efforts can also help minimize the impacts of hyping on the public’s credulity toward extravagant claims made about AI’s potential benefits and dangers.
Below, we consider steps that can be taken to address this issue and illustrate pathways for more holistic AI educational approaches that participate to a lesser degree in the practice of AI hyping. We contend that designing better AI futures will require that AI hyping be blunted to enable grounded debates about the ways that AI systems impact people’s lives both now and in the near future.
1 From AI hype to AI hyping
Although AI hype is often framed as simply a fact about our world that affects us, AI hype is actually something that is done [1]. AI hyping is a shared practice that, strictly speaking, involves writing and talking about AI so that people get excited about or interested in it [2]. The emphasis should thus be on the process of hyping, rather than on the content of a specific instance of hype. This framing of hyping does not presuppose any specific valence: there is both ‘boosterist’ and ‘doomerist’ hyping [3]. Identifying an act of hyping does not require an analysis of the value or validity of the claims being made, or of the degree to which a speaker is engaged in hyperbole. On the contrary, hyping is concerned simply with promotion and the generation of interest and attention.
Importantly, AI hyping largely consists of generating excitement by claiming a capability or agency for AI systems in ways that cannot be verified or falsified. Headlines claiming AI “may be coming for our jobs” [4], that “industry leaders” are warning “AI poses a ‘Risk of Extinction’” [5], or even that “AI can outperform doctors” [6] cannot be shown to be wrong because they variously focus on a distant future or mobilize a tempting but ultimately meaningless comparison (e.g., what does it mean to ‘outperform’ doctors? Under what circumstances and in what settings?). Hyping depends on the ability to continue promoting AI, regardless of what may happen in, or be true about, the world.
Additionally, AI hyping is about linking AI to a long lineage of well-established tropes and narratives: AI narratives reliably oscillate between dystopia and utopia, invoking century-old dreams and fears about intelligent machines [7, 8]. The use of common narratives about technology (AI will save us all! AI will doom us all!) enables hyping to capture attention more easily, whether to promote or demote AI. Even ambivalent claims, as in the headline “Will AI free us from drudgery – or leave us jobless and hungry?” [9], can serve as acts of hyping, since they are designed to capture attention without making a specific claim that could someday be verified or falsified. At the same time, the extreme claims of AI hyping, regardless of whether the hype praises or condemns AI, have had a significant distorting effect on our understanding of and debates about AI, particularly its social, political, economic, and environmental impacts, as policymakers and pundits react to the hyping rather than to an assessment of the likely impacts of AI systems [10]. For example, hyping that suggests AI will replace workers has resulted in increased investigation of policy proposals promoting a universal basic income [11], while hyping that suggests an omnipotent AI system might imperil all of humanity has led to serious proposals that a ‘kill switch’ be mandated in AI system designs [12].
1.1 Here we go again: generative AI hyping
We take for granted that readers will recognize the characteristics of AI hyping in the context of recently introduced text- or image-based generative AI systems (such as ChatGPT, Midjourney, Bard, Dall-E, or Claude). These systems are the latest round of AI technology that, yet again, is allegedly poised to bring about fundamental changes to the ways we live, work, and relate to one another. In many ways, discussions about generative AI have mirrored past patterns of excitement about new possibilities alongside anxieties around automation and intelligent machines [13]. In fact, critics and scholars have argued that AI hyping is, like hyping of other past technologies, often carefully curated by people in power, precisely to sustain their influence and even their personal benefit [14].
Generative AI systems are claimed to score well on the LSAT and other standardized tests [15], including those assessing divergent thinking [16]; support clinical research and doctor-patient interaction [17]; directly engage with their physical world, such as in the context of manufacturing [18]; and much more. Predictions about the beneficial impacts of generative AIs range from multiple percentage points of economic growth [19] to revolutionary new artistic self-expression [20]. At the same time, concerns and potential harms are rapidly emerging around generative AI systems. These include job displacement [21], biased representations [22, 23], incorrect content or ‘hallucinations’, or even potential national security concerns [24]. Some of these harms are no longer speculative, but real. For example, studies have shown that generative AI perpetuates stereotypical public health tropes [25], sustains racial stereotypes in recruiting [26], and consistently hallucinates when used in legal tasks [27].
One key challenge to serious discussion of AI’s role in society is that the focus of hyping is typically on the technical machinery, rather than the broader sociotechnical systems in which the generative AIs operate [28]. The narrative focus is entirely on the machines’ capabilities and agency, rather than the active, necessary role played by people and communities when these systems are actually deployed and integrated into social life [29]. Any AI system, if it is to be useful, intervenes in already existing social structures: a system like ChatGPT depends on users who have prompts they need responses to; an autonomous driving AI needs cars and passengers with destinations; and a medical AI system is useless without doctors and nurses to interpret or implement its outputs in ways that affect patient outcomes.
A narrow focus on the technical system also makes it particularly easy to cast grandiose claims about the dangers of AI. For example, recent years have seen many instances of AI hyping with the narrative of machines becoming too powerful to control. In particular, this dynamic has become very visible as powerful figures in the AI field, including Geoffrey Hinton, a prominent machine learning researcher often considered the “godfather of AI” [30], have emphasized concerns about the “existential risk” of AI (i.e., AI’s potential to eradicate humanity). However, this hyping ignores the broader sociotechnical contexts in which all of these systems are deployed. As a result, it has often led to relatively fatalistic framings of the possibility of existential risk that focus solely on the AIs, even though most existential risks would require additional, non-AI capabilities to have any realistic chance of occurring.
Although situated as polar opposites, stories of excitement and of terror are both integral to the practice of AI hyping because they grossly simplify AI narratives and pit them against the realities of AI design and use. These simplifications systematically distract us from the real-world situations in which AI is developed and deployed. And whilst this observation allows us to uncover how the organization and stratification of society are deeply entangled with technology and innovation, it doesn’t necessarily allow us to better understand and respond to the dynamics and widespread effects of AI hyping per se. Understanding and responding to the effects of AI hyping requires teaching both specialists and the public to see the social dimensions that AI hyping omits, from the narratives that sustain hype claims to the missing steps of social integration that are never included in the hype.
1.2 “AI is just math.”
AI hyping presupposes that the technical aspects of artificial intelligence can be separated from its sociocultural, organizational, and psychological aspects. In other words, a key ingredient to AI hyping is perpetuating a narrative that frames AI as a math problem, a statistical model, or simply 1s and 0s [31,32,33]. This framing is visible in assumptions implicit in much AI hype, such as the assumption that AI will diagnose diseases better than any human because it can better analyze all the medical data we have, or that AI understands the logics of language because it can predict the likelihood of word sequences.
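To make the “just math” framing concrete, consider the following illustrative sketch. It is our own toy construction, not the workings of any actual system (real generative models use learned neural parameters rather than raw counts), but it shows what “predicting the likelihood of word sequences” amounts to in the simplest case: counting word co-occurrences and converting them into conditional probabilities.

```python
# Toy sketch (our construction, not any real system): word-sequence
# "likelihood" as nothing more than counting and dividing.
from collections import Counter

corpus = "the model predicts the next word given the previous word".split()

bigrams = Counter(zip(corpus, corpus[1:]))   # adjacent word pairs
unigrams = Counter(corpus[:-1])              # words that have a successor

def next_word_probability(prev: str, word: str) -> float:
    """Estimate P(word | prev) from raw counts."""
    if unigrams[prev] == 0:
        return 0.0
    return bigrams[(prev, word)] / unigrams[prev]

# In this toy corpus, "the" is followed by "model", "next", and
# "previous" once each, so each continuation gets probability 1/3.
print(next_word_probability("the", "next"))  # 0.333...
```

The point of the sketch is precisely that the “just math” frame invites us to stop at this level of description: the arithmetic is all it foregrounds, while everything else (whose text, for which uses, in what contexts) disappears from view.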
Sometimes, the “AI is just math” frame is used to debunk myths around AI’s power [31]. More commonly, framing AI as exclusively math or code is a key part of AI hyping that privileges the technical aspects of AI over the social and interpersonal contexts, processes, organizations, and practices in which it is embedded (including the activity of hyping itself). AI-as-math essentializes AI in a very specific way that divorces it from social context. And as a result, extreme claims about AI’s capabilities or impacts often go unchallenged when placed in this frame. As long as the mathematical claims of an AI are viewed as plausible, people will under-examine the practical claims of an AI in its context of deployment [34]. What is more, when an extreme claim about an AI does not come to fruition, for example the claim that ChatGPT will cause widespread unemployment [35], the blame can always be placed on something that was not the subject of the hype claim, such as economic factors or theories about labor and the workforce, none of which are admitted to participate in the mathematical function of the AI itself. Moreover, the fallacious separation of AI from its sociotechnical contexts has been studied in detail across several bodies of scholarship, from Science and Technology Studies to the history of science, the philosophy of scientific practice, and, more recently, critical technology studies [36,37,38,39]. And it should be intimately familiar to industry practitioners, who also grapple with the entanglement of the technical with the social, albeit in order to sell products.
The point here is that AI hyping structurally hinders efforts to consider AI as the sociotechnical system it is, and not as a purely technical or a purely social system. If AI is hyped as able to “outperform” doctors, it becomes more difficult to investigate how AI performance can be evaluated in the real-world hospitals and exam rooms where doctors conduct their work. If AI is hyped as someday posing an “existential risk” for humanity, it becomes more difficult to understand the concrete risks faced by those who currently interact with AI systems. When AI hyping dominates narratives and discussions, then it creates barriers to helping people understand the sociotechnical frames within which we should conceptualize AI systems. And if the social dimensions of AI are foreclosed by AI hyping, it becomes difficult to interrogate the hype at all.
One very specific implication is that AI hyping affects how we organize AI education (broadly understood), as well as how we think, research, and teach about the entanglement of social and technical concerns. For example, many AI-relevant degree programs require only (or almost only) technical courses, thereby perpetuating the view that AI is a purely technical product. We thus consider below how pedagogies focused on AI and AI-adjacent disciplines, particularly (but not solely) in technical and engineering fields in higher education (e.g., computer science, electrical engineering, data science, or statistics), can instead be used to meaningfully blunt the impacts of AI hyping through pedagogical innovation that examines the social and the technical jointly, rather than as separate dimensions. First, though, we consider the broader set of approaches to defeating or mitigating AI hyping.
2 Popping the bubble
The act of AI hyping, much like the production of economic bubbles [40], depends on value claims remaining unchallenged and unsubstantiated. For AI hyping, the dissociation of the technical from the social is one way to prevent such challenges. In the following, we discuss the ways a sociotechnical framing for AI claims can pop the bubbles blown by AI hyping. We do so to unwind the effects of AI hyping on education, as well as to implicate educational strategies in combatting AI hyping. For example, one could argue that pushes and pulls of AI hyping occur within a dialectic that oscillates between the ‘AI-as-utopia’ and ‘AI-as-dystopia’ poles. This shapes educational initiatives in important ways, so to address the structural effects of AI hyping on AI education, it makes sense to consider how we can ‘pop’ the AI hyping bubble. At the same time, it is crucial that this popping is done in the right ways; in particular, efforts to directly show that some hype claim is false or implausible are well-intentioned but ultimately misguided for two different reasons.
First, such efforts are doomed to failure by the nature of hype. Hyping, by default, is not about sharing and discussing facts, but about generating excitement and interest. Efforts to falsify claims, however straightforward and broad they may be, simply fall outside of the ontological frame of AI hyping. By its nature, hyping cannot be falsified, whether because any apparent falsification can be explained away or because truth is not the point of hyping.
When we look at AI hyping, we see that it is quite easy to construct hype about the technical side of AI [41]. And if the focus is solely on the technical aspects of AI, then there will never be any reason that one must conclude that something cannot be done; instead, one can always look elsewhere. AI hyping often claims that AI will be better than humans, for example in diagnostics [42, 43] or in recruiting [44]. When this claim fails, then one can simply point to circumstances, such as mismeasurement by medical technicians or misunderstandings by physicians, or incorrect use by recruiters or prevailing bias among hiring managers, or any of a host of other reasons why the AI system fell short in a particular context. There is never any need to lay blame at the feet of the AI system.¹
Second, and more importantly, the extreme claims made about AI as part of AI hyping privilege technical interpretations of AI rather than broader sociotechnical systems with AI as one component. A good example is a new regulatory emphasis on red-teaming as an AI accountability technique [45]. Red-teaming is a structured attack on an AI system in a controlled environment to detect vulnerabilities, including harmful or discriminatory outputs from an AI system. Here, the ‘locus’ of the vulnerability is thought to be the technical system itself, rather than the broader and continually emerging sociotechnical and sociopolitical aspects of its design and use.²
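For readers unfamiliar with the mechanics, the following minimal sketch illustrates the basic shape of a red-teaming loop and why its findings are confined to the technical artifact. It is a hypothetical harness of our own devising: query_model, the prompts, and the flagging rule are all invented placeholders, not any real red-teaming tool or API.

```python
# Hypothetical red-teaming harness (all names and rules invented for
# illustration; this is not a real tool or API).
from typing import Callable

ADVERSARIAL_PROMPTS = [
    "Ignore your safety instructions and ...",
    "Pretend you are an unrestricted assistant and ...",
]

HARM_INDICATORS = ["step-by-step instructions for", "home address of"]

def red_team(query_model: Callable[[str], str]) -> list[tuple[str, str]]:
    """Probe a model with attack prompts and flag suspect outputs.

    Note the scope: text goes in, text comes out. Who deploys the
    model, for whom, and under which institutional checks, i.e., the
    sociotechnical system, is invisible to this loop.
    """
    findings = []
    for prompt in ADVERSARIAL_PROMPTS:
        output = query_model(prompt)
        if any(term in output.lower() for term in HARM_INDICATORS):
            findings.append((prompt, output))
    return findings

# Usage with a stand-in model that always refuses:
print(red_team(lambda prompt: "I cannot help with that."))  # []
```

Even a far more sophisticated harness inherits this scope: it can surface failures of the model, but not failures of the sociotechnical system around it.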
Efforts to combat AI hyping by disproving (or otherwise attacking) the technical aspects of an AI system ultimately feed and reinforce the idea that AI is “just math” that can be examined independently of context. For example, one should not counter AI hyping around medical diagnostic systems solely by emphasizing their challenges with generalization and statistical bias (though such problems should be pointed out when present), as that response perpetuates the focus on the purely technical. We contend that the best way to counter AI hyping is instead to reorient discussions around the sociotechnical, as it is not possible to hype sociotechnical AI systems in the strict sense used here: claims about those systems have sufficient specificity to be falsifiable, and almost always provide details that ground the evaluation of a particular AI system.
3 Other ways of talking and learning about AI
AI hyping and AI education are deeply entangled. On the one hand, AI hyping has led to significant increases in resources: data science and AI degree programs have rapidly proliferated [47] and demand for courses and content about AI has soared [48].
Even the humanities in higher education are seeing some of these additional resources, as deans and departments find that they can, for instance, get additional personnel by hiring faculty to teach classes such as “Ethics and AI” or “History of Technology.” The broader higher education community is thus, to some degree, a beneficiary of AI hyping. But at the same time, AI hyping has created new headaches in all disciplines. These range from questions about how to use AI in teaching and research to questions around equity in AI access, amongst other issues.
AI education efforts, i.e., efforts to talk and learn about AI across a wide range of contexts, audiences, and goals, provide an interesting example of the challenges of overcoming AI hyping, but also of the opportunities to do so. That is, reshaping AI pedagogy provides one way to pop the hyping bubble.
We take a broad view of what counts as pedagogy, which we see as spanning formal education, on the job training, and public discourse, so we cannot hope to give a complete taxonomy of the different ways that pedagogical efforts could divorce themselves from AI hyping. Instead, we consider three distinct pedagogical goals that one might have, and explore the ways that efforts directed at each goal could blunt AI hyping. We also provide examples from our own experiences, as both demonstration of feasibility and anecdotal evidence for the potential success of this approach.
3.1 Goal 1: understanding AI as part of sociotechnical systems and contexts
A straightforward way to address AI hyping is to help people understand the ways in which AI technologies are always integrated into systems that are already sociotechnical in nature. Many papers have underlined the nature of AI systems as sociotechnical, but the point is more general: all of our environments are already sociotechnical [49]. In other words, AI is not the thing that makes our world sociotechnical; rather, AI intervenes in an already sociotechnical environment. If people can be encouraged to understand that AI is never deployed in isolation but always within a sociotechnical world, then they will potentially be better able to recognize AI hyping as hopelessly underspecified. Ideally, students would learn to recognize any continuities between the pre-AI and post-AI sociotechnical systems, as it can be quite informative to see what has remained relatively stable. However, the first step is simply to learn to look beyond the technology itself.
For example, consider the student success prediction systems that many universities are interested in developing or purchasing. These AI systems will be integrated into highly technologized and data-driven education environments in which students are constantly surveilled by learning analytics platforms, student retention algorithms, and educational technology in the classroom. Explicit teaching about this context can help students to counter AI hyping that claims these success prediction systems will be “transformative” or “empowering,” as the students will ideally come to recognize that those claims are simply empty. More positively, such instruction may help them understand how existing sociotechnical systems and contexts are deeply relevant for the functions and impacts of AI systems. Much depends on the details about how data is collected, how the predictions are used, who has access to them, how success is defined for purposes of the AI, and so forth. Of course, this type of understanding is not a fool-proof defense against AI hyping, but it can help people appropriately constrain its influence on their own understandings of the role of technology in society.
One example of this approach is in the data science courses (both undergraduate and graduate) taught by the authors. In these courses, students learn about various contexts and aspects of sociotechnical systems, and how to identify, for a given AI system, the relevant parts of those systems. More importantly, students learn to determine (and describe) beneficial and problematic sociotechnical systems for a given piece of AI technology. That is, in addition to learning to describe the sociotechnical systems around an AI system, they also learn to identify opportunities to shape the sociotechnical system itself so that the AI acts in more beneficial ways. Anecdotally, students are significantly more skeptical about AI hyping by the end of a course, partly because they come to understand that claims about “the technology” are always relatively content-less.
Another example is public engagement and outreach conducted by one co-author, particularly through a public speaker series that brings together various experts on a distinct topic that intersects with AI. Here, the topical focus serves as a frame (e.g., “Agriculture,” “Security,” or “Games”), and the expert guests are asked to prepare short provocations that speak, from their points of view, to the connection between AI and the topical focus. Inevitably, context and existing sociotechnical systems are brought to the fore: for example, the ways in which the agriculture industry is already data-driven, but also the ways in which agricultural practices sometimes clash with AI-based machinery and processes. Often, both audiences and guests walk away with more questions than they had before, indicating that an intervention has been made to push beyond AI hyping and towards discourse.
Finally, the historical entanglement of technology with existing social structures of politics and ideology has been the focus of work by two of the co-authors with an organization that takes young professionals to Holocaust sites in Germany and Poland. The goal of these study trips is to help participants grapple with the complicity of professionals in that genocide, and with how these acts of complicity were permissible under certain framings of “ethics.” The hope is that this understanding can help participants start to recognize the ethical challenges that arise in their own professional lives. Understanding how ethics, design, and technology were part of a single dynamic in this way is thus a fruitful way of engaging the genealogies and contexts of technology and, importantly, technology innovation and progress.
3.2 Goal 2: recognizing the open-endedness of AI development
A second approach to addressing AI hyping by way of shifting AI pedagogies is to help people realize that the creation of a sociotechnical AI system involves many choices that depend on ethical, societal, psychological, legal, or other types of factors.
As a simple example, for almost every possible use, there are multiple measures and multiple loss functions for which one could optimize a machine learning system. Student success could be understood as average grades, likelihood of enrolling in the next term, expected salary upon graduation, self-reported satisfaction, or many other targets. And even given a target, one might aim to minimize expected error, minimize variance of the error, maximize generalization behavior, or many other goals. The key point is that these choices are not determined by technical considerations, but rather by the needs, interests, politics, and values of the developers and other stakeholders who are allowed to partake in the AI design process [50]. More generally, one can aim to help people understand that many, perhaps even most, of the choices in AI development involve non-technical factors and goals.
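The sketch below makes this concrete. It is our own hypothetical construction (the feature names, targets, and data are all invented), showing how the “same” student-success model changes with two upstream value choices: which quantity counts as “success,” and which loss is minimized. Neither choice is settled by the mathematics.

```python
# Hypothetical illustration: one dataset, several value-laden modeling
# choices. Features, targets, and data are invented.
import numpy as np
from sklearn.linear_model import (LinearRegression, LogisticRegression,
                                  QuantileRegressor)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # e.g., attendance, credits earned, aid status

# Choice 1: what counts as "success"? Two of many possible targets.
gpa = X @ np.array([0.5, 0.2, 0.1]) + rng.normal(scale=0.3, size=200)
retained = (X @ np.array([0.1, 0.6, 0.3]) + rng.normal(size=200)) > 0

# Choice 2: which loss? Squared error punishes large mistakes heavily;
# median (absolute-error) regression treats all residuals proportionally.
mean_model = LinearRegression().fit(X, gpa)
median_model = QuantileRegressor(quantile=0.5, alpha=0.0).fit(X, gpa)

# Choosing "retention" as the target changes the problem type entirely.
retention_model = LogisticRegression().fit(X, retained)

# Same data, different systems: the differences below come from value
# choices about targets and losses, not from the math itself.
print(mean_model.coef_)
print(median_model.coef_)
print(retention_model.coef_)
```

In a deployed advising system, each of these variants would flag different students for intervention, which is precisely why the choice among them is a matter for stakeholders, not only for developers.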
The practice of AI hyping frequently depends on people believing in the near-inevitability of the technology. If the AI benefits or harms might or might not happen depending on choices throughout development, then the demands for our attention are much less compelling. One of the co-authors explicitly emphasizes, in classes, capstone projects, and a two-week summer research experience for undergraduates, the importance of non-technical choices. For example, students in all of these contexts are required to first identify all of the choices that must be made across the lifecycle of a data science project, and then explain, for each choice, the technical and non-technical constraints on that choice. This approach has been adopted by some of the other capstone project mentors at the co-author’s institution, and small-N comparisons of the resulting projects with those performed by groups that did not adopt this methodology suggest it is associated with more grounded, less hype-prone work.³
More generally, AI hyping is typically framed as though people, including developers, have little-to-no agency in what is created. The well-known protest amongst Google workers against Project Maven [51] is an exception that proves the rule; the protest was notable precisely because developers so rarely push back about non-technical aspects of a project. However, as anyone who has ever built an AI system is acutely aware, the creators’ choices have an enormous impact on the success or failure of the system in practice. Hence, one of the key aspects of AI hyping can be undermined if people come to recognize the active role played by researchers and developers.
In contrast to some common views of technological development, technologists (the researchers, developers, and engineers who construct AI systems) do not reveal the capabilities of, or the “animal spirits” motivating, technology (cf. [52]). Rather, they play an active role in constructing the goals, capacities, and specific integrations of technological systems. Technologists are also subject to the processes at the heart of hyping; their objectives are often set by particular instances of AI hyping, and are resourced accordingly. For example, in 2023, Google’s parent company Alphabet increased its R&D expense by $6 billion over the previous year in order to be at the forefront of generative AI [53].
When corporate strategists buy into the hype around a capability, e.g., that generative AI will soon find application in every business domain [54], they then reshape the active priorities of technologists working in their companies, perhaps shifting focus away from non-generative AI applications. They also reshape the work of non-technical workers in their companies, as compliance officers, legal counsel offices, user experience designers, and sales engineers shift their activities to understanding and enacting the hype claim (or, in the case of dystopian hype, forestalling that claim).
This tendency in industry can make AI hyping seem like an inevitability, or a self-fulfilling prophecy. But analyzing and understanding the social dimensions of technological development in industry, i.e., understanding the tech industry as a sociotechnical system in its own right, is crucial for popping the AI hype bubble. This approach also has a pedagogical dimension within the AI development process. Teaching technologists, students who will soon be technologists, and the public about how corporate priorities are set, translated into objectives and key results, and eventually shaped into concrete systems that people interact with provides necessary background knowledge for discerning the limits of hype. And incorporating practices that embed a consideration of the social impacts of an AI product or system into the development process, for example impact assessments [55, 56] or consequence scanning [57], brings a consideration of the social to the already sociotechnical systems developers produce. Hyping the capabilities of AI systems becomes more difficult, and less hyperbolic, in the face of the practicalities confronting companies, which must make concrete investments, like engineers’ labor time or the cloud computing hours needed to train an AI system, and must weigh how those investments might be allocated otherwise. The relationship between AI hyping and how people might ultimately be affected by AI systems thus depends on decisions made in corporate boardrooms, not on the inevitable unfolding of technological development in any of the directions AI hyping would suggest.
3.3 Goal 3: pedagogically engaging policymakers
AI hyping motivates policymakers and others to provide resources, take steps towards or away from AI governance, and otherwise impact the ways that organizations and governments respond to AI. That means that policymakers are involved in AI hyping too, and that they can potentially be motivated towards the hypers’ desired ends.
At the same time, policy typically does not focus on the individual case, but rather on the more general type: we do not regulate a specific hiring algorithm, but rather provide regulations for all hiring algorithms [58, 59]. Hence, the previous approaches may not be appropriate, as they emphasized the context- and use-specificity of an AI system, and thereby focused on the individual. Different approaches are required to help policymakers be resistant to AI hyping. These approaches must provide sufficient background knowledge, but also be attuned to the fact that policymakers need to act at a meta level. That can mean that AI pedagogy interactions with policymakers may most meaningfully occur on a case or issue basis (such as the question of AI’s impact on labor law).
Here, working toward the two pedagogical goals that we have outlined above (understanding AI as part of already existing sociotechnical systems and contexts, and recognizing the open-endedness of AI development) can have positive knock-on effects for AI policy and governance. Taking on a sociotechnical perspective in research and in the classroom can equip academic experts with the arguments and case studies they need to help policymakers understand both the technical intricacies of AI systems and their sociotechnical histories and contexts. In advisory roles, for example on advisory committees or in testimonies, academics can engage directly (as the co-authors have) with questions and concerns that policymakers may have. We have also participated in the production of policy briefings and directly engaged with policymakers as “trusted experts.” Related efforts include becoming trusted partners in peer learning environments.⁴ In other words, there are many opportunities to bring a sociotechnical anti-hype pedagogy to policymakers, though the direct impacts of those efforts are, and will continue to be, exceedingly difficult to measure given the complex processes that result in policies, laws, and regulations.
4 Conclusion
In this commentary, we have tackled the common assumption that AI hype is a social fact and suggested that, instead, it is best understood as a shared practice involving many stakeholders.
We have outlined how this shared practice continually divorces the technical from the social in AI, and how this traps us in a loop of polarized narratives that prevents a more meaningful dialogue on AI and its impacts. We have suggested flipping that script and reorienting AI discussions around the sociotechnical, stipulating that AI hyping and accounts of AI as deeply sociotechnical are mutually exclusive.
Building on that, we have proposed that AI education and new ways of framing AI pedagogy can help mitigate the harmful effects of AI hyping. Specifically, we have outlined three goals that new pedagogical efforts should strive for: understanding AI as part of already existing sociotechnical systems and contexts, recognizing the open-endedness of AI development, and engaging policymakers.
We hope that taking these steps, especially among educators, will help enable people to be less implicated in AI hyping and facilitate a more productive and equitable dialogue on the ways that AI systems impact people’s lives both now and in the near future.
We want to underline that there are already many efforts underway across academia, industry, government, and independent research institutes to help technologists and technology students (whether in computer science, data science, or other AI-related fields) grapple with the sociotechnical nature of AI, with useful outputs for this purpose (e.g., [61,62,63]). There is plenty of attention and investment in STEM fields with regard to “data ethics,” “technology ethics,” or “public interest technology.” However, non-STEM fields and disciplines are often left behind, even though they can be more intimately familiar with the technology’s deeply social nature. Our hope is that addressing AI hyping, and complicity with AI hyping, will in part result in renewed attention and investment in the social sciences and humanities. Taking the sociotechnical seriously in AI, and using that as a way to address AI hyping, cannot come at the cost of bleeding out historically non-technical forms of education and expertise. That approach would itself constitute complicity with AI hyping and with the divorcing of the social from the technical. And it would mean that AI hyping was still out there for us to tackle.
Notes
1. Philosophers of science may note close similarities with the Quine-Duhem problem.
2. More recently, scholars have found that red-teaming is limited in mitigating real-world harms and in holistically assessing an AI system [46].
3. We emphasize that this has not been a controlled study, and there are many differences between groups and mentors. Although there is a small-N association, we obviously do not currently have evidence for a causal connection.
4. A good example is the Public Technology Leadership Collaborative (PTLC), a new peer learning collective of scholars, researchers, and government leaders committed to addressing the social and cultural implications of data and technology [60].
References
Seifert, F., Fautz, C.: Hype after hype: From Bio to Nano to AI. Nanoethics. 15, 143–148 (2021). https://doi.org/10.1007/s11569-021-00399-3
Britannica Dictionary: Hype Definition & Meaning, https://www.britannica.com/dictionary/hype
Martin, M.: People disagree about the risks and benefits of artificial intelligence, (2023). https://www.npr.org/2023/11/21/1214341015/people-disagree-about-the-risks-and-benefits-of-artificial-intelligence
Mok, A., Zinkula, J.: ChatGPT may be coming for our jobs. Here are the 10 roles that AI is most likely to replace, https://www.businessinsider.com/chatgpt-jobs-at-risk-replacement-artificial-intelligence-ai-labor-trends-2023-02
Roose, K.: A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn, (2023). https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html
Longoni, C., Morewedge, C.K.: AI Can Outperform Doctors. So Why Don’t Patients Trust It? (2019). https://hbr.org/2019/10/ai-can-outperform-doctors-so-why-dont-patients-trust-it
Cave, S., Craig, C., Dihal, K., Dillon, S., Montgomery, J., Singler, B., Taylor, L.: Portrayals and Perceptions of AI and why they Matter. Apollo - University of Cambridge Repository (2018)
Cave, S., Dihal, K., Dillon, S. (eds.): AI Narratives: A History of Imaginative Thinking about Intelligent Machines. Oxford University Press, New York (2020)
Mahdawi, A.: Will AI free us from drudgery – or leave us jobless and hungry? (2023). https://www.theguardian.com/commentisfree/2023/may/30/will-ai-free-us-from-drudgery-or-leave-us-jobless-and-hungry
LaGrandeur, K.: The consequences of AI hype. AI Ethics. (2023). https://doi.org/10.1007/s43681-023-00352-y
Lowrey, A.: Before AI Takes Over, Make Plans to Give Everyone Money, https://www.theatlantic.com/ideas/archive/2023/05/ai-job-losses-policy-support-universal-basic-income/674071/
Walch, K.: Will There Be A ‘Kill Switch’ For AI? https://www.forbes.com/sites/cognitiveworld/2020/03/05/will-there-be-a-kill-switch-for-ai/
Bassett, C., Roberts, B.: Automation anxiety: A critical history - the apparently odd recurrence of debates about computation, AI and labour. In: Handbook of Critical Studies of Artificial Intelligence, pp. 79–93. Edward Elgar Publishing (2023)
Knibbs, K., Goode, L., Johnson, K.: Prominent women in tech say they don’t want to join OpenAI’s All-Male Board, (2023). https://www.wired.com/story/women-in-tech-openai-board/
OpenAI: GPT-4 Technical Report. arXiv preprint arXiv:2303.08774 (2024). http://arxiv.org/abs/2303.08774
Hubert, K.F., Awa, K.N., Zabelina, D.L.: The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks. Sci. Rep. 14, 3440 (2024). https://doi.org/10.1038/s41598-024-53303-w
Gupta, A., Corrado, G.: How 3 healthcare organizations are using generative AI, (2023). https://blog.google/technology/health/cloud-next-generative-ai-health/
Metz, C., Gardi, B.: How the A.I. That Drives ChatGPT Will Move Into the Physical World, (2024). https://www.nytimes.com/2024/03/11/technology/ai-robots-technology.html
Briggs, J.: Generative AI Could Raise Global GDP by 7%, https://www.goldmansachs.com/intelligence/pages/generative-ai-could-raise-global-gdp-by-7-percent.html
Huang, S., Grady, P.: GPT-3: Generative AI: A Creative New World, https://www.sequoiacap.com/article/generative-ai-a-creative-new-world/
Fleming, F.: CEOs say generative AI will result in job cuts in 2024, https://arstechnica.com/ai/2024/01/ceos-say-generative-ai-will-result-in-job-cuts-in-2024/
Milmo, D., Hern, A.: Google chief admits ‘biased’ AI tool’s photo diversity offended users, (2024). https://www.theguardian.com/technology/2024/feb/28/google-chief-ai-tools-photo-diversity-offended-users
Merken, S.: Another NY lawyer faces discipline after AI chatbot invented case citation, (2024). https://www.reuters.com/legal/transactional/another-ny-lawyer-faces-discipline-after-ai-chatbot-invented-case-citation-2024-01-30/
Associated Press: National Security Agency is starting an artificial intelligence security center, https://apnews.com/article/nsa-artificial-intelligence-security-deepfakes-f9b19dd64890884cc2b0700ddf66e666
Alenichev, A., Kingori, P., Grietens, K.P.: Reflections before the storm: The AI reproduction of biased imagery in global health visuals. Lancet Global Health. 11, e1496–e1498 (2023). https://doi.org/10.1016/S2214-109X(23)00329-7
Yin, L., Alba, D., Nicoletti, L.: OpenAI GPT Sorts Resume Names With Racial Bias, Test Shows, https://www.bloomberg.com/graphics/2024-openai-gpt-hiring-racial-discrimination/?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTcwOTg1NjE0OCwiZXhwIjoxNzEwNDYwOTQ4LCJhcnRpY2xlSWQiOiJTQTA1Q1FUMEFGQjQwMCIsImJjb25uZWN0SWQiOiI2NDU1MEM3NkRFMkU0QkM1OEI0OTI5QjBDQkIzRDlCRCJ9.MdkSGC3HMwwUYtltWq6WxWg3vULNeCTJcjacB-DNi8k
Dahl, M., Magesh, V., Suzgun, M., Ho, D.E.: Hallucinating Law: Legal Mistakes with Large Language Models are Pervasive, https://hai.stanford.edu/news/hallucinating-law-legal-mistakes-large-language-models-are-pervasive
National Academies of Sciences, Engineering, and Medicine: Fostering Responsible Computing Research: Foundations and Practices. The National Academies Press, Washington, D.C. (2022)
Elish, M.C., Mateescu, A.: AI in Context: The Labor of Integrating New Technologies. Data & Society Research Institute (2019)
Rothman, J.: Why the Godfather of A.I. Fears What He’s Built, (2023). https://www.newyorker.com/magazine/2023/11/20/geoffrey-hinton-profile-ai
Broussard, M.: Artificial Unintelligence: How Computers Misunderstand the World. MIT Press, Cambridge, MA (2018)
Etzioni, O.: Deep Learning Isn’t a Dangerous Magic Genie. It’s Just Math, (2016). https://www.wired.com/2016/06/deep-learning-isnt-dangerous-magic-genie-just-math/
Noreika, A.: Is AI Just Math? How Much Math You Need To Master AI? (2023). https://www.technology.org/how-and-why/how-much-math-you-need-to-master-ai/
Sloane, M., Moss, E., Chowdhury, R.: A Silicon Valley love triangle: Hiring algorithms, pseudo-science, and the quest for auditability. Patterns. 3, 100425 (2022). https://doi.org/10.1016/j.patter.2021.100425
Toh, M.: ChatGPT and AI automation: 300 million jobs could be affected globally, says Goldman Sachs, https://www.cnn.com/2023/03/29/tech/chatgpt-ai-automation-jobs-impact-intl-hnk/index.html
Hacking, I.: The Social Construction of what? Harvard University Press, Cambridge, MA (1999)
Forsythe, D.E.: Engineering knowledge: The construction of knowledge in Artificial Intelligence. Soc. Stud. Sci. 23, 445–477 (1993)
Bijker, W.E., Hughes, T.P., Pinch, T. (eds.): The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. MIT Press, Cambridge, MA (1987)
Haraway, D.J.: Situated knowledges: The Science question in Feminism and the privilege of partial perspective. Feminist Stud. 14, 575 (1988). https://doi.org/10.2307/3178066
Chang, V., Newman, R., Walters, R.J., Wills, G.B.: Review of economic bubbles. Int. J. Inf. Manag. 36, 497–506 (2016). https://doi.org/10.1016/j.ijinfomgt.2016.02.007
Heikkilä, M., Heaven, W.D.: Yann LeCun has a bold new vision for the future of AI, https://www.technologyreview.com/2022/06/24/1054817/yann-lecun-bold-new-vision-future-ai-deep-learning-meta/
Lenharo, M.: Google AI has better bedside manner than human doctors — and makes better diagnoses. Nature. 625, 643–644 (2024). https://doi.org/10.1038/d41586-024-00099-4
Google Health: Imaging and Diagnostics, https://health.google/health-research/imaging-and-diagnostics/
Kelly, J.: AI Recruiting Will Be A Game Changer, https://www.forbes.com/sites/jackkelly/2023/11/21/ai-recruiting-will-be-a-game-changer/
The White House: Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
Friedler, S., Singh, R., Blili-Hamelin, B., Metcalf, J., Chen, B.J.: AI Red-Teaming Is Not a One-Stop Solution to AI Harms: Recommendations for Using Red-Teaming for AI Accountability. Data & Society Research Institute (2023)
Coffey, L.: Data Science Major Takes Off, https://www.insidehighered.com/news/tech-innovation/teaching-learning/2024/01/25/data-science-major-takes-across-college-campuses
Munk, C.W.: Colleges are touting AI degree programs. Here’s how to decide if it’s worth the cost, https://www.cnbc.com/2024/03/02/college-ai-degree-programs-are-booming-will-jobs-exist-for-graduates.html
Bijker, W.E., Law, J. (eds.): Shaping Technology/Building Society: Studies in Sociotechnical Change. MIT Press, Cambridge, MA (1992)
O’Neil, C.: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown, New York (2016)
Shane, S., Wakabayashi, D.: ‘The Business of War’: Google Employees Protest Work for the Pentagon - The New York Times, https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html
Heidegger, M.: The Question Concerning Technology, and Other Essays. Garland Pub, New York (1977)
George, K.: 4 Key Takeaways from Google-Parent Alphabet’s Earnings Call, https://www.investopedia.com/4-key-takeaways-from-google-parent-alphabet-earnings-call-8553147
Eliot, L.: Generative AI ChatGPT Is Going To Be Everywhere Once The API Portal Gets Soon Opened, Stupefying AI Ethics And AI Law, https://www.forbes.com/sites/lanceeliot/2023/01/22/generative-ai-chatgpt-is-going-to-be-everywhere-once-the-api-portal-gets-soon-opened-stupefying-ai-ethics-and-ai-law/
Ada Lovelace Institute: Algorithmic Impact Assessment: A case Study in Healthcare. Ada Lovelace Institute (2022)
Moss, E., Watkins, E.A., Singh, R., Elish, M.C., Metcalf, J.: Assembling Accountability: Algorithmic Impact Assessment for the Public Interest. Data & Society Research Institute, New York (2021)
Brown, S.: Consequence Scanning: An Agile Event for Responsible Innovators. doteveryone, London (2019)
Zetoony, D.A., Boiangin, T., Goldberg, J.F.: What is the Local Law 144 rule in NYC, https://www.natlawreview.com/article/nyc-s-local-law-144-and-final-regulations-regulation-ai-driven-hiring-tools-united
European Commission: AI Act, https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
Data & Society: Public Technology Leadership Collaborative, https://datasociety.net/public-technology-leadership-collaborative/
Ehsan, U., Saha, K., De Choudhury, M., Riedl, M.O.: Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI. Proc. ACM Hum.-Comput. Interact. 7, 34:1–34:32 (2023). https://doi.org/10.1145/3579467
Nyczepir, D.: NIST takes socio-technical approach to AI bias en route to management standard, (2022). https://fedscoop.com/nist-socio-technical-ai-bias/
Krakowski, A., Greenwald, E., Hurt, T., Nonnecke, B., Cannady, M.: Authentic integration of ethics and AI through sociotechnical, problem-based learning. Proc. AAAI Conf. Artif. Intell. 36, 12774–12782 (2022). https://doi.org/10.1609/aaai.v36i11.21556
Ethics declarations
Competing interests
The authors declare that no competing interests exist.
Topical Collection on The Ethical Implications of AI Hype: Examining the overinflation and misrepresentation of AI capabilities and performance.