Impact-Driven Research, Innovation and Moonshots

Impact-Driven Research, Innovation and Moonshots (I-DRIM) is a global and multidisciplinary research team harnessing AI’s potential to advance science, drive product innovation, and address societal challenges, all with the aim to positively impact billions of lives.

About the team

Our teams focus on artificial intelligence (AI) and machine learning (ML) research to drive innovation and advance science. We aim to help communities and governments mitigate, adapt to, and build resilience to the intensifying climate crisis through a range of climate and sustainability efforts. Our health initiatives help catalyze the adoption of human-centered AI to make healthcare more accurate, accessible, and affordable. Our teams are also pioneering the development of AI technologies for education, personalizing the learning journey for students and streamlining tasks for educators.

The development of AI is at a crucial juncture, and the progress we make now will profoundly shape our future. We believe that together we must commit to harnessing AI for good, leveraging its potential responsibly as we aim to address real-world problems to improve lives. We're proud to advance science and drive innovation, guided by our AI principles.

Team focus summaries

Large language model factuality

Leading foundational and applied research on the factuality of large language models to enable reliable LLMs for real-world applications.

Large language model efficiency

Making generative AI faster using advanced techniques such as Speculative Decoding.

Multi-modal generative AI

Rethinking AI from first principles by working on new model architectures and training schemes for LLMs and for image and video generation models.

AI for health

Advancing research to catalyze the adoption of human-centered AI in healthcare with a focus on communities, consumers, and caregivers, and driven by the belief that care can be made more accurate, equitable, accessible and affordable.

AI for climate & sustainability

Advancing AI research to help address climate mitigation (e.g., reducing the impact of transportation on global warming) and climate adaptation (e.g., flood forecasting, wildfire prediction, food security).

AI for education

Pioneering education-focused AI technologies aimed at enhancing both the students’ learning journey and the teachers’ experience.

Market algorithms

Advancing efforts to develop knowledge and tools to inform the design and analysis of complex ecosystems.

Foundations

Advancing the state of the art on foundational questions in ML, natural language processing (NLP), and differential privacy.

Featured publications

Large Language Models Encode Clinical Knowledge
Karan Singhal
Sara Mahdavi
Jason Wei
Hyung Won Chung
Nathan Scales
Ajay Tanwani
Heather Cole-Lewis
Perry Payne
Martin Seneviratne
Paul Gamble
Abubakr Abdelrazig Hassan Babiker
Nathanael Schaerli
Aakanksha Chowdhery
Philip Mansfield
Dina Demner-Fushman
Katherine Chou
Juraj Gottweis
Nenad Tomašev
Alvin Rajkomar
Joelle Barral
Nature (2023)
Large language models (LLMs) have demonstrated impressive capabilities, but the bar for clinical applications is high. Attempts to assess the clinical knowledge of models typically rely on automated evaluations based on limited benchmarks. Here, to address these limitations, we present MultiMedQA, a benchmark combining six existing medical question answering datasets spanning professional medicine, research and consumer queries and a new dataset of medical questions searched online, HealthSearchQA. We propose a human evaluation framework for model answers along multiple axes including factuality, comprehension, reasoning, possible harm and bias. In addition, we evaluate Pathways Language Model (PaLM, a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA and Measuring Massive Multitask Language Understanding (MMLU) clinical topics), including 67.6% accuracy on MedQA (US Medical Licensing Exam-style questions), surpassing the prior state of the art by more than 17%. However, human evaluation reveals key gaps. To resolve this, we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, knowledge recall and reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal limitations of today’s models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLMs for clinical applications.
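
The instruction prompt tuning mentioned above builds on soft prompt tuning: the LLM stays frozen and only a small set of prompt vectors prepended to the input embeddings is trained. The following is a minimal sketch of that general mechanism in PyTorch; the tiny stand-in "model", dimensions, and names are illustrative assumptions, not Med-PaLM's implementation.

```python
# Minimal soft-prompt-tuning sketch: all model weights are frozen and only
# PROMPT_LEN learnable vectors prepended to the input embeddings get gradients.
import torch
import torch.nn as nn

VOCAB, DIM, PROMPT_LEN, BATCH, SEQ = 100, 32, 8, 4, 16

embed = nn.Embedding(VOCAB, DIM)  # frozen input embeddings
body = nn.Linear(DIM, VOCAB)      # toy stand-in for the frozen LLM body
for p in list(embed.parameters()) + list(body.parameters()):
    p.requires_grad = False

# The only trainable parameters: the soft prompt.
soft_prompt = nn.Parameter(torch.randn(PROMPT_LEN, DIM) * 0.02)
opt = torch.optim.Adam([soft_prompt], lr=1e-2)

tokens = torch.randint(0, VOCAB, (BATCH, SEQ))                # toy inputs
targets = torch.randint(0, VOCAB, (BATCH, SEQ + PROMPT_LEN))  # toy labels

for step in range(3):
    x = embed(tokens)                                      # (B, SEQ, DIM)
    x = torch.cat([soft_prompt.expand(BATCH, -1, -1), x], dim=1)
    logits = body(x)                                       # (B, SEQ+P, VOCAB)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, VOCAB), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()  # updates the soft prompt only; the "model" never changes
    print(f"step {step}: loss {loss.item():.3f}")
```
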
Global prediction of extreme floods in ungauged watersheds
Nature (2024)
Floods are one of the most common natural disasters, with a disproportionate impact in developing countries that often lack dense streamflow gauge networks. Accurate and timely warnings are critical for mitigating flood risks, but hydrological simulation models typically must be calibrated to long data records in each watershed. Here we show that AI-based forecasting achieves reliability in predicting extreme riverine events in ungauged watersheds at up to a 5-day lead time that is similar to or better than the reliability of nowcasts (0-day lead time) from a current state-of-the-art global modeling system (the Copernicus Emergency Management Service Global Flood Awareness System). Additionally, we achieve accuracies over 5-year return period events that are similar to or better than current accuracies over 1-year return period events. This means that AI can provide flood warnings earlier and over larger and more impactful events in ungauged basins. The model developed in this paper was incorporated into an operational early warning system that produces publicly available (free and open) forecasts in real time in over 80 countries. This work highlights a need for increasing the availability of hydrological data to continue to improve global access to reliable flood warnings.
Flood forecasting with machine learning models in an operational framework
Asher Metzger
Chen Barshai
Dana Weitzner
Frederik Kratzert
Gregory Begelman
Guy Shalev
Hila Noga
Moriah Royz
Niv Giladi
Ronnie Maor
Sella Nevo
Yotam Gigi
Zvika Ben-Haim
HESS (2022)
Google’s operational flood forecasting system was developed to provide accurate real-time flood warnings to agencies and the public, with a focus on riverine floods in large, gauged rivers. It became operational in 2018 and has since expanded geographically. This forecasting system consists of four subsystems: data validation, stage forecasting, inundation modeling, and alert distribution. Machine learning is used for two of the subsystems. Stage forecasting is modeled with long short-term memory (LSTM) networks and linear models. Flood inundation is computed with the Thresholding and Manifold models, where the former computes inundation extent and the latter computes both inundation extent and depth. The Manifold model, presented here for the first time, provides a machine-learning alternative to hydraulic modeling of flood inundation. When evaluated on historical data, all models achieve sufficiently high performance metrics for operational use. The LSTM showed higher skill than the linear model, while the Thresholding and Manifold models achieved similar performance metrics for modeling inundation extent. During the 2021 monsoon season, the flood warning system was operational in India and Bangladesh, covering flood-prone regions around rivers with a total area of 287,000 km2, home to more than 350M people. More than 100M flood alerts were sent to affected populations, to relevant authorities, and to emergency organizations. Current and future work on the system includes extending coverage to additional flood-prone locations, as well as improving modeling capabilities and accuracy.
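
As a rough illustration of the Thresholding model's idea, the sketch below marks a grid cell as inundated when the forecast river stage exceeds a per-cell threshold. The threshold grid here is synthetic; in the real system such thresholds would be derived from historical stage and inundation observations.

```python
# Toy thresholding-style inundation extent: a cell floods when the forecast
# stage exceeds its per-cell threshold. The threshold grid is synthetic.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic per-cell thresholds (meters) on a 5x8 grid, sorted so cells
# nearer the channel (left columns) flood at lower stages.
thresholds = np.sort(rng.uniform(1.0, 6.0, size=(5, 8)), axis=1)


def inundation_extent(forecast_stage_m, thresholds):
    """Boolean map: True where the forecast stage exceeds the cell threshold."""
    return forecast_stage_m >= thresholds


extent = inundation_extent(forecast_stage_m=3.5, thresholds=thresholds)
print(f"Inundated cells: {extent.sum()} of {extent.size}")
for row in np.where(extent, "~", "."):
    print("".join(row))
```
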
A Neural Encoder for Earthquake Rate Forecasting
Oleg Zlydenko
Brendan Meade
Alexandra Sharon Molchanov
Sella Nevo
Yohai Bar-Sinai
Scientific Reports (2023)
Forecasting the timing of earthquakes is a long-standing challenge. Moreover, it is still debated how to formulate this problem in a useful manner, or how to compare the predictive power of different models. Here, we develop a versatile neural encoder of earthquake catalogs and apply it to the fundamental problem of earthquake rate prediction, in the spatio-temporal point process framework. The epidemic type aftershock sequence (ETAS) model effectively learns a small number of parameters to constrain assumed functional forms for the space and time relationships of earthquake sequences (e.g., the Omori-Utsu law). Here we introduce learned spatial and temporal embeddings for point process earthquake forecast models that capture complex correlation structures. We demonstrate the generality of this neural representation compared with the ETAS model using train-test data splits, and show how it enables the incorporation of additional geophysical information. In rate prediction tasks, the generalized model shows a greater than 4% improvement in information gain per earthquake, and simultaneously learns anisotropic spatial structures analogous to fault traces. The trained network can also be used to perform short-term prediction tasks, showing similar improvement while providing a 1,000-fold reduction in run-time.
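
For context, the ETAS baseline named above models the conditional earthquake rate as a constant background intensity plus Omori-Utsu aftershock kernels triggered by past events. Below is a toy version of that intensity function, with illustrative parameters that are not fitted to any catalog.

```python
# Toy temporal ETAS conditional intensity: background rate plus Omori-Utsu
# aftershock kernels excited by past events. Parameter values are illustrative.
import numpy as np

MU, K, ALPHA, C, P, M0 = 0.1, 0.05, 1.0, 0.01, 1.2, 3.0


def etas_rate(t, past_times, past_mags):
    """lambda(t) = MU + sum_i K * exp(ALPHA * (m_i - M0)) / (t - t_i + C)**P"""
    rate = MU
    for t_i, m_i in zip(past_times, past_mags):
        if t_i < t:
            # Productivity grows with magnitude; influence decays with time.
            rate += K * np.exp(ALPHA * (m_i - M0)) / (t - t_i + C) ** P
    return rate


# The rate shortly after a magnitude-5 event is dominated by its aftershocks.
print(etas_rate(t=10.0, past_times=[1.0, 9.5], past_mags=[5.0, 4.0]))
```
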
TRUE: Re-evaluating Factual Consistency Evaluation
Or Honovich
Hagai Taitelbaum
Vered Cohen
Thomas Scialom
NAACL (2022)
Grounded text generation systems often generate text that contains factual inconsistencies, hindering their real-world applicability. Automatic factual consistency evaluation may help alleviate this limitation by accelerating evaluation cycles, filtering inconsistent outputs and augmenting training data. While attracting increasing attention, such evaluation metrics are usually developed and evaluated in silo for a single task or dataset, slowing their adoption. Moreover, previous meta-evaluation protocols focused on system-level correlations with human annotations, which leave the example-level accuracy of such metrics unclear. In this work, we introduce TRUE: a comprehensive study of factual consistency metrics on a standardized collection of existing texts from diverse tasks, manually annotated for factual consistency. Our standardization enables an example-level meta-evaluation protocol that is more actionable and interpretable than previously reported correlations, yielding clearer quality measures. Across diverse state-of-the-art metrics and 11 datasets we find that large-scale NLI and question generation-and-answering-based approaches achieve strong and complementary results. We recommend those methods as a starting point for model and metric developers, and hope TRUE will foster progress towards even better methods.
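
A minimal sketch of the example-level meta-evaluation protocol described above: a metric (here a toy word-overlap proxy standing in for a real NLI model) scores each (grounding, output) pair, and the scores are ranked against binary human consistency labels with ROC AUC. All names and data below are illustrative.

```python
# Example-level meta-evaluation: score each pair with the metric, then ask
# how well the scores rank consistent outputs above inconsistent ones.
from sklearn.metrics import roc_auc_score


def entailment_score(premise: str, hypothesis: str) -> float:
    """Toy word-overlap proxy for a real NLI model's entailment probability."""
    hyp = set(hypothesis.split())
    return len(set(premise.split()) & hyp) / max(len(hyp), 1)


examples = [  # (grounding text, system output, human label: 1 = consistent)
    ("The cat sat on the mat.", "The cat sat on the mat.", 1),
    ("The cat sat on the mat.", "The dog ate the homework.", 0),
    ("Paris is the capital of France.", "Paris is in France.", 1),
    ("Paris is the capital of France.", "Paris is the capital of Spain.", 0),
]

scores = [entailment_score(g, o) for g, o, _ in examples]
labels = [y for _, _, y in examples]

# ROC AUC over individual examples, not a system-level correlation.
print(f"example-level ROC AUC: {roc_auc_score(labels, scores):.2f}")
```
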
Fast Inference from Transformers via Speculative Decoding
ICML (2023)
Inference from large autoregressive models like Transformers is slow: decoding K tokens takes K serial runs of the model. In this work we introduce speculative decoding, an algorithm to sample from autoregressive models faster, without any changes to the outputs, by computing several tokens in parallel. At the heart of our approach lie the observations that (1) hard language-modeling tasks often include easier subtasks that can be approximated well by more efficient models, and (2) using speculative execution and a novel sampling method, we can make exact decoding from the large models faster by running them in parallel on the outputs of the approximation models, potentially generating several tokens concurrently, and without changing the distribution. Our method can accelerate existing off-the-shelf models without retraining or architecture changes. We demonstrate it on T5-XXL and show a 2X-3X acceleration compared to the standard T5X implementation, with identical outputs.
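
A self-contained sketch of the draft-then-verify loop described above, with toy distributions standing in for the draft and target models. Accepted tokens follow the target distribution exactly because rejected drafts are resampled from the residual max(0, p - q). Everything below is illustrative, not the paper's code.

```python
# Speculative decoding sketch: draft k tokens with a cheap model, then
# accept/reject them against the target model without changing its distribution.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 8  # toy vocabulary size


def draft_dist(prefix):
    """Toy stand-in for the small, fast approximation model."""
    logits = np.cos(np.arange(VOCAB) + len(prefix))
    e = np.exp(logits - logits.max())
    return e / e.sum()


def target_dist(prefix):
    """Toy stand-in for the large model (one call scores one position)."""
    logits = np.sin(np.arange(VOCAB) + 0.7 * len(prefix))
    e = np.exp(logits - logits.max())
    return e / e.sum()


def speculative_step(prefix, k=4):
    """Draft k tokens, then verify; in practice the k+1 target distributions
    come from a single parallel forward pass of the large model."""
    # 1) Autoregressively draft k candidate tokens with the small model.
    drafted, qs, ctx = [], [], list(prefix)
    for _ in range(k):
        q = draft_dist(ctx)
        tok = rng.choice(VOCAB, p=q)
        drafted.append(tok)
        qs.append(q)
        ctx.append(tok)

    # 2) Verify: accept token t with probability min(1, p(t)/q(t)).
    accepted, ctx = [], list(prefix)
    for tok, q in zip(drafted, qs):
        p = target_dist(ctx)
        if rng.random() < min(1.0, p[tok] / q[tok]):
            accepted.append(tok)
            ctx.append(tok)
        else:
            # 3) On rejection, resample from the residual max(0, p - q),
            # which keeps the output distribution identical to the target's.
            residual = np.maximum(p - q, 0.0)
            accepted.append(rng.choice(VOCAB, p=residual / residual.sum()))
            return accepted
    # 4) All k drafts accepted: sample one bonus token from the target.
    accepted.append(rng.choice(VOCAB, p=target_dist(ctx)))
    return accepted


print(speculative_step(prefix=[1, 2, 3]))
```
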
Q^2: Evaluating Factual Consistency in Knowledge-Grounded Dialogues via Question Generation and Question Answering
Or Honovich
Leshem Choshen
Ella Neeman
Omri Abend
EMNLP (2021)
Neural knowledge-grounded generative models for dialogue often produce content that is factually inconsistent with the knowledge they rely on, making them unreliable and limiting their applicability. Inspired by recent work on evaluating factual consistency in abstractive summarization, we propose an automatic evaluation metric for factual consistency in knowledge-grounded dialogue using automatic question generation and question answering. Our metric, denoted Q^2, compares answer spans using natural language inference, which enables better factual comparison than in previous token-based metrics. To foster proper evaluation, we curate a novel dataset of state-of-the-art dialogue system outputs for the Wizard-of-Wikipedia dataset, manually annotated for factual consistency. We perform a thorough meta-evaluation of Q^2 against other metrics using the new dataset and two others, where it shows higher correlation with human judgements.
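
Structurally, the Q^2 pipeline asks questions about the response, answers them from the grounding knowledge, and compares the two answer spans. The sketch below is a hypothetical skeleton of that loop; the three model calls are trivial stand-ins for trained QG, QA, and NLI models, and all names are illustrative.

```python
# Q^2-style scoring skeleton with placeholder "models".


def generate_qa_pairs(response):
    """Stand-in QG model: asks questions whose answers are informative
    spans of the system response."""
    return [("Where was the speaker born?", "in Paris")]


def answer_from_knowledge(question, knowledge):
    """Stand-in QA model: answers the same question using only the
    grounding text; returns None when it is unanswerable."""
    return "in Paris" if "Paris" in knowledge else None


def spans_consistent(response_span, knowledge_span):
    """Stand-in for the NLI comparison of answer spans, which is more
    robust than exact token matching."""
    return response_span.lower() == knowledge_span.lower()


def q2_score(response, knowledge):
    """Fraction of response-derived questions whose knowledge-grounded
    answers agree; unanswerable questions count as inconsistent."""
    pairs = generate_qa_pairs(response)
    if not pairs:
        return 0.0
    consistent = 0
    for question, span in pairs:
        knowledge_answer = answer_from_knowledge(question, knowledge)
        if knowledge_answer is not None and spans_consistent(span, knowledge_answer):
            consistent += 1
    return consistent / len(pairs)


print(q2_score("I was born in Paris.", "The speaker was born in Paris."))
```
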
Towards Generalist Biomedical AI
Danny Driess
Andrew Carroll
Chuck Lau
Ryutaro Tanno
Ira Ktena
Anil Palepu
Basil Mustafa
Aakanksha Chowdhery
Simon Kornblith
Philip Mansfield
Sushant Prakash
Renee Wong
Sunny Virmani
Sara Mahdavi
Bradley Green
Ewa Dominowska
Joelle Barral
Karan Singhal
Pete Florence
NEJM AI (2024)
BACKGROUND: Medicine is inherently multimodal, requiring the simultaneous interpretation and integration of insights between many data modalities spanning text, imaging, genomics, and more. Generalist biomedical artificial intelligence systems that flexibly encode, integrate, and interpret these data might better enable impactful applications ranging from scientific discovery to care delivery. METHODS: To catalyze development of these models, we curated MultiMedBench, a new multimodal biomedical benchmark. MultiMedBench encompasses 14 diverse tasks, such as medical question answering, mammography and dermatology image interpretation, radiology report generation and summarization, and genomic variant calling. We then introduced Med-PaLM Multimodal (Med-PaLM M), our proof of concept for a generalist biomedical AI system that flexibly encodes and interprets biomedical data including clinical language, imaging, and genomics with the same set of model weights. To further probe the capabilities and limitations of Med-PaLM M, we conducted a radiologist evaluation of model-generated (and human) chest x-ray reports. RESULTS: We observed encouraging performance across model scales. Med-PaLM M reached performance competitive with or exceeding the state of the art on all MultiMedBench tasks, often surpassing specialist models by a wide margin. In a side-by-side ranking on 246 retrospective chest x-rays, clinicians expressed a pairwise preference for Med-PaLM Multimodal reports over those produced by radiologists in up to 40.50% of cases, suggesting potential clinical utility. CONCLUSIONS: Although considerable work is needed to validate these models in real-world cases and understand if cross-modality generalization is possible, our results represent a milestone toward the development of generalist biomedical artificial intelligence systems.
ELIXR: Towards a general purpose X-ray artificial intelligence system through alignment of large language models and radiology vision encoders
Shawn Xu
Lin Yang
Timo Kohlberger
Martin Ma
Atilla Kiraly
Sahar Kazemzadeh
Zakkai Melamed
Jungyeon Park
Patricia MacWilliams
Chuck Lau
Christina Chen
Mozziyar Etemadi
Sreenivasa Raju Kalidindi
Kat Chou
Shravya Shetty
Daniel Golden
Rory Pilgrim
Krish Eswaran
arXiv (2023)
Our approach, which we call Embeddings for Language/Image-aligned X-Rays, or ELIXR, leverages a language-aligned image encoder combined or grafted onto a fixed LLM, PaLM 2, to perform a broad range of tasks. We train this lightweight adapter architecture using images paired with corresponding free-text radiology reports from the MIMIC-CXR dataset. ELIXR achieved state-of-the-art performance on zero-shot chest X-ray (CXR) classification (mean AUC of 0.850 across 13 findings), data-efficient CXR classification (mean AUCs of 0.893 and 0.898 across five findings (atelectasis, cardiomegaly, consolidation, pleural effusion, and pulmonary edema) for 1% (~2,200 images) and 10% (~22,000 images) training data), and semantic search (0.76 normalized discounted cumulative gain (NDCG) across nineteen queries, including perfect retrieval on twelve of them). Compared to existing data-efficient methods including supervised contrastive learning (SupCon), ELIXR required two orders of magnitude less data to reach similar performance. ELIXR also showed promise on CXR vision-language tasks, demonstrating overall accuracies of 58.7% and 62.5% on visual question answering and report quality assurance tasks, respectively. These results suggest that ELIXR is a robust and versatile approach to CXR AI.
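
As a rough illustration of how a language-aligned image encoder enables zero-shot classification, the sketch below embeds an image and per-finding text prompts in a shared space and ranks findings by cosine similarity. The encoders are random stand-ins and the prompt template is an assumption, not ELIXR's.

```python
# Zero-shot classification via a shared image/text embedding space.
import numpy as np

rng = np.random.default_rng(0)
DIM = 64  # shared embedding dimension


def embed_image(pixels):
    """Random stand-in for the language-aligned image encoder."""
    return rng.standard_normal(DIM)


def embed_text(prompt):
    """Random stand-in for the text-side prompt embedding."""
    return rng.standard_normal(DIM)


def zero_shot_scores(pixels, findings):
    """Score each candidate finding by cosine similarity between the image
    embedding and a text prompt describing that finding."""
    img = embed_image(pixels)
    img = img / np.linalg.norm(img)
    scores = {}
    for finding in findings:
        txt = embed_text(f"chest x-ray showing {finding}")  # assumed template
        txt = txt / np.linalg.norm(txt)
        scores[finding] = float(img @ txt)
    return scores


print(zero_shot_scores(pixels=None,
                       findings=["atelectasis", "cardiomegaly", "pleural effusion"]))
```
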
Shared computational principles for language processing in humans and deep language models
Ariel Goldstein
Zaid Zada
Eliav Buchnik
Amy Price
Bobbi Aubrey
Samuel A. Nastase
Harshvardhan Gazula
Gina Choe
Aditi Rao
Catherine Kim
Colton Casto
Lora Fanda
Werner Doyle
Daniel Friedman
Patricia Dugan
Lucia Melloni
Roi Reichart
Sasha Devore
Adeen Flinker
Liat Hasenfratz
Omer Levy
Kenneth A. Norman
Orrin Devinsky
Uri Hasson
Nature Neuroscience (2022)
Departing from traditional linguistic models, advances in deep learning have resulted in a new type of predictive (autoregressive) deep language models (DLMs). Using a self-supervised next-word prediction task, these models generate appropriate linguistic responses in a given context. In the current study, nine participants listened to a 30-min podcast while their brain responses were recorded using electrocorticography (ECoG). We provide empirical evidence that the human brain and autoregressive DLMs share three fundamental computational principles as they process the same natural narrative: (1) both are engaged in continuous next-word prediction before word onset; (2) both match their pre-onset predictions to the incoming word to calculate post-onset surprise; (3) both rely on contextual embeddings to represent words in natural contexts. Together, our findings suggest that autoregressive DLMs provide a new and biologically feasible computational framework for studying the neural basis of language.
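
The post-onset "surprise" in principle (2) corresponds to a word's surprisal under an autoregressive model. The sketch below computes per-word surprisal with an off-the-shelf GPT-2 via Hugging Face Transformers, as a generic illustration rather than the model used in the study.

```python
# Per-token surprisal (-log p of each token given its context) from an
# off-the-shelf autoregressive LM. Requires `pip install torch transformers`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok("The quick brown fox jumps over the lazy dog",
          return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits

# logits[:, i] predicts token i+1, so align positions and gather the
# log-probability the model assigned to each incoming token.
log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
next_ids = ids[:, 1:]
surprisal = -log_probs.gather(2, next_ids.unsqueeze(-1)).squeeze(-1)

for token, s in zip(tok.convert_ids_to_tokens(next_ids[0].tolist()), surprisal[0]):
    print(f"{token:>10s}  {s.item():6.2f} nats")
```
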
