People use varied language to express their causal understanding of the world. But how does that language map onto people's underlying representations, and how do people choose between competing ways to best describe what happened? In this paper we develop a model that integrates computational tools for causal judgment and pragmatic inference to address these questions. The model has three components: a causal inference component which computes counterfactual simulations that capture whether and how a candidate cause made a difference to the outcome, a literal semantics that maps the outcome of these counterfactual simulations onto different causal expressions (such as "caused", "enabled", "affected", or "made no difference"), and a pragmatics component that considers how informative each causal expression would be for figuring out what happened. We test our model in an experiment that asks participants to select which expression best describes what happened in video clips depicting physical interactions.
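As an illustrative sketch of how the three components might compose, the Python snippet below stubs out the pipeline: counterfactual-simulation features feed a toy literal semantics over the four expressions, and an RSA-style pragmatic speaker scores each expression by how informative it would be to a literal listener. The feature names (`whether_cause`, `how_cause`), the particular semantics, and the softmax parameter `alpha` are assumptions made for illustration, not the paper's actual implementation.

```python
# Minimal, illustrative sketch of the three-component model (not the authors'
# implementation). Feature names and semantics are hypothetical placeholders.
import math

# --- Component 1: causal inference (stub) -----------------------------------
# In the full model these aspects come from counterfactual simulations of the
# physical scene; here they are hand-coded features of a clip.
def causal_features(clip):
    """Return counterfactual aspects of the candidate cause for a clip."""
    return clip  # e.g. {"whether_cause": True, "how_cause": True}

# --- Component 2: literal semantics ------------------------------------------
# Each expression maps the counterfactual aspects to a truth value (toy rules).
SEMANTICS = {
    "caused":             lambda f: f["whether_cause"] and f["how_cause"],
    "enabled":            lambda f: f["whether_cause"],
    "affected":           lambda f: f["how_cause"],
    "made no difference": lambda f: not (f["whether_cause"] or f["how_cause"]),
}

# --- Component 3: pragmatics (RSA-style speaker) ------------------------------
def literal_listener(utterance, situations, prior):
    """P_L0(situation | utterance) proportional to [[utterance]](situation) * prior."""
    scores = [SEMANTICS[utterance](causal_features(s)) * p
              for s, p in zip(situations, prior)]
    total = sum(scores)
    return [x / total if total > 0 else 0.0 for x in scores]

def pragmatic_speaker(situation_idx, situations, prior, alpha=3.0):
    """P_S1(utterance | situation) proportional to exp(alpha * log P_L0(situation | utterance))."""
    utilities = {}
    for u in SEMANTICS:
        l0 = literal_listener(u, situations, prior)[situation_idx]
        utilities[u] = math.exp(alpha * math.log(l0)) if l0 > 0 else 0.0
    z = sum(utilities.values())
    return {u: v / z for u, v in utilities.items()} if z > 0 else utilities

# Example: two candidate situations; the speaker describes the first one.
situations = [
    {"whether_cause": True,  "how_cause": True},   # cause made a difference
    {"whether_cause": False, "how_cause": True},   # outcome was overdetermined
]
print(pragmatic_speaker(0, situations, prior=[0.5, 0.5]))
```

Under these toy assumptions, "caused" and "enabled" receive the highest speaker probability for the first situation because they uniquely pick it out for the literal listener, while the less specific "affected" is penalized for being true of both situations.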