In statistics, a parametric model or parametric family or finite-dimensional model is a particular class of statistical models. Specifically, a parametric model is a family of probability distributions that has a finite number of parameters.
Definition
A statistical model is a collection of probability distributions on some sample space. We assume that the collection, 𝒫, is indexed by some set Θ. The set Θ is called the parameter set or, more commonly, the parameter space. For each θ ∈ Θ, let Fθ denote the corresponding member of the collection; so Fθ is a cumulative distribution function. Then a statistical model can be written as
𝒫 = { Fθ : θ ∈ Θ }.
The model is a parametric model if Θ ⊆ ℝᵏ for some positive integer k.
When the model consists of absolutely continuous distributions, it is often specified in terms of the corresponding probability density functions:
𝒫 = { fθ : θ ∈ Θ }.
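As an illustrative sketch (not part of the formal definition), the correspondence θ ↦ Fθ can be viewed as an ordinary function from a parameter vector to a distribution. The example below assumes Python with scipy.stats, uses the normal family (k = 2) as the concrete case, and the function name normal_model is a hypothetical label introduced here.

```python
# A minimal sketch of a parametric model: a mapping theta -> F_theta,
# where theta ranges over a subset of R^k.  Here the normal family
# (k = 2, theta = (mu, sigma) with sigma > 0) is the illustrative case;
# scipy.stats supplies the distribution objects.
from scipy import stats

def normal_model(theta):
    """Return the member F_theta of the normal family for theta = (mu, sigma)."""
    mu, sigma = theta
    if sigma <= 0:
        raise ValueError("theta lies outside the parameter space: sigma must be > 0")
    return stats.norm(loc=mu, scale=sigma)

# Evaluate the cumulative distribution function F_theta at a point:
F = normal_model((0.0, 1.0))
print(F.cdf(1.96))   # ~0.975 for the standard normal
```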
Examples
- The Poisson family of distributions is parametrized by a single number λ > 0:
  pλ(j) = (λ^j / j!) e^(−λ),   j = 0, 1, 2, 3, …,
where pλ is the probability mass function. This family is an exponential family.
- The normal family is parametrized by θ = (μ, σ), where μ ∈ ℝ is a location parameter and σ > 0 is a scale parameter:
  fθ(x) = (1 / (σ√(2π))) exp(−(x − μ)² / (2σ²)),   x ∈ ℝ.
This parametrized family is both an exponential family and a location-scale family.
- The Weibull translation model has a three-dimensional parameter θ = (λ, β, μ), with scale λ > 0, shape β > 0, and location μ ∈ ℝ:
  fθ(x) = (β/λ) ((x − μ)/λ)^(β−1) exp(−((x − μ)/λ)^β) for x > μ, and fθ(x) = 0 otherwise.
- The binomial model is parametrized by θ = (n, p), where n is a non-negative integer and p is a probability (i.e. 0 ≤ p ≤ 1):
  pθ(k) = (n! / (k!(n − k)!)) p^k (1 − p)^(n−k),   k = 0, 1, 2, …, n.
This example illustrates the definition for a model with some discrete parameters.
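The families listed above can also be evaluated numerically. The following sketch assumes Python with scipy.stats; the parameter values are arbitrary illustrations, and the correspondence between (λ, β, μ) and scipy's weibull_min arguments is noted in the comments.

```python
# Numerical sketch of the parametric families listed above, using
# scipy.stats; parameter values are arbitrary illustrations.
from scipy import stats

# Poisson family, theta = lambda > 0: P(X = j) = (lam**j / j!) * exp(-lam)
print(stats.poisson(mu=2.0).pmf(3))

# Normal family, theta = (mu, sigma): density at x = 0.5
print(stats.norm(loc=0.0, scale=1.0).pdf(0.5))

# Weibull translation model, theta = (lam, beta, mu): scipy's weibull_min
# takes the shape beta as `c`, the location mu as `loc`, and the scale lam.
print(stats.weibull_min(c=1.5, loc=2.0, scale=3.0).pdf(4.0))

# Binomial model, theta = (n, p): P(X = k) at k = 2
print(stats.binom(n=10, p=0.3).pmf(2))
```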
General remarks
A parametric model is called identifiable if the mapping θ ↦ Pθ is invertible, i.e. if there are no two distinct parameter values θ1 and θ2 such that Pθ1 = Pθ2.
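A minimal sketch of a non-identifiable parametrization, assuming Python with numpy and scipy.stats: the hypothetical model below indexes normal distributions by θ = (a, b) but depends on θ only through the sum a + b, so two distinct parameter values yield the same distribution. The function name overparametrized_normal is introduced here purely for illustration.

```python
# A hypothetical non-identifiable parametrization: theta = (a, b) enters the
# distribution only through the sum a + b, so distinct parameter values can
# index the same distribution.
import numpy as np
from scipy import stats

def overparametrized_normal(theta):
    a, b = theta
    return stats.norm(loc=a + b, scale=1.0)   # P_theta depends on theta only via a + b

theta1, theta2 = (1.0, 2.0), (0.0, 3.0)       # theta1 != theta2
F1, F2 = overparametrized_normal(theta1), overparametrized_normal(theta2)

x = np.linspace(-5, 10, 7)
print(np.allclose(F1.cdf(x), F2.cdf(x)))      # True: P_theta1 = P_theta2, so not identifiable
```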
Comparisons with other classes of models
Parametric models are contrasted with semi-parametric, semi-nonparametric, and non-parametric models, all of which involve an infinite-dimensional set of "parameters" for their description. The distinction between these four classes is as follows:
- in a "parametric" model all the parameters are in finite-dimensional parameter spaces;
- a model is "non-parametric" if all the parameters are in infinite-dimensional parameter spaces;
- a "semi-parametric" model contains finite-dimensional parameters of interest and infinite-dimensional nuisance parameters;
- a "semi-nonparametric" model has both finite-dimensional and infinite-dimensional unknown parameters of interest.
Some statisticians believe that the concepts "parametric", "non-parametric", and "semi-parametric" are ambiguous.[1] It can also be noted that the set of all probability measures has the cardinality of the continuum, and therefore it is possible to parametrize any model at all by a single number in the interval (0, 1).[2] This difficulty can be avoided by considering only "smooth" parametric models.
Notes
- ^ Le Cam & Yang 2000, §7.4
- ^ Bickel et al. 1998, p. 2
Bibliography
- Bickel, Peter J.; Doksum, Kjell A. (2001), Mathematical Statistics: Basic and selected topics, vol. 1 (2nd ed., updated printing 2007), Prentice-Hall
- Bickel, Peter J.; Klaassen, Chris A. J.; Ritov, Ya’acov; Wellner, Jon A. (1998), Efficient and Adaptive Estimation for Semiparametric Models, Springer
- Davison, A. C. (2003), Statistical Models, Cambridge University Press
- Le Cam, Lucien; Yang, Grace Lo (2000), Asymptotics in Statistics: Some basic concepts (2nd ed.), Springer
- Lehmann, Erich L.; Casella, George (1998), Theory of Point Estimation (2nd ed.), Springer
- Liese, Friedrich; Miescke, Klaus-J. (2008), Statistical Decision Theory: Estimation, testing, and selection, Springer
- Pfanzagl, Johann; with the assistance of R. Hamböker (1994), Parametric Statistical Theory, Walter de Gruyter, MR 1291393