Big Chemical Encyclopedia


Vector covariant

This result shows that, by its transformation properties, the contracted quantity A^l_{jkl} is equivalent to a covariant vector of rank two. This process of summing over a pair of contravariant and covariant indices is called contraction. It always reduces the rank of a mixed tensor by two; thus, when applied to a mixed tensor of rank two, the result is a scalar ... [Pg.37]
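As a brief worked illustration of this contraction rule (generic coordinate-transformation notation assumed here, not the symbols of the cited text), contracting a mixed rank-three tensor over one upper and one lower index leaves a covariant vector:

$$
S_k \equiv T^{i}{}_{ik}, \qquad
S'_k = \frac{\partial x'^{i}}{\partial x^{a}}\,\frac{\partial x^{b}}{\partial x'^{i}}\,\frac{\partial x^{c}}{\partial x'^{k}}\,T^{a}{}_{bc}
     = \delta^{b}_{a}\,\frac{\partial x^{c}}{\partial x'^{k}}\,T^{a}{}_{bc}
     = \frac{\partial x^{c}}{\partial x'^{k}}\,S_{c},
$$

i.e. the rank has dropped by two and the surviving index transforms covariantly.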

The products of the components of two covariant vectors, taken in pairs, form the components of a covariant tensor. If the vectors are contravariant,... [Pg.158]
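A minimal worked instance of this statement (generic notation, assumed rather than quoted from the cited text): with two covariant vectors a_i and b_j,

$$
T_{ij} \equiv a_i\,b_j, \qquad
T'_{ij} = a'_i\,b'_j
        = \frac{\partial x^{k}}{\partial x'^{i}}\,a_k\;\frac{\partial x^{l}}{\partial x'^{j}}\,b_l
        = \frac{\partial x^{k}}{\partial x'^{i}}\,\frac{\partial x^{l}}{\partial x'^{j}}\,T_{kl},
$$

which is exactly the transformation law of a covariant tensor of rank two.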

Secondly, the commutator is the Lie product [33] of the operators X†rXs and X†tXu; this choice of multiplication is particularly appropriate when one realizes that the X†rXs are the generators of the semisimple compact Lie group U(n), which is associated with the infinitesimal unitary transformations of the Euclidean vector space R^n (e.g., the space of the creation operators) [34]. With the preceding comments, the action of the transformation operator on the creation operators can formally be written in the usual form of the transformation law for covariant vectors [33]... [Pg.216]
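The algebra alluded to here can be checked in a small numerical sketch: the commutators of bilinear operators of this type are isomorphic to those of elementary matrix units E_rs, which obey the u(n) relation [E_rs, E_tu] = δ_st E_ru − δ_ur E_ts. The code below is an illustration with plain matrices, not the second-quantized operators of the cited text.

```python
import itertools
import numpy as np

n = 3

def E(r, s):
    """Elementary matrix unit E_rs: 1 in row r, column s, zeros elsewhere."""
    m = np.zeros((n, n))
    m[r, s] = 1.0
    return m

def lie_product(a, b):
    """The Lie product (commutator) [a, b] = ab - ba."""
    return a @ b - b @ a

# Verify [E_rs, E_tu] = delta_st * E_ru - delta_ur * E_ts for every index choice,
# which is the u(n) commutation relation of the U(n) generators.
for r, s, t, u in itertools.product(range(n), repeat=4):
    lhs = lie_product(E(r, s), E(t, u))
    rhs = (s == t) * E(r, u) - (u == r) * E(t, s)
    assert np.allclose(lhs, rhs)

print(f"u({n}) commutation relations verified")
```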

In words, the transformation operator transforms a covariant vector into a covariant vector [cf. Eq. (54)], but the transformation operator transforms a contravariant vector into a contravariant rank-1 tensor that is not a traditional vector. Since Lrs is antisymmetric, the rank-1 contravariant tensor in Eq. (55) can be converted into a vector by interchanging indices, which results in a minus sign. However, in cases in which there is no ambiguity, the covariant and contravariant indices will be conflated to make the notation more compact. [Pg.218]

A third approach, proposed by Mandema et al. (16), was an improvement on the Maitre et al. (15) approach. The first step is similar to that proposed by Maitre et al. (15), but in the second step individual PK/PD parameters are regressed against covariates using generalized additive modeling (GAM). In the final step, NONMEM is used to optimize and finalize the population model. The approach does not discuss how a reduction in the dimensionality of the covariate vector should be handled. [Pg.230]
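A minimal sketch of how the second (GAM) step might look in code, assuming pygam is available and that the first step has already produced empirical Bayes clearance estimates; the covariate names, the simulated data, and the AIC-based screening rule are illustrative assumptions, not the Mandema et al. procedure itself, and the final NONMEM step is not shown.

```python
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(0)
n_subj = 120

# Hypothetical covariates: weight (kg), age (yr), creatinine clearance (mL/min).
covariates = np.column_stack([
    rng.normal(70, 12, n_subj),
    rng.normal(55, 15, n_subj),
    rng.normal(90, 25, n_subj),
])
names = ["weight", "age", "creatinine clearance"]

# Hypothetical step-1 output: individual clearance estimates, here driven by weight.
cl_i = 5.0 * (covariates[:, 0] / 70.0) ** 0.75 * np.exp(rng.normal(0, 0.2, n_subj))

# Step 2: regress log(CL) on each covariate with a one-smoother GAM and compare
# the fits; covariates with clearly better AIC are candidates to carry into the
# final (NONMEM) population model.
for j, name in enumerate(names):
    gam = LinearGAM(s(0)).fit(covariates[:, [j]], np.log(cl_i))
    print(f"{name:22s} AIC = {gam.statistics_['AIC']:.1f}")
```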

With this approach, the dimensionality of the covariate vector is reduced before the GAM step by eliminating redundant covariates, taking collinearity between covariates into account. [Pg.231]
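One hedged sketch of how this pre-GAM reduction could be carried out with a plain correlation screen (the 0.8 cutoff, the covariates, and the keep-the-first rule are illustrative assumptions, not part of the cited approach):

```python
import numpy as np

def reduce_covariates(X, names, threshold=0.8):
    """Keep a covariate only if it is not strongly correlated (|r| > threshold)
    with any covariate already kept; returns the retained indices and names."""
    corr = np.corrcoef(X, rowvar=False)
    kept = []
    for j in range(X.shape[1]):
        if all(abs(corr[j, k]) <= threshold for k in kept):
            kept.append(j)
    return kept, [names[j] for j in kept]

rng = np.random.default_rng(1)
weight = rng.normal(70, 12, 200)
height = rng.normal(1.75, 0.05, 200)
bmi = weight / height**2                 # nearly redundant with weight here
age = rng.normal(50, 15, 200)

idx, kept = reduce_covariates(np.column_stack([weight, bmi, age]),
                              ["weight", "bmi", "age"])
print("retained covariates:", kept)      # bmi is expected to be dropped
```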

Data structure analysis, exploratory examination of raw data (dose, exposure, response, and covariates) for hidden structure, and the reduction of the dimensionality of the covariate vector. [Pg.384]

Data structure analysis is the examination of the raw data for hidden structure, outliers, or leverage observations. This is repeated during the exploratory modeling (and nonlinear mixed effects modeling) steps using case deletion diagnostics (20). This type of analysis is important since outliers or leverage observations may occur in a population PM data set. It is equally important for the reduction of the covariate vector. [Pg.386]
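A small numerical sketch of the kind of leverage/case-deletion diagnostic meant here, written for an ordinary linear regression (the design, the 2p/n cutoff, and the use of Cook's distance are illustrative assumptions; population PM analyses would use the mixed-effects analogues of reference (20)):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
x = rng.uniform(0.0, 10.0, n)
x[-1] = 30.0                                  # one deliberately extreme design point
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, n)

X = np.column_stack([np.ones(n), x])          # design matrix with intercept
H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat matrix
leverage = np.diag(H)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
p = X.shape[1]
s2 = resid @ resid / (n - p)
cooks_d = resid**2 / (p * s2) * leverage / (1.0 - leverage) ** 2   # case-deletion influence

flagged = np.where(leverage > 2 * p / n)[0]   # common rule-of-thumb leverage cutoff
print("high-leverage indices:", flagged, "max Cook's distance:", round(cooks_d.max(), 3))
```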

Equation (4) implies that (det A)^2 = 1, so det A = ±1. It follows that A is a non-singular matrix, with A^{-1} = g^{-1} A^T g. Along with contravariant vectors we associate covariant vectors (covectors) a such that... [Pg.113]
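A short worked version of this implication, assuming that Eq. (4) is the metric-preservation condition A^T g A = g (an assumption, since the equation itself is not reproduced in this excerpt):

$$
\det(A^{\mathrm{T}} g A) = \det g \;\Rightarrow\; (\det A)^{2} = 1 \;\Rightarrow\; \det A = \pm 1,
\qquad
A^{\mathrm{T}} g A = g \;\Rightarrow\; A^{-1} = g^{-1} A^{\mathrm{T}} g,
$$

so that covectors can be defined as the objects whose components transform with the inverse matrix.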

We are now in a position to define the homogeneous coordinates in tangent space precisely. Consider any, but always fixed, covariant vector φα of index 0, with φ₀ = .... To describe the homogeneous coordinates of a given point dx of the tangent space we take an arbitrary number k and set... [Pg.332]

Exactly as in all affine cases we can now interpret the different projective tensors geometrically. Say, for instance, that Aa is a projective covariant vector. Then... [Pg.334]

Furthermore G₀₀ is a projective scalar and G₀a a projective covariant vector. [Pg.334]

If we assume, for instance, that M₁, ..., M₄ are four covariant vectors, then the quantities W are scalars. The coordinates obtained in this way we call scalar coordinates. We are, however, also free to choose other transformation laws for the Mⱼ. Even so, we can also introduce general affine coordinates in the five-dimensional associated space. We do this through the transformation... [Pg.383]

So, for instance, does the projective covariant vector Aa correspond to the affine vector... [Pg.385]

Alternatively, an affine covariant vector corresponds to the projective... [Pg.385]

Now, let us choose n − h linearly independent covariant vectors g^p as a basis of the reaction subspace V and show that n − h is the number of independent chemical reactions in the mixture, cf. below (4.45) and Rem. 4. These vectors can be written in the basis of W as... [Pg.153]
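A hedged numerical sketch of this count, assuming (as is standard in this setting) that h equals the rank of the atomic/formula matrix of the mixture; the species list is an illustrative choice, not the book's example.

```python
import numpy as np

# Formula (atomic) matrix: rows = elements C, H, O; columns = species.
species = ["CO", "H2O", "CO2", "H2", "CH4"]
A = np.array([
    [1, 0, 1, 0, 1],   # C
    [0, 2, 0, 2, 4],   # H
    [1, 1, 2, 0, 0],   # O
], dtype=float)

n = len(species)                        # number of constituents
h = np.linalg.matrix_rank(A)            # assumed meaning of h: rank of the formula matrix
print("independent chemical reactions: n - h =", n - h)    # 5 - 3 = 2

# A basis of the reaction subspace: stoichiometric vectors spanning the
# null space of A (each row below is one independent reaction, up to scaling).
_, _, vt = np.linalg.svd(A)
print(np.round(vt[h:], 3))
```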

In contrast to the three-dimensional situation of nonrelativistic mechanics, there are now two kinds of vectors within the four-dimensional Minkowski space. Contravariant vectors transform according to Eq. (3.36), whereas covariant vectors transform according to Eq. (3.38) under the transition from IS to IS′. The reason for this crucial feature of Minkowski space is solely rooted in the structure of the metric g given by Eq. (3.8), which has been shown to be responsible for the central structure of Lorentz transformations as given by Eq. (3.17). As a consequence, the transposed Lorentz transformation Λ^T no longer represents the inverse transformation Λ^{-1}. As we have seen, the inverse Lorentz transformation is now more involved and is given by Eq. (3.25). [Pg.63]
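These statements can be illustrated numerically for a boost along x, using the usual diag(+1, −1, −1, −1) metric (a sketch with assumed conventions, not the book's Eqs. (3.8)–(3.38)):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])            # Minkowski metric (assumed convention)

def boost_x(beta):
    """Lorentz boost along the x axis with velocity beta (in units of c)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = gamma
    L[0, 1] = L[1, 0] = -gamma * beta
    return L

L = boost_x(0.6)

# Defining property of Lorentz transformations: the metric is preserved.
assert np.allclose(L.T @ g @ L, g)

# The transpose is NOT the inverse (the crucial point of the passage) ...
assert not np.allclose(L.T, np.linalg.inv(L))
# ... the inverse is instead g^{-1} L^T g.
assert np.allclose(np.linalg.inv(L), np.linalg.inv(g) @ L.T @ g)

# Contravariant components transform with L, covariant components with (L^{-1})^T = g L g.
assert np.allclose(np.linalg.inv(L).T, g @ L @ g)
print("Lorentz transformation checks passed")
```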

The 4-gradient has been written as a row vector above solely for our convenience; it is still to be interpreted mathematically as a column vector, of course. Being defined as the derivative with respect to the contravariant components x^μ, the 4-gradient ∂_μ is naturally a covariant vector, since its transformation property under Lorentz transformations is given by... [Pg.64]
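A short worked form of that transformation behaviour, written under the standard convention x'^μ = Λ^μ{}_ν x^ν (a sketch, not the equation actually displayed in the source):

$$
\partial'_{\mu} \;\equiv\; \frac{\partial}{\partial x'^{\mu}}
  \;=\; \frac{\partial x^{\nu}}{\partial x'^{\mu}}\,\frac{\partial}{\partial x^{\nu}}
  \;=\; \bigl(\Lambda^{-1}\bigr)^{\nu}{}_{\mu}\,\partial_{\nu},
$$

which is precisely the covariant transformation law.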

Consequently, the transformation property of a covariant Lorentz tensor under Lorentz transformations is given as that of an n-fold product of covariant vectors. [Pg.64]

Besides that, we are going to consider that each individual carries a covariate vector represented by x, so data from the i-th... [Pg.454]

It is assumed that the repeated events of an individual with covariate vector x occur according to a nonhomogeneous Poisson process with intensity function given by... [Pg.454]
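A hedged simulation sketch of such a model, assuming a proportional-intensity form λ(t | x) = λ₀(t) exp(βᵀx) and Lewis–Shedler thinning; the baseline intensity, the value of β, and the covariate values are illustrative assumptions, not the model of the cited text:

```python
import numpy as np

def simulate_nhpp(x, beta, t_end, rng):
    """Simulate repeated-event times from a nonhomogeneous Poisson process with
    intensity lambda(t | x) = lambda0(t) * exp(beta . x), via thinning."""
    lambda0 = lambda t: 0.5 + 0.3 * np.sin(2.0 * np.pi * t / 28.0)   # assumed baseline (events/day)
    scale = np.exp(beta @ x)
    lam_max = 0.8 * scale                     # upper bound on lambda(t | x)

    events, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)   # candidate from the bounding homogeneous process
        if t > t_end:
            break
        if rng.uniform() < lambda0(t) * scale / lam_max:   # accept with prob. lambda(t|x)/lam_max
            events.append(t)
    return np.array(events)

rng = np.random.default_rng(3)
x_i = np.array([1.0, 68.0 / 70.0])            # e.g. [treatment indicator, scaled body weight]
beta = np.array([-0.4, 0.6])
print(simulate_nhpp(x_i, beta, t_end=30.0, rng=rng).round(2))
```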

If two distinct sets of n independent displacement coordinates were denoted by covariant vector components r_r and q_r, a nonlinear transformation would take the general form... [Pg.22]

Each of the columns of the above matrix corresponds to a sigma-point vector x_i with components corresponding to the state, process, and measurement noise, respectively, and P is the corresponding augmented covariance matrix, incorporating the process and observation noise components ... [Pg.1680]
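A minimal numerical sketch of how such an augmented sigma-point matrix is assembled (standard unscented-transform scaling; the dimensions and the noise covariances Q and R below are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import block_diag, cholesky

def augmented_sigma_points(x, P, Q, R, alpha=1e-3, kappa=0.0):
    """Return the 2L+1 sigma points (one per column) of the augmented state
    [x; 0; 0], whose covariance block_diag(P, Q, R) stacks the state,
    process-noise, and measurement-noise components."""
    nx, nq, nr = len(x), Q.shape[0], R.shape[0]
    L = nx + nq + nr
    x_aug = np.concatenate([x, np.zeros(nq), np.zeros(nr)])
    P_aug = block_diag(P, Q, R)

    lam = alpha**2 * (L + kappa) - L
    S = cholesky((L + lam) * P_aug, lower=True)      # matrix square root of the scaled covariance

    sigma = np.empty((L, 2 * L + 1))
    sigma[:, 0] = x_aug
    for i in range(L):
        sigma[:, 1 + i] = x_aug + S[:, i]
        sigma[:, 1 + L + i] = x_aug - S[:, i]
    return sigma

x = np.array([0.0, 1.0])        # state
P = np.diag([0.1, 0.2])         # state covariance
Q = np.diag([0.01])             # process-noise covariance
R = np.diag([0.05])             # measurement-noise covariance
print(augmented_sigma_points(x, P, Q, R).shape)      # (4, 9): L = 4 gives 2L+1 = 9 columns
```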





