Talk:Vector calculus identities


The other vector identities can be found in the appendices of many textbooks. These particular ones came from "Advanced Engineering Electromagnetics" by Constantine A. Balanis.

Restate identities using div, grad, curl (names)?

Hello. I wonder if anyone is opposed to rewriting the identities as curl grad foo = 0, etc., i.e., using the words curl, div, and grad. As Wikipedia is a reference work, it seems safe to assume that we are going to get many readers who are capable of understanding the identities, but are only occasional users of the notation. Yes, no, maybe? Keep up the good work, 64.48.158.121 03:59, 19 May 2006 (UTC)[reply]

Collect identities from other pages here

Another thing to think about -- it would be a good idea to look at the other pages in Category:Vector calculus and see if there are some identities which can be copied here. 64.48.158.121 04:01, 19 May 2006 (UTC)[reply]

Reversions of identities using Feynman notation

Hello User talk:Crowsnest: You reverted these entries; why?

Occasionally I come to this page for a quick reference of vector calculus identities. Feynman notation, while fine enough in itself, is not helpful. (We get it -- you can understand the notation.) The fact is that Feynman notation has not caught on as widely as, say, Einstein notation, and so including it on the vector calculus identities page in Wikipedia is just confusing. Keep it, fine, but put it as an "oh by the way, here's the same stuff in different notation" at the end of the page. It would be very helpful to just include the formulas that most of us work with at the beginning when we just come for a quick reference -- otherwise your page is gonna get passed right over. —Preceding unsigned comment added by Jkedmond (talkcontribs) 15:29, 29 January 2009 (UTC)[reply]


In simpler form, using Feynman subscript notation:

∇(A⋅B) = ∇_A(A⋅B) + ∇_B(A⋅B),

where the notation ∇_A means the subscripted gradient operates on only the factor A.

∇_B(A⋅B) = A×(∇×B) + (A⋅∇)B,

where the Feynman subscript notation ∇_B means the subscripted gradient operates on only the factor B.

Point 1: The notation is defined and is useful - why delete it? It's only notation - use it or don't; suit yourself. So why not show how it works and give the reader a choice?

Point 2: Feynman's notation is used; for example, see: "following Feynman, introduce a partial operator ∇_A in the latter identity (14), where the subscript denotes the quantity to be differentiated." (p. 4)

Point 3: Quote from Feynman: "Here is our new convention: we show by a subscript, what a differential operator works on; the order has no meaning." etc. The guy introduced it in his undergrad lectures - he must think it's useful. Hey, he's a Nobel Prize winner; why trust his judgment, eh?? Reference: R P Feynman, Leighton & Sands (1964). The Feynman Lectures on Physics. Addison-Wesley. Vol. II, p. 27-4. ISBN 0805390499.

Point 4: They are identities, for Pete's sake. They are here for reference only - no great wisdom attaches. You can prove them yourself, there is no question of validity. It's just to save some time, or provide a little smörgåsbord for browsing when solving a problem. What possible purpose or rationale is there for these reverts??? Brews ohare (talk) 05:35, 23 April 2008 (UTC)[reply]
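For illustration (an added check, not part of the original thread), both subscripted gradients in the quoted identity can be expanded with the BAC–CAB rule, treating the unsubscripted factor as constant:

\nabla_{\mathbf A}(\mathbf A\cdot\mathbf B) = \mathbf B\times(\nabla\times\mathbf A) + (\mathbf B\cdot\nabla)\mathbf A,
\qquad
\nabla_{\mathbf B}(\mathbf A\cdot\mathbf B) = \mathbf A\times(\nabla\times\mathbf B) + (\mathbf A\cdot\nabla)\mathbf B,

and adding the two reproduces the standard expansion of ∇(A⋅B).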

Hello Brews. I reverted them because they are not common vector calculus notation and were unreferenced. Not because they are wrong, not useful, etc. This article starts with "...in vector calculus", so that says something about what people may expect on this page. Different branches of science develop different tools and notations, depending on their needs (which may spread out to other branches later on).
You cannot expect everybody to know Feynman notation, and things have to be verifiable on Wikipedia: "The threshold for inclusion in Wikipedia is verifiability, not truth".
So please re-instate them, with the references mentioned above (or others). And if you like, start an article on Feynman notation and its use (that sounds at least interesting to me, I came across some variational problems in fluid dynamics where use of Feynman notation would have shortened the writing-up considerably). Best regards, Crowsnest (talk) 09:51, 23 April 2008 (UTC)[reply]
OK - I've done that. I think you'll find that the overdot notation is the convenient one. I stumbled across it yesterday by accident. Brews ohare (talk) 15:23, 23 April 2008 (UTC)[reply]

Ambiguity in A.nabla

I think A.nabla should be made clearer. In Cartesian coordinates it's easy to understand, but in other coordinate systems one might wonder whether it refers to nabla treated as a vector operator as it appears in the gradient or as it appears in the divergence, which are of course different. The answer is: as it appears in the divergence; but since A.nabla is sometimes used as a notation for the directional derivative in the direction of A, one may think the gradient form should be used. I think it is definitely worth making this clearer. —Preceding unsigned comment added by 130.104.48.8 (talk) 10:55, 24 February 2009 (UTC)[reply]

See the discussion on Talk:Navier–Stokes equations#Convection Vs Advection. A copy of the relevant part for your question:
Provided for ∇ the covariant derivative is taken in a curvilinear coordinate system. -- Crowsnest (talk) 16:56, 24 February 2009 (UTC)[reply]
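To illustrate the point (an added sketch, not part of the original exchange): in cylindrical coordinates (ρ, φ, z), the ρ-component of (A⋅∇)B picks up an extra term from differentiating the unit vectors, so it is not simply the directional derivative applied to the component B_ρ:

[(\mathbf{A}\cdot\nabla)\mathbf{B}]_\rho = A_\rho \frac{\partial B_\rho}{\partial \rho} + \frac{A_\varphi}{\rho}\frac{\partial B_\rho}{\partial \varphi} + A_z \frac{\partial B_\rho}{\partial z} - \frac{A_\varphi B_\varphi}{\rho}.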

Using Cartesian tensors to derive identities

I think it would be good to mention how Cartesian tensors in rectangular coordinates (a condition which I didn't state in a previous edit) can be used to derive the vector calculus identities stated in this article. This method is much easier and more elegant than laboriously expanding curls of curls etc. and then grouping terms. However, detailing how this is done might expand the article a substantial amount. Any thoughts? —Preceding unsigned comment added by Breeet (talkcontribs) 07:25, 5 May 2010 (UTC)[reply]
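As an added illustration of the kind of derivation meant here (a sketch, valid in rectangular Cartesian coordinates, using the Levi-Civita symbol, the Kronecker delta and the summation convention):

[\nabla\times(\nabla\times\mathbf{A})]_i = \epsilon_{ijk}\partial_j(\epsilon_{klm}\partial_l A_m) = (\delta_{il}\delta_{jm}-\delta_{im}\delta_{jl})\,\partial_j\partial_l A_m = \partial_i(\partial_m A_m) - \partial_j\partial_j A_i,

which is the curl-of-curl identity ∇×(∇×A) = ∇(∇⋅A) − ∇²A without any term-by-term expansion.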

Hello ... I have extended this article by adding a few more identities. Now, I think this article is a superset of the other article. So, I would like to suggest deleting (or something equivalent) the article "List of vector identities". If anybody has an objection, please let us discuss. zinka 09:30, 5 May 2010 (UTC)

Removal of overlap with Vector algebra relations

The present article Vector calculus identities describes identities involving integration and derivative operations like the curl and gradient. The article Vector algebra relations involves only algebraic relations like the dot and cross products, and no calculus. Inasmuch as the subsection of Vector calculus identities titled "Addition and Multiplication" does not involve any calculus (and so is not properly part of the subject of Vector calculus identities), and inasmuch as the material of this section is contained (and expanded upon) in Vector algebra relations, I have removed this section and replaced it with a link to Vector algebra relations. Brews ohare (talk) 15:18, 14 September 2010 (UTC)[reply]

As seen in the section immediately above this one, this article is the result of a merge between Vector calculus identities and List of vector identities, i.e. it contains both vector calculus and more general identities. This makes sense as the calculus and non-calculus identities are closely related, while the article containing both is not by any measure too long. There is no need now to split off this content into a separate article, undoing an uncontentious merge and forcing readers to search through two articles for closely related identities.--JohnBlackburnewordsdeeds 15:38, 14 September 2010 (UTC)[reply]
The two different types of identity are not closely related. Repeating the non-calculus related identities here is unnecessary. Moreover, the purely algebraic identities here are only a subset of the useful identities reported at Vector algebra relations, so the reader will possibly have to go to Vector algebra relations even with this repetition. It makes no sense to duplicate. Brews ohare (talk) 16:28, 14 September 2010 (UTC)[reply]
Ok, but eight years on and it doesn't look like anything has been done to correct it. Alternatively, should we just merge them together? --Ipatrol (talk) 20:41, 3 December 2018 (UTC)[reply]

RfC

Should duplicate material in Vector calculus identities be removed?

The article Vector calculus identities describes identities involving integration as well as derivative operations like the curl and gradient. The article Vector algebra relations contains only algebraic relations involving the dot and cross products, and no calculus. Inasmuch as the subsection of Vector calculus identities titled "Addition and Multiplication" does not involve any calculus (and so is not properly part of the subject of Vector calculus identities), and inasmuch as the material of this section is contained (and expanded upon) in Vector algebra relations, should this section of Vector calculus identities be removed? Brews ohare (talk) 17:19, 14 September 2010 (UTC)[reply]

Comments

Please add your comments below beginning with an asterisk *

  • Remove: The inclusion here of a shortened subset of algebraic identities from Vector algebra relations serves no purpose in the listing of unrelated differential and integral identities in Vector calculus identities. Brews ohare (talk) 17:36, 14 September 2010 (UTC)[reply]
  • Keep There is no reason for removing perfectly good content which makes this a good list of vector identities. Calculus and non-calculus identities are closely related: many of the identities from vector algebra have parallels in vector calculus, so it is good to list them side by side. They are in the same article as the result of a merge that was carried out earlier this year and was uncontentious at the time. If a reader is looking for a list of "vector identities" this is it. The duplicate material is only there because you have just forked it, i.e. you have in the last couple of days created a new article and populated it with a poorly formatted list of relations, definitions, properties, identities, inequalities and proofs with no clear criteria for inclusion. This is after having proposed it at this ongoing AfD and then not waiting for the proper conclusion of the AfD, so preempting its outcome.--JohnBlackburnewordsdeeds 17:35, 14 September 2010 (UTC)[reply]
There is no "close relation" between the two sets of equations, and no parallels are pointed out. The earlier merge was between this article and a hodge-podge mixture of differential, integral and algebraic relations. The algebraic relations should have been separated from the rest at that time, rather than merge the whole lot under a misnomer. That separation now is the case with Vector algebra relations. Brews ohare (talk) 17:42, 14 September 2010 (UTC)[reply]
Listing algebraic identities in an article devoted to calculus identities makes Vector calculus identities a miscellany improperly named. Brews ohare (talk) 19:19, 14 September 2010 (UTC)[reply]
  • Remove or alternately find some way to merge the two articles, though I think something like Vector identities might be too vague to be useful. The identities that are not the result of calculus don't belong here, as is indicated clearly by the article title. siafu (talk) 20:33, 14 September 2010 (UTC)[reply]
  • Keep and merge any non-duplicative content into this article. Keeping the information in the same article is very useful from a "trying to find something" point of view -- I'm reminded of Glossary of graph theory, which probably could be split into a few different articles, but the advantage of it being a single article is that there is a single place to turn. Readers interested in the properties of vectors shouldn't have to first determine whether they are interested in calculus identities or algebraic relations; they should be able to go to Properties of Vectors -- or it looks like Euclidean_vector#Basic_properties -- and get what they are looking for. In general, I share the OP's concern about duplicate content in different places. This leads to incomplete content in many places, rather than complete content in one. Consider the overlap in:
And lastly, while I understand the interest in naming the articles correctly, and then the logical next step of creating a fork of content that doesn't exactly belong in one article, the need to be clear, correct, and easy to find for the average reader is much more important (IMO) -- so if that means combining the two articles and renaming it Vector properties or Vector identities or Vector relations -- and consolidating the material, then I strongly suggest that. jheiv talk contribs 08:58, 22 September 2010 (UTC)[reply]
jheiv: The issue you raise is whether a miscellany of topics under a broad heading like Vector properties is an easier place to find results than a number of accurately named more specialized sections (appropriately cross-linked) such as Vector calculus identities and Vector algebra relations. That issue may be difficult to decide, falling under the rubric of personal preferences.
As you have noted, the present article Vector calculus identities is a specialized header, and so the material here that is related to Vector algebra relations and unrelated to calculus is incorrectly included here, especially as it is already found where it belongs in the more complete listing of Vector algebra relations. Brews ohare (talk) 18:01, 1 October 2010 (UTC)[reply]
  • Keep. Why? Because storing information like this in Wikipedia is useless unless it is convenient and easy to work with. While the basic algebra relations aren't strictly calculus identities, one would specifically expect to use these basic identities when performing vector calculus. More complicated algebra relations, such as the cross product of two cross products, are derived from the basic ones, and thus may be left for a dedicated article on algebra relations; yet it is rare that one needs to perform vector calculus without needing at least one of the elementary algebra relations listed on the page at this moment.
I notice that nobody has continued this discussion since almost a year ago. This may have been due to poor formatting, content, etc., of this article. The way I see it, this article should serve as a convenient reference page for those working on problems involving vector calculus, such as fluid dynamics. As such, the article should select appropriate content based on typical applications, rather than a strict adherence to the title. I have already started working on addressing such weaknesses. Aielyn (talk) 14:22, 19 September 2011 (UTC)[reply]
In a further effort to streamline and improve the page, I've just spent some time reorganising the page. As part of this, I am considering removing the overly-fundamental rules of vector operations, restricting the ones listed to only the important-but-not-obvious triple products (and any others that people might think are important enough to include). Anyone who knows enough about vectors to be doing anything at all with vector calculus will inevitably know the more fundamental ones, and a direct link to the algebra relations page should be enough for anyone who doesn't. Aielyn (talk) 15:44, 19 September 2011 (UTC)[reply]
The discussion was concluded a year ago with a consensus to keep the basic products in after a (now banned) editor tried to remove them. They are here because of a merge of Vector identities with Vector calculus identities, done over a year ago without objection. So anyone looking for Vector identities would expect to find them here and they should not be removed against the consensus already established to keep them.--JohnBlackburnewordsdeeds 15:55, 19 September 2011 (UTC)[reply]
Ah, I see. In that case, should I re-add the four-vector identities (dot and cross products of cross products), which I removed while working on the article? I really do think they're superfluous on this page. Aielyn (talk) 17:10, 19 September 2011 (UTC)[reply]
They may seem superfluous if this page is considered in isolation, but they are here as this page is also linked from Vector identities/List of vector identities because of the merge. That also contained some vector calculus identities, so there was a lot of overlap, but without them it would have been so short as to be of little use.--JohnBlackburnewordsdeeds 17:23, 19 September 2011 (UTC)[reply]

deleted section

I deleted the useless template + section in this edit [1]. Is it really that necessary to expand on this? An integral is an integral; if there really are widespread wacky notations in use, just add them as and when, instead of leaving the section empty...-- F = q(E + v × B) 00:05, 2 March 2012 (UTC)[reply]

Conditions on second partial derivatives needed

The identities curl(grad(f)) = 0 and div(curl(F)) = 0 need conditions on the scalar field f and the vector field F, namely continuous second partials in a connected open set. This article says that the identities hold unconditionally. — Preceding unsigned comment added by 69.210.137.176 (talk) 05:06, 11 March 2012 (UTC)[reply]
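For context (an added sketch in index notation): the usual proof of the first identity is

[\nabla\times(\nabla f)]_i = \epsilon_{ijk}\,\partial_j\partial_k f = 0,

where the cancellation uses the equality of the mixed partials ∂_j∂_k f = ∂_k∂_j f, which is exactly where continuity of the second derivatives (Clairaut's/Schwarz's theorem) enters; the same symmetry argument gives ∇⋅(∇×F) = ε_ijk ∂_i∂_j F_k = 0.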

Removal of Feynman subscript notation

I have never seen Feynman subscript notation before. In this article it is introduced as if it is just a convenient abbreviation for a two-term operation in vector calculus, but then it is used only once! Perhaps I am dense but it just seems unintuitive, and it sticks out like a sore thumb in an otherwise beautifully concise page of identities written using the standard nabla notation. The introduction of Hestenes overdot notation is even more strange as it is not used at all.

To be clear, I'm not saying alternative notations/operators don't belong on Wikipedia, just that they don't fit in here. It would be nice to see an article about these, as I think there are probably many, many more variants worth mentioning, as long as the intuition and the gain are made obvious. --Nanite (talk) 10:48, 24 November 2015 (UTC)[reply]

Dyadic calculus identities

I know there are a few vector calculus identities involving dyadics, but I'm not sure if they should be included in this page or the dyadics Wikipedia page. Thoughts? 76.116.117.40 (talk) 00:48, 22 February 2016 (UTC)[reply]
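An example of the kind of identity presumably meant (added for illustration; here AB denotes the dyadic product of A and B):

\nabla\cdot(\mathbf{A}\mathbf{B}) = (\nabla\cdot\mathbf{A})\,\mathbf{B} + (\mathbf{A}\cdot\nabla)\mathbf{B}.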

Draft proof of corollary

Can anyone provide a draft proof (or a reference) for the extraction of this relation:

∭_V (∇×A) dV = ∯_S dS×A

from the 'standard' theorems (e.g. Stokes/Green/Gauss)? I've found a reference on MathWorld (Divergence Theorem), but it mentions using a constant vector "c" which troubles me. Aleonymous (talk) 16:54, 6 March 2018 (UTC)[reply]

It says: Suppose we accept that for any vector field F,
∭_V (∇⋅F) dV = ∯_S F⋅dS.
Now given a vector field A, choose an arbitrary constant (i.e. not a function of position) vector c and let F = A×c; then we accept that
∭_V ∇⋅(A×c) dV = ∯_S (A×c)⋅dS
(for any A and any c), or equivalently
∭_V [c⋅(∇×A) − A⋅(∇×c)] dV = ∯_S c⋅(dS×A).
We choose c to be constant, so ∇×c = 0 and
c⋅∭_V (∇×A) dV = c⋅∯_S dS×A
(for any A and any constant c). This can only be true irrespective of our choice of c if
∭_V (∇×A) dV = ∯_S dS×A.
(We could choose c to be successively x̂, ŷ and ẑ to show that the Cartesian components of the two integrals are the same.) So the linked derivation seems ok (to me) --catslash (talk) 01:56, 7 March 2018 (UTC)[reply]

Error in Vector Dot Product

It seems there is an error in this section. To be concrete, we have to know how to express everything in terms of vector components, i.e. in index notation. I will follow the Einstein summation convention for repeated indices. The original left-hand side is clear: [∇(A⋅B)]_i = ∂_i(A_j B_j). I agree with the first right-hand side, e.g. [(A⋅∇)B]_i = A_j ∂_j B_i. But the third line seems ambiguous: what in components is A⋅∇B? Is it a) A_j ∂_j B_i or b) A_j ∂_i B_j? Following the article Dyadics#Product of dyadic and dyadic, A⋅(∇B) = (A⋅∇)B, so that a) is the right answer. But this is just (A⋅∇)B, so that the two cross-product terms in the first right-hand side must be zero! But they are not. So, I think we want b), which means we should write (A⋅∇B)_i = A_j ∂_i B_j.

What am I missing? Is there an "authoritative" place to go for the meaning of this kind of notation on Wikipedia? Dstrozzi (talk) 03:29, 24 April 2018 (UTC)[reply]
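For illustration (an added index-notation check, not part of the original comment), the full identity does balance if the ambiguous term is read as in b):

\partial_i(A_j B_j) = A_j\,\partial_i B_j + B_j\,\partial_i A_j,
\qquad
[\mathbf{A}\times(\nabla\times\mathbf{B})]_i = \epsilon_{ijk}A_j\,\epsilon_{klm}\partial_l B_m = A_j\,\partial_i B_j - A_j\,\partial_j B_i,

so that [A×(∇×B)]_i + [(A⋅∇)B]_i = A_j ∂_i B_j, and likewise for the two terms with A and B exchanged.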

The reference cited (Kelly) says (p. 117) that conventions vary, and defines (∇A)_{ij} = ∂A_j/∂x_i but uses (∇A)_{ij} = ∂A_i/∂x_j.
Dyadics are less well established in common usage than Gibbs–Heaviside vectors, and consequently dyadic notation does vary somewhat.
For the thing-on-the-left in the notation A⋅∇B (the A) to be the thing-on-the-right in the dyadic is (to me) very surprising. Our article does not adequately define the gradient of a vector, so the notation is at best ambiguous.
I suggest: (1) switching to the former definition above, (2) explaining this notation in the Gradient section at the top of the article, with (3) a note that some authors use the transpose of this.
Actually I would be happy to drop dyadics from this article altogether as proposed above. --catslash (talk) 12:06, 24 April 2018 (UTC)[reply]

I would say using the dyadic definition of Kelly (and me) is fine, unambiguous, and intuitive, in that the "thing on the left" (the A in A⋅∇B) stays on the left. It's a very compact expression, and I don't think it's opaque. There is no need to define "the gradient of a vector", which I'd say is even more ambiguous and non-standard than dyadics. I assume your comment about "the thing on the left becoming the thing on the right" refers to the latter definition, not the earlier dyadic one.

Either way - is there a place on Wikipedia where the math notation that pages should use is defined? I must say that I find the "transpose" in general confusing, since I don't know if a "plain" vector on this page should be understood as a row or column vector. Thinking of things in index notation, transposing a vector "makes no sense", though of course transposing a matrix does.

Also, sorry, I just realized I've been using \vec instead of \mathbf for vectors. If Wikipedia wants people to use boldface and not over-arrows for vectors, maybe they should re-define \vec.

Dstrozzi (talk) 12:35, 28 April 2018 (UTC)[reply]

Try MOS:MATH --catslash (talk) 01:51, 29 April 2018 (UTC)[reply]
The definition (∇A)_{ij} = ∂A_j/∂x_i
is certainly what it should be, but currently on this page it uses the transpose when relating it to the Jacobian. I'm going to modify this to bring it in line with the usual definition of the dyad product (i.e. left symbol index is i, right symbol index is j) 163.1.246.246 (talk) 12:27, 20 August 2019 (UTC)[reply]
The above discussion still doesn't resolve the problem that, at least currently on this page, A⋅∇B ≠ (A⋅∇)B, which is confusing and verging on abuse of notation. It is also not universal, since I've usually seen the latter term set as A⋅(∇B) = (A⋅∇)B, which is also more in line with the definition of ∇B discussed above. I'm going to go ahead and adopt this form in the article, but if people still have problems with this (perhaps because they believe it belongs on the Dyadics page) then I suggest we just remove the offending terms altogether. 90.247.214.191 (talk) 14:23, 8 November 2021 (UTC)[reply]
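To summarise the two conventions contrasted in this thread (an added sketch; the labels are only for illustration):

(\nabla\mathbf{A})_{ij} = \frac{\partial A_j}{\partial x_i} \quad\text{("nabla-first" dyad, so that } \mathbf{B}\cdot\nabla\mathbf{A} = (\mathbf{B}\cdot\nabla)\mathbf{A}\text{)},
\qquad
(\mathbf{J}_{\mathbf{A}})_{ij} = \frac{\partial A_i}{\partial x_j} = \big((\nabla\mathbf{A})^{\mathsf T}\big)_{ij} \quad\text{(Jacobian matrix)}.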

Gradient section needs work!

The section on the Gradient refers to "a tensor field of order n" although the meaning of "order n" for a tensor field is not explained in this article and is not explained in the linked article Tensor field, either.

The section discusses the case of a space of dimension 3 but then suddenly starts talking about tensor fields of order n. Why? 2600:1700:E1C0:F340:ED8B:9E01:6222:F4CC (talk) 19:48, 20 January 2019 (UTC)[reply]
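For context (an added note, using the convention that seems intended in the article): a tensor field of order n has n indices, so order 0 is a scalar field, order 1 a vector field, order 2 a dyadic field, and the gradient raises the order by one:

(\nabla\mathbf{T})_{i\,j_1\cdots j_n} = \partial_i\,T_{j_1\cdots j_n}.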

Row vs. Column Vectors

This page was very confusing because it uses row vectors for A and B, etc., rather than the column vectors used in every math textbook I have ever read. It also seems to do matrix multiplication in reverse, so to speak.

For example, there is one place where A is multiplied by a rectangular (i.e., with more than one row and more than one column) Jacobian matrix J. I had assumed A was an (n x 1) column vector, and matrix multiplication was defined in the normal way. For this to work, J would have to be a (1 x m) matrix, that is, a row vector, not a rectangular matrix.

The only way any of this makes sense is if we think of everything in reverse of the conventions the rest of the world seems to use. Or, let A be a column vector, put it on the right of J, and use the normal matrix multiplication conventions.

I cannot fix it because I came here to learn the formulas, so I do not know them well enough to rewrite them. — Preceding unsigned comment added by 146.142.1.10 (talk) 21:55, 29 January 2020 (UTC)[reply]

I got tripped up by the exact same thing. You're right, this is very confusing. In the section "Gradient" where the Jacobian is first introduced, it is explicitly stated that A is a row vector.
The linked university script writes out a full Jacobian as "grad u" and later defines the gradient of a dot product as "grad(u·v) = (grad u)^T v + (grad v)^T u". That notation seems to make sense if you interpret the vectors as column vectors and do matrix multiplication in the usual way. It is also consistent with what you get if you first write out the scalar product u·v as a sum (u1·v1 + u2·v2 + ...) and then apply the nabla operator to it. 198.48.131.231 (talk) 21:39, 1 April 2020 (UTC)[reply]
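For illustration (an added sketch using column vectors and the Jacobian convention (J_u)_{ij} = ∂u_i/∂x_j, consistent with the linked notes):

\nabla(\mathbf{u}\cdot\mathbf{v}) = \mathbf{J}_{\mathbf{u}}^{\mathsf T}\,\mathbf{v} + \mathbf{J}_{\mathbf{v}}^{\mathsf T}\,\mathbf{u},
\qquad\text{i.e.}\qquad
[\nabla(\mathbf{u}\cdot\mathbf{v})]_i = \frac{\partial u_j}{\partial x_i}\,v_j + \frac{\partial v_j}{\partial x_i}\,u_j,

with ordinary matrix–vector multiplication and no row vectors needed.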

Inconsistent formula spacing

Has anyone else noticed the strange and inconsistent LaTeX spacing used for some of the identities? For example, the use of \nabla {\cdot} \psi in the operator notation section's special notation subsection, compared to the use of \nabla \cdot \psi in the summary section's second derivative subsection. Is there a reason for this? CoronalMassAffection (talk) 03:02, 10 June 2021 (UTC)[reply]
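For context (an added note on the TeX behaviour): wrapping \cdot in braces turns it from a binary operator into an ordinary atom, which removes the medium spaces TeX normally puts around a binary operator, so the two spellings render with visibly different spacing:

\nabla \cdot \psi     % \cdot is a binary operator: medium spacing on both sides of the dot
\nabla {\cdot} \psi   % {\cdot} is an ordinary atom: the dot is set tight against its neighbours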

Plain English Introduction Needed

This Wikipedia entry is absurdly distant from Plain English. There are dozens of articles elsewhere that explain this subject so that non-experts can understand it. DDilworth (talk) 16:33, 27 May 2024 (UTC)[reply]

Alleged recently discovered identity

The newly added identity is merely Stokes' theorem for dyads. The source cited does not state that it is recently discovered and implies the opposite. By the logic of its inclusion we should also add the divergence theorem for dyads, etc., but we would then have a proliferation of special cases of tensor identities without the tensor identities themselves. It would be better to have the general tensor identities and omit the special cases, but that would make the article more about tensor identities than vector identities. It is best, therefore, not to add such special cases. Zophar (talk) 16:56, 24 August 2024 (UTC)[reply]
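For reference (an added note; the dyadic curl is taken here with the convention (∇×T)_{ij} = ε_{ikl} ∂_k T_{lj}), Stokes' theorem for a dyadic field T reads

\oint_C d\mathbf{r}\cdot\mathbf{T} = \int_S d\mathbf{S}\cdot(\nabla\times\mathbf{T}),

which is consistent with the step ∫C dr⋅(AB) = ∫S dS⋅(∇×(AB)) used in the derivation reproduced in the section below.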

Example of Feynman method

The argument of Page and Adams in their derivation of Eq. (18-3) (see section Curve–surface integrals and note therein) may be reconstructed as follows. Using the algebraic identity A×(B×C) = (C×B)×A = C⋅(AB) − C(A⋅B), Stokes' theorem, and a related integral theorem, we have, ∫C A×(B×dr) = ∫C {dr⋅(AB) − dr(A⋅B)} = ∫S {dS⋅(∇×(AB)) − dS×(∇(A⋅B))} = ∫S {(dS×∇)⋅(AB) − (dS×∇)(A⋅B)} = ∫S {(dS×∇)×B}×A = ∫S {dS⋅(B∇) − dS(B⋅∇)}×A = ∫S {[∇×(AB)]⋅dS + [∇⋅(BA)]×dS} = ∫S [∇×(AB)]⋅dS + ∫S [∇⋅(BA)]×dS. In the course of the argument we have derived the algebraic identity (C×D)⋅(AB) − (C×D)(A⋅B) = [D×(AB)]⋅C + [D⋅(BA)]×C, which may be used with the substitution method (see section Special notations) as an alternative to the Feynman method. Zophar (talk) 18:51, 15 October 2024 (UTC)[reply]
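For readability (an added restatement of the end result of the chain above, with the dyadic products AB and BA as written there):

\oint_C \mathbf{A}\times(\mathbf{B}\times d\mathbf{r}) = \int_S [\nabla\times(\mathbf{A}\mathbf{B})]\cdot d\mathbf{S} + \int_S [\nabla\cdot(\mathbf{B}\mathbf{A})]\times d\mathbf{S}.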