Talk:Y′UV
I made a minor change to the wording of how chroma signals are made from the U and V channels, as the previous wording implied that U and V are directly added when they're actually conveyed separately on the color subcarrier.
U+V = C ?
Hi, first of all, what do U and V mean? Then, I don't understand how U and V are mapped on the u and v color map. Do the x- and y-axes equal voltages that are on the U and V channels? For example, would +0.3 volts on the V channel and -0.2 volts on the U channel equal a brownish color?
I really have problems with the following sentence:
"For instance if you amplitude-modulate the U and V signals onto quadrature phases of a subcarrier you end up with a single signal called C, for chroma, which can then make the YC signal that is S-Video."
What does quadrature blahblah mean? Aren't the U and V signals totally destroyed when they are mixed?
Thank you so much for all the effort you put into this article, --Abdull 21:45, 17 Jan 2005 (UTC)
- I don't really understand this myself, but the following pages may help:
- Chrominance Colorburst Quadrature amplitude modulation Plugwash 23:30, 18 Feb 2005 (UTC)
- The U and V are mapped by the equations. The R, G, & B values are normalized to a range [0, 1]. So, for example, (R, G, B) = (1, 0, 0) would be red and (1, 1, 1) would be white. Using the equations in the article you can see Y, U, and V are on different ranges. The values have no direct relationship to voltages (that's for the physical implementation to deal with as this is just the color space).
- Abdull, you need to understand this "quadrature blahblah" before you can understand that information is not lost. It's the process of converting two real signals, say U and V, into a complex signal C = U + j*V. So the equation in the header here is missing the imaginary unit: C = U+jV, not C = U+V. Cburnett 00:41, 19 Feb 2005 (UTC)
I don't know if this will help or hinder, but I find it helpful to think of complex numbers as simply pairs of numbers. It is elegant mathematically to convert the two real numbers a and b into the complex number C = a + j.b but doesn't really change anything, except to make some kinds of formula quicker to write. For example, you can add two complex numbers together and it is just the same as adding together the separate numbers in each pair. Consider
C1 = a1 + j.b1
C2 = a2 + j.b2
C1 + C2 = a1 + j.b1 + a2 + j.b2 = (a1 + a2) + j.(b1 + b2)
But take care, if you are new to complex mathematics, not to generalise. You can add and subtract complex numbers in this way, and you can multiply or divide by a single constant. But multiplying two complex numbers is NOT the same as just multiplying their elements.
If I read this quadrature stuff right, the two signals are mixed in such a way that they can be separated (because they are at 90-degree angles?). The complex notation is just a convenient way of representing this combined information with a single item in a formula. Notinasnaid 10:27, 19 Feb 2005 (UTC)
- That's right. Nikevich (talk) 18:52, 23 May 2009 (UTC)
If complex numbers are hard to understand (I think the odd aspects tend to be over-emphasized to novices), you don't need them to understand how color is transmitted in analog color TV. The subcarrier is split into two sinusoids (sine-shaped waves) with a 90-degree phase difference. (You could call them sine and cosine without being too far off.) In NTSC analog, the I and Q signals are similar to U and V. In the NTSC encoder, I and Q are zero if the image is black and white; they can be positive or negative voltages depending upon the color they represent.
One of those sinusoids is modulated by I, and the other, at 90 degrees, by Q. What's nice is that I and Q don't interfere with each other in this process. The amplitude at the output of each modulator is proportional to the magnitude of I (or Q). If I (or Q) goes negative, the output is inverted. If the image is black and white, the modulator outputs are ideally zero.
The outputs of these two modulators still differ in phase by 90 degrees. They are added, and the result is a single sinusoid with a phase that represents the hue and an amplitude that represents the saturation. (By all means, please do learn what "hue" and "saturation" mean! Hue is probably better known – red, orange, yellow, etc. are different hues. Pale pink is an unsaturated red. A hazy, cloudless sky that is pale blue is an unsaturated blue. Note that brightness, Y, luminance, and value (the artist's term) all mean about the same thing – how much light the color sends to your eye.)
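To make the quadrature idea above concrete, here is a small numerical sketch in C (my own illustration, not taken from the article or any standard): U and V each modulate one of two carriers 90 degrees apart, the products are summed into a single chroma signal, and multiplying that signal by each carrier again and averaging over whole cycles recovers U and V separately, because the cross terms average to zero.

#include <math.h>
#include <stdio.h>

/* Toy quadrature modulation/demodulation sketch (illustrative only):
   chroma(t) = U*sin(wt) + V*cos(wt) is a single combined signal, yet
   synchronous detection with sin(wt) and cos(wt) recovers U and V. */
int main(void)
{
    const double PI = 3.14159265358979323846;
    const double U = -0.2, V = 0.3;   /* the two baseband values to carry */
    const int N = 1000;               /* samples over one carrier cycle   */
    double sum_u = 0.0, sum_v = 0.0;
    int n;

    for (n = 0; n < N; n++) {
        double wt = 2.0 * PI * n / N;
        double chroma = U * sin(wt) + V * cos(wt);  /* modulate and add  */
        sum_u += chroma * sin(wt);                  /* detect the U axis */
        sum_v += chroma * cos(wt);                  /* detect the V axis */
    }
    /* Each average equals half the original amplitude, so scale by 2. */
    printf("recovered U = %.3f, V = %.3f\n", 2.0 * sum_u / N, 2.0 * sum_v / N);
    return 0;
}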
For some unknown reason, the author apparently ignored the term "color difference"; NTSC used that term often, and U and V are still color difference signals. By all means, study that multi-panel image of a barn. The color-difference panes should render as a uniform, featureless gray, ideally, if saturation is reduced to zero, or if they are photographed with black and white film. Although they are "bright" enough to see (for practical reasons), they do not carry any brightness information themselves! They only show the color deviation from neutral gray (or dark or light gray).
As to digital TV, the same general ideas hold, although comparatively narrowband chroma has been left behind, it seems. Modulating two sinusoids 90 degrees apart is an analog technique, though; digital TV doesn't need a color subcarrier.
Btw, the color burst in NTSC is yellow-green, I'm fairly sure! It has to have a hue, after all.
HTH, a bit! Nikevich (talk) 18:52, 23 May 2009 (UTC)
- Thank you all for your great help. Right now I have the idea that the colors red, green and blue all lie 120° apart from each other on the U-V color plane. Am I right? Well, my problem is to understand on what principle this U-V color plane is made up... why is reddish color in the upper region, why is a purple tint in the right region, etc.? There must be some arguable logic behind it. Does YUV always have its roots in RGB color space (that is, do you always need RGB before you can give YUV values)?
As to the image, please realize that color space is three-dimensional, and that this is only one plane of many (as many as the number of bits for the third axis defines...) in U,V color space. (Note the absence of yellow, for instance!) Nikevich (talk) 18:52, 23 May 2009 (UTC)
- And still, what do U and V stand for? Ultra and Vulvar? Thanks, --Abdull 13:14, 23 Mar 2005 (UTC)
- U, V and W are secondary coordinate identifiers, whereas X, Y and Z are the primary ones. Maybe they used variables with the names U and V for color because X and Y were already taken for the pixel location in the image being edited? --Zom-B 9:07, 8 Sep 2006 (UTC)
Essentially, I'd say that U, V, and W are simply the next letters of the Roman alphabet, starting from the end. X, Y (and Z, but I don't know, or recall, what Z is...) are already assigned, so, working backwards, U, V, and W are the next available letters. Nikevich (talk) 18:52, 23 May 2009 (UTC)
Exact numbers
Are the 3 decimal place figures in the matrices and formulae of the article exact, or can they be written completely accurately in a surd/trigonometrical/whatever form? Just curious where they came from... – drw25 (talk) 15:37, 14 July 2005 (UTC)
- I don't know the details in this case, but I think these are most likely to be first generation numbers - not derived numbers. Some people picked them to get the results they wanted. So in that sense they are probably exact. The exotic numbers in Lab are exact in that sense. Notinasnaid 16:20, 14 July 2005 (UTC)
- The numbers are from "Rec. 601". The document was originally known as "CCIR Rec. 601". It's now ITU-R BT.601; the current version is BT.601-5. So, yes, they're just as precise as they are in that document. NikolasCo 10:51, 19 March 2006 (UTC)
Code sample
I removed the following from the page:
Below is a function which converts RGB565 into YUV:
void RGB565ToYCbCr(U16 *pBuf, int width, int height)
{
    U8 r1, g1, b1, r2, g2, b2, y1, cb1, cr1, y2, cb2, cr2;
    int i, j, nIn, val;
    for (i = 0; i < height; i++) {
        nIn = i * width;
        for (j = 0; j < width; j += 2) {
            b1 = (U8)((pBuf[nIn+j] & 0x001f) << 3);
            g1 = (U8)((pBuf[nIn+j] & 0x07e0) >> 3);
            r1 = (U8)((pBuf[nIn+j] & 0xf800) >> 8);
            val = (int)((77*r1 + 150*g1 + 29*b1) / 256);
            y1 = ((val > 255) ? 255 : ((val < 0) ? 0 : val));
            val = (int)(((-44*r1 - 87*g1 + 131*b1) / 256) + 128);
            cb1 = ((val > 255) ? 255 : ((val < 0) ? 0 : val));
            val = (int)(((131*r1 - 110*g1 - 21*b1) / 256) + 128);
            cr1 = ((val > 255) ? 255 : ((val < 0) ? 0 : val));
            b2 = (U8)((pBuf[nIn+j+1] & 0x001f) << 3);
            g2 = (U8)((pBuf[nIn+j+1] & 0x07e0) >> 3);
            r2 = (U8)((pBuf[nIn+j+1] & 0xf800) >> 8);
            val = (int)((77*r2 + 150*g2 + 29*b2) / 256);
            y2 = ((val > 255) ? 255 : ((val < 0) ? 0 : val));
            val = (int)(((-44*r2 - 87*g2 + 131*b2) / 256) + 128);
            cb2 = ((val > 255) ? 255 : ((val < 0) ? 0 : val));
            val = (int)(((131*r2 - 110*g2 - 21*b2) / 256) + 128);
            cr2 = ((val > 255) ? 255 : ((val < 0) ? 0 : val));
            pBuf[nIn+j]   = (U16)(y1 << 8 | ((cb1 + cb2) / 2));
            pBuf[nIn+j+1] = (U16)(y2 << 8 | ((cr1 + cr2) / 2));
        }
    }
}
I removed it for the following reasons:
1. It is not formatted (yes I could fix that)
2. While wikipedia does have algorithms, there doesn't seem to be any great amount of completed program code. Does it belong here?
3. Wasn't in the right part of the article.
4. RGB565 should be defined, linked etc. As it is a representation rather than a color space, it isn't likely to be a familiar term even to color scientists, unless they happen to work with this format.
5. The code is not self-contained; it relies on external definitions, not included, like U8 and U16.
Comments?
Notinasnaid 08:06, 10 August 2005 (UTC)
I've formatted it by adding spaces at the start of each line (which turn off normal line wrapping, and force a special raw text display format). I agree: Wikipedia articles should not contain code unless they are about algorithms. An equation is more appropriate here. -- The Anome 08:13, August 10, 2005 (UTC)
Can you explain to me what language this code example (fixed-point approximation) uses? If this is C/C++, what is the := operator? If this is Pascal, what is the >> operator?
Pekka Lehtikoski: The language is C; please refer to "The C Programming Language, Kernighan & Ritchie, ISBN 0-13-110370-9". This kind of nit-picking will take us nowhere; at least it will not make wikipedia better. As far as I can tell, this is the best we've got, so let's publish it until and unless someone contributes an improved or new version on the topic. About the code: I see no problem in including code samples within wikipedia, maybe not in the main article, but linked to it. I would think that this would make wikipedia more useful to programmers, and would not hurt anyone else.
I think, in any case, the article needs reformatting - at least. There should be either fragments of pseudocode or mathematics, but not a mashup of both. The use of "+=" or ">>" in mathematical formulae is neither pleasing to the eye nor does it follow any convention. Also, I don't consider using \times for notating multiplication appropriate if matrix notation is used in the same article. Either the target audience is general (use \times) or mathematical (use \cdot, matrix notation, whatever). 137.226.196.26 (talk) 13:19, 30 March 2009 (UTC)
Oh, dear...
When uploading the original image for this article, I absolutely forgot to mention that I actually created it from scratch. I am going to re-upload it and properly licence it this time. Denelson83 04:42, 29 October 2005 (UTC)
YUV to RGB image misrepresenting the transformation
Let A denote the 3x3 transformation matrix from RGB-space to YUV-space. While it is true that each in-range RGB vector transforms into an in-range YUV vector using A, it is _not_ the case that every in-range YUV vector (as stated in the text: 0<=Y<=1, -0.436<=U<=0.436, -0.615<=V<=0.615) has a valid RGB vector when it is transformed by inv(A). The point is that the picture for this article presents a whole bunch of transformed in-range YUV vectors that do not map to the RGB space. These are located around the edge of the picture. For clarity, shouldn't non-valid RGB vectors be left out of that image?
Example: y = [0.5, -0.4, -0.4]' (it's on the image) transforms to (approx.): r = [0.044, 0.89, -0.31]' (which is a non-valid RGB vector)
--81.172.252.164 14:47, 1 March 2006 (UTC)
- The color range within the UV color space is actually a hexagonal shape, which doesn't fill the (-0.5,-0.5)-(0.5,0.5) square but touches the inner sides of the (-0.436,-0.615)-(0.436,0.615) square. And by the way, do I need to remind you that this is an analog color space? When tuning things on a TV like color and/or brightness, the internal voltages might indeed drop below 0 (depends on the circuitry), and only at the very last moment, when the voltage enters the tube, are the negative values clipped (actually, negative values just don't manipulate the electron beam, just as if the voltage were 0). --Zom-B 10:05, 8 Sep 2006 (UTC)
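A quick way to check the out-of-range example above numerically: a small sketch (mine, using the commonly quoted approximate inverse coefficients R = Y + 1.140V, G = Y - 0.395U - 0.581V, B = Y + 2.032U, not the article's exact inverse matrix):

#include <stdio.h>

/* Sketch: does an in-range Y'UV triple map back into [0, 1] RGB?  Uses the
   commonly quoted approximate inverse coefficients, which are close enough
   to show the clipping problem discussed above. */
static int yuv_maps_to_valid_rgb(double y, double u, double v)
{
    double r = y + 1.13983 * v;
    double g = y - 0.39465 * u - 0.58060 * v;
    double b = y + 2.03211 * u;
    printf("Y=%.3f U=%.3f V=%.3f -> R=%.3f G=%.3f B=%.3f\n", y, u, v, r, g, b);
    return r >= 0 && r <= 1 && g >= 0 && g <= 1 && b >= 0 && b <= 1;
}

int main(void)
{
    /* The example given above: Y'UV is in range, but B comes out negative. */
    printf("valid RGB? %s\n", yuv_maps_to_valid_rgb(0.5, -0.4, -0.4) ? "yes" : "no");
    return 0;
}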
color
The article says,
standards such as NTSC reduce the amount of data consumed by the chrominance channels considerably, leaving the eye to extrapolate much of the color. NTSC saves only 11% of the original blue and 30% of the red. The green information is usually preserved in the Y channel. Therefore, the resulting U and V signals can be substantially compressed.
What does it mean to save X% of the original color? What is the source of this information? The article on NTSC doesn't even say this.
- In the YIQ article, it says that broadcast NTSC limits I to 1.3 MHz, Q to 0.4 MHz, and Y to 4 MHz. That's where the percentages come from, though it's rather simplistic to map YIQ directly to RGB. Having less bandwidth is like having a smaller number of colors, if I understand correctly. - mako 00:41, 24 March 2006 (UTC)
- Not necessarily a smaller number of colors. It could be a smaller number of chroma pixels per field, which is indeed an aspect of the NTSC format.
- Your phrase "rather simplistic" is a euphemism for "inaccurate".
I was really distressed by referring to narrowband chrominance as "compression". As I understand compression, it's almost always a different process from simple low-pass filtering. That's why I edited that section. I think the author doesn't know the history of developing compatible color, but I could be off-base. (Just now, I haven't yet read the article (or section) on Y'IQ.) Considering the capabilities of NTSC receivers, I and Q did provide enough bandwidth to provide a good color image, but true I and Q demodulation was rare, if ever used after the RCA CT-100 receiver. It was somewhat more expensive, and "high video fidelity" hadn't caught on. Nikevich (talk) 19:04, 23 May 2009 (UTC)
Comment: From my TV days it's better to express the TV color signal as Hue (the color) and Saturation (the amount of color). The 'compression' comes from the fact that the color carrier occupies a narrower frequency spectrum than the luminance signal. (Reduced to mimic how the human eye perceives things.) In the time domain, the color information is a single-frequency sine wave added to the luminance signal (care taken not to violate transmission standards). The height of the color-frequency sine wave corresponds to the saturation, while the phase of the sine wave (relative to the reference 'color burst') gives the hue. As a side note, the Hue and Saturation are in polar coordinates, not in XY coordinates, as any color vectorscope's display will show. I'll try and clean up with references from my Tektronix TV Measurements book and pictures someday, maybe.
Fixed point approximation
I can't understand how the given formula for fixed point approximation can work. Unless I'm missing something, it seems plain wrong to me. It is not explained what range is expected for RGB components and what range the YUV result would have. I see in the history the formula has already been removed and readded, please double check it. I can figure out one myself, and post it if needed, but maybe it's worth asking where the one currently in the article came from. SalvoIsaja 21:04, 30 April 2006 (UTC)
- I have a working code snippet for the inverse transformation (YUV->RGB).
/* YUV->RGB conversion */
/* According to Rec. BT.601 */
void
convert(unsigned char y, unsigned char u, unsigned char v,
unsigned char *r, unsigned char *g, unsigned char *b)
{
int compo, y1, u1, v1;
/* Remove offset from components */
y1 = (int) y - 16;
u1 = (int) u - 128;
v1 = (int) v - 128;
/* Red component conversion and clipping */
compo = (298 * y1 + 409 * v1) >> 8;
*r = (unsigned char) ((compo < 0) ? 0: ((compo > 255) ? 255: compo));
/* Green component conversion and clipping */
compo = (298 * y1 - 100 * u1 - 208 * v1) >> 8;
*g = (unsigned char) ((compo < 0) ? 0: ((compo > 255) ? 255: compo));
/* Blue component conversion and clipping */
compo = (298 * y1 + 516 * u1) >> 8;
*b = (unsigned char) ((compo < 0) ? 0: ((compo > 255) ? 255: compo));
}
Don't know if it can be inserted in the main article. Think about it. --Cantalamessa 14:35, 30 May 2006 (UTC)
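For anyone who wants to try the snippet above: a minimal, hypothetical usage wrapper (the main() and the sample values are my own, not part of Cantalamessa's code); studio-range white (235, 128, 128) should come out with R, G and B very close to 255.

#include <stdio.h>

/* Declaration of the convert() function posted above. */
void convert(unsigned char y, unsigned char u, unsigned char v,
             unsigned char *r, unsigned char *g, unsigned char *b);

int main(void)
{
    unsigned char r, g, b;

    convert(235, 128, 128, &r, &g, &b);   /* studio-range white           */
    printf("white -> R=%u G=%u B=%u\n", r, g, b);

    convert(126, 128, 128, &r, &g, &b);   /* mid gray, expect values ~128 */
    printf("gray  -> R=%u G=%u B=%u\n", r, g, b);

    return 0;
}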
- Discussions I've seen in the past seemed to suggest that people don't want to see sample code in article, preferring that articles instead give algorithms and formulae. (By the way, your code also does not give expected range of inputs or outputs; it is NOT the case that 0 to 255 is a universal representation for color values; worth adding that in a comment. The formulae on the page are open to the same criticism, but they do at least use the 0 to 1 range which color scientists use). Notinasnaid 17:17, 30 May 2006 (UTC)
Maybe the code is not intended to please 'color scientists', but I can guarantee that if you download a YUV sequence from the internet, this kind of demapping is what makes the sequence display on the screen with decent colors. --Cantalamessa 15:03, 31 May 2006 (UTC)
- Whether it pleases colour scientists or not, the key point is that you don't define the range of input and output values. Notinasnaid 17:21, 31 May 2006 (UTC)
- By the way I was mistaken to say the article doesn't define this. It says "Here, R, G and B are assumed to range from 0 to 1, with 0 representing the minimum intensity and 1 the maximum. Y is in range 0 to 1, U is in range -0.436 to 0.436 and V is in range -0.615 to 0.615." Notinasnaid 18:35, 31 May 2006 (UTC)
Notinasnaid, please don't misunderstand me: I don't want to struggle against you at all! For the input and output ranges, they are defined in the BT.601 recommendation, as per the second line of comment (YUV means BT.601 in almost 90% of the downloadable sequences). Major details to be found in YCbCr. --Cantalamessa 23:50, 31 May 2006 (UTC)
Ah, this is interesting stuff. According to[1], BT.601 is based on YCbCr, which as this article says '...is sometimes inaccurately called "YUV".' Notinasnaid 08:31, 1 June 2006 (UTC)
The problem with "YUV" is that it is a term that is used incorrectly pretty much by everyone. Y'UV is the analog system used for PAL video similar to Y'IQ for NTSC. Things like YUV 4:4:4, and YUV 4:2:0 are incorrect terms. YUV in this case is referring to Y'CbCr digital video. YUV is just being used incorrectly by everyone, and thus causing much confusion. Read [2] for more information.
Pekka Lehtikoski: Most readers of YUV-RGB conversions are programmers, and practical code like Cantalamessa's is most appreciated. It should be in the main article, or linked clearly to it.
Exact numbers
I noticed the original RGB-to-YUV matrix is rounded to 3 decimal places, and the YUV-to-RGB matrix
is the exact matrix-inverse of the first matrix, but displayed using all decimals. As you can see, elements 0,1 and 2,2 are all zeros up to the 4th decimal place. Because of this, I thought maybe these elements ought to have been 0 and some elements in the RGB-to-YUV matrix in the 2nd and 3rd row should be more accurate than 3 decimal places.
--Zom-B 07:45, 8 Sep 2006 (UTC)
- Using a formula slightly different from the one on the YCbCr page I managed to find the exact values of the first matrix.
- In matrix form:
- And the exact inverse:
- --Zom-B 05:25, 9 Sep 2006 (UTC)
I think you mixed up the signs in that last matrix, at least if I compare it to the one above. Shouldn't it rather say:
--Quasimondo (talk) 11:40, 25 June 2010 (UTC)
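Since the matrices above were images and don't reproduce here, a sketch of one way to check the values and signs numerically (my own, assuming the usual definitions Wr = 0.299, Wg = 0.587, Wb = 0.114, Umax = 0.436, Vmax = 0.615, with U = Umax*(B - Y)/(1 - Wb) and V = Vmax*(R - Y)/(1 - Wr)); it builds the forward matrix from those definitions and inverts it, so both can be compared against the rounded values in the article:

#include <stdio.h>

/* Build the RGB->YUV matrix from the usual weighting definitions and invert
   it numerically via cofactors (a sketch, not the article's own derivation). */
static void invert3x3(const double m[3][3], double inv[3][3])
{
    double det =
        m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1]) -
        m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0]) +
        m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    inv[0][0] =  (m[1][1] * m[2][2] - m[1][2] * m[2][1]) / det;
    inv[0][1] = -(m[0][1] * m[2][2] - m[0][2] * m[2][1]) / det;
    inv[0][2] =  (m[0][1] * m[1][2] - m[0][2] * m[1][1]) / det;
    inv[1][0] = -(m[1][0] * m[2][2] - m[1][2] * m[2][0]) / det;
    inv[1][1] =  (m[0][0] * m[2][2] - m[0][2] * m[2][0]) / det;
    inv[1][2] = -(m[0][0] * m[1][2] - m[0][2] * m[1][0]) / det;
    inv[2][0] =  (m[1][0] * m[2][1] - m[1][1] * m[2][0]) / det;
    inv[2][1] = -(m[0][0] * m[2][1] - m[0][1] * m[2][0]) / det;
    inv[2][2] =  (m[0][0] * m[1][1] - m[0][1] * m[1][0]) / det;
}

int main(void)
{
    const double Wr = 0.299, Wg = 0.587, Wb = 0.114;
    const double Umax = 0.436, Vmax = 0.615;
    double fwd[3][3] = {
        { Wr,                    Wg,                    Wb                    },
        { -Umax * Wr / (1 - Wb), -Umax * Wg / (1 - Wb), Umax                  },
        { Vmax,                  -Vmax * Wg / (1 - Wr), -Vmax * Wb / (1 - Wr) },
    };
    double inv[3][3];
    int i, j;

    invert3x3(fwd, inv);
    printf("RGB -> YUV:\n");
    for (i = 0; i < 3; i++) {
        for (j = 0; j < 3; j++)
            printf(" % .6f", fwd[i][j]);
        printf("\n");
    }
    printf("YUV -> RGB (inverse):\n");
    for (i = 0; i < 3; i++) {
        for (j = 0; j < 3; j++)
            printf(" % .6f", inv[i][j]);
        printf("\n");
    }
    return 0;
}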
Y'CbCr information should go on Y'CbCr page
See Poynton's note about the difference. [3] If I wasn't lazy myself, I would revamp this article. Glennchan 06:18, 10 December 2006 (UTC)
Any relation to UV mapping?
Do UV color values have any corelation to UV mapping on 3D surfaces? --24.249.108.133 16:49, 1 March 2007 (UTC)
Difference between YUV and Y'CbCr
The difference is very poorly explained. The conversion matrix coefficients given in the YUV conversion equations here are exactly the same as the equations given in the Y'CbCr article for the (601) variation. The explanation says they have different scaling factors and that YUV is analog. That sounds weird; surely digitising an analog signal does not change anything about the fundamental signal content. —Preceding unsigned comment added by 61.8.12.133 (talk • contribs)
- The analog and digital scale factors are different because different formats are used to encode the signal (e.g. NTSC encoding may be used on the analog signal). In digital, there are many flavours of Y'CbCr with different scale factors... in rec. 601 y'cbcr, the chroma is in the 16-240 range while in JPEG it's something else.Glennchan (talk) 00:53, 31 October 2008 (UTC)
Example numbers
The RGB article lists a bunch of example values for white, black, red, blue, green, yellow, and so forth. There is no such entry in the YUV article. Can someone add these values to make it less abstract and theoretical? I would do it myself but I don't trust my ability to calculate accurate values. I came here to find what the YUV value for white is, and didn't see any information. -Rolypolyman (talk) 21:43, 16 January 2008 (UTC)
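Not authoritative, but here is what the classic analog coefficients (Y = 0.299R + 0.587G + 0.114B, U = 0.492(B - Y), V = 0.877(R - Y), with R, G, B in [0, 1]) give for a few reference colors; white comes out as (Y, U, V) = (1, 0, 0) and pure red as roughly (0.299, -0.147, 0.615). A small sketch to compute them:

#include <stdio.h>

/* Sketch (mine, not from the article): Y'UV values for a few reference
   colors, using the classic analog coefficients with R, G, B in [0, 1]. */
int main(void)
{
    const char  *name[]  = { "black", "white", "red", "green", "blue", "yellow" };
    const double rgb[][3] = {
        { 0, 0, 0 }, { 1, 1, 1 }, { 1, 0, 0 },
        { 0, 1, 0 }, { 0, 0, 1 }, { 1, 1, 0 },
    };
    int i;

    for (i = 0; i < 6; i++) {
        double r = rgb[i][0], g = rgb[i][1], b = rgb[i][2];
        double y = 0.299 * r + 0.587 * g + 0.114 * b;
        double u = 0.492 * (b - y);
        double v = 0.877 * (r - y);
        printf("%-6s  Y=%.3f  U=%+.3f  V=%+.3f\n", name[i], y, u, v);
    }
    return 0;   /* e.g. white -> Y=1.000, U=+0.000, V=+0.000 */
}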
YUV to RGB formula not working properly
I have been using the formula:
C = Y - 16
D = U - 128
E = V - 128
R = clip(( 298 * C + 409 * E + 128) >> 8)
G = clip(( 298 * C - 100 * D - 208 * E + 128) >> 8)
B = clip(( 298 * C + 516 * D + 128) >> 8)
for calculating RGB from YUV. In order to make it work properly I have to change it to:
C = Y + 16
D = U - 128
E = V - 128
Any ideas why this is? Is this just a typing error or is the YUV-frame I use different from the standard specifications somehow?
Y'' in mathematical derivation section
Is Y'' an error for Y'? This article used to have Y and was then changed to Y'. In the old article the variables were all Y, in the words and in the matrices. If this is not an error, Y'' should be defined. SpinningSpark 08:20, 25 April 2008 (UTC)
Hidden comment YUV / RGB
I am removing a hidden comment in the article text and moving it here because the article is not the place to discuss errors. With context:
- Y'UV models human perception of color in a different way from the standard RGB model used in computer graphics hardware.
- <hidden>this is wrong; HSL and HSV are just different coordinate systems for RGB: but not as closely as the HSL color model and HSV color model.</hidden>
Requested move
- The following discussion is an archived discussion of the proposal. Please do not modify it. Subsequent comments should be made in a new section on the talk page. No further edits should be made to this section.
The result of the proposal was Move. Every source cited gives "YUV" so I recommend changing the content here as well. —Wknight94 (talk) 10:57, 24 July 2008 (UTC)
Y'UV → YUV — YUV is the more popular usage by a wide margin, even though Y'UV might be the mathematically correct usage according to some editor. — Voidvector (talk) 21:04, 2 July 2008 (UTC)
Survey
- Feel free to state your position on the renaming proposal by beginning a new line in this section with *'''Support''' or *'''Oppose''', then sign your comment with ~~~~. Since polling is not a substitute for discussion, please explain your reasons, taking into account Wikipedia's naming conventions.
- Support. YUV is much more widespread. Cantalamessa (talk) 12:51, 4 July 2008 (UTC)
- Oppose. Most of the information on this page should be moved/merged with Y'CbCr, as that is what it describes. There is no reason why an encyclopedia should promote an erroneous term just because many users do. An encyclopedia should rather strive to use terms consistent with itself, with relevant literature and public standards. Suggested article structure:
- 'yuv' containing information about the "yuv" color file formats, such as yuv422, no matter what the actual colorspace used in them.
- 'Y'UV' containing information about the "Y'UV" colorspace used in analog television (that 1% of the users probably are interested in)
- 'Y'CbCr' containing information about the digital colorspace used in MPEG, h26x, jpg, etc. (that 99% of the users probably are interested in)
- Of course, all three articles should be cross-referenced at the intro and wherever it makes sense. —Preceding unsigned comment added by Knutinh (talk • contribs) 09:27, July 6, 2008
- Support. YUV was originally used with PAL/NTSC television when MPEG was unheard of. Anyone looking for information from that era (still actually in use, though rapidly becoming obsolescent) will type YUV. Y' is not used in any of my 1960s/1970s textbooks. It is only later that the Y'UV terminology arises. YCbCr is related but not identical. SpinningSpark 18:02, 16 July 2008 (UTC)
Discussion
- Any additional comments:
I understand that Y'UV might be the mathematically correct notation, but I think the article should be located at YUV because it is the more widespread usage by a wide margin. --Voidvector (talk) 11:28, 1 July 2008 (UTC)
- Well, it used to be at YUV but someone changed it without discussion. I don't see the name of the article as a big issue because YUV redirects here. What was more of a problem is that Y was replaced in the text with some automatic editor, which made a complete mess of the page. Several other editors have had to work at cleaning this up. What is really missing from the article is an explanation of what Y and Y' actually mean, the relationship between them, and how the terms are used both colloquially and formally. As I see it, it would be perfectly OK to use Y in the article instead of Y' as long as the article made it clear upfront what the term meant and that there was a more formal notation as well. SpinningSpark 19:08, 1 July 2008 (UTC)
- I have proposed a move. I think the article should use "YUV" normally as it is the more widespread usage, and use Y' when describing it in more technical details (such as in the matrix calculations). --Voidvector (talk) 01:19, 4 July 2008 (UTC)
Reply to Knutinh above, first of all YUV is not wrong, it is simply a form of simplification that ignores "primes". In fact based on the current logic, this article should be located at Y′UV (prime symbol is a separate codepoint under Unicode). Quote from Wikipedia:Naming conflict: "Bear in mind that Wikipedia is descriptive, not prescriptive. We cannot declare what a name should be, only what it is." --Voidvector (talk) 07:39, 7 July 2008 (UTC)
- According to the book by Poynton, "YUV" and "Y'UV" are wrong when used for the regular separation of luma and chroma used in digital image/video coding. Moreover, those terms are already occupied by different color codings. When public standards say one thing, wikipedia should not contradict them, no matter how much easier it is to write yuv on a keyboard. My suggestion still stands: use terminology that corresponds to public standards and reference literature. Cross-reference and explain in order to clear up the common misconception. —Preceding unsigned comment added by Knutinh (talk • contribs) 13:56, 28 July 2008 (UTC)
- The above discussion is preserved as an archive of the proposal. Please do not modify it. Subsequent comments should be made in a new section on this talk page. No further edits should be made to this section.
10-bit 4:2:2 in YUV10 or YUV16
All material currently supplied via the EBU is provided as either 10-bit 4:2:2 in YUV10 or YUV16 file format or as 8-bit 4:2:2 material. Isn't that just YUV at 10 and 16 bits? Can we have some info about them here?
—IncidentFlux [ TalkBack | Contributions ] 13:35, 28 October 2008 (UTC)
Image is for Y,R-Y,B-Y not for YUV
The image at the top of the page is not the YUV color space. It is the Y, R-Y, B-Y color space. In YUV the color difference signals are scaled. In the image, the axes are labeled from -.5 to .5, which, if it were used in Y, R-Y, B-Y color space with a luminance of .5, would be correct, because the blue level will be 1 at B-Y=.5 and 0 at B-Y=-.5, and the same with the red. —Preceding unsigned comment added by 75.57.161.130 (talk) 02:23, 31 March 2009 (UTC)
Seems that somebody should correct this, or am I confused? Nikevich (talk) 19:36, 23 May 2009 (UTC)
Historical background and human visual perception
Back before the merger that created the IEEE, the electronics people were the IRE. Their journal, Proceedings of the IRE, Color Television Issue, was a lifetime inspiration to me (I was probably a freshman in high school, but learned electronics early, from the MIT Rad. Lab. series). IIRC, it was the June 1949 issue; just about positive about the year.
Although there was a lot of explanation of the color-subcarrier scheme for carrying chroma in NTSC, that's probably of interest to people who want to learn about NTSC analog (and why a 3-ns time error creates a noticeable hue shift!) (The spoof name was "Never Twice The Same Color", btw; people usually don't get that word "twice".)
If you're a serious student, the quadrature suppressed-carrier modulation scheme is worth learning about; it's still in use, I'm pretty sure, although in a more-developed form. It could be a background for learning about OFDM (and COFDM).
Human perception
Main point, though, is that some articles in that issue explained human color vision very clearly, and made it very evident that chrominance could have much less bandwidth than luminance yet still provide a quite-satisfactory image. As well, it was explained why the axes were chosen as they were, to correspond to human color perception.
Moreover, that issue made the concept of color difference signals (U and V, or I and Q) quite clear. It does take some rethinking! I dare say that, back then, NTSC represented sophisticated engineering. Nikevich (talk) 19:36, 23 May 2009 (UTC)
YUV to YIQ Matrix?
Does anyone here have any proper formulae to convert YUV to YIQ? I know of sites that have matrices to convert YIQ to YUV but not the other way around. WikiPro1981X (talk) 09:00, 12 September 2009 (UTC)
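I can't point to a formal reference on this talk page, but the relationship usually quoted is that (I, Q) are just the (U, V) axes rotated by 33 degrees (Y is unchanged): I = -U*sin(33°) + V*cos(33°), Q = U*cos(33°) + V*sin(33°). That 2x2 matrix is symmetric and orthogonal, so it is its own inverse and the same formula converts in both directions. A sketch under that assumption:

#include <math.h>
#include <stdio.h>

/* Sketch assuming the commonly quoted 33-degree rotation between the
   (U, V) and (I, Q) chroma axes.  Y passes through unchanged, and the
   matrix is its own inverse, so one function handles both directions. */
static void rotate_chroma(double a1, double a2, double *b1, double *b2)
{
    const double ang = 33.0 * 3.14159265358979323846 / 180.0;
    *b1 = -a1 * sin(ang) + a2 * cos(ang);
    *b2 =  a1 * cos(ang) + a2 * sin(ang);
}

int main(void)
{
    double i, q, u, v;

    /* Pure red in Y'UV is roughly (0.299, -0.147, 0.615). */
    rotate_chroma(-0.147, 0.615, &i, &q);
    printf("red:  I=%.3f Q=%.3f\n", i, q);   /* expect about 0.596, 0.211 */

    rotate_chroma(i, q, &u, &v);             /* apply again to go back    */
    printf("back: U=%.3f V=%.3f\n", u, v);
    return 0;
}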
YUYV or UYVY ???
People, I'm quite puzzled about the YUV422 format. The article says that YUV422 represents a macropixel in the format YUYV, but this[4] website says the contrary: a macropixel is in the format UYVY. Additionally, the Aldebaran Nao RoboCup documentation says that the YUV422 format is UYVY, so who's right?
(The Aldebaran documentation[5], unfortunately, is readable only by RoboCup participants, sorry!) —Preceding unsigned comment added by 151.50.50.94 (talk) 22:01, 14 January 2010 (UTC)
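For what it's worth, both packings exist as distinct FourCC formats, which is probably why the sources disagree; a hedged sketch of the byte order usually attributed to each (assuming the common YUY2/YUYV versus UYVY naming -- it depends on which FourCC the camera actually advertises):

/* Four bytes per two-pixel "macropixel"; the two pixels share one U and one V.
   Byte orderings commonly attributed to the two FourCCs: */
struct yuyv_macropixel { unsigned char y0, u0, y1, v0; };  /* YUYV / YUY2 */
struct uyvy_macropixel { unsigned char u0, y0, v0, y1; };  /* UYVY        */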
Further Confusion
YUV and Y'UV were used for a specific analog encoding of color information in television systems, while YCbCr was used for digital encoding of color information suited for video and still-image compression and transmission such as MPEG and JPEG. Today, the term YUV is commonly used in the computer industry to describe file-formats that are encoded using YCbCr.
The article begins with an explanation that says YCbCr is used for digital but that it's called YUV. This corresponds well with the YCbCr article which states: when referring to signals in video or digital form, the term "YUV" mostly means "Y′CbCr"
However, then this article has another description under Confusion with Y'CbCr. This section states that YUV is analogue and YCbCr is digital and that computer textbooks/implementations erroneously use the term YUV where Y'CbCr would be correct.
Right after that, the article describes "getting a digital signal", followed by Converting between Y'UV and RGB. This is very confusing as it seems to be addressing digital video (referencing pixels and FourCC) but uses YUV repeatedly. Is this about analogue-to-digital or about digital methods erroneously called 'YUV'?