Multichannel Television Sound (MTS) is the method of encoding three additional audio channels into the analog 4.5 MHz audio carrier of System M and System N television signals. The system was developed by an industry group known as the Broadcast Television Systems Committee, a parallel to color television's National Television System Committee, which developed the NTSC television standard.
MTS works by adding additional audio signals in otherwise empty portions of the television signal, allowing up to a total of four audio channels, two of which carry the left and right stereo channels. A second audio program (SAP) channel is used to broadcast other languages or radio services, including weather radio, which the viewer can select, typically through a button on the remote control. The fourth channel is a professional channel used for internal purposes by broadcasters and cannot be decoded by common consumer receivers.
Initial work on design and testing of a stereophonic audio system began in 1975 when Telesonics approached Chicago public television station WTTW. WTTW was producing a music show titled Soundstage at that time, and was simulcasting the stereo audio mix on local FM stations. Telesonics offered a way to send the same stereo audio over the existing television signals, thereby removing the need for the FM simulcast.
Telesonics and WTTW formed a working relationship and began developing the system which was similar to FM stereo modulation. Twelve WTTW studio and transmitter engineers added the needed broadcast experience to the relationship. The Telesonics system was tested and refined using the WTTW transmitter facilities on the Sears Tower.
In 1979, WTTW had installed a stereo Grass Valley master control switcher and had added a second audio channel to the microwave STL (Studio Transmitter Link). By that time, WTTW engineers had further developed stereo audio on videotape recorders in their plant, using split audio track heads manufactured to their specifications, outboard record electronics, and Dolby noise reduction that allowed Soundstage to be recorded and electronically edited. In addition, an Ampex MM1100, 24-track audio recorder was also used for music production and mixing. PBS member stations who wished to deliver Soundstage in stereo were provided with a four-track (left, right, vertical drive, and time code) audiotape that could be synced with the video machines in those cities.
During the FCC approval process, several manufacturers applied to the FCC for consideration. Most notably, the Electronic Industries Alliance (EIA) and the Japanese EIA asked to be included in order to represent their members in the testing and specification phases of the approval process. WTTW engineers helped set standards for frequency response, separation, and other uses of the spectrum. They also provided program source material used for the testing and maintained the broadcast chain. A 3M 24-track audio recorder was used to allow the selection of 12 different stereo programs for testing. The Matsushita Quasar TV manufacturing plant and laboratory, just west of Chicago, was used as the site for all testing of the competing systems. Following the testing, several questions were raised about the validity of some of the tests, and a second round of testing began.
WTTW installed a Broadcast Electronics prototype stereo modulator in October 1983 and began full-time broadcasting in stereo at that time using the Telesonics system prior to Federal Communications Commission (FCC) rule-making on the BTSC system. MTS was officially adopted by the FCC on 23 April 1984. Following EIA and FCC recommendations, the BE modulator was modified to meet BTSC specifications, and by August 1984 was in full-time use on WTTW.
Sporadic network transmission of stereo audio began on NBC on July 26, 1984, with The Tonight Show Starring Johnny Carson, although at the time only the network's New York City flagship station, WNBC, had stereo broadcast capability;[1] regular stereo transmission of NBC programs began during early 1985. ABC and CBS followed suit in 1986 and 1987, respectively. FOX was the last network to join in 1987, with the four networks having their entire prime-time schedules in stereo by late 1994 (The WB and UPN launched the following season with their entire line-ups in stereo). One of the first television receiving systems to include BTSC capability was the RCA Dimensia, released in 1984.[2]
From 1985 to 2000, the networks would display the disclaimer "in stereo (where available)" at the beginning of stereo programming, sometimes using marketing tags such as CBS's "StereoSound" to describe their institution of stereo service. Networks in Canada and Mexico, which also used the NTSC video standard, utilized MTS sound when made available.
As a component of the NTSC standard, MTS has not been used in U.S. full-power television broadcasting since the DTV transition on June 12, 2009. It remains in use in LPTV and in analog cable television.[citation needed] All coupon-eligible converter boxes (CECBs) are required to output stereo sound via RCA connectors, but MTS is merely optional for the RF modulator that every CECB contains. The NTIA has stated that MTS was made optional for cost reasons;[3] this may have been due to a belief that MTS still required royalty payments to THAT Corporation, which is no longer true except for some digital implementations.[4]
THAT Corporation created consumer pages on the DTV transition and how the choice of CECB affects MTS, as some boxes' RF outputs are not stereo-compatible and deliver only mono sound.[5]
The original North American television standards provided a significant amount of bandwidth for the audio signal, 0.5 MHz, although the audio signal itself was defined to extend from 50 Hz to 15,000 Hz. This was centered on the audio carrier signal 4.5 MHz above the video signal, and given 25 kHz on either side of the carrier, using only 15 kHz of it.[6]
This meant the lower and upper 0.2475 MHz of the audio channel were unused. Due to the nature of the NTSC color signals added in the 1950s, the upper parts of the color signal pushed into the lower audio sideband. With the audio signal centered within the 0.5 MHz channel, and the lower 0.25 MHz being partially infringed upon by leftover video signal, the upper 0.25 MHz was left largely empty.[6]
MTS worked by adding new signals to the free portion of this upper 0.25 MHz allocation. The original audio signal was left alone and broadcast as it always had been. Under MTS, this is the Main Channel. The actual signal in this channel is constructed by adding together the two stereo channels, producing a signal largely identical to the original monaural signal that can be received on any NTSC television even without stereo circuitry.[6]
A second channel is then added, the Stereo Subchannel, centered at 31.468 kHz and extending 15 kHz on either side. This leaves a small gap between the Main and Stereo signals, into which a pilot signal is placed at 15.734 kHz. This pilot signal is also known as "H", or "1H", and its frequency is locked to the video's horizontal scan rate so that it can be accurately recreated from the video signal using a phase-locked loop. If there is any signal present at the 1H frequency, the television knows a stereo version of the signal is present.[6]
The Stereo Subchannel consists of the same two audio signals, L and R, but mixed out of phase to produce the "L−R" signal, or "difference". This signal is sent at a higher amplitude and encoded using dbx compression to reduce high-frequency noise. To lower total average power, the carrier itself is not sent, which means there is no always-on signal at that frequency. On reception, the receiver uses the video signal to create the Pilot, and then examines that frequency to see if any signal is present. If there is, the difference signal is extracted by filtering the band between 1H and 3H into a separate channel, and the suppressed 2H carrier is re-inserted to demodulate it. This signal is then decompressed from its dbx format and fed, along with the original Main, to a stereo decoder.[6] FM stereo radio works in the same fashion, differing mainly in that the equivalent of the H signal is the 19 kHz pilot rather than 15.734 kHz.
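The matrixing described above can be illustrated with a short numerical sketch. This is a simplified model rather than a conforming BTSC encoder: the sample rate, test tones, and pilot amplitude are arbitrary assumptions, and the dbx companding and pre-emphasis steps are omitted.

```python
import numpy as np

# Simplified model of the MTS baseband described above: a mono-compatible
# sum (Main Channel), a pilot at 1H, and the L-R difference modulated onto a
# suppressed 2H carrier (DSB-SC). dbx companding and pre-emphasis are omitted.

fs = 400_000                    # model sample rate, Hz (arbitrary)
f_h = 4_500_000 / 286           # NTSC horizontal scan rate, about 15.734 kHz
t = np.arange(0, 0.01, 1 / fs)

left = np.sin(2 * np.pi * 440 * t)      # placeholder program audio
right = np.sin(2 * np.pi * 660 * t)

main = (left + right) / 2               # Main Channel: received by mono sets as-is
diff = (left - right) / 2               # content of the Stereo Subchannel

pilot = 0.1 * np.sin(2 * np.pi * f_h * t)        # pilot at 1H signals "stereo present"
subcarrier = np.sin(2 * np.pi * 2 * f_h * t)     # 2H carrier, suppressed in the output
stereo_sub = diff * subcarrier                   # double-sideband, suppressed carrier

# Composite baseband that would frequency-modulate the 4.5 MHz aural carrier.
composite = main + pilot + stereo_sub

# A receiver regenerates the 2H carrier from the pilot, multiplies, and
# low-pass filters to recover diff; then L = main + diff, R = main - diff.
```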
SAP, if present, is centered on 5H, but encodes a monaural signal at lower audio fidelity and amplitude.[6] The PRO signal is likewise encoded, at 6.5H. A signal using all four channels extends only to about half of the available bandwidth in the original audio upper sideband.
The second audio program (SAP) is also part of the standard, providing another language, a video description service like DVS, or a completely separate service such as a campus radio station or weather radio. This sub-carrier is at 5× the horizontal sync frequency and is also dbx-encoded.
A third PRO (professional) channel is provided for internal use by the station, and may carry audio or data. The PRO channel is normally used with electronic news gathering during news broadcasts to talk to a remote location (such as a reporter in the field), which can then talk back to the TV station through the remote link. Specialized receivers for the PRO channel are generally only sold to broadcast professionals. This sub-carrier is at 6.5× the horizontal sync frequency.
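For illustration, the subchannel placements given above can be expressed as multiples of the NTSC horizontal scan rate (fH = 4.5 MHz / 286, about 15.734 kHz); the short calculation below simply tabulates those multiples.

```python
# Subcarrier placement as multiples of the NTSC horizontal scan rate (fH).
f_h = 4_500_000 / 286                      # about 15,734.27 Hz

channels = {
    "Pilot (1H)": 1 * f_h,                 # about 15.73 kHz
    "Stereo subchannel (2H)": 2 * f_h,     # about 31.47 kHz
    "SAP (5H)": 5 * f_h,                   # about 78.67 kHz
    "PRO (6.5H)": 6.5 * f_h,               # about 102.27 kHz
}

for name, freq in channels.items():
    print(f"{name}: {freq / 1000:.2f} kHz")
```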
MTS signals are indicated to the television receiver by adding a 15.734 kHz pilot tone to the signal. The MTS pilot is locked to, or derived from, the horizontal sync signal used to lock the video display. Variations in the phase or frequency of the horizontal sync are therefore transferred to the audio. UHF transmitters in use in 1984 generally introduced significant phase errors into this signal, making the transmission of stereo audio on UHF stations of that time nearly impossible. Later refinements in UHF transmitters minimized these effects and allowed stereo transmission for those stations.
Most FM broadcast receivers are capable of receiving the audio portion of NTSC Channel 6 at 87.75 MHz, but only in monaural. Because the pilot tone frequency at 15.734 kHz is different from that of the ordinary FM band (19 kHz), such radios cannot decode MTS.
In ideal circumstances, MTS stereo outperforms standard VHF FM stereo.[citation needed] In both FM stereo and MTS, the L−R subchannel is double-sideband AM modulated. AM is susceptible to interference and noise, so applying noise reduction to the subchannel improves the SNR over standard FM broadcast. The subchannel information is dbx-encoded to improve the audio, with a typical SNR greater than 50 dB. Because noise reduction is added to the subchannel only, complete mono compatibility is maintained: viewers with mono TV sets hear normal audio, limited only to 15 kHz bandwidth.
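The noise-reduction idea can be sketched with a highly simplified 2:1 block compander. This is not the dbx-TV algorithm, which uses spectral (frequency-weighted) companding with fixed time constants; the toy functions below only show why compressing the difference channel before transmission and expanding it on reception lifts quiet material above the subchannel noise while leaving the mono sum path untouched.

```python
import numpy as np

# Toy 2:1 block compander, for illustration only; not the dbx-TV system.

def rms(x, eps=1e-12):
    return np.sqrt(np.mean(x ** 2) + eps)

def compress_2to1(x):
    """Halve the block's level in dB (quiet blocks are boosted before transmission)."""
    return x / np.sqrt(rms(x))

def expand_1to2(y):
    """Inverse operation at the receiver; also pushes added channel noise down."""
    return y * rms(y)

diff = 0.05 * np.random.randn(48_000)   # quiet L-R block
sent = compress_2to1(diff)              # rides well above the subchannel noise floor
restored = expand_1to2(sent)            # original level restored at the receiver

print(np.allclose(diff, restored))      # True: companding is transparent end to end
```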
The original specifications called for a brick wall elliptical filter in each of the audio channels prior to encoding. The cutoff frequency of this filter was 15 kHz to prevent any horizontal sync interference (15.734 kHz) from being encoded in the audio. Manufacturers of modulators, however, used lower cutoff filters as they saw fit, also reducing the cost of audio filters. Typically, they chose 14 kHz although some used filters as low as 12.5 kHz. The elliptical filter was chosen for having the greatest bandwidth with the lowest phase distortion at the cutoff frequency. In comparison, standard FM modulators filter the audio at slightly higher frequencies but still must protect the 19 kHz pilot signal. The filter used during EIA/FCC testing had a characteristic that was −60 dB at 15.5 kHz. As transformer audio coupling was common at that time, the lower frequency limit was set to 50 Hz although modulators without transformer inputs were flat down to at least 20 Hz.
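A modern digital stand-in for such a filter can be sketched with SciPy's elliptic design routine. The order, passband ripple, and sample rate below are illustrative assumptions, not values from the specification; only the 15 kHz cutoff and the roughly −60 dB target near the horizontal-sync frequency come from the text above.

```python
import numpy as np
from scipy import signal

# Elliptic (Cauer) low-pass as a stand-in for the 15 kHz "brick wall" audio filter.
fs = 48_000        # sample rate, Hz (assumed modern digital stand-in)
cutoff = 15_000    # passband edge, Hz (from the specification)
order, rp, rs = 10, 0.1, 60   # order and ripple figures are assumptions

sos = signal.ellip(order, rp, rs, cutoff, btype="low", fs=fs, output="sos")

# Check how much the design attenuates around the 15.734 kHz horizontal-sync rate.
w, h = signal.sosfreqz(sos, worN=8192, fs=fs)
idx = np.argmin(np.abs(w - 15_734))
print(f"Attenuation at 15.734 kHz: {20 * np.log10(np.abs(h[idx]) + 1e-12):.1f} dB")
```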
Typical separation was better than 35 dB. However, level matching between channels was essential to achieve this specification: left and right audio levels needed to be matched to within 0.1 dB to achieve the stated separation.
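The sensitivity to level matching can be estimated with a simple sum/difference matrix model (an assumption: real decoders also lose separation to phase errors and filter mismatch, so this is an upper bound). If the difference path's gain is off by a small amount relative to the sum path, part of the left signal leaks into the decoded right channel.

```python
import math

# Separation vs. gain mismatch in an idealized L+R / L-R matrix decoder.
def separation_db(mismatch_db: float) -> float:
    g = 10 ** (-abs(mismatch_db) / 20)          # relative gain of the L-R path
    return 20 * math.log10((1 + g) / (1 - g))   # wanted-to-leakage ratio after dematrixing

for mismatch in (0.05, 0.1, 0.3, 1.0):
    print(f"{mismatch:.2f} dB mismatch -> about {separation_db(mismatch):.1f} dB separation")
```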
Maintaining the phase stability of the horizontal sync is essential to good audio in the decode process. During transmission, the phase of the horizontal sync could vary with picture and chroma levels. ICPM (Incidental Carrier Phase Modulation), a measure of transmitted phase stability, needs to be less than 4.5% for best audio subchannel decoding. This was more of a problem with UHF transmitters of the day: multi-cavity klystron RF amplifiers of that time typically had an ICPM above 7%, making UHF transmission of MTS stereo impossible. Later UHF transmitter designs improved ICPM performance and allowed MTS stereo transmission.
Because of the use of dbx companding, every TV device that decoded MTS originally required the payment of royalties—first to dbx, Inc., then to THAT Corporation which was spun off from dbx in 1989 and acquired its MTS patents in 1994; however, those patents expired worldwide in 2004.[7] Though THAT now owns some patents related to digital implementations of MTS, a letter from THAT to the National Telecommunications and Information Administration in 2007 confirms that no license is required from THAT for all analog and some digital implementations of MTS.[4]
Several nations outside North America using the NTSC standard adopted the MTS format for their analog systems, including Chile, Colombia, Taiwan and the Philippines. Japan adopted its own EIAJ MTS standard within its domestic NTSC variant. Two countries, Brazil and Argentina, used the MTS standard with alternate PAL standards.