== Technical details ==

=== Resolution and refresh rate ===
NTSC color encoding is used with the [[CCIR System M|System M]] television signal, which consists of {{frac|30|1.001}} (approximately 29.97) [[interlaced video|interlace]]d frames of [[video]] per [[second]]. Each frame is composed of two fields, each consisting of 262.5 scan lines, for a total of 525 scan lines. The visible [[raster scan|raster]] is made up of 486 scan lines. The later digital standard, [[Rec. 601]], only uses 480 of these lines for visible raster. The remainder (the [[vertical blanking interval]]) allows for vertical [[synchronization]] and retrace. This blanking interval was originally designed simply to blank the electron beam of the receiver's CRT, to allow for the simple analog circuits and slow vertical retrace of early TV receivers. However, some of these lines may now contain other data such as [[closed captioning]] and vertical interval [[timecode]] (VITC). In the complete [[raster scan|raster]] (disregarding half lines due to [[Interlaced video|interlacing]]) the even-numbered scan lines (every other line that would be even if counted in the video signal, e.g. {2, 4, 6, ..., 524}) are drawn in the first field, and the odd-numbered (every other line that would be odd if counted in the video signal, e.g. {1, 3, 5, ..., 525}) are drawn in the second field, to yield a [[Flicker fusion threshold|flicker-free]] image at the field refresh [[frequency]] of {{frac|60|1.001}} Hz (approximately 59.94 Hz). For comparison, 625-line (576 visible) systems, usually used with [[PAL#PAL-B/G/D/K/I|PAL-B/G]] and [[SECAM]] color, have a higher vertical resolution but a lower temporal resolution of 25 frames or 50 fields per second. The NTSC field refresh frequency in the black-and-white system originally exactly matched the nominal 60 Hz [[utility frequency|frequency]] of [[alternating current]] power used in the United States. 
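Taken together, these figures pin down the exact NTSC rates. As a quick check, the exact rational values can be computed (a sketch in Python; the constant names are ours, not from any standard document):

```python
from fractions import Fraction

# Exact NTSC (System M) rates implied by the figures above.
FIELD_RATE = Fraction(60000, 1001)          # 60/1.001 Hz, fields per second
FRAME_RATE = FIELD_RATE / 2                 # two interlaced fields per frame
LINES_PER_FRAME = 525                       # 2 fields x 262.5 scan lines
LINE_RATE = FRAME_RATE * LINES_PER_FRAME    # horizontal scan frequency

print(float(FRAME_RATE))   # ~29.97 frames/s
print(float(FIELD_RATE))   # ~59.94 fields/s
print(float(LINE_RATE))    # ~15734.27 lines/s
```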
Matching the field [[refresh rate]] to the power source avoided [[intermodulation]] (also called ''beating''), which produces rolling bars on the screen. Synchronization of the refresh rate to the power incidentally helped [[kinescope]] cameras record early live television broadcasts, as it was very simple to synchronize a [[film]] camera to capture one frame of video on each film frame by using the alternating current frequency to set the speed of the synchronous AC motor-drive camera. This, as mentioned, is how the NTSC field refresh frequency worked in the original black-and-white system; when ''color'' was added to the system, however, the refresh frequency was shifted slightly downward by 0.1%, to approximately 59.94 Hz, to eliminate stationary dot patterns in the difference frequency between the sound and color carriers (as explained below in [[NTSC#Color encoding|§{{spaces}}Color encoding]]). By the time the frame rate changed to accommodate color, it was nearly as easy to trigger the camera shutter from the video signal itself. The actual figure of 525 lines was chosen as a consequence of the limitations of the vacuum-tube-based technologies of the day. In early TV systems, a master [[Phase-locked loop|voltage-controlled oscillator]] was run at twice the horizontal line frequency, and this [[frequency divider|frequency was divided]] down by the number of lines used (in this case 525) to give the field frequency (60 Hz in this case). This frequency was then compared with the 60 Hz [[power-line frequency]] and any discrepancy [[Frequency-locked loop|corrected by adjusting the frequency]] of the master oscillator. For interlaced scanning, an odd number of lines per frame was required in order to make the vertical retrace distance identical for the odd and even fields,{{Clarify|date=May 2022}} which meant the master oscillator frequency had to be divided down by an odd number. 
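The division scheme described above can be sketched numerically (illustrative only; the variable names are ours):

```python
# A master oscillator at twice the horizontal line frequency, divided by
# the (odd) line count, gives the field frequency that was compared
# against the 60 Hz power line.
LINE_FREQ = 15_750            # Hz, original black-and-white line rate
master = 2 * LINE_FREQ        # master oscillator frequency, Hz
field_freq = master / 525     # divide by the odd number of lines
print(field_freq)             # 60.0
```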
At the time, the only practical method of frequency division was the use of a chain of [[vacuum tube]] [[multivibrator]]s, the overall division ratio being the mathematical product of the division ratios of the chain. Since all the factors of an odd number also have to be odd numbers, it follows that all the dividers in the chain also had to divide by odd numbers, and these had to be relatively small due to the problems of [[Frequency drift|thermal drift]] with vacuum tube devices. The closest practical product to 500 that met these criteria was {{math|3×5×5×7{{=}}525}}. (For the same reason, 625-line PAL-B/G and SECAM use {{math|5×5×5×5}}, the old [[405-line television system|British 405-line system]] used {{math|3×3×3×3×5}}, the French [[819 line|819-line]] system used {{math|3×3×7×13}} etc.)

=== Colorimetry ===
[[File:CIE1931xy gamut comparison CC v06 NTSC 709 P3.png|thumb|1931 CIE chromaticity diagram, showing gamuts for NTSC, BT.709, and P3]] {{See also|Color space}} [[Colorimetry]] refers to the specific colorimetric characteristics of the system and its components, including the specific primary colors used, the camera, the display, etc. Over its history, NTSC color had two distinctly defined colorimetries, shown on the accompanying chromaticity diagram as NTSC 1953 and SMPTE C. Manufacturers introduced a number of variations for technical, economic, marketing, and other reasons.<ref>{{cite journal |title=Display Color Gamuts: NTSC to Rec.2020 |url=https://sid.onlinelibrary.wiley.com/doi/10.1002/j.2637-496X.2016.tb00920.x |access-date=July 15, 2024 |journal=Information Display |date=2016 |publisher=Frontline Technology|doi=10.1002/j.2637-496X.2016.tb00920.x |last1=Soneira |first1=Raymond M. |volume=32 |issue=4 |pages=26–31}}</ref> {| class="wikitable" |+ RGB chromaticity coordinates ! rowspan="2" | [[Color space]] ! colspan="2" | [[Standard illuminant#White points of standard illuminants|White point]] ! [[Correlated color temperature|CCT]] ! 
colspan="6" | [[Primary color]]s (CIE 1931 xy) |- ! x ! y ! k ! R<sub>x</sub> ! R<sub>y</sub> ! G<sub>x</sub> ! G<sub>y</sub> ! B<sub>x</sub> ! B<sub>y</sub> |- | align="center" | NTSC (1953) | style="background:#{{Color temperature|6774|hexval}}" | 0.31 | style="background:#{{Color temperature|6774|hexval}}" | 0.316 | style="background:#{{Color temperature|6774|hexval}}" | 6774 ([[Standard illuminant#Illuminants B and C|C]]) | style="background-color: color(rec2020 0.96 0.349 0.0);" | 0.67 | style="background-color: color(rec2020 0.96 0.349 0.0);" | 0.33 | style="background-color: color(rec2020 0.308 0.933 0.255);" | 0.21 | style="background-color: color(rec2020 0.308 0.933 0.255);" | 0.71 | style="background-color: color(rec2020 0.187 0.333 0.985);" | 0.14 | style="background-color: color(rec2020 0.187 0.333 0.985);" | 0.08 |- | align="center" | SMPTE C (1987) | style="background:#{{Color temperature|6500|hexval}}" | 0.3127 | style="background:#{{Color temperature|6500|hexval}}" | 0.329 | style="background:#{{Color temperature|6500|hexval}}" | 6500 ([[Illuminant D65|D65]]) | style="background-color: color(rec2020 0.806 0.351 0.176);" | 0.63 | style="background-color: color(rec2020 0.806 0.351 0.176);" | 0.34 | style="background-color: color(rec2020 0.645 0.953 0.352);" | 0.31 | style="background-color: color(rec2020 0.645 0.953 0.352);" | 0.595 | style="background-color: color(rec2020 0.299 0.224 0.957);" | 0.155 | style="background-color: color(rec2020 0.299 0.224 0.957);" | 0.07 |} ''<small>Note: displayed colors are approximate and require a [[Gamut|wide gamut]] display for faithful reproduction.</small>'' ==== NTSC 1953 ==== The original 1953 color NTSC specification, still part of the United States [[Code of Federal Regulations]], defined the [[Colorimetry|colorimetric]] values of the system as shown in the above table.<ref>47 CFR § 73.682 (20) (iv)</ref> Early color television receivers, such as the RCA [[CT-100]], were faithful to this specification (which 
was based on prevailing motion picture standards), having a larger gamut than most of today's monitors. Their low-efficiency phosphors (notably in the red) were weak and long-persistent, leaving trails after moving objects. Starting in the late 1950s, picture tube phosphors would sacrifice saturation for increased brightness; this deviation from the standard at both the receiver and broadcaster was the source of considerable color variation.

==== SMPTE C ====
{{anchor|Color correction in studio monitors and home receivers}} To ensure more uniform color reproduction, some manufacturers incorporated color correction circuits into their sets that converted the received signal (encoded for the colorimetric values listed above) to suit the actual phosphor characteristics used within the monitor. Since such color correction cannot be performed accurately on the nonlinear [[Gamma correction|gamma corrected]] signals as transmitted, the adjustment can only be approximated, introducing both hue and [[Luma (video)|luminance]] errors for highly saturated colors. Similarly, at the broadcaster stage, in 1968–69 the Conrac Corp., working with RCA, defined a set of controlled phosphors for use in broadcast color picture [[Display device|video monitors]].<ref name="DeMarsh, Leroy 1098">DeMarsh, Leroy (1993): TV Display Phosphors/Primaries — Some History. SMPTE Journal, December 1993: 1095–1098. 
{{doi|10.5594/J01650}}</ref> This specification survives today as the '''SMPTE C''' phosphor specification:<ref>{{cite web |title=SMPTE C Color Monitor Colorimetry RP 145-2004 |url=https://pub.smpte.org/latest/rp145/rp0145-2004_stable2010.pdf |access-date=July 15, 2024 |website=SMPTE}}</ref> As with home receivers, it was further recommended<ref name="ITU470">International Telecommunication Union Recommendation ITU-R 470-6 (1970–1998): Conventional Television Systems, Annex 2.</ref> that studio monitors incorporate similar color correction circuits so that broadcasters would transmit pictures encoded for the original 1953 colorimetric values, in accordance with FCC standards. In 1987, the [[Society of Motion Picture and Television Engineers]] (SMPTE) Committee on Television Technology, Working Group on Studio Monitor Colorimetry, adopted the SMPTE C (Conrac) phosphors for general use in Recommended Practice 145,<ref name="SMPTE_RP145">Society of Motion Picture and Television Engineers (1987–2004): Recommended Practice RP 145–2004. Color Monitor Colorimetry.</ref> prompting many manufacturers to modify their camera designs to directly encode for SMPTE C colorimetry without color correction,<ref name="SMPTE_EG27">Society of Motion Picture and Television Engineers (1994, 2004): Engineering Guideline EG 27-2004. Supplemental Information for SMPTE 170M and Background on the Development of NTSC Color Standards, pp. 9</ref> as approved in SMPTE standard 170M, "Composite Analog Video Signal – NTSC for Studio Applications" (1994). As a consequence, the [[ATSC standards|ATSC]] digital television standard states that for [[480i]] signals, SMPTE C colorimetry should be assumed unless colorimetric data is included in the transport stream.<ref>Advanced Television Systems Committee (2003): ATSC Direct-to-Home Satellite Broadcast Standard Doc. 
A/81, pp.18</ref> Japanese NTSC never changed its primaries and white point to SMPTE C, continuing to use the 1953 NTSC primaries and white point.<ref name="ITU470"/> Both the [[PAL]] and [[SECAM]] systems used the original 1953 NTSC colorimetry as well until 1970;<ref name="ITU470"/> unlike NTSC, however, the European Broadcasting Union (EBU) rejected color correction in receivers and studio monitors that year and instead explicitly called for all equipment to directly encode signals for the "EBU" colorimetric values.<ref name="EBU1975">European Broadcasting Union (1975) Tech. 3213-E.: E.B.U. Standard for Chromaticity Tolerances for Studio Monitors.</ref>

==== Color compatibility issues ====
In reference to the gamuts shown on the CIE chromaticity diagram (above), the variations between the different colorimetries can result in significant visual differences. Adjusting for proper viewing requires [[Color management|gamut mapping]] via [[Lookup table#Lookup tables in image processing|LUT]]s or additional [[color grading]]. SMPTE Recommended Practice RP 167-1995 refers to such an automatic correction as an "NTSC corrective display matrix."<ref>{{cite web |title=SMPTE RP 167-1995 |url=https://pub.smpte.org/latest/rp167/rp0167-1995_stable2004.pdf |access-date=July 15, 2024 |website=SMPTE |page=5 (A.4) |quote=The NTSC corrective matrix in a display device is intended to correct any colorimetric errors introduced by the difference between the camera primaries and the display tube phosphors.}}</ref> For instance, material prepared for 1953 NTSC may look desaturated when displayed on SMPTE C or ATSC/[[BT.709]] displays, and may also exhibit noticeable hue shifts. On the other hand, SMPTE C materials may appear slightly more saturated on BT.709/sRGB displays, or significantly more saturated on P3 displays, if the appropriate gamut mapping is not performed. 
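As an aside, the 1953 primaries and Illuminant C white point tabulated above are enough to derive the luminance contribution of each primary, which is where the familiar NTSC luma weights of roughly 0.299, 0.587, and 0.114 come from. The following sketch applies standard colorimetry arithmetic (our own illustration, not taken from any of the cited standards) to the table values:

```python
def xy_to_xyz(x, y):
    # CIE xy chromaticity to XYZ tristimulus, with Y normalized to 1
    return (x / y, 1.0, (1.0 - x - y) / y)

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(a, b):
    # Cramer's rule for a 3x3 linear system a @ s = b
    d = det3(a)
    sols = []
    for col in range(3):
        m = [row[:] for row in a]
        for r in range(3):
            m[r][col] = b[r]
        sols.append(det3(m) / d)
    return sols

# NTSC 1953 primaries and Illuminant C white point, from the table above
primaries = [(0.67, 0.33), (0.21, 0.71), (0.14, 0.08)]   # R, G, B
white = (0.310, 0.316)

cols = [xy_to_xyz(x, y) for x, y in primaries]
a = [[cols[j][i] for j in range(3)] for i in range(3)]    # primaries as columns
scale = solve3(a, list(xy_to_xyz(*white)))                # R=G=B=1 maps to white

# Each primary's XYZ had Y = 1 before scaling, so the scale factors are
# the luminance (luma) weights of the three primaries.
luma_weights = scale
print([round(w, 3) for w in luma_weights])
```

The result lands within rounding of the published 0.299/0.587/0.114 weights used in the luma equation below.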
=== Color encoding === {{More citations needed|section|date=February 2024}} {{See also|YIQ}} NTSC uses a [[Luma (video)|luminance]]-[[chrominance]] encoding system, incorporating concepts invented in 1938 by [[Georges Valensi]]. Using a separate luminance signal maintained backward compatibility with black-and-white television sets in use at the time; only color sets would recognize the chroma signal, which was essentially ignored by black and white sets. The red, green, and blue primary color signals <math>(R^\prime G^\prime B^\prime)</math> are weighted and summed into a single [[Luma (video)|luma]] signal, designated <math>Y^\prime</math> (Y prime)<ref>{{cite web |title=Poynton's Color FAQ by Charles Poynton |url=http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/POYNTON1/ColorFAQ.html#RTFToC9 |website=Homepages.inf.ed.ac.uk}}</ref> which takes the place of the original [[analog television|monochrome signal]]. The color difference information is encoded into the chrominance signal, which carries only the color information. This allows black-and-white receivers to display NTSC color signals by simply ignoring the chrominance signal. Some black-and-white TVs sold in the U.S. after the introduction of color broadcasting in 1953 were designed to filter chroma out, but the early B&W sets did not do this and [[chroma dots|chrominance]] could be seen as a [[dot crawl|crawling dot pattern]] in areas of the picture that held saturated colors.<ref>{{cite book |last1=Large |first1=David |url=https://books.google.com/books?id=QOFWx2umt8sC&dq=black+and+white+dot+crawl&pg=PA55 |title=Modern Cable Television Technology |last2=Farmer |first2=James |date=January 13, 2004 |publisher=Elsevier |isbn=978-0-08-051193-1}}</ref> To derive the separate signals containing only color information, the difference is determined between each color primary and the summed luma. 
Thus the red difference signal is <math>R^\prime - Y^\prime</math> and the blue difference signal is <math>B^\prime - Y^\prime</math>. These difference signals are then used to derive two new color signals known as <math>I^\prime</math> (in-phase) and <math>Q^\prime</math> (in quadrature) in a process called [[Quadrature amplitude modulation|QAM]]. The <math>I^\prime Q^\prime</math> color space is rotated relative to the difference signal color space, such that orange-blue color information (which the human eye is most sensitive to) is transmitted on the <math>I^\prime</math> signal at 1.3 MHz bandwidth, while the <math>Q^\prime</math> signal encodes purple-green color information at 0.4 MHz bandwidth; this allows the chrominance signal to use less overall bandwidth without noticeable color degradation. The two signals each amplitude modulate<ref name="Monochrome and Colour Television">{{cite book |last1=Gulati |first1=R. R. |url=https://books.google.com/books?id=SHM47MKmGXkC |title=Monochrome and Colour Television |date=December 2005 |publisher=New Age International |isbn=978-81-224-1607-7}}</ref> 3.58 MHz carriers which are 90 degrees out of phase with each other<ref>{{cite book |url=https://books.google.com/books?id=Tm_U-Qd58RgC&dq=ntsc+i+and+q+carrier&pg=PA226 |title=Newnes Guide to Digital TV |date=November 17, 2002 |publisher=Newnes |isbn=978-0-7506-5721-1}}</ref> and the result added together but with the [[Double-sideband suppressed-carrier transmission|carriers themselves being suppressed]].<ref>{{cite book |url=https://books.google.com/books?id=xwMZw4UewuUC&dq=ntsc+i+and+q+carrier&pg=PA8 |title=Digital Television: Satellite, Cable, Terrestrial, IPTV, Mobile TV in the DVB Framework |date=February 20, 2024 |publisher=Taylor & Francis |isbn=978-0-240-52081-0}}</ref><ref name="Monochrome and Colour Television"/> The result can be viewed as a single sine wave with varying phase relative to a reference carrier and with varying amplitude. 
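The luma weighting and axis rotation described above are commonly written as a 3×3 matrix. A minimal sketch using the widely published (rounded) FCC coefficients:

```python
import math

def rgb_to_yiq(r, g, b):
    # Widely published (rounded) FCC coefficients: luma weights plus the
    # rotated I (orange-blue) and Q (purple-green) axes.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

def chroma_amplitude_phase(i, q):
    # The modulated chroma behaves as one sine wave whose amplitude tracks
    # saturation and whose phase (against the colorburst) tracks hue.
    return math.hypot(i, q), math.degrees(math.atan2(q, i)) % 360

# A grey input produces luma only: both chroma axes are (near) zero.
print(rgb_to_yiq(0.5, 0.5, 0.5))
```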
The varying phase represents the instantaneous color [[hue]] captured by a TV camera, and the amplitude represents the instantaneous color [[Colorfulness#Saturation|saturation]]. The {{Fraction|3|51|88}} MHz [[subcarrier]] is then added to the luminance to form the composite color signal,<ref name="Monochrome and Colour Television"/> which modulates the video signal [[Carrier wave|carrier]]. The frequency is often abbreviated to 3.58 MHz rather than stated in full as 3.579545 MHz.<ref>{{cite book |url=https://books.google.com/books?id=AvQAa5Zfuj0C&dq=ntsc+3.58+MHz&pg=PA123 |title=Modern Television Practice Principles, Technology and Servicing 2/Ed |publisher=New Age International |isbn=978-81-224-1360-1}}</ref> For a color TV to recover hue information from the color subcarrier, it must have a zero-phase reference to replace the previously suppressed carrier. The NTSC signal includes a short sample of this reference signal, known as the [[colorburst]], located on the back porch of each horizontal synchronization pulse. The color burst consists of a minimum of eight cycles of the unmodulated (pure original) color subcarrier. The TV receiver has a local oscillator, which is synchronized with these color bursts to create a reference signal. Combining this reference phase signal with the chrominance signal allows the recovery of the <math>I^\prime</math> and <math>Q^\prime</math> signals, which, in conjunction with the <math>Y^\prime</math> signal, are reconstructed into the individual <math>R^\prime G^\prime B^\prime</math> signals that are then sent to the [[Cathode-ray tube|CRT]] to form the image. In CRT televisions, the NTSC signal is turned into three color signals: red, green, and blue, each controlling an electron gun that is designed to excite only the corresponding red, green, or blue phosphor dots. TV sets with digital circuitry use sampling techniques to process the signals, but the result is the same. 
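Suppressed-carrier QAM and its recovery from a regenerated reference can be illustrated numerically. This sketch (our own construction, not a description of any particular receiver) modulates an I/Q pair onto the 315/88 MHz subcarrier and recovers it by synchronous detection:

```python
import math

FSC = 315e6 / 88   # color subcarrier in Hz (exactly 315/88 MHz)

def modulate(i, q, t):
    # Suppressed-carrier QAM: two carriers 90 degrees apart, summed.
    w = 2 * math.pi * FSC
    return i * math.cos(w * t) + q * math.sin(w * t)

def demodulate(samples, times):
    # Synchronous detection: multiply by the regenerated (colorburst-locked)
    # carrier and average over whole subcarrier cycles.
    w = 2 * math.pi * FSC
    n = len(samples)
    i = 2 * sum(s * math.cos(w * t) for s, t in zip(samples, times)) / n
    q = 2 * sum(s * math.sin(w * t) for s, t in zip(samples, times)) / n
    return i, q

# One full subcarrier cycle, densely sampled
N = 1000
ts = [k / (N * FSC) for k in range(N)]
sig = [modulate(0.4, -0.2, t) for t in ts]
print(demodulate(sig, ts))  # ~(0.4, -0.2)
```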
For both analog and digital sets processing an analog NTSC signal, the original three color signals are transmitted using three discrete signals (Y, I and Q) and then recovered as three separate colors (R, G, and B) and presented as a color image. When a transmitter broadcasts an NTSC signal, it amplitude-modulates a radio-frequency carrier with the NTSC signal just described, while it frequency-modulates a carrier 4.5 MHz higher with the audio signal. If non-linear distortion happens to the broadcast signal, the {{Fraction|3|51|88}} MHz color carrier may [[Beat (acoustics)|beat]] with the sound carrier to produce a dot pattern on the screen. To make the resulting pattern less noticeable, designers adjusted the original 15,750 Hz scanline rate down by a factor of 1.001 ({{Fraction|100|1,001}}%) to match the audio carrier frequency divided by the factor 286, resulting in a field rate of approximately 59.94 Hz. This adjustment ensures that the difference between the sound carrier and the color subcarrier (the most problematic [[intermodulation]] product of the two carriers) is an odd multiple of half the line rate, which is the necessary condition for the dots on successive lines to be opposite in phase, making them least noticeable. The 59.94 rate is derived from the following calculations. Designers chose to make the chrominance subcarrier frequency an ''n'' + 0.5 multiple of the line frequency to minimize interference between the luminance signal and the chrominance signal. (Another way this is often stated is that the color subcarrier frequency is an odd multiple of half the line frequency.) They then chose to make the audio subcarrier frequency an integer multiple of the line frequency to minimize visible (intermodulation) interference between the audio signal and the chrominance signal. 
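These relationships can be verified exactly with rational arithmetic. In this sketch, the divisor 286 and the odd-multiple condition are taken from the text; the specific subcarrier multiple 455/2 of the line rate is the standard NTSC figure (it appears later in the article, so it is an assumption at this point):

```python
from fractions import Fraction

audio = Fraction(4_500_000)            # audio inter-carrier offset, Hz
line = audio / 286                     # color line rate: 4.5 MHz / 286
field = line / Fraction(525, 2)        # 262.5 lines per field
subcarrier = line * Fraction(455, 2)   # odd multiple (455) of half the line rate

# The sound-color difference is an odd multiple of half the line rate,
# so the dot pattern inverts on successive lines.
multiple = (audio - subcarrier) / (line / 2)
assert multiple == 117 and multiple % 2 == 1

print(float(line))        # ~15734.27 Hz
print(float(field))       # ~59.94 Hz
print(float(subcarrier))  # ~3579545.45 Hz (= 315/88 MHz)
```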
The original black-and-white standard, with its 15,750 Hz line frequency and 4.5 MHz audio subcarrier, does not meet these requirements, so designers had to either raise the audio subcarrier frequency or lower the line frequency. Raising the audio subcarrier frequency would prevent existing (black and white) receivers from properly tuning in the audio signal. Lowering the line frequency is comparatively innocuous, because the horizontal and vertical synchronization information in the NTSC signal allows a receiver to tolerate a substantial amount of variation in the line frequency. So the engineers chose to change the line frequency for the color standard. In the black-and-white standard, the ratio of audio subcarrier frequency to line frequency is {{frac|4.5 MHz|15,750 Hz}} {{=}} {{Fraction|285|5|7}}. In the color standard, this ratio is rounded up to the integer 286, which means the color standard's line rate is {{frac|4.5 MHz|286}} ≈ {{Fraction|15,734|266|1,001}} Hz. Maintaining the same number of scan lines per field (and frame), the lower line rate must yield a lower field rate. Dividing {{frac|4,500,000|286}} lines per second by 262.5 lines per field gives approximately 59.94 fields per second.<!--There are two reasons why this is important. First, the chrominance signal was interpreted as part of the luminance by monochrome TV sets that were in use at the time of the introduction of color TV (which didn't have notch filters to remove the chrominance carrier), causing dots to appear on strongly contrasting edges (which are high-frequency video information.) The chosen line rate causes the dots to move, which makes them harder for the human eye to follow. The effect is still noticeable on close examination, however, and is referred to as [[dot crawl]]. 
A second benefit of the chosen field rate was realized much later: The phase difference of the interference pattern on successive lines makes it very easy to design a simple [[comb filter]] to separate chrominance and luminance information to a greater degree.-->

=== Transmission modulation method ===
[[File:Ntsc channel.svg|thumb|400px|Spectrum of a System M television channel with NTSC color]] An NTSC [[television channel]] as transmitted occupies a total bandwidth of 6 MHz. The actual video signal, which is [[amplitude modulation|amplitude-modulated]], is transmitted between 500 [[Hertz|kHz]] and 5.45 MHz above the lower bound of the channel. The video [[Carrier wave|carrier]] is 1.25 MHz above the lower bound of the channel. Like most AM signals, the video carrier generates two [[sideband]]s, one above the carrier and one below. The sidebands are each 4.2 MHz wide. The entire upper sideband is transmitted, but only 0.75 MHz of the lower sideband, known as a [[Single-sideband modulation|vestigial sideband]], is transmitted. The color subcarrier, as noted above, is 3.579545 MHz above the video carrier, and is [[quadrature amplitude modulation|quadrature-amplitude-modulated]] with a suppressed carrier. The audio signal is [[Frequency modulation|frequency modulated]], like the audio signals broadcast by [[FM broadcasting|FM radio]] [[Radio broadcasting|stations]] in the 88–108 MHz band, but with a 25 kHz maximum [[frequency deviation]], as opposed to 75 kHz as is used on the [[FM broadcast band|FM band]], making analog television audio signals sound quieter than FM radio signals as received on a wideband receiver. The main audio carrier is 4.5 MHz above the video carrier, making it 250 kHz below the top of the channel. Sometimes a channel may contain an [[Multichannel Television Sound|MTS]] signal, which offers more than one audio signal by adding one or two subcarriers on the audio signal, each synchronized to a multiple of the line frequency. 
This is normally the case when [[Stereophonic sound|stereo audio]] and/or [[second audio program]] signals are used. The same extensions are used in [[ATSC standards|ATSC]], where the ATSC digital carrier is broadcast at 0.31 MHz above the lower bound of the channel. "Setup" is a 54 mV (7.5 [[IRE (unit)|IRE]]) voltage offset between the "black" and "blanking" levels. It is unique to NTSC. CVBS stands for Color, Video, Blanking, and Sync. The following table shows the values for the basic RGB colors, encoded in NTSC<ref>{{cite web |title=Color Bar Levels, Amplitues, and Phases |url=https://www.edn.com/wp-content/uploads/media-1050361-c0195-table1.gif |access-date=February 22, 2022 |website=Edn.com |format=GIF}}</ref> {| class="wikitable" |+ Analog signal values for basic RGB colors, encoded in NTSC ! Color ! [[Luma (video)|Luminance]] level ([[IRE (unit)|IRE]]) ! [[Chrominance]] level (IRE) ! Chrominance amplitude (IRE) ! Phase (º) |- ! White | 100.0 | 0.0 | 0.0 |– |- ! Yellow | 89.5 | 48.1 to 130.8 | 82.7 | 167.1 |- ! Cyan | 72.3 | 13.9 to 130.8 | 116.9 | 283.5 |- ! Green | 61.8 | 7.2 to 116.4 | 109.2 | 240.7 |- ! Magenta | 45.7 | −8.9 to 100.3 | 109.2 | 60.7 |- ! Red | 35.2 | −23.3 to 93.6 | 116.9 | 103.5 |- ! Blue | 18.0 | −23.3 to 59.4 | 82.7 | 347.1 |- ! Black | 7.5 | 0.0 | 0.0 |– |} === Frame rate conversion === {{See also|Telecine}} There is a large difference in [[frame rate]] between film, which runs at 24 frames per second, and the NTSC standard, which runs at approximately 29.97 (10 MHz × {{nowrap|63/88/455/525}}) frames per second. In regions that use 25-fps television and video standards, this difference can be overcome by [[576i#PAL speed-up|speed-up]]. For 30-fps standards, a process called "[[Three-two pull down|3:2 pulldown]]" is used. One film frame is transmitted for three video fields (lasting {{frac|1|1|2}} video frames), and the next frame is transmitted for two video fields (lasting 1 video frame). 
Two film frames are thus transmitted in five video fields, for an average of {{frac|2|1|2}} video fields per film frame. The average frame rate is thus 60 ÷ 2.5 = 24 frames per second, so the average film speed is nominally exactly what it should be. (In reality, over the course of an hour of real time, approximately 215,784 video fields are displayed, representing about 86,314 frames of film, while in an hour of true 24-fps film projection, exactly 86,400 frames are shown: thus, 29.97-fps NTSC transmission of 24-fps film runs at {{frac|1,000|1,001}} ≈ 99.9% of the film's normal speed.) Still-framing on playback can display a video frame with fields from two different film frames, so any difference between the frames will appear as a rapid back-and-forth flicker. There can also be noticeable jitter/"stutter" during slow camera pans ([[Telecine#Telecine judder|telecine judder]]). Film shot specifically for NTSC television is usually taken at 30 (instead of 24) frames per second to avoid 3:2 pulldown.<ref>{{cite journal |last1=Kennel |first1=Glenn |last2=Pytlak |first2=John |last3=Sehlin |first3=Richard |last4=Uhlig |first4=Ronald |date=December 1988 |title=Major Motion-Picture Production Standards |url=https://ieeexplore.ieee.org/document/7258828 |url-status=dead |journal=SMPTE Journal |volume=97 |issue=12 |pages=985–990 |doi=10.5594/J02849 |url-access=subscription |archive-url=https://web.archive.org/web/20180609012432/https://ieeexplore.ieee.org/document/7258828/ |archive-date=June 9, 2018 |access-date=March 12, 2023}}</ref> To show 25-fps material (such as European [[Television show|television series]] and some European movies) on NTSC equipment, every fifth frame is duplicated and then the resulting stream is interlaced. Film shot for NTSC television at 24 frames per second has traditionally been accelerated by 1/24 (to about 104.17% of normal speed) for transmission in regions that use 25-fps television standards. 
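The field pattern of 3:2 pulldown can be sketched as follows (illustrative only; a real telecine alternates upper and lower fields, which this toy version ignores):

```python
def pulldown_32(film_frames):
    # Alternate film frames are held for three fields, then two fields,
    # so two film frames span five fields (2.5 fields per frame on average).
    fields = []
    for n, frame in enumerate(film_frames):
        fields.extend([frame] * (3 if n % 2 == 0 else 2))
    return fields

fields = pulldown_32(["A", "B", "C", "D"])
print(fields)              # A A A B B C C C D D
print(len(fields) / 4)     # 2.5 fields per film frame
```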
This increase in picture speed has traditionally been accompanied by a similar increase in the pitch and tempo of the audio. More recently, frame-blending has been used to convert 24 fps video to 25 fps without altering its speed. Film shot for television in regions that use 25-fps television standards can be handled in either of two ways:
* The film can be shot at 24 frames per second. In this case, when transmitted in its native region, the film may be accelerated to 25 fps according to the analog technique described above, or kept at 24 fps by the digital technique described above. When the same film is transmitted in regions that use a nominal 30-fps television standard, there is no noticeable change in speed, tempo, and pitch.
* The film can be shot at 25 frames per second. In this case, when transmitted in its native region, the film is shown at its normal speed, with no alteration of the accompanying soundtrack. When the same film is shown in regions that use a 30-fps nominal television standard, every fifth frame is duplicated, and there is still no noticeable change in speed, tempo, and pitch.
Because both film speeds have been used in 25-fps regions, viewers can face confusion about the true speed of video and audio, and the pitch of voices, sound effects, and musical performances, in television films from those regions. For example, they may wonder whether the [[Jeremy Brett]] series of [[Sherlock Holmes]] television films, made in the 1980s and early 1990s, was shot at 24 fps and then transmitted at an artificially fast speed in 25-fps regions, or whether it was shot at 25 fps natively and then slowed to 24 fps for NTSC exhibition. These discrepancies exist not only in television broadcasts over the air and through cable, but also in the home-video market, on both tape and disc, including [[LaserDisc]] and [[DVD]]. 
In digital television and video, which are replacing their analog predecessors, single standards that can accommodate a wider range of frame rates still show the limits of analog regional standards. The initial version of the [[ATSC standards|ATSC]] standard, for example, allowed frame rates of 23.976, 24, 29.97, 30, 59.94, 60, 119.88 and 120 frames per second, but not 25 and 50. Modern ATSC allows 25 and 50 fps.

=== Modulation for analog satellite transmission ===
Because satellite power is severely limited, analog video transmission through satellites differs from terrestrial TV transmission. [[Amplitude modulation|AM]] is a linear modulation method, so a given demodulated [[signal-to-noise ratio]] (SNR) requires an equally high received RF SNR. The SNR of studio-quality video is over 50 dB, so AM would require prohibitively high powers and/or large antennas. Wideband [[Frequency modulation|FM]] is used instead to trade RF bandwidth for reduced power. Increasing the channel bandwidth from 6 to 36 MHz allows an RF SNR of only 10 dB or less. The wider noise bandwidth reduces this 40 dB power saving by 10 log<sub>10</sub>(36 MHz / 6 MHz) ≈ 8 dB, for a substantial net reduction of 32 dB. Sound is on an FM subcarrier as in terrestrial transmission, but frequencies above 4.5 MHz are used to reduce aural/visual interference; 6.8, 5.8 and 6.2 MHz are commonly used. Stereo can be multiplex, discrete, or matrix, and unrelated audio and data signals may be placed on additional subcarriers. A triangular 60 Hz energy dispersal waveform is added to the composite baseband signal (video plus audio and data subcarriers) before modulation. This limits the satellite downlink [[Spectral density|power spectral density]] in case the video signal is lost. Otherwise the satellite might transmit all of its power on a single frequency, interfering with terrestrial microwave links in the same frequency band. 
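The decibel bookkeeping in this paragraph can be spelled out (a sketch using only the figures quoted above, not a real link budget):

```python
import math

studio_snr_db = 50                      # demodulated SNR for studio quality
rf_snr_db = 10                          # received RF SNR with wideband FM
apparent_saving_db = studio_snr_db - rf_snr_db          # 40 dB

# Widening the noise bandwidth from 6 MHz to 36 MHz costs:
bandwidth_penalty_db = 10 * math.log10(36e6 / 6e6)      # ~7.8, rounded to 8

net_saving_db = apparent_saving_db - round(bandwidth_penalty_db)
print(round(bandwidth_penalty_db), net_saving_db)       # 8 32
```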
In half-transponder mode, the frequency deviation of the composite baseband signal is reduced to 18 MHz to allow another signal in the other half of the 36 MHz transponder. This reduces the FM benefit somewhat, and the recovered SNRs are further reduced because the combined signal power must be "backed off" to avoid intermodulation distortion in the satellite transponder. A single FM signal is constant amplitude, so it can saturate a transponder without distortion.

=== Field order ===
An NTSC ''frame'' consists of two ''fields,'' F1 (field one) and F2 (field two). The [[field dominance]] depends on a combination of factors, including decisions by various equipment manufacturers as well as historical conventions. As a result, most professional equipment has the option to switch between a dominant upper or dominant lower field. It is not advisable to use the terms ''even'' or ''odd'' when speaking of fields, due to substantial ambiguity: one system may start its line numbering at zero while another starts at one, so the same physical field could be labeled either even or odd.<ref name=":0" /><ref>{{cite web |title=Programmer's Guide to Video Systems - Lurker's Guide - lurkertech.com |url=https://lurkertech.com/lg/video-systems/#f1f2 |access-date=January 25, 2023 |website=lurkertech.com}}</ref> While an analog television set does not care about field dominance per se, field dominance is important when editing NTSC video. Incorrect interpretation of field order can cause a shuddering effect as moving objects jump back and forth on successive fields. This is of particular importance when interlaced NTSC is transcoded to a format with a different field dominance and vice versa. Field order is also important when transcoding progressive video to interlaced NTSC: anywhere there is a cut between two scenes in the progressive video, an incorrect field dominance can produce a flash field in the interlaced video. 
The film telecine process where a [[three-two pull down]] is utilized to convert 24 frames to 30, will also provide unacceptable results if the field order is incorrect. Because each field is temporally unique for material captured with an interlaced camera, converting interlaced to a digital progressive-frame medium is difficult, as each progressive frame will have artifacts of motion on every alternating line. This can be observed in PC-based video-playing utilities and is frequently solved simply by transcoding the video at half resolution and only using one of the two available fields.
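The single-field workaround mentioned above amounts to discarding every other scan line, which can be sketched as:

```python
def keep_single_field(frame_lines, field="upper"):
    # Crude deinterlace: keep one field (every other scan line) and drop
    # the other, halving vertical resolution. "upper" here means the
    # even-indexed lines of a 0-indexed frame; naming conventions vary.
    start = 0 if field == "upper" else 1
    return frame_lines[start::2]

print(keep_single_field(["line0", "line1", "line2", "line3"]))  # even lines only
```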