Rendering (computer graphics)
== Hardware ==

Rendering is usually limited by available computing power and memory [[Bandwidth (computing)|bandwidth]], and so specialized [[Computer hardware|hardware]] has been developed to speed it up ("accelerate" it), particularly for [[Real-time computer graphics|real-time rendering]]. Hardware features such as a [[framebuffer]] for raster graphics are required to display the output of rendering smoothly in real time.

=== History ===

In the era of [[vector monitor]]s (also called ''calligraphic displays''), a display processing unit (DPU) was a dedicated [[Central processing unit|CPU]] or [[coprocessor]] that maintained a list of visual elements and redrew them continuously on the screen by controlling an [[Cathode ray|electron beam]]. Advanced DPUs such as [[Evans & Sutherland]]'s [[Line Drawing System-1]] (and later models produced into the 1980s) incorporated 3D coordinate transformation features to accelerate rendering of [[Wire-frame model|wire-frame images]].{{r|n=Foley82|pp=93-94, 404-421}}{{r|EandS1979}} Evans & Sutherland also made the [[Digistar]] [[planetarium]] projection system, which was a vector display that could render both stars and wire-frame graphics (the vector-based Digistar and Digistar II were used in many planetariums, and a few may still be in operation).{{r|NagoyaCityScienceMuseum}}{{r|WorldwidePlanetariumsDatabase1}}{{r|WorldwidePlanetariumsDatabase2}} A Digistar prototype was used for rendering 3D star fields for the film ''[[Star Trek II: The Wrath of Khan]]'' – some of the first 3D computer graphics sequences ever seen in a feature film.{{r|Smith1982}}

Shaded 3D graphics rendering in the 1970s and early 1980s was usually implemented on general-purpose computers, such as the [[PDP-10]] used by researchers at the University of Utah.{{r|Phong1973}}{{r|Catmull1974}}
It was difficult to speed up using specialized hardware because it involves a [[Graphics pipeline|pipeline]] of complex steps, requiring data addressing, decision-making, and computation capabilities typically only provided by CPUs (although dedicated circuits for speeding up particular operations were proposed{{r|Phong1973}}). [[Supercomputer]]s or specially designed multi-CPU computers or [[computer cluster|clusters]] were sometimes used for ray tracing.{{r|n=IntroToRTCh6}}

In 1981, [[James H. Clark]] and [[Marc Hannah]] designed the Geometry Engine, a [[Very-large-scale integration|VLSI]] chip for performing some of the steps of the 3D rasterization pipeline, and started the company [[Silicon Graphics]] (SGI) to commercialize this technology.{{r|Peddie2020}}{{r|Clark1980}}

[[Home computer]]s and [[Video game console|game consoles]] in the 1980s contained graphics [[coprocessor]]s that were capable of scrolling and filling areas of the display, and drawing [[Sprite (computer graphics)|sprites]] and lines, though they were not useful for rendering realistic images.{{r|Fox2024}}{{r|NESDevPPU}} Towards the end of the 1980s, [[Graphics card|PC graphics cards]] and [[Arcade video game|arcade games]] with 3D rendering acceleration began to appear, and in the 1990s such technology became commonplace. Today, even low-power [[mobile processor]]s typically incorporate 3D graphics acceleration features.{{r|Peddie2020}}{{r|PowerVRAt25}}

=== GPUs ===
{{main|Graphics processing unit}}

The [[Graphics card|3D graphics accelerators]] of the 1990s evolved into modern GPUs. GPUs are general-purpose processors, like [[Central processing unit|CPUs]], but they are designed for tasks that can be broken into many small, similar, mostly independent sub-tasks (such as rendering individual pixels) and performed in [[Parallel computing|parallel]].
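The per-pixel independence that makes rendering GPU-friendly can be illustrated with a short Python sketch (an invented example, not code from any real renderer; the radial-gradient `shade` function simply stands in for a real shading computation):

```python
def shade(x, y, width, height):
    """Compute one pixel's colour independently of all other pixels."""
    cx, cy = width / 2, height / 2
    d = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5      # distance from centre
    max_d = (cx ** 2 + cy ** 2) ** 0.5              # distance to a corner
    v = int(255 * (1 - d / max_d))
    return (v, v, v)  # greyscale RGB


def render(width, height):
    # Each iteration is independent: no pixel reads another pixel's result,
    # so this loop could be distributed across thousands of GPU threads.
    return [[shade(x, y, width, height) for x in range(width)]
            for y in range(height)]


image = render(8, 8)
```

Because `shade` depends only on its own coordinates, the iteration order is irrelevant, which is exactly the property that lets a GPU run the sub-tasks in parallel.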
This means that a GPU can speed up any rendering algorithm that can be split into subtasks in this way, in contrast to 1990s 3D accelerators, which were designed only to speed up specific rasterization algorithms and simple shading and lighting effects (although [[Kludge#Computer science|tricks]] could be used to perform more general computations).{{r|n=AkenineMöller2018|loc=ch3}}{{r|Peercy2000}}

Due to their origins, GPUs typically still provide specialized hardware acceleration for some steps of a traditional 3D rasterization [[Graphics pipeline|pipeline]], including hidden surface removal using a [[Z-buffering|z-buffer]] and [[texture mapping]] with [[mipmap]]s, but these features are no longer always used.{{r|n=AkenineMöller2018|loc=ch3}} Recent GPUs have features to accelerate finding the intersections of rays with a [[bounding volume hierarchy]], to help speed up all variants of [[Ray tracing (graphics)|ray tracing]] and [[path tracing]],{{r|n=RayTracingGems_Forword_Stich}} as well as [[Neural network (machine learning)|neural network]] acceleration features sometimes useful for rendering.{{r|NvidiaDLSS}}

GPUs are usually integrated with [[GDDR SDRAM|high-bandwidth memory systems]] to support the read and write [[memory bandwidth|bandwidth]] requirements of high-resolution, real-time rendering, particularly when multiple passes are required to render a frame. However, memory [[memory latency|latency]] may be higher than on a CPU, which can be a problem if the [[Analysis of parallel algorithms#Critical path|critical path]] in an algorithm involves many memory accesses.
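Of the pipeline steps mentioned above, hidden surface removal with a z-buffer is simple enough to sketch in Python (an illustrative toy, assuming fragments arrive as `(x, y, depth, colour)` tuples; real GPUs perform this comparison in dedicated hardware):

```python
def zbuffer_render(width, height, fragments):
    """Keep, at each pixel, the fragment with the smallest depth value."""
    depth = [[float("inf")] * width for _ in range(height)]
    colour = [[None] * width for _ in range(height)]
    for x, y, z, c in fragments:
        if z < depth[y][x]:        # nearer than anything drawn so far?
            depth[y][x] = z
            colour[y][x] = c       # visible (for now): overwrite the pixel
        # else: the fragment is hidden behind an earlier surface; discard it
    return colour


# Two surfaces overlap at pixel (1, 1): the nearer one (z=0.3) wins,
# regardless of the order in which the fragments are submitted.
frags = [(1, 1, 0.7, "red"), (1, 1, 0.3, "blue"), (0, 0, 0.5, "green")]
img = zbuffer_render(2, 2, frags)
```

The key property is order independence: surfaces need not be sorted back to front, because the per-pixel depth comparison resolves visibility on its own.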
GPU design accepts high latency as inevitable (in part because a large number of [[Thread (computing)|threads]] share the [[Bus (computing)#Memory bus|memory bus]]) and attempts to "hide" it by efficiently switching between threads, so that one thread can perform computations while another is waiting for a read or write to complete.{{r|n=AkenineMöller2018|loc=ch3}}{{r|Lam2021}}{{r|Gong2019}}

Rendering algorithms will run efficiently on a GPU only if they can be implemented using small groups of threads that perform mostly the same operations. As an example of code that meets this requirement: when rendering a small square of pixels in a simple [[Ray tracing (graphics)|ray-traced]] image, all threads will likely be intersecting rays with the same object and performing the same lighting computations. For performance and architectural reasons, GPUs run groups of around 16–64 threads called ''warps'' or ''wavefronts'' in [[Single instruction, multiple threads|lock-step]] (all threads in the group execute the same instructions at the same time). If not all threads in the group need to run particular blocks of code (due to conditions), then some threads will be idle, or the results of their computations will be discarded, causing degraded performance.{{r|n=AkenineMöller2018|loc=ch3}}{{r|Gong2019}}
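The cost of this lock-step execution can be modelled with a small Python sketch (a simplified illustration, not vendor code; the cycle counts and warp size are invented for the example). When a conditional splits a warp, the two sides of the branch run one after the other, with each side's threads idle while the other side executes:

```python
WARP_SIZE = 8  # real warps/wavefronts are around 16-64 threads


def warp_cost(take_branch, cost_then=5, cost_else=2):
    """Cycles for one warp to execute an if/else in lock-step.

    take_branch: one boolean per thread, True if that thread's condition
    holds. Divergent warps pay for BOTH paths in turn.
    """
    any_then = any(take_branch)            # does any thread need the if-side?
    any_else = not all(take_branch)        # does any thread need the else-side?
    return cost_then * any_then + cost_else * any_else


uniform = warp_cost([True] * WARP_SIZE)                    # one path only
divergent = warp_cost([i % 2 == 0 for i in range(WARP_SIZE)])  # both paths
```

Here `uniform` costs 5 cycles while `divergent` costs 7, even though every individual thread only needs one of the two paths — the model of the degraded performance described above.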