Shader

{{More footnotes|date=April 2014}}
[[File:Vulkan simplified pipeline.svg|thumb|upright=0.75|A simplified illustration of the graphics pipeline in the Vulkan API. The yellow sections are programmable shader stages which can modify the different data being processed by the pipeline.]]


In [[computer graphics]], a '''shader''' is a [[computer program|programmable]] operation which is applied to data as it moves through the [[graphics pipeline|rendering pipeline]].<ref name="Vulkan spec">{{cite web |title=Vulkan® 1.4.323 - A Specification |url=https://registry.khronos.org/vulkan/specs/latest/html/vkspec.html#shaders |publisher=Khronos Group |access-date=28 July 2025}}</ref><ref name="OpenGL spec">{{cite web |title=The OpenGL® Graphics System: A Specification |url=https://registry.khronos.org/OpenGL/specs/gl/glspec46.core.pdf |publisher=Khronos Group |access-date=28 July 2025 |pages=87}}</ref> Shaders can act on data such as [[vertex (computer graphics)|vertices]] and primitives — to generate or morph geometry — and [[fragment (computer graphics)|fragments]] — to calculate the values in a rendered image.<ref name="OpenGL spec"/>


Shaders can execute a wide variety of operations and can run on different types of hardware. In modern [[real-time computer graphics]], shaders are run on [[graphics processing unit|graphics processing units]] (GPUs) — dedicated hardware which provides highly parallel execution of programs. As rendering an image is [[embarrassingly parallel]], fragment and pixel shaders scale well on [[single instruction, multiple data|SIMD]] hardware. Historically, the drive for faster rendering has produced highly parallel processors which can in turn be used for other SIMD-amenable algorithms.<ref name="Britannica GPU">{{cite web |title=Encyclopedia Britannica: graphics processing unit |url=https://www.britannica.com/technology/graphics-processing-unit |publisher=Encyclopedia Britannica |access-date=28 July 2025}}</ref> Such shaders executing in a [[General-purpose computing on graphics processing units (software)|compute pipeline]] are commonly called [[compute kernel|compute shaders]].
 
 


== History ==
The first known use of the term "shader" was introduced to the public by [[Pixar]] with version 3.0 of their [[RenderMan Interface Specification]], originally published in May 1988.
As [[graphics processing unit]]s evolved, major graphics [[software libraries]] such as [[OpenGL]] and [[Direct3D]] began to support shaders. The first shader-capable GPUs only supported [[pixel shading]], but [[vertex shaders]] were quickly introduced once developers realized the power of shaders. The first video card with a programmable pixel shader was the Nvidia [[GeForce 3]] (NV20), released in 2001.<ref>{{Cite web|url=https://www.pcgamer.com/from-voodoo-to-geforce-the-awesome-history-of-3d-graphics/|title=From Voodoo to GeForce: The Awesome History of 3D Graphics|first=Paul|last=Lilly|website=PC Gamer|date=May 19, 2009}}</ref> [[Geometry shaders]] were introduced with Direct3D 10 and OpenGL 3.2. Eventually, graphics hardware evolved toward a [[unified shader model]].


== Graphics shaders ==
The traditional use of shaders is to operate on data in the graphics pipeline to control the rendering of an image. Graphics shaders can be classified according to their position in the pipeline, the data being manipulated, and the graphics API being used.
 
Shaders replace a section of the graphics hardware typically called the Fixed Function Pipeline (FFP), so-called because it performs [[Computer graphics lighting|lighting]] and texture mapping in a hard-coded manner. Shaders provide a programmable alternative to this hard-coded approach.<ref>{{cite web|url=http://www.directx.com/shader/|title=ShaderWorks' update - DirectX Blog|date=August 13, 2003}}</ref>
 
The basic [[graphics pipeline]] is as follows:
 
* The CPU sends instructions (compiled [[shading language]] programs) and geometry data to the graphics processing unit, located on the graphics card.
* Within the vertex shader, the geometry is transformed.
* If a geometry shader is present and active, it can modify the geometry in the scene.
* If a tessellation shader is in the graphics processing unit and active, the geometries in the scene can be [[Subdivision (computer graphics)|subdivided]].
* The calculated geometry is triangulated (subdivided into triangles).
* Triangles are broken down into [[fragment quads]] (one fragment quad is a 2&nbsp;×&nbsp;2 fragment primitive).
* Fragment quads are modified according to the fragment shader.
* The depth test is performed; fragments that pass are written to the [[frame buffer]] and may be blended with its existing contents.
 
The graphics pipeline uses these steps to transform three-dimensional (or two-dimensional) data into two-dimensional data suitable for display, typically a large pixel matrix or "[[frame buffer]]".
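As a minimal sketch of how these programmable stages replace fixed-function transformation and shading, the following GLSL vertex and fragment shader pair transforms each vertex with an application-supplied matrix and fills every covered pixel with a constant color. The attribute and uniform names are arbitrary choices made for this example, not part of any particular engine or API.

<syntaxhighlight lang="glsl">
#version 330 core
// Vertex shader: transforms each incoming vertex from model space to clip space.
layout(location = 0) in vec3 inPosition;   // per-vertex position supplied by the application
uniform mat4 uModelViewProjection;         // combined transform supplied by the application

void main()
{
    gl_Position = uModelViewProjection * vec4(inPosition, 1.0);
}
</syntaxhighlight>

<syntaxhighlight lang="glsl">
#version 330 core
// Fragment shader: gives every covered pixel the same constant color.
out vec4 outColor;

void main()
{
    outColor = vec4(1.0, 0.5, 0.2, 1.0); // opaque orange
}
</syntaxhighlight>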
 


=== Fragment shaders<span class="anchor" id="Pixel shaders"></span> ===
{{multiple image|perrow = 1|total_width=200
| image1 = dragon_white.jpg
| image2 = dragon_texture.jpg
| image4 = dragon_pbr.jpg
| footer = The [[Stanford dragon]] model rendered using fragment shaders with different outputs: solid white, applied texture, physically based shading.
}}
Fragment shaders, also known as '''pixel shaders''', compute [[color]] and other attributes of each "fragment": a unit of rendering work affecting at most a single output [[pixel]]. The simplest kinds of pixel shaders output one screen pixel as a color value; more complex shaders with multiple inputs/outputs are also possible.<ref>{{cite web|url=http://www.lighthouse3d.com/tutorials/glsl-tutorial/fragment-shader/|title=GLSL Tutorial – Fragment Shader|date=June 9, 2011}}</ref> Pixel shaders range from simply always outputting the same color, to applying a [[lighting]] value, to doing [[bump mapping]], [[shadows]], [[specular highlight]]s, [[translucency]] and other phenomena. They can alter the depth of the fragment (for [[Z-buffering]]), or output more than one color if multiple [[render target]]s are active. In 3D graphics, a pixel shader alone cannot produce some kinds of complex effects because it operates only on a single fragment, without knowledge of a scene's geometry (i.e. vertex data). However, pixel shaders do have knowledge of the screen coordinate being drawn, and can sample the screen and nearby pixels if the contents of the entire screen are passed as a texture to the shader. This technique can enable a wide variety of two-dimensional [[Post processing (images)|postprocessing]] effects such as [[Gaussian blur|blur]], or [[edge detection]]/enhancement for [[Cel shader|cartoon/cel shaders]]. Pixel shaders may also be applied in ''intermediate'' stages to any two-dimensional images—[[sprite (computer graphics)|sprite]]s or [[texture (computer graphics)|texture]]s—in the [[Graphics pipeline|pipeline]], whereas [[vertex shaders]] always require a 3D scene. For instance, a pixel shader is the only kind of shader that can act as a [[video postprocessing|postprocessor]] or [[video filter|filter]] for a [[video stream]] after it has been [[rasterization|rasterized]].
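As an illustration of the screen-as-texture technique described above, the following GLSL fragment shader sketch converts a previously rendered frame to grayscale as a post-processing pass. The texture binding and varying names are assumptions made for this example; the application would render the scene to a texture and draw a full-screen quad with this shader.

<syntaxhighlight lang="glsl">
#version 330 core
// Full-screen post-processing fragment shader: reads the previously rendered
// frame (passed in as a texture) and outputs a grayscale version of it.
in vec2 vTexCoord;                 // interpolated screen-space texture coordinate
uniform sampler2D uScreenTexture;  // the rendered frame, bound by the application
out vec4 outColor;

void main()
{
    vec3 color = texture(uScreenTexture, vTexCoord).rgb;
    float luma = dot(color, vec3(0.2126, 0.7152, 0.0722)); // Rec. 709 luminance weights
    outColor = vec4(vec3(luma), 1.0);
}
</syntaxhighlight>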


=== Vertex shaders ===
Vertex shaders are run once for each 3D [[vertex (computer graphics)|vertex]] given to the graphics processor. The purpose is to transform each vertex's 3D position in virtual space to the 2D coordinate at which it appears on the screen (as well as a depth value for the Z-buffer).<ref>{{cite web|url=http://www.lighthouse3d.com/tutorials/glsl-tutorial/vertex-shader/|title=GLSL Tutorial – Vertex Shader|date=June 9, 2011}}</ref> Vertex shaders can manipulate properties such as position, color and texture coordinates, but cannot create new vertices. The output of the vertex shader goes to the next stage in the pipeline, which is either a geometry shader if present, or the [[rasterizer]]. Vertex shaders can enable powerful control over the details of position, movement, lighting, and color in any scene involving [[3D model]]s.
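For example, a vertex shader can combine the projection described above with per-vertex manipulation. The following GLSL sketch projects each vertex and also displaces it over time to animate a simple ripple; the uniform names and the displacement formula are illustrative assumptions for this example, not taken from a specific application.

<syntaxhighlight lang="glsl">
#version 330 core
// Vertex shader: besides projecting the vertex, it displaces the position
// over time to animate a simple wave, illustrating per-vertex manipulation.
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec2 inTexCoord;

uniform mat4 uModelViewProjection;
uniform float uTime;               // elapsed time in seconds, set by the application

out vec2 vTexCoord;

void main()
{
    vec3 displaced = inPosition;
    displaced.y += 0.1 * sin(uTime + inPosition.x * 4.0); // small animated ripple
    vTexCoord = inTexCoord;
    gl_Position = uModelViewProjection * vec4(displaced, 1.0);
}
</syntaxhighlight>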


=== Geometry shaders ===
Geometry shaders were introduced in Direct3D 10 and OpenGL 3.2; formerly available in OpenGL 2.0+ with the use of extensions.<ref>[http://www.opengl.org/wiki/Geometry_Shader Geometry Shader - OpenGL]. Retrieved on December 21, 2011.</ref> This type of shader can generate new graphics [[primitive (geometry)|primitive]]s, such as points, lines, and triangles, from those primitives that were sent to the beginning of the [[graphics pipeline]].<ref>{{cite web|url=http://msdn.microsoft.com/en-us/library/bb205123(VS.85).aspx|title=Pipeline Stages (Direct3D 10) (Windows)|website=msdn.microsoft.com|date=January 6, 2021 }}</ref>


Geometry shader programs are executed after vertex shaders. They take as input a whole primitive, possibly with adjacency information. For example, when operating on triangles, the three vertices are the geometry shader's input. The shader can then emit zero or more primitives, which are rasterized and their fragments ultimately passed to a [[pixel shader]].
Typical uses of a geometry shader include point sprite generation, geometry [[Tessellation (computer graphics)|tessellation]], [[shadow volume]] extrusion, and single-pass rendering to a [[cube map]]. A typical real-world example of the benefits of geometry shaders is automatic mesh complexity modification: a series of line strips representing control points for a curve is passed to the geometry shader, and depending on the complexity required, the shader can automatically generate extra lines, each of which provides a better approximation of the curve.
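A minimal GLSL geometry shader is sketched below. It simply re-emits each incoming triangle unchanged; effects such as the curve refinement described above would emit additional or modified primitives at this point. This is a generic illustration rather than code from any particular application.

<syntaxhighlight lang="glsl">
#version 330 core
// Geometry shader: receives one whole triangle and re-emits it unchanged.
// Real uses would emit additional or modified primitives here.
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

void main()
{
    for (int i = 0; i < 3; ++i)
    {
        gl_Position = gl_in[i].gl_Position; // pass the vertex through unchanged
        EmitVertex();
    }
    EndPrimitive();
}
</syntaxhighlight>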


=== Tessellation shaders ===
As of OpenGL 4.0 and Direct3D 11, a new shader class called a tessellation shader has been added. It adds two new shader stages to the traditional model: tessellation control shaders (also known as hull shaders) and tessellation evaluation shaders (also known as domain shaders), which together allow simpler meshes to be subdivided into finer meshes at run-time according to a mathematical function. The function can be related to a variety of variables, most notably the distance from the viewing camera, to allow active [[level of detail (computer graphics)|level-of-detail]] scaling. This allows objects close to the camera to have fine detail, while objects farther away can have coarser meshes yet seem comparable in quality. It can also drastically reduce the mesh bandwidth required, by allowing meshes to be refined once inside the shader units instead of downsampling very complex ones from memory. Some algorithms can upsample any arbitrary mesh, while others allow for "hinting" in meshes to dictate the most characteristic vertices and edges.
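The following GLSL tessellation control shader is a simplified sketch of the "hull" stage: it forwards the patch's control points and sets the subdivision levels from a single application-supplied value. A real level-of-detail scheme would instead derive that value from the distance to the camera, as described above; the uniform name and patch size are assumptions made for this example.

<syntaxhighlight lang="glsl">
#version 400 core
// Tessellation control shader: passes the patch's control points through and
// chooses how finely the fixed-function tessellator subdivides the patch.
layout(vertices = 3) out;

uniform float uTessLevel;   // subdivision factor chosen by the application

void main()
{
    // Copy this invocation's control point to the output patch.
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;

    // Set the tessellation levels once per patch.
    if (gl_InvocationID == 0)
    {
        gl_TessLevelInner[0] = uTessLevel;
        gl_TessLevelOuter[0] = uTessLevel;
        gl_TessLevelOuter[1] = uTessLevel;
        gl_TessLevelOuter[2] = uTessLevel;
    }
}
</syntaxhighlight>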


=== Primitive and Mesh shaders ===
Circa 2017, the [[AMD Vega]] [[microarchitecture]] added support for a new shader stage—primitive shaders—somewhat akin to compute shaders with access to the data necessary to process geometry.<ref>{{cite web|url=http://www.trustedreviews.com/news/amd-vega-specs-performance-release-date-technology-explained|title=Radeon RX Vega Revealed: AMD promises 4K gaming performance for $499 - Trusted Reviews|work=TrustedReviews |date=July 31, 2017}}</ref><ref>{{Cite web|url=https://techreport.com/review/31224/the-curtain-comes-up-on-amds-vega-architecture/|title=The curtain comes up on AMD's Vega architecture|date=January 5, 2017}}</ref>


Nvidia introduced mesh and task shaders with its [[Turing (microarchitecture)|Turing microarchitecture]] in 2018, which are also modelled after compute shaders.<ref>{{cite web|url=https://devblogs.nvidia.com/nvidia-turing-architecture-in-depth/|title=NVIDIA Turing Architecture In-Depth|date=September 14, 2018}}</ref><ref>{{cite web|url=https://devblogs.nvidia.com/introduction-turing-mesh-shaders/|title=Introduction to Turing Mesh Shaders|date=September 17, 2018}}</ref> Nvidia Turing was the world's first GPU microarchitecture to support mesh shading through the DirectX 12 Ultimate API, several months before the Ampere RTX 30 series was released.<ref>{{cite web | url=https://www.nvidia.com/en-us/geforce/news/directx-12-ultimate-game-ready-driver/ | title=DirectX 12 Ultimate Game Ready Driver Released; Also Includes Support for 9 New G-SYNC Compatible Gaming Monitors }}</ref>


In 2020, AMD and Nvidia released the [[RDNA 2]] and [[Ampere (microarchitecture)|Ampere]] microarchitectures, which both support mesh shading through [[DirectX#DirectX 12 Ultimate|DirectX 12 Ultimate]].<ref>{{Cite web|date=2020-03-19|title=Announcing DirectX 12 Ultimate|url=https://devblogs.microsoft.com/directx/announcing-directx-12-ultimate/|access-date=2021-05-25|website=DirectX Developer Blog|language=en-US}}</ref> These mesh shaders allow the GPU to handle more complex algorithms, offloading more work from the CPU to the GPU, and in algorithm-intensive rendering can increase the frame rate or the number of triangles in a scene by an order of magnitude.<ref>{{Cite web|date=2021-05-21|title=Realistic Lighting in Justice with Mesh Shading|url=https://developer.nvidia.com/blog/realistic-lighting-in-justice-with-mesh-shading/|access-date=2021-05-25|website=NVIDIA Developer Blog|language=en-US}}</ref> Intel announced that Intel Arc Alchemist GPUs shipping in Q1 2022 would support mesh shaders.<ref>{{Cite web|url=https://www.anandtech.com/show/16895/a-sneak-peek-at-intels-xe-hpg-gpu-architecture|archive-url=https://web.archive.org/web/20210819143825/https://www.anandtech.com/show/16895/a-sneak-peek-at-intels-xe-hpg-gpu-architecture|url-status=dead|archive-date=August 19, 2021|title=Intel Architecture Day 2021: A Sneak Peek At The Xe-HPG GPU Architecture|first=Ryan|last=Smith|website=www.anandtech.com}}</ref>


=== Ray-tracing shaders ===
[[Ray-tracing (graphics)|Ray tracing]] shaders are supported by [[Microsoft]] via [[DirectX Raytracing]], by the [[Khronos Group]] via [[Vulkan (API)|Vulkan]], [[GLSL]], and [[SPIR-V]],<ref>{{cite web |url=https://www.khronos.org/blog/vulkan-ray-tracing-final-specification-release |title=Vulkan Ray Tracing Final Specification Release |date=2020-11-23 |df=mdy |department=Blog |website=[[Khronos Group]] |access-date=2021-02-22}}</ref> and by [[Apple Inc.|Apple]] via [[Metal (API)|Metal]]. [[NVIDIA]] and [[AMD]] refer to "ray tracing shaders" as "ray tracing cores". Unlike a [[unified shader]], one ray tracing shader can contain multiple ALUs.<ref>{{Cite web| title=RTSL: a Ray Tracing Shading Language | url=https://graphics.stanford.edu/~boulos/papers/rtsl.pdf | archive-url=https://web.archive.org/web/20111012221101/http://graphics.stanford.edu/~boulos/papers/rtsl.pdf | archive-date=2011-10-12}}</ref>
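As an illustration (not a complete ray-tracing pipeline), the following GLSL ray generation shader sketch, using the GL_EXT_ray_tracing extension targeted at Vulkan, casts one primary ray per pixel into an acceleration structure. The descriptor bindings, camera setup, and payload layout are assumptions made for this example and would be defined by the application together with matching hit and miss shaders.

<syntaxhighlight lang="glsl">
#version 460
#extension GL_EXT_ray_tracing : require
// Ray generation shader: casts one primary ray per pixel into the
// acceleration structure and writes the shaded result to an output image.
layout(binding = 0) uniform accelerationStructureEXT uScene;   // top-level acceleration structure
layout(binding = 1, rgba8) uniform image2D uOutput;            // output image

layout(location = 0) rayPayloadEXT vec3 hitColor;  // filled in by hit/miss shaders

void main()
{
    // Map this invocation to a pixel centre in [0,1) x [0,1).
    vec2 uv = (vec2(gl_LaunchIDEXT.xy) + 0.5) / vec2(gl_LaunchSizeEXT.xy);

    vec3 origin    = vec3(0.0, 0.0, -2.0);                 // simple fixed camera
    vec3 direction = normalize(vec3(uv * 2.0 - 1.0, 1.0)); // pinhole projection

    hitColor = vec3(0.0);
    traceRayEXT(uScene,
                gl_RayFlagsOpaqueEXT, 0xFF,   // ray flags, cull mask
                0, 0, 0,                      // SBT record offset, stride, miss index
                origin, 0.001, direction, 1000.0,
                0);                           // payload location

    imageStore(uOutput, ivec2(gl_LaunchIDEXT.xy), vec4(hitColor, 1.0));
}
</syntaxhighlight>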


== Compute shaders ==
[[Compute shader]]s are not limited to graphics applications, but use the same execution resources for [[GPGPU]]. They may be used in graphics pipelines, e.g., for additional stages in animation or lighting algorithms (such as [[tiled forward rendering]]). Some rendering APIs allow compute shaders to easily share data resources with the graphics pipeline.
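A compute shader resembles a fragment or vertex shader but is dispatched over an abstract grid of invocations rather than over geometry. The following GLSL sketch inverts the colours of an image, one texel per invocation; the image bindings and workgroup size are arbitrary choices made for this example.

<syntaxhighlight lang="glsl">
#version 430
// Compute shader: each invocation processes one texel of an image, here
// simply inverting its colour. Dispatched by the application in 16x16 groups.
layout(local_size_x = 16, local_size_y = 16) in;
layout(binding = 0, rgba8) uniform readonly  image2D uInput;
layout(binding = 1, rgba8) uniform writeonly image2D uOutput;

void main()
{
    ivec2 texel = ivec2(gl_GlobalInvocationID.xy);
    if (any(greaterThanEqual(texel, imageSize(uInput))))
        return; // guard against partial workgroups at the image border

    vec4 color = imageLoad(uInput, texel);
    imageStore(uOutput, texel, vec4(1.0 - color.rgb, color.a));
}
</syntaxhighlight>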


=== Tensor shaders ===
Tensor shaders may be integrated into [[AI accelerator|NPU]]s or [[GPU]]s. Tensor shaders are supported by [[Microsoft]] via [[DirectML]], by the [[Khronos Group]] via [[OpenVX]], by [[Apple, Inc.|Apple]] via [[Core ML]], by [[Google, Inc.|Google]] via [[TensorFlow]], and by the [[Linux Foundation]] via [[ONNX]].<ref>{{Cite web |title=NNAPI Migration Guide {{!}} Android NDK |url=https://developer.android.com/ndk/guides/neuralnetworks/migration-guide |access-date=2024-08-01 |website=Android Developers |language=en}}</ref> [[NVIDIA]] and [[AMD]] refer to "tensor shaders" as "tensor cores". Unlike a unified shader, one tensor shader can contain multiple ALUs.<ref>{{Cite web| title=Review on GPU Architecture | url=https://www.irjet.net/archives/V11/i7/IRJET-V11I7124.pdf | archive-url=https://web.archive.org/web/20240906010254/https://www.irjet.net/archives/V11/i7/IRJET-V11I7124.pdf | archive-date=2024-09-06}}</ref>


== Compute kernels ==
Compute kernels are routines compiled for high-throughput [[Hardware acceleration|accelerators]] (such as [[Graphics processing unit|graphics processing units]] (GPUs), [[Digital signal processor|digital signal processors]] (DSPs), or [[Field-programmable gate array|field-programmable gate arrays]] (FPGAs)), separate from but used by a main program (typically running on a [[central processing unit]]). They may be specified in a separate [[programming language]] such as [[OpenCL C]], written as "compute shaders" in a [[shading language]], or embedded directly in [[application code]] written in a [[high level language]]. They are sometimes called compute shaders, sharing [[Execution unit|execution units]] with [[vertex shaders]] and [[pixel shaders]] on GPUs, but are not limited to execution on one class of device or [[Graphics API|graphics API]].<ref>{{citation |title=Introduction to Compute Programming in Metal |date=14 October 2014 |url=http://metalbyexample.com/introduction-to-compute/}}</ref><ref>{{citation |title=CUDA Tutorial - the Kernel |date=11 July 2009 |url=http://supercomputingblog.com/cuda/cuda-tutorial-2-the-kernel/}}</ref> Compute kernels roughly correspond to [[Inner loop|inner loops]] when implementing algorithms in traditional languages (except that there is no implied sequential operation), or to code passed to [[internal iterators]]. Microsoft supports this as [[DirectCompute]].


This [[programming paradigm]] maps well to [[Vector processor|vector processors]]: there is an assumption that each invocation of a kernel within a batch is independent, allowing for [[data parallel]] execution. However, [[atomic operations]] may be used for [[Synchronization (computer science)|synchronization]] between elements (for interdependent work) in some scenarios. Individual invocations are given indices (in one or more dimensions) from which arbitrary addressing of buffer data may be performed (including [[Scatter gather (vector addressing)|scatter-gather]] operations), so long as the non-overlapping assumption is respected.
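The following GLSL compute kernel sketch illustrates this model: each invocation addresses one buffer element through its global index and contributes to a shared result with an atomic addition, with no other synchronization. The buffer layout, names, and integer scaling are assumptions made for this example.

<syntaxhighlight lang="glsl">
#version 430
// Compute kernel sketch: every invocation handles one element of a buffer,
// addressed by its global index, and accumulates into a shared sum atomically.
layout(local_size_x = 256) in;

layout(std430, binding = 0) readonly buffer InputData  { float values[]; };
layout(std430, binding = 1)          buffer ResultData { uint  totalMilli; };

uniform uint uCount;   // number of valid elements in the input buffer

void main()
{
    uint i = gl_GlobalInvocationID.x;
    if (i >= uCount)
        return; // invocations beyond the data size do nothing

    // Scale to an integer so the accumulation can use an integer atomic.
    uint contribution = uint(values[i] * 1000.0);
    atomicAdd(totalMilli, contribution);
}
</syntaxhighlight>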
 
The [[Vulkan (API)|Vulkan API]] provides the intermediate [[SPIR-V]] representation to describe both graphical shaders and compute kernels in a [[Intermediate representation|language-independent]] and [[machine independent|machine-independent]] manner. The intention is to facilitate language evolution and to provide a more natural way to leverage GPU compute capabilities, in line with hardware developments such as [[Unified Memory Architecture]] and [[Heterogeneous System Architecture]]. This allows closer cooperation between the CPU and GPU.
 
Much work has been done on generating kernels with [[large language model]]s (LLMs) as a means of optimizing code. KernelBench,<ref>{{cite web |title=KernelBench |url=https://scalingintelligence.stanford.edu/blogs/kernelbench/}}</ref> created by the Scaling Intelligence Lab at [[Stanford University|Stanford]], provides a framework to evaluate the ability of LLMs to generate efficient GPU kernels. [[Cognition AI|Cognition]] has created Kevin-32B,<ref>{{cite web |title=Cognition &#124; Kevin-32B: Multi-Turn RL for Writing CUDA Kernels |url=https://cognition.ai/blog/kevin-32b}}</ref> a model for writing efficient CUDA kernels that is currently the highest-performing model on KernelBench.


== Programming ==
{{Main|Shading language}}
 
Several programming languages exist specifically for writing shaders; which one is used depends on the target environment. The shading language for OpenGL is [[OpenGL Shading Language|GLSL]], and Direct3D uses [[High-Level Shader Language|HLSL]]. The [[Metal (iOS API)|Metal framework]], used by Apple devices, has its own shading language called [[Shading language#Metal Shading Language|Metal Shading Language]].
 
Increasingly in modern graphics APIs, shaders are compiled into [[Standard Portable Intermediate Representation|SPIR-V]], an [[Intermediate representation|intermediate language]], before they are distributed to the end user. This standard allows a more flexible choice of shading language, regardless of target platform.<ref>{{Cite web |last=Kessenich |first=John |date=2015 |title=An Introduction to SPIR-V |url=https://registry.khronos.org/SPIR-V/papers/WhitePaper.pdf |website=[[Khronos Group]]}}</ref> First supported by Vulkan and OpenGL, SPIR-V is also being adopted by Direct3D.<ref>{{Cite web |author=AleksandarK |date=2024-09-20 |title=Microsoft DirectX 12 Shifts to SPIR-V as Default Interchange Format |url=https://www.techpowerup.com/326819/microsoft-directx-12-shifts-to-spir-v-as-default-interchange-format |access-date=2025-09-08 |website=TechPowerUp |language=en}}</ref>


=== GUI shader editors ===


Modern [[video game]] development platforms such as [[Unity (game engine)|Unity]], [[Unreal Engine]] and [[Godot (game engine)|Godot]] increasingly include [[Visual programming language|node-based editor]]s that can create shaders without the need for written code; the user is instead presented with a [[directed graph]] of connected nodes that allow users to direct various textures, maps, and mathematical functions into output values like the diffuse color, the specular color and intensity, roughness/metalness, height, normal, and so on. The graph is then compiled into a shader.


== See also ==
* [[GLSL]]
* [[List of common shading algorithms]]
* [[Vector processor]]
* [[DirectCompute]]
* [[CUDA]]
* [[OpenMP]]
* [[OpenCL]]
* [[SPIR-V]]
* [[HLSL]]
* [[SYCL]]
* [[Compute kernel]]
* [[Metal (API)]]
* [[Shading language]]
* [[GPGPU]]
* {{section link|RISC-V|Vector extension}}
* [[Digital signal processor]]
* [[Field-programmable gate array]]
* [[AI accelerator]]
** [[Vision processing unit]]
* [[Manycore processor|Manycore]]
* [[Stream processing]]
* [[Computer for operations with functions]]


== References ==
{{Reflist}}
