{{Short description|Type of parallel processing}}
{{Redirect|SIMD|the cryptographic hash function|SIMD (hash function)|the Scottish statistical tool|Scottish index of multiple deprivation}}
{{See also|SIMD within a register|Single instruction, multiple threads}}
{{Update|inaccurate=yes|date=March 2017}}
{{Flynn's Taxonomy}}
[[File:SIMD2.svg|thumb|Single instruction, multiple data]]


'''Single instruction, multiple data''' ('''SIMD''') is a type of [[parallel computing|parallel processing]] in [[Flynn's taxonomy]]. SIMD describes computers with [[multiple processing elements]] that perform the same operation on multiple data points simultaneously. SIMD can be internal (part of the hardware design) and it can be directly accessible through an [[instruction set architecture]] (ISA), but it should not be confused with an ISA.


Such machines exploit [[Data parallelism|data level parallelism]], but not [[Concurrent computing|concurrency]]: there are simultaneous (parallel) computations, but each unit performs exactly the same instruction at any given moment (just with different data). A simple example is adding many pairs of numbers together: all of the SIMD units perform an addition, but each one has a different pair of values to add. SIMD is especially applicable to common tasks such as adjusting the contrast in a [[digital image]] or adjusting the volume of [[digital audio]]. Most modern [[central processing unit]] (CPU) designs include SIMD instructions to improve the performance of [[multimedia]] use. In recent CPUs, SIMD units are tightly coupled with cache hierarchies and prefetch mechanisms, which reduce latency during large block operations. For instance, AVX-512-enabled processors can apply fused multiply-add (FMA) operations across an entire 512-bit register with a single instruction.


== Confusion between SIMT and SIMD ==
{{See also|SIMD within a register|Single instruction, multiple threads|Vector processor}}

SIMD has three different subcategories in [[Flynn's taxonomy#Single instruction stream, multiple data streams (SIMD)|Flynn's 1972 Taxonomy]], one of which is [[single instruction, multiple threads]] (SIMT). SIMT should not be confused with [[Thread (computing)|software threads]] or [[Multithreading (computer architecture)|hardware threads]], both of which are task time-sharing (time-slicing). SIMT is true simultaneous parallel hardware-level execution, such as in the [[ILLIAC IV]].

[[Image:ILLIAC_IV.jpg|thumb|[[ILLIAC IV]] Array overview, from ARPA-funded Introductory description by Steward Denenberg, July 15, 1971<ref>{{Cite web | title=Archived copy | url=https://apps.dtic.mil/sti/tr/pdf/ADA954882.pdf | archive-url=https://web.archive.org/web/20240427173522/https://apps.dtic.mil/sti/tr/pdf/ADA954882.pdf | archive-date=2024-04-27}}</ref>]]

Each hardware element (PU) working on an individual data item is sometimes referred to as a SIMD lane or channel. Modern [[graphics processing unit]]s (GPUs) are often wide SIMD implementations, typically with more than 16 lanes.{{cn|date=July 2024}} Some newer GPUs go beyond simple SIMD and integrate mixed-precision SIMD pipelines, which allow concurrent execution of 8-bit, 16-bit, and 32-bit operations in different lanes. This is critical for applications like AI inference, where mixed precision boosts throughput.

SIMD can also exist in both fixed-width and scalable vector forms. Fixed-width SIMD units operate on a constant number of data points per instruction, while scalable designs, like the RISC-V Vector extension or ARM's SVE, allow the number of data elements to vary depending on the hardware implementation. This improves forward compatibility across generations of processors.

SIMD should not be confused with [[vector processing]], characterized by the [[Cray-1]] and clarified in [[Duncan's taxonomy]]. The [[Vector processor#Difference between SIMD and vector processors|difference between SIMD and vector processors]] is primarily the presence of a Cray-style {{code|SET VECTOR LENGTH}} instruction.


==History==
The first known operational use of [[SIMD within a register]] was in the [[TX-2]], in 1958. It was capable of 36-bit operations and two 18-bit or four 9-bit sub-word operations.

The first commercial use of SIMD instructions was in the [[ILLIAC IV]], which was completed in 1972. It included 64 (of an originally designed 256) processors with local memory to hold different values while all performing the same instruction; separate hardware distributed the values to be processed and gathered the results.


[[vector processor|Vector supercomputers]] of the early 1970s such as the [[CDC STAR-100|CDC Star-100]] and the [[TI Advanced Scientific Computer|Texas Instruments ASC]] could operate on a "vector" of data with a single instruction. Vector processing was especially popularized by [[Cray]] in the 1970s and 1980s. Vector processing architectures are now considered separate from SIMD computers: [[Duncan's Taxonomy]] includes them whereas [[Flynn's Taxonomy]] does not, due to Flynn's work (1966, 1972) pre-dating the [[Cray-1]] (1977). The complexity of vector processors, however, inspired a simpler arrangement known as [[SIMD within a register]].


The first era of modern SIMD computers was characterized by [[massively parallel processing]]-style [[supercomputer]]s such as the [[Thinking Machines Corporation|Thinking Machines]] [[Connection Machine]] CM-1 and CM-2. These computers had many limited-functionality processors that would work in parallel. For example, each of 65,536 single-bit processors in a Thinking Machines CM-2 would execute the same instruction at the same time, allowing it, for instance, to logically combine 65,536 pairs of bits at a time, using a hypercube-connected network or processor-dedicated RAM to find its operands. Supercomputing moved away from the SIMD approach when inexpensive scalar [[multiple instruction, multiple data]] (MIMD) approaches based on commodity processors such as the [[Intel i860|Intel i860 XP]] became more powerful, and interest in SIMD waned.<ref>{{cite web|url=http://www.cs.kent.edu/~walker/classes/pdc.f01/lectures/MIMD-1.pdf|title=MIMD1 - XP/S, CM-5}}</ref>


The current era of SIMD processors grew out of the desktop-computer market rather than the supercomputer market. As desktop processors became powerful enough to support real-time gaming and audio/video processing during the 1990s, demand grew for this type of computing power, and microprocessor vendors turned to SIMD to meet the demand.<ref name="conte">{{cite conference |title=The long and winding road to high-performance image processing with MMX/SSE |first1=G. |last1=Conte |first2=S. |last2=Tommesani |first3=F. |last3=Zanichelli |book-title=Proc. Fifth IEEE Int'l Workshop on Computer Architectures for Machine Perception |year=2000 |doi=10.1109/CAMP.2000.875989 |s2cid=13180531 |hdl=11381/2297671}}</ref> This resurgence also coincided with the rise of [[DirectX]] and OpenGL shader models, which heavily leveraged SIMD under the hood. The graphics APIs encouraged programmers to adopt data-parallel programming styles, indirectly accelerating SIMD adoption in desktop software. Hewlett-Packard introduced [[Multimedia Acceleration eXtensions]] (MAX) instructions into [[PA-RISC]] 1.1 desktops in 1994 to accelerate MPEG decoding.<ref>{{cite book |first=R.B. |last=Lee |chapter=Realtime MPEG video via software decompression on a PA-RISC processor |title=digest of papers Compcon '95. Technologies for the Information Superhighway |year=1995 |pages=186–192 |doi=10.1109/CMPCON.1995.512384 |isbn=0-8186-7029-0|s2cid=2262046}}</ref> Sun Microsystems introduced SIMD integer instructions in its "[[Visual Instruction Set|VIS]]" instruction set extensions in 1995, in its [[UltraSPARC|UltraSPARC I]] microprocessor. MIPS followed suit with their similar [[MDMX]] system.


The first widely deployed desktop SIMD was with Intel's [[MMX (instruction set)|MMX]] extensions to the [[x86]] architecture in 1996. This sparked the introduction of the much more powerful [[AltiVec]] system in the [[Motorola]] [[PowerPC]] and IBM's [[IBM Power microprocessors|POWER]] systems. Intel responded in 1999 by introducing the all-new [[Streaming SIMD Extensions|SSE]] system. Since then, there have been several extensions to the SIMD instruction sets for both architectures. Intel developed the [[Advanced Vector Extensions]] AVX, [[AVX2]] and [[AVX-512]]. AMD supports AVX, [[AVX2]], and [[AVX-512]] in their current products.<ref>{{Cite web |title=AMD Zen 4 AVX-512 Performance Analysis On The Ryzen 9 7950X Review |url=https://www.phoronix.com/review/amd-zen4-avx512 |access-date=2023-07-13 |website=www.phoronix.com |language=en}}</ref>
All of these developments have been oriented toward support for real-time graphics, and are therefore oriented toward processing in two, three, or four dimensions, usually with vector lengths of between two and sixteen words, depending on data type and architecture. When new SIMD architectures need to be distinguished from older ones, the newer architectures are then considered "short-vector" architectures, as earlier SIMD and vector supercomputers had vector lengths from 64 to 64,000. A modern supercomputer is almost always a cluster of MIMD computers, each of which implements (short-vector) SIMD instructions.
==Advantages==
An application that may take advantage of SIMD is one where the same value is being added to (or subtracted from) a large number of data points, a common operation in many [[multimedia]] applications. One example would be changing the brightness of an image. Each [[pixel]] of an image consists of three values for the brightness of the red (R), green (G) and blue (B) portions of the color. To change the brightness, the R, G and B values are read from memory, a value is added to (or subtracted from) them, and the resulting values are written back out to memory. Audio [[Digital signal processing|DSPs]] would likewise, for volume control, multiply both the left and right channels simultaneously.
With a SIMD processor there are two improvements to this process. First, the data is understood to be in blocks, and a number of values can be loaded all at once. Instead of a series of instructions saying "retrieve this pixel, now retrieve the next pixel", a SIMD processor will have a single instruction that effectively says "retrieve ''n'' pixels" (where ''n'' is a number that varies from design to design). For a variety of reasons, this can take much less time than retrieving each pixel individually, as with a traditional CPU design. Moreover, SIMD instructions can exploit data reuse, where the same operand is used across multiple calculations, via broadcasting features. For example, multiplying several pixels by a constant scalar value can be done more efficiently by loading the scalar once and broadcasting it across a SIMD register.
Another advantage is that the instruction operates on all loaded data in a single operation. In other words, if the SIMD system works by loading up eight data points at once, the <code>add</code> operation being applied to the data will happen to all eight values at the same time. This parallelism is separate from the parallelism provided by a [[superscalar processor]]; the eight values are processed in parallel even on a non-superscalar processor, and a superscalar processor may be able to perform multiple SIMD operations in parallel.
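
As a concrete illustration, the following minimal sketch uses x86 [[SSE2]] intrinsics (the function name and the assumption that the pixel count is a multiple of 16 are ours, for brevity): a brightness constant is broadcast once, then added to 16 pixels per instruction, with saturation so values clamp at 255 instead of wrapping.

<syntaxhighlight lang="c">
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stddef.h>

/* Brighten an 8-bit grayscale image, 16 pixels per SSE2 instruction. */
void brighten(unsigned char *pixels, size_t n, unsigned char amount)
{
    __m128i delta = _mm_set1_epi8((char)amount);        /* broadcast the constant once */
    for (size_t i = 0; i < n; i += 16) {                /* n assumed to be a multiple of 16 */
        __m128i px = _mm_loadu_si128((__m128i *)(pixels + i)); /* load 16 pixels at once */
        px = _mm_adds_epu8(px, delta);                  /* 16 saturating adds in one instruction */
        _mm_storeu_si128((__m128i *)(pixels + i), px);  /* store 16 results at once */
    }
}
</syntaxhighlight>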


==Disadvantages==
* Not all algorithms can be vectorized easily. For example, a flow-control-heavy task like code [[parsing]] may not easily benefit from SIMD; however, it is theoretically possible to vectorize comparisons and ''batch'' flows of data to target maximal cache optimality, though this technique requires more intermediate state. (Batch-pipeline systems, such as GPUs or software rasterization pipelines, are most advantageous for cache control when implemented with SIMD intrinsics, but they are not exclusive to SIMD features.) Vectorization also requires independence between the elements of a series, such as the characters of a string, and avoiding such dependences adds further complexity. Additionally, divergent control flow, where different data lanes would follow different execution paths, can lead to underutilization of SIMD hardware; to handle such divergence, techniques like masking and predication are often employed (see the sketch after this list), but they introduce performance overhead and complexity.
* Large register files, which increase power consumption and required chip area.
* Currently, implementing an algorithm with SIMD instructions usually requires human labor; most compilers do not generate SIMD instructions from a typical [[C (programming language)|C]] program, for instance. [[Automatic vectorization]] in compilers is an active area of computer science research. (Compare [[Vector processor|vector processing]].)
* Programming with particular SIMD instruction sets can involve numerous low-level challenges.
*# SIMD may have restrictions on [[Data structure alignment|data alignment]]; programmers familiar with one particular architecture may not expect this. Worse: the alignment may change from one revision or "compatible" processor to another.
*# Gathering data into SIMD registers and scattering it to the correct destination locations is tricky (sometimes requiring [[permute instruction|permute operations]]) and can be inefficient.
*# Specific instructions like rotations or three-operand addition are not available in some SIMD instruction sets.
*# Instruction sets are architecture-specific: some processors lack SIMD instructions entirely, so programmers must provide non-vectorized implementations (or different vectorized implementations) for them.
*# Different architectures provide different register sizes (e.g. 64, 128, 256 and 512 bits) and instruction sets, meaning that programmers must provide multiple implementations of vectorized code to operate optimally on any given CPU. In addition, the possible set of SIMD instructions grows with each new register size. Unfortunately, for legacy support reasons, the older versions cannot be retired.
*# The early [[MMX (instruction set)|MMX]] instruction set shared a register file with the floating-point stack, which caused inefficiencies when mixing floating-point and MMX code. However, [[SSE2]] corrects this.
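
The following minimal sketch (SSE2 intrinsics; the per-element conditional is an invented example) shows the masking technique mentioned above: both arms of the conditional are computed for every lane and the results are blended with a comparison mask, which is why divergence costs throughput.

<syntaxhighlight lang="c">
#include <emmintrin.h>  /* SSE2 intrinsics */

/* Branch-free per-lane conditional: out[i] = a[i] > 0 ? a[i]*2 : -a[i].
   Both arms are evaluated for all four 32-bit lanes, then merged with a
   mask: the SIMD analogue of predication. */
__m128i conditional_double_or_negate(__m128i a)
{
    __m128i zero    = _mm_setzero_si128();
    __m128i mask    = _mm_cmpgt_epi32(a, zero);           /* lanes with a > 0 become all-ones */
    __m128i doubled = _mm_add_epi32(a, a);                /* "then" arm, computed for every lane */
    __m128i negated = _mm_sub_epi32(zero, a);             /* "else" arm, computed for every lane */
    return _mm_or_si128(_mm_and_si128(mask, doubled),     /* keep "then" where mask is set */
                        _mm_andnot_si128(mask, negated)); /* keep "else" elsewhere */
}
</syntaxhighlight>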
 
To remedy problems 1 and 5, [[RISC-V]]'s vector extension uses an alternative approach: instead of exposing the sub-register-level details to the programmer, the instruction set abstracts them out as a few "vector registers" that use the same interfaces across all CPUs with this instruction set. The hardware handles all alignment issues and "strip-mining" of loops. Machines with different vector sizes would be able to run the same code. LLVM calls this vector type "{{not a typo|vscale}}".{{citation needed|date=June 2021}}
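
As an illustrative sketch of this vector-length-agnostic style (assuming the version 1.0 RISC-V vector C intrinsics from the <code>riscv_vector.h</code> header; the function is our own example), the hardware decides how many elements each pass handles, so the same binary runs on any vector width:

<syntaxhighlight lang="c">
#include <riscv_vector.h>
#include <stdint.h>
#include <stddef.h>

/* out[i] = a[i] + b[i], written once for any RISC-V vector register
   width: vsetvl returns how many elements this pass will process. */
void vec_add_rvv(const int32_t *a, const int32_t *b, int32_t *out, size_t n)
{
    while (n > 0) {
        size_t vl = __riscv_vsetvl_e32m1(n);             /* elements handled this pass */
        vint32m1_t va = __riscv_vle32_v_i32m1(a, vl);    /* unit-stride loads */
        vint32m1_t vb = __riscv_vle32_v_i32m1(b, vl);
        __riscv_vse32_v_i32m1(out, __riscv_vadd_vv_i32m1(va, vb, vl), vl);
        a += vl; b += vl; out += vl; n -= vl;            /* strip-mining handled in code */
    }
}
</syntaxhighlight>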
 
An order of magnitude increase in code size is not uncommon, when compared to equivalent scalar or equivalent vector code, and an order of magnitude ''or greater'' effectiveness (work done per instruction) is achievable with vector ISAs.<ref>{{cite web |last1=Patterson |first1=David |last2=Waterman |first2=Andrew |title=SIMD Instructions Considered Harmful |url=https://www.sigarch.org/simd-instructions-considered-harmful/ |website=SIGARCH |date=18 September 2017}}</ref>


ARM's [[Scalable Vector Extension]] takes another approach, known in [[Flynn's taxonomy#Single instruction stream, multiple data streams (SIMD)|Flynn's Taxonomy]] as "Associative Processing", more commonly known today as [[Predication (computer architecture)#SIMD, SIMT and vector predication|"Predicated" (masked)]] SIMD. This approach is not as compact as [[vector processing]] but is still far better than non-predicated SIMD. Detailed comparative examples are given at {{section link|Vector processor|Vector instruction example}}. In addition, all versions of the ARM architecture have offered Load and Store Multiple instructions, to load or store a block of data from a contiguous block of memory into a range or non-contiguous set of registers.<ref>{{Cite web |title=ARM LDR/STR, LDM/STM instructions - Programmer All |url=https://programmerall.com/article/2483661565/ |access-date=2025-04-19 |website=programmerall.com}}</ref>
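
A sketch of this predicated style using the ARM SVE C intrinsics (assuming the ACLE API from the <code>arm_sve.h</code> header; the function is our own example): the predicate built by {{code|svwhilelt}} masks off lanes past the end of the array, so even the final partial iteration runs through the same vector code with no scalar cleanup loop.

<syntaxhighlight lang="c">
#include <arm_sve.h>
#include <stdint.h>
#include <stddef.h>

/* out[i] = a[i] + b[i] with per-lane predication. */
void vec_add_sve(const float *a, const float *b, float *out, size_t n)
{
    for (size_t i = 0; i < n; i += svcntw()) {           /* 32-bit lanes per vector */
        svbool_t pg = svwhilelt_b32_u64(i, n);           /* mask of in-bounds lanes */
        svfloat32_t va = svld1_f32(pg, a + i);           /* predicated loads */
        svfloat32_t vb = svld1_f32(pg, b + i);
        svst1_f32(pg, out + i, svadd_f32_x(pg, va, vb)); /* predicated add and store */
    }
}
</syntaxhighlight>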


==Chronology==
{| class="wikitable"
|+ SIMD supercomputer examples excluding [[vector processor]]s
|-
! Year !! Example
|-
| 1974 || [[ILLIAC IV]], an array processor comprising scalar 64-bit PEs
|-
| 1974 || [[ICL Distributed Array Processor]] (DAP)
|-
| 1981 || [[Geometric-Arithmetic Parallel Processor]] from [[Martin Marietta]] (continued at [[Lockheed Martin]], then at [http://www.teranex.com Teranex] and [[Silicon Optix]])
|-
| 1983–1991 || [[Goodyear MPP|Massively Parallel Processor]] (MPP), from [[NASA]]/[[Goddard Space Flight Center]]
|-
| 1985 || [[Connection Machine]], models 1 and 2 (CM-1 and CM-2), from [[Thinking Machines Corporation]]
|-
| 1987–1996 || [[MasPar]] MP-1 and MP-2
|-
| 1991 || Zephyr DC from Wavetracer
|-
| 2001 || Xplor from Pyxsys, Inc.
|}


==Hardware==

=== CPUs ===
Small-scale (64 or 128 bits) SIMD became popular on general-purpose CPUs in the early 1990s and continued through 1997 and later with Motion Video Instructions (MVI) for [[DEC Alpha|Alpha]]. SIMD instructions can be found, to one degree or another, on most CPUs, including [[IBM]]'s [[AltiVec]] and Signal Processing Engine (SPE) for [[PowerPC]], [[Hewlett-Packard]]'s (HP) [[PA-RISC]] [[Multimedia Acceleration eXtensions]] (MAX), [[Intel]]'s [[MMX (instruction set)|MMX and iwMMXt]], [[Streaming SIMD Extensions]] (SSE), [[SSE2]], [[SSE3]], [[SSSE3]] and [[SSE4]].x, [[Advanced Micro Devices|AMD]]'s [[3DNow!]], [[ARC (processor)|ARC's]] ARC Video subsystem, [[SPARC]]'s [[Visual Instruction Set|VIS]] and VIS2, [[Sun Microsystems|Sun]]'s [[MAJC]], [[ARM Holdings|ARM's]] [[ARM architecture#Advanced SIMD (Neon)|Neon]] technology, [[MIPS architecture|MIPS]]' [[MDMX]] (MaDMaX) and [[MIPS-3D]].

Intel's [[AVX-512]] SIMD instructions process 512 bits of data at once.

=== Coprocessors ===
The IBM, Sony, and Toshiba co-developed [[Cell (processor)|Cell processor's]] [[Cell (processor)#Synergistic Processing Element (SPE)|Synergistic Processing Element's]] (SPE's) instruction set is heavily SIMD based.

Some, but not all, GPUs are SIMD-based. AMD GPUs since the [[TeraScale (microarchitecture)|TeraScale]] microarchitecture are SIMD-based, a feature that has remained in the 2020s microarchitectures RDNA and CDNA,<ref>{{cite web |last1=Lam |first1=Chester |title=GCN, AMD's GPU Architecture Modernization |url=https://chipsandcheese.com/p/gcn-amds-gpu-architecture-modernization |website=chipsandcheese.com |language=en}}</ref> with a layer of [[single instruction, multiple threads]] (SIMT) above.<ref>{{cite web |last1=Lam |first1=Chester |title=Hot Chips 34 – AMD's Instinct MI200 Architecture |url=https://chipsandcheese.com/p/hot-chips-34-amds-instinct-mi200-architecture?utm_source=publication-search |website=chipsandcheese.com |language=en |quote=On both NVIDIA and AMD, matrix instructions break the SIMT abstraction model and work across a whole wavefront (or “warp” on NVIDIA).}}</ref> On the other hand, Nvidia's CUDA architectures use scalar cores with SIMT.<ref>{{cite web |title=Version 2.3 8/27/2009 OpenCL Programming Guide for the CUDA Architecture |url=https://www.nvidia.com/content/cudazone/download/opencl/nvidia_opencl_programmingguide.pdf}}</ref>

[[Philips]], now [[NXP Semiconductors|NXP]], developed several SIMD processors named [[Xetal]]. The Xetal has 320 16-bit processor elements especially designed for vision tasks.

Apple's M1 and M2 chips also incorporate SIMD units deeply integrated with their GPU and Neural Engine, using Apple-designed SIMD pipelines optimized for image filtering, convolution, and matrix multiplication. This unified memory architecture helps SIMD instructions operate on shared memory pools more efficiently. (The CPU part implements ordinary NEON.)


==Software==
[[File:SIMD cpu diagram1.svg|right|thumb|280px|The SIMD tripling of four 8-bit numbers. The CPU loads 4 numbers at once, multiplies them all in one SIMD-multiplication, and saves them all at once back to RAM. In theory, the speed can be multiplied by 4.]]


SIMD instructions are widely used to process 3D graphics, although modern [[Video card|graphics card]]s with embedded SIMD have largely taken over this task from the CPU. Some systems also include permute functions that re-pack elements inside vectors, making them especially useful for data processing and compression. They are also used in cryptography.<ref>[http://marc.info/?l=openssl-dev&m=108530261323715&w=2 RE: SSE2 speed], showing how SSE2 is used to implement SHA hash algorithms</ref><ref>[http://cr.yp.to/snuffle.html#speed Salsa20 speed; Salsa20 software], showing a stream cipher implemented using SSE2</ref><ref>[http://markmail.org/message/tygo74tyjagwwnp4 Subject: up to 1.4x RSA throughput using SSE2], showing RSA implemented using a non-SIMD SSE2 integer multiply instruction.</ref> The trend of general-purpose computing on GPUs ([[GPGPU]]) may lead to wider use of SIMD in the future. Recent compilers such as [[LLVM]], [[GNU Compiler Collection]] (GCC), and Intel's ICC offer aggressive auto-vectorization options. Developers can often enable these with flags like <code>-O3</code> or <code>-ftree-vectorize</code>, which guide the compiler to restructure loops for SIMD compatibility.
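
For example, a simple dependence-free loop like the following (a generic sketch; the function name is illustrative) is a typical auto-vectorization candidate; {{code|restrict}} tells the compiler the arrays do not overlap, which removes one common obstacle:

<syntaxhighlight lang="c">
#include <stddef.h>

/* Each iteration touches distinct elements, so the compiler may rewrite
   the loop to process several elements per SIMD instruction.  Compile
   with e.g. `gcc -O3` (which enables -ftree-vectorize) and inspect the
   generated assembly for packed instructions. */
void scale(float *restrict out, const float *restrict in,
           float factor, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] * factor;
}
</syntaxhighlight>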


Adoption of SIMD systems in [[personal computer]] software was at first slow, due to a number of problems. One was that many of the early SIMD instruction sets tended to slow overall performance of the system due to the re-use of existing floating point registers. Other systems, like [[MMX (instruction set)|MMX]] and [[3DNow!]], offered support for data types that were not interesting to a wide audience and had expensive context switching instructions to switch between using the [[Floating-point unit|FPU]] and MMX [[Processor register|registers]]. Compilers also often lacked support, requiring programmers to resort to [[assembly language]] coding.


SIMD on [[x86]] had a slow start. The introduction of [[3DNow!]] by [[Advanced Micro Devices|AMD]] and [[Streaming SIMD Extensions|SSE]] by [[Intel]] confused matters somewhat, but today the system seems to have settled down (after AMD adopted SSE) and newer compilers should result in more SIMD-enabled software. Intel and AMD now both provide optimized math [[Library (computing)|libraries]] that use SIMD instructions, and open source alternatives like [[libSIMD]], [[SIMDx86]] and [[SLEEF]] have started to appear (see also [[libm]]).<ref>{{cite web |title=SIMD library math functions |url=https://stackoverflow.com/a/36637424 |website=Stack Overflow |access-date=16 January 2020}}</ref>


[[Apple Inc.|Apple Computer]] had somewhat more success, even though they entered the SIMD market later than the rest. [[AltiVec]] offered a rich system and can be programmed using increasingly sophisticated compilers from [[Motorola]], [[IBM]] and [[GNU]]; therefore, assembly language programming is rarely needed. Additionally, many of the systems that would benefit from SIMD were supplied by Apple itself, for example [[iTunes]] and [[QuickTime]]. However, in 2006, Apple computers moved to Intel x86 processors. Apple's [[Application programming interface|API]]s and [[Integrated development environment|development tools]] ([[Xcode]]) were modified to support [[SSE2]] and [[SSE3]] as well as AltiVec. Apple was the dominant purchaser of PowerPC chips from IBM and [[Freescale Semiconductor]]. Even though Apple has stopped using PowerPC processors in their products, further development of AltiVec is continued in several PowerPC and [[Power ISA]] designs from Freescale and IBM.


===Programmer interface===
It is common for publishers of the SIMD instruction sets to make their own [[C (programming language)|C]] and [[C++]] language extensions with [[intrinsic function]]s or special datatypes (with [[operator overloading]]) guaranteeing the generation of vector code. The extensions for Intel's SSE/AVX, PowerPC's AltiVec, and ARM's NEON are widely adopted by the compilers targeting those CPUs. (More complex operations are the task of vector math libraries.)


The [[GNU C Compiler]] takes the extensions a step further by abstracting them into a universal interface that can be used on any platform by providing a way of defining SIMD datatypes.<ref>{{cite web |title=Vector Extensions |url=https://gcc.gnu.org/onlinedocs/gcc/Vector-Extensions.html |website=Using the GNU Compiler Collection (GCC) |access-date=16 January 2020}}</ref> The [[LLVM]] Clang compiler also implements the feature, with an analogous interface defined in the IR.<ref>{{cite web |title=Clang Language Extensions |url=https://clang.llvm.org/docs/LanguageExtensions.html |website=Clang 11 documentation |access-date=16 January 2020}}</ref> Rust's {{code|packed_simd}} crate (and the experimental {{code|std::simd}}) uses this interface, and so does [[Swift (programming language)|Swift]] 2.0+.
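
For instance (following the style of GCC's documented vector extensions; the type and function names are arbitrary), a SIMD datatype can be declared once and used with ordinary operators:

<syntaxhighlight lang="c">
/* GNU C vector extension: a 16-byte vector of four ints.  The same
   declaration compiles to SSE on x86, NEON on ARM, or scalar code on
   targets without SIMD. */
typedef int v4si __attribute__ ((vector_size (16)));

v4si add_demo(void)
{
    v4si a = {1, 2, 3, 4};
    v4si b = {10, 20, 30, 40};
    return a + b;   /* element-wise addition: {11, 22, 33, 44} */
}
</syntaxhighlight>
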
C++ has an experimental interface {{code|std::experimental::simd}} that works similarly to the GCC extension. LLVM's libcxx seems to implement it.{{Citation needed|date=March 2023}} For GCC and libstdc++, a wrapper library that builds on top of the GCC extension is available.<ref>{{cite web |title=VcDevel/std-simd |url=https://github.com/VcDevel/std-simd |publisher=VcDevel |date=6 August 2020}}</ref>


[[Microsoft Corporation|Microsoft]] added SIMD to [[.NET Core|.NET]] in RyuJIT.<ref>{{cite web|url=https://devblogs.microsoft.com/dotnet/ryujit-the-next-generation-jit-compiler-for-net|title=RyuJIT: The next-generation JIT compiler for .NET|date=30 September 2013}}</ref> The {{code|System.Numerics.Vector}} package, available on NuGet, implements SIMD datatypes.<ref>{{cite web|url=https://devblogs.microsoft.com/dotnet/the-jit-finally-proposed-jit-and-simd-are-getting-married|title=The JIT finally proposed. JIT and SIMD are getting married|date=7 April 2014}}</ref> Java also has a new proposed API for SIMD instructions available in [[OpenJDK]] 17 in an incubator module.<ref>{{cite web|url=https://openjdk.java.net/jeps/338|title=JEP 338: Vector API}}</ref> It also has a safe fallback mechanism on unsupported CPUs to simple loops.


Instead of providing an SIMD datatype, compilers can also be hinted to auto-vectorize some loops, potentially taking some assertions about the lack of data dependency. This is not as flexible as manipulating SIMD variables directly, but is easier to use. [[OpenMP]] 4.0+ has a {{code|#pragma omp simd}} hint.<ref>{{cite web |title=SIMD Directives |url=https://www.openmp.org/spec-html/5.0/openmpsu42.html |website=www.openmp.org}}</ref> This OpenMP interface has replaced a wide set of nonstandard extensions, including [[Cilk]]'s {{code|#pragma simd}},<ref>{{cite web |title=Tutorial pragma simd |url=https://www.cilkplus.org/tutorial-pragma-simd |website=CilkPlus |date=18 July 2012 |access-date=9 August 2020 |archive-date=4 December 2020 |archive-url=https://web.archive.org/web/20201204055745/https://www.cilkplus.org/tutorial-pragma-simd |url-status=dead}}</ref> GCC's {{code|#pragma GCC ivdep}}, and many more.<ref>{{cite web|url=https://www.openmp.org/wp-content/uploads/OpenMP_SC20_Loop_Transformations.pdf|title=OMP5.1: Loop Transformations|first=Michael|last=Kruse}}</ref>
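
A hint-based sketch (standard OpenMP 4.0 syntax; the loop itself is our illustrative example): the pragma asserts that the iterations are independent, letting the compiler emit SIMD code without explicit vector types:

<syntaxhighlight lang="c">
#include <stddef.h>

/* The `omp simd` pragma asserts iteration independence, so the compiler
   may vectorize without doing its own dependence analysis.  Compile with
   an OpenMP-aware compiler, e.g. `gcc -fopenmp-simd -O2`. */
void saxpy(float *restrict y, const float *restrict x, float a, size_t n)
{
    #pragma omp simd
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
</syntaxhighlight>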
 
An example of the use of platform-specific, generic vector-based, and generic hint-based interfaces is "SIMD everywhere", a collection of C/C++ headers that implements one platform's intrinsics on top of another's (e.g., SSE intrinsics on ARM NEON).<ref>{{cite web |title=simd-everywhere/simde: Implementations of SIMD instruction sets for systems which don't natively support them. |url=https://github.com/simd-everywhere/simde |website=GitHub |publisher=SIMD Everywhere |date=6 November 2025}}</ref>


===SIMD multi-versioning===
* Library multi-versioning (LMV): the entire [[Library (computing)|programming library]] is duplicated for many instruction set extensions, and the operating system or the program decides which one to load at run-time.


FMV, manually coded in assembly language, is quite commonly used in a number of performance-critical libraries such as glibc and libjpeg-turbo. [[Intel C++ Compiler]], [[GNU Compiler Collection]] since GCC 6, and [[Clang]] since clang 7 allow for a simplified approach, with the compiler taking care of function duplication and selection. GCC and Clang require explicit {{code|target_clones}} labels in the code to "clone" functions,<ref>{{cite web |title=Function multi-versioning in GCC 6 |url=https://lwn.net/Articles/691932/ |website=lwn.net |date=22 June 2016}}</ref> while ICC does so automatically (under the command-line option {{code|/Qax}}). The [[Rust programming language]] also supports FMV. The setup is similar to GCC and Clang in that the code defines what instruction sets to compile for, but cloning is manually done via inlining.<ref>{{cite web |title=2045-target-feature |url=https://rust-lang.github.io/rfcs/2045-target-feature.html |website=The Rust RFC Book}}</ref>
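
A minimal sketch of the {{code|target_clones}} mechanism (GCC 6+ on x86; the function body and target list are illustrative): the compiler emits one clone per listed target plus a resolver that picks the best clone at load time from the CPU's reported capabilities.

<syntaxhighlight lang="c">
#include <stddef.h>

/* One source function, three generated clones (AVX2, SSE4.2, baseline).
   The dynamic linker selects among them once, at program start-up. */
__attribute__((target_clones("avx2", "sse4.2", "default")))
float dot(const float *a, const float *b, size_t n)
{
    float sum = 0.0f;
    for (size_t i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}
</syntaxhighlight>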


As using FMV requires code modification on GCC and Clang, vendors more commonly use library multi-versioning: this is easier to achieve as only compiler switches need to be changed. [[Glibc]] supports LMV and this functionality is adopted by the Intel-backed Clear Linux project.<ref name=clear>{{cite web |title=Transparent use of library packages optimized for Intel® architecture |url=https://clearlinux.org/news-blogs/transparent-use-library-packages-optimized-intel-architecture |website=Clear Linux* Project |access-date=8 September 2019 |language=en}}</ref>
Instances of these types are immutable and in optimized code are mapped directly to SIMD registers. Operations expressed in Dart typically are compiled into a single instruction without any overhead. This is similar to C and C++ intrinsics. Benchmarks for [[4×4 matrix|4×4]] [[matrix multiplication]], [[3D vertex transformation]], and [[Mandelbrot set]] visualization show near 400% speedup compared to scalar code written in Dart.


Intel announced at IDF 2013 that they were implementing McCutchan's specification for both [[V8 (JavaScript engine)|V8]] and [[SpiderMonkey]].<ref>{{cite web |title=SIMD in JavaScript |url=https://01.org/node/1495 |website=01.org |date=8 May 2014}}</ref> However, by 2017, SIMD.js was taken out of the [[ECMAScript]] standard queue in favor of pursuing a similar interface in [[WebAssembly]].<ref>{{cite web |title=tc39/ecmascript_simd: SIMD numeric type for EcmaScript. |url=https://github.com/tc39/ecmascript_simd/ |website=GitHub |publisher=Ecma TC39 |access-date=8 September 2019 |date=22 August 2019}}</ref> Support for SIMD was added to the WebAssembly 2.0 specification, which was finished in 2022 and became official in December 2024.<ref>{{cite web |url=https://webassembly.org/news/2025-03-20-wasm-2.0/ |title=Wasm 2.0 Completed - WebAssembly}}</ref> LLVM's auto-vectorization, when compiling C or C++ to WebAssembly, can target WebAssembly SIMD to make use of SIMD automatically, while SIMD intrinsics are also available.<ref>{{cite web |title=Using SIMD with WebAssembly |url=https://emscripten.org/docs/porting/simd.html |work=Emscripten 4.0.11-git (dev) documentation}}</ref>


==Commercial applications==
It has generally proven difficult to find sustainable commercial applications for SIMD-only processors in general-purpose computing.


One that has had some measure of success is the [[Geometric-Arithmetic Parallel Processor|GAPP]], which was developed by [[Lockheed Martin]] and taken to the commercial sector by their spin-off [[Teranex]]. The GAPP's recent incarnations have become a powerful tool in real-time [[digital image processing|video processing]] applications like conversion between various video standards and frame rates ([[NTSC]] to/from [[PAL]], NTSC to/from [[high-definition television]] (HDTV) formats, etc.), [[deinterlacing]], image [[noise reduction]], adaptive [[video compression]], and image enhancement.


A more ubiquitous application for SIMD is found in [[video game]]s: nearly every modern [[video game console]] since [[History of video game consoles (sixth generation)|1998]] has incorporated a SIMD processor somewhere in its architecture. The [[PlayStation 2]] was unusual in that one of its vector-float units could function as an autonomous [[digital signal processor]] (DSP) executing its own instruction stream, or as a coprocessor driven by ordinary CPU instructions. 3D graphics applications tend to lend themselves well to SIMD processing as they rely heavily on operations with 4-dimensional vectors. [[Microsoft]]'s [[Direct3D]] 9.0 now chooses at runtime processor-specific implementations of its own math operations, including the use of SIMD-capable instructions.


A later processor that used vector processing is the [[Cell (microprocessor)|Cell Processor]] used in the Playstation 3, which was developed by [[IBM]] in cooperation with [[Toshiba]] and [[Sony]]. It uses a number of SIMD processors (a [[Non-Uniform Memory Access|NUMA]] architecture, each with independent [[cache memory|local store]] and controlled by a general purpose CPU) and is geared towards the huge datasets required by 3D and video processing applications. It differs from traditional ISAs by being SIMD from the ground up with no separate scalar registers.
A later processor that used vector processing is the [[Cell (processor)|Cell processor]] used in the PlayStation 3, which was developed by [[IBM]] in cooperation with [[Toshiba]] and [[Sony]]. It uses a number of SIMD processors (a [[non-uniform memory access]] (NUMA) architecture, each with independent [[cache memory|local store]] and controlled by a general purpose CPU) and is geared towards the huge datasets required by 3D and video processing applications. It differs from traditional ISAs by being SIMD from the ground up with no separate scalar registers.


ZiiLabs produced a SIMD-type processor for use in mobile devices, such as media players and mobile phones.<ref>{{cite web |url=https://secure.ziilabs.com/products/processors/zms05.aspx |title=ZiiLABS ZMS-05 ARM 9 Media Processor |website=ZiiLabs |access-date=2010-05-24 |url-status=dead |archive-url=https://web.archive.org/web/20110718153716/https://secure.ziilabs.com/products/processors/zms05.aspx |archive-date=2011-07-18 }}</ref>
ZiiLabs produced a SIMD-type processor for use in mobile devices, such as media players and mobile phones.<ref>{{cite web |url=https://secure.ziilabs.com/products/processors/zms05.aspx |title=ZiiLABS ZMS-05 ARM 9 Media Processor |website=ZiiLabs |access-date=2010-05-24 |url-status=dead |archive-url=https://web.archive.org/web/20110718153716/https://secure.ziilabs.com/products/processors/zms05.aspx |archive-date=2011-07-18}}</ref>


Larger scale commercial SIMD processors are available from ClearSpeed Technology, Ltd. and Stream Processors, Inc. [[ClearSpeed]]'s CSX600 (2004) has 96 cores each with two double-precision floating point units while the CSX700 (2008) has 192. Stream Processors is headed by computer architect [[Bill Dally]]. Their Storm-1 processor (2007) contains 80 SIMD cores controlled by a MIPS CPU.
Larger scale commercial SIMD processors are available from ClearSpeed Technology, Ltd. and Stream Processors, Inc. [[ClearSpeed]]'s CSX600 (2004) has 96 cores each with two double-precision floating point units while the CSX700 (2008) has 192. Stream Processors is headed by computer architect [[Bill Dally]]. Their Storm-1 processor (2007) contains 80 SIMD cores controlled by a [[MIPS architecture|MIPS]] CPU.


==See also==
==See also==
Line 146: Line 141:
* [[Instruction set architecture]]
* [[Instruction set architecture]]
*[[Flynn's taxonomy]]
*[[Flynn's taxonomy]]
* [[SWAR|SIMD within a register (SWAR)]]
* SIMD within a register ([[SWAR]])
* [[SPMD|Single Program, Multiple Data (SPMD)]]
* [[Single program, multiple data]] (SPMD)
* [[OpenCL]]
* [[OpenCL]]


Line 159: Line 154:
* [http://software.intel.com/en-us/articles/optimizing-the-rendering-pipeline-of-animated-models-using-the-intel-streaming-simd-extensions Article about Optimizing the Rendering Pipeline of Animated Models Using the Intel Streaming SIMD Extensions]
* [http://software.intel.com/en-us/articles/optimizing-the-rendering-pipeline-of-animated-models-using-the-intel-streaming-simd-extensions Article about Optimizing the Rendering Pipeline of Animated Models Using the Intel Streaming SIMD Extensions]
* [https://web.archive.org/web/20130921070044/http://www.yeppp.info/ "Yeppp!": cross-platform, open-source SIMD library from Georgia Tech]
* [https://web.archive.org/web/20130921070044/http://www.yeppp.info/ "Yeppp!": cross-platform, open-source SIMD library from Georgia Tech]
* [https://computing.llnl.gov/tutorials/parallel_comp/ Introduction to Parallel Computing from LLNL Lawrence Livermore National Laboratory] {{Webarchive|url=https://web.archive.org/web/20130610122229/https://computing.llnl.gov/tutorials/parallel_comp/ |date=2013-06-10 }}
* [https://computing.llnl.gov/tutorials/parallel_comp/ Introduction to Parallel Computing from LLNL Lawrence Livermore National Laboratory] {{Webarchive|url=https://web.archive.org/web/20130610122229/https://computing.llnl.gov/tutorials/parallel_comp/ |date=2013-06-10}}
* {{GitHub|simd-everywhere/simde}}: A portable implementation of platform-specific intrinsics for other platforms (e.g. SSE intrinsics for ARM NEON), using C/C++ headers


{{CPU technologies}}
{{CPU technologies}}

Latest revision as of 02:47, 7 November 2025


File:SIMD2.svg
Single instruction, multiple data

Single instruction, multiple data (SIMD) is a type of parallel computing (processing) in Flynn's taxonomy. SIMD describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously. SIMD can be internal (part of the hardware design) and it can be directly accessible through an instruction set architecture (ISA), but it should not be confused with an ISA.

Such machines exploit data level parallelism, but not concurrency: there are simultaneous (parallel) computations, but each unit performs exactly the same instruction at any given moment (just with different data). A simple example is adding many pairs of numbers together: all of the SIMD units perform an addition, but each one has a different pair of values to add. SIMD is especially applicable to common tasks such as adjusting the contrast in a digital image or adjusting the volume of digital audio. Most modern central processing unit (CPU) designs include SIMD instructions to improve the performance of multimedia use. In recent CPUs, SIMD units are tightly coupled with cache hierarchies and prefetch mechanisms, which minimize latency during large block operations. For instance, AVX-512-enabled processors can prefetch entire cache lines and apply fused multiply-add operations (FMA) in a single SIMD cycle.

Confusion between SIMT and SIMD


File:ILLIAC IV.jpg
ILLIAC IV Array overview, from ARPA-funded Introductory description by Steward Denenberg, July 15, 1971[1]

SIMD has three different subcategories in Flynn's 1972 Taxonomy, one of which is single instruction, multiple threads (SIMT). SIMT should not be confused with software threads or hardware threads, both of which are task time-sharing (time-slicing). SIMT is true simultaneous parallel hardware-level execution, such as in the ILLIAC IV.

SIMD should not be confused with vector processing, characterized by the Cray-1 and clarified in Duncan's taxonomy. The difference between SIMD and vector processors is primarily that vector processors provide a Cray-style SET VECTOR LENGTH instruction.

History

The first known operational use of SIMD within a register was the TX-2, in 1958. It was capable of 36-bit operations, and of two 18-bit or four 9-bit sub-word operations.

The first commercial use of SIMD instructions was in the ILLIAC IV, which was completed in 1972.

Vector supercomputers of the early 1970s such as the CDC Star-100 and the Texas Instruments ASC could operate on a "vector" of data with a single instruction. Vector processing was especially popularized by Cray in the 1970s and 1980s. Vector-processing architectures are now considered separate from SIMD computers: Duncan's taxonomy includes them whereas Flynn's taxonomy does not, due to Flynn's work (1966, 1972) pre-dating the Cray-1 (1977). The complexity of vector processors, however, inspired a simpler arrangement known as SIMD within a register.

The first era of modern SIMD computers was characterized by massively parallel processing-style supercomputers such as the Thinking Machines Connection Machine CM-1 and CM-2. These computers had many limited-functionality processors that would work in parallel. For example, each of the 65,536 single-bit processors in a Thinking Machines CM-2 would execute the same instruction at the same time, allowing it, for instance, to logically combine 65,536 pairs of bits at a time, using a hypercube-connected network or processor-dedicated RAM to find its operands. Supercomputing moved away from the SIMD approach when inexpensive scalar multiple instruction, multiple data (MIMD) approaches based on commodity processors such as the Intel i860 XP became more powerful, and interest in SIMD waned.[2]

The current era of SIMD processors grew out of the desktop-computer market rather than the supercomputer market. As desktop processors became powerful enough to support real-time gaming and audio/video processing during the 1990s, demand grew for this type of computing power, and microprocessor vendors turned to SIMD to meet the demand.[3] This resurgence also coincided with the rise of DirectX and OpenGL shader models, which heavily leveraged SIMD under the hood. The graphics APIs encouraged programmers to adopt data-parallel programming styles, indirectly accelerating SIMD adoption in desktop software. Hewlett-Packard introduced Multimedia Acceleration eXtensions (MAX) instructions into PA-RISC 1.1 desktops in 1994 to accelerate MPEG decoding.[4] Sun Microsystems introduced SIMD integer instructions in its "VIS" instruction set extensions in 1995, in its UltraSPARC I microprocessor. MIPS followed suit with their similar MDMX system.

The first widely deployed desktop SIMD was with Intel's MMX extensions to the x86 architecture in 1996. This sparked the introduction of the much more powerful AltiVec system in the Motorola PowerPC and IBM's POWER systems. Intel responded in 1999 by introducing the all-new SSE system. Since then, there have been several extensions to the SIMD instruction sets for both architectures. The Advanced Vector Extensions (AVX, AVX2, and AVX-512) were developed by Intel; AMD supports AVX, AVX2, and AVX-512 in its current products.[5]

Disadvantages

With SIMD, an order-of-magnitude increase in code size compared to equivalent scalar or vector code is not uncommon, while vector ISAs can achieve an order of magnitude or greater effectiveness (work done per instruction).[6]

ARM's Scalable Vector Extension takes another approach, known in Flynn's taxonomy, and more commonly today, as "predicated" (masked) SIMD. This approach is not as compact as vector processing but is still far better than non-predicated SIMD. In addition, all versions of the ARM architecture have offered Load and Store Multiple instructions, which load or store a block of data from a contiguous block of memory into a range or non-contiguous set of registers.[7]
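
As an illustration of predication, a masked "tail" lets the final, partial vector of an array be processed under a predicate instead of falling back to a scalar loop. The following minimal sketch (the function name is illustrative) uses AVX-512 mask registers rather than SVE, but the principle is the same; it assumes an AVX-512-capable CPU and compilation with -mavx512f:

    #include <immintrin.h>
    #include <cstddef>

    // Add b[] into a[] sixteen floats at a time; the last 1..15 elements are
    // handled with a predicate mask rather than a scalar remainder loop.
    void masked_add(float* a, const float* b, std::size_t n) {
        std::size_t i = 0;
        for (; i + 16 <= n; i += 16) {
            __m512 va = _mm512_loadu_ps(a + i);
            __m512 vb = _mm512_loadu_ps(b + i);
            _mm512_storeu_ps(a + i, _mm512_add_ps(va, vb));
        }
        if (i < n) {
            __mmask16 m = (__mmask16)((1u << (n - i)) - 1);       // low lanes active
            __m512 va = _mm512_maskz_loadu_ps(m, a + i);          // masked load, inactive lanes zeroed
            __m512 vb = _mm512_maskz_loadu_ps(m, b + i);
            _mm512_mask_storeu_ps(a + i, m, _mm512_add_ps(va, vb)); // masked store
        }
    }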

Chronology

SIMD supercomputer examples (excluding vector processors):

  Year       Example
  1974       ILLIAC IV, an array processor comprising scalar 64-bit PEs
  1974       ICL Distributed Array Processor (DAP)
  1976       Burroughs Scientific Processor
  1981       Geometric-Arithmetic Parallel Processor, from Martin Marietta (continued at Lockheed Martin, then at Teranex and Silicon Optix)
  1983–1991  Massively Parallel Processor (MPP), from NASA/Goddard Space Flight Center
  1985       Connection Machine, models 1 and 2 (CM-1 and CM-2), from Thinking Machines Corporation
  1987–1996  MasPar MP-1 and MP-2
  1991       Zephyr DC, from Wavetracer
  2001       Xplor, from Pyxsys, Inc.

Hardware

CPUs

Small-scale (64 or 128 bits) SIMD became popular on general-purpose CPUs in the early 1990s and continued through 1997 and later with Motion Video Instructions (MVI) for Alpha. SIMD instructions can be found, to one degree or another, on most CPUs, including IBM's AltiVec and Signal Processing Engine (SPE) for PowerPC, Hewlett-Packard's (HP) PA-RISC Multimedia Acceleration eXtensions (MAX), Intel's MMX and iwMMXt, Streaming SIMD Extensions (SSE), SSE2, SSE3, SSSE3 and SSE4.x, AMD's 3DNow!, ARC's ARC Video subsystem, SPARC's VIS and VIS2, Sun's MAJC, ARM's Neon technology, MIPS' MDMX (MaDMaX) and MIPS-3D.

Intel's AVX-512 SIMD instructions process 512 bits of data at once.

Coprocessors

The instruction set of the Cell processor's Synergistic Processing Elements (SPEs), co-developed by IBM, Sony, and Toshiba, is heavily SIMD-based.

Some, but not all, GPUs are SIMD-based. AMD GPUs since the TeraScale microarchitecture are SIMD-based, a feature that has remained in the 2020s microarchitectures RDNA and CDNA,[8] with a layer of single instruction, multiple threads (SIMT) above.[9] On the other hand, Nvidia's CUDA architectures use scalar cores with SIMT.[10]

Philips, now NXP, developed several SIMD processors named Xetal. The Xetal has 320 16-bit processor elements especially designed for vision tasks.

Apple's M1 and M2 chips also incorporate SIMD units deeply integrated with their GPU and Neural Engine, using Apple-designed SIMD pipelines optimized for image filtering, convolution, and matrix multiplication. This unified memory architecture helps SIMD instructions operate on shared memory pools more efficiently. (The CPU cores themselves implement ordinary NEON.)

Software

File:Non-SIMD cpu diagram1.svg
The ordinary tripling of four 8-bit numbers. The CPU loads one 8-bit number into R1, multiplies it with R2, and then saves the answer from R3 back to RAM. This process is repeated for each number.
File:SIMD cpu diagram1.svg
The SIMD tripling of four 8-bit numbers. The CPU loads 4 numbers at once, multiplies them all in one SIMD-multiplication, and saves them all at once back to RAM. In theory, the speed can be multiplied by 4.
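
A minimal sketch of the SIMD version using SSE2 intrinsics (the function name is illustrative; 32-bit lanes are used here instead of the figure's 8-bit values, but the idea is identical):

    #include <emmintrin.h>   // SSE2
    #include <cstdint>

    // Triple four integers with one SIMD load, two SIMD adds, and one SIMD store,
    // instead of repeating load/multiply/store for each number.
    void triple4(const std::int32_t* in, std::int32_t* out) {
        __m128i v  = _mm_loadu_si128(reinterpret_cast<const __m128i*>(in));  // load 4 values at once
        __m128i v3 = _mm_add_epi32(_mm_add_epi32(v, v), v);                  // v*3 = v + v + v, per lane
        _mm_storeu_si128(reinterpret_cast<__m128i*>(out), v3);               // store all 4 results
    }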

SIMD instructions are widely used to process 3D graphics, although modern graphics cards with embedded SIMD have largely taken over this task from the CPU. Some systems also include permute functions that re-pack elements inside vectors, making them especially useful for data processing and compression. They are also used in cryptography.[11][12][13] The trend of general-purpose computing on GPUs (GPGPU) may lead to wider use of SIMD in the future. Recent compilers such as LLVM, GNU Compiler Collection (GCC), and Intel's ICC offer aggressive auto-vectorization options. Developers can often enable these with flags like -O3 or -ftree-vectorize, which guide the compiler to restructure loops for SIMD compatibility.
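
For instance, a loop of the following shape is one most compilers can auto-vectorize on their own (a sketch; the function name is illustrative and exact flags vary by compiler):

    #include <cstddef>

    // Independent iterations, with no aliasing (asserted via __restrict), let the
    // compiler replace the scalar body with one SIMD multiply per vector width.
    // Compile with e.g.:  g++ -O3 -ftree-vectorize   or   clang++ -O3
    void scale(float* __restrict dst, const float* __restrict src, float k, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            dst[i] = src[i] * k;
    }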

Adoption of SIMD systems in personal computer software was at first slow, due to a number of problems. One was that many of the early SIMD instruction sets tended to slow the overall performance of the system because they re-used existing floating-point registers. Other systems, like MMX and 3DNow!, offered support for data types that were not interesting to a wide audience and had expensive context-switching instructions to switch between using the FPU and MMX registers. Compilers also often lacked support, requiring programmers to resort to assembly language coding.

SIMD on x86 had a slow start. The introduction of 3DNow! by AMD and SSE by Intel confused matters somewhat, but today the situation seems to have settled down (after AMD adopted SSE), and newer compilers should result in more SIMD-enabled software. Intel and AMD now both provide optimized math libraries that use SIMD instructions, and open-source alternatives like libSIMD, SIMDx86 and SLEEF have started to appear (see also libm).[14]

Apple Computer had somewhat more success, even though it entered the SIMD market later than the rest. AltiVec offered a rich system that could be programmed using increasingly sophisticated compilers from Motorola, IBM, and GNU, so assembly language programming was rarely needed. Additionally, many of the systems that would benefit from SIMD were supplied by Apple itself, for example iTunes and QuickTime. However, in 2006, Apple computers moved to Intel x86 processors. Apple's APIs and development tools (Xcode) were modified to support SSE2 and SSE3 as well as AltiVec. Apple was the dominant purchaser of PowerPC chips from IBM and Freescale Semiconductor. Even though Apple has stopped using PowerPC processors in its products, further development of AltiVec is continued in several PowerPC and Power ISA designs from Freescale and IBM.

SIMD within a register, or SWAR, is a range of techniques and tricks used for performing SIMD in general-purpose registers on hardware that does not provide any direct support for SIMD instructions, making it possible to exploit data parallelism in certain algorithms even on such hardware.
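
A classic SWAR trick, sketched below (the function name is illustrative), adds the four bytes packed into one 32-bit general-purpose register to those in another while preventing carries from crossing lane boundaries:

    #include <cstdint>

    // Byte-wise addition, modulo 256 per lane, of two words each holding four
    // packed 8-bit lanes; no SIMD hardware support is required.
    std::uint32_t add_bytes(std::uint32_t a, std::uint32_t b) {
        const std::uint32_t H = 0x80808080u;        // the high bit of every byte lane
        std::uint32_t low = (a & ~H) + (b & ~H);    // add the low 7 bits of each lane
        return low ^ ((a ^ b) & H);                 // restore each lane's high bit without carry
    }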

Programmer interface

It is common for publishers of the SIMD instruction sets to make their own C and C++ language extensions with intrinsic functions or special datatypes (with operator overloading) guaranteeing the generation of vector code. Intel's SSE/AVX, AltiVec, and ARM NEON all provide extensions widely adopted by the compilers targeting their CPUs. (More complex operations are the task of vector math libraries.)
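
For example, ARM's NEON intrinsics in <arm_neon.h> expose 128-bit vectors directly to C and C++ (a minimal sketch; the function name is illustrative):

    #include <arm_neon.h>

    // Multiply four floats by a scalar; intrinsics guarantee vector instructions
    // are emitted rather than relying on the auto-vectorizer.
    void scale4(float* dst, const float* src, float k) {
        float32x4_t v = vld1q_f32(src);   // load 4 floats into one 128-bit register
        v = vmulq_n_f32(v, k);            // multiply every lane by k
        vst1q_f32(dst, v);                // store 4 floats
    }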

The GNU C Compiler takes the extensions a step further by abstracting them into a universal interface that can be used on any platform by providing a way of defining SIMD datatypes.[15] The LLVM Clang compiler also implements the feature, with an analogous interface defined in the IR.[16] Rust's packed_simd crate (and the experimental std::simd) uses this interface, and so does Swift 2.0+.
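
With the GCC vector extension (also accepted by Clang), a SIMD datatype is declared as an attribute on a scalar type and arithmetic on it is element-wise, as in this minimal sketch:

    // A 16-byte vector of four floats, portable across GCC/Clang targets.
    typedef float v4f __attribute__((vector_size(16)));

    v4f fma4(v4f a, v4f b, v4f c) {
        return a * b + c;   // element-wise; lowered to SIMD instructions where available
    }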

C++ has an experimental interface std::experimental::simd that works similarly to the GCC extension. LLVM's libc++ seems to implement it. For GCC and libstdc++, a wrapper library that builds on top of the GCC extension is available.[17]
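
Assuming an implementation of the TS is available (libstdc++ ships one since GCC 11), a sketch of its use looks like this (the function name is illustrative; the scalar remainder loop is omitted):

    #include <experimental/simd>
    #include <cstddef>
    namespace stdx = std::experimental;

    // y[i] += a * x[i], one native-width vector at a time.
    void axpy(float a, const float* x, float* y, std::size_t n) {
        using V = stdx::native_simd<float>;
        for (std::size_t i = 0; i + V::size() <= n; i += V::size()) {
            V vx(x + i, stdx::element_aligned);   // vector load
            V vy(y + i, stdx::element_aligned);
            vy += V(a) * vx;                      // element-wise multiply-add
            vy.copy_to(y + i, stdx::element_aligned);
        }
    }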

Microsoft added SIMD to .NET in RyuJIT.[18] The System.Numerics.Vector package, available on NuGet, implements SIMD datatypes.[19] Java also has a proposed API for SIMD instructions, available in OpenJDK 17 in an incubator module;[20] it also has a safe fallback mechanism to simple loops on unsupported CPUs.

Instead of providing a SIMD datatype, compilers can also be hinted to auto-vectorize some loops, potentially accepting assertions about the lack of data dependencies. This is not as flexible as manipulating SIMD variables directly, but is easier to use. OpenMP 4.0+ has a #pragma omp simd hint.[21] This OpenMP interface has replaced a wide set of nonstandard extensions, including Cilk's #pragma simd,[22] GCC's #pragma GCC ivdep, and many more.[23]
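
A sketch of the OpenMP hint (compile with -fopenmp or -fopenmp-simd; the function name is illustrative):

    #include <cstddef>

    // The pragma asserts that iterations are independent, so the compiler may
    // vectorize without having to prove the absence of data dependencies itself.
    void vec_add(float* a, const float* b, std::size_t n) {
        #pragma omp simd
        for (std::size_t i = 0; i < n; ++i)
            a[i] += b[i];
    }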

An example of the use of platform-specific, generic vector-based, and generic hint-based interfaces is "SIMD everywhere", a collection of C/C++ headers implementing platform-specific intrinsics for other platforms (e.g. SSE intrinsics for ARM NEON).[24]
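
Assuming SIMDe's documented header layout, existing SSE2 code can be retargeted by swapping a single include, as in this sketch (the function name is illustrative):

    // SIMDe maps SSE2 intrinsics onto whatever the target provides (e.g. NEON).
    #define SIMDE_ENABLE_NATIVE_ALIASES   // keep the familiar _mm_* spellings
    #include <simde/x86/sse2.h>
    #include <cstdint>

    void add4(std::int32_t* dst, const std::int32_t* a, const std::int32_t* b) {
        __m128i va = _mm_loadu_si128(reinterpret_cast<const __m128i*>(a));
        __m128i vb = _mm_loadu_si128(reinterpret_cast<const __m128i*>(b));
        _mm_storeu_si128(reinterpret_cast<__m128i*>(dst), _mm_add_epi32(va, vb));
    }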

SIMD multi-versioning

Consumer software is typically expected to work on a range of CPUs covering multiple generations, which could limit the programmer's ability to use new SIMD instructions to improve the computational performance of a program. The solution is to include multiple versions of the same code that uses either older or newer SIMD technologies, and pick one that best fits the user's CPU at run-time (dynamic dispatch). There are two main camps of solutions:

  • Function multi-versioning (FMV): a subroutine in the program or a library is duplicated and compiled for many instruction set extensions, and the program decides which one to use at run-time.
  • Library multi-versioning (LMV): the entire programming library is duplicated for many instruction set extensions, and the operating system or the program decides which one to load at run-time.

FMV, manually coded in assembly language, is quite commonly used in a number of performance-critical libraries such as glibc and libjpeg-turbo. Intel C++ Compiler, GNU Compiler Collection since GCC 6, and Clang since clang 7 allow for a simplified approach, with the compiler taking care of function duplication and selection. GCC and Clang require explicit target_clones labels in the code to "clone" functions,[25] while ICC does so automatically (under the command-line option /Qax). The Rust programming language also supports FMV. The setup is similar to GCC and Clang in that the code defines what instruction sets to compile for, but cloning is manually done via inlining.[26]
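
With GCC or Clang, a sketch of FMV via target_clones looks like this (illustrative function; the compiler emits one clone per listed target plus a resolver that picks the best one at load time):

    #include <cstddef>

    // One source function, several machine-code versions selected at run-time.
    __attribute__((target_clones("avx2", "sse4.2", "default")))
    float dot(const float* a, const float* b, std::size_t n) {
        float s = 0.0f;
        for (std::size_t i = 0; i < n; ++i)
            s += a[i] * b[i];
        return s;
    }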

As using FMV requires code modification on GCC and Clang, vendors more commonly use library multi-versioning: this is easier to achieve as only compiler switches need to be changed. Glibc supports LMV and this functionality is adopted by the Intel-backed Clear Linux project.[27]

SIMD on the web

In 2013 John McCutchan announced that he had created a high-performance interface to SIMD instruction sets for the Dart programming language, bringing the benefits of SIMD to web programs for the first time. The interface consists of two types:[28]

  • Float32x4, 4 single precision floating point values.
  • Int32x4, 4 32-bit integer values.

Instances of these types are immutable and in optimized code are mapped directly to SIMD registers. Operations expressed in Dart typically are compiled into a single instruction without any overhead. This is similar to C and C++ intrinsics. Benchmarks for 4×4 matrix multiplication, 3D vertex transformation, and Mandelbrot set visualization show near 400% speedup compared to scalar code written in Dart.

Intel announced at IDF 2013 that they were implementing McCutchan's specification for both V8 and SpiderMonkey.[29] However, by 2017, SIMD.js was taken out of the ECMAScript standard queue in favor of pursuing a similar interface in WebAssembly.[30] Support for SIMD was added to the WebAssembly 2.0 specification, which was finished in 2022 and became official in December 2024.[31] LLVM's auto-vectorization, when compiling C or C++ to WebAssembly, can target WebAssembly SIMD automatically, while SIMD intrinsics are also available.[32]
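
Assuming clang or Emscripten with the -msimd128 flag, the WebAssembly SIMD intrinsics in <wasm_simd128.h> can also be used directly, as in this sketch (the function name is illustrative):

    #include <wasm_simd128.h>

    // Multiply four floats with one 128-bit WebAssembly SIMD instruction.
    void mul4(float* dst, const float* a, const float* b) {
        v128_t va = wasm_v128_load(a);
        v128_t vb = wasm_v128_load(b);
        wasm_v128_store(dst, wasm_f32x4_mul(va, vb));
    }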

Commercial applications

It has generally proven difficult to find sustainable commercial applications for SIMD-only processors in general-purpose computing.

One that has had some measure of success is the GAPP, which was developed by Lockheed Martin and taken to the commercial sector by their spin-off Teranex. The GAPP's recent incarnations have become a powerful tool in real-time video processing applications like conversion between various video standards and frame rates (NTSC to/from PAL, NTSC to/from high-definition television (HDTV) formats, etc.), deinterlacing, image noise reduction, adaptive video compression, and image enhancement.

A more ubiquitous application for SIMD is found in video games: nearly every modern video game console since 1998 has incorporated a SIMD processor somewhere in its architecture. The PlayStation 2 was unusual in that one of its vector-float units could function as an autonomous digital signal processor (DSP) executing its own instruction stream, or as a coprocessor driven by ordinary CPU instructions. 3D graphics applications tend to lend themselves well to SIMD processing as they rely heavily on operations with 4-dimensional vectors. Microsoft's Direct3D 9.0 now chooses at runtime processor-specific implementations of its own math operations, including the use of SIMD-capable instructions.

A later processor that used vector processing is the Cell processor used in the PlayStation 3, which was developed by IBM in cooperation with Toshiba and Sony. It uses a number of SIMD processors (a non-uniform memory access (NUMA) architecture, each with independent local store and controlled by a general purpose CPU) and is geared towards the huge datasets required by 3D and video processing applications. It differs from traditional ISAs by being SIMD from the ground up with no separate scalar registers.

ZiiLabs produced a SIMD-type processor for use in mobile devices, such as media players and mobile phones.[33]

Larger scale commercial SIMD processors are available from ClearSpeed Technology, Ltd. and Stream Processors, Inc. ClearSpeed's CSX600 (2004) has 96 cores each with two double-precision floating point units while the CSX700 (2008) has 192. Stream Processors is headed by computer architect Bill Dally. Their Storm-1 processor (2007) contains 80 SIMD cores controlled by a MIPS CPU.

See also

References


External links


  1. Script error: No such module "citation/CS1".
  2. Script error: No such module "citation/CS1".
  3. Script error: No such module "citation/CS1".
  4. Script error: No such module "citation/CS1".
  5. Script error: No such module "citation/CS1".
  6. Script error: No such module "citation/CS1".
  7. Script error: No such module "citation/CS1".
  8. Script error: No such module "citation/CS1".
  9. Script error: No such module "citation/CS1".
  10. Script error: No such module "citation/CS1".
  11. RE: SSE2 speed, showing how SSE2 is used to implement SHA hash algorithms
  12. Salsa20 speed; Salsa20 software, showing a stream cipher implemented using SSE2
  13. Subject: up to 1.4x RSA throughput using SSE2, showing RSA implemented using a non-SIMD SSE2 integer multiply instruction.
  14. Script error: No such module "citation/CS1".
  15. Script error: No such module "citation/CS1".
  16. Script error: No such module "citation/CS1".
  17. Script error: No such module "citation/CS1".
  18. Script error: No such module "citation/CS1".
  19. Script error: No such module "citation/CS1".
  20. Script error: No such module "citation/CS1".
  21. Script error: No such module "citation/CS1".
  22. Script error: No such module "citation/CS1".
  23. Script error: No such module "citation/CS1".
  24. Script error: No such module "citation/CS1".
  25. Script error: No such module "citation/CS1".
  26. Script error: No such module "citation/CS1".
  27. Script error: No such module "citation/CS1".
  28. Script error: No such module "citation/CS1".
  29. Script error: No such module "citation/CS1".
  30. Script error: No such module "citation/CS1".
  31. Script error: No such module "citation/CS1".
  32. "Porting SIMD code targeting WebAssembly". Emscripten documentation. https://emscripten.org/docs/porting/simd.html
  33. "ZiiLABS ZMS-05 ARM 9 Media Processor". ZiiLabs. Archived from the original on 2011-07-18 (https://web.archive.org/web/20110718153716/https://secure.ziilabs.com/products/processors/zms05.aspx). Retrieved 2010-05-24.