Explicit parallelism

In computer programming, explicit parallelism is the representation of concurrent computations by means of primitives in the form of operators, function calls or special-purpose directives.[1] Most parallel primitives are related to process synchronization, communication and process partitioning.[2] Because these primitives rarely carry out the intended computation of the program themselves but instead structure it, their computational cost is often counted as overhead.
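
For example, in the following minimal C sketch using POSIX threads (the thread count, iteration count and names are illustrative choices), pthread_create partitions the work among threads, while the mutex and pthread_join are synchronization primitives. The lock and unlock calls carry out none of the intended computation; their cost is precisely the overhead described above.

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS   4
    #define ITERATIONS 100000

    static long counter = 0;                   /* shared state */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < ITERATIONS; i++) {
            pthread_mutex_lock(&lock);         /* explicit synchronization */
            counter++;                         /* the actual computation */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NTHREADS];
        for (int i = 0; i < NTHREADS; i++)     /* explicit partitioning */
            pthread_create(&threads[i], NULL, worker, NULL);
        for (int i = 0; i < NTHREADS; i++)     /* explicit synchronization */
            pthread_join(threads[i], NULL);
        printf("counter = %ld\n", counter);    /* prints 400000 */
        return 0;
    }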

The advantage of explicit parallel programming is the increased control it gives the programmer over the computation. A skilled parallel programmer can take advantage of explicit parallelism to produce efficient code for a given target environment. However, programming with explicit parallelism is often difficult, especially for non-specialists, because of the extra work and skill it requires.
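
As a sketch of this control (illustrative names and sizes; POSIX threads assumed), the following C program explicitly partitions an array summation into per-thread index ranges. The programmer chooses the number of threads and the chunk boundaries, decisions that can be tuned to the target machine; each thread writes to its own slot of a partial-sum array, so no locking is needed in the hot loop.

    #include <pthread.h>
    #include <stdio.h>

    #define N        1000000
    #define NTHREADS 4         /* tuned by the programmer for the target machine */

    static double data[N];
    static double partial[NTHREADS];   /* one slot per thread: no lock required */

    struct range { int lo, hi, id; };

    static void *sum_range(void *arg)
    {
        struct range *r = arg;
        double s = 0.0;
        for (int i = r->lo; i < r->hi; i++)
            s += data[i];
        partial[r->id] = s;
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++)
            data[i] = 1.0;

        pthread_t threads[NTHREADS];
        struct range ranges[NTHREADS];
        int chunk = N / NTHREADS;      /* explicit partitioning of the index space */

        for (int t = 0; t < NTHREADS; t++) {
            ranges[t].lo = t * chunk;
            ranges[t].hi = (t == NTHREADS - 1) ? N : (t + 1) * chunk;
            ranges[t].id = t;
            pthread_create(&threads[t], NULL, sum_range, &ranges[t]);
        }

        double total = 0.0;
        for (int t = 0; t < NTHREADS; t++) {
            pthread_join(threads[t], NULL);
            total += partial[t];
        }
        printf("sum = %f\n", total);   /* prints 1000000.000000 */
        return 0;
    }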

In some instances, explicit parallelism can be avoided by using an optimizing compiler or runtime system that automatically detects the parallelism inherent in a computation, an approach known as implicit parallelism.
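
For contrast, the same kind of reduction can be written with no parallel primitives at all, leaving the parallelization to the toolchain. In the minimal sketch below, an auto-parallelizing compiler, for example GCC invoked with -O2 -ftree-parallelize-loops=4, may split the loop iterations across threads without any change to the source.

    #include <stdio.h>

    #define N 1000000

    static double data[N];

    /* No threads, locks, or directives appear in this source file.
       An auto-parallelizing compiler (e.g. gcc -O2 -ftree-parallelize-loops=4)
       may nonetheless run the loops below on several threads. */
    int main(void)
    {
        for (int i = 0; i < N; i++)
            data[i] = 1.0;

        double total = 0.0;
        for (int i = 0; i < N; i++)
            total += data[i];

        printf("sum = %f\n", total);
        return 0;
    }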

Programming languages that support explicit parallelism

Some of the programming languages and programming interfaces that support explicit parallelism are:

  * Ada, through its built-in tasking constructs
  * Erlang, through lightweight processes that communicate by message passing
  * Java, through threads and the java.util.concurrent library
  * Message Passing Interface (MPI)
  * Occam, whose channels follow the communicating sequential processes model
  * OpenMP, through compiler directives that mark parallel regions
  * Parallel Virtual Machine (PVM)
