<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://debianws.lexgopc.com/wiki143/index.php?action=history&amp;feed=atom&amp;title=Embarrassingly_parallel</id>
	<title>Embarrassingly parallel - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://debianws.lexgopc.com/wiki143/index.php?action=history&amp;feed=atom&amp;title=Embarrassingly_parallel"/>
	<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Embarrassingly_parallel&amp;action=history"/>
	<updated>2026-05-05T23:53:25Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>http://debianws.lexgopc.com/wiki143/index.php?title=Embarrassingly_parallel&amp;diff=1286229&amp;oldid=prev</id>
		<title>95.157.5.206: /* Examples */Monte Carlo Analysis is not a commonly used term. The main page on this topic has the Title Monte Carlo Method.</title>
		<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Embarrassingly_parallel&amp;diff=1286229&amp;oldid=prev"/>
		<updated>2025-03-29T22:35:06Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;Examples: &lt;/span&gt;Monte Carlo Analysis is not a commonly used term. The main page on this topic has the Title Monte Carlo Method.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{Short description|Parallel computing, a problem which is able to be trivially divided into parallelized tasks}}&lt;br /&gt;
In [[parallel computing]], an &amp;#039;&amp;#039;&amp;#039;embarrassingly parallel&amp;#039;&amp;#039;&amp;#039; workload or problem (also called &amp;#039;&amp;#039;&amp;#039;embarrassingly parallelizable&amp;#039;&amp;#039;&amp;#039;, &amp;#039;&amp;#039;&amp;#039;perfectly parallel&amp;#039;&amp;#039;&amp;#039;, &amp;#039;&amp;#039;&amp;#039;delightfully parallel&amp;#039;&amp;#039;&amp;#039; or &amp;#039;&amp;#039;&amp;#039;pleasingly parallel&amp;#039;&amp;#039;&amp;#039;) is one where little or no effort is needed to split the problem into a number of parallel tasks.&amp;lt;ref&amp;gt;{{cite book|last1=Herlihy|first1=Maurice|last2=Shavit|first2=Nir|title=The Art of Multiprocessor Programming, Revised Reprint|date=2012|publisher=Elsevier|isbn=9780123977953|page=14|edition=revised|url=https://books.google.com/books?id=vfvPrSz7R7QC&amp;amp;q=embarrasingly|access-date=28 February 2016|quote=Some computational problems are “embarrassingly parallel”: they can easily be divided into components that can be executed concurrently.}}&amp;lt;/ref&amp;gt; This is because the parallel tasks have little or no need to communicate with one another or to exchange intermediate results.&amp;lt;ref name=dbpp&amp;gt;Section 1.4.4 of: {{cite book&lt;br /&gt;
 |url=http://www.mcs.anl.gov/~itf/dbpp/text/node10.html &lt;br /&gt;
 |title=Designing and Building Parallel Programs &lt;br /&gt;
 |author=Foster, Ian &lt;br /&gt;
 |publisher=Addison–Wesley&lt;br /&gt;
 |archive-url=https://web.archive.org/web/20110301095228/http://www.mcs.anl.gov/~itf/dbpp/text/node10.html&lt;br /&gt;
 |archive-date=2011-03-01&lt;br /&gt;
 |year=1995 &lt;br /&gt;
 |url-status=dead&lt;br /&gt;
 |isbn=9780201575941&lt;br /&gt;
}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These differ from [[distributed computing]] problems, which need communication between tasks, especially communication of intermediate results. They are easier to perform on [[server farm]]s which lack the special infrastructure used in a true [[supercomputer]] cluster. They are well-suited to large, Internet-based [[volunteer computing]] platforms such as [[BOINC]], and suffer less from [[parallel slowdown]]. The opposite of embarrassingly parallel problems are [[inherently serial problem]]s, which cannot be parallelized at all.&lt;br /&gt;
&lt;br /&gt;
A common example of an embarrassingly parallel problem is 3D video rendering handled by a [[graphics processing unit]], where each frame (forward method) or pixel ([[Ray tracing (graphics)|ray tracing]] method) can be handled with no interdependency.&amp;lt;ref name=&amp;quot;ChalmersReinhard2011&amp;quot;&amp;gt;{{cite book|author1=Alan Chalmers|author2=Erik Reinhard|author3=Tim Davis|title=Practical Parallel Rendering|url=https://books.google.com/books?id=loxhtzzG1FYC&amp;amp;q=%22embarrassingly+parallel%22|date=21 March 2011|publisher=CRC Press|isbn=978-1-4398-6380-0}}&amp;lt;/ref&amp;gt; Some forms of [[password cracking]] are other embarrassingly parallel tasks that are easily distributed on [[central processing unit]]s, [[CPU core]]s, or clusters.&lt;br /&gt;
&lt;br /&gt;
==Etymology==&lt;br /&gt;
&amp;quot;Embarrassingly&amp;quot; is used here to refer to parallelization problems which are &amp;quot;embarrassingly easy&amp;quot;.&amp;lt;ref&amp;gt;Matloff, Norman (2011). &amp;#039;&amp;#039;The Art of R Programming: A Tour of Statistical Software Design&amp;#039;&amp;#039;, p.347. No Starch. {{ISBN|9781593274108}}.&amp;lt;/ref&amp;gt; The term may imply embarrassment on the part of developers or compilers: &amp;quot;Because so many important problems remain unsolved mainly due to their intrinsic computational complexity, it would be embarrassing not to develop parallel implementations of polynomial [[homotopy]] continuation methods.&amp;quot;&amp;lt;ref&amp;gt;{{cite book|last1=Leykin|first1=Anton|last2=Verschelde|first2=Jan|last3=Zhuang|first3=Yan|title=Mathematical Software - ICMS 2006 |chapter=Parallel Homotopy Algorithms to Solve Polynomial Systems |year=2006 |volume=4151|pages=225–234|doi=10.1007/11832225_22|series=Lecture Notes in Computer Science|isbn=978-3-540-38084-9}}&amp;lt;/ref&amp;gt; The term is first found in the literature in a 1986 book on multiprocessors by [[MATLAB]]&amp;#039;s creator [[Cleve Moler]],&amp;lt;ref name=hcmp&amp;gt;{{cite book&lt;br /&gt;
| contribution = Matrix Computation on Distributed Memory Multiprocessors&lt;br /&gt;
| author = Moler, Cleve&lt;br /&gt;
| publisher = Society for Industrial and Applied Mathematics, Philadelphia&lt;br /&gt;
| title = Hypercube Multiprocessors&lt;br /&gt;
| editor-last = Heath&lt;br /&gt;
| editor-first = Michael T.&lt;br /&gt;
| year = 1986&lt;br /&gt;
| isbn = 978-0898712094&lt;br /&gt;
}}&amp;lt;/ref&amp;gt; who claims to have invented the term.&amp;lt;ref&amp;gt;[http://blogs.mathworks.com/cleve/2013/11/12/the-intel-hypercube-part-2-reposted/#096367ea-045e-4f28-8fa2-9f7db8fb7b01 The Intel hypercube part 2 reposted on Cleve&amp;#039;s Corner blog on The MathWorks website]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An alternative term, &amp;#039;&amp;#039;pleasingly parallel&amp;#039;&amp;#039;, has gained some use, perhaps to avoid the negative connotations of embarrassment in favor of a positive reflection on the parallelizability of the problems: &amp;quot;Of course, there is nothing embarrassing about these programs at all.&amp;quot;&amp;lt;ref&amp;gt;Kepner, Jeremy (2009). &amp;#039;&amp;#039;Parallel MATLAB for Multicore and Multinode Computers&amp;#039;&amp;#039;, p.12. SIAM. {{isbn|9780898716733}}.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Examples==&lt;br /&gt;
A trivial example involves serving static data. It would take very little effort to have many processing units produce the same set of bits. Indeed, the famous [[&amp;quot;Hello, World!&amp;quot; program|Hello World]] problem could easily be parallelized with few programming considerations or computational costs.&lt;br /&gt;
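&lt;br /&gt;
As a minimal illustrative sketch (assumed code, not part of the original article), such a task maps directly onto a parallel &amp;quot;map&amp;quot; over independent inputs:&lt;br /&gt;

```python
# Sketch: an embarrassingly parallel "map" over independent inputs.
# Each task depends only on its own input, so workers never communicate.
from multiprocessing import Pool

def render_greeting(name):
    # Entirely independent per-input work: no shared state, no messages.
    return f"Hello, {name}!"

if __name__ == "__main__":
    names = ["World", "Alice", "Bob", "Carol"]
    with Pool(processes=4) as pool:
        # Splitting the work is trivial: the pool just partitions the list.
        greetings = pool.map(render_greeting, names)
    print(greetings)
```

Because no task ever waits on another, adding workers scales the throughput almost linearly, which is exactly what makes the problem &amp;quot;embarrassingly&amp;quot; parallel.&lt;br /&gt;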
&lt;br /&gt;
Some examples of embarrassingly parallel problems include:&lt;br /&gt;
* [[Monte Carlo method]]&amp;lt;ref name=&amp;quot;Kontoghiorghes2005&amp;quot;&amp;gt;{{cite book|author=Erricos John Kontoghiorghes|title=Handbook of Parallel Computing and Statistics|url=https://books.google.com/books?id=BnNnKPkFH2kC&amp;amp;q=%22embarrassingly+parallel%22|date=21 December 2005|publisher=CRC Press|isbn=978-1-4200-2868-3}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Distributed relational database queries using [http://www.mysqlperformanceblog.com/2011/05/14/distributed-set-processing-with-shard-query/ distributed set processing].&lt;br /&gt;
* [[Numerical integration]]&amp;lt;ref name=&amp;quot;Deng2013&amp;quot;&amp;gt;{{cite book|author=Yuefan Deng|title=Applied Parallel Computing|url=https://books.google.com/books?id=YS9wvVeWrXgC&amp;amp;q=%22embarrassingly+parallel%22|year=2013|publisher=World Scientific|isbn=978-981-4307-60-4}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Bulk processing of unrelated files of a similar nature, such as resizing and converting the images in a photo gallery.&lt;br /&gt;
* The [[Mandelbrot set]], [[Perlin noise]] and similar images, where each point is calculated independently.&lt;br /&gt;
* [[Rendering (computer graphics)|Rendering]] of [[computer graphics]]. In [[computer animation]], each [[video frame|frame]] or pixel may be rendered independently {{xref|(see: [[Parallel rendering]])}}.&lt;br /&gt;
* Some [[brute-force search]]es in [[cryptography]].&amp;lt;ref&amp;gt;{{Cite journal|url=https://tools.ietf.org/html/rfc7914#page-2|title=The scrypt Password-Based Key Derivation Function|first1=Simon|last1=Josefsson|first2=Colin|last2=Percival|author-link2=Colin Percival|date=August 2016|website=tools.ietf.org|doi=10.17487/RFC7914 |access-date=2016-12-12}}&amp;lt;/ref&amp;gt; Notable real-world examples include [[distributed.net]] and [[proof-of-work]] systems used in [[cryptocurrency]].&lt;br /&gt;
* [[BLAST (biotechnology)|BLAST]] searches in [[bioinformatics]] with split databases.&amp;lt;ref&amp;gt;{{cite journal |last1=Mathog |first1=DR |title=Parallel BLAST on split databases. |journal=Bioinformatics |date=22 September 2003 |volume=19 |issue=14 |pages=1865–6 |doi=10.1093/bioinformatics/btg250 |pmid=14512366|doi-access=free }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Large-scale [[facial recognition system]]s that compare thousands of arbitrarily acquired faces (e.g., from security or surveillance video via [[closed-circuit television]]) with a similarly large number of previously stored faces (e.g., a &amp;#039;&amp;#039;[[rogues gallery]]&amp;#039;&amp;#039; or similar [[No Fly List|watch list]]).&amp;lt;ref&amp;gt;[http://lbrandy.com/blog/2008/10/how-we-made-our-face-recognizer-25-times-faster/ How we made our face recognizer 25 times faster] (developer blog post)&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Computer simulations comparing many independent scenarios.&lt;br /&gt;
* [[Genetic algorithm]]s.&amp;lt;ref name=&amp;quot;TsutsuiCollet2013&amp;quot;&amp;gt;{{cite book|author1=Shigeyoshi Tsutsui|author2=Pierre Collet|title=Massively Parallel Evolutionary Computation on GPGPUs|url=https://books.google.com/books?id=Hv68BAAAQBAJ&amp;amp;q=%22embarrassingly+parallel%22|date=5 December 2013|publisher=Springer Science &amp;amp; Business Media|isbn=978-3-642-37959-8}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* [[Statistical ensemble (mathematical physics)|Ensemble calculations]] of [[numerical weather prediction]].&lt;br /&gt;
* Event simulation and reconstruction in [[particle physics]].&lt;br /&gt;
* The [[marching squares]] algorithm.&lt;br /&gt;
* Sieving step of the [[quadratic sieve]] and the [[General number field sieve|number field sieve]].&lt;br /&gt;
* Tree growth step of the [[random forest]] machine learning technique.&lt;br /&gt;
* [[Discrete Fourier transform]] where each harmonic is independently calculated.&lt;br /&gt;
* [[Convolutional neural network]]s running on [[GPU]]s.&lt;br /&gt;
* Parallel search in [[constraint programming]]&amp;lt;ref name=&amp;quot;HamadiSais2018&amp;quot;&amp;gt;{{cite book|author1=Youssef Hamadi|author2=Lakhdar Sais|title=Handbook of Parallel Constraint Reasoning|url=https://books.google.com/books?id=w5JUDwAAQBAJ&amp;amp;q=%22embarrassingly+parallel%22|date=5 April 2018|publisher=Springer|isbn=978-3-319-63516-3}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
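&lt;br /&gt;
Several of the examples above (e.g., Monte Carlo methods, numerical integration, and ensemble calculations) share the same shape: many independent trials whose results are combined only at the end. As an illustrative sketch (assumed code, not drawn from the article), estimating π by the Monte Carlo method with independently seeded workers:&lt;br /&gt;

```python
# Sketch: Monte Carlo estimation of pi as an embarrassingly parallel job.
# Each worker runs its own trials; results are combined only at the end.
import random
from multiprocessing import Pool

def count_hits(args):
    seed, trials = args
    rng = random.Random(seed)          # independent random stream per worker
    hits = 0
    for _ in range(trials):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:       # point falls inside the quarter circle
            hits += 1
    return hits

def estimate_pi(workers=4, trials_per_worker=100_000):
    with Pool(workers) as pool:
        totals = pool.map(count_hits,
                          [(seed, trials_per_worker) for seed in range(workers)])
    # The only coordination is this final reduction over per-worker counts.
    return 4.0 * sum(totals) / (workers * trials_per_worker)

if __name__ == "__main__":
    print(estimate_pi())
```

The trials need no communication while running, so doubling the number of workers roughly halves the wall-clock time until the final sum.&lt;br /&gt;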
&lt;br /&gt;
==Implementations==&lt;br /&gt;
* In [[R (programming language)|R]] – The Simple Network of Workstations (SNOW) package implements a simple mechanism for using a set of workstations or a [[Beowulf cluster]] for embarrassingly parallel computations.&amp;lt;ref&amp;gt;[http://www.stat.uiowa.edu/~luke/R/cluster/cluster.html Simple Network of Workstations (SNOW) package]&amp;lt;/ref&amp;gt; Similar R packages include &amp;quot;future&amp;quot;, &amp;quot;parallel&amp;quot; and others.&lt;br /&gt;
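&lt;br /&gt;
For comparison, Python&amp;#039;s standard library offers a similar facility; the snippet below is an illustrative sketch (not drawn from the article) using the standard &amp;#039;&amp;#039;concurrent.futures&amp;#039;&amp;#039; module:&lt;br /&gt;

```python
# Sketch: the embarrassingly parallel pattern via Python's standard library,
# comparable in spirit to R's SNOW/"parallel" packages described above.
from concurrent.futures import ProcessPoolExecutor

def simulate(scenario_id):
    # Stand-in for one independent scenario; no inter-task communication.
    return scenario_id * scenario_id

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as ex:
        # map() farms the scenarios out to worker processes.
        results = list(ex.map(simulate, range(8)))
    print(results)
```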
&lt;br /&gt;
==See also==&lt;br /&gt;
*[[Amdahl&amp;#039;s law]] defines the value &amp;#039;&amp;#039;[[Amdahl&amp;#039;s law#Parallelization|P]]&amp;#039;&amp;#039;, which is almost or exactly equal to 1 for embarrassingly parallel problems.&lt;br /&gt;
*[[Cellular automaton]]&lt;br /&gt;
*[[Connection Machine]]&lt;br /&gt;
*[[CUDA|CUDA framework]]&lt;br /&gt;
*[[Manycore processor]]&lt;br /&gt;
*[[Map (parallel pattern)]]&lt;br /&gt;
*[[Massively parallel]]&lt;br /&gt;
*[[Multiprocessing]]&lt;br /&gt;
*[[Parallel computing]]&lt;br /&gt;
*[[Process-oriented programming]]&lt;br /&gt;
*[[Shared-nothing architecture]] (SN)&lt;br /&gt;
*[[Symmetric multiprocessing]] (SMP)&lt;br /&gt;
*[[Vector processor]]&lt;br /&gt;
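&lt;br /&gt;
For the Amdahl&amp;#039;s law entry above: with parallel fraction &amp;#039;&amp;#039;P&amp;#039;&amp;#039; and &amp;#039;&amp;#039;N&amp;#039;&amp;#039; processors, the predicted speedup is&lt;br /&gt;

```latex
S(N) = \frac{1}{(1 - P) + \frac{P}{N}}
```

so as &amp;#039;&amp;#039;P&amp;#039;&amp;#039; approaches 1 (the embarrassingly parallel limit), &amp;#039;&amp;#039;S(N)&amp;#039;&amp;#039; approaches &amp;#039;&amp;#039;N&amp;#039;&amp;#039;, i.e., nearly linear scaling.&lt;br /&gt;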
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==External links==&lt;br /&gt;
{{Wiktionary}}&lt;br /&gt;
&lt;br /&gt;
*[http://www.phy.duke.edu/~rgb/Beowulf/beowulf_book/beowulf_book/node30.html Embarrassingly Parallel Computations], Engineering a Beowulf-style Compute Cluster&lt;br /&gt;
*&amp;quot;[http://www.cs.ucsb.edu/~gilbert/reports/hpec04.pdf Star-P: High Productivity Parallel Computing]&amp;quot;&lt;br /&gt;
&lt;br /&gt;
{{Parallel computing}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Applications of distributed computing]]&lt;br /&gt;
[[Category:Distributed computing problems]]&lt;br /&gt;
[[Category:Parallel computing]]&lt;/div&gt;</summary>
		<author><name>95.157.5.206</name></author>
	</entry>
</feed>