<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://debianws.lexgopc.com/wiki143/index.php?action=history&amp;feed=atom&amp;title=Parallel_computing</id>
	<title>Parallel computing - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://debianws.lexgopc.com/wiki143/index.php?action=history&amp;feed=atom&amp;title=Parallel_computing"/>
	<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Parallel_computing&amp;action=history"/>
	<updated>2026-04-30T14:13:48Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>http://debianws.lexgopc.com/wiki143/index.php?title=Parallel_computing&amp;diff=4487666&amp;oldid=prev</id>
		<title>imported&gt;Citation bot: Add: arxiv, bibcode. | Use this bot. Report bugs. | Suggested by Abductive | #UCB_toolbar</title>
		<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Parallel_computing&amp;diff=4487666&amp;oldid=prev"/>
		<updated>2025-12-31T08:41:56Z</updated>

		<summary type="html">&lt;p&gt;Add: arxiv, bibcode. | &lt;a href=&quot;/wiki143/index.php?title=En:WP:UCB&amp;amp;action=edit&amp;amp;redlink=1&quot; class=&quot;new&quot; title=&quot;En:WP:UCB (page does not exist)&quot;&gt;Use this bot&lt;/a&gt;. &lt;a href=&quot;/wiki143/index.php?title=En:WP:DBUG&amp;amp;action=edit&amp;amp;redlink=1&quot; class=&quot;new&quot; title=&quot;En:WP:DBUG (page does not exist)&quot;&gt;Report bugs&lt;/a&gt;. | Suggested by Abductive | #UCB_toolbar&lt;/p&gt;
&lt;a href=&quot;http://debianws.lexgopc.com/wiki143/index.php?title=Parallel_computing&amp;amp;diff=4487666&amp;amp;oldid=3400775&quot;&gt;Show changes&lt;/a&gt;</summary>
		<author><name>imported&gt;Citation bot</name></author>
	</entry>
	<entry>
		<id>http://debianws.lexgopc.com/wiki143/index.php?title=Parallel_computing&amp;diff=3400775&amp;oldid=prev</id>
		<title>imported&gt;DrKay: CS1 errors: website ignored</title>
		<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Parallel_computing&amp;diff=3400775&amp;oldid=prev"/>
		<updated>2025-10-25T11:57:58Z</updated>

		<summary type="html">&lt;p&gt;CS1 errors: website ignored&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Previous revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 11:57, 25 October 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l35&quot;&gt;Line 35:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 35:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Optimally, the [[speedup]] from parallelization would be linear—doubling the number of processing elements should halve the runtime, and doubling it a second time should again halve the runtime. However, very few parallel algorithms achieve optimal speedup. Most of them have a near-linear speedup for small numbers of processing elements, which flattens out into a constant value for large numbers of processing elements.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Optimally, the [[speedup]] from parallelization would be linear—doubling the number of processing elements should halve the runtime, and doubling it a second time should again halve the runtime. However, very few parallel algorithms achieve optimal speedup. Most of them have a near-linear speedup for small numbers of processing elements, which flattens out into a constant value for large numbers of processing elements.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The maximum potential speedup of an overall system can be calculated by [[Amdahl&#039;s law]].&amp;lt;ref name=&quot;:02&quot;&amp;gt;{{Citation |last=Bakos |first=Jason D. |title=Chapter 2 - Multicore and data-level optimization: OpenMP and SIMD |date=2016-01-01 |work=Embedded Systems |pages=49–103 |editor-last=Bakos |editor-first=Jason D. |url=https://linkinghub.elsevier.com/retrieve/pii/B978012800342800002X |access-date=2024-11-18 |place=Boston |publisher=Morgan Kaufmann |doi=10.1016/b978-0-12-800342-8.00002-x |isbn=978-0-12-800342-8|url-access=subscription }}&amp;lt;/ref&amp;gt; Amdahl&#039;s Law indicates that optimal performance improvement is achieved by balancing enhancements to both parallelizable and non-parallelizable components of a task. Furthermore, it reveals that increasing the number of processors yields diminishing returns, with negligible speedup gains beyond a certain point. 
&amp;lt;ref&amp;gt;{{Cite book |title=The Art of Multiprocessor Programming, Revised Reprint |date=22 May 2012 |publisher=Morgan Kaufmann |isbn=9780123973375}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{Cite book |title=Programming Many-Core Chips |isbn=9781441997395 |last1=Vajda |first1=András |date=10 June 2011 |publisher=Springer}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The maximum potential speedup of an overall system can be calculated by [[Amdahl&#039;s law]].&amp;lt;ref name=&quot;:02&quot;&amp;gt;{{Citation |last=Bakos |first=Jason D. |title=Chapter 2 - Multicore and data-level optimization: OpenMP and SIMD |date=2016-01-01 |work=Embedded Systems |pages=49–103 |editor-last=Bakos |editor-first=Jason D. |url=https://linkinghub.elsevier.com/retrieve/pii/B978012800342800002X |access-date=2024-11-18 |place=Boston |publisher=Morgan Kaufmann |doi=10.1016/b978-0-12-800342-8.00002-x |isbn=978-0-12-800342-8|url-access=subscription }}&amp;lt;/ref&amp;gt; Amdahl&#039;s Law indicates that optimal performance improvement is achieved by balancing enhancements to both parallelizable and non-parallelizable components of a task. Furthermore, it reveals that increasing the number of processors yields diminishing returns, with negligible speedup gains beyond a certain point.&amp;lt;ref&amp;gt;{{Cite book |title=The Art of Multiprocessor Programming, Revised Reprint |date=22 May 2012 |publisher=Morgan Kaufmann |isbn=9780123973375}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{Cite book |title=Programming Many-Core Chips |isbn=9781441997395 |last1=Vajda |first1=András |date=10 June 2011 |publisher=Springer}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Amdahl&amp;#039;s Law has limitations, including assumptions of fixed workload, neglecting [[inter-process communication]] and [[Synchronization (computer science)|synchronization]] overheads, primarily focusing on computational aspect and ignoring extrinsic factors such as data persistence, I/O operations, and memory access overheads.&amp;lt;ref&amp;gt;{{Cite book |last=Amdahl |first=Gene M. |chapter=Validity of the single processor approach to achieving large scale computing capabilities |date=1967-04-18 |title=Proceedings of the April 18-20, 1967, spring joint computer conference on - AFIPS &amp;#039;67 (Spring) |chapter-url=https://dl.acm.org/doi/10.1145/1465482.1465560 |location=New York, NY, USA |publisher=Association for Computing Machinery |pages=483–485 |doi=10.1145/1465482.1465560 |isbn=978-1-4503-7895-6}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;:1&amp;quot;&amp;gt;{{Cite book |title=Computer Architecture: A Quantitative Approach |date=2003 |publisher=Morgan Kaufmann |isbn=978-8178672663}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{Cite book |title=Parallel Computer Architecture A Hardware/Software Approach |date=1999 |publisher=Elsevier Science |isbn=9781558603431}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Amdahl&amp;#039;s Law has limitations, including assumptions of fixed workload, neglecting [[inter-process communication]] and [[Synchronization (computer science)|synchronization]] overheads, primarily focusing on computational aspect and ignoring extrinsic factors such as data persistence, I/O operations, and memory access overheads.&amp;lt;ref&amp;gt;{{Cite book |last=Amdahl |first=Gene M. |chapter=Validity of the single processor approach to achieving large scale computing capabilities |date=1967-04-18 |title=Proceedings of the April 18-20, 1967, spring joint computer conference on - AFIPS &amp;#039;67 (Spring) |chapter-url=https://dl.acm.org/doi/10.1145/1465482.1465560 |location=New York, NY, USA |publisher=Association for Computing Machinery |pages=483–485 |doi=10.1145/1465482.1465560 |isbn=978-1-4503-7895-6}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;:1&amp;quot;&amp;gt;{{Cite book |title=Computer Architecture: A Quantitative Approach |date=2003 |publisher=Morgan Kaufmann |isbn=978-8178672663}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{Cite book |title=Parallel Computer Architecture A Hardware/Software Approach |date=1999 |publisher=Elsevier Science |isbn=9781558603431}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l137&quot;&gt;Line 137:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 137:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  | first = Andrew S.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  | first = Andrew S.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  | date = 2002-02-01&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  | date = 2002-02-01&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt; | website = Informit&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-added&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  | publisher = Pearson Education, Informit&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  | publisher = Pearson Education, Informit&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  | access-date = 2018-05-10&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;  | access-date = 2018-05-10&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l208&quot;&gt;Line 208:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 207:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Disadvantages ==&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Disadvantages ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Parallel computing can incur significant overhead in practice, primarily due to the costs associated with merging data from multiple processes. Specifically, inter-process communication and synchronization can lead to overheads that are substantially higher—often by two or more orders of magnitude—compared to processing the same data on a single thread. &amp;lt;ref&amp;gt;{{Cite book |title=Operating System Concepts |isbn=978-0470128725 |last1=Silberschatz |first1=Abraham |last2=Galvin |first2=Peter B. |last3=Gagne |first3=Greg |date=29 July 2008 |publisher=Wiley }}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{Cite book |title=Computer Organization and Design MIPS Edition: The Hardware/Software Interface (The Morgan Kaufmann Series in Computer Architecture and Design) |date=2013 |publisher=Morgan Kaufmann |isbn=978-0124077263}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{Cite book |title=Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers |date=2005 |publisher=Pearson |isbn=978-0131405639}}&amp;lt;/ref&amp;gt; Therefore, the overall improvement should be carefully evaluated.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Parallel computing can incur significant overhead in practice, primarily due to the costs associated with merging data from multiple processes. Specifically, inter-process communication and synchronization can lead to overheads that are substantially higher—often by two or more orders of magnitude—compared to processing the same data on a single thread.&amp;lt;ref&amp;gt;{{Cite book |title=Operating System Concepts |isbn=978-0470128725 |last1=Silberschatz |first1=Abraham |last2=Galvin |first2=Peter B. |last3=Gagne |first3=Greg |date=29 July 2008 |publisher=Wiley }}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{Cite book |title=Computer Organization and Design MIPS Edition: The Hardware/Software Interface (The Morgan Kaufmann Series in Computer Architecture and Design) |date=2013 |publisher=Morgan Kaufmann |isbn=978-0124077263}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{Cite book |title=Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers |date=2005 |publisher=Pearson |isbn=978-0131405639}}&amp;lt;/ref&amp;gt; Therefore, the overall improvement should be carefully evaluated.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==Granularity==&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==Granularity==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l270&quot;&gt;Line 270:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 269:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====Distributed computing====&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====Distributed computing====&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{{main|Distributed computing}}&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{{main|Distributed computing}}&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;A distributed computer (also known as a distributed memory multiprocessor) is a distributed memory computer system in which the processing elements are connected by a network. Distributed computers are highly scalable. The terms &quot;[[concurrent computing]]&quot;, &quot;parallel computing&quot;, and &quot;distributed computing&quot; have a lot of overlap, and no clear distinction exists between them.&amp;lt;ref&amp;gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[&lt;/del&gt;Distributed &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;computing#CITEREFGhosh2007&lt;/del&gt;|&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Ghosh (&lt;/del&gt;2007&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;)]], p&lt;/del&gt;. &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;10&lt;/del&gt;. &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[&lt;/del&gt;Distributed computing&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;#CITEREFKeidar2008&lt;/del&gt;|&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Keidar (2008)&lt;/del&gt;]].&amp;lt;/ref&amp;gt; The same system may be characterized both as &quot;parallel&quot; and &quot;distributed&quot;; the processors in a typical distributed system run concurrently in parallel.&amp;lt;ref&amp;gt;[[&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Distributed computing#CITEREFLynch1996&lt;/del&gt;|&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Lynch (&lt;/del&gt;1996&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;)]], p&lt;/del&gt;. &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;xix, &lt;/del&gt;1–2&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;. [[Distributed computing#CITEREFPeleg2000&lt;/del&gt;|Peleg (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;2000&lt;/del&gt;)]]&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;, p&lt;/del&gt;. 1&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;.&lt;/del&gt;&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;A distributed computer (also known as a distributed memory multiprocessor) is a distributed memory computer system in which the processing elements are connected by a network. Distributed computers are highly scalable. The terms &quot;[[concurrent computing]]&quot;, &quot;parallel computing&quot;, and &quot;distributed computing&quot; have a lot of overlap, and no clear distinction exists between them.&amp;lt;ref&amp;gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;{{cite book |last=Ghosh |first=Sukumar |title=&lt;/ins&gt;Distributed &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Systems – An Algorithmic Approach |publisher=Chapman &amp;amp; Hall/CRC &lt;/ins&gt;|&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;year=&lt;/ins&gt;2007 &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;|isbn=978-1-58488-564-1 |page=10}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{cite journal |doi=10&lt;/ins&gt;.&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;1145/1466390&lt;/ins&gt;.&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;1466402 |last1=Keidar |first1=Idit |author1-link=Idit Keidar |title=&lt;/ins&gt;Distributed computing &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;column 32 – The year in review &lt;/ins&gt;|&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;journal=[[ACM SIGACT News&lt;/ins&gt;]] &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;|volume=39 |issue=4 |year=2008 |pages=53–54 |url=http://webee.technion.ac&lt;/ins&gt;.&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;il/~idish/sigactNews/#column%2032 |citeseerx=10.1.1.116.1285 |s2cid=7607391 |access-date=2009-08-20 |archive-date=2014-01-16 |archive-url=https://web.archive.org/web/20140116123634/http://webee.technion.ac.il/~idish/sigactNews/#column%2032 |url-status=dead}}&lt;/ins&gt;&amp;lt;/ref&amp;gt; The same system may be characterized both as &quot;parallel&quot; and &quot;distributed&quot;; the processors in a typical distributed system run concurrently in parallel.&amp;lt;ref&amp;gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;{{citation |last=Lynch |first=Nancy A. |author-link=Nancy Lynch |title=Distributed Algorithms |publisher=&lt;/ins&gt;[[&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Morgan Kaufmann]] &lt;/ins&gt;|&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;year=&lt;/ins&gt;1996 &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;|isbn=978-1-55860-348-6 |url-access=registration |url=https://archive&lt;/ins&gt;.&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;org/details/distributedalgor0000lync |page=&lt;/ins&gt;1–2&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{cite book |last=Peleg |first=David &lt;/ins&gt;|&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;author-link=David &lt;/ins&gt;Peleg (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;scientist&lt;/ins&gt;) &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;|title=Distributed Computing: A Locality-Sensitive Approach |publisher=[[SIAM&lt;/ins&gt;]] &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;|year=2000 |isbn=978-0-89871-464-7 |url=http://www.ec-securehost.com/SIAM/DT05.html |access-date=2009-07-16 |archive-url=https://web&lt;/ins&gt;.&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;archive.org/web/20090806070332/http://www.ec-securehost.com/SIAM/DT05.html |archive-date=2009-08-06 |url-status=dead |page=&lt;/ins&gt;1&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;}}&lt;/ins&gt;&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=====Cluster computing=====&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=====Cluster computing=====&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l306&quot;&gt;Line 306:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 305:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Reconfigurable computing]] is the use of a [[field-programmable gate array]] (FPGA) as a co-processor to a general-purpose computer. An FPGA is, in essence, a computer chip that can rewire itself for a given task.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Reconfigurable computing]] is the use of a [[field-programmable gate array]] (FPGA) as a co-processor to a general-purpose computer. An FPGA is, in essence, a computer chip that can rewire itself for a given task.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;FPGAs can be programmed with [[hardware description language]]s such as [[VHDL]]&amp;lt;ref&amp;gt;{{Cite journal|last1=Valueva|first1=Maria|last2=Valuev|first2=Georgii|last3=Semyonova|first3=Nataliya|last4=Lyakhov|first4=Pavel|last5=Chervyakov|first5=Nikolay|last6=Kaplun|first6=Dmitry|last7=Bogaevskiy|first7=Danil|date=2019-06-20|title=Construction of Residue Number System Using Hardware Efficient Diagonal Function|journal=Electronics|language=en|volume=8|issue=6|pages=694|doi=10.3390/electronics8060694|issn=2079-9292|quote=All simulated circuits were described in very high speed integrated circuit (VHSIC) hardware description language (VHDL). Hardware modeling was performed on Xilinx FPGA Artix 7 xc7a200tfbg484-2.|doi-access=free}}&amp;lt;/ref&amp;gt; or [[Verilog]].&amp;lt;ref&amp;gt;{{Cite book|last1=Gupta|first1=Ankit|last2=Suneja|first2=Kriti|title=2020 4th International Conference on Intelligent Computing and Control Systems (ICICCS) |chapter=Hardware Design of Approximate Matrix Multiplier based on FPGA in Verilog |date=May 2020&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;|chapter-url=https://ieeexplore.ieee.org/document/9121004&lt;/del&gt;|location=Madurai, India|publisher=IEEE|pages=496–498|doi=10.1109/ICICCS48265.2020.9121004|isbn=978-1-7281-4876-2|s2cid=219990653}}&amp;lt;/ref&amp;gt; Several vendors have created [[C to HDL]] languages that attempt to emulate the syntax and semantics of the [[C programming language]], with which most programmers are familiar. The best known C to HDL languages are [[Mitrionics|Mitrion-C]], [[Impulse C]], and [[Handel-C]]. Specific subsets of [[SystemC]] based on C++ can also be used for this purpose.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;FPGAs can be programmed with [[hardware description language]]s such as [[VHDL]]&amp;lt;ref&amp;gt;{{Cite journal|last1=Valueva|first1=Maria|last2=Valuev|first2=Georgii|last3=Semyonova|first3=Nataliya|last4=Lyakhov|first4=Pavel|last5=Chervyakov|first5=Nikolay|last6=Kaplun|first6=Dmitry|last7=Bogaevskiy|first7=Danil|date=2019-06-20|title=Construction of Residue Number System Using Hardware Efficient Diagonal Function|journal=Electronics|language=en|volume=8|issue=6|pages=694|doi=10.3390/electronics8060694|issn=2079-9292|quote=All simulated circuits were described in very high speed integrated circuit (VHSIC) hardware description language (VHDL). Hardware modeling was performed on Xilinx FPGA Artix 7 xc7a200tfbg484-2.|doi-access=free}}&amp;lt;/ref&amp;gt; or [[Verilog]].&amp;lt;ref&amp;gt;{{Cite book|last1=Gupta|first1=Ankit|last2=Suneja|first2=Kriti|title=2020 4th International Conference on Intelligent Computing and Control Systems (ICICCS) |chapter=Hardware Design of Approximate Matrix Multiplier based on FPGA in Verilog |date=May 2020|location=Madurai, India|publisher=IEEE|pages=496–498|doi=10.1109/ICICCS48265.2020.9121004|isbn=978-1-7281-4876-2|s2cid=219990653}}&amp;lt;/ref&amp;gt; Several vendors have created [[C to HDL]] languages that attempt to emulate the syntax and semantics of the [[C programming language]], with which most programmers are familiar. The best known C to HDL languages are [[Mitrionics|Mitrion-C]], [[Impulse C]], and [[Handel-C]]. Specific subsets of [[SystemC]] based on C++ can also be used for this purpose.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;AMD&amp;#039;s decision to open its [[HyperTransport]] technology to third-party vendors has become the enabling technology for high-performance reconfigurable computing.&amp;lt;ref name=&amp;quot;DAmour&amp;quot;&amp;gt;D&amp;#039;Amour, Michael R., Chief Operating Officer, DRC Computer Corporation. &amp;quot;Standard Reconfigurable Computing&amp;quot;. Invited speaker at the University of Delaware, February 28, 2007.&amp;lt;/ref&amp;gt; According to Michael R. D&amp;#039;Amour, Chief Operating Officer of DRC Computer Corporation, &amp;quot;when we first walked into AMD, they called us &amp;#039;the [[CPU socket|socket]] stealers.&amp;#039; Now they call us their partners.&amp;quot;&amp;lt;ref name=&amp;quot;DAmour&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;AMD&amp;#039;s decision to open its [[HyperTransport]] technology to third-party vendors has become the enabling technology for high-performance reconfigurable computing.&amp;lt;ref name=&amp;quot;DAmour&amp;quot;&amp;gt;D&amp;#039;Amour, Michael R., Chief Operating Officer, DRC Computer Corporation. &amp;quot;Standard Reconfigurable Computing&amp;quot;. Invited speaker at the University of Delaware, February 28, 2007.&amp;lt;/ref&amp;gt; According to Michael R. D&amp;#039;Amour, Chief Operating Officer of DRC Computer Corporation, &amp;quot;when we first walked into AMD, they called us &amp;#039;the [[CPU socket|socket]] stealers.&amp;#039; Now they call us their partners.&amp;quot;&amp;lt;ref name=&amp;quot;DAmour&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>imported&gt;DrKay</name></author>
	</entry>
	<entry>
		<id>http://debianws.lexgopc.com/wiki143/index.php?title=Parallel_computing&amp;diff=661054&amp;oldid=prev</id>
		<title>imported&gt;Headbomb: Altered template type. Add: chapter, title. | Use this tool. Report bugs. | #UCB_Gadget</title>
		<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Parallel_computing&amp;diff=661054&amp;oldid=prev"/>
		<updated>2025-06-04T19:27:03Z</updated>

		<summary type="html">&lt;p&gt;Altered template type. Add: chapter, title. | &lt;a href=&quot;/wiki143/index.php?title=En:WP:UCB&amp;amp;action=edit&amp;amp;redlink=1&quot; class=&quot;new&quot; title=&quot;En:WP:UCB (page does not exist)&quot;&gt;Use this tool&lt;/a&gt;. &lt;a href=&quot;/wiki143/index.php?title=En:WP:DBUG&amp;amp;action=edit&amp;amp;redlink=1&quot; class=&quot;new&quot; title=&quot;En:WP:DBUG (page does not exist)&quot;&gt;Report bugs&lt;/a&gt;. | #UCB_Gadget&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Previous revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 19:27, 4 June 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l429&quot;&gt;Line 429:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 429:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==Further reading==&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==Further reading==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* {{cite &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;journal&lt;/del&gt;|last1=Rodriguez|first1=C.|last2=Villagra|first2=M.|last3=Baran|first3=B.|title=&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Asynchronous team algorithms for Boolean Satisfiability|journal=&lt;/del&gt;Bio-Inspired Models of Network, Information and Computing Systems&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;, 2007. Bionetics 2007. 2nd&lt;/del&gt;|date=29 August 2008|pages=66–69|doi=10.1109/BIMNICS.2007.4610083|s2cid=15185219}}&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* {{cite &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;book&lt;/ins&gt;|last1=Rodriguez|first1=C.|last2=Villagra|first2=M.|last3=Baran|first3=B. |title=&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;2007 2nd &lt;/ins&gt;Bio-Inspired Models of Network, Information and Computing Systems &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;|chapter=Asynchronous team algorithms for Boolean Satisfiability &lt;/ins&gt;|date=29 August 2008|pages=66–69|doi=10.1109/BIMNICS.2007.4610083|s2cid=15185219}}&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* Sechin, A.; Parallel Computing in Photogrammetry. GIM International. #1, 2016, pp.&amp;amp;nbsp;21–23.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* Sechin, A.; Parallel Computing in Photogrammetry. GIM International. #1, 2016, pp.&amp;amp;nbsp;21–23.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>imported&gt;Headbomb</name></author>
	</entry>
	<entry>
		<id>http://debianws.lexgopc.com/wiki143/index.php?title=Parallel_computing&amp;diff=106674&amp;oldid=prev</id>
		<title>imported&gt;OAbot: Open access bot: url-access updated in citation with #oabot.</title>
		<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Parallel_computing&amp;diff=106674&amp;oldid=prev"/>
		<updated>2025-05-26T15:29:44Z</updated>

		<summary type="html">&lt;p&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/OABOT&quot; class=&quot;extiw&quot; title=&quot;wikipedia:OABOT&quot;&gt;Open access bot&lt;/a&gt;: url-access updated in citation with #oabot.&lt;/p&gt;
&lt;a href=&quot;http://debianws.lexgopc.com/wiki143/index.php?title=Parallel_computing&amp;amp;diff=106674&quot;&gt;Show changes&lt;/a&gt;</summary>
		<author><name>imported&gt;OAbot</name></author>
	</entry>
</feed>