<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://debianws.lexgopc.com/wiki143/index.php?action=history&amp;feed=atom&amp;title=CPU_cache</id>
	<title>CPU cache - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://debianws.lexgopc.com/wiki143/index.php?action=history&amp;feed=atom&amp;title=CPU_cache"/>
	<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=CPU_cache&amp;action=history"/>
	<updated>2026-05-04T21:23:12Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>http://debianws.lexgopc.com/wiki143/index.php?title=CPU_cache&amp;diff=5278267&amp;oldid=prev</id>
		<title>~2025-31944-65: /* Address translation */The index only specifies which set the data might reside in; we need the tag to know if any actual line in that set would have our data and if we are using a physical tag then we need to wait until the physical address is available.</title>
		<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=CPU_cache&amp;diff=5278267&amp;oldid=prev"/>
		<updated>2025-12-18T23:15:01Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;Address translation: &lt;/span&gt;The index only specifies which set the data might reside in; we need the tag to know if any actual line in that set would have our data and if we are using a physical tag then we need to wait until the physical address is available.&lt;/p&gt;
&lt;a href=&quot;http://debianws.lexgopc.com/wiki143/index.php?title=CPU_cache&amp;amp;diff=5278267&amp;amp;oldid=2031286&quot;&gt;Show changes&lt;/a&gt;</summary>
		<author><name>~2025-31944-65</name></author>
	</entry>
	<entry>
		<id>http://debianws.lexgopc.com/wiki143/index.php?title=CPU_cache&amp;diff=2031286&amp;oldid=prev</id>
		<title>imported&gt;GreenDevolution: Minor fix to make use of higher/lower level consistent</title>
		<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=CPU_cache&amp;diff=2031286&amp;oldid=prev"/>
		<updated>2025-06-25T00:25:31Z</updated>

		<summary type="html">&lt;p&gt;Minor fix to make use of higher/lower level consistent&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Previous revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 00:25, 25 June 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l21&quot;&gt;Line 21:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 21:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==History==&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==History==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:NeXTcube motherboard.jpg|thumb|[[Motherboard]] of a [[NeXTcube]] computer (1990). At the lower edge of the image left from the middle, there is the CPU [[Motorola 68040]] operated at 25 [[MHz]] with two separate level 1 caches of 4 KiB each on the chip, one for the instructions and one for data. The board has no external L2 cache.]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:NeXTcube motherboard.jpg|thumb|[[Motherboard]] of a [[NeXTcube]] computer (1990). At the lower edge of the image left from the middle, there is the CPU [[Motorola 68040]] operated at 25 [[MHz]] with two separate level 1 caches of 4 KiB each on the chip, one for the instructions and one for data. The board has no external L2 cache.]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Early examples of CPU caches include the [[Titan (1963 computer)|Atlas 2]]&amp;lt;ref&amp;gt;{{cite web|last=Landy|first=Barry|url=http://www.chilton-computing.org.uk/acl/technology/atlas50th/p005.htm|title=Atlas 2 at Cambridge Mathematical Laboratory (and Aldermaston and CAD Centre)|date=November 2012|quote=Two tunnel diode stores were developed at Cambridge; one, which worked very well, speeded up the fetching of operands, the other was intended to speed up the fetching of instructions. The idea was that most instructions are obeyed in sequence, so when an instruction was fetched that word was placed in the slave store in the location given by the fetch address modulo 32; the remaining bits of the fetch address were also stored. If the wanted word was in the slave it was read from there instead of main memory. This would give a major speedup to instruction loops up to 32 instructions long, and reduced effect for loops up to 64 words.}}&amp;lt;/ref&amp;gt; and the [[IBM System/360 Model 85]]&amp;lt;ref&amp;gt;{{cite web|url=http://www.bitsavers.org/pdf/ibm/360/functional_characteristics/A22-6916-1_360-85_funcChar_Jun68.pdf|title=IBM System/360 Model 85 Functional Characteristics|publisher=[[IBM]]|id=A22-6916-1|date=June 1968}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{cite journal|url=https://www.andrew.cmu.edu/course/15-440/assets/READINGS/liptay1968.pdf|last=Liptay|first=John S.|title=Structural aspects of the System/360 Model 85 - Part II The cache|journal=IBM Systems Journal|date=March 1968|volume=7|issue=1|pages=15–21|doi=10.1147/sj.71.0015}}&amp;lt;/ref&amp;gt; in the 1960s. The first CPUs that used a cache had only one level of cache; unlike later level 1 cache, it was not split into L1d (for data) and L1i (for instructions). Split L1 cache started in 1976 with the [[IBM 801]] CPU,&amp;lt;ref&amp;gt;{{cite journal|url=http://home.eng.iastate.edu/~zzhang/courses/cpre585-f03/reading/smith-csur82-cache.pdf|title=Cache Memories|last=Smith |first=Alan Jay|journal=Computing Surveys|volume=14|issue=3|date=September 1982|pages=473–530|doi=10.1145/356887.356892|s2cid=6023466}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{cite journal|title=Altering Computer Architecture is Way to Raise Throughput, Suggest IBM Researchers|journal=[[Electronics (magazine)|Electronics]]|volume=49|issue=25|date=December 1976|pages=30–31}}&amp;lt;/ref&amp;gt; became mainstream in the late 1980s, and in 1997 entered the embedded CPU market with the ARMv5TE. In 2015, even sub-dollar [[System on a chip|SoCs]] split the L1 cache. They also have L2 caches and, for larger processors, L3 caches as well. The L2 cache is usually not split, and acts as a common repository for the already split L1 cache. Every core of a [[multi-core processor]] has a dedicated L1 cache and is usually not shared between the cores. The L2 cache, and &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;higher&lt;/del&gt;-level caches, may be shared between the cores. L4 cache is currently uncommon, and is generally [[dynamic random-access memory]] (DRAM) on a separate die or chip, rather than [[static random-access memory]] (SRAM). An exception to this is when [[eDRAM]] is used for all levels of cache, down to L1. Historically L1 was also on a separate die, however bigger die sizes have allowed integration of it as well as other cache levels, with the possible exception of the last level. Each extra level of cache tends to be &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;bigger &lt;/del&gt;and &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;optimized differently&lt;/del&gt;.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Early examples of CPU caches include the [[Titan (1963 computer)|Atlas 2]]&amp;lt;ref&amp;gt;{{cite web|last=Landy|first=Barry|url=http://www.chilton-computing.org.uk/acl/technology/atlas50th/p005.htm|title=Atlas 2 at Cambridge Mathematical Laboratory (and Aldermaston and CAD Centre)|date=November 2012|quote=Two tunnel diode stores were developed at Cambridge; one, which worked very well, speeded up the fetching of operands, the other was intended to speed up the fetching of instructions. The idea was that most instructions are obeyed in sequence, so when an instruction was fetched that word was placed in the slave store in the location given by the fetch address modulo 32; the remaining bits of the fetch address were also stored. If the wanted word was in the slave it was read from there instead of main memory. This would give a major speedup to instruction loops up to 32 instructions long, and reduced effect for loops up to 64 words.}}&amp;lt;/ref&amp;gt; and the [[IBM System/360 Model 85]]&amp;lt;ref&amp;gt;{{cite web|url=http://www.bitsavers.org/pdf/ibm/360/functional_characteristics/A22-6916-1_360-85_funcChar_Jun68.pdf|title=IBM System/360 Model 85 Functional Characteristics|publisher=[[IBM]]|id=A22-6916-1|date=June 1968}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{cite journal|url=https://www.andrew.cmu.edu/course/15-440/assets/READINGS/liptay1968.pdf|last=Liptay|first=John S.|title=Structural aspects of the System/360 Model 85 - Part II The cache|journal=IBM Systems Journal|date=March 1968|volume=7|issue=1|pages=15–21|doi=10.1147/sj.71.0015}}&amp;lt;/ref&amp;gt; in the 1960s. The first CPUs that used a cache had only one level of cache; unlike later level 1 cache, it was not split into L1d (for data) and L1i (for instructions). Split L1 cache started in 1976 with the [[IBM 801]] CPU,&amp;lt;ref&amp;gt;{{cite journal|url=http://home.eng.iastate.edu/~zzhang/courses/cpre585-f03/reading/smith-csur82-cache.pdf|title=Cache Memories|last=Smith |first=Alan Jay|journal=Computing Surveys|volume=14|issue=3|date=September 1982|pages=473–530|doi=10.1145/356887.356892|s2cid=6023466}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{cite journal|title=Altering Computer Architecture is Way to Raise Throughput, Suggest IBM Researchers|journal=[[Electronics (magazine)|Electronics]]|volume=49|issue=25|date=December 1976|pages=30–31}}&amp;lt;/ref&amp;gt; became mainstream in the late 1980s, and in 1997 entered the embedded CPU market with the ARMv5TE. In 2015, even sub-dollar [[System on a chip|SoCs]] split the L1 cache. They also have L2 caches and, for larger processors, L3 caches as well. The L2 cache is usually not split, and acts as a common repository for the already split L1 cache. Every core of a [[multi-core processor]] has a dedicated L1 cache and is usually not shared between the cores. The L2 cache, and &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;lower&lt;/ins&gt;-level caches, may be shared between the cores. L4 cache is currently uncommon, and is generally [[dynamic random-access memory]] (DRAM) on a separate die or chip, rather than [[static random-access memory]] (SRAM). An exception to this is when [[eDRAM]] is used for all levels of cache, down to L1. Historically L1 was also on a separate die, however bigger die sizes have allowed integration of it as well as other cache levels, with the possible exception of the last level. Each extra level of cache tends to be &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;smaller &lt;/ins&gt;and &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;faster than the lower levels&lt;/ins&gt;.&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;lt;ref name=&quot;:0&quot; /&amp;gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Caches (like for RAM historically) have generally been sized in powers of: 2, 4, 8, 16 etc. [[Kibibyte|KiB]]; when up to [[Mebibyte|MiB]] sizes (i.e. for larger non-L1), very early on the pattern broke down, to allow for larger caches without being forced into the doubling-in-size paradigm, with e.g. [[Intel Core 2 Duo]] with 3&amp;amp;nbsp;MiB L2 cache in April 2008. This happened much later for L1 caches, as their size is generally still a small number of KiB. The [[IBM zEC12 (microprocessor)|IBM zEC12]] from 2012 is an exception however, to gain unusually large 96&amp;amp;nbsp;KiB L1 data cache for its time, and e.g. the [[IBM z13 (microprocessor)|IBM z13]] having a 96&amp;amp;nbsp;KiB L1 instruction cache (and 128&amp;amp;nbsp;KiB L1 data cache),&amp;lt;ref&amp;gt;{{cite web|last1=White|first1=Bill|last2=De Leon|first2=Cecilia A.|display-authors=etal |url=https://www.redbooks.ibm.com/redbooks/pdfs/sg248250.pdf|title=IBM z13 and IBM z13s Technical Introduction|page=20|date=March 2016|publisher=IBM}}&amp;lt;/ref&amp;gt; and Intel [[Ice Lake (microprocessor)|Ice Lake]]-based processors from 2018, having 48&amp;amp;nbsp;KiB L1 data cache and 48&amp;amp;nbsp;KiB L1 instruction cache. In 2020, some [[Intel Atom]] CPUs (with up to 24 cores) have (multiple of) 4.5&amp;amp;nbsp;MiB and 15&amp;amp;nbsp;MiB cache sizes.&amp;lt;ref&amp;gt;{{Cite press release|url=https://www.intel.com/content/www/us/en/newsroom/news/product-fact-sheet-accelerating-5g-network-infrastructure-core-edge.html|publisher=Intel Corporation |date=25 February 2020|title=Product Fact Sheet: Accelerating 5G Network Infrastructure, from the Core to the Edge|website=Intel Newsroom|quote=L1 cache of 32KB/core, L2 cache of 4.5MB per 4-core cluster and shared LLC cache up to 15MB.|language=en-US|access-date=2024-04-18}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{Cite web|url=https://www.anandtech.com/show/15544/intel-launches-atom-p5900-a-10nm-atom-for-radio-access-networks|title=Intel Launches Atom P5900: A 10nm Atom for Radio Access Networks|last=Smith|first=Ryan|website=AnandTech |access-date=2020-04-12}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Caches (like for RAM historically) have generally been sized in powers of: 2, 4, 8, 16 etc. [[Kibibyte|KiB]]; when up to [[Mebibyte|MiB]] sizes (i.e. for larger non-L1), very early on the pattern broke down, to allow for larger caches without being forced into the doubling-in-size paradigm, with e.g. [[Intel Core 2 Duo]] with 3&amp;amp;nbsp;MiB L2 cache in April 2008. This happened much later for L1 caches, as their size is generally still a small number of KiB. The [[IBM zEC12 (microprocessor)|IBM zEC12]] from 2012 is an exception however, to gain unusually large 96&amp;amp;nbsp;KiB L1 data cache for its time, and e.g. the [[IBM z13 (microprocessor)|IBM z13]] having a 96&amp;amp;nbsp;KiB L1 instruction cache (and 128&amp;amp;nbsp;KiB L1 data cache),&amp;lt;ref&amp;gt;{{cite web|last1=White|first1=Bill|last2=De Leon|first2=Cecilia A.|display-authors=etal |url=https://www.redbooks.ibm.com/redbooks/pdfs/sg248250.pdf|title=IBM z13 and IBM z13s Technical Introduction|page=20|date=March 2016|publisher=IBM}}&amp;lt;/ref&amp;gt; and Intel [[Ice Lake (microprocessor)|Ice Lake]]-based processors from 2018, having 48&amp;amp;nbsp;KiB L1 data cache and 48&amp;amp;nbsp;KiB L1 instruction cache. In 2020, some [[Intel Atom]] CPUs (with up to 24 cores) have (multiple of) 4.5&amp;amp;nbsp;MiB and 15&amp;amp;nbsp;MiB cache sizes.&amp;lt;ref&amp;gt;{{Cite press release|url=https://www.intel.com/content/www/us/en/newsroom/news/product-fact-sheet-accelerating-5g-network-infrastructure-core-edge.html|publisher=Intel Corporation |date=25 February 2020|title=Product Fact Sheet: Accelerating 5G Network Infrastructure, from the Core to the Edge|website=Intel Newsroom|quote=L1 cache of 32KB/core, L2 cache of 4.5MB per 4-core cluster and shared LLC cache up to 15MB.|language=en-US|access-date=2024-04-18}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{Cite web|url=https://www.anandtech.com/show/15544/intel-launches-atom-p5900-a-10nm-atom-for-radio-access-networks|title=Intel Launches Atom P5900: A 10nm Atom for Radio Access Networks|last=Smith|first=Ryan|website=AnandTech |access-date=2020-04-12}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l112&quot;&gt;Line 112:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 112:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The &amp;quot;size&amp;quot; of the cache is the amount of main memory data it can hold. This size can be calculated as the number of bytes stored in each data block times the number of blocks stored in the cache. (The tag, flag and [[ECC memory#Cache|error correction code]] bits are not included in the size,&amp;lt;ref&amp;gt;{{cite web |author=Sadler |first1=Nathan N. |last2=Sorin |first2=Daniel L. |year=2006 |title=Choosing an Error Protection Scheme for a Microprocessor&amp;#039;s L1 Data Cache |url=https://people.ee.duke.edu/~sorin/papers/iccd06_perc.pdf |page=4}}&amp;lt;/ref&amp;gt; although they do affect the physical area of a cache.)&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The &amp;quot;size&amp;quot; of the cache is the amount of main memory data it can hold. This size can be calculated as the number of bytes stored in each data block times the number of blocks stored in the cache. (The tag, flag and [[ECC memory#Cache|error correction code]] bits are not included in the size,&amp;lt;ref&amp;gt;{{cite web |author=Sadler |first1=Nathan N. |last2=Sorin |first2=Daniel L. |year=2006 |title=Choosing an Error Protection Scheme for a Microprocessor&amp;#039;s L1 Data Cache |url=https://people.ee.duke.edu/~sorin/papers/iccd06_perc.pdf |page=4}}&amp;lt;/ref&amp;gt; although they do affect the physical area of a cache.)&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;An effective memory address which goes along with the cache line (memory block) is split ([[Most significant bit|MSB]] to [[Least significant bit|LSB]]) into the tag, the index and the block offset.&amp;lt;ref&amp;gt;{{cite book |last1=Hennessy |first1=John L. |url=https://books.google.com/books?id=v3-1hVwHnHwC&amp;amp;q=Hennessey+%22block+offset%22&amp;amp;pg=PA120 |title=Computer Architecture: A Quantitative Approach |last2=Patterson |first2=David A. |publisher=Elsevier |year=2011 |isbn=978-0-12-383872-8 |page=B-9 |language=en}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{cite book |last1=Patterson |first1=David A. |url=https://books.google.com/books?id=3b63x-0P3_UC&amp;amp;q=Hennessey+%22block+offset%22&amp;amp;pg=PA484 |title=Computer Organization and Design: The Hardware/Software Interface |last2=Hennessy |first2=John L. |publisher=Morgan Kaufmann |year=2009 |isbn=978-0-12-374493-7 |page=484 |language=en}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;An effective memory address which goes along with the cache line (memory block) is split ([[Most significant bit|MSB]] to [[Least significant bit|LSB]]) into the tag, the index and the block offset.&amp;lt;ref &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;name=&quot;:0&quot;&lt;/ins&gt;&amp;gt;{{cite book |last1=Hennessy |first1=John L. |url=https://books.google.com/books?id=v3-1hVwHnHwC&amp;amp;q=Hennessey+%22block+offset%22&amp;amp;pg=PA120 |title=Computer Architecture: A Quantitative Approach |last2=Patterson |first2=David A. |publisher=Elsevier |year=2011 |isbn=978-0-12-383872-8 |page=B-9 |language=en}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{cite book |last1=Patterson |first1=David A. |url=https://books.google.com/books?id=3b63x-0P3_UC&amp;amp;q=Hennessey+%22block+offset%22&amp;amp;pg=PA484 |title=Computer Organization and Design: The Hardware/Software Interface |last2=Hennessy |first2=John L. |publisher=Morgan Kaufmann |year=2009 |isbn=978-0-12-374493-7 |page=484 |language=en}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{| style=&amp;quot;width:30%; text-align:center&amp;quot; border=&amp;quot;1&amp;quot;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{| style=&amp;quot;width:30%; text-align:center&amp;quot; border=&amp;quot;1&amp;quot;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l210&quot;&gt;Line 210:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 210:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Modern processors have multiple interacting on-chip caches. The operation of a particular cache can be completely specified by the cache size, the cache block size, the number of blocks in a set, the cache set replacement policy, and the cache write policy (write-through or write-back).&amp;lt;ref name=&amp;quot;ccs.neu.edu&amp;quot; /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Modern processors have multiple interacting on-chip caches. The operation of a particular cache can be completely specified by the cache size, the cache block size, the number of blocks in a set, the cache set replacement policy, and the cache write policy (write-through or write-back).&amp;lt;ref name=&amp;quot;ccs.neu.edu&amp;quot; /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;While all of the cache blocks in a particular cache are the same size and have the same associativity, typically the &quot;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;lower&lt;/del&gt;-level&quot; caches (called Level 1 cache) have a smaller number of blocks, smaller block size, and fewer blocks in a set, but have very short access times. &quot;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Higher&lt;/del&gt;-level&quot; caches (i.e. Level 2 and &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;above&lt;/del&gt;) have progressively larger numbers of blocks, larger block size, more blocks in a set, and relatively longer access times, but are still much faster than main memory.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;While all of the cache blocks in a particular cache are the same size and have the same associativity, typically the &quot;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;higher&lt;/ins&gt;-level&quot; caches (called Level 1 cache) have a smaller number of blocks, smaller block size, and fewer blocks in a set, but have very short access times. &quot;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Lower&lt;/ins&gt;-level&quot; caches (i.e. Level 2 and &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;below&lt;/ins&gt;) have progressively larger numbers of blocks, larger block size, more blocks in a set, and relatively longer access times, but are still much faster than main memory.&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;lt;ref name=&quot;:0&quot; /&amp;gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Cache entry replacement policy is determined by a [[cache algorithm]] selected to be implemented by the processor designers. In some cases, multiple algorithms are provided for different kinds of work loads.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Cache entry replacement policy is determined by a [[cache algorithm]] selected to be implemented by the processor designers. In some cases, multiple algorithms are provided for different kinds of work loads.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l297&quot;&gt;Line 297:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 297:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;When considering a chip with [[Multi-core processor|multiple cores]], there is a question of whether the caches should be shared or local to each core. Implementing shared cache inevitably introduces more wiring and complexity. But then, having one cache per &amp;#039;&amp;#039;chip&amp;#039;&amp;#039;, rather than &amp;#039;&amp;#039;core&amp;#039;&amp;#039;, greatly reduces the amount of space needed, and thus one can include a larger cache.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;When considering a chip with [[Multi-core processor|multiple cores]], there is a question of whether the caches should be shared or local to each core. Implementing shared cache inevitably introduces more wiring and complexity. But then, having one cache per &amp;#039;&amp;#039;chip&amp;#039;&amp;#039;, rather than &amp;#039;&amp;#039;core&amp;#039;&amp;#039;, greatly reduces the amount of space needed, and thus one can include a larger cache.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Typically, sharing the L1 cache is undesirable because the resulting increase in latency would make each core run considerably slower than a single-core chip. However, for the &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;highest&lt;/del&gt;-level cache, the last one called before accessing memory, having a global cache is desirable for several reasons, such as allowing a single core to use the whole cache, reducing data redundancy by making it possible for different processes or threads to share cached data, and reducing the complexity of utilized cache coherency protocols.&amp;lt;ref&amp;gt;{{cite web |last1=Tian |first1=Tian |last2=Shih |first2=Chiu-Pi |date=2012-03-08 |title=Software Techniques for Shared-Cache Multi-Core Systems |url=https://software.intel.com/en-us/articles/software-techniques-for-shared-cache-multi-core-systems |access-date=2015-11-24 |publisher=[[Intel]]}}&amp;lt;/ref&amp;gt; For example, an eight-core chip with three levels may include an L1 cache for each core, one intermediate L2 cache for each pair of cores, and one L3 cache shared between all cores.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Typically, sharing the L1 cache is undesirable because the resulting increase in latency would make each core run considerably slower than a single-core chip. 
However, for the &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;lowest&lt;/ins&gt;-level cache, the last one called before accessing memory, having a global cache is desirable for several reasons, such as allowing a single core to use the whole cache, reducing data redundancy by making it possible for different processes or threads to share cached data, and reducing the complexity of utilized cache coherency protocols.&amp;lt;ref&amp;gt;{{cite web |last1=Tian |first1=Tian |last2=Shih |first2=Chiu-Pi |date=2012-03-08 |title=Software Techniques for Shared-Cache Multi-Core Systems |url=https://software.intel.com/en-us/articles/software-techniques-for-shared-cache-multi-core-systems |access-date=2015-11-24 |publisher=[[Intel]]}}&amp;lt;/ref&amp;gt; For example, an eight-core chip with three levels may include an L1 cache for each core, one intermediate L2 cache for each pair of cores, and one L3 cache shared between all cores.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;A shared &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;highest&lt;/del&gt;-level cache, which is called before accessing memory, is usually referred to as a &#039;&#039;last level cache&#039;&#039; (LLC). Additional techniques are used for increasing the level of parallelism when the LLC is shared between multiple cores, including slicing it into multiple pieces, each of which addresses a certain range of memory addresses and can be accessed independently.&amp;lt;ref&amp;gt;{{cite web |author=Lempel |first=Oded |date=2013-07-28 |title=2nd Generation Intel Core Processor Family: Intel Core i7, i5 and i3 |url=http://www.hotchips.org/wp-content/uploads/hc_archives/hc23/HC23.19.9-Desktop-CPUs/HC23.19.911-Sandy-Bridge-Lempel-Intel-Rev%207.pdf |url-status=dead |archive-url=https://web.archive.org/web/20200729000210/http://www.hotchips.org/wp-content/uploads/hc_archives/hc23/HC23.19.9-Desktop-CPUs/HC23.19.911-Sandy-Bridge-Lempel-Intel-Rev%207.pdf |archive-date=2020-07-29 |access-date=2014-01-21 |website=hotchips.org |pages=7&amp;amp;ndash;10, 31&amp;amp;ndash;45}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;A shared &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;lowest&lt;/ins&gt;-level cache, which is called before accessing memory, is usually referred to as a &#039;&#039;last level cache&#039;&#039; (LLC). 
Additional techniques are used for increasing the level of parallelism when the LLC is shared between multiple cores, including slicing it into multiple pieces, each of which addresses a certain range of memory addresses and can be accessed independently.&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;lt;ref name=&quot;:0&quot; /&amp;gt;&lt;/ins&gt;&amp;lt;ref&amp;gt;{{cite web |author=Lempel |first=Oded |date=2013-07-28 |title=2nd Generation Intel Core Processor Family: Intel Core i7, i5 and i3 |url=http://www.hotchips.org/wp-content/uploads/hc_archives/hc23/HC23.19.9-Desktop-CPUs/HC23.19.911-Sandy-Bridge-Lempel-Intel-Rev%207.pdf |url-status=dead |archive-url=https://web.archive.org/web/20200729000210/http://www.hotchips.org/wp-content/uploads/hc_archives/hc23/HC23.19.9-Desktop-CPUs/HC23.19.911-Sandy-Bridge-Lempel-Intel-Rev%207.pdf |archive-date=2020-07-29 |access-date=2014-01-21 |website=hotchips.org |pages=7&amp;amp;ndash;10, 31&amp;amp;ndash;45}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====Separate versus unified====&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====Separate versus unified====&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>imported&gt;GreenDevolution</name></author>
	</entry>
	<entry>
		<id>http://debianws.lexgopc.com/wiki143/index.php?title=CPU_cache&amp;diff=510062&amp;oldid=prev</id>
		<title>imported&gt;Citation bot: Added work. | Use this bot. Report bugs. | Suggested by CorrectionsJackal | Category:Computer memory | #UCB_Category 49/194</title>
		<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=CPU_cache&amp;diff=510062&amp;oldid=prev"/>
		<updated>2025-05-27T06:26:32Z</updated>

		<summary type="html">&lt;p&gt;Added work. | &lt;a href=&quot;/wiki143/index.php?title=En:WP:UCB&amp;amp;action=edit&amp;amp;redlink=1&quot; class=&quot;new&quot; title=&quot;En:WP:UCB (page does not exist)&quot;&gt;Use this bot&lt;/a&gt;. &lt;a href=&quot;/wiki143/index.php?title=En:WP:DBUG&amp;amp;action=edit&amp;amp;redlink=1&quot; class=&quot;new&quot; title=&quot;En:WP:DBUG (page does not exist)&quot;&gt;Report bugs&lt;/a&gt;. | Suggested by CorrectionsJackal | &lt;a href=&quot;/wiki143/index.php?title=Category:Computer_memory&quot; title=&quot;Category:Computer memory&quot;&gt;Category:Computer memory&lt;/a&gt; | #UCB_Category 49/194&lt;/p&gt;
&lt;a href=&quot;http://debianws.lexgopc.com/wiki143/index.php?title=CPU_cache&amp;amp;diff=510062&quot;&gt;Show changes&lt;/a&gt;</summary>
		<author><name>imported&gt;Citation bot</name></author>
	</entry>
</feed>