Monday, June 3, 2013

Memory Hierarchy - Cache Fundamentals

Locality principle

The principle of locality states that, at any given time, a program accesses only a relatively small portion of its address space. There are two types of locality:
Temporal locality
If a data item has been accessed, it is likely to be accessed again in the near future. For example, most programs contain well-structured loops, so the instructions and data in a loop body are accessed repeatedly, exhibiting very high temporal locality.
Spatial locality
If a data item is accessed, data items at adjacent addresses are likely to be accessed soon. Because programs usually execute sequentially, instruction accesses exhibit high spatial locality; accesses to sequential data, such as the elements of an array, show a natural spatial locality as well.
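The effect of locality on cache behavior can be made concrete with a toy direct-mapped cache model. All parameters below (16 lines, 4 words per block, the two access patterns) are hypothetical choices for illustration:

```python
# Minimal direct-mapped cache model: each line stores one tag, and a
# block of 4 consecutive words is fetched on every miss.
def hit_rate(addresses, num_lines=16, block_words=4):
    lines = [None] * num_lines           # tag per line, None = empty
    hits = 0
    for addr in addresses:
        block = addr // block_words      # block number containing the word
        index = block % num_lines        # direct-mapped: one possible line
        tag = block // num_lines
        if lines[index] == tag:
            hits += 1                    # spatial/temporal locality pays off
        else:
            lines[index] = tag           # miss: fetch the whole block
    return hits / len(addresses)

sequential = list(range(256))                  # walk an array in order
strided = [i * 64 % 256 for i in range(256)]   # jump 64 words per access

print(hit_rate(sequential))  # 0.75: 3 of every 4 accesses hit the block
print(hit_rate(strided))     # 0.0: every access conflicts in the same line
```

Sequential access hits on three of every four words because each miss brings in a 4-word block, while the strided pattern maps every access to the same line with a different tag and misses every time.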

Find a block in the cache

All three placement schemes used in caches, fully associative, set associative, and direct mapped, require comparing the tag of each candidate block against the address supplied by the processor to determine whether there is a match.
In a fully associative cache, the tag of every block must be compared. In a set-associative cache, the index portion of the address first selects the set, and then the tag of every block within that set must be checked. Direct mapping is the simplest: only one comparator is needed, because each address maps to exactly one cache block.
A 4-way set-associative cache needs four comparators plus a 4-to-1 multiplexer to choose among the four members of the selected set. These circuits were built from standard SRAM and comparators until the emergence of CAM (Content Addressable Memory), a component that combines a storage cell and a comparator in a single circuit. CAM makes it practical for designers to offer higher associativity. As of 2008, the greater area and power consumption of CAM mean that two-way and four-way set-associative caches are generally built from standard SRAM and comparators, while eight-way and higher-associativity caches are built with CAM.
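A lookup starts by splitting the byte address into tag, index, and offset fields. The sketch below assumes a hypothetical geometry (64-byte blocks, 128 sets); the field widths follow directly from those two numbers:

```python
# Split a byte address into tag / index / offset for a cache with
# 64-byte blocks and 128 sets (hypothetical geometry: 6 offset bits,
# 7 index bits, the rest is the tag).
BLOCK_BYTES = 64
NUM_SETS = 128

def split_address(addr):
    offset = addr % BLOCK_BYTES                  # byte within the block
    index = (addr // BLOCK_BYTES) % NUM_SETS     # which set to look in
    tag = addr // (BLOCK_BYTES * NUM_SETS)       # what to compare
    return tag, index, offset

print(split_address(0x12345))  # (9, 13, 5)

# A direct-mapped lookup needs one comparison: cache[index].tag == tag.
# A 4-way set-associative lookup compares the tag against all 4 ways of
# set `index` in parallel, then a 4-to-1 mux selects the matching way.
```

The same decomposition serves all three schemes; only the number of parallel tag comparisons changes.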

Replacement block selection

When a miss occurs in a direct-mapped cache, the requested block can go in only one position, so the block currently occupying that position must be replaced. In an associative cache, we must choose where to place the requested block, and therefore which block to replace. In a fully associative cache, every block is a candidate for replacement. In a set-associative cache, we choose among the blocks of the selected set.
Least recently used algorithm (LRU).
In the LRU algorithm, the block replaced is the one that has gone unused for the longest time.
LRU is implemented by tracking the relative usage time of each block. For a two-way set-associative cache, tracking usage within a set is easy: keep one bit per set, and update it on every reference so that it indicates which block was accessed most recently. As associativity increases, implementing exact LRU becomes harder.
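The single-bit scheme for a two-way set can be sketched as follows (the class and method names are hypothetical): the bit records which way was used most recently, so the other way is always the LRU victim.

```python
# One-bit LRU for a 2-way set-associative set: `mru` names the way
# referenced last, so way (1 - mru) is the least recently used victim.
class TwoWaySet:
    def __init__(self):
        self.tags = [None, None]   # the two ways of the set
        self.mru = 0               # which way was Most Recently Used

    def access(self, tag):
        if tag in self.tags:               # hit: mark that way as MRU
            self.mru = self.tags.index(tag)
            return True
        victim = 1 - self.mru              # miss: evict the LRU way
        self.tags[victim] = tag
        self.mru = victim
        return False

s = TwoWaySet()
s.access('A'); s.access('B')   # set holds A and B; B is MRU
s.access('A')                  # hit: A becomes MRU again
print(s.access('C'))           # False: miss evicts B, the LRU way
print('B' in s.tags)           # False
```

With 4 or more ways, a single bit no longer suffices, which is why exact LRU grows costly and approximations take over.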
For higher associativity, either an approximation of LRU or a random replacement policy is used. Because cache replacement is implemented in hardware, the algorithm must be easy to implement. As caches grow larger, the miss rates of all replacement policies fall, and the absolute differences among them shrink. In fact, a random policy implemented in hardware sometimes performs better than a hardware approximation of LRU.
Random replacement
Candidate blocks are selected at random, which some hardware can assist in implementing. For example, MIPS uses random replacement on a TLB miss.
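In software terms, random replacement amounts to nothing more than drawing a way index; real hardware often derives the "randomness" from a simple free-running counter (the helper below is a hypothetical sketch, not the MIPS mechanism itself):

```python
import random

def pick_victim(num_ways):
    # Any way of the set may be evicted; a pseudo-random draw stands in
    # for the hardware's free-running counter.
    return random.randrange(num_ways)

victim = pick_victim(4)
print(0 <= victim < 4)  # True: the victim is always a valid way index
```

The appeal is that this needs no per-block bookkeeping at all, unlike LRU.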

Multi-level cache (multilevel cache)

All modern computers use caches, and most processors add a second-level cache, usually on the same chip, which is accessed whenever the primary cache misses. If the second-level cache contains the requested data, the miss penalty of the primary cache is just the access time of the second-level cache, which is much less than the access time of main memory. If neither the primary nor the second-level cache contains the data, main memory must be accessed, incurring a larger miss penalty.
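A worked example shows how much a second-level cache reduces the average memory access time (AMAT). All cycle counts and miss rates below are assumed for illustration, not taken from any particular processor:

```python
# Effect of a second-level cache on average memory access time (AMAT).
# All figures below are illustrative assumptions.
L1_HIT = 1            # cycles to hit in the primary cache
L2_HIT = 10           # cycles to reach the second-level cache
MEM = 100             # cycles to reach main memory
L1_MISS_RATE = 0.05   # 5% of accesses miss in L1
L2_MISS_RATE = 0.20   # 20% of L1 misses also miss in L2

# Without L2: every L1 miss pays the full memory latency.
amat_one_level = L1_HIT + L1_MISS_RATE * MEM

# With L2: an L1 miss pays the L2 access time, and only the L2 misses
# go on to main memory.
amat_two_level = L1_HIT + L1_MISS_RATE * (L2_HIT + L2_MISS_RATE * MEM)

print(amat_one_level)  # 6.0 cycles per access
print(amat_two_level)  # 2.5 cycles per access
```

Under these assumptions the second level cuts the average access time from 6.0 to 2.5 cycles, because most L1 misses are now serviced in 10 cycles instead of 100.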
