Memory Hierarchy
1. Registers
- Speed: Fastest in the hierarchy
- Size: Smallest (32–128 bits per register)
- Cost: Most expensive per bit
- Access Time: Under a nanosecond (typically within a single CPU clock cycle)
- Location: Directly within the CPU
- Functionality:
  - Registers hold the data that the CPU is actively working on.
  - They store operands for arithmetic operations, memory addresses, and instruction results.
  - Examples: Accumulator, Program Counter, Instruction Register, Stack Pointer.
- Trade-offs:
  - Registers are extremely fast but limited in number and size due to space constraints in the CPU (a short C sketch follows this list).
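To make the role of registers concrete, here is a minimal C sketch. The loop counter and running total are exactly the kind of short-lived values a compiler keeps in registers; the `register` keyword is only a hint and modern compilers allocate registers automatically, so treat this as illustrative rather than a guarantee.

```c
#include <stdio.h>

/* Sums an array. The loop counter and the running total are the kind of
 * values a compiler keeps in CPU registers instead of memory. The `register`
 * keyword is only a hint; modern compilers do register allocation themselves. */
long sum(const int *data, size_t n) {
    register long total = 0;              /* likely held in a register */
    for (register size_t i = 0; i < n; i++) {
        total += data[i];                 /* operand loaded, added, result kept in a register */
    }
    return total;
}

int main(void) {
    int data[] = {1, 2, 3, 4, 5};
    printf("%ld\n", sum(data, 5));        /* prints 15 */
    return 0;
}
```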
2. Cache Memory
- Speed: Slower than registers, faster than RAM
- Size: Small (typically kilobytes to megabytes)
- Cost: More expensive than RAM, cheaper than registers
- Access Time: A few nanoseconds (ns) to tens of nanoseconds
- Purpose:
  - Cache is a small amount of high-speed memory placed between the CPU and main memory to store frequently used data.
  - It works by exploiting locality of reference: programs tend to access the same memory locations repeatedly (temporal locality) or memory locations near recently accessed data (spatial locality).
- Cache Levels:
  - L1 Cache:
    - Closest to the CPU core, typically split into separate instruction and data caches.
    - Smallest (16–64 KB), but fastest.
  - L2 Cache:
    - Larger than L1 (256 KB–2 MB); private to each core in most modern designs, though some older multi-core processors shared it between cores.
    - Slower than L1 but still much faster than RAM.
  - L3 Cache:
    - Largest (2 MB–50 MB), shared across all cores in multi-core processors.
    - Acts as a buffer between the CPU and RAM; significantly slower than L1 and L2.
- Trade-offs:
  - Cache provides rapid access to data but is expensive, so it is limited in size (a working-set timing sketch follows this list).
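The limited cache size is easy to observe by timing buffers of increasing size. The C sketch below assumes a 64-byte cache line, uses POSIX `clock_gettime` for timing, and picks buffer sizes from 4 KB to 64 MB; the exact numbers depend on the machine, and hardware prefetching smooths out sequential scans, so treat the output only as a rough illustration of the latency jump when the working set no longer fits in cache.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Touch every cache line (64-byte stride assumed) in a buffer of `size` bytes,
 * repeating enough passes to keep total work constant across sizes. While the
 * buffer fits in cache the time per access stays low; once it exceeds the
 * last-level cache, accesses fall through to RAM and the cost rises. */
static double time_per_access(char *buf, size_t size) {
    const size_t stride = 64;                 /* typical cache-line size (assumption) */
    const size_t total  = 1u << 26;           /* keep ~64M accesses for every size */
    size_t passes = total / (size / stride);

    struct timespec t0, t1;
    volatile char sink = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t p = 0; p < passes; p++)
        for (size_t i = 0; i < size; i += stride)
            sink += buf[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    (void)sink;

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / ((double)passes * (size / stride));
}

int main(void) {
    for (size_t size = 1u << 12; size <= 1u << 26; size <<= 2) {   /* 4 KB .. 64 MB */
        char *buf = calloc(size, 1);
        if (!buf) return 1;
        printf("%8zu KB: %.2f ns/access\n", size / 1024, time_per_access(buf, size));
        free(buf);
    }
    return 0;
}
```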
3. Main Memory (RAM)
- Speed: Slower than cache but faster than secondary storage
- Size: Larger than cache (GB range, e.g., 8 GB to 64 GB)
- Cost: Cheaper per bit than cache, more expensive than secondary storage
- Access Time: Tens to hundreds of nanoseconds (ns)
- Purpose:
  - RAM (Random Access Memory) stores data and instructions that are currently being used by the CPU.
  - It is volatile memory, meaning all data is lost when the system powers off.
- Types of RAM:
  - Dynamic RAM (DRAM): Slower, cheaper, and commonly used for system memory.
  - Static RAM (SRAM): Faster, more expensive, used for cache memory.
- Trade-offs:
  - RAM is fast and provides reasonably large storage, but its volatility makes it unsuitable for long-term data retention.
4. Secondary Storage
- Speed: Slower than RAM, significantly slower than cache
- Size: Very large (hundreds of gigabytes to multiple terabytes)
- Cost: Much cheaper per bit than RAM
- Access Time: Tens to hundreds of microseconds for SSDs, milliseconds for HDDs (a millisecond is a million nanoseconds)
- Purpose:
  - Secondary storage holds data and programs that are not currently in use by the CPU but need to be retained for long-term use.
  - Persistent storage: Data is retained even when the power is off.
- Types of Secondary Storage:
  - Hard Disk Drives (HDD):
    - Use magnetic storage to store data.
    - Slower but cheaper than SSDs.
  - Solid State Drives (SSD):
    - Use NAND flash memory to store data.
    - Much faster than HDDs (faster read/write times) but more expensive per gigabyte.
- Trade-offs:
  - Secondary storage provides a large capacity at a low cost, but access speeds are significantly slower than RAM (a small I/O sketch follows this list).
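The cost of going to secondary storage shows up most clearly when access patterns force seeks. The C sketch below compares sequential and random 4 KB reads; the file name `large_test_file.bin` is a hypothetical placeholder for any large existing file, and the operating system's page cache can hide device latency (a real benchmark would bypass it, e.g., with direct I/O), so this is only a rough illustration.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Reads `count` 4 KB blocks from a file, either sequentially or at random
 * offsets, and reports the elapsed time. On an HDD the random pattern is
 * dramatically slower because each seek costs milliseconds; on an SSD the
 * gap is smaller but usually still visible. */
static double read_blocks(FILE *f, long file_size, int count, int sequential) {
    char block[4096];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < count; i++) {
        long offset = sequential
            ? (long)i * 4096 % file_size
            : (rand() % (file_size / 4096)) * 4096L;
        fseek(f, offset, SEEK_SET);
        fread(block, 1, sizeof block, f);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    FILE *f = fopen("large_test_file.bin", "rb");   /* hypothetical large test file */
    if (!f) { perror("fopen"); return 1; }
    fseek(f, 0, SEEK_END);
    long size = ftell(f);

    printf("sequential: %.3f s\n", read_blocks(f, size, 1000, 1));
    printf("random:     %.3f s\n", read_blocks(f, size, 1000, 0));
    fclose(f);
    return 0;
}
```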
5. Tertiary Storage
- Speed: Slowest in the hierarchy
- Size: Extremely large (terabytes to petabytes)
- Cost: Cheapest per bit
- Access Time: Can range from seconds to minutes
- Purpose:
  - Tertiary storage is used primarily for backup and archival purposes.
  - It is often removed from the main system and stored externally (e.g., magnetic tapes, optical discs).
- Examples:
  - Magnetic Tape: Used for large-scale backups; very high capacity but slow access.
  - Optical Discs (CDs, DVDs, Blu-ray): Used for distribution of media and data archiving.
- Trade-offs:
  - Extremely low cost per bit and large capacity, but with long access times and slow transfer rates.
6. Virtual Memory
- Speed: Dependent on secondary storage (e.g., HDD or SSD)
- Size: Dependent on the size of secondary storage
- Purpose:
  - Virtual memory allows the system to extend the available physical memory (RAM) by using a portion of secondary storage (usually a hard disk or SSD) as if it were additional RAM.
  - It is implemented via paging, where portions of programs are swapped in and out of physical memory as needed.
  - Page faults occur when the CPU tries to access data not currently in RAM, triggering the operating system to bring the required page into physical memory.
- Trade-offs:
  - Virtual memory allows larger programs to run on systems with limited physical memory, but accessing data that has been paged out is much slower than accessing data already in RAM (see the sketch after this list).
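Paging can be observed directly on Linux and other POSIX systems: an anonymous `mmap` reserves virtual address space only, and the first touch of each page raises a page fault that the OS services. The sketch below uses `sysconf`, `mmap`, and `getrusage`, and the 64 MB region size is an arbitrary choice; the faults reported here are minor faults (no disk I/O), but the mechanism is the same one that services major faults when a page must be read back from secondary storage.

```c
#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <unistd.h>

/* POSIX/Linux sketch: reserve a large anonymous mapping, then touch each page.
 * Physical pages are assigned lazily, so every first touch triggers a page
 * fault; getrusage() reports how many faults the process incurred. */
int main(void) {
    long page = sysconf(_SC_PAGESIZE);            /* typically 4096 bytes */
    size_t length = 64 * 1024 * 1024;             /* 64 MB of virtual address space */

    char *region = mmap(NULL, length, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    struct rusage before, after;
    getrusage(RUSAGE_SELF, &before);
    for (size_t off = 0; off < length; off += (size_t)page)
        region[off] = 1;                          /* first touch -> page fault */
    getrusage(RUSAGE_SELF, &after);

    printf("page size: %ld bytes\n", page);
    printf("minor page faults while touching pages: %ld\n",
           after.ru_minflt - before.ru_minflt);

    munmap(region, length);
    return 0;
}
```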
7. Locality of Reference
The memory hierarchy is designed around the concept of locality of reference:
- Temporal Locality: Recently accessed data is likely to be accessed again soon.
- Spatial Locality: Data located close to recently accessed data is likely to be accessed soon.
Cache memory and virtual memory systems exploit these properties by keeping frequently accessed data closer to the CPU and avoiding constant fetches from slower, larger storage. The sketch below makes spatial locality visible.
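A classic way to see spatial locality is to traverse a large 2D array in row-major versus column-major order. The C sketch below uses a 4096 x 4096 matrix (an arbitrary size chosen to exceed typical caches); the row-major pass walks consecutive addresses and reuses each cache line fully, while the column-major pass jumps thousands of bytes between accesses, so it usually runs several times slower even though both loops do the same arithmetic.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 4096   /* 4096 x 4096 ints = 64 MB, larger than typical caches */

static double elapsed(struct timespec a, struct timespec b) {
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void) {
    int *m = malloc((size_t)N * N * sizeof *m);
    if (!m) return 1;
    for (size_t k = 0; k < (size_t)N * N; k++)
        m[k] = 1;                              /* initialize and warm the pages */

    struct timespec t0, t1, t2;
    long long sum = 0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++)                /* row-major: consecutive addresses */
        for (int j = 0; j < N; j++)
            sum += m[(size_t)i * N + j];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    for (int j = 0; j < N; j++)                /* column-major: large strides */
        for (int i = 0; i < N; i++)
            sum += m[(size_t)i * N + j];
    clock_gettime(CLOCK_MONOTONIC, &t2);

    printf("row-major:    %.3f s\n", elapsed(t0, t1));
    printf("column-major: %.3f s\n", elapsed(t1, t2));
    printf("(checksum: %lld)\n", sum);         /* keeps the loops from being optimized away */
    free(m);
    return 0;
}
```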
Key Trade-offs in the Memory Hierarchy:
- Speed vs. Capacity: The faster the memory, the smaller and more expensive it is. Registers and caches are very fast but limited in size, while secondary storage like HDDs and SSDs is much larger but significantly slower.
- Cost vs. Performance: High-speed memory (e.g., cache and RAM) is more expensive per bit, but it is essential for performance. Slower memory (e.g., HDDs, SSDs, tape) is cheaper and offers larger capacities.
- Power Efficiency: Lower levels in the hierarchy (e.g., SSDs, HDDs) generally consume less power when idle than volatile memory (e.g., DRAM), which must be continually refreshed to retain its contents.