Non-Uniform Memory Access (NUMA): all processors can access every part of main memory with ordinary loads and stores, but the access time varies with the region of memory being accessed. In essence, memory access time depends on the memory location relative to the processor.
With NUMA, the processor is able to access its own local memory faster than the memory shared between processors.
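To make the local-versus-remote distinction concrete, here is a toy latency model in Python. The node IDs and nanosecond figures are hypothetical assumptions chosen for illustration, not measurements from real hardware:

```python
# Illustrative NUMA latency model. The latency values below are made up
# for demonstration; real local/remote ratios depend on the hardware.
LOCAL_LATENCY_NS = 100   # access to the processor's own memory node
REMOTE_LATENCY_NS = 300  # access to another processor's memory node

def access_latency_ns(cpu_node: int, memory_node: int) -> int:
    """Modeled cost of a load/store issued from cpu_node to memory_node."""
    if cpu_node == memory_node:
        return LOCAL_LATENCY_NS   # local access: fast path
    return REMOTE_LATENCY_NS      # remote access: crosses the interconnect

print(access_latency_ns(0, 0))  # local access
print(access_latency_ns(0, 1))  # remote access
```

The key property the model captures is simply that the same load instruction costs more when the data lives on a different node.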
Let's contextualize this.
Modern CPUs operate at a much faster speed than the main memory they use, so the CPU often has to stall while waiting for requested data to arrive from memory. One way modern computer systems attempted to solve this was by limiting the number of memory accesses: commodity processors were fitted with ever-larger amounts of high-speed cache memory, and specialised algorithms were used to minimise cache misses. However, the dramatic growth in the size of operating systems and of the applications run on them has generally outpaced these cache-based improvements.
This is where NUMA comes into play.
NUMA attempts to address the problem by providing a separate memory for each processor. This avoids the contention that arises when multiple processors try to access the same memory. NUMA can improve performance over a single shared memory by a factor of roughly the number of processors (or memory banks). In some cases, though, multiple processors need the same data; for those cases, NUMA systems include specialised hardware and software to move data between memory banks.
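The "factor of roughly the number of processors" claim can be sketched with a back-of-the-envelope model. This is a simplified assumption, not a hardware simulation: it treats a single shared memory as fully serialising accesses and per-processor banks as fully overlapping them, ignoring caches, interconnect traffic, and shared data:

```python
# Toy throughput model: single shared memory vs. one memory bank per
# processor. All numbers and functions are illustrative assumptions.

def shared_memory_time(n_procs: int, accesses_per_proc: int, t_access: float) -> float:
    """With one shared memory, all accesses contend and serialise."""
    return n_procs * accesses_per_proc * t_access

def numa_time(n_procs: int, accesses_per_proc: int, t_access: float) -> float:
    """With a local bank per processor, accesses proceed in parallel."""
    return accesses_per_proc * t_access

n_procs, accesses, t_access = 4, 1000, 1.0
speedup = shared_memory_time(n_procs, accesses, t_access) / numa_time(n_procs, accesses, t_access)
print(speedup)  # equals n_procs in this idealised model
```

In this idealised model the speedup is exactly the processor count; on real machines, remote accesses and shared data pull the figure below that bound, which is why the text says "approximately".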