Tuesday, July 16, 2024

Operating Systems: Week 4 - Memory Virtualization

This week was a blur of mapping and translation. We spent a lot of time learning how processes use memory and how the operating system satisfies a process's memory needs efficiently and safely. We learned about using paging rather than segmentation to manage physical memory. Taking the next step, we saw how specialized hardware inside the CPU (specifically inside the MMU) called the translation-lookaside buffer (TLB) speeds up memory access by caching recent translations, so the page table doesn't have to be walked for the most frequent requests. We also learned how to manually map a virtual memory address to a physical memory address. That process tripped me up a number of times until I finally sat down, walked through the translation by hand, and wrote a small script to perform it; somehow the manual step of writing the script reinforced the knowledge and helped me absorb it.

We then started to look at what happens when we've run out of physical memory yet processes are still asking for more. Virtualized memory allows us to move pages out to slower storage (SSD, HDD, etc.), but at a performance cost when we need those pages again. Strategies, or algorithms, for efficiently deciding which pages to swap out, which to swap in, and when are another area we started to learn about: things like FIFO, LIFO, LRU, MFU, MRU. These choices can affect how the end user's experience is perceived, so it's important to understand the advantages and trade-offs of each.
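For the curious, the kind of translation script I mean looks roughly like this. It's only a sketch: it assumes 4 KiB pages (a 12-bit offset) and a simple linear page table, and the page-table entries here are made up for illustration, not taken from any real system.

```python
# Sketch of virtual-to-physical address translation, assuming
# 4 KiB pages (12-bit offset) and a made-up linear page table.

PAGE_SIZE_BITS = 12                      # 4 KiB pages -> 12-bit offset
OFFSET_MASK = (1 << PAGE_SIZE_BITS) - 1

# Hypothetical page table: virtual page number (VPN) -> physical frame number (PFN)
page_table = {0x0: 0x7, 0x1: 0x3, 0x2: 0xA}

def translate(vaddr: int) -> int:
    """Split the virtual address into VPN and offset, look up the
    frame number, then recombine into a physical address."""
    vpn = vaddr >> PAGE_SIZE_BITS        # high bits select the page
    offset = vaddr & OFFSET_MASK         # low bits stay the same
    pfn = page_table[vpn]                # KeyError here would be a "page fault"
    return (pfn << PAGE_SIZE_BITS) | offset

print(hex(translate(0x1ABC)))  # VPN 0x1 maps to PFN 0x3, so 0x3ABC
```

Walking through one address by hand first (split, look up, recombine) and then checking the script against it is what finally made the mechanism click for me.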

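To get a feel for how the replacement policies differ, here is a small simulation sketch comparing FIFO and LRU on the same page reference string. The frame count and the reference string are made up for illustration (the string is the classic one used to demonstrate Belady's anomaly), not from any assignment.

```python
# Sketch: count page faults for FIFO vs. LRU replacement on the
# same (made-up) reference string with a fixed number of frames.
from collections import OrderedDict, deque

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.popleft())  # evict the oldest arrival
            frames.add(page)
            queue.append(page)
    return faults

def lru_faults(refs, nframes):
    frames, faults = OrderedDict(), 0            # ordered oldest -> newest use
    for page in refs:
        if page in frames:
            frames.move_to_end(page)             # hit: mark most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)       # evict least recently used
            frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print("FIFO faults:", fifo_faults(refs, 3))
print("LRU faults: ", lru_faults(refs, 3))
```

Running different reference strings and frame counts through something like this makes the trade-offs concrete: no single policy wins on every workload, which is exactly why the choice matters for how responsive the system feels.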