Monday, July 22, 2024

Operating Systems: Week 5 - Concurrency

This week there was a lot of spinning (get it? a threading joke). We covered how an operating system can let a programmer safely and efficiently perform multiple tasks or handle multiple requests at once.

The first ingredient is a way for a program to run multiple segments of code simultaneously, and that's what threading provides. Previously our programs all executed in a linear fashion. We've been able to run multiple programs at once thanks to virtualization of the CPU, but within a single program everything remained strictly sequential. Threads let us change that. One can think of spawning a thread as something like spawning a new process, except that the new thread shares the same address space and so can access the same data.

And that's for better OR for worse. Given that multiple threads all have access to the same data, it's easy to imagine things going wrong: Thread A changes a variable, Thread B isn't aware of the change and still assumes the previous value, and things quickly spiral out of sync, causing crashes or other indeterminate outcomes.

To help control and manage multiple threads and give us safe concurrency, we learned about locks. Locks give programmers control over how threads access certain portions of code: shared data can be protected by a lock so that only one thread can be in the critical section at a time. Locks synchronize threads and prevent the race conditions that the operating system's scheduler might otherwise expose by interleaving them at the wrong moment. We covered several kinds of locks (spin locks, ticket locks, futex-based locks) and how to build locks out of hardware primitives. We also covered how data structures shared across threads must themselves be implemented in a thread-safe manner, using locks internally.
