Last night I described how scheduling works, and the min-heaps Linux uses to implement most of its schedulers. But once we have multiple programs running (seemingly) simultaneously, we need a way for them to synchronize and communicate.
Producer-consumer queues (atomic ringbuffers) are arguably the easiest technique, though mutex locks are often used too; both need low-level primitives to build upon.
Linux *really* needs this due to multiple cores and hardware interrupts!
The primitives CPUs provide to deal with this are called "atomics": uninterruptible memory operations. Like compare-and-swap (cmpswap), which is roughly equivalent to:
if (*pointer == expected) *pointer = value;
I believe I read that CPUs will take a lock on that memory address's cache line to prevent data races. And this is enough to, say, atomically increment & decrement a number to form a mutex.
For efficiency the scheduler provides another primitive: "conditions".
An "atomic condition" allows a thread a thread to block upon some condition on an atomic pointer, until another thread indicates it might have just become true.
In implementing this the scheduler needs to be careful about data races as it puts threads to sleep: it needs to check the condition both before *and* after removing the thread from the run queue, before it yields to another thread.
There have been efforts to move the initial check into userspace for more efficiency; this is what Linux's "futex" syscall enables.
Producer-consumer queues are what underlie Unix-style streams in Linux, allowing data to be efficiently enqueued from one thread & dequeued in the same order by another thread.
It's a segment of memory with read & write heads that wrap around to the start. If the read head catches up to the write head, the queue is empty. And if the write head catches up to the read head, it's full.
Make those heads atomic & add some conditions, and this ringbuffer becomes a producer-consumer queue.