Go Runtime Scheduler Design Internals

Concurrency is one of the most exciting features of the Go language. A single-threaded program runs serially, but if you have tasks that can run concurrently, you create threads for them. Threads execute independently and make progress concurrently. Go supports the creation of thousands of such concurrent tasks in a single application. How is this possible? The answer is Go's runtime: Go programs are compiled, and the executable is self-contained (the runtime is linked into the binary).

Let's understand the design motivation of the Go runtime. The runtime must work within the system's resource constraints. The system runs multiple threads, but a CPU core can run only one thread at a time; if there are more threads than available cores, threads are paused and resumed (context switched). During a context switch, a thread's execution state is preserved and another thread is loaded. Creating a thread also consumes resources, so there is a limit on how many threads a system can support.

Under these constraints, the Go runtime maximises CPU utilisation while minimising latency and memory footprint.

Go provides concurrency with two language primitives: Goroutines and channels. Using Goroutines, an application can grow dynamically (by forking new Goroutines). Channels are internal to the Go runtime; the operating system has no knowledge of them.
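
Here is a minimal sketch of these two primitives at work (the squaring worker is just an illustrative stand-in for real work):

package main

import "fmt"

func main() {
	results := make(chan int) // a channel, managed entirely by the Go runtime

	// Fork new Goroutines dynamically; the OS never sees them individually.
	for i := 0; i < 4; i++ {
		go func(n int) {
			results <- n * n // send the result back over the channel
		}(i)
	}

	for i := 0; i < 4; i++ {
		fmt.Println(<-results) // receive, in whatever order Goroutines finish
	}
}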

Let's understand Goroutines in detail. A Goroutine is essentially a lightweight thread that exists in user space. Goroutines are frugal with resources: unlike a system thread, a Goroutine starts with a small stack that grows as needed. This is one of the reasons Go can support the creation of thousands of Goroutines.
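
To illustrate that frugality, the sketch below parks 100,000 Goroutines and prints the runtime's memory usage; the exact numbers vary by Go version and platform, but a comparable number of OS threads would be unthinkable.

package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	block := make(chan struct{})

	for i := 0; i < 100000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			<-block // park; each parked Goroutine costs little more than its small stack
		}()
	}

	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("goroutines: %d, total runtime memory: ~%d MB\n",
		runtime.NumGoroutine(), m.Sys/(1<<20))

	close(block) // release all parked Goroutines
	wg.Wait()
}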

So how does the runtime manage thousands of Goroutines? Instead of delegating the responsibility to the system scheduler, Go uses its own scheduler, written in Go itself.

How does the Go scheduler work? Let's first review the threading models available to applications. An application can use system threads, which are managed by the OS; these threads can make system calls and access system resources (e.g. the CPU). However, system threads are expensive: each one consumes kernel resources such as a signal mask, a PID and cgroup accounting, and context switches are costly because they trap into the kernel. In contrast, user threads are created and managed by the application itself; they consume fewer resources, and context switching is fast because it does not go through the kernel. A user thread still needs a system thread to execute code on a CPU or to access any other system resource.

The next design decision is the ratio of user threads to system threads. The first model runs N user threads on one system thread (N:1). It gives fast context switching, but it cannot use multiple cores, and if a user thread blocks its only system thread, all the other user threads must wait. The second model maps user threads one-to-one onto system threads (1:1). It provides good CPU utilisation, but context switching is slow. The third option is a many-to-many (M:N) mapping.

Go takes the third option: Goroutines are distributed over a set of system (OS) threads. There are three major entities in the Go scheduler: machines (M), Goroutines (G) and processors (P). There are also minor entities such as the global and local run queues and the thread cache.

Let's understand "MGP". A machine (M) is a representation of an OS thread, drawn from a pool of worker threads; on Linux it is a standard POSIX thread. An M runs a set of Gs. A G represents a user-space Goroutine; it has its own instruction pointer, stack and blocking info. A P is a logical entity representing an available processor, i.e. a scheduling context. Every worker (M) needs a P to run a G. The number of Ps is decided at startup by GOMAXPROCS and normally stays fixed during the run.

Consider a setup with two processors (GOMAXPROCS=2) and two worker machines (M). Each P maintains a local run queue. As new Goroutines are created and become runnable, they are added to a local run queue; if the local run queue is full, new Gs are added to the global run queue. Idle Gs are kept in an idle list.
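
The number of Ps is easy to inspect from the runtime package; a small sketch:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// GOMAXPROCS(0) queries the current value without changing it.
	fmt.Println("Ps (GOMAXPROCS):", runtime.GOMAXPROCS(0))
	fmt.Println("CPU cores:      ", runtime.NumCPU())
}

GOMAXPROCS can also be set from the environment, e.g. GOMAXPROCS=2 ./prog.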

What happens when a G makes a blocking system call? The scheduler knows that the G is blocked, and hence its M is blocked too. The P is no longer being utilised and can be used by some other M.

The scheduler takes back the P, picks (or creates) another worker M', and assigns the P to it; the runnable queue moves with the P to the new worker. When the original M returns from the syscall, it tries to find an idle P. If none is available, it moves its G to the global queue and parks itself in the thread cache. The scheduler makes sure there are enough threads to run all scheduling contexts; there can be more Ms than Ps, even with GOMAXPROCS=1, because a worker might get stuck in a syscall.

A P can become idle when its run queue is exhausted. In that case it tries to pick Gs from the global queue. But what if the global queue is also empty? There are two major scheduling paradigms for distributing work. The first is work sharing, in which a busy processor proactively distributes Gs to other Ps. In contrast, in a work-stealing scheduler an idle processor steals Gs from other Ps' run queues. As long as every P is busy, no G moves; only an idle P steals, taking about half of the Gs from another P. Work stealing gives better resource utilisation and lower migration of Gs.

So, when a P becomes idle and both its local run queue and the global run queue are empty, it randomly picks another P and steals half of its Gs.
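
One way to watch these run queues in action is the runtime's schedtrace facility. The sketch below is illustrative: the busy loops merely keep the Ps occupied so the trace has something to show.

package main

import "time"

func main() {
	// Run with: GODEBUG=schedtrace=1000 ./prog
	// Every second the runtime prints a line roughly like:
	//   SCHED 1009ms: gomaxprocs=2 idleprocs=0 threads=5 ... runqueue=3 [2 7]
	// where runqueue is the global queue length and the bracketed numbers
	// are the local run queue lengths of each P.
	for i := 0; i < 8; i++ {
		go func() {
			for { // busy loop to keep Ps and run queues occupied
			}
		}()
	}
	time.Sleep(3 * time.Second)
}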

Go has a sophisticated scheduler: it offers massive concurrency and intelligent scheduling, and it always tries to achieve maximum utilisation and minimum latency.

NAT Protocol Simplified Explanation

  • The purpose of the NAT protocol is to reduce the usage of public IPs.
  • A host needs a public IP to connect to the Internet.
  • If the host is part of a LAN behind a gateway router, it can use a private IP to make requests to the public Internet.
  • The public Internet then sees all the requests as originating from a single point: the LAN's gateway router.
  • The router has a local LAN IP and a public IP.
  • The request flows as follows:
    • A local host in the LAN makes a request to a web server on the Internet.
    • The request goes from the local host's IP and port to the local gateway.
    • The gateway maintains a NAT table.
      • An entry in the NAT table holds the source-to-gateway mapping:
        -------------------------------------------------------------------
        | Local Host IP | Local Host Port | Gateway Public IP | Gateway Port |
        -------------------------------------------------------------------
      • The gateway allocates a port that maps requests to and from the local host onto the public Internet web server.
      • Hence the public server only ever sees the gateway's IP and port, and the LAN host stays anonymous.
    • NAT is essentially a multiplexing of local hosts' requests over the gateway's single public IP, with a distinct gateway port assigned to each local host (see the sketch below).
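
To make the table concrete, here is a hedged Go sketch of such a NAT table; the type names, addresses and port-allocation scheme are illustrative, not taken from any real router.

package main

import "fmt"

// natKey identifies a local host's endpoint inside the LAN.
type natKey struct {
	localIP   string
	localPort int
}

// natTable maps each (local IP, local port) pair to the gateway
// port that represents it on the public Internet.
type natTable struct {
	gatewayIP string
	nextPort  int
	entries   map[natKey]int // local endpoint -> gateway port
}

// translate returns the public (gateway IP, port) used for a local
// endpoint, allocating a fresh gateway port on first use.
func (t *natTable) translate(ip string, port int) (string, int) {
	k := natKey{ip, port}
	if _, ok := t.entries[k]; !ok {
		t.entries[k] = t.nextPort
		t.nextPort++
	}
	return t.gatewayIP, t.entries[k]
}

func main() {
	t := &natTable{
		gatewayIP: "203.0.113.7", // example public IP
		nextPort:  40000,
		entries:   map[natKey]int{},
	}

	// Two LAN hosts multiplexed over the gateway's single public IP.
	fmt.Println(t.translate("192.168.1.10", 52311))
	fmt.Println(t.translate("192.168.1.11", 52311))
}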

Design Problems of Postgres - Part I

This post is a quick summary of Uber's article on why it moved from Postgres to MySQL.

Postgres Rows and CTID

  • Postgres provides transactions, and transactions need multiple versions of data, so PG is a multi-versioned (MVCC) database.
  • PG treats each row as immutable: any change to a row creates a new row version.
  • A row version is addressed by its physical location on disk, called its ctid.
  • Every row version has a unique ctid because each occupies its own space. However, multiple row versions (distinct ctids) can share the same disk block, e.g. several versions of one row; the sketch after the diagram below demonstrates the changing ctid.
------------------
| Row 0 (ctid 1) | --+
------------------   |
                     +----> disk block x
------------------   |
| Row 1 (ctid 2) | --+
------------------
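
A hedged sketch of how one might observe this behaviour from Go. It assumes a reachable local Postgres with a database named "test" and the github.com/lib/pq driver; the table and column names are made up for illustration.

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // any Postgres driver would do; lib/pq is one choice
)

func main() {
	// Adjust the DSN for your setup.
	db, err := sql.Open("postgres", "host=localhost dbname=test sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	db.Exec(`CREATE TABLE IF NOT EXISTS t (id int PRIMARY KEY, v text)`)
	db.Exec(`INSERT INTO t VALUES (1, 'a') ON CONFLICT (id) DO NOTHING`)

	var before, after string
	db.QueryRow(`SELECT ctid FROM t WHERE id = 1`).Scan(&before)
	db.Exec(`UPDATE t SET v = 'b' WHERE id = 1`) // creates a new row version
	db.QueryRow(`SELECT ctid FROM t WHERE id = 1`).Scan(&after)

	// The update gave the logical row a new physical location (new ctid).
	fmt.Printf("ctid before: %s, after: %s\n", before, after)
}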

Postgres Index Management

  • Each index maps a key to a ctid as its value.
  • Any change to a row's data creates a new ctid, which in turn requires updating every index on the table.
  • This is expensive because:
    • PG uses write-ahead logging (WAL), so each write goes to disk at least twice.
    • PG replicates the WAL to secondary nodes.
    • The WAL carries ctid- and disk-offset-level information.
    • Replication across geographies is expensive because the data volume is high.

Postgres Replication

  • On a secondary, while a transaction is in progress on a row, applying the WAL copy for that row must wait.
  • If the transaction runs for a long time, PG terminates it after a timeout.
  • So there are two problems:
    • Transactions can terminate unexpectedly.
    • Replicas may lag the master by more than expected.

Careful with both hands while using the fork!

fork() is one of the most useful features of C/Linux/UNIX. But it's a double-edged sword, so be careful with fork 🙂
Recently I got stuck on a weird problem with a client application (A) that interacts with another application (B). A was hanging whenever it was used with B; on its own, A ran just fine.
Now, what to do? We did a thorough examination of both applications and found that A was waiting on a pipe P: B held the write end, and A held the read end. But why this wait? There was no need to keep the pipe open in the first place.
This is where fork() comes into the picture. A forks B, and then B interacts with A. When A fork()s B, B gets a copy of all of A's open file descriptors (FDs) as well. There you go!
After inheriting these FDs, B never bothered to close them. Meanwhile A was blocked reading from the pipe, and a read on a pipe returns end-of-file only when every copy of the write end has been closed. Since B still held a copy of the write end, the kernel kept A waiting 😦 And this wait never ends…
That was it. A simple close() call in B on all the inherited FDs worked for us, and B happily parted ways with A.

A word of advice: always call exit() from a child. exit() does basic cleanup and then calls _exit(), which does more work, including closing all files open in the child.

Just to verify, you can use this test program:

#include "fcntl.h"
#include "stdlib.h"

int main()
{
int fd = -1;
int status;
char buf[512];

fd = open("abc.txt", O_CREAT);

int pid = fork();

if(pid == 0) { // Child
puts("Child says bye");
exit(status);
} else { // Parent
sleep(1);
int ch = read(fd, buf, 16);
printf("\nRead returns %d\n", ch);
exit(status);
}
}