Internals of Linux Process Signals

Linux Signals

  • A process in Linux is represented by a task_struct.
  • A process A sends a signal to process B using the kill() system call, or the kill -<sig num> <pid> command.
  • The kernel updates the task_struct of the receiving process B, marking the passed signal as pending in its signal field.
  • The scheduler checks for pending signals on a process before running it. If a signal is pending, the corresponding signal handler is called.
  • A process can define its own handlers for all signals except SIGKILL and SIGSTOP (see the sketch after this list).
  • There is always a default signal handler for each signal.
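As a quick illustration of the last two points, here is a minimal user-space sketch (not kernel code) that registers a custom handler with sigaction(). SIGINT is used only as an example; the same call for SIGKILL is rejected by the kernel:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

// Custom handler: keep it to async-signal-safe calls only.
void onSignal(int signum)
{
    (void)signum;
    const char msg[] = "caught a signal\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);
}

int main()
{
    struct sigaction sa = {};
    sa.sa_handler = onSignal;
    sigemptyset(&sa.sa_mask);

    sigaction(SIGINT, &sa, NULL);        // allowed: SIGINT can be overridden

    // The default handlers of SIGKILL and SIGSTOP can never be replaced;
    // the kernel rejects the attempt with EINVAL.
    if (sigaction(SIGKILL, &sa, NULL) == -1)
        perror("sigaction(SIGKILL)");

    pause();                             // press Ctrl-C to trigger the handler
    return 0;
}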


Height of a Binary Tree: Recursion Unrolled & Explained



The height of a binary tree is the number of edges on the longest path from the root down to a leaf; by this convention, an empty tree has height -1.

Code
#include <algorithm>   // std::max
#include <cstddef>     // NULL

struct node {
    int data;
    node *left;
    node *right;
};

int getHeight(node *root)
{
    if (root == NULL) {
        return -1;             // height of an empty tree is -1
    }
    int leftHeight = getHeight(root->left);
    int rightHeight = getHeight(root->right);

    // a node is one level higher than its taller subtree
    return std::max(leftHeight, rightHeight) + 1;
}

Suppose the tree is as follows:

        [10]
       /    \
     [8]    [12]
            /   \
          [4]   [5]

The recursion flow is as follows:

1. getHeight(10)
2. getHeight(left of 10) ==> getHeight(8)
3. getHeight(left of 8) ==> getHeight(NULL)
4. getHeight(NULL) returns -1
5. leftHeight = -1 and the flow goes back to step 3, inside getHeight(8)
6. Now it calls the right of 8: getHeight(right of 8)
7. getHeight(NULL) returns -1 to rightHeight
8. Both subtrees of 8 are traversed, with leftHeight = -1 and rightHeight = -1
9. It compares both values and returns the max + 1
10. max(-1, -1) + 1 = 0
11. Node 8 is the left subtree of 10, so its height is returned to the call made in step 2
12. At node 10, leftHeight = 0
13. Now rightHeight is calculated by moving to the right of 10
14. getHeight(right of 10) ==> getHeight(12)
15. Similar to the above, the height at node 12 is calculated as 1
16. At the end, node 12 returns its height to the call made in step 14
17. Again we compute max(0, 1) + 1
18. The answer is 2 (see the driver sketch below)
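
For completeness, here is a small driver (assuming the node struct and getHeight shown above) that builds this exact tree and prints its height:

#include <cstddef>
#include <iostream>

int main()
{
    // Build the example tree: 10 -> (8, 12), 12 -> (4, 5)
    node n4 = {4, NULL, NULL}, n5 = {5, NULL, NULL};
    node n8 = {8, NULL, NULL};
    node n12 = {12, &n4, &n5};
    node n10 = {10, &n8, &n12};

    std::cout << getHeight(&n10) << std::endl;   // prints 2
    return 0;
}
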
Summary

Usually, recursion steps are not visualized in full and are just assumed to work. This post walks through a complete recursion flow to help understand it better.

Notes on Dockerfile and Build Cache

A Dockerfile is a set of instructions for building a new container image. It looks like a Bash script that runs the listed commands serially; the commands themselves are defined by the Dockerfile syntax.

Unlike a Bash script, a Dockerfile applies the effect of each command on top of the output of the previous step. By default, each step produces an intermediate image (built inside a temporary container), which is kept hidden. You can list such intermediate images by running the following command:

$ docker images -a 

All entries with <none> as the name are such intermediate images.

Why Docker Needs Intermediate Images

  • Each intermediate image acts as the cached output of a step in the Dockerfile.
  • The next build uses the cached output instead of running the step again.

How Does It Work?

  • Starting from the base image (FROM), Docker checks for each instruction whether a cached output (an intermediate image) already exists for it.
  • The check compares the requested instruction with the instruction that produced the cached output.
  • If the instructions do not match, the cache is invalidated and the step is built normally (see the sketch below).
  • To disable caching, pass the --no-cache option:
    $ docker build --no-cache ...
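This behaviour can be modelled with a small conceptual sketch in C++ (not Docker's actual implementation; real Docker also checksums file contents for COPY/ADD). Here an intermediate image is keyed by its parent layer plus the instruction, so changing one instruction automatically rebuilds every later step; the noCache flag mirrors docker build --no-cache:

#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Toy build cache: (parent layer, instruction) -> previously built layer.
std::map<std::pair<std::string, std::string>, std::string> cache;

// Pretend to execute one instruction on top of a parent layer.
std::string runStep(const std::string &parent, const std::string &instruction)
{
    return std::to_string(std::hash<std::string>{}(parent + "|" + instruction));
}

void build(const std::vector<std::string> &instructions, bool noCache)
{
    std::string layer = "scratch";
    for (const auto &ins : instructions) {
        auto key = std::make_pair(layer, ins);
        auto hit = cache.find(key);
        if (!noCache && hit != cache.end()) {
            std::cout << "Using cache: " << ins << "\n";
            layer = hit->second;          // reuse the intermediate image
        } else {
            std::cout << "Running    : " << ins << "\n";
            layer = runStep(layer, ins);  // new layer id; later steps see a new parent,
            cache[key] = layer;           // so their cache keys change and they rebuild too
        }
    }
}

int main()
{
    build({"FROM ubuntu", "RUN apt-get update", "COPY app /app"}, false);  // cold build
    build({"FROM ubuntu", "RUN apt-get update", "COPY app /app"}, false);  // fully cached
    build({"FROM ubuntu", "RUN apt-get upgrade", "COPY app /app"}, false); // step 2 changed
    return 0;
}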
    


xargs: ls: terminated by signal 13

The following command fails with signal 13 (SIGPIPE).

$ find . -type f|xargs ls -altr|head
-rw-r--r-- 1 root root  254111 Mar 17  2018 ./60/62
-rw-r--r-- 1 root root  135111 Mar 17  2018 ./60/66
xargs: ls: terminated by signal 13

Why did it fail?

$ strace find . -type f|xargs ls -altr|head
newfstatat(AT_FDCWD, "xyxyx", {st_mode=S_IFREG|0644, st_size=80241, ...}, AT_SYMLINK_NOFOLLOW) = 0
write(1, "40ee30e7c76a7541d61acb6ec4\n./60/"..., 4096) = -1 EPIPE (Broken pipe)
--- SIGPIPE {si_signo=SIGPIPE, si_code=SI_USER, si_pid=376666, si_uid=1190} ---
+++ killed by SIGPIPE +++

A process in the pipeline is writing to a closed pipe, gets a SIGPIPE (signal 13) and is killed.

The head command finishes (after printing its ten lines) before the rest of the pipeline and closes the read end of its pipe. The ls spawned by xargs then gets a SIGPIPE because its output is no longer being read, and xargs reports it as terminated by signal 13. The same happens upstream: once xargs stops, find's writes also hit a closed pipe, which is why the strace above shows find itself killed by SIGPIPE.
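The mechanism is easy to reproduce in a few lines of C++ (a standalone sketch, unrelated to xargs itself): create a pipe, close the read end and write to it. With the default disposition the writer is killed by SIGPIPE; ignoring the signal makes the failure visible as EPIPE instead.

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main()
{
    int fds[2];
    if (pipe(fds) != 0) {
        perror("pipe");
        return 1;
    }

    close(fds[0]);                    // close the read end: nobody reads this pipe

    // With the default handler the write below would kill us with SIGPIPE,
    // exactly like ls in the pipeline above. Ignore it to observe EPIPE.
    signal(SIGPIPE, SIG_IGN);

    if (write(fds[1], "hello\n", 6) == -1)
        printf("write failed: %s\n", strerror(errno));   // prints "Broken pipe"

    return 0;
}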


Go Runtime Scheduler Design Internals

Concurrency is one of the most exciting features of the Go language. A single-threaded program runs serially, but if you have tasks that can run concurrently, you create threads for such tasks. Threads execute independently and progress concurrently. Go supports the creation of thousands of such threads in an application! How is that possible? It's Go's runtime. Go programs are compiled, and the executable is self-contained (the runtime is linked into the binary).

Let's understand the design motivation of the Go runtime. The runtime must work within the system's resource constraints. The system must run multiple threads, but a CPU core can run only one thread at a time; if there are more threads than available cores, threads are paused and resumed (context switched). During a context switch, the running thread's execution state is preserved and another thread is loaded. Creating a thread also consumes resources, hence there is a limit on how many can exist.

Under these constraints, the Go runtime tries to maximise CPU utilisation and minimise latencies and memory footprint.

Go provides concurrency with the language primitives of Goroutines and channels. Using Goroutines, an application can grow dynamically (forking new Goroutines as needed). Channels are internal to the Go runtime; the operating system has no knowledge of them.

Let's understand Goroutines in detail. A Goroutine is essentially a lightweight thread that lives in user space. Goroutines are frugal with resources: unlike a system thread, a Goroutine starts with a small stack that grows as needed. This is one of the reasons Go can support the creation of thousands of Goroutines.

So how are thousands of Goroutines managed by the runtime? Instead of delegating the responsibility to the system scheduler, Go uses its own scheduler, written in Go itself.

How does the Go scheduler work? Let's first look at the thread models used in applications. An application can use system threads, managed by the OS. These threads can make system calls and access system resources (e.g. the CPU). However, they are expensive: each consumes kernel resources such as signal masks, a PID and cgroup bookkeeping, and a context switch is costly because it traps into the kernel. In contrast, user threads are created and managed by the application; they consume fewer resources and context switch quickly because no kernel transition is needed. However, a user thread still needs a system thread to execute code on a CPU or to access any other system resource.

The next decision is the mapping between user threads and system threads. The first model runs N user threads on one system thread (N:1). It gives fast context switching but cannot use multiple cores, and if a user thread blocks the single system thread, all other user threads must wait. The second scheme is a 1:1 mapping of user threads to system threads. It provides good CPU utilisation, but context switching is slow. The third option is a many-to-many (M:N) mapping.

Go takes the third option. Goroutines are distributed over a set of system/OS threads. There are three major entities in the Go scheduler: machines (M), Goroutines (G) and processors (P). There are also minor entities such as the global and local run queues and the thread cache.

Let's understand "MGP". A machine (M) represents an OS thread; it is part of a pool of worker threads and, on Linux, is a standard POSIX thread. An M runs a set of Gs. A G represents a user-space Goroutine; it has its own instruction pointer, stack and blocking info. A P represents a logical processor, i.e. a scheduling context. Every worker (M) needs a P to run a G. The number of Ps is pre-decided (GOMAXPROCS) and fixed during the run.

Consider a setup with two processors (P), as defined by GOMAXPROCS, and two worker machines (M). Each P has a local run queue. As new Goroutines are created and become runnable, they are added to the local run queue. If the local run queue is full, new Gs are added to the global run queue. Idle Gs are kept in an idle queue.

What happens when a G makes a system call? The scheduler knows that the G is blocked, and hence its M is blocked as well. The P is no longer being utilised and can be used by some other M.

The scheduler takes back the P and hands it, along with its run queue, to another worker machine (M), creating a new M if necessary. When the blocked M returns from the syscall, it tries to find an idle P. If none is available, it moves its G to the global queue and parks itself in the thread cache. The scheduler makes sure there are enough threads to run all contexts; there can be more than one M even for P=1, because a worker might get stuck in a syscall.

A P can become idle when its run queue is exhausted. In that case it tries to pick Gs from the global queue; but what if the global queue is also empty? There are two major scheduling paradigms for distributing work. The first is work-sharing, in which a busy P pushes Gs out to other Ps. In contrast, in a work-stealing scheduler an idle processor steals Gs from another P's run queue. As long as every P is busy there is no G movement; an idle P steals about half of the Gs from another P. Work stealing gives better resource utilisation and less migration of Gs.

So, when a P is idle and both its local run queue and the global run queue are empty, it randomly picks another P and steals half of its Gs.
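The stealing step can be pictured with a short conceptual sketch in C++ (not the actual runtime code; the names RunQueue and stealHalf are made up for illustration):

#include <cstddef>
#include <deque>
#include <iostream>

// Toy model: each P owns a run queue of goroutine IDs.
using RunQueue = std::deque<int>;

// An idle P steals roughly half of the victim's run queue,
// taking Gs from the back while the victim keeps working from the front.
void stealHalf(RunQueue &idle, RunQueue &victim)
{
    std::size_t toSteal = victim.size() / 2;
    for (std::size_t i = 0; i < toSteal; ++i) {
        idle.push_back(victim.back());
        victim.pop_back();
    }
}

int main()
{
    RunQueue p0 = {1, 2, 3, 4, 5, 6};   // a busy P with queued Gs
    RunQueue p1;                        // an idle P: local and global queues empty

    stealHalf(p1, p0);

    std::cout << "P0 keeps " << p0.size() << " Gs, "
              << "P1 stole " << p1.size() << " Gs\n";
    return 0;
}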

Go has a sophisticated scheduler: it offers massive concurrency with intelligent scheduling, and it always tries to achieve maximum utilisation and minimum latency.

Design Problems of Postgres - Part II

The problems in Postgres are solved in MySQL to an extent. The design merits of MySQL are the following:

  • The primary index maps a key to an offset on disk, but all secondary indexes map a key to the primary index's key.
Primary index  
--------------------  
| key | Disk offset|  
--------------------  
Secondary index  
---------------------  
| key | primary key |  
---------------------  
  • The drawback is that a lookup through a secondary index needs two hops (see the sketch after this list).
  • On the other hand, a change in a row only needs a modification of the primary index; the secondary indexes do not have to change for every change in the row's data.
  • MySQL replication uses logical information, instead of the physical information used by Postgres.
  • The logs are compact compared to PG's.
  • MySQL manages its own cache, unlike PG, which relies on the OS buffer cache.
  • PG's use of the OS buffer cache is inefficient, since a read operation has to go through the kernel's cache and hence incurs a context switch.
  • PG uses a process per connection, which is expensive. MySQL uses a thread per connection and scales without too much resource consumption.
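
The two-hop lookup can be sketched with two in-memory maps standing in for the on-disk structures (names such as lookupBySecondary are made up for illustration):

#include <cstdint>
#include <iostream>
#include <optional>
#include <string>
#include <unordered_map>

// Primary index: primary key -> disk offset of the row.
std::unordered_map<int, std::uint64_t> primaryIndex;

// Secondary index: secondary key -> primary key (not a disk offset).
std::unordered_map<std::string, int> secondaryIndex;

// A lookup by secondary key needs two hops: secondary -> primary -> offset.
std::optional<std::uint64_t> lookupBySecondary(const std::string &key)
{
    auto s = secondaryIndex.find(key);
    if (s == secondaryIndex.end()) return std::nullopt;    // hop 1
    auto p = primaryIndex.find(s->second);
    if (p == primaryIndex.end()) return std::nullopt;      // hop 2
    return p->second;
}

int main()
{
    primaryIndex[42] = 0xBEEF;                 // row with primary key 42 lives at this offset
    secondaryIndex["alice@example.com"] = 42;  // secondary key points at the primary key

    if (auto off = lookupBySecondary("alice@example.com"))
        std::cout << "row offset: " << *off << "\n";
    return 0;
}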