Euler Traversal & LCA in a Tree

Euler Traversal

  • Start at the ROOT: ROOT --> Subtree-1 --> ROOT --> Subtree-2 --> ROOT --> Subtree-3 --> ROOT
  • Recurse into each subtree in the same way.
  • All nodes of a subtree appear together, contiguously, in the traversal.

Why Does Euler Traversal Give the LCA?

Assume we want the LCA of a node u in subtree a and a node v in subtree c. An Euler walk of the tree will have all nodes of subtree a, followed by the LCA node (here, the ROOT), followed by all nodes of subtree c.

Thus, in the Euler walk of a tree, the LCA of any two nodes always appears between their occurrences.


Written with StackEdit.


Istio: A Novice Explanation


What is ISTIO

  • A microservices manager
  • A service-mesh-based system. A service mesh is a system built from many microservices, together with the layer that manages communication among them 🙂
  • Manages traffic, authorization policies, encryption, load balancing, tracing, and logging (all these repetitive tasks are clubbed in ISTIO)
  • It is another layer alongside a microservice. ISTIO runs as a sidecar in the same pod/VM as your microservice.

How ISTIO Works

  • It runs in its own namespace (istio-system) in the cluster.
  • It has four components in its architecture:
    • Envoy
    • Pilot
    • Citadel
    • Galley
  • It uses a proxy server called Envoy to intercept TCP/IP traffic and pass telemetry to another component called Mixer.
  • Citadel enforces communication security.
  • Galley validates and distributes configuration.

Why ISTIO

  • At a high scale of microservices, ISTIO eases managing common tasks
  • In my opinion, the USP is that ISTIO allows these tasks without any code modification to services.

ISTIO & Kubernetes

  • Istio configurations are merged with Kubernetes service deployment YAML.
  • ISTIO adds a new section to the service or deployment YAML.
  • There are sections such as:
    • VirtualService: It exposes a service to public IPs through a load balancer.
    • Gateway: A public gateway with ingress config (who can contact a service)
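
A hedged sketch of how these two sections might look (hypothetical names and ports; the field layout follows the networking.istio.io API):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway            # hypothetical name
spec:
  selector:
    istio: ingressgateway     # use Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"                     # who can contact the service
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service            # hypothetical name
spec:
  hosts:
  - "*"
  gateways:
  - my-gateway                # bind routing rules to the gateway above
  http:
  - route:
    - destination:
        host: my-service      # the Kubernetes service to route to
        port:
          number: 8080
```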

ISTIO Commands

$ kubectl get svc -n istio-system
$ kubectl get pods -n istio-system


Height of a Binary Tree: Recursion Unrolled & Explained



The height of a binary tree is the number of edges on the longest path from the root to a leaf.

Code
int getHeight(node *root)
{
    if (root == NULL) {
        // An empty tree has height -1, so a single leaf gets height 0.
        return -1;
    }
    int leftHeight = getHeight(root->left);
    int rightHeight = getHeight(root->right);

    // Height is the taller subtree's height plus the edge to it.
    return max(leftHeight, rightHeight) + 1;
}

Suppose the tree is as follows:

         [10]
        /    \
      [8]    [12]
             /   \
           [4]   [5]

The recursion flow is as follows:

1. getHeight(10)
2. getHeight(left of 10) ==> getHeight(8)
3. getHeight(left of 8) ==> getHeight(NULL)
4. getHeight(NULL) returns -1
5. leftHeight = -1 and the flow goes back to step 3
6. Now, it calls the right of 8: getHeight(right of 8)
7. getHeight(NULL) returns -1 to rightHeight
8. Both subtrees of 8 are traversed, with leftHeight = -1 and rightHeight = -1
9. It compares both values and returns the max + 1
10. max(-1, -1) + 1 = 0
11. The node 8 is the left subtree of 10, so the value returns to step 2.
12. At node 10, leftHeight = 0
13. Now, rightHeight is calculated by moving to the right of 10.
14. getHeight(right of 10) ==> getHeight(12)
15. Similar to the above, the height at node 12 is calculated as 1.
16. At the end, node 12 returns its height to step 14.
17. Again, we compare: max(0, 1) + 1
18. The answer is 2

Summary

Usually, all recursion steps are not visualized and just assumed to be working. This post tries to exhaust a recursion flow and help understand it better.

Notes on HAProxy Configuration

HAProxy

The primary objectives of a load balancer are:

  • Acting as a reverse proxy (exposing one IP and hiding the backend servers behind it).
  • Load balancing and higher availability with multiple backend servers
  • Request queuing

The four most basic sections of an HAProxy configuration are:

  • Default
  • Frontend
  • Backend
  • Global

All the sections are similar to an Nginx config. The interesting part is layer-4 vs. layer-7 routing of requests to backend servers. Layer 4 is TCP-based; layer 7 is the application layer, based on HTTP. A layer-4 health check is a successful TCP connection from HAProxy to a backend server. A layer-7 health check is a successful response to an HTTP API call (HEAD/GET). The latter is slower but smarter.
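
A minimal sketch of the four sections (hypothetical ports and server addresses), including a layer-7 health check:

```
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http_in
    bind *:80
    default_backend web_servers

backend web_servers
    # layer-7 check: send an HTTP request and expect a valid response
    option httpchk HEAD /health
    server web1 10.0.0.11:8080 check
    server web2 10.0.0.12:8080 check
```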


HAProxy Logs & Config

  • /var/log/haproxy.log
  • /etc/haproxy/haproxy.cfg

Notes on Dockerfile and Build Cache

Dockerfile is an instruction set to set up a new container. It looks like a BASH script that serially runs all the mentioned commands. The commands are predefined by Dockerfile syntax.

Unlike a BASH script, each Dockerfile command applies its effect on top of the output of the previous step. Each step of a Dockerfile creates, by default, an intermediate image which is kept hidden. You can list such intermediate images by running the following command:

$ docker images -a

All images with <none> as their name are intermediate.

Why Docker Needs Intermediate Images

  • Each intermediate image acts as the cached output of a step in the Dockerfile.
  • The next build uses the cached output instead of running the step again.
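
A minimal sketch (hypothetical image and file names): each instruction below produces one cached step, so editing only a later line leaves the earlier steps' cache valid:

```dockerfile
# step 1: base image
FROM ubuntu:20.04
# step 2: reused from cache unless this instruction text changes
RUN apt-get update && apt-get install -y curl
# step 3: COPY also checksums the file contents, not just the instruction
COPY app.sh /usr/local/bin/
# step 4: rebuilt only when an earlier step invalidates the cache
CMD ["app.sh"]
```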

How It Works

  • Each step, starting from the FROM base image, checks whether its instruction has a cached output.
  • The check compares the current instruction with the instruction that produced the cached output.
  • If the instructions do not match, the cache is invalidated and the step is built normally.
  • To disable caching, provide the --no-cache option:
    $ docker build --no-cache ...
    


xargs: ls: terminated by signal 13

The following command fails with signal 13 (SIGPIPE).

$ find . -type f|xargs ls -altr|head
-rw-r--r-- 1 root root  254111 Mar 17  2018 ./60/62
-rw-r--r-- 1 root root  135111 Mar 17  2018 ./60/66
xargs: ls: terminated by signal 13

Why did it fail?

$ strace find . -type f|xargs ls -altr|head
newfstatat(AT_FDCWD, "xyxyx", {st_mode=S_IFREG|0644, st_size=80241, ...}, AT_SYMLINK_NOFOLLOW) = 0
write(1, "40ee30e7c76a7541d61acb6ec4\n./60/"..., 4096) = -1 EPIPE (Broken pipe)
--- SIGPIPE {si_signo=SIGPIPE, si_code=SI_USER, si_pid=376666, si_uid=1190} ---
+++ killed by SIGPIPE +++

The ls process spawned by xargs is writing to a closed pipe; it gets a SIGPIPE and is killed, which xargs reports as “terminated by signal 13”.

The head command finishes before xargs/ls does. It closes the read end of the pipe, and the writer gets a SIGPIPE because the pipe is no longer being read.
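
The same mechanism is easy to reproduce without xargs; here seq plays the role of the upstream writer:

```shell
# head exits after printing 3 lines and closes its end of the pipe;
# seq is then killed by SIGPIPE on its next write, just like ls above
seq 1 1000000 | head -n 3
```

Unlike xargs, the shell stays silent about the killed writer, which is why this failure usually goes unnoticed.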


Kubernetes for Dummies: StatefulSet

Kubernetes provides virtualized infrastructure components such as storage, compute and network. Imagine an operating system that allows users to allocate resources and run their applications.

A StatefulSet is a type of application that needs to persist information across pod restarts 🙂

The storage is exposed as a class and a name. A storage class mimics the real world and carries meta information such as the size, type, and provider of the storage. Kubernetes follows a plugin-based approach like the Linux kernel VFS :-). You can act as a provider of storage and define a class for such storage.

Storage type could be block or file. Like Linux, a file type storage gets a mountpoint.

The following snippet defines a storage class of type SSD, using Google’s storage. Retain means we keep the data even after the claim is deleted.

  ---
  kind: StorageClass
  apiVersion: storage.k8s.io/v1
  metadata:
    name: my-storage-class
  provisioner: kubernetes.io/gce-pd
  reclaimPolicy: Retain
  parameters:
    type: pd-ssd
    replication-type: none
  ---

gce-pd means Google Compute Engine Persistent Disk.

The Kubernetes template for the instance would also specify how to use the storage with a PersistentVolumeClaim. The idea is that the instance first claims a persistent volume of a particular class. It also mentions access type and needed size.

volumeClaimTemplates:
  - metadata:
      name: a-new-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 1Gi

ReadWriteOnce means that the volume can be mounted read-write by a single node. It just means a regular storage space.

Now just use the claim under containers section.

containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        volumeMounts:
        - name: a-new-claim
          mountPath: /usr/share/nginx/html

You will have a mounted path /usr/share/nginx/html in the container. Behavior is similar to local mount exported by NFS 🙂


Why Using Golang sync Pool is a Bad Idea?

It is not an absolutely bad idea, but you need a careful understanding before using sync.Pool.

Golang’s sync.Pool is used to create a self-managed object pool with just New/Get/Put functions. The Pool implementation uses mutex-based locking for thread-safe operations from multiple goroutines.

Design Principles
  • Golang sync.Pool uses garbage collection to manage objects. The pool is essentially a list of user-defined blob objects that are GC’ed periodically.
  • The application uses Get/Put to borrow and return an object from the Pool.
  • The application defines a New method to create a new object.
  • The application does not bother about object memory management.
  • Unreferenced objects, i.e., objects sitting in the Pool at GC time, are GC’ed.
  • There is no finalizer for object deletion.
  • Get returns an arbitrary object from the list; if no object is available, it creates a new object and returns it.
  • Put places an object back in the Pool.
  • The design wants to avoid hotspots and randomizes object selection in Put and Get operations. So, Put and Get calls may not be correlated.
    • A Put op may randomly return without placing the object back in the Pool.
    • This intentional drop helps a subsequent Get pick a new object.
What are the Potential Problems with Pool?
  • If the object has allocated resources (socket, buffer), there is no way to free them!
  • If the application has got an object and starts processing it with async operations, there is a risk of data corruption if the application returns the object to the Pool.
Best Use cases for Pool
  • Plain buffer management
  • Lifecycle management of non-composite objects
Not to Use If
  • Objects that dynamically allocate resources (buffer, sockets)
  • Objects that are processed asynchronously.

How to use Gist with Stackedit for WordPress blog?


  • Create the gist and copy the gist id.
    An example is https://gist.github.com/vishalkanaujia/c7d692de9a3b337e9ad8e655533b9856
    The gist id is the last string of the address:
    “c7d692de9a3b337e9ad8e655533b9856”
  • In the StackEdit source (markdown file), add the following code:
    [g i s t]c7d692de9a3b337e9ad8e655533b9856[/g i s t]
    Please type “gist” without the spaces (they are added here only to stop WordPress from expanding the shortcode).
  • WordPress will automatically interpret the shortcode and pull your gist.

