What is Special in HTTP/2?

Useful Points:

  1. Unlike the text-based protocols HTTP/1.0 and HTTP/1.1, HTTP/2 is a binary protocol, so tools such as curl are needed to make requests instead of plain telnet (a minimal Go sketch follows this list).
  2. A single TCP connection is multiplexed for multiple concurrent requests, so a client needs to open only one connection per server.
  3. Browsers support HTTP/2 only over TLS.
  4. General-purpose compression of data is not encouraged; HTTP/2 compresses headers with the dedicated HPACK format.
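
A minimal sketch from the client side, assuming any HTTP/2-capable HTTPS endpoint (https://www.google.com is used only as an example): Go's standard net/http client negotiates HTTP/2 over TLS automatically, and the response reports the negotiated protocol.

package main

import (
    "fmt"
    "net/http"
)

func main() {
    // The default client negotiates HTTP/2 via TLS ALPN when the server supports it,
    // so there is no plain-text exchange to type over telnet.
    resp, err := http.Get("https://www.google.com") // example HTTP/2-capable endpoint
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    fmt.Println(resp.Proto) // prints "HTTP/2.0" when HTTP/2 was negotiated
}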



Discussion on Microservices Integration


Microservices is a distributed-system pattern, and its components need to communicate over the network.

If the communication data includes internal technical details of the participating components, the system loses the property of loose coupling.

If a change in one service causes changes in many other services, we lose the property of high cohesion.

An unreliable network and the added latency of data communication dictate the choices for service integration.

There are two major types:

  • Direct Communication
  • Event-based, asynchronous communication

Direct communication is request/response based. It is useful for low-latency and immediate-consumption scenarios. It is prone to failures (an unresponsive server, network latency), so callers need to retry. This style of communication is not easily extensible and soon becomes brittle.

Asynchronous mode is a “fire and forget” approach. An event is generated, and it is up to the consumers to handle it. This model scales very well: the publisher and consumers have no coupling, and both are independently deployable. However, it is hard to monitor the status of each event handler.
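
To make the contrast concrete, here is a minimal, in-process Go sketch of the event-based style; the channel stands in for a real pub/sub broker, and the Event type and handler are hypothetical.

package main

import (
    "fmt"
    "sync"
)

// Event is a hypothetical domain event published by the writer service.
type Event struct {
    Name string
}

func main() {
    events := make(chan Event, 8) // stands in for a real pub/sub broker
    var wg sync.WaitGroup

    // Consumer: a subscribed handler, deployed independently of the publisher.
    wg.Add(1)
    go func() {
        defer wg.Done()
        for e := range events {
            fmt.Println("handled:", e.Name)
        }
    }()

    // Publisher: fire and forget; it does not know who consumes the event.
    events <- Event{Name: "OrderCreated"}

    close(events)
    wg.Wait()
}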



goto: Modern Continuous Delivery

The presentation https://www.youtube.com/watch?v=wjF4X9t3FMk gives a great overview of microservices deployment and development flow. The key ideas are the following:

  1. Security is needed at every step of the microservice lifecycle.
  2. End-to-end testing is essential.
  3. Testing in production is also important, using a canary or blue-green strategy.
  4. Continuous integration and deployment are best modeled as a pipeline.
  5. Docker and Kubernetes are here to stay!



Understanding REST (REpresentational State Transfer)

REST is a software architectural style that defines a set of constraints for creating web services.

The basic building blocks of REST are:

  • Client-server architecture
  • Statelessness
  • Cacheability
  • Layered system
  • Uniform interface
  • Code on demand (optional)

The benefits of the above constraints are (a short Go sketch follows the list):

  • fault-tolerant systems
  • portable applications
  • scale-out systems
  • isolated component updates
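
As a small illustration of the uniform interface and statelessness, here is a sketch of a resource handler in Go; the /reservations/42 path and the Reservation type are made up for the example.

package main

import (
    "encoding/json"
    "net/http"
)

// Reservation is a hypothetical resource exposed by the service.
type Reservation struct {
    ID     string `json:"id"`
    Status string `json:"status"`
}

func main() {
    // The resource is addressed by a URI and manipulated through standard
    // HTTP methods; the handler keeps no client state between requests.
    http.HandleFunc("/reservations/42", func(w http.ResponseWriter, r *http.Request) {
        switch r.Method {
        case http.MethodGet:
            json.NewEncoder(w).Encode(Reservation{ID: "42", Status: "confirmed"})
        default:
            w.WriteHeader(http.StatusMethodNotAllowed)
        }
    })
    http.ListenAndServe(":8080", nil)
}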

References

https://en.wikipedia.org/wiki/Representational_state_transfer


MVC Explained

MVC is an architecture that separates an application into three cohesive, loosely coupled verticals.

  1. Model: The data of your application and the methods to access it.
  2. View: The presentation of the data; the final output shown to the user.
  3. Controller: The component that handles requests and mediates between the view and the model.

I’m trying to map it to a Linux Filesystem (e.g. ext2).

Model: The file system block manager and allocator for the storage device.
View: read()/write() methods.
Controller: Maps a file descriptor to file system blocks.
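
The same separation can be sketched in a few lines of Go; the Counter model and the increment request are hypothetical.

package main

import "fmt"

// Model: the application data and the methods to access it.
type Counter struct {
    value int
}

func (c *Counter) Increment() { c.value++ }
func (c *Counter) Value() int { return c.value }

// View: renders the final output from model data.
func render(v int) string {
    return fmt.Sprintf("count = %d", v)
}

// Controller: handles a request, updates the model, and picks the view.
func handleIncrement(c *Counter) string {
    c.Increment()
    return render(c.Value())
}

func main() {
    c := &Counter{}
    fmt.Println(handleIncrement(c)) // count = 1
}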


What is a Data Platform?

Over time, organizations need to go beyond a single DB for querying and storing data to a set of DBs that cater to different business requirements. A Data Platform might comprise:

  • Search Index
  • A relational DB
  • NoSQL DB
  • Data Warehouse

Why a Data Warehouse?

It is often of interest to understand how the application uses the DB. The inspection can be done with a set of queries against the DB, but running them might affect your primary workload, so you can create isolated replica nodes for this purpose.
However, the schema of the operational DB is often not suitable for querying that global view of the data. So, using an ETL pipeline, the data is transformed and stored in the desired schema in a data warehouse (often built on storage such as S3).

Why a Search Index?

A search index allows applications to run searches over the DB’s data, primarily with a Lucene-based solution such as Elasticsearch (ES) or Solr. The index is mostly only eventually consistent with the DB, because it is expensive to update the index in the write path.
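
The eventual consistency follows from keeping indexing off the write path. Below is a toy Go sketch, with maps standing in for the primary DB and the search index and a channel standing in for the indexing queue.

package main

import "fmt"

// Document is a hypothetical record written to the primary DB.
type Document struct {
    ID   string
    Text string
}

func main() {
    db := map[string]Document{}        // stands in for the primary DB
    index := map[string]Document{}     // stands in for a Lucene-based index
    toIndex := make(chan Document, 16) // indexing queue, off the write path
    done := make(chan struct{})

    // Background indexer: applies updates later, so the index lags the DB.
    go func() {
        for d := range toIndex {
            index[d.ID] = d
        }
        close(done)
    }()

    // Write path: update the DB and only enqueue the indexing work.
    d := Document{ID: "1", Text: "hello search"}
    db[d.ID] = d
    toIndex <- d

    close(toIndex) // in a real system the queue stays open and the index lags
    <-done
    fmt.Println(index["1"].Text)
}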


Simplifying go-kit toolkit for Microservices – Part I

Introduction

go-kit is one of the most complete and flexible toolkits for developing microservices in the Go language. At the same time, the learning curve of go-kit is steep. In this post, I try to explain go-kit’s fundamental components using the general-purpose client-server model of Linux.

General Purpose Client-Server Architecture

A server in Linux binds & listens on a port, accepts connections and serves requests. A client creates its own socket and makes a request to the server (IP:PORT). The request is just a buffer. The server spawns a thread or runs the handler in the same process. The handler is a function that understands the request parameters and typecasts them to the expected type.
The client receives a response from the server. The response is read into a buffer and then interpreted.
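
For reference, here is a minimal Go version of that generic server; the port and the echo-style handler are arbitrary choices for the sketch.

package main

import (
    "fmt"
    "net"
)

// handle interprets the raw request buffer; a real server would parse these
// bytes into the type the handler expects.
func handle(conn net.Conn) {
    defer conn.Close()
    buf := make([]byte, 1024)
    n, err := conn.Read(buf) // the request is just a buffer
    if err != nil {
        return
    }
    fmt.Fprintf(conn, "echo: %s", buf[:n]) // the response is also just bytes
}

func main() {
    ln, err := net.Listen("tcp", ":9000") // bind & listen on a well-known port
    if err != nil {
        panic(err)
    }
    for {
        conn, err := ln.Accept() // accept a client connection
        if err != nil {
            continue
        }
        go handle(conn) // serve each request concurrently (the "spawned thread")
    }
}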

go-kit in Client-Server Paradigm

go-kit helps create a service. A service listens on a well-known IP and port; it is the equivalent of the Linux server above. The service defines handlers for different types of requests (PUT, GET, POST). A handler is a function called for a service request; it is a generic wrapper over the service-specific business logic. A handler for a GET call may eventually call get_status_reservation() in a sample reservation system.

go-kit intends to keep the core logic free of go-kit influence. The core logic (the set of functions that your service implements) stays in a file conventionally called service.go. The remaining go-kit code accesses these functions in an abstract manner through entities called endpoints, transports and servers, each of which allows generic interaction with the service functions over HTTP (or gRPC).
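
A minimal sketch of such a service.go for a hypothetical reservation service; note that it imports nothing from go-kit.

// service.go: core business logic, free of any go-kit types.
package reservation

import "context"

// Service describes the business operations of the reservation system.
type Service interface {
    GetStatus(ctx context.Context, id string) (string, error)
}

type service struct{}

// NewService returns the plain Go implementation that go-kit's layers wrap.
func NewService() Service { return service{} }

func (service) GetStatus(_ context.Context, id string) (string, error) {
    // Real logic would look the reservation up in a store.
    return "confirmed", nil
}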

The overall objective is to expose the service functionality through the go-kit toolchain.

Transport

The transport defines the send and receive buffer structures. Each service API can expect and send different data, so the transport defines the request and response structures for each API.
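
For the hypothetical GetStatus API above, the transport layer might look like this; the struct fields are assumptions, and the decoder has the shape that go-kit’s HTTP transport expects for request decoding.

// transport.go: request/response shapes for the GetStatus API.
package reservation

import (
    "context"
    "encoding/json"
    "net/http"
)

// getStatusRequest is what a client sends for the GetStatus API.
type getStatusRequest struct {
    ID string `json:"id"`
}

// getStatusResponse is what the server sends back.
type getStatusResponse struct {
    Status string `json:"status"`
    Err    string `json:"err,omitempty"`
}

// decodeGetStatusRequest turns the raw HTTP request into a typed request;
// a function of this shape is what go-kit's HTTP server is given.
func decodeGetStatusRequest(_ context.Context, r *http.Request) (interface{}, error) {
    var req getStatusRequest
    if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
        return nil, err
    }
    return req, nil
}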

In the next post, I will cover endpoints, the most interesting part of go-kit.


Istio: A Novice Explanation


What is Istio?

  • A microservices manager
  • A service-mesh-based system. A service mesh is the communication layer of a system built from many microservices 🙂
  • Manages traffic, authorization policies, encryption, load balancing, tracing, and logging (all such repetitive tasks are clubbed together in Istio)
  • It is another layer alongside each microservice: Istio’s proxy runs as a sidecar in the same pod (or on the same VM) as your microservice.

How Istio Works

  • Its control-plane components run inside the same Kubernetes cluster, in a dedicated istio-system namespace.
  • It has four components in its architecture
    • Envoy
    • Pilot
    • Citadel
    • Galley
  • It uses a proxy server called Envoy to intercept TCP/IP traffic and report it to another component called Mixer for policy checks and telemetry.
  • Citadel manages certificates and enforces secure service-to-service communication (mTLS).
  • Galley takes care of configuration validation and distribution.

Why Istio?

  • At a high scale of microservices, Istio eases managing these common tasks.
  • In my opinion, the USP is that Istio handles these tasks without any code modification to the services.

Istio & Kubernetes

  • Istio configurations are merged with the Kubernetes service/deployment YAML.
  • Istio adds its own resource definitions alongside the service or deployment YAML.
  • There are resources such as:
    • VirtualService: Defines the routing rules for how requests reach a service.
    • Gateway: Configures a load balancer for ingress traffic at the mesh edge (who can contact a service).

Istio Commands

$ kubectl get svc -n istio-system
$ kubectl get pods -n istio-system



Architecture Pattern: CQRS

Command and Query Responsibility Segregation (CQRS) provides excellent decoupling for shared data at the nominal price of higher latency for the latest data. It fits very well in a microservice architecture for cases of data sharing among services where one service is a reader and another a writer.

Implementation

Suppose there are two services, A and B, that need to share a database. We do not want to create a DB dependency between them. So, we can create two DBs and purpose one for writing only and the other for reading only.

Service A always writes to its own DB, and after every write it sends an event to a pub/sub system. Service B is subscribed to the pub/sub, gets all the write events, and updates its own DB. All clients of service B always read from B’s DB.
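
A compact Go sketch of this flow, with maps standing in for the two DBs and a channel standing in for the pub/sub system; the event type and keys are made up for the example.

package main

import "fmt"

// writeEvent is the hypothetical event Service A publishes after each write.
type writeEvent struct {
    Key   string
    Value string
}

func main() {
    writeDB := map[string]string{} // Service A's DB, used only for writes
    readDB := map[string]string{}  // Service B's DB, used only for reads
    pubsub := make(chan writeEvent, 8)
    done := make(chan struct{})

    // Service B: subscribed to the pub/sub, applies every event to its own DB.
    go func() {
        for e := range pubsub {
            readDB[e.Key] = e.Value
        }
        close(done)
    }()

    // Service A: writes to its DB, then publishes an event for that write.
    writeDB["order-1"] = "created"
    pubsub <- writeEvent{Key: "order-1", Value: "created"}

    close(pubsub)
    <-done
    // Clients of Service B read only B's DB; it may lag A's slightly.
    fmt.Println(readDB["order-1"])
}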


Opera Browser: Architecture

Opera is built on the Chromium project. Chromium is an open-source project that uses WebKit, a free and open-source rendering engine originally open-sourced by Apple.

So, Google Chrome and Opera resemble each other a lot. They use a separate process for each tab and look similar. The benefits of using processes to handle tabs are:

  1. Better security for the browser, as processes can’t circumvent security rules easily.
  2. Identification of inactive tabs and swapping them out of memory, to keep the browser lightweight.

To understand more about rendering and Chromium architecture, please visit the following:

  1. https://www.chromium.org/developers/design-documents/multi-process-architecture
  2. https://www.chromium.org/developers/design-documents/displaying-a-web-page-in-chrome