Imagine an application broken down into multiple microservices; each microservice has multiple instances, and each deployed instance has multiple versions. Even a simple application deployment built on this model can span hundreds of microservices. When a deployment becomes this large, distributed, and complex, failures are inevitable, so you need to fail fast and recover quickly. You need a fault-tolerant mechanism, one that provides visibility into, and control over, the complex network of microservices and ensures reliable, secure, and timely communication between them. For this deployment model, we need to track the traffic flow between microservices, route traffic based on request content or traffic origination point, and handle failures gracefully when some microservices become unreachable. We also need to enforce strong identity assertion between services and limit which entities can access a service. Most importantly, we want to do all of this without changing the application code. Service mesh architecture was created to meet these requirements.
The goal is to get a request across this mesh of microservices, from the originating microservice to the destination, in a reliable, secure, and timely manner. Typically, this is achieved by using proxies to intercept all incoming and outgoing network traffic. Proxies in a service mesh architecture are implemented using the sidecar pattern: a sidecar is conceptually attached to the main (or parent) application and complements it by providing platform features.
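As a sketch of the sidecar pattern, the Pod below runs a hypothetical `reviews` application container alongside an Envoy-based proxy container. The application name, image names, and port are illustrative assumptions, not taken from a real deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: reviews-v1
  labels:
    app: reviews
spec:
  containers:
  - name: reviews              # main (parent) application container
    image: example/reviews:v1  # illustrative image name
    ports:
    - containerPort: 9080
  - name: istio-proxy          # sidecar proxy that intercepts traffic
    image: istio/proxyv2       # Envoy-based proxy image used by Istio
```

In practice you rarely write the sidecar container by hand; Istio injects it into the Pod spec for you, as described below.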
Kubernetes supports a microservices architecture through the Service construct. It allows developers to abstract away the functionality of a set of Pods and expose it to other developers through a well-defined API. It lets them attach a name to this level of abstraction and perform rudimentary L4 load balancing. But it doesn’t help with higher-level problems, such as L7 metrics, traffic splitting, rate limiting, and circuit breaking.
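For example, a minimal Service (the name, selector, and port are assumed for illustration) gives a stable name to a set of Pods and load-balances TCP connections across them at L4:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: reviews            # stable DNS name for the set of Pods
spec:
  selector:
    app: reviews           # selects Pods carrying this label
  ports:
  - port: 9080             # port exposed by the Service
    targetPort: 9080       # port the Pods listen on
```

Everything beyond this, such as routing on HTTP headers or splitting traffic by percentage, is where a service mesh comes in.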
With Istio, developers can implement the core logic for the microservices and let the framework take care of the rest: traffic management, service discovery, service identity and security, and policy enforcement. Better yet, this can also be done for existing microservices without rewriting or recompiling any of their parts.
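One way this works without touching application code is Istio's automatic sidecar injection: labeling a namespace causes the proxy sidecar to be injected into new Pods at admission time, leaving the application manifests unchanged. A minimal sketch (the namespace name is an assumption):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    istio-injection: enabled   # Pods created here get the sidecar injected
```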
Istio Pilot (for traffic management): In addition to providing content and policy-based load balancing and routing, Pilot also maintains a canonical representation of services in the mesh.
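Later Istio releases expose Pilot's routing rules through the VirtualService resource. The sketch below splits traffic 90/10 between two hypothetical subsets of a `reviews` service; the service name and subsets are assumptions, and the subsets would be defined in a companion DestinationRule:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews                # apply these rules to traffic for this service
  http:
  - route:
    - destination:
        host: reviews
        subset: v1         # e.g. Pods labeled version: v1
      weight: 90           # 90% of requests
    - destination:
        host: reviews
        subset: v2
      weight: 10           # 10% of requests, e.g. a canary
```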
Istio Auth (for access control): Istio Auth controls access to the microservices based on traffic origination points and users, and provides a key management system for keys and certificates.
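In current Istio releases, the identity features originally provided by Istio Auth are configured through resources such as PeerAuthentication. This sketch (the namespace name is an assumption) requires mutual TLS, and therefore strong workload identity, for all services in a namespace:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: foo       # assumed namespace
spec:
  mtls:
    mode: STRICT       # only accept mutually authenticated TLS traffic
```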
Istio Mixer (for monitoring, reporting, and quota management): Istio Mixer provides in-depth monitoring and log collection for microservices, as well as collection of request traces. It integrates with Prometheus, Grafana, and Zipkin to provide some of these in-depth metrics.