Karmada (Kubernetes Armada) is a Kubernetes management system that enables you to run your cloud-native applications across multiple Kubernetes clusters and clouds, with no changes to your applications. By speaking Kubernetes-native APIs and providing advanced scheduling capabilities, Karmada enables truly open, multi-cloud Kubernetes.
Karmada aims to provide turnkey automation for multi-cluster application management in multi-cloud and hybrid cloud scenarios, with key features such as centralized multi-cloud management, high availability, failure recovery, and traffic scheduling.
Why Karmada:
K8s Native API Compatible
Zero-change upgrade from single-cluster to multi-cluster
Seamless integration with the existing Kubernetes tool chain
Out of the Box
Built-in policy sets for common scenarios, including active-active, remote disaster recovery (DR), geo-redundancy, etc.
Cross-cluster auto-scaling, failover, and load balancing for applications running on multiple clusters
Avoid Vendor Lock-in
Integration with mainstream cloud providers
Automatic allocation and migration of workloads across clusters
Not tied to proprietary vendor orchestration
Centralized Management
Location-agnostic cluster management
Supports clusters in public cloud, on premises, or at the edge
Rich Multi-Cluster Scheduling Policies
Cluster Affinity, Multi-Cluster Splitting/Rebalancing
Multi-Dimension HA: Region/AZ/Cluster/Provider
Open and Neutral
Jointly initiated by Internet, finance, manufacturing, and telecom companies, as well as cloud providers
Targets open governance with the CNCF
Notice: this project is developed in continuation of Kubernetes Federation v1 and v2. Some basic concepts are inherited from these two versions.
Architecture
The Karmada Control Plane consists of the following components:
Karmada API Server
Karmada Controller Manager
Karmada Scheduler
etcd stores the Karmada API objects, the API Server is the REST endpoint that all other components talk to, and the Karmada Controller Manager performs operations based on the API objects you create through the API Server.
The Karmada Controller Manager runs the various controllers; the controllers watch Karmada objects and then talk to the member clusters' API servers to create regular Kubernetes resources.
Cluster Controller: attaches Kubernetes clusters to Karmada and manages their lifecycle by creating Cluster objects.
Policy Controller: watches PropagationPolicy objects. When a PropagationPolicy is added, it selects the group of resources matching its resourceSelector and creates a ResourceBinding for each matched resource object.
Binding Controller: watches ResourceBinding objects and creates a Work object for each target cluster, containing a single resource manifest.
Execution Controller: watches Work objects. When a Work object is created, it distributes the contained resources to the corresponding member cluster.
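To make the Cluster Controller's role concrete, here is an illustrative sketch of a Cluster object. The exact fields depend on the Karmada version, and the cluster name, endpoint, and secret shown here are assumptions; in practice this object is usually created for you when a cluster is joined (e.g. via karmadactl join):

```yaml
# Hypothetical sketch of a registered member cluster in Push mode.
apiVersion: cluster.karmada.io/v1alpha1
kind: Cluster
metadata:
  name: member1
spec:
  apiEndpoint: https://172.18.0.3:6443  # member cluster's API server (assumed address)
  syncMode: Push                        # the control plane pushes resources to the member
  secretRef:
    namespace: karmada-cluster
    name: member1                       # secret holding credentials for the member cluster
```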
Concepts
Resource template: Karmada uses the Kubernetes-native API definition as the federated resource template, making it easy to integrate with existing tools that already build on Kubernetes.
Propagation Policy: Karmada offers a standalone propagation (placement) policy API to define multi-cluster scheduling and spreading requirements.
Supports a 1:n mapping of policy to workloads, so users don't need to specify scheduling constraints every time they create a federated application.
With default policies, users can interact with the Kubernetes API directly.
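As a sketch of the 1:n mapping, a single policy can match many workloads through a label selector instead of naming each one. The policy name, labels, and cluster names below are illustrative assumptions:

```yaml
# Illustrative PropagationPolicy: one policy covers every frontend
# Deployment (1:n) and spreads them across two member clusters.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: frontend-policy
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      labelSelector:
        matchLabels:
          tier: frontend   # any Deployment with this label is matched
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
```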
Override Policy: Karmada provides a standalone override policy API for automating cluster-specific configuration. For example:
Override image prefix according to member cluster region
Override StorageClass according to cloud provider
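The image-prefix case above might be expressed as follows. This is a hedged sketch: the policy name, cluster name, and registry host are assumptions, and the exact overrider fields depend on the Karmada version:

```yaml
# Illustrative OverridePolicy: rewrite the image registry for workloads
# that land on a cluster in a particular region.
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: nginx-override
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  overrideRules:
    - targetCluster:
        clusterNames:
          - member1                          # e.g. a cluster in the eu-west region
      overriders:
        imageOverrider:
          - component: Registry              # rewrite only the registry part of the image
            operator: replace
            value: registry.eu-west.example.com
```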
The following diagram shows how Karmada resources are involved when propagating resources to member clusters.
Karmada is a sandbox project of the Cloud Native Computing Foundation (CNCF).
Quick Start
This guide will cover:
Install the Karmada control plane components in a Kubernetes cluster, which is known as the host cluster.
Join a member cluster to the Karmada control plane.
Propagate an application by using Karmada.
Prerequisites
Install the Karmada control plane
1. Clone this repo to your machine:
2. Change to the karmada directory:
3. Deploy and run Karmada control plane:
run the following script:
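The commands for the three steps above might look as follows; the local-up script path assumes the helper scripts that ship in the repository's hack directory:

```shell
# 1. Clone this repo to your machine
git clone https://github.com/karmada-io/karmada

# 2. Change to the karmada directory
cd karmada

# 3. Deploy and run the Karmada control plane locally
hack/local-up-karmada.sh
```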
This script will do the following tasks for you:
Start a Kubernetes cluster to run the Karmada control plane, a.k.a. the host cluster.
Build the Karmada control plane components based on the current codebase.
Deploy the Karmada control plane components on the host cluster.
Create member clusters and join them to Karmada.
If everything goes well, at the end of the script output, you will see similar messages as follows:
Local Karmada is running.
To start using your Karmada environment, run:
export KUBECONFIG="$HOME/.kube/karmada.config"
Please use 'kubectl config use-context karmada-host/karmada-apiserver' to switch between the host cluster and the Karmada control plane.
To manage your member clusters, run:
export KUBECONFIG="$HOME/.kube/members.config"
Please use 'kubectl config use-context member1/member2/member3' to switch between the member clusters.
There are two contexts in Karmada:
kubectl config use-context karmada-apiserver
kubectl config use-context karmada-host
The karmada-apiserver context is the main kubeconfig to be used when interacting with the Karmada control plane, while karmada-host is only used for debugging the Karmada installation on the host cluster. You can check all clusters at any time by running kubectl config view. To switch cluster contexts, run kubectl config use-context [CONTEXT_NAME].
Demo
Propagate application
In the following steps, we are going to propagate a deployment by using Karmada.
1. Create an nginx deployment in Karmada.
First, create a deployment named nginx:
2. Create a PropagationPolicy that will propagate nginx to the member clusters.
Then, we need to create a policy to propagate the deployment to our member clusters.
3. Check the deployment status from Karmada.
You can check the deployment status from Karmada without needing to access the member clusters:
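As a sketch, the manifests for steps 1 and 2 might look as follows; the member cluster names are assumptions taken from the local quick-start environment, and the image tag is illustrative. Apply them with kubectl while the karmada-apiserver context is active:

```yaml
# Step 1: an ordinary Kubernetes Deployment -- the resource template.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.19.0
---
# Step 2: a PropagationPolicy that places the nginx Deployment on member clusters.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
```

For step 3, running kubectl get deployment nginx against the karmada-apiserver context reports the aggregated status, so you do not need to query each member cluster yourself.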
Kubernetes compatibility
Key:
✓ Karmada and the Kubernetes version are exactly compatible.
+ Karmada has features or API objects that may not be present in the Kubernetes version.
- The Kubernetes version has features or API objects that Karmada can't use.
Meeting
Regular Community Meeting:
Resources:
Contact
If you have questions, feel free to reach out to us in the following ways:
Talks and References
For blog posts, please refer to the Karmada website.
Contributing
If you’re interested in being a contributor and want to get involved in developing the Karmada code, please see CONTRIBUTING for details on submitting patches and the contribution workflow.
License
Karmada is under the Apache 2.0 license. See the LICENSE file for details.