Provider configuration
To take advantage of the auto-scaling and Dask distributed-computing capabilities, Nebari can be deployed on a handful of the most commonly used cloud providers. Nebari utilizes many of the resources these cloud providers have to offer; however, the Kubernetes engine (or service) is at its core. Each cloud provider configures Kubernetes slightly differently, but fear not: all of this is handled by Nebari.
The provider section of the configuration file allows you to configure the cloud provider that you are deploying to, including the region, instance types, and other cloud-specific configurations.
Select the provider of your choice:
- GCP
- AWS
- Azure
- DigitalOcean
- Existing Kubernetes clusters
- Local (testing)
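As a quick orientation, the top-level provider key in nebari-config.yaml determines which of the provider sections described below is used. The sketch below assumes the usual shorthand values (gcp, aws, azure, do, existing, local); check your Nebari version's documentation for the exact spelling:

### Provider configuration ###
provider: gcp # selects the google_cloud_platform section below
google_cloud_platform:
  project: test-test-test
  region: us-central1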
Google Cloud has the best support for Nebari and is a great default choice for a production deployment. It allows auto-scaling to zero within the node group. There are no major restrictions.
To see available instance types refer to GCP docs.
note
By default the GKE release channel is set to UNSPECIFIED to prevent the cluster from auto-updating. This has the advantage of ensuring that the Kubernetes version doesn't upgrade and potentially introduce breaking changes. If you'd prefer your cluster's Kubernetes version to update automatically, you can specify a release channel; the options are stable, regular, or rapid.
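If you do want automatic upgrades, a minimal sketch of pinning a release channel (shown here with stable) looks like:

google_cloud_platform:
  release_channel: stable # or: regular, rapid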
### Provider configuration ###
google_cloud_platform:
  project: test-test-test
  region: us-central1
  kubernetes_version: "1.24.11-gke.1000"
  release_channel: "UNSPECIFIED" # default is hidden
  node_groups:
    general:
      instance: n1-standard-4
      min_nodes: 1
      max_nodes: 1
    user:
      instance: n1-standard-2
      min_nodes: 0
      max_nodes: 5
    worker:
      instance: n1-standard-2
      min_nodes: 0
      max_nodes: 5
Amazon Web Services has similar support to DigitalOcean and doesn't allow auto-scaling below 1 node.
Consult AWS instance types for possible options.
### Provider configuration ###
amazon_web_services:
  region: us-west-2
  kubernetes_version: "1.18"
  node_groups:
    general:
      instance: "m5.xlarge"
      min_nodes: 1
      max_nodes: 1
    user:
      instance: "m5.large"
      min_nodes: 1
      max_nodes: 5
    worker:
      instance: "m5.large"
      min_nodes: 1
      max_nodes: 5
Permissions boundary (Optional)
Permissions boundaries in AWS are a powerful feature designed to control the maximum permissions a user or role can have within an AWS Identity and Access Management (IAM) policy. By setting a permissions boundary, administrators can enforce restrictions on the extent of permissions that can be granted, even if policies would otherwise allow broader access.
Nebari supports setting a permissions boundary at deploy time, which is applied to all IAM policies created by Nebari. Here is an example of how to set a permissions boundary in nebari-config.yaml:
amazon_web_services:
  # the ARN of the permissions boundary policy
  permissions_boundary: arn:aws:iam::01234567890:policy/<permissions-boundary-policy-name>
Microsoft Azure has similar settings for Kubernetes version, region, and instance names, using Azure's available values of course.
Azure also requires a field named storage_account_postfix, which will have been generated by nebari init. This allows Nebari to create a Storage Account that should be globally unique.
### Provider configuration ###
azure:
  region: Central US
  kubernetes_version: 1.19.11
  node_groups:
    general:
      instance: Standard_D4_v3
      min_nodes: 1
      max_nodes: 1
    user:
      instance: Standard_D2_v2
      min_nodes: 0
      max_nodes: 5
    worker:
      instance: Standard_D2_v2
      min_nodes: 0
      max_nodes: 5
  storage_account_postfix: t65ft6q5
DigitalOcean has a restriction with autoscaling: the minimum number of nodes allowed is one (min_nodes: 1). Even so, it is by far the least expensive provider, even accounting for spot/preemptible instances.
In addition, DigitalOcean doesn't have accelerator/GPU support.
DigitalOcean is a good choice for trying out Nebari, but we recommend selecting a different provider for your production Nebari deployment.
To see available instance types, refer to DigitalOcean Instance Types. Additionally, the DigitalOcean CLI doctl has support for listing droplets.
digital_ocean:
  region: nyc3
  kubernetes_version: "1.21.10-do.0"
  node_groups:
    general:
      instance: "g-4vcpu-16gb"
      min_nodes: 1
      max_nodes: 1
    user:
      instance: "g-2vcpu-8gb"
      min_nodes: 1
      max_nodes: 5
    worker:
      instance: "g-2vcpu-8gb"
      min_nodes: 1
      max_nodes: 5
Originally designed for Nebari deployments on a "local" minikube cluster, this feature has since expanded to allow users to deploy Nebari to any existing Kubernetes cluster. The default options for an existing deployment are still set to deploy to a minikube cluster.
Deploying to an existing Kubernetes cluster has different options than the cloud providers. kube_context is an optional key that can be used to deploy to a non-default context.
The default node selectors allow pods to be scheduled anywhere. This can be adjusted to schedule pods on nodes with specific labels, providing functionality similar to node groups in the cloud (see the sketch after the example below).
### Provider configuration ###
existing:
  kube_context: minikube
  node_selectors:
    general:
      key: kubernetes.io/os
      value: linux
    user:
      key: kubernetes.io/os
      value: linux
    worker:
      key: kubernetes.io/os
      value: linux
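If your cluster's nodes already carry custom labels, the selectors can target those instead of the generic OS label. The node-role key and its general/user/worker values below are hypothetical labels you would have applied to your nodes beforehand:

existing:
  node_selectors:
    general:
      key: node-role # hypothetical label applied to your nodes
      value: general
    user:
      key: node-role
      value: user
    worker:
      key: node-role
      value: worker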
Local deployment is intended for Nebari deployments on a "local" cluster created and managed by Kind. It is great for experimentation and development.
warning
Currently, local mode is only supported for Linux-based operating systems.
### Provider configuration ###
local:
  kube_context: minikube
  node_selectors:
    general:
      key: kubernetes.io/os
      value: linux
    user:
      key: kubernetes.io/os
      value: linux
    worker:
      key: kubernetes.io/os
      value: linux
note
Many of the cloud providers regularly update their internal Kubernetes versions, so if you wish to specify a particular version, please check the relevant provider's documentation: DigitalOcean; Google Cloud Platform; Amazon Web Services; Microsoft Azure. This is completely optional, as Nebari will, by default, select the most recent version available for your preferred cloud provider.