Requirements
- Nginx Ingress Controller is not in use
- Out of the box, the Nginx Ingress Controller only partially works with Capsule8: the Console can receive alerts, but cannot serve policies to sensors. See the bottom of this page for a configuration change that resolves this and makes it fully functional.
- kubectl v1.22 or higher
- The linked manifest file was written for, and tested with, client and server versions 1.22. Earlier Kubernetes versions may require changes to the manifest.
- eksctl (optional)
- Google Cloud Platform service account key file which will be provided to you by Capsule8
- Save the key file provided to you by Capsule8 to ~/.capsule8/service-account.json (on the machine where you're running kubectl). Make a note of the email address in that file.
- Linux distribution support and hardware requirements can be found here.
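The v1.22 minimum can be confirmed by inspecting the kubectl client version string. A minimal sketch, using a placeholder value in place of the real output of kubectl version --client:

```shell
# Check that a kubectl client version meets the v1.22 minimum.
# The value below is a placeholder; in practice, take it from:
#   kubectl version --client -o yaml   (look for gitVersion)
version="v1.22.4"

major="$(echo "$version" | sed 's/^v//' | cut -d. -f1)"
minor="$(echo "$version" | cut -d. -f2)"

if [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge 22 ]; }; then
  echo "kubectl version OK: $version"
else
  echo "kubectl $version is too old; v1.22 or higher is required"
fi
```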
The Kubernetes manifest YAML file will install the Capsule8 Console, the Capsule8 Sensor, and a Postgres database for evaluation purposes.
The sensor will be configured to send alerts to the Console.
The sensor will be configured to download detection configuration from the Console.
Users who do not want to use the Console should instead follow Installing the Sensor on Kubernetes (Without Console).
Limitations
This manifest does not install an Ingress, so the Console will not be permanently reachable from outside the cluster after these steps. Network configurations vary widely by use case, and we can't guess one that fits your environment. Instead, we'll use kubectl port-forward to connect temporarily and verify functionality.
Permanent options for routing to the Console include pointing your own Ingress at the capsule8-console service or changing the service to the NodePort type.
This manifest hardcodes credentials. When you're ready to use the Console and Sensor in production, be sure to modify your ConfigMaps to change the following hardcoded demo credentials:
capsule8-console.yaml
- console.auth_session_key: must be 64 characters; recommend sha256sum <(head -c1024 /dev/urandom) to generate
- console.database
capsule8-sensor.yaml
- Authorization header for sending alerts (found in a webhook block under alert_output)
- Authorization header for downloading policy (found in policy_input)
- These headers must be regenerated to match the new console.auth_session_key. Regenerate valid JWTs with e.g. kubectl exec capsule8-console-66f5b999d9-qszrx -- /capsule8-console generate-token --host
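The recommended sha256sum invocation produces a 64-character hex string. A quick sketch of generating such a key and checking its length (the variable name is illustrative):

```shell
# Generate a 64-character hex value suitable for console.auth_session_key,
# following the sha256sum recommendation above.
NEW_SESSION_KEY="$(head -c1024 /dev/urandom | sha256sum | awk '{print $1}')"

echo "$NEW_SESSION_KEY"
echo "${#NEW_SESSION_KEY}"   # length check: should print 64
```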
Installing Capsule8 on Kubernetes
Perform the following steps:
1. Initial Setup Verification
Before starting, verify that kubectl
is configured to point to your target installation cluster
# Verify the correct cluster is selected
$ kubectl config current-context
2. Create Kubernetes Secret
Set an environment variable in the terminal that you plan on using:
$ export CAPSULE8_SERVICE_ACCOUNT_EMAIL=${SERVICE_ACCOUNT_EMAIL}
Replace ${SERVICE_ACCOUNT_EMAIL} with the email address from your service account key file, then run the following kubectl command to create a new Kubernetes Secret. This Secret authenticates your kubelet so that it can pull images from our private container registry.
$ kubectl create secret docker-registry capsule8-registry-secret \
    --docker-username=_json_key \
    --docker-server=https://us.gcr.io \
    --docker-email=$CAPSULE8_SERVICE_ACCOUNT_EMAIL \
    --docker-password="$(cat ~/.capsule8/service-account.json)"
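If you'd rather not copy the email address out of the key file by hand, it can be extracted programmatically. A sketch, assuming the key file follows the standard GCP service-account format (which includes a client_email field); the file written here is a stand-in for your real key at ~/.capsule8/service-account.json:

```shell
# Stand-in key file for illustration only
cat > /tmp/service-account.json <<'EOF'
{"type": "service_account", "client_email": "demo@example.iam.gserviceaccount.com"}
EOF

# Extract client_email with python3's json module (jq would also work)
export CAPSULE8_SERVICE_ACCOUNT_EMAIL="$(python3 -c \
  'import json,sys; print(json.load(open(sys.argv[1]))["client_email"])' \
  /tmp/service-account.json)"

echo "$CAPSULE8_SERVICE_ACCOUNT_EMAIL"
```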
Now run the following command to confirm your new Secret exists:
$ kubectl get secrets
You should see capsule8-registry-secret listed. (Other container registries can also be used.)
Note: Access is granted specifically for our manifests, which reference the docker-registry Secret capsule8-registry-secret.
3. Apply the Manifest
Download a copy of the manifest provided by Capsule8 and apply it:
# Apply the manifest to create the initial resources
$ kubectl apply -f https://capsule8-assets.s3.amazonaws.com/latest/capsule8-manifests.yaml

# Wait for the pods to come online
$ kubectl get pods -w

# kubectl port-forward is a quick way to verify functionality,
# but is typically not suitable for production
$ kubectl port-forward service/capsule8-console 8080:3030
Open http://localhost:8080 in a browser to access the Console.
Notes:
- For production deployments, it is recommended to use a managed database service (e.g. AWS RDS) instead of the bundled Postgres.
- If you do not already have a test cluster, Capsule8 recommends using eksctl to spin up an EKS cluster, which can be as simple as running $ eksctl create cluster. For more information on eksctl, see the official AWS documentation.
Additional Nginx Ingress Controller configuration
- Nginx's default configuration strips headers containing underscores.
- Sensors requesting detection configuration from the Console need to use headers with underscores. When these headers are missing, Console fails to identify the sensor.
- To stop stripping the headers, ensure your nginx ingress controller's configmap contains at least the two items below.
- This configmap likely already exists on your cluster; take care not to lose any existing customizations.
- The manifest below assumes the default name of ingress-nginx-controller
- Custom nginx ingress controller installs may have configured a non-default-named configmap.
apiVersion: v1
data:
  enable-underscores-in-headers: "true"
  ignore-invalid-headers: "false"
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
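If you'd rather not edit the ConfigMap by hand, the same two settings can be expressed as a merge patch, which kubectl patch applies without replacing the rest of the ConfigMap (preserving existing customizations). A sketch; the ingress-nginx namespace is an assumption, so adjust it to match your install:

```shell
# The two required settings as a JSON merge patch
PATCH='{"data":{"enable-underscores-in-headers":"true","ignore-invalid-headers":"false"}}'

# Sanity-check that the patch is valid JSON before applying it
echo "$PATCH" | python3 -m json.tool

# To apply against a live cluster (assumes the ingress-nginx namespace):
#   kubectl -n ingress-nginx patch configmap ingress-nginx-controller \
#     --type merge -p "$PATCH"
```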