K3s Guide: Run Apps On Local Kubernetes With K3d

by Kenji Nakamura

Hey guys! Ever wanted to run your applications on Kubernetes locally but found the setup a bit daunting? Well, you're in the right place! This guide will walk you through deploying your app on a local Kubernetes cluster using k3d, a lightweight Kubernetes distribution. We'll cover everything from setting up your environment to deploying your application, ensuring a smooth and efficient workflow. Let's dive in!

🎯 Goal: Local Kubernetes Deployment

The main goal here is to get the CEFR Speaking Exam Simulator up and running on a local Kubernetes cluster using k3d. We're ditching Docker Desktop in favor of a more streamlined approach that fits our organizational policies: k3d as our Kubernetes runtime, backed by the Docker CLI on Colima. This setup lets us manage our deployments effectively without the overhead of Docker Desktop or a remote cluster. Plus, we're committed to keeping our secrets safe, so no keys will be committed to the git repository.

✅ Decision: Why k3d?

We've chosen k3d as our local Kubernetes option for several compelling reasons. First and foremost, it offers a lightweight and efficient way to run Kubernetes clusters on Docker. This is crucial because our organizational policies restrict the use of Docker Desktop, making k3d a perfect alternative. By leveraging the Docker CLI on Colima, we can create and manage Kubernetes clusters with ease.

k3d simplifies the deployment process, allowing us to focus on developing and testing our applications rather than wrestling with complex configurations. It provides a seamless experience for local development and testing, ensuring that our applications behave as expected in a production-like environment. Furthermore, k3d integrates well with other tools in our ecosystem, such as kubectl, making it a natural fit for our workflow.

🔧 Prerequisites: Setting Up Your Environment

Before we jump into the implementation, let's make sure you have everything you need. Here’s a checklist of prerequisites:

  1. Colima running with Docker CLI: Ensure Colima is up and running with Docker CLI configured. This is our foundation for containerization.
  2. kubectl installed: kubectl is the Kubernetes command-line tool, essential for interacting with your cluster.
  3. k3d installed: This is our tool of choice for running local Kubernetes clusters.
  4. Local image built: Make sure you've built your Docker image with docker build -t speak-check:latest . (run from the project root; note the trailing dot). This image will be deployed to our cluster.

With these prerequisites in place, you'll be ready to tackle the deployment process.
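If you'd like to sanity-check your environment before starting, a small shell helper can confirm each tool is on your PATH. This script is a convenience sketch of ours, not part of the project:

```shell
#!/bin/sh
# need TOOL: report whether TOOL is available; returns non-zero if it is absent.
need() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1"
    return 1
  fi
}

# Check the prerequisites from the list above:
# need docker && need kubectl && need k3d
```

Run the commented line at the bottom and fix anything reported as missing before moving on.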

🛠️ Implementation (k3d only): Step-by-Step Deployment

Alright, let's get our hands dirty and deploy the application! Here’s a step-by-step guide to setting up your local Kubernetes cluster with k3d and deploying your app.

1) Create Cluster and Expose Ports

First, we need to create a k3d cluster and expose the necessary ports. This is done using the following command:

k3d cluster create speak-check \
  -p "8501:30080@loadbalancer" \
  -p "27018:30017@loadbalancer"

This command creates a cluster named speak-check and maps two host ports through the k3d load balancer: host port 8501 to NodePort 30080 (the application) and host port 27018 to NodePort 30017 (MongoDB).
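For the 8501 → 30080 hop to work, the app's Service has to pin that NodePort. Here's a minimal sketch of what k8s/app-service.yaml might look like; the Service name and labels are assumptions based on this guide, so check them against your own manifests:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: speak-check-app        # assumed Service name
  namespace: speak-check
spec:
  type: NodePort
  selector:
    app: speak-check-app       # must match the Deployment's pod labels
  ports:
    - port: 8501               # in-cluster port
      targetPort: 8501         # container port (Streamlit's default)
      nodePort: 30080          # matches the -p "8501:30080@loadbalancer" mapping
```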

2) Import Local Image into the Cluster

Next, we need to import the local Docker image into the k3d cluster. This ensures that the image is available within the cluster for deployment. Use the following command:

k3d image import speak-check:latest -c speak-check

This command imports the speak-check:latest image into the speak-check cluster.
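One gotcha worth calling out: Kubernetes defaults imagePullPolicy to Always for :latest tags, which makes the kubelet try to pull speak-check:latest from a registry and ignore the copy you just imported. For locally imported images, the container spec should look roughly like this (a fragment; the container name is assumed):

```yaml
# Fragment of k8s/app-deployment.yaml (pod template -> containers)
containers:
  - name: app                      # assumed container name
    image: speak-check:latest
    imagePullPolicy: IfNotPresent  # use the image imported by k3d; don't pull
```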

3) Prepare Namespace, Config, and Secrets

Now, let's prepare the Kubernetes namespace, configuration, and secrets. We'll create a dedicated namespace for our application and apply the necessary configurations and secrets. Remember, we're keeping our secrets secure by not committing them to the repository.

kubectl create ns speak-check
kubectl -n speak-check apply -f k8s/configmap.yaml -f k8s/secret.yaml

This creates a namespace called speak-check and applies the configurations from k8s/configmap.yaml and the secret manifest from k8s/secret.yaml. The actual secret values will be created at deploy time, ensuring they are not stored in the repository.
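As a reference, here's roughly what those two manifests could contain. The resource names and the MONGODB_DB value are examples drawn from this guide, not a definitive schema:

```yaml
# k8s/configmap.yaml — non-sensitive settings only
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config             # assumed ConfigMap name
  namespace: speak-check
data:
  MONGODB_DB: speak_check      # example value
---
# k8s/secret.yaml — intentionally empty; real data is created at deploy time
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
  namespace: speak-check
type: Opaque
# no data: section is ever committed to the repo
```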

4) Deploy MongoDB and the App

Finally, we're ready to deploy MongoDB and the application itself. This involves applying the Kubernetes manifests for MongoDB and the application.

kubectl -n speak-check apply -f k8s/mongo-statefulset.yaml -f k8s/mongo-service.yaml \
  -f k8s/app-deployment.yaml -f k8s/app-service.yaml
kubectl -n speak-check rollout status deploy/speak-check-app

This command applies the manifests for MongoDB (StatefulSet and Service) and the application (Deployment and Service). It then checks the rollout status of the speak-check-app deployment to ensure everything is running smoothly.

🗂️ Files to Add (under k8s/): Manifests and Configuration

To make this all work, you'll need to add the following files under the k8s/ directory in your project. These files define the Kubernetes resources needed for our deployment.

  • k8s/namespace.yaml—Namespace speak-check
  • k8s/configmap.yaml—Non-sensitive config (e.g., MONGODB_DB)
  • k8s/secret.yaml—Empty Secret manifest (reference only; create data at deploy time)
  • k8s/mongo-statefulset.yaml—MongoDB with PVC at /data/db
  • k8s/mongo-service.yaml—MongoDB Service on 27017 (NodePort 30017 target for LB)
  • k8s/app-deployment.yaml—Streamlit app Deployment
  • k8s/app-service.yaml—Service exposing port 8501 (NodePort 30080 target for LB)
  • docs/k8s.md—Setup, deploy, troubleshoot guide

These files are the backbone of our deployment, defining everything from the namespace and configurations to the MongoDB StatefulSet and application deployment. Make sure to create these files and populate them with the appropriate configurations for your application.
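The MongoDB StatefulSet is the most involved of these manifests. Here's a trimmed-down sketch assuming a single replica and a small PVC; the image tag, storage size, and names are placeholders:

```yaml
# k8s/mongo-statefulset.yaml (abridged sketch)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  namespace: speak-check
spec:
  serviceName: mongo            # name of the MongoDB Service (assumed)
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:7        # placeholder tag
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: data
              mountPath: /data/db   # MongoDB's data directory
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi        # placeholder size
```

The volumeClaimTemplates section is what gives us persistence across pod restarts: the PVC outlives the pod and is re-attached when it comes back.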

🔐 Secrets Handling (do not commit keys): Keeping Your Data Safe

Security is paramount, especially when dealing with sensitive information like API keys and tokens. We're taking a proactive approach to ensure that no secrets are committed to our repository. Instead, we'll create them at deploy time from our local environment.

Here’s how we do it:

kubectl -n speak-check create secret generic app-secrets \
  --from-literal=OPENAI_API_KEY="$OPENAI_API_KEY" \
  --from-literal=GITHUB_PERSONAL_ACCESS_TOKEN="$GITHUB_PERSONAL_ACCESS_TOKEN"

This command creates a Kubernetes secret named app-secrets in the speak-check namespace. It populates the secret with the values of the OPENAI_API_KEY and GITHUB_PERSONAL_ACCESS_TOKEN environment variables. By using --from-literal, we ensure that the secret values are read directly from the environment variables, preventing them from being stored in any configuration files.
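For the app to actually see these values, the Deployment has to reference the Secret. One common pattern is envFrom, which injects every key in the Secret (and ConfigMap) as an environment variable. A fragment, with the container and ConfigMap names assumed:

```yaml
# Fragment of k8s/app-deployment.yaml (pod template -> containers)
containers:
  - name: app
    image: speak-check:latest
    envFrom:
      - secretRef:
          name: app-secrets   # the Secret created above
      - configMapRef:
          name: app-config    # assumed ConfigMap name for non-sensitive settings
```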

🌐 Access: Connecting to Your Deployed App

Now that our application is deployed, let's see how we can access it. Here are the access points you'll need:

  • App: http://localhost:8501 (via k3d LB → NodePort 30080)
  • Health: curl http://localhost:8501/_stcore/health
  • MongoDB (local tools): localhost:27018 → cluster 27017

The application is accessible at http://localhost:8501 because the k3d load balancer forwards traffic from port 8501 on your local machine to NodePort 30080 in the cluster. You can check the application's health with a curl request to http://localhost:8501/_stcore/health. For MongoDB, connect with local tools via localhost:27018, which forwards to the cluster's internal port 27017.
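Pods can take a little while to become Ready, so the first health check may fail. A small retry helper smooths that over; the helper name is ours, not part of any tool:

```shell
#!/bin/sh
# retry N CMD...: run CMD up to N times, pausing one second between attempts.
retry() {
  attempts=$1
  shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example: wait up to ~30s for the Streamlit health endpoint to respond.
# retry 30 curl -fsS http://localhost:8501/_stcore/health
```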

✅ Acceptance Criteria: Ensuring a Successful Deployment

To ensure our deployment is successful, we have a set of acceptance criteria that need to be met. These criteria help us verify that our application is running correctly and that all components are functioning as expected.

  • App reachable at http://localhost:8501: You should be able to access the application in your browser.
  • Both pods Ready; health endpoint returns ok: Verify that all pods are in the Ready state and that the health endpoint returns a 200 OK status.
  • MongoDB data persists across pod restarts (PVC): Ensure that data in MongoDB persists even if the pod restarts, thanks to the Persistent Volume Claim (PVC).
  • Works with Colima + k3d (no Docker Desktop): Confirm that the deployment works seamlessly with Colima and k3d, without requiring Docker Desktop.
  • No secrets are committed to the repo: Double-check that no sensitive information is stored in the repository.
  • Docs include full setup/teardown & troubleshooting: Ensure that the documentation covers the entire setup, teardown process, and troubleshooting steps.

📝 Notes: Why k3d is the Right Choice

In summary, k3d provides the simplest Docker Desktop-free local Kubernetes flow, aligning with our current tooling (Colima + Docker CLI). It offers a streamlined, efficient way to develop and test our applications in a Kubernetes environment while adhering to our organizational policies regarding Docker Desktop. Its ease of use and tight integration with kubectl and the rest of our tools make it the ideal choice for local Kubernetes deployments.

So there you have it! Running your app on local Kubernetes with k3d is totally achievable. Follow these steps, and you'll be deploying like a pro in no time. Happy coding, guys!