iPad Code-Server Owns Its Namespace

The code-server pod can now create resources in its own namespace

The new capability is running additional pods and services (plus ingresses and statefulsets) within the development namespace using the developer service account. This limited-access role can't touch anything outside the namespace, but it can start containers and reach them through internal DNS and services.

This sounds like the most contrived development setup ever

Yes, admittedly it looks a bit odd from the outside. I worked my way into it incrementally and have been very happy with it; there's more background in this article.

The biggest benefit is resilience to the frequent disruptions of kids and pets and life. I can move to my iPad or even my phone and continue what I was doing. Normally, crafting a git commit message would take too long, and just walking away means I have to go back to where I was, rediscover my progress, and then craft that message. With code-server, all I do is pull up the URL from another device and the in-progress edits are highlighted, the files are open, and the commit message draft is present, if gross.

Also, about half of what I use this for is simply getting into a terminal and running some commands. Most of the configuration edits I make happen in vi, in a terminal, inside the code-server pod.

What’s new?

Start with the code-server in Kubernetes article and then add the following:

  1. The code-server build tools container has kubectl and Helm installed (as of about a month ago)
  2. The code-server.yml file includes some new RBAC resources (sketched just below)
  3. Instructions for making a kubeconfig out of the RBAC token and cert
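
For reference, here is a minimal sketch of what that RBAC addition can look like. The role name, namespace, and exact resource list here are assumptions for illustration; the real definitions live in code-server.yml in the repo:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: dev
rules:
- apiGroups: ["", "apps", "networking.k8s.io"]
  resources: ["pods", "services", "configmaps", "statefulsets", "ingresses"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: developer
subjects:
- kind: ServiceAccount
  name: developer
  namespace: dev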

The readme shows how to empower the code-server pod with namespace configuration powers. The key is putting that configuration into the ~/.kube/config file so you can run kubectl from anywhere and it'll pick up that local config.
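
A minimal sketch of assembling that file from inside the pod, assuming the mounted service account token is the credential and that "developer" and "dev" are the account and namespace names (adjust to match your RBAC setup):

kubectl config set-cluster in-cluster \
  --server=https://kubernetes.default.svc \
  --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  --embed-certs=true
kubectl config set-credentials developer \
  --token="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
kubectl config set-context dev \
  --cluster=in-cluster --user=developer --namespace=dev
kubectl config use-context dev

kubectl config writes to ~/.kube/config by default, so once this runs, any terminal in the pod picks up the context.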

If you have more Kubernetes clusters to manage, this can simply be one of your available clusters/contexts.

Give me an example of how this can help

Say your web application needs a database backend. You don't want to install a database directly on the code-server pod, but you do want something production-like.

Now you can spin up a postgres pod, dump data into it, and reset it across branch changes.

To do this, I created a directory ~/postgres. In there, I made 3 files:

  • configmap.yml
  • service.yml
  • statefulset.yml

The contents of each:

# configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config-demo
  labels:
    app: postgres
data:
  POSTGRES_DB: demopostgresdb
  POSTGRES_USER: demopostgresadmin
  POSTGRES_PASSWORD: demopostgrespwd
# service.yml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  ports:
  - port: 5432
    name: postgres
  clusterIP: None
  selector:
    app: postgres
# statefulset.yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-demo
spec:
  serviceName: "postgres"
  replicas: 2
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:latest
        envFrom:
          - configMapRef:
              name: postgres-config-demo
        ports:
        - containerPort: 5432
          name: postgredb
        volumeMounts:
        - name: postgredb
          mountPath: /var/lib/postgresql/data
          subPath: postgres
  volumeClaimTemplates:
  - metadata:
      name: postgredb
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: openebs-sc-statefulset
      resources:
        requests:
          storage: 3Gi

I put some of my own configuration in there, such as the storageClassName, but for the most part it was lifted directly from a BMC blog post.

After applying all of those files, kubectl get configmap,svc,statefulset,pod shows:

NAME                             DATA   AGE
configmap/postgres-config-demo   3      7m30s

NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/code-server        ClusterIP   10.233.38.239   <none>        80/TCP     23d
service/code-server-hugo   ClusterIP   10.233.59.166   <none>        80/TCP     23d
service/postgres           ClusterIP   None            <none>        5432/TCP   6m57s

NAME                             READY   AGE
statefulset.apps/postgres-demo   2/2     4m53s

NAME                              READY   STATUS    RESTARTS   AGE
pod/code-server-598cb4c9d-jdnfm   1/1     Running   0          4h56m
pod/postgres-demo-0               1/1     Running   0          4m53s
pod/postgres-demo-1               1/1     Running   0          4m12s

You can see some of my code-server stuff mingled in with the database stuff. Keeping the spec files around makes deleting it all easier when you're done.
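
Since the specs live in ~/postgres, cleanup is one command against the directory. The only gotcha is that the PVCs created by the volumeClaimTemplates survive deletion of the StatefulSet and need removing separately:

kubectl delete -f ~/postgres/
kubectl delete pvc postgredb-postgres-demo-0 postgredb-postgres-demo-1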

Since we're inside the namespace, the bare service name resolves in DNS. psql -h postgres -U demopostgresadmin -p 5432 demopostgresdb will prompt for the super strong password of demopostgrespwd, and then you're in and can make tables and do whatever you like.

Password for user demopostgresadmin: 
psql (11.7 (Debian 11.7-0+deb10u1), server 12.3 (Debian 12.3-1.pgdg100+1))
WARNING: psql major version 11, server major version 12.
         Some psql features might not work.
Type "help" for help.

demopostgresdb=# \list
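
From here, the "dump data into it and reset it across branch changes" part is just more psql. A sketch, where seed.sql is a hypothetical dump file:

# load seed data (seed.sql is a placeholder for your own dump)
psql -h postgres -U demopostgresadmin -p 5432 demopostgresdb < seed.sql
# wipe everything for a fresh start on another branch
psql -h postgres -U demopostgresadmin -p 5432 demopostgresdb \
  -c 'DROP SCHEMA public CASCADE; CREATE SCHEMA public;'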

Obviously databases aren't the only thing that can run as a service like this. Additionally, scaling the set down to zero pods frees its compute resources while preserving the volume claims for later use.
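
Scaling down and back up is a one-liner each way, and the claims reattach when the pods return:

kubectl scale statefulset postgres-demo --replicas=0
# later, pick the data back up where it left off
kubectl scale statefulset postgres-demo --replicas=1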

Why 2 replicas?

I didn't notice there were 2 in the stolen YAML. Set it to 1 or whatever you want; resilience isn't really a goal here.

How were you running psql in the container? It's not there!

In the container, just run sudo apt-get update and then sudo apt-get install postgresql-client to get psql. I don't actually have that use case, so I'll restart the pod at some point and it'll clean itself up.

If you will need this as part of your use case, add that postgresql-client installation step to your Dockerfile so it'll always be there when you need it.
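
Something like this in the image build would do it, assuming a Debian-based image like the one used here:

RUN apt-get update \
 && apt-get install -y --no-install-recommends postgresql-client \
 && rm -rf /var/lib/apt/lists/*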

What’s next?

Probably some sort of shared storage situation. I could set it up with NFS right now, but I'm curious whether there's a better way to do it with a read-write-many or object storage solution.

I want to have a directory on the code-server pod that I can copy files to and have them immediately reflected in the mounted locations on the other pods. This is done for GCK using the docker volume approach. Once this is easily solved, it may lead to a GKK for working directly in Kubernetes.
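
If the cluster gains an RWX-capable provisioner, the claim itself is simple. A sketch, where the storageClassName is an assumption standing in for whatever provisioner ends up doing the work:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-workspace
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: nfs-client
  resources:
    requests:
      storage: 5Gi

Mount shared-workspace in the code-server pod and in any other pod that should see the files, and the copy-and-reflect behavior falls out of the shared volume.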