
Ephemeral Developer Workstations: Docker Containers with Auto-Destruction

Sacha Roussakis-Notter
18 min read
Docker
Kubernetes
TypeScript

Build disposable cloud development environments with Docker, automatic TTL cleanup, and orchestration. Complete guide to spawning and destroying workstations on demand.

The Rise of Ephemeral Development

Gartner predicts that by 2026, 60% of cloud workloads will be built and deployed using cloud development environments. The era of "works on my machine" is ending.

[Area chart: Cloud Development Environment Adoption, rising from near 0% in 2022 to 60% in 2026]

Ephemeral workstations—disposable development environments that spin up instantly and destroy themselves—are transforming how teams develop software.

Companies like Spotify, Bloomberg, and Anthropic report up to 50% productivity increases after adopting ephemeral development environments.

[Flowchart: Ephemeral Dev (Click Button → Container Spawns → Ready in Seconds → Auto-Destroys) vs. Traditional Dev (Local Machine → Manual Setup → Config Drift → "Works on My Machine")]

What You'll Build

In this tutorial, we'll build a complete ephemeral workstation system from scratch. By the end, you'll have:

  • Instant provisioning via Docker containers
  • Automatic destruction after 1 hour (configurable TTL)
  • WebSocket terminal access from the browser
  • Resource limits for safe multi-tenancy
  • Session persistence options

Prerequisites: Basic knowledge of Docker, TypeScript, and Node.js. You should have Docker installed on your development machine.

Architecture Overview

Before we dive into code, let's understand how all the pieces fit together:

[Architecture diagram: Browser (Web UI, xterm.js) → API Server (Authentication, Orchestrator, WebSocket Handler) → Container Pool (Workstation 1, 2, … N); a TTL Cleanup component (Cleanup Job, Session Timers) destroys expired workstations]

How it works:

  1. User requests a workstation → The Web UI sends an authenticated request to our API
  2. Orchestrator spawns a container → Docker creates an isolated container with dev tools
  3. WebSocket bridge connects → xterm.js in the browser connects to the container's shell
  4. TTL timer starts → A cleanup job tracks when the container should be destroyed
  5. Auto-destruction → When TTL expires, the container is gracefully stopped and removed

Part 1: The Workstation Container

First, we need to create a Docker image that contains all the development tools your team needs. This image will be the foundation for every ephemeral workstation.

Understanding the Dockerfile

dockerfile
# Dockerfile.workstation
FROM ubuntu:24.04

# Avoid interactive prompts during package installation
ENV DEBIAN_FRONTEND=noninteractive

# Install essential development tools
RUN apt-get update && apt-get install -y \
    curl \
    git \
    vim \
    neovim \
    tmux \
    zsh \
    build-essential \
    python3 \
    python3-pip \
    sudo \
    openssh-server \
    && rm -rf /var/lib/apt/lists/*

# Install Node.js
RUN curl -fsSL https://deb.nodesource.com/setup_20.x | bash - \
    && apt-get install -y nodejs

# Install Docker CLI (for Docker-in-Docker workflows)
RUN curl -fsSL https://get.docker.com | sh

# Create non-root user
RUN useradd -m -s /bin/zsh developer \
    && echo "developer ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers

# Install Oh My Zsh for the user
USER developer
WORKDIR /home/developer
RUN sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)" "" --unattended

# Switch back to root for entrypoint
USER root

# Copy entrypoint script
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

EXPOSE 22 3000-3999

ENTRYPOINT ["/entrypoint.sh"]

Let's break down what each section does:

| Section | Purpose |
| --- | --- |
| `FROM ubuntu:24.04` | Uses Ubuntu as our base—stable and well-supported |
| `DEBIAN_FRONTEND=noninteractive` | Prevents apt from asking questions during build |
| Development tools | Installs git, vim, tmux, zsh—customize for your team |
| Node.js | Adds JavaScript/TypeScript runtime |
| Docker CLI | Enables Docker-in-Docker for container workflows |
| Non-root user | Creates `developer` user for security (never run as root!) |
| Oh My Zsh | Better shell experience with syntax highlighting |
| Exposed ports | SSH (22) and common dev ports (3000-3999) |

The Entrypoint Script

The entrypoint runs when the container starts. It sets up the environment and keeps the container alive:

bash
#!/bin/bash
# entrypoint.sh

# Start SSH server for remote connections
service ssh start

# Set up workspace directory with correct permissions
mkdir -p /home/developer/workspace
chown developer:developer /home/developer/workspace

# Keep container running indefinitely
# This allows the container to stay alive while we connect via exec
exec tail -f /dev/null

Why `tail -f /dev/null`? Docker containers exit when their main process exits. By running tail -f /dev/null, we keep a process running that does nothing but prevents the container from stopping. This lets us docker exec into it whenever we want.

Build and Test the Image

bash
# Build the workstation image
docker build -t workstation:latest -f Dockerfile.workstation .

# Test it manually
docker run -d --name test-workstation workstation:latest
docker exec -it test-workstation su - developer
# You should now be in a zsh shell as the developer user

# Clean up
docker rm -f test-workstation

Part 2: Container Orchestration

The orchestrator is the brain of our system. It manages the complete lifecycle of workstations—creating them, tracking them, and destroying them when their time is up.

Setting Up the Project

bash
# Initialize a new Node.js project
mkdir workstation-orchestrator && cd workstation-orchestrator
npm init -y

# Runtime dependencies (ws and node-cron are used in Parts 3 and 7)
npm install dockerode ws node-cron

# Tooling and type definitions
npm install -D typescript @types/node @types/dockerode @types/ws
npx tsc --init

Understanding the Orchestrator

Let's build the orchestrator step by step:

typescript
// src/orchestrator.ts
import Docker from 'dockerode';
import { randomUUID } from 'crypto';

// Initialize Docker client - connects to local Docker daemon
const docker = new Docker();

What is dockerode? It's a Node.js library that wraps the Docker API, letting us create, manage, and destroy containers programmatically.

Defining Our Data Types

typescript
// Configuration for creating a new workstation
interface WorkstationConfig {
  userId: string;     // Who owns this workstation
  ttlMinutes: number; // How long before auto-destruction (e.g., 60)
  memoryMB: number;   // RAM limit (e.g., 2048 for 2GB)
  cpuCores: number;   // CPU limit (e.g., 1.0)
}

// Represents a running workstation
interface Workstation {
  id: string;          // Our unique ID (UUID)
  containerId: string; // Docker's container ID
  userId: string;      // Owner
  createdAt: Date;     // When it was spawned
  expiresAt: Date;     // When it will be destroyed
  status: 'starting' | 'running' | 'stopping' | 'stopped';
}

// In-memory store for tracking workstations
// IMPORTANT: Use Redis or a database in production!
const workstations = new Map<string, Workstation>();

Why track workstations ourselves? Docker tracks containers, but we need additional metadata like TTL expiration, user ownership, and our custom status. This lets us build features like "list my workstations" or "time remaining until destruction."
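With `expiresAt` stored on each record, a feature like "time remaining until destruction" is a small pure helper. A minimal sketch (the helper name is ours; the `expiresAt` field matches the `Workstation` interface above):

```typescript
// Minutes until a workstation is destroyed, clamped at zero.
// Accepts `now` as a parameter so the logic is easy to unit-test.
function minutesRemaining(expiresAt: Date, now: Date = new Date()): number {
  const ms = expiresAt.getTime() - now.getTime();
  return Math.max(0, Math.ceil(ms / 60_000));
}
```

You could map the results of `listUserWorkstations` through this to render a countdown in the UI.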

The Spawn Function

This is the core function that creates a new workstation:

typescript
export async function spawnWorkstation(
  config: WorkstationConfig
): Promise<Workstation> {
  // Generate a unique ID for this workstation
  const id = randomUUID();

  // Calculate when this workstation should be destroyed
  const expiresAt = new Date(Date.now() + config.ttlMinutes * 60 * 1000);

  // Create the Docker container with security and resource limits
  const container = await docker.createContainer({
    Image: 'workstation:latest',
    name: `workstation-${id}`,

    HostConfig: {
      // Memory limit in bytes
      Memory: config.memoryMB * 1024 * 1024,

      // CPU limit in nano-CPUs (1 core = 1e9 units)
      NanoCpus: config.cpuCores * 1e9,

      // Auto-remove container when it stops (cleanup)
      AutoRemove: true,

      // Use bridge networking for isolation
      NetworkMode: 'bridge',

      // SECURITY: Drop all Linux capabilities by default
      CapDrop: ['ALL'],

      // Only add back the minimum required capabilities
      CapAdd: ['CHOWN', 'SETUID', 'SETGID'],

      // Prevent privilege escalation attacks
      SecurityOpt: ['no-new-privileges'],
    },

    // Labels for querying and cleanup
    Labels: {
      'workstation.id': id,
      'workstation.user': config.userId,
      'workstation.expires': expiresAt.toISOString(),
    },

    // Environment variables available inside the container
    Env: [
      `WORKSTATION_ID=${id}`,
      `USER_ID=${config.userId}`,
    ],
  });

  // Start the container
  await container.start();

  // Create our workstation record
  const workstation: Workstation = {
    id,
    containerId: container.id,
    userId: config.userId,
    createdAt: new Date(),
    expiresAt,
    status: 'running',
  };

  // Track it in our store
  workstations.set(id, workstation);

  // Schedule automatic destruction
  scheduleDestruction(id, config.ttlMinutes);

  return workstation;
}

Security Deep Dive:

| Setting | What It Does | Why It Matters |
| --- | --- | --- |
| `CapDrop: ['ALL']` | Removes all Linux capabilities | Prevents container from doing privileged operations |
| `CapAdd: ['CHOWN', 'SETUID', 'SETGID']` | Adds back only what's needed | Allows changing file ownership and switching users |
| `no-new-privileges` | Prevents privilege escalation | Even if code runs setuid, it can't gain root |
| `AutoRemove: true` | Deletes container on stop | No orphaned containers wasting disk space |
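Since the limits in `WorkstationConfig` come from user requests, it is worth clamping them server-side before they reach `createContainer`. A minimal sketch; the specific bounds here are illustrative defaults, not values from this article:

```typescript
// Clamp user-requested limits to safe server-side bounds.
// The caps (4GB, 2 cores, 4 hours) are example policy, not prescriptive.
interface Limits { memoryMB: number; cpuCores: number; ttlMinutes: number }

function clampLimits(req: Limits): Limits {
  const clamp = (v: number, lo: number, hi: number) => Math.min(hi, Math.max(lo, v));
  return {
    memoryMB: clamp(req.memoryMB, 256, 4096),  // 256MB to 4GB
    cpuCores: clamp(req.cpuCores, 0.25, 2),    // quarter core to 2 cores
    ttlMinutes: clamp(req.ttlMinutes, 5, 240), // 5 minutes to 4 hours
  };
}
```

Calling `spawnWorkstation(clampLimits(userRequest) as WorkstationConfig & Limits)`-style validation up front keeps a malicious request from reserving the whole host.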

The Destroy Function

When TTL expires or a user requests it, we need to gracefully destroy the workstation:

typescript
export async function destroyWorkstation(id: string): Promise<void> {
  const workstation = workstations.get(id);
  if (!workstation) return;

  try {
    const container = docker.getContainer(workstation.containerId);

    // Graceful stop: send SIGTERM, wait 10 seconds, then SIGKILL
    // This gives processes time to save state and exit cleanly
    await container.stop({ t: 10 });

    // Remove the container (might already be removed due to AutoRemove)
    await container.remove({ force: true });
  } catch (error) {
    // Container might already be gone - that's OK
    console.error(`Error destroying workstation ${id}:`, error);
  } finally {
    // Always remove from our tracking, even if Docker operations failed
    workstations.delete(id);
  }
}

Why graceful shutdown? If a developer is in the middle of saving a file when TTL expires, we want to give their editor time to finish. The 10-second timeout balances user experience with resource cleanup.

Scheduling Destruction

typescript
function scheduleDestruction(id: string, minutes: number): void {
  setTimeout(async () => {
    console.log(`TTL expired for workstation ${id}, destroying...`);
    await destroyWorkstation(id);
  }, minutes * 60 * 1000);
}

Important caveat: This simple setTimeout approach works for single-server deployments. For production with multiple servers, use a distributed scheduler like:

  • Redis-based job queues (Bull, BullMQ)
  • Kubernetes TTL controllers
  • AWS Step Functions with Wait states
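Whichever scheduler you choose, a related weakness of `setTimeout` is that timers die with the process. The fix is to persist `expiresAt` and recompute the remaining TTL on startup. A minimal sketch, assuming workstation records survive a crash (e.g., in Redis); `loadAll` and `destroy` are hypothetical stand-ins for your persistence layer and `destroyWorkstation`:

```typescript
// Remaining lifetime in milliseconds, clamped at zero so already-expired
// workstations are destroyed immediately on restart.
function remainingMs(expiresAt: Date, now: Date = new Date()): number {
  return Math.max(0, expiresAt.getTime() - now.getTime());
}

// Hypothetical restart hook: re-arm a timer for every surviving record.
async function rescheduleAll(
  loadAll: () => Promise<{ id: string; expiresAt: Date }[]>,
  destroy: (id: string) => Promise<void>,
): Promise<void> {
  for (const w of await loadAll()) {
    setTimeout(() => void destroy(w.id), remainingMs(w.expiresAt));
  }
}
```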

Query Functions

typescript
// Get a specific workstation by ID
export async function getWorkstation(id: string): Promise<Workstation | null> {
  return workstations.get(id) || null;
}

// List all workstations for a user
export async function listUserWorkstations(
  userId: string
): Promise<Workstation[]> {
  return Array.from(workstations.values())
    .filter(w => w.userId === userId);
}

Part 3: WebSocket Terminal Bridge

Now we need to connect the browser to the container's shell. This is where the magic happens—users type in their browser, and keystrokes flow into the container.

How the Bridge Works

[Sequence diagram: Browser connects with an auth token → WebSocket Server creates an exec instance → Docker Exec spawns /bin/zsh in the container. Then, in a bidirectional loop: the user types a command (e.g., "ls -la") → input is written to stdin → the shell executes it → output is read from stdout → the result is displayed in the browser]

The Terminal Bridge Implementation

typescript
// src/terminal-bridge.ts
import Docker from 'dockerode';
import { WebSocket } from 'ws';

const docker = new Docker();

export async function attachTerminal(
  containerId: string,
  ws: WebSocket
): Promise<void> {
  // Get reference to the container
  const container = docker.getContainer(containerId);

  // Create an "exec" instance - this is like running docker exec
  const exec = await container.exec({
    Cmd: ['/bin/zsh'],   // Command to run (the shell)
    AttachStdin: true,   // We'll send input
    AttachStdout: true,  // We want output
    AttachStderr: true,  // We want error output too
    Tty: true,           // Allocate a pseudo-TTY (enables colors, etc.)
    User: 'developer',   // Run as our non-root user
    WorkingDir: '/home/developer/workspace', // Start in workspace
  });

  // Start the exec and get a bidirectional stream
  const stream = await exec.start({
    hijack: true, // Take over the connection for raw data
    stdin: true,  // Enable stdin
    Tty: true,    // TTY mode
  });

  // PIPE 1: Container output → WebSocket → Browser
  stream.on('data', (chunk: Buffer) => {
    // Only send if WebSocket is still open
    if (ws.readyState === WebSocket.OPEN) {
      // Send as JSON message with type for client parsing
      ws.send(JSON.stringify({
        type: 'output',
        data: chunk.toString()
      }));
    }
  });

  // PIPE 2: Browser → WebSocket → Container input
  ws.on('message', (message: Buffer) => {
    try {
      const msg = JSON.parse(message.toString());

      if (msg.type === 'input') {
        // User typed something - send to container
        stream.write(msg.data);
      } else if (msg.type === 'resize') {
        // Terminal window resized - adjust PTY size
        exec.resize({ h: msg.rows, w: msg.cols });
      }
    } catch (error) {
      console.error('Error processing message:', error);
    }
  });

  // CLEANUP: When WebSocket closes, end the stream
  ws.on('close', () => {
    stream.end();
  });

  // CLEANUP: When stream ends, close WebSocket
  stream.on('end', () => {
    ws.close();
  });
}

Key concepts explained:

| Concept | What It Means |
| --- | --- |
| `docker exec` | Runs a command inside a running container |
| TTY | Pseudo-terminal—makes the shell think it's connected to a real terminal |
| `hijack: true` | Raw mode—bytes flow directly without HTTP framing |
| `resize` | When the user resizes the browser window, we tell the shell to adjust columns/rows |

Message Protocol

The browser and server communicate using a simple JSON protocol:

typescript
// Browser → Server messages
{ type: 'input', data: 'ls -la\n' }      // User typed command
{ type: 'resize', cols: 120, rows: 40 }  // Terminal resized

// Server → Browser messages
{ type: 'output', data: 'file1.txt file2.txt\n' }  // Shell output
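On either end of the socket it pays to validate messages instead of trusting `JSON.parse` output, since anything can arrive over a WebSocket. A minimal sketch of typed encode/decode helpers for the protocol above (the function names are ours):

```typescript
// Messages the browser may send, matching the protocol above.
type ClientMsg =
  | { type: 'input'; data: string }
  | { type: 'resize'; cols: number; rows: number };

// Build an input message for a typed command.
function encodeInput(data: string): string {
  return JSON.stringify({ type: 'input', data });
}

// Parse and validate a raw message; returns null on anything malformed
// so the bridge can drop it instead of crashing.
function decodeClientMsg(raw: string): ClientMsg | null {
  try {
    const msg = JSON.parse(raw);
    if (msg?.type === 'input' && typeof msg.data === 'string') return msg;
    if (msg?.type === 'resize' && typeof msg.cols === 'number' && typeof msg.rows === 'number') return msg;
    return null;
  } catch {
    return null;
  }
}
```

The `ws.on('message', ...)` handler in the bridge could call `decodeClientMsg` and ignore `null` results rather than relying on the surrounding try/catch alone.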

Part 4: Kubernetes TTL Controller

For production deployments at scale, Docker alone isn't enough. Kubernetes provides built-in TTL management that's more robust than our setTimeout approach.

Why Kubernetes for Production?

[Bar chart: Docker vs. Kubernetes for workstations, compared on auto-healing, scaling, TTL reliability, and multi-node support]

Kubernetes Job with TTL

In Kubernetes, we use Jobs instead of raw containers. Jobs have built-in TTL support:

yaml
# workstation-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: workstation-WORKSTATION_ID
  labels:
    app: workstation
    user: USER_ID
spec:
  # KEY FEATURE: Automatically delete job 1 hour after it completes
  # This handles cleanup even if our orchestrator crashes
  ttlSecondsAfterFinished: 3600

  # Don't retry on failure - just let it die
  backoffLimit: 0

  template:
    metadata:
      labels:
        app: workstation
        workstation-id: WORKSTATION_ID
    spec:
      # Never restart the container
      restartPolicy: Never

      containers:
        - name: workstation
          image: workstation:latest

          # Resource requests and limits
          resources:
            requests:
              memory: "512Mi"  # Minimum guaranteed memory
              cpu: "250m"      # 0.25 CPU cores minimum
            limits:
              memory: "2Gi"    # Maximum 2GB RAM
              cpu: "1000m"     # Maximum 1 CPU core

          ports:
            - containerPort: 22

          # Security context
          securityContext:
            runAsNonRoot: true               # Must run as non-root
            runAsUser: 1000                  # UID of developer user
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL                        # Drop all capabilities

What `ttlSecondsAfterFinished` does: Kubernetes will automatically garbage-collect the Job (and its Pod) 3600 seconds after it completes. This is cluster-level TTL—it works even if your orchestrator crashes.
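The manifest above uses `WORKSTATION_ID` and `USER_ID` as placeholders. One minimal way to fill them in before applying the manifest is plain string substitution; a real deployment might use Helm or the Kubernetes client library instead. A sketch (the function name is ours; the placeholder names match the YAML above):

```typescript
// Substitute placeholders in the Job manifest before applying it
// (e.g., piping the result to `kubectl apply -f -`).
function renderManifest(
  template: string,
  values: { workstationId: string; userId: string },
): string {
  return template
    .replace(/WORKSTATION_ID/g, values.workstationId)
    .replace(/USER_ID/g, values.userId);
}
```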

Active Deadline for Running Time Limit

The TTL above only kicks in after the job finishes. What if we want to kill it while it's still running?

yaml
spec:
  # HARD LIMIT: Kill the workstation after 1 hour regardless of state
  # This prevents runaway workstations that never finish
  activeDeadlineSeconds: 3600

  # Combined with ttlSecondsAfterFinished for full lifecycle:
  # - activeDeadlineSeconds: Forces completion at 1 hour
  # - ttlSecondsAfterFinished: Cleans up 1 hour after completion
  ttlSecondsAfterFinished: 3600

Part 5: Container Lifecycle State Machine

To build a robust system, we need to think about all the states a workstation can be in and how it transitions between them.

[State diagram: PENDING → CREATING (resources available) or FAILED (no resources); CREATING → RUNNING (container started) or FAILED (start failed); RUNNING (active session) → STOPPING (TTL expired or user request, 10-second grace period) or FAILED (health check failed); STOPPING → STOPPED (container removed)]

Implementing the State Machine

typescript
// Define all possible states
type WorkstationState =
  | 'pending'   // Request received, waiting for resources
  | 'creating'  // Container is being created
  | 'running'   // Container is running, user can connect
  | 'stopping'  // Graceful shutdown in progress
  | 'stopped'   // Container removed successfully
  | 'failed';   // Something went wrong

// Define valid state transitions
interface StateTransition {
  from: WorkstationState;
  to: WorkstationState;
  action: string; // What triggered this transition
}

// Only these transitions are allowed
const validTransitions: StateTransition[] = [
  { from: 'pending', to: 'creating', action: 'start_creation' },
  { from: 'pending', to: 'failed', action: 'resource_unavailable' },
  { from: 'creating', to: 'running', action: 'container_ready' },
  { from: 'creating', to: 'failed', action: 'creation_error' },
  { from: 'running', to: 'stopping', action: 'ttl_expired' },
  { from: 'running', to: 'stopping', action: 'user_stop' },
  { from: 'running', to: 'failed', action: 'health_check_failed' },
  { from: 'stopping', to: 'stopped', action: 'cleanup_complete' },
];

// Check if a transition is valid
function canTransition(
  current: WorkstationState,
  target: WorkstationState
): boolean {
  return validTransitions.some(
    t => t.from === current && t.to === target
  );
}

Why use a state machine? It prevents bugs like:

  • Trying to connect to a workstation that's still creating
  • Destroying a workstation twice
  • Transitioning from 'stopped' back to 'running'
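The guard can be wrapped in a transition function that either advances the state or rejects the request, so every illegal transition fails loudly at one choke point. A sketch reusing the transition table from the listing (the `transition` helper name is ours):

```typescript
type WorkstationState = 'pending' | 'creating' | 'running' | 'stopping' | 'stopped' | 'failed';

// Same allowed transitions as in the listing above.
const validTransitions: { from: WorkstationState; to: WorkstationState }[] = [
  { from: 'pending', to: 'creating' },
  { from: 'pending', to: 'failed' },
  { from: 'creating', to: 'running' },
  { from: 'creating', to: 'failed' },
  { from: 'running', to: 'stopping' },
  { from: 'running', to: 'failed' },
  { from: 'stopping', to: 'stopped' },
];

// Advance to `target` or throw - callers can't corrupt the lifecycle.
function transition(current: WorkstationState, target: WorkstationState): WorkstationState {
  const ok = validTransitions.some(t => t.from === current && t.to === target);
  if (!ok) throw new Error(`Illegal transition: ${current} -> ${target}`);
  return target;
}
```

Routing every status change through `transition` means "destroy twice" or "revive a stopped workstation" becomes an exception instead of silent corruption.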

Part 6: Resource Pool Management

Cold-starting containers takes time (typically 2-5 seconds). For instant workstation provisioning, we can pre-warm a pool of containers.

How Pooling Works

[Flowchart: New Request → Container Pool (Warm Container 1-3) → instant assignment as User A / User B workstations; when the pool runs low, new containers are created in the background to replenish it]

Pool Manager Implementation

typescript
// src/pool-manager.ts
import Docker from 'dockerode';

const docker = new Docker();

interface PoolConfig {
  minSize: number;     // Always keep this many warm containers
  maxSize: number;     // Never exceed this many total containers
  warmupCount: number; // How many to create at startup
}

class WorkstationPool {
  private available: string[] = []; // Container IDs ready to use
  private inUse: Map<string, string> = new Map(); // workstationId -> containerId

  constructor(private config: PoolConfig) {}

  // Initialize pool at startup
  async initialize(): Promise<void> {
    console.log(`Initializing pool with ${this.config.warmupCount} containers...`);

    // Create warmup containers in parallel for speed
    const promises = [];
    for (let i = 0; i < this.config.warmupCount; i++) {
      promises.push(this.createWarmContainer());
    }

    const containerIds = await Promise.all(promises);
    this.available.push(...containerIds);

    console.log(`Pool initialized with ${this.available.length} containers`);
  }

  // Get a container for a workstation (instant!)
  async acquire(workstationId: string): Promise<string> {
    // Try to get from pool first
    let containerId = this.available.pop();

    if (!containerId) {
      // Pool empty - check if we can create more
      if (this.inUse.size >= this.config.maxSize) {
        throw new Error('Pool exhausted - maximum containers reached');
      }

      // Create a new one (slower path)
      containerId = await this.createWarmContainer();
    }

    // Track that this container is now in use
    this.inUse.set(workstationId, containerId);

    // Replenish pool in background (don't await)
    void this.replenish();

    return containerId;
  }

  // Return a container when workstation is destroyed
  async release(workstationId: string): Promise<void> {
    const containerId = this.inUse.get(workstationId);
    if (!containerId) return;

    this.inUse.delete(workstationId);

    // IMPORTANT: Destroy the container, don't reuse it!
    // Reusing containers is a security risk - previous user's data might remain
    await this.destroyContainer(containerId);
  }

  // Background replenishment
  private async replenish(): Promise<void> {
    while (this.available.length < this.config.minSize) {
      const containerId = await this.createWarmContainer();
      this.available.push(containerId);
    }
  }

  private async createWarmContainer(): Promise<string> {
    const container = await docker.createContainer({
      Image: 'workstation:latest',
    });
    // Start it now so it's genuinely warm - acquire() hands out running containers
    await container.start();
    return container.id;
  }

  private async destroyContainer(containerId: string): Promise<void> {
    const container = docker.getContainer(containerId);
    await container.remove({ force: true });
  }
}

Security note: We destroy containers after use instead of recycling them. Recycling could leak data between users—one developer might see another's files or environment variables.

Part 7: Cleanup Strategies

Even with TTL and pooling, things can go wrong. Containers might get orphaned if our orchestrator crashes. We need a backup cleanup system.

Cron-Based Cleanup

This runs every 5 minutes and cleans up any expired workstations:

typescript
// src/cleanup.ts
import cron from 'node-cron';
import Docker from 'dockerode';

const docker = new Docker();

// Run every 5 minutes
cron.schedule('*/5 * * * *', async () => {
  console.log('Running workstation cleanup scan...');

  // Get all containers with our label
  const containers = await docker.listContainers({
    all: true, // Include stopped containers
    filters: {
      label: ['workstation.id'], // Only our workstations
    },
  });

  const now = new Date();
  let cleaned = 0;

  for (const containerInfo of containers) {
    // Check if this container has expired
    const expires = containerInfo.Labels['workstation.expires'];
    if (!expires) continue;

    const expiresAt = new Date(expires);
    if (now > expiresAt) {
      console.log(`Cleaning up expired workstation: ${containerInfo.Id.slice(0, 12)}`);

      const container = docker.getContainer(containerInfo.Id);

      // Try graceful stop first
      await container.stop({ t: 10 }).catch(() => {
        // Might already be stopped
      });

      // Force remove (may already be gone if AutoRemove kicked in)
      await container.remove({ force: true }).catch(() => {});
      cleaned++;
    }
  }

  console.log(`Cleanup complete. Removed ${cleaned} expired workstations.`);
});

Why labels? By labeling our containers with workstation.id and workstation.expires, the cleanup job can find and evaluate them without needing access to our orchestrator's database.
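The expiry test inside that loop is worth isolating into a pure function so it can be unit-tested: given a container's labels and the current time, decide whether it should be reaped. A small sketch (the function name is ours; the label key matches the one set in Part 2):

```typescript
// Decide from labels alone whether a container's TTL has passed.
// Returns false for containers without a parseable expiry label,
// so unrelated or malformed containers are never reaped by mistake.
function isExpired(labels: Record<string, string>, now: Date = new Date()): boolean {
  const raw = labels['workstation.expires'];
  if (!raw) return false;
  const expiresAt = new Date(raw);
  if (Number.isNaN(expiresAt.getTime())) return false;
  return now > expiresAt;
}
```

The cron handler above could then reduce to `if (isExpired(containerInfo.Labels, now)) { ...stop and remove... }`.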

Kubernetes CronJob Alternative

For Kubernetes deployments:

yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: workstation-cleanup
spec:
  schedule: "*/5 * * * *"  # Every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          serviceAccountName: workstation-cleanup  # Needs pod delete permissions
          containers:
            - name: cleanup
              image: bitnami/kubectl:latest
              command:
                - /bin/sh
                - -c
                - |
                  # Delete completed workstation pods
                  kubectl get pods -l app=workstation \
                    --field-selector=status.phase=Succeeded \
                    -o name | xargs -r kubectl delete

                  # Delete failed workstation pods
                  kubectl get pods -l app=workstation \
                    --field-selector=status.phase=Failed \
                    -o name | xargs -r kubectl delete

Part 8: Security Considerations

Security isn't optional for workstations—you're giving users shell access to your infrastructure.

Defense in Depth

| Layer | Implementation | What It Prevents |
| --- | --- | --- |
| Process | Non-root user, dropped capabilities | Kernel exploits, privilege escalation |
| Network | Isolated bridge network | Access to other containers, host network |
| Filesystem | Read-only root, tmpfs workspace | Persistent malware, disk exhaustion |
| Resources | CPU/memory limits, PID limits | Fork bombs, resource starvation |
| Time | TTL enforcement, active deadlines | Orphaned resources, crypto mining |

Production Security Configuration

typescript
const securityConfig = {
  HostConfig: {
    // RESOURCE LIMITS
    Memory: 2 * 1024 * 1024 * 1024, // 2GB max
    NanoCpus: 1 * 1e9,              // 1 CPU max
    PidsLimit: 100,                 // Max 100 processes (stops fork bombs)

    // FILESYSTEM
    ReadonlyRootfs: true, // Can't modify system files
    Tmpfs: {
      // Writable workspace in memory
      '/home/developer/workspace': 'rw,size=1g',
      '/tmp': 'rw,size=500m',
    },

    // CAPABILITIES
    CapDrop: ['ALL'],                      // Remove all capabilities
    CapAdd: ['CHOWN', 'SETUID', 'SETGID'], // Add only what's needed
    SecurityOpt: ['no-new-privileges'],    // Prevent escalation

    // NETWORK
    NetworkMode: 'workstation-network', // Isolated network
  },
};

Performance Results

Companies using ephemeral environments report significant improvements across key metrics:

[Bar chart: before/after ephemeral environments, compared on onboarding time, environment setup, "works on my machine" incidents, and cloud costs]
| Metric | Improvement |
| --- | --- |
| Onboarding time | 90% reduction (weeks to minutes) |
| Dev productivity | 50% increase |
| Cloud costs | 50% reduction |
| Deployment frequency | 10x increase |

Source: Coder State of Development Environments 2025

Putting It All Together

Here's how all the pieces connect:

[Flowchart: Entry Points (REST API, WebSocket Server) → Core Services (Orchestrator, Pool Manager, Session Manager) → Infrastructure (Docker / Kubernetes, Cleanup Job)]

Request flow:

  1. User clicks "New Workstation" → REST API
  2. API calls Orchestrator → Orchestrator gets container from Pool
  3. Pool provides pre-warmed container → Orchestrator configures and starts it
  4. User connects terminal → WebSocket Server
  5. WebSocket bridges to container shell
  6. TTL expires → Cleanup destroys container

Brisbane Cloud Development Services

At Buun Group, we help Queensland businesses build cloud development infrastructure:

  • Custom workstation images for your tech stack
  • Kubernetes orchestration with auto-scaling
  • Browser-based IDEs with integrated terminals
  • Secure multi-tenant environments

We've deployed ephemeral workstation systems for teams across Australia.

Ready to modernize your dev environments?

Topics

ephemeral development environment, Docker workstation, cloud development environment, container orchestration, developer workstation, disposable dev environment, Kubernetes TTL, Docker auto cleanup
