A Local Dev Services Dashboard in Python
Working on five projects simultaneously means five development servers scattered across terminals. Port 4000 runs the blog—or was it 4001? Port 5173 is Vite, port 8070 is Python, and port 8123 is… something. You tab through browser windows guessing URLs, grep through ps aux output, and run lsof -i :8123 to figure out what’s listening where.
A single-page Python dashboard solves this: one localhost:9000 URL shows all services, their ports, running status, and start/stop controls. Click a button to launch your Vite server, another to stop the stale Jekyll process, a third to check which Docker containers are up. Auto-refresh every 10 seconds keeps status current without manual polling.
This post walks through the implementation: JSON service definitions for declarative config, Python HTTP server with embedded HTML dashboard, process detection via ss and pgrep, Docker Compose integration for containerized stacks, and nohup for background service launching. You’ll consolidate all local dev services into one dashboard.
Problem Statement
Multiple concurrent projects result in multiple development servers:
Port 4000 - Jekyll blog (or 4001?)
Port 5173 - Vite project
Port 8070 - Python application
Port 8123 - Unknown service
This leads to multiple browser tabs, terminal windows, and frequent lsof -i :PORT commands to determine service locations.
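The dashboard replaces this guesswork with a programmatic check. Under the hood it shells out to ss (shown later), but the idea can be sketched in pure Python with a socket connect — a minimal alternative, not the post's implementation:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something accepts TCP connections on the port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.25)
        # connect_ex returns 0 on success, an errno otherwise
        return s.connect_ex((host, port)) == 0

# port_in_use(8123) answers "what's on 8123?" without lsof
```

This answers "is anything listening?" but not "what is it?" — identifying the process still needs pgrep or ss, which is what the dashboard automates.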
Proposed Solution
A dashboard providing consolidated visibility:
- Service running status
- Port assignments per service
- Start/stop controls
- Docker Compose project status
- Auto-refresh every 10 seconds
Project Structure
local-dashboard/
├── server.py # Dashboard server
└── services.json # Service definitions
Configuration File
Service definitions live in a single JSON file that the dashboard reads on each request. This eliminates the need to restart the dashboard when adding new services:
{
"services": [
{
"name": "Blog (Jekyll)",
"domain": "blog.lan",
"port": 4000,
"directory": "/home/user/projects/blog",
"start_cmd": "bundle exec jekyll serve --future --host 0.0.0.0",
"process_match": "jekyll serve"
},
{
"name": "Frontend",
"domain": "app.lan",
"port": 5173,
"directory": "/home/user/projects/frontend",
"start_cmd": "npm run dev -- --host 0.0.0.0",
"process_match": "vite"
},
{
"name": "API Docs",
"domain": "docs.lan",
"port": 8080,
"directory": "/home/user/projects/api/docs",
"start_cmd": "python3 -m http.server 8080",
"process_match": "http.server 8080"
}
],
"docker_compose": [
{
"name": "Backend Stack",
"domain": "api.lan",
"directory": "/home/user/projects/backend",
"compose_file": "docker-compose.yml"
}
],
"tunnels": [
{
"name": "Remote Ollama",
"domain": "ollama.lan",
"port": 11434,
"process_match": "ssh.*11434"
}
]
}
Each service configuration includes:
- name: Display name in the dashboard
- domain: Custom domain (e.g., blog.lan) for direct browser access
- port: Listening port for status checks via ss
- directory: Working directory where start_cmd executes
- start_cmd: Command to launch the service (run via nohup)
- process_match: Regex pattern for pgrep to find running processes
The process_match field enables detection of services started outside the dashboard, like SSH tunnels or manually launched servers.
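Because pgrep -f tests the pattern against the full command line, a process_match value behaves like an ordinary regex search over that line. A small self-contained sketch (the command lines below are illustrative, not captured output):

```python
import re

# Sample command lines as pgrep -af might report them (paths are hypothetical)
cmdlines = [
    "/usr/bin/ruby /usr/local/bin/bundle exec jekyll serve --host 0.0.0.0",
    "ssh -N -L 11434:localhost:11434 user@remote",
    "python3 -m http.server 8080",
]

def matches(pattern: str) -> list[str]:
    """Mimic pgrep -f: regex-search each full command line."""
    return [c for c in cmdlines if re.search(pattern, c)]

assert matches(r"jekyll serve") == cmdlines[:1]  # finds the Jekyll process
assert matches(r"ssh.*11434") == cmdlines[1:2]   # finds the SSH tunnel
```

This is why a pattern like "vite" works even when the executable is actually node: the string just has to appear somewhere in the command line.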
Dashboard Server Implementation
The server provides three main capabilities: status checking (which services are running), service control (start/stop operations), and a web UI. The implementation uses Python’s built-in http.server module—no dependencies required:
#!/usr/bin/env python3
"""
Local Development Services Dashboard
"""
import json
import subprocess
import os
import signal
from pathlib import Path
from http.server import HTTPServer, BaseHTTPRequestHandler
from urllib.parse import urlparse, parse_qs
PORT = 9000
CONFIG_FILE = Path(__file__).parent / "services.json"
def load_config():
with open(CONFIG_FILE) as f:
return json.load(f)
def is_port_listening(port: int) -> bool:
"""Check if a port is listening."""
result = subprocess.run(
["ss", "-tln", f"sport = :{port}"],
capture_output=True,
text=True
)
return f":{port}" in result.stdout
def find_process_by_pattern(pattern: str) -> list[dict]:
"""Find processes matching a pattern."""
result = subprocess.run(
["pgrep", "-af", pattern],
capture_output=True,
text=True
)
processes = []
for line in result.stdout.strip().split("\n"):
if line:
parts = line.split(" ", 1)
if len(parts) == 2:
processes.append({"pid": int(parts[0]), "cmd": parts[1]})
return processes
def get_service_status(service: dict) -> dict:
"""Get status of a service."""
port = service.get("port")
pattern = service.get("process_match")
status = {
"name": service["name"],
"domain": service.get("domain", ""),
"port": port,
"directory": service.get("directory", ""),
"running": False,
"pid": None
}
if port and is_port_listening(port):
status["running"] = True
if pattern:
procs = find_process_by_pattern(pattern)
if procs:
status["running"] = True
status["pid"] = procs[0]["pid"]
return status
def get_docker_compose_status(dc: dict) -> dict:
"""Get status of a docker-compose project."""
directory = dc["directory"]
compose_file = dc.get("compose_file", "docker-compose.yml")
result = subprocess.run(
["docker", "compose", "-f", compose_file, "ps", "--format", "json"],
cwd=directory,
capture_output=True,
text=True
)
containers = []
running_count = 0
total_count = 0
if result.returncode == 0 and result.stdout.strip():
for line in result.stdout.strip().split("\n"):
try:
container = json.loads(line)
containers.append({
"name": container.get("Name", ""),
"state": container.get("State", ""),
"status": container.get("Status", "")
})
total_count += 1
if container.get("State") == "running":
running_count += 1
except json.JSONDecodeError:
pass
return {
"name": dc["name"],
"domain": dc.get("domain", ""),
"directory": directory,
"running": running_count > 0,
"containers": containers,
"running_count": running_count,
"total_count": total_count
}
def start_service(service: dict) -> dict:
"""Start a service using nohup."""
directory = service.get("directory")
start_cmd = service.get("start_cmd")
if not directory or not start_cmd:
return {"success": False, "error": "Missing directory or start_cmd"}
if not os.path.isdir(directory):
return {"success": False, "error": f"Directory not found: {directory}"}
log_file = Path(directory) / ".server.log"
cmd = f"nohup {start_cmd} > {log_file} 2>&1 &"
result = subprocess.run(
cmd,
shell=True,
cwd=directory,
capture_output=True,
text=True
)
if result.returncode == 0:
return {"success": True, "message": f"Started {service['name']}"}
else:
return {"success": False, "error": result.stderr}
def stop_service(service: dict) -> dict:
"""Stop a service by killing its process."""
pattern = service.get("process_match")
port = service.get("port")
killed = False
if pattern:
procs = find_process_by_pattern(pattern)
for proc in procs:
try:
os.kill(proc["pid"], signal.SIGTERM)
killed = True
except ProcessLookupError:
pass
if not killed and port:
result = subprocess.run(
["fuser", "-k", f"{port}/tcp"],
capture_output=True
)
if result.returncode == 0:
killed = True
if killed:
return {"success": True, "message": f"Stopped {service['name']}"}
else:
return {"success": False, "error": "Could not find process to stop"}
def docker_compose_action(dc: dict, action: str) -> dict:
"""Start/stop docker-compose project."""
directory = dc["directory"]
compose_file = dc.get("compose_file", "docker-compose.yml")
if action == "start":
cmd = ["docker", "compose", "-f", compose_file, "up", "-d"]
elif action == "stop":
cmd = ["docker", "compose", "-f", compose_file, "down"]
else:
return {"success": False, "error": f"Unknown action: {action}"}
result = subprocess.run(
cmd,
cwd=directory,
capture_output=True,
text=True
)
if result.returncode == 0:
        return {"success": True, "message": f"{'Started' if action == 'start' else 'Stopped'} {dc['name']}"}
else:
return {"success": False, "error": result.stderr}
Web Interface
The dashboard serves an embedded HTML page with a dark theme:
HTML_TEMPLATE = """<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Local Services Dashboard</title>
<style>
:root {
--bg: #1a1a2e;
--card-bg: #16213e;
--text: #eee;
--text-muted: #888;
--green: #00d26a;
--red: #ff6b6b;
--blue: #4dabf7;
--border: #2a2a4a;
}
* { box-sizing: border-box; margin: 0; padding: 0; }
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
background: var(--bg);
color: var(--text);
padding: 2rem;
min-height: 100vh;
}
h1 { margin-bottom: 2rem; font-weight: 300; font-size: 1.8rem; }
h2 {
font-size: 1rem;
font-weight: 500;
color: var(--text-muted);
margin: 2rem 0 1rem;
text-transform: uppercase;
letter-spacing: 0.1em;
}
.grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(320px, 1fr));
gap: 1rem;
}
.card {
background: var(--card-bg);
border-radius: 8px;
padding: 1.25rem;
border: 1px solid var(--border);
}
.card-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 0.75rem;
}
.card-title { font-size: 1.1rem; font-weight: 500; }
.status {
display: flex;
align-items: center;
gap: 0.5rem;
font-size: 0.85rem;
}
.status-dot {
width: 10px;
height: 10px;
border-radius: 50%;
}
.status-dot.running {
background: var(--green);
box-shadow: 0 0 8px var(--green);
}
.status-dot.stopped { background: var(--red); }
.card-meta {
font-size: 0.8rem;
color: var(--text-muted);
margin-bottom: 1rem;
}
.card-meta a { color: var(--blue); text-decoration: none; }
.card-meta a:hover { text-decoration: underline; }
.card-actions { display: flex; gap: 0.5rem; }
button {
padding: 0.5rem 1rem;
border: none;
border-radius: 4px;
cursor: pointer;
font-size: 0.85rem;
transition: opacity 0.2s;
}
button:hover { opacity: 0.8; }
button:disabled { opacity: 0.5; cursor: not-allowed; }
.btn-start { background: var(--green); color: #000; }
.btn-stop { background: var(--red); color: #fff; }
.btn-open { background: var(--blue); color: #000; }
.docker-containers {
font-size: 0.75rem;
color: var(--text-muted);
margin-top: 0.5rem;
}
.refresh-btn {
position: fixed;
bottom: 2rem;
right: 2rem;
background: var(--blue);
color: #000;
width: 50px;
height: 50px;
border-radius: 50%;
font-size: 1.5rem;
}
.toast {
position: fixed;
bottom: 2rem;
left: 50%;
transform: translateX(-50%);
background: var(--card-bg);
border: 1px solid var(--border);
padding: 1rem 2rem;
border-radius: 8px;
display: none;
}
.toast.show { display: block; }
.toast.success { border-color: var(--green); }
.toast.error { border-color: var(--red); }
</style>
</head>
<body>
<h1>Local Services Dashboard</h1>
<h2>Development Servers</h2>
<div class="grid" id="services"></div>
<h2>Docker Compose</h2>
<div class="grid" id="docker"></div>
<h2>SSH Tunnels</h2>
<div class="grid" id="tunnels"></div>
<button class="refresh-btn" onclick="refresh()">↻</button>
<div class="toast" id="toast"></div>
<script>
async function fetchStatus() {
const res = await fetch('/api/status');
return res.json();
}
function renderService(svc, type) {
const running = svc.running;
const statusClass = running ? 'running' : 'stopped';
const statusText = running ? 'Running' : 'Stopped';
let meta = '';
if (svc.domain) {
meta += `<a href="https://${svc.domain}" target="_blank">${svc.domain}</a>`;
}
if (svc.port) meta += ` · Port ${svc.port}`;
if (svc.pid) meta += ` · PID ${svc.pid}`;
let actions = '';
if (type === 'service') {
actions = `
<button class="btn-start" onclick="action('start', '${svc.name}')" ${running ? 'disabled' : ''}>Start</button>
<button class="btn-stop" onclick="action('stop', '${svc.name}')" ${!running ? 'disabled' : ''}>Stop</button>
`;
} else if (type === 'docker') {
actions = `
<button class="btn-start" onclick="dockerAction('start', '${svc.name}')" ${running ? 'disabled' : ''}>Start</button>
<button class="btn-stop" onclick="dockerAction('stop', '${svc.name}')" ${!running ? 'disabled' : ''}>Stop</button>
`;
}
if (svc.domain) {
actions += `<button class="btn-open" onclick="window.open('https://${svc.domain}', '_blank')">Open</button>`;
}
let extra = '';
if (svc.running_count !== undefined) {
extra = `<div class="docker-containers">${svc.running_count}/${svc.total_count} containers running</div>`;
}
return `
<div class="card">
<div class="card-header">
<span class="card-title">${svc.name}</span>
<span class="status">
<span class="status-dot ${statusClass}"></span>
${statusText}
</span>
</div>
<div class="card-meta">${meta}</div>
<div class="card-actions">${actions}</div>
${extra}
</div>
`;
}
async function refresh() {
const data = await fetchStatus();
document.getElementById('services').innerHTML =
data.services.map(s => renderService(s, 'service')).join('');
document.getElementById('docker').innerHTML =
data.docker_compose.map(s => renderService(s, 'docker')).join('');
document.getElementById('tunnels').innerHTML =
data.tunnels.map(s => renderService(s, 'tunnel')).join('');
}
function showToast(message, type) {
const toast = document.getElementById('toast');
toast.textContent = message;
toast.className = 'toast show ' + type;
setTimeout(() => toast.className = 'toast', 3000);
}
async function action(act, name) {
const res = await fetch(`/api/${act}?name=${encodeURIComponent(name)}`, {method: 'POST'});
const data = await res.json();
showToast(data.message || data.error, data.success ? 'success' : 'error');
setTimeout(refresh, 1000);
}
async function dockerAction(act, name) {
const res = await fetch(`/api/docker/${act}?name=${encodeURIComponent(name)}`, {method: 'POST'});
const data = await res.json();
showToast(data.message || data.error, data.success ? 'success' : 'error');
setTimeout(refresh, 2000);
}
refresh();
setInterval(refresh, 10000);
</script>
</body>
</html>
"""
HTTP Handler Implementation
class DashboardHandler(BaseHTTPRequestHandler):
def log_message(self, format, *args):
pass # Suppress logging
def send_json(self, data, status=200):
self.send_response(status)
self.send_header("Content-Type", "application/json")
self.send_header("Access-Control-Allow-Origin", "*")
self.end_headers()
self.wfile.write(json.dumps(data).encode())
def send_html(self, html):
self.send_response(200)
self.send_header("Content-Type", "text/html")
self.end_headers()
self.wfile.write(html.encode())
def do_GET(self):
parsed = urlparse(self.path)
if parsed.path == "/" or parsed.path == "":
self.send_html(HTML_TEMPLATE)
elif parsed.path == "/api/status":
config = load_config()
services = [get_service_status(s) for s in config.get("services", [])]
docker = [get_docker_compose_status(d) for d in config.get("docker_compose", [])]
tunnels = [get_service_status(t) for t in config.get("tunnels", [])]
self.send_json({
"services": services,
"docker_compose": docker,
"tunnels": tunnels
})
else:
self.send_response(404)
self.end_headers()
def do_POST(self):
parsed = urlparse(self.path)
params = parse_qs(parsed.query)
name = params.get("name", [None])[0]
if not name:
self.send_json({"success": False, "error": "Missing name"}, 400)
return
config = load_config()
if parsed.path == "/api/start":
service = next((s for s in config.get("services", []) if s["name"] == name), None)
if service:
self.send_json(start_service(service))
else:
self.send_json({"success": False, "error": "Not found"}, 404)
elif parsed.path == "/api/stop":
service = next((s for s in config.get("services", []) if s["name"] == name), None)
if service:
self.send_json(stop_service(service))
else:
self.send_json({"success": False, "error": "Not found"}, 404)
elif parsed.path == "/api/docker/start":
dc = next((d for d in config.get("docker_compose", []) if d["name"] == name), None)
if dc:
self.send_json(docker_compose_action(dc, "start"))
else:
self.send_json({"success": False, "error": "Not found"}, 404)
elif parsed.path == "/api/docker/stop":
dc = next((d for d in config.get("docker_compose", []) if d["name"] == name), None)
if dc:
self.send_json(docker_compose_action(dc, "stop"))
else:
self.send_json({"success": False, "error": "Not found"}, 404)
def main():
server = HTTPServer(("0.0.0.0", PORT), DashboardHandler)
print(f"Dashboard running at http://localhost:{PORT}")
try:
server.serve_forever()
except KeyboardInterrupt:
print("\nShutting down...")
server.shutdown()
if __name__ == "__main__":
main()
Running the Dashboard
# Direct execution
cd ~/projects/local-dashboard
python3 server.py
# Background execution (survives terminal close)
nohup python3 server.py > dashboard.log 2>&1 &
Access the dashboard at http://localhost:9000.
Adding Services
Edit services.json to add new services. The dashboard reloads the configuration on each request, eliminating the need for restart.
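A quick sketch of why no restart is needed: load_config() re-reads the file on every call, so a freshly saved services.json is visible to the very next request. The snippet below demonstrates the behavior with a throwaway temp file standing in for the real config:

```python
import json
import pathlib
import tempfile

# Stand-in for services.json, written to a temp directory
cfg = pathlib.Path(tempfile.mkdtemp()) / "services.json"
cfg.write_text(json.dumps({"services": []}))

def load_config():
    # Re-read from disk every time, exactly like the dashboard does
    return json.loads(cfg.read_text())

print(len(load_config()["services"]))  # -> 0

# Simulate editing the file to add a service
data = load_config()
data["services"].append({"name": "New Service", "port": 3000})
cfg.write_text(json.dumps(data))

print(len(load_config()["services"]))  # -> 1, picked up with no restart
```

The trade-off is a small amount of disk I/O per request, which is negligible for a single-user local dashboard.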
Standard Dev Server
{
"name": "My React App",
"domain": "react.lan",
"port": 3000,
"directory": "/home/user/projects/react-app",
"start_cmd": "npm start",
"process_match": "react-scripts start"
}
Static File Server
{
"name": "Documentation",
"domain": "docs.lan",
"port": 8080,
"directory": "/home/user/projects/docs/build",
"start_cmd": "python3 -m http.server 8080",
"process_match": "http.server 8080"
}
Docker Compose Project
{
"name": "API Stack",
"domain": "api.lan",
"directory": "/home/user/projects/api",
"compose_file": "docker-compose.yml"
}
Examples
Example 1: Managing a Full-Stack Project
You’re working on a web app with Jekyll blog (port 4000), React frontend (5173), Python API (8000), and PostgreSQL (via Docker Compose):
services.json:
{
"services": [
{
"name": "Jekyll Blog",
"domain": "blog.lan",
"port": 4000,
"directory": "/home/user/myproject/blog",
"start_cmd": "bundle exec jekyll serve --host 0.0.0.0",
"process_match": "jekyll serve"
},
{
"name": "React Frontend",
"domain": "app.lan",
"port": 5173,
"directory": "/home/user/myproject/frontend",
"start_cmd": "npm run dev -- --host 0.0.0.0",
"process_match": "vite"
},
{
"name": "Python API",
"domain": "api.lan",
"port": 8000,
"directory": "/home/user/myproject/api",
"start_cmd": "uvicorn main:app --reload --host 0.0.0.0",
"process_match": "uvicorn"
}
],
"docker_compose": [
{
"name": "Database Stack",
"directory": "/home/user/myproject/infra",
"compose_file": "docker-compose.yml"
}
]
}
Usage:
- Open http://localhost:9000
- Click “Start” for each service (or start all via Docker Compose)
- Dashboard shows green status indicators when all services are running
- Click domain links to open each service in browser
- When done, stop all services from the dashboard
Example 2: Monitoring SSH Tunnels
You have an SSH tunnel forwarding remote Ollama to localhost:11434:
{
"tunnels": [
{
"name": "Remote Ollama",
"domain": "ollama.lan",
"port": 11434,
"process_match": "ssh.*11434:localhost:11434"
}
]
}
The dashboard shows tunnel status without needing to grep ps aux. If the tunnel dies, the status indicator turns red on the next refresh (within 10 seconds).
Example 3: Mixed Local and Container Services
Some services run directly, others in Docker:
{
"services": [
{
"name": "Development Docs",
"port": 8080,
"directory": "/home/user/docs",
"start_cmd": "python3 -m http.server 8080",
"process_match": "http.server 8080"
}
],
"docker_compose": [
{
"name": "Backend Services",
"directory": "/home/user/backend",
"compose_file": "docker-compose.yml"
},
{
"name": "Monitoring Stack",
"directory": "/home/user/monitoring",
"compose_file": "docker-compose.yml"
}
]
}
The dashboard shows all services in one view: local Python servers and Docker containers side by side. Docker Compose sections display container counts (e.g., “3/5 containers running”).
Example 4: Auto-Start on Boot
Use systemd to launch the dashboard automatically:
# ~/.config/systemd/user/dev-dashboard.service
[Unit]
Description=Local Dev Services Dashboard
After=network.target
[Service]
Type=simple
WorkingDirectory=/home/user/projects/local-dashboard
ExecStart=/usr/bin/python3 server.py
Restart=always
[Install]
WantedBy=default.target
Enable with:
systemctl --user enable --now dev-dashboard
Now localhost:9000 is always available when you log in.
Troubleshooting
Issue: Services show as stopped but are actually running
Symptom: Dashboard shows red status for a service you know is running.
Cause: process_match regex doesn’t match the actual process command.
Solution: Check the exact command running:
pgrep -af "your-pattern"
Adjust the regex in services.json to match what you see. For example, if the process is /usr/bin/node /path/to/vite, use process_match: "node.*vite" instead of just "vite".
Issue: Start button doesn’t work
Symptom: Clicking “Start” shows success message but service doesn’t run.
Cause: Working directory doesn’t exist or start_cmd has errors.
Solution: Check the log file created by nohup:
cat /path/to/service/directory/.server.log
The error will be in this file. Common issues:
- Missing dependencies (npm install needed first)
- Wrong directory path
- Port already in use
Issue: Dashboard shows “X/Y containers running” but none are visible in Docker
Symptom: Docker Compose section shows container counts but docker ps shows nothing.
Cause: Dashboard uses docker compose ps which includes stopped containers. The count compares running vs total.
Solution: This is expected behavior. “0/5 containers running” means 5 defined, 0 running. Click “Start” to launch them.
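The counts come from parsing the NDJSON that docker compose ps --format json emits — one JSON object per line, one line per container. A minimal sketch of that parsing loop, with illustrative (not real) container data:

```python
import json

# Two sample lines in the shape `docker compose ps --format json` emits;
# names, states, and statuses here are made up for illustration
sample = "\n".join([
    '{"Name": "api-db-1", "State": "running", "Status": "Up 2 hours"}',
    '{"Name": "api-cache-1", "State": "exited", "Status": "Exited (0)"}',
])

running = total = 0
for line in sample.strip().split("\n"):
    container = json.loads(line)  # each line is an independent JSON object
    total += 1
    if container.get("State") == "running":
        running += 1

print(f"{running}/{total} containers running")  # -> 1/2 containers running
```

A container in state "exited" still counts toward the total, which is exactly why the dashboard can report "0/5" for a defined-but-stopped stack.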
Extensions
This dashboard functions locally on localhost:9000. For network-wide access from other machines:
- Configure a reverse proxy (Caddy/nginx) to handle HTTPS
- Set up local DNS resolution for custom domains (.lan)
- Configure firewall rules to allow access from your local network
- Consider adding authentication for security
The next post covers network-wide access with Caddy and local DNS: Caddy and Local DNS for Network-Wide Dev Access.
Conclusion
A single-page Python dashboard consolidates fragmented local development infrastructure. Instead of tracking ports across terminals and browser tabs, one interface shows all services with status indicators and start/stop controls.
Key insights:
- JSON configuration enables declarative service management without code changes
- ss and pgrep provide reliable status detection for both ports and processes
- Docker Compose integration works alongside local services seamlessly
- nohup background launching persists services after dashboard exit
- Auto-refresh keeps status current without manual polling
With this dashboard, you manage five services with five clicks instead of five terminal commands. Add a systemd service and it launches on boot, ensuring localhost:9000 is always one bookmark away from controlling your entire local dev environment.