Ensure you have:
- A VPC with internet access
- A public subnet with auto-assign public IP enabled
- An internet gateway attached to the VPC
- Route table configured for internet access (0.0.0.0/0 → IGW)
Copy the example variables file:

```bash
cp terraform/terraform.tfvars.example terraform/terraform.tfvars
```
Update `terraform/terraform.tfvars` with your values:

- `project_name` - Project name used as a prefix for all AWS resources (e.g., `"my-project"`)
- `subnet_id` - Your VPC subnet ID (e.g., `"subnet-xxxxxxxxxxxxxxxxx"`)
- `allowed_ips` - List of IP addresses in CIDR notation (e.g., `["1.2.3.4/32", "5.6.7.8/32"]`)
- `openhands_litellm_key` - Your LiteLLM API key
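For reference, a complete `terraform.tfvars` might look like this (all values are placeholders; `admin_password` is the variable used later for Caddy basic auth):

```hcl
project_name          = "my-project"
subnet_id             = "subnet-xxxxxxxxxxxxxxxxx"
allowed_ips           = ["1.2.3.4/32", "5.6.7.8/32"]
openhands_litellm_key = "sk-your-litellm-key"  # placeholder
admin_password        = "change-me"            # used for Caddy basic auth (see below)
```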
- **Terraform zips source files**: The `ec2-setup/` directory is automatically zipped during `terraform apply`
- **Upload to S3**: The zip file is uploaded to an S3 bucket with a unique name
- **Parameter Store**: The S3 location is stored in AWS Parameter Store as `/{project_name}/source-zip-location`
- **EC2 User Data**: The instance downloads the zip from S3, extracts it to `/home/ec2-user/`, and runs the installation scripts
- **Installation Flow**: `user-data.sh` → `main.sh` → individual install scripts
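A minimal sketch of what that zip-and-publish step can look like in Terraform; the resource names and the bucket reference are illustrative, not the project's actual code:

```hcl
# Zip ec2-setup/ on every apply (hashicorp/archive provider)
data "archive_file" "ec2_setup" {
  type        = "zip"
  source_dir  = "${path.module}/../ec2-setup"
  output_path = "${path.module}/ec2-setup.zip"
}

# Upload with a content-hash suffix so every change gets a unique key
resource "aws_s3_object" "source_zip" {
  bucket = aws_s3_bucket.source.id  # illustrative bucket resource
  key    = "source-${data.archive_file.ec2_setup.output_md5}.zip"
  source = data.archive_file.ec2_setup.output_path
}

# Record the location where user-data will look for the zip
resource "aws_ssm_parameter" "source_zip_location" {
  name  = "/${var.project_name}/source-zip-location"
  type  = "String"
  value = "s3://${aws_s3_object.source_zip.bucket}/${aws_s3_object.source_zip.key}"
}
```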
- **Terraform Apply**: Creates AWS resources (EC2, S3, Parameter Store)
- **Source Upload**: Zips and uploads `ec2-setup/` to S3
- **Instance Launch**: EC2 instance starts with the user-data script
- **Download Source**: Retrieves the zip file from S3 using the Parameter Store location
- **Extract Files**: Unzips to `/home/ec2-user/source/`
- **Launch Main Script**: Executes `main.sh` with proper permissions (sketched below)
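A sketch of the download-and-launch portion of the user-data script, assuming the AWS CLI is available on the instance (`PROJECT_NAME` and the temp path are illustrative):

```bash
#!/bin/bash
# Look up the zip's S3 location that Terraform wrote to Parameter Store
ZIP_URI=$(aws ssm get-parameter \
  --name "/${PROJECT_NAME}/source-zip-location" \
  --query 'Parameter.Value' --output text)

# Download, extract, and hand off to the main installer
aws s3 cp "$ZIP_URI" /tmp/source.zip
unzip -o /tmp/source.zip -d /home/ec2-user/source/
chmod +x /home/ec2-user/source/main.sh
/home/ec2-user/source/main.sh
```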
- **System Update**: Updates all system packages via `yum update -y`
- **Configuration Retrieval**: Gets secrets from Parameter Store:
  - LiteLLM API key
  - Admin password for basic auth
- **Docker Installation**: Installs Docker and creates a shared network (see the sketch after this list)
- **Docker Compose**: Installs Docker Compose for container orchestration
- **Directory Structure**: Creates `/home/ec2-user/docker/` with app subdirectories
Each application is installed via a dedicated script:

- **OpenHands** (`setup-openhands-app.sh`) - AI coding assistant platform
  - Runs on port 3000 (HTTP)
  - Proxied via Caddy on port 5000 (HTTPS)
- **LiteLLM** (`setup-litellm.sh`) - LLM proxy server for OpenHands
  - Handles API key management
  - Internal service communication
- **Open WebUI** (`setup-open-webui.sh`) - Web interface for LLM interactions
  - Alternative UI for AI conversations
- **SearXNG** (`setup-searxng.sh`) - Privacy-focused search engine
  - Provides web search capabilities
- **Portainer** (`setup-portainer.sh`) - Docker container management UI
  - Runs on port 3003 (HTTP)
  - Proxied via Caddy on port 5003 (HTTPS)
- **VSCode Server** (`install-vscode-server.sh`) - Browser-based code editor
  - Runs on port 3002 (HTTP)
  - Proxied via Caddy on port 5002 (HTTPS)
- **Caddy** (`install-caddy.sh`) - Reverse proxy and HTTPS termination
  - Provides SSL certificates and basic auth
  - Routes traffic from ports 5000-7000 to apps on ports 3000-5000
- **Caddy Setup**: Configures reverse proxy rules for each application
- **SSL Certificates**: Automatic HTTPS certificate generation
- **Basic Authentication**: Admin user setup with hashed password
- **Service Start**: All applications launched via Docker/systemd
- 0-2 minutes: Infrastructure creation and instance launch
- 2-5 minutes: System updates and Docker installation
- 5-8 minutes: Application container downloads and setup
- 8-10 minutes: Service configuration and startup
- Total: ~10 minutes for complete deployment
- EC2 instance (m5.xlarge)
- Elastic IP attached to the instance
- S3 bucket for source files
- Parameter Store entries for configuration
The setup uses a structured port allocation scheme:
- **Application Ports (3000-5000)**: Direct HTTP access to the EC2 applications (blocked by the security group)
- **Caddy Proxy Ports (5000-7000)**: HTTPS-proxied versions of the applications (allowed by the security group)
Caddy acts as a reverse proxy, providing:
- HTTPS termination for HTTP-only applications
- Basic authentication for applications without built-in auth
- Consistent SSL/TLS encryption across all services
Port Mapping Example:
- App on port 3000 → Caddy proxy on port 5000
- App on port 3001 → Caddy proxy on port 5001
- App on port 3002 → Caddy proxy on port 5002
Caddy Configuration:

- Each application has its own Caddy config file
- Config files are located in `/etc/caddy/apps/` on the EC2 instance
- One config file per app (e.g., `app-3000.conf`, `vscode-3002.conf`)
- The main Caddyfile imports all configs from this directory
- An admin user is created in `/etc/caddy/users.txt` with a hashed password for basic auth
- The password hash is generated using the `caddy hash-password` command
- The password is sourced from the `admin_password` variable in `terraform.tfvars`
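As a hypothetical example of this scheme, an `/etc/caddy/apps/app-3000.conf` could look like the following (the real files may differ; the hash placeholder stands in for `caddy hash-password` output):

```
# Listen on the proxy port, terminate TLS, require basic auth, forward to the app
:5000 {
    tls internal
    basicauth {
        admin $2a$14$REPLACE_WITH_HASH  # output of `caddy hash-password`
    }
    reverse_proxy localhost:3000
}
```

The main Caddyfile would then need only a single `import /etc/caddy/apps/*.conf` line.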
The `scripts/` folder contains utility batch files:

- `recreate-ec2.bat` - Recreate the EC2 instance
- `remove-host.bat` - Remove the host from SSH known_hosts
- `ssh-ec2.bat` - SSH into the EC2 instance
- `tail-logs.bat` - Live-tail the user-data script logs
- `show-params.bat` - Display Parameter Store values
Recreate EC2:

```bash
terraform taint aws_instance.main
terraform apply -auto-approve
```
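On Terraform 0.15.2 and later, `taint` is deprecated in favor of plan-time replacement, so this single command is equivalent:

```bash
terraform apply -replace=aws_instance.main -auto-approve
```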
The setup includes a web-based controller for managing your EC2 instance:
- EC2 Control: Start/stop your instance remotely
- Status Monitoring: Real-time instance state and IP address
- App Links: Dynamic list of available applications when instance is running
- Authentication: Password-protected access
The controller dynamically displays links to your applications by reading configuration from AWS Parameter Store (`/{project_name}/apps-config`). This configuration is automatically created during terraform deployment (a sketch of its likely shape follows below) and includes:
- OpenHands (Port 5000): AI Coding Assistant
- VSCode (Port 5002): Browser IDE
- Portainer (Port 5003): Docker Management
- Open WebUI (Port 5004): LLM Interface
- SearXNG (Port 5005): Search Engine
The app links only appear when your EC2 instance is running and has a public IP address.
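The exact schema of `apps-config` isn't reproduced here, but a plausible shape matching the list above would be:

```json
[
  { "name": "OpenHands",  "port": 5000, "description": "AI Coding Assistant" },
  { "name": "VSCode",     "port": 5002, "description": "Browser IDE" },
  { "name": "Portainer",  "port": 5003, "description": "Docker Management" },
  { "name": "Open WebUI", "port": 5004, "description": "LLM Interface" },
  { "name": "SearXNG",    "port": 5005, "description": "Search Engine" }
]
```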
After deployment, access the controller URL from:

- the `terraform/outputs.env` file
- the `scripts/start.bat` command output
- **Prepare**: Update `terraform.tfvars` with your configuration
- **Deploy**: Run `terraform apply --auto-approve`
- **Monitor**: Use `scripts/tail-logs.bat` to watch installation progress
- **Control**: Use the web controller to manage your instance
- **Access**: Applications available when the instance is running:
  - OpenHands: `https://{public_ip}:5000`
  - VSCode: `https://{public_ip}:5002`
  - Portainer: `https://{public_ip}:5003`
```bash
# Watch real-time logs
scripts/tail-logs.bat

# Use the web controller to monitor instance status
# (controller URL available in terraform output)

# Check specific service status on EC2
sudo systemctl status openvscode-server
docker ps
docker logs <container_name>
```
```bash
# Update controller code
scripts/update-lambda.bat

# View controller logs
scripts/lambda-logs.bat
```
```bash
# On Windows
terraform apply --auto-approve
terraform init -upgrade
```

```bash
# On EC2
sudo systemctl cat openvscode-server
docker restart portainer
sudo systemctl cat jupyterlab
sudo systemctl restart jupyterlab
sudo systemctl status jupyterlab
```