These modules are intended as a reference implementation. They reflect
how we deploy Onyx and encode sensible defaults, but every environment is
different — you should treat them as a starting point to fork and adapt to
your account, networking, and compliance requirements rather than a black
box to consume as-is.
## What gets provisioned

The composed `onyx` module wires together the building blocks below. You can
also use them individually if you need more granular control.
| Module | Resource |
|---|---|
| `vpc` | VPC, public/private subnets across AZs, NAT, and an S3 gateway endpoint |
| `eks` | EKS cluster, managed node groups (main + dedicated Vespa), addons, IRSA |
| `postgres` | RDS for PostgreSQL with backups and CloudWatch alarms |
| `redis` | ElastiCache for Redis replication group with TLS in transit |
| `s3` | S3 bucket for the Onyx file store, locked to the VPC's S3 endpoint |
| `opensearch` | (Optional) Amazon OpenSearch domain inside the VPC |
| `waf` | (Optional, recommended) AWS WAFv2 web ACL with managed rule sets, rate limits, and geo controls |
| `onyx` | Top-level composition that wires the modules above together |
## Prerequisites

Clone the Onyx repo. The Terraform modules live under `modules/aws/`; you will
create a small root module that calls them.

## Quickstart
The snippet below is a minimal root module that provisions a complete Onyx stack on AWS using the composed `onyx` module.
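The original `main.tf` listing is not reproduced here, so the sketch below shows what such a root module might look like. The module path and input names are assumptions drawn from the surrounding text; check `modules/aws/onyx/variables.tf` for the authoritative inputs.

```hcl
# Hypothetical main.tf sketch; input names are assumptions, verify them
# against modules/aws/onyx/variables.tf in your clone of the repo.
provider "aws" {
  region = "us-east-1"
}

variable "postgres_password" {
  type      = string
  sensitive = true # pass via TF_VAR_postgres_password
}

module "onyx" {
  source = "./modules/aws/onyx" # path relative to your clone

  name              = "onyx"
  region            = "us-east-1"
  postgres_username = "onyx"
  postgres_password = var.postgres_password

  tags = {
    Project     = "onyx"
    Environment = terraform.workspace
  }
}
```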
## Common configuration

The composed `onyx` module exposes the inputs you'll most often want to tune.

### Core
- Prefix for every resource created by the module. It is combined with the active Terraform workspace so the same codebase can manage multiple environments.
- AWS region for all resources.
- Master username for the RDS Postgres instance.
- Master password for the RDS Postgres instance. Marked sensitive — pass via `TF_VAR_postgres_password` or your secrets manager rather than hard-coding.
- Base tags applied to every AWS resource created by the modules.
### Networking

- When `true`, the module creates a new VPC sized for EKS. Set to `false` to reuse an existing VPC — see Using an existing VPC.
- Whether the EKS API endpoint is reachable from the public internet. Combine with `cluster_endpoint_public_access_cidrs` to lock it down to specific IPs.
- Enable the private EKS API endpoint. Recommended for production. You can enable both public and private together, or only private.
- CIDR blocks allowed to reach the public EKS API endpoint when it is enabled.
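Inside the module call, a locked-down endpoint configuration might look like the fragment below. Only `cluster_endpoint_public_access_cidrs` appears verbatim in this guide; the other input names are assumptions.

```hcl
# Only cluster_endpoint_public_access_cidrs is confirmed above; the other
# input names here are assumptions.
cluster_endpoint_public_access       = true
cluster_endpoint_public_access_cidrs = ["203.0.113.0/24"] # e.g. your office egress range
cluster_endpoint_private_access      = true               # recommended for production
```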
### Database & cache

- Enables RDS IAM authentication and wires an IRSA role into the EKS module so Onyx workloads can connect to Postgres without a static password. Requires `rds_db_connect_arn` to be set.
- Days to retain automated RDS backups. Set to `0` to disable.
- Optional auth token for Redis. Marked sensitive.
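As a sketch, the IAM-auth and backup inputs described above might be wired like this. Apart from `rds_db_connect_arn`, the input names, and the ARN value itself, are assumptions:

```hcl
enable_rds_iam_auth   = true # assumed input name
rds_db_connect_arn    = "arn:aws:rds-db:us-east-1:123456789012:dbuser:db-EXAMPLE/onyx" # hypothetical ARN
backup_retention_days = 7    # assumed input name; 0 disables automated backups
redis_auth_token      = var.redis_auth_token # sensitive; assumed input name
```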
### OpenSearch (optional)

OpenSearch is off by default. Enable it if you want a managed search backend instead of the in-cluster Vespa node group. Set `enable_opensearch = true` to provision an Amazon OpenSearch domain inside the VPC.
WAF
Thewaf module is optional but strongly recommended for any
internet-facing deployment. It provisions an AWS WAFv2 web ACL with the
common managed rule sets, rate limits, and optional geo blocking that you can
attach to the load balancer fronting Onyx. Tune the inputs below to fit your
traffic profile.
Optional IP allowlist. Leave empty to allow all source IPs subject to the
managed rule sets and rate limits.
Country codes to block. Leave empty to disable geo restrictions.
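A sketch of the WAF inputs described above; the input names are assumptions, so check the `waf` module's variables before using them.

```hcl
# Assumed input names for the waf module.
waf_allowed_ip_cidrs  = []           # empty allows all sources, subject to managed rules
waf_blocked_countries = ["CU", "KP"] # ISO 3166-1 alpha-2 codes; empty disables geo blocking
```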
For the full list of inputs, see `modules/aws/onyx/variables.tf`.
## Using an existing VPC

If you already have a VPC you want to deploy into, set `create_vpc = false`
and pass in the VPC details, including the ID of an existing S3 gateway VPC
endpoint that the bucket policy will reference.
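For illustration, the existing-VPC wiring might look like the fragment below. Only `create_vpc` appears verbatim in this guide; the other input names are assumptions and all IDs are placeholders.

```hcl
create_vpc         = false
vpc_id             = "vpc-0123456789abcdef0"                               # placeholder
private_subnet_ids = ["subnet-aaa1111", "subnet-bbb2222", "subnet-ccc3333"] # placeholders
s3_vpc_endpoint_id = "vpce-0123456789abcdef0" # existing S3 gateway endpoint
```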
## Production hardening

For production environments we recommend the following deltas from the quickstart: keep the EKS API endpoint private-only, attach the optional `waf` web ACL to the load balancer fronting Onyx, and move Terraform state to an S3 backend with DynamoDB locking.

## Outputs
Once `terraform apply` finishes, the `onyx` module exposes the values you'll
need to configure the Helm chart.
| Output | Description |
|---|---|
| `cluster_name` | EKS cluster name — pass to `aws eks update-kubeconfig` |
| `postgres_endpoint` | RDS hostname |
| `postgres_port` | RDS port |
| `postgres_db_name` | Database name (defaults to `postgres`) |
| `postgres_username` | Master username (sensitive) |
| `redis_connection_url` | Redis primary endpoint (sensitive) |
| `opensearch_endpoint` | OpenSearch domain endpoint, when `enable_opensearch = true` |
| `opensearch_dashboard_endpoint` | OpenSearch Dashboards endpoint, when enabled |
Read individual values with `terraform output <name>`; add `-raw` to strip quoting when piping into scripts.
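For example, a typical hand-off from Terraform to `kubectl` looks like this (the output names match the table above):

```shell
# Point kubectl at the new cluster using the cluster_name output.
# Assumes your AWS CLI default region matches the deployment region.
aws eks update-kubeconfig --name "$(terraform output -raw cluster_name)"

# Sensitive outputs are masked in the list view but readable individually.
terraform output -raw postgres_endpoint
```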
## Install Onyx with Helm

Terraform only stands up the infrastructure — Onyx itself is installed via the Helm chart.

### Install the chart

The EKS module creates an IRSA-backed ServiceAccount named
`onyx-workload-access` in the `onyx` namespace, which has access to the
S3 bucket the module created. Point the chart at it and disable the
in-cluster MinIO so file storage uses the real S3 bucket.

You'll also want to wire the chart's database, Redis, and (optionally)
OpenSearch settings to the Terraform outputs via your own `values.yaml`.
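A hypothetical `values.yaml` fragment for the ServiceAccount and MinIO wiring described above; the key names depend on the Onyx Helm chart's schema and are assumptions, not the chart's actual values:

```yaml
# Hypothetical keys; consult the chart's values reference for real names.
serviceAccount:
  create: false
  name: onyx-workload-access   # IRSA-backed SA created by the eks module
minio:
  enabled: false               # use the Terraform-managed S3 bucket instead
```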
See the Kubernetes deployment guide for the full set of values.

## Workspaces and multiple environments
Every resource the `onyx` module creates is named with the active Terraform
workspace, so the same root module can manage isolated environments without
collisions. A `name = "onyx"` module call in workspace `prod` will produce
`onyx-prod`-prefixed resources, while the same call in `dev` produces
`onyx-dev` resources.
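The workspace flow might look like:

```shell
terraform workspace new prod   # create and switch to "prod"
terraform apply                # provisions onyx-prod-* resources

terraform workspace new dev
terraform apply                # provisions onyx-dev-* resources
```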
## Notes & gotchas

- **Sensitive outputs.** `postgres_username`, `redis_connection_url`, and the EKS CA data are marked sensitive. Hand them off to your secrets manager or Helm values file rather than echoing them to logs.
- **State storage.** The quickstart uses local state for brevity. For shared or production use, configure an S3 backend with DynamoDB locking.
- **First apply is infra-only.** EKS takes several minutes to become active. The `null_resource.wait_for_cluster` block in the quickstart blocks the Kubernetes/Helm providers until the API server is reachable.
- **The Vespa node group is tainted.** The `eks` module provisions a dedicated node group for Vespa with a `vespa-dedicated=true:NoSchedule` taint. The Helm chart's Vespa pods tolerate it; everything else lands on the main node group.
- **Reference, not a product.** The modules encode the choices we make for our own deployments. Read them, copy what's useful, and replace what isn't.
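The remote-state setup recommended in the state-storage note can be sketched as follows; the bucket and table names are placeholders:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-org-terraform-state" # placeholder bucket
    key            = "onyx/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"        # placeholder lock table
    encrypt        = true
  }
}
```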
## Next Steps

- **Configure Authentication**: set up authentication for your Onyx deployment with OAuth, OIDC, or SAML.
- **More Onyx Configuration Options**: learn about all available configuration options for your Onyx deployment.