Automating Load Balancer Creation for NGINX HTTP Servers in OpenStack with Terraform
Infrastructure as Code (IaC) has become a cornerstone of modern DevOps practices, enabling teams to automate and manage infrastructure efficiently. Terraform, an open-source IaC tool by HashiCorp, allows you to define and provision infrastructure using declarative configuration files. In this article, we’ll walk through how to use Terraform to automate the creation of a load balancer for an NGINX HTTP server in an OpenStack environment.
OpenStack is a popular open-source cloud computing platform that provides Infrastructure as a Service (IaaS). It supports load balancing through its Octavia service, which we’ll leverage to distribute traffic across multiple NGINX instances. NGINX, a high-performance web server, will serve as our HTTP server backend.
Prerequisites
Before diving into the Terraform configuration, ensure you have the following:
- OpenStack Environment: Access to an OpenStack cloud with the Octavia load balancing service enabled.
- Terraform Installed: Version 1.5 or later installed on your local machine.
- OpenStack Credentials: A clouds.yaml file or environment variables (OS_AUTH_URL, OS_USERNAME, etc.) configured for OpenStack authentication; an example follows this list.
- Basic Networking: A pre-existing network, subnet, and security groups in OpenStack for the NGINX servers and load balancer.
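If you go the environment-variable route, a minimal setup looks like the following; every value below is a placeholder to replace with your own cloud's details:

```bash
# Standard OpenStack authentication variables (placeholder values)
export OS_AUTH_URL="https://openstack.example.com:5000/v3"
export OS_USERNAME="myuser"
export OS_PASSWORD="mypassword"
export OS_PROJECT_NAME="myproject"
export OS_USER_DOMAIN_NAME="Default"
export OS_PROJECT_DOMAIN_NAME="Default"
export OS_REGION_NAME="RegionOne"
```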
Step 1: Project Setup
Create a new directory for your Terraform project:

```bash
mkdir terraform-openstack-nginx-lb
cd terraform-openstack-nginx-lb
```

You'll run terraform init after defining the provider in the next step.
Step 2: Define Providers
Create a providers.tf file to configure the OpenStack provider for Terraform:
```hcl
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 1.51.0"
    }
  }
}

provider "openstack" {
  # Authentication details can be provided via clouds.yaml or environment variables
  cloud = "my-openstack-cloud" # Reference to your clouds.yaml entry
}
```
Run terraform init to download the OpenStack provider.
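If you'd rather not maintain a clouds.yaml, the provider also accepts credentials inline. A sketch, assuming placeholder values; in practice, avoid committing passwords and prefer variables or environment variables:

```hcl
provider "openstack" {
  auth_url         = "https://openstack.example.com:5000/v3" # Placeholder Keystone endpoint
  user_name        = "myuser"
  password         = "mypassword" # Better sourced from a variable or environment variable
  tenant_name      = "myproject"
  user_domain_name = "Default"
  region           = "RegionOne"
}
```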
Step 3: Create NGINX Instances
Next, define the NGINX server instances in an nginx.tf file. We'll create two instances for redundancy and load balancing:
```hcl
resource "openstack_compute_instance_v2" "nginx_server" {
  count       = 2
  name        = "nginx-server-${count.index + 1}"
  image_name  = "ubuntu-20.04" # Replace with your preferred image
  flavor_name = "m1.small"     # Replace with your desired flavor
  key_pair    = "my-keypair"   # Replace with your SSH keypair name

  network {
    name = "my-network" # Replace with your network name
  }

  user_data = <<-EOF
    #!/bin/bash
    apt-get update
    apt-get install -y nginx
    systemctl enable nginx
    systemctl start nginx
    echo "Hello from NGINX server ${count.index + 1}" > /var/www/html/index.html
  EOF

  security_groups = ["default", "web"] # Ensure "web" allows port 80
}
```
This configuration:
- Launches two Ubuntu instances.
- Installs NGINX via user_data and serves a simple HTML page.
- Attaches the instances to a pre-existing network and security groups.
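The hardcoded image, flavor, and network names are fine for a first run, but in practice you'd lift them into variables so the same configuration works across environments. A minimal variables.tf sketch; the defaults below are placeholders to adjust for your cloud:

```hcl
variable "image_name" {
  description = "Glance image to boot the NGINX servers from"
  type        = string
  default     = "ubuntu-20.04" # Placeholder
}

variable "flavor_name" {
  description = "Compute flavor for the NGINX servers"
  type        = string
  default     = "m1.small" # Placeholder
}

variable "network_name" {
  description = "Network to attach the NGINX servers to"
  type        = string
  default     = "my-network" # Placeholder
}
```

With these defined, the instance resource would reference var.image_name, var.flavor_name, and var.network_name instead of string literals.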
Step 4: Configure the Load Balancer
Now, create a loadbalancer.tf file to set up the load balancer using OpenStack’s Octavia service:
```hcl
# Create a load balancer
resource "openstack_lb_loadbalancer_v2" "lb" {
  name          = "nginx-lb"
  vip_subnet_id = "my-subnet-id" # Replace with your subnet ID
}

# Create a listener for HTTP traffic
resource "openstack_lb_listener_v2" "listener" {
  name            = "nginx-listener"
  protocol        = "HTTP"
  protocol_port   = 80
  loadbalancer_id = openstack_lb_loadbalancer_v2.lb.id
}

# Create a pool for the NGINX servers
# (a pool takes either listener_id or loadbalancer_id, not both)
resource "openstack_lb_pool_v2" "pool" {
  name        = "nginx-pool"
  protocol    = "HTTP"
  lb_method   = "ROUND_ROBIN"
  listener_id = openstack_lb_listener_v2.listener.id
}

# Add NGINX servers as members to the pool
resource "openstack_lb_member_v2" "members" {
  count         = 2
  pool_id       = openstack_lb_pool_v2.pool.id
  address       = openstack_compute_instance_v2.nginx_server[count.index].access_ip_v4
  protocol_port = 80
  subnet_id     = "my-subnet-id" # Replace with your subnet ID
}

# Optional: Create a monitor to check server health
resource "openstack_lb_monitor_v2" "monitor" {
  name           = "nginx-monitor"
  pool_id        = openstack_lb_pool_v2.pool.id
  type           = "HTTP"
  delay          = 20
  timeout        = 10
  max_retries    = 3
  url_path       = "/"
  expected_codes = "200"
}
```
This configuration:
- Creates a load balancer with a virtual IP (VIP).
- Sets up an HTTP listener on port 80.
- Defines a pool with a round-robin algorithm to distribute traffic.
- Adds the NGINX instances as members of the pool.
- Optionally, adds a health monitor to ensure only healthy servers receive traffic.
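Instead of pasting the subnet ID into both vip_subnet_id and subnet_id, you can resolve it by name with a data source. A sketch, assuming a subnet named "my-subnet":

```hcl
# Look up the subnet once; reference it wherever the ID is needed
data "openstack_networking_subnet_v2" "subnet" {
  name = "my-subnet" # Replace with your subnet name
}
```

The load balancer and member resources would then use data.openstack_networking_subnet_v2.subnet.id in place of the hardcoded ID.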
Step 5: Output the Load Balancer IP
To access the load balancer, add an output in an outputs.tf file:
```hcl
output "load_balancer_vip" {
  value       = openstack_lb_loadbalancer_v2.lb.vip_address
  description = "The IP address of the load balancer"
}
```
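Exposing the backend addresses as well makes debugging easier. An optional extra output:

```hcl
output "nginx_server_ips" {
  value       = openstack_compute_instance_v2.nginx_server[*].access_ip_v4
  description = "Addresses of the NGINX backend servers"
}
```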
Step 6: Apply the Configuration
Run the following commands to provision the infrastructure:
```bash
terraform plan  # Review the planned changes
terraform apply # Apply the configuration (type "yes" to confirm)
```
Once complete, Terraform will output the load balancer’s VIP address. Open a browser and navigate to http://<vip_address> to verify that the NGINX servers are responding.
Step 7: Testing and Validation
- Visit the VIP address multiple times to confirm that traffic is distributed between the two NGINX servers (you’ll see “Hello from NGINX server 1” or “Hello from NGINX server 2”); the curl loop after this list makes that easy to script.
- Check the OpenStack dashboard or CLI (openstack loadbalancer show nginx-lb) to ensure the load balancer and its components are active.
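A quick way to exercise the VIP from the command line, assuming it is reachable from your machine:

```bash
# Fetch the VIP from Terraform's outputs and hit it a few times;
# round-robin should alternate between the two backend responses
VIP=$(terraform output -raw load_balancer_vip)
for i in 1 2 3 4; do
  curl -s "http://$VIP/"
done
```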
Cleanup
When you’re done, destroy the infrastructure to avoid unnecessary costs:
```bash
terraform destroy # Type "yes" to confirm
```
Conclusion
Using Terraform to automate the creation of a load balancer for NGINX servers in OpenStack simplifies the process of setting up scalable and resilient web services. By defining the infrastructure in code, you can version control it, replicate it across environments, and modify it with ease. This example can be extended by adding more NGINX instances, configuring HTTPS with certificates, or integrating with other OpenStack services like auto-scaling.
For production use, consider additional security measures (e.g., restricting security groups, using floating IPs) and fine-tuning the load balancer settings based on your workload.
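As a taste of the HTTPS extension mentioned above, Octavia can terminate TLS at the listener when the certificate lives in Barbican, OpenStack's secret store. A sketch only; the container reference is a placeholder for one you'd create yourself:

```hcl
# TLS-terminated listener; requires a pre-existing Barbican certificate container
resource "openstack_lb_listener_v2" "https_listener" {
  name                      = "nginx-https-listener"
  protocol                  = "TERMINATED_HTTPS"
  protocol_port             = 443
  loadbalancer_id           = openstack_lb_loadbalancer_v2.lb.id
  default_tls_container_ref = "https://barbican.example.com/v1/containers/<uuid>" # Placeholder
}
```

In a full setup you'd also attach a pool to this listener, open port 443 in the relevant security group, and optionally redirect HTTP traffic to HTTPS.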