feat: enhance project with CI/CD pipeline, automated formatting, and improved outputs #32
Conversation
…yment
- Add CI/CD workflow with format check, validate, and plan stages
- Add outputs.tf for ALB DNS, CloudFront domain, and RDS endpoints
- Add module-level outputs for alb-tg, aws-rds, and aws-waf-cdn-acm-route53
📝 Walkthrough

This change adds Terraform module and root-level outputs to expose key AWS infrastructure attributes (ALB DNS, CloudFront domain, RDS endpoints), and introduces a GitHub Actions CI/CD workflow that validates Terraform code through format checks, validation, and plan stages on pull requests.
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~5 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 5
🧹 Nitpick comments (1)
.github/workflows/terraform-ci.yml (1)
**69-74: Consider parameterizing the AWS region.**

The AWS region is hardcoded to `us-east-1`. If the infrastructure is deployed to a different region, the plan will either fail or produce misleading results. Consider using a repository variable or extracting the region from the Terraform configuration.

♻️ Proposed fix to use a repository variable

```diff
       - name: Configure AWS Credentials
         uses: aws-actions/configure-aws-credentials@v4
         with:
           aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
           aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
-          aws-region: us-east-1
+          aws-region: ${{ vars.AWS_REGION || 'us-east-1' }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/terraform-ci.yml around lines 69 - 74, The workflow currently hardcodes aws-region: us-east-1 in the aws-actions/configure-aws-credentials@v4 step; change this to read a configurable value (for example replace the literal with a repository variable like ${{ vars.AWS_REGION }} or a workflow input/secret) so the region can be overridden without editing the workflow, and update any README or workflow defaults to document the new AWS_REGION variable; reference the "Configure AWS Credentials" step and the aws-region key when making this change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/terraform-ci.yml:
- Around line 55-82: The Terraform plan job fails because module output
definitions reference incorrect resource labels in the module outputs (see
alb-tg/outputs.tf and aws-rds/outputs.tf); open those output blocks and update
the referenced resource names/attributes to match the actual resource blocks
declared in their modules (or adjust the resource labels in the modules to match
the outputs), ensuring each output uses the correct symbol (e.g., target_group
ARN/ID, ALB listener ARN, RDS instance identifier) and valid attribute paths so
terraform validate/plan succeed; after changes run terraform validate locally or
in CI to confirm the mismatches are resolved.
- Around line 3-11: The workflow's event triggers currently list branches:
[main] under the push and pull_request keys so it won't run for PRs targeting
master; update the branches arrays under the on: push: and on: pull_request:
sections to include "master" (or replace "main" with "master"), or list both
["main","master"] to support both default-branch names so the workflow runs for
PRs targeting master; locate the branches: [main] entries in the workflow's
on/push and on/pull_request blocks and modify them accordingly.
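As a sketch of the "support both" option this prompt suggests, the trigger block could list both branch names; this assumes the repository may be checked out under either default-branch name:

```yaml
on:
  push:
    branches: [main, master]
    paths:
      - "DevOps-Project-11/**"
  pull_request:
    branches: [main, master]
    paths:
      - "DevOps-Project-11/**"
```

Listing both branches avoids the workflow silently skipping runs if the default branch is ever renamed.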
In `@DevOps-Project-11/modules/alb-tg/outputs.tf`:
- Around line 1-14: The outputs in outputs.tf reference non-existent resources
(aws_lb.alb and aws_lb_target_group.tg); update the output values to use the
actual resource labels defined in main.tf: replace aws_lb.alb.dns_name and
aws_lb.alb.arn with aws_lb.web-elb.dns_name and aws_lb.web-elb.arn respectively,
and replace aws_lb_target_group.tg.arn with aws_lb_target_group.web-tg.arn so
outputs "alb_dns_name", "alb_arn", and "target_group_arn" point to the correct
resources.
In `@DevOps-Project-11/modules/aws-rds/outputs.tf`:
- Around line 1-14: The outputs rds_cluster_endpoint, rds_reader_endpoint, and
rds_cluster_port reference a non-existent resource label
aws_rds_cluster.rds-cluster; update each output value to reference the actual
RDS resource name aws_rds_cluster.aurora_cluster (e.g., use
aws_rds_cluster.aurora_cluster.endpoint,
aws_rds_cluster.aurora_cluster.reader_endpoint, and
aws_rds_cluster.aurora_cluster.port) so Terraform can resolve the resource.
In `@DevOps-Project-11/modules/aws-waf-cdn-acm-route53/outputs.tf`:
- Around line 1-14: The outputs currently reference incorrect resource labels:
update output "cloudfront_domain_name" and "cloudfront_distribution_id" to use
aws_cloudfront_distribution.cdn-web-elb-distribution instead of
aws_cloudfront_distribution.cf, and update output "acm_certificate_arn" to use
aws_acm_certificate.cert instead of aws_acm_certificate.acm; ensure the value
expressions reference the correct resource attributes (domain_name, id, arn) on
those exact resource symbols.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 65b5bf29-90cd-448a-9499-97dd4b8ee0c4
📒 Files selected for processing (5)
- .github/workflows/terraform-ci.yml
- DevOps-Project-11/modules/alb-tg/outputs.tf
- DevOps-Project-11/modules/aws-rds/outputs.tf
- DevOps-Project-11/modules/aws-waf-cdn-acm-route53/outputs.tf
- DevOps-Project-11/outputs.tf
```yaml
on:
  push:
    branches: [main]
    paths:
      - "DevOps-Project-11/**"
  pull_request:
    branches: [main]
    paths:
      - "DevOps-Project-11/**"
```
Branch mismatch: workflow triggers on main but PR targets master.
The workflow is configured to trigger on push and pull_request events to the main branch, but according to the PR objectives, this PR targets the master branch. This means the workflow will not run for this PR or future changes targeting master.
Either update the workflow to trigger on master, or ensure the repository's default branch is main.
🐛 Proposed fix if targeting the master branch

```diff
 on:
   push:
-    branches: [main]
+    branches: [master]
     paths:
       - "DevOps-Project-11/**"
   pull_request:
-    branches: [main]
+    branches: [master]
     paths:
       - "DevOps-Project-11/**"
```
```yaml
  plan:
    name: Terraform Plan
    runs-on: ubuntu-latest
    needs: validate
    if: github.event_name == 'pull_request'
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ${{ env.TF_VERSION }}

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Terraform Init
        working-directory: ${{ env.WORKING_DIR }}
        run: terraform init

      - name: Terraform Plan
        working-directory: ${{ env.WORKING_DIR }}
        run: terraform plan -var-file=variables.tfvars -no-color -input=false
```
Plan job is well-configured but will fail due to module output errors.
The plan job structure is correct—it runs only on PRs, configures AWS credentials, and uses the correct var-file. However, it will fail during terraform validate (in the earlier job) due to the resource label mismatches in the module outputs (alb-tg/outputs.tf and aws-rds/outputs.tf).
```hcl
output "alb_dns_name" {
  description = "The DNS name of the Application Load Balancer"
  value       = aws_lb.alb.dns_name
}

output "alb_arn" {
  description = "The ARN of the Application Load Balancer"
  value       = aws_lb.alb.arn
}

output "target_group_arn" {
  description = "The ARN of the ALB target group"
  value       = aws_lb_target_group.tg.arn
}
```
Resource label mismatches will cause terraform to fail.

All outputs reference incorrect resource labels. According to DevOps-Project-11/modules/alb-tg/main.tf:
- The ALB resource is labeled `web-elb`, not `alb`
- The target group resource is labeled `web-tg`, not `tg`

These mismatches will cause terraform validate and terraform plan to fail with undefined reference errors.
🐛 Proposed fix to correct resource references

```diff
 output "alb_dns_name" {
   description = "The DNS name of the Application Load Balancer"
-  value       = aws_lb.alb.dns_name
+  value       = aws_lb.web-elb.dns_name
 }

 output "alb_arn" {
   description = "The ARN of the Application Load Balancer"
-  value       = aws_lb.alb.arn
+  value       = aws_lb.web-elb.arn
 }

 output "target_group_arn" {
   description = "The ARN of the ALB target group"
-  value       = aws_lb_target_group.tg.arn
+  value       = aws_lb_target_group.web-tg.arn
 }
```
```hcl
output "rds_cluster_endpoint" {
  description = "The writer endpoint of the Aurora RDS cluster"
  value       = aws_rds_cluster.rds-cluster.endpoint
}

output "rds_reader_endpoint" {
  description = "The reader endpoint of the Aurora RDS cluster"
  value       = aws_rds_cluster.rds-cluster.reader_endpoint
}

output "rds_cluster_port" {
  description = "The port of the Aurora RDS cluster"
  value       = aws_rds_cluster.rds-cluster.port
}
```
Resource label mismatch will cause terraform to fail.
All outputs reference aws_rds_cluster.rds-cluster, but according to DevOps-Project-11/modules/aws-rds/main.tf (lines 9-25), the RDS cluster resource is labeled aurora_cluster (with underscore), not rds-cluster (with hyphen).
This mismatch will cause terraform validate and terraform plan to fail with undefined reference errors.
🐛 Proposed fix to correct resource references

```diff
 output "rds_cluster_endpoint" {
   description = "The writer endpoint of the Aurora RDS cluster"
-  value       = aws_rds_cluster.rds-cluster.endpoint
+  value       = aws_rds_cluster.aurora_cluster.endpoint
 }

 output "rds_reader_endpoint" {
   description = "The reader endpoint of the Aurora RDS cluster"
-  value       = aws_rds_cluster.rds-cluster.reader_endpoint
+  value       = aws_rds_cluster.aurora_cluster.reader_endpoint
 }

 output "rds_cluster_port" {
   description = "The port of the Aurora RDS cluster"
-  value       = aws_rds_cluster.rds-cluster.port
+  value       = aws_rds_cluster.aurora_cluster.port
 }
```
```hcl
output "cloudfront_domain_name" {
  description = "The domain name of the CloudFront distribution"
  value       = aws_cloudfront_distribution.cf.domain_name
}

output "cloudfront_distribution_id" {
  description = "The ID of the CloudFront distribution"
  value       = aws_cloudfront_distribution.cf.id
}

output "acm_certificate_arn" {
  description = "The ARN of the ACM certificate"
  value       = aws_acm_certificate.acm.arn
}
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Description: Verify CloudFront and ACM resource labels in the module's main.tf
echo "=== Searching for CloudFront distribution resource label ==="
rg -n 'resource\s+"aws_cloudfront_distribution"' DevOps-Project-11/modules/aws-waf-cdn-acm-route53/
echo ""
echo "=== Searching for ACM certificate resource label ==="
rg -n 'resource\s+"aws_acm_certificate"' DevOps-Project-11/modules/aws-waf-cdn-acm-route53/
```

Repository: NotHarshhaa/DevOps-Projects
Length of output: 411
Fix resource label references in outputs — both CloudFront and ACM certificate references are incorrect.

The outputs reference non-existent resource labels:
- Line 3 references `aws_cloudfront_distribution.cf`, but the resource is labeled `aws_cloudfront_distribution.cdn-web-elb-distribution` (in cdn.tf)
- Line 13 references `aws_acm_certificate.acm`, but the resource is labeled `aws_acm_certificate.cert` (in acm.tf)

Update lines 3 and 8 to use `aws_cloudfront_distribution.cdn-web-elb-distribution` and line 13 to use `aws_acm_certificate.cert`.
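Applying the label fixes above, the module's corrected outputs.tf would read roughly as follows; the attribute names (`domain_name`, `id`, `arn`) are the standard exported attributes of these AWS provider resources:

```hcl
output "cloudfront_domain_name" {
  description = "The domain name of the CloudFront distribution"
  value       = aws_cloudfront_distribution.cdn-web-elb-distribution.domain_name
}

output "cloudfront_distribution_id" {
  description = "The ID of the CloudFront distribution"
  value       = aws_cloudfront_distribution.cdn-web-elb-distribution.id
}

output "acm_certificate_arn" {
  description = "The ARN of the ACM certificate"
  value       = aws_acm_certificate.cert.arn
}
```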
Hello @NotHarshhaa,
Thank you for building and maintaining this excellent Two-Tier AWS architecture project. It has been a valuable learning resource for the DevOps community. I would like to contribute the following enhancements that improve the developer experience and project reliability — without altering the original tutorial flow.
Summary of Changes

**1. GitHub Actions CI/CD Pipeline (`.github/workflows/terraform-ci.yml`)**
- Triggers on `push` and `pull_request` to the `main` branch.
- Format check: ensures all `.tf` files follow the canonical `terraform fmt` style.
- Validate: runs `terraform init -backend=false` + `terraform validate` to catch syntax and configuration errors early.
- Plan: runs `terraform plan -var-file=variables.tfvars` on pull requests using AWS credentials stored in GitHub Secrets, giving reviewers a clear preview of infrastructure changes before merging.

**2. Enhanced Terraform Outputs (`outputs.tf` + module-level outputs)**
- Added `outputs.tf` files to the `alb-tg`, `aws-rds`, and `aws-waf-cdn-acm-route53` modules to expose key resource attributes.
- Added a root-level `outputs.tf` that prints the key endpoints (ALB DNS, CloudFront domain, RDS endpoints) after a successful `terraform apply`.

**3. Code Formatting**
- Ran `terraform fmt -recursive` across all `.tf` files to ensure consistent formatting.

Why These Changes Add Value
- After `terraform apply`, users immediately see the ALB and CloudFront endpoints they need — no manual digging through the AWS Console.

Setup Required for the `terraform plan` Job

For the Plan job to run, the following GitHub repository secrets must be configured:
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`

Testing
- `terraform fmt -check -recursive` passes with no diff
- `terraform validate` passes successfully

Thank you for considering this contribution. I am happy to make any adjustments based on your feedback.
Best regards