C4W4 Capstone Part-1 Terraform Crashing Error

In the C4W4 Capstone Part 1 lab, section 4.1 (Landing Zone), I am getting an error where Terraform crashes the terminal. My terraform/modules/extract_job/glue.tf is as follows:

# Complete the "aws_glue_connection" "rds_connection" resource
resource "aws_glue_connection" "rds_connection" {
  name = "${var.project}-connection-rds"

  # At connection_properties, add var.username and var.password to the
  # USERNAME and PASSWORD parameters respectively
  connection_properties = {
    JDBC_CONNECTION_URL = "jdbc:postgresql://${var.host}:${var.port}/${var.database}"
    USERNAME            = var.username
    PASSWORD            = var.password
  }

  # At the physical_connection_requirements configuration, set the subnet_id
  # to data.aws_subnet.private_a.id and the security_group_id_list to a list
  # containing the element data.aws_security_group.db_sg.id
  physical_connection_requirements {
    availability_zone      = data.aws_subnet.private_a.availability_zone
    security_group_id_list = [data.aws_security_group.db_sg.id]
    subnet_id              = data.aws_subnet.private_a.id
  }
}
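For reference, the connection resource above only works if the module declares matching input variables. As a hedged illustration (the variable names are inferred from the `var.` references in glue.tf, not taken from the official lab files), the module's variables.tf would contain something like:

```hcl
# Illustrative sketch of modules/extract_job/variables.tf — names inferred
# from the references above; the lab's actual file may differ.
variable "project" { type = string }
variable "host" { type = string }
variable "port" { type = number }
variable "database" { type = string }
variable "username" { type = string }
variable "password" {
  type      = string
  sensitive = true # avoid echoing the password in plans and logs
}
```

If any of these declarations are missing, `terraform apply` prompts interactively for every undeclared-but-referenced value, which matches the "starts asking many questions" symptom described below.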

# Complete the resource "aws_glue_job" "rds_ingestion_etl_job"
resource "aws_glue_job" "rds_ingestion_etl_job" {
  name = "${var.project}-rds-extract-job"

  # Set the role_arn parameter to aws_iam_role.glue_role.arn
  role_arn     = aws_iam_role.glue_role.arn
  glue_version = "4.0"

  # Set the connections parameter to a list containing the RDS connection
  # you just created with aws_glue_connection.rds_connection.name
  connections = [aws_glue_connection.rds_connection.name]

  command {
    name            = "glueetl"
    script_location = "s3://${aws_s3_bucket.scripts.id}/${aws_s3_object.glue_job_extract_db.key}"
    python_version  = "3"
  }

  # At default_arguments, complete the arguments
  default_arguments = {
    "--enable-job-insights" = "true"
    "--job-language"        = "python"
    # Set "--rds_connection" as aws_glue_connection.rds_connection.name
    "--rds_connection"      = aws_glue_connection.rds_connection.name
    # Set "--data_lake_bucket" as data.aws_s3_bucket.data_lake.bucket
    "--data_lake_bucket"    = data.aws_s3_bucket.data_lake.bucket
  }

  # Set up the timeout to 5 and the number of workers to 2.
  # The time unit here is minutes.
  timeout           = 5
  number_of_workers = 2

  worker_type = "G.1X"
}

# Complete the resource "aws_glue_job" "api_users_ingestion_etl_job"
resource "aws_glue_job" "api_users_ingestion_etl_job" {
  name         = "${var.project}-api-users-extract-job"
  role_arn     = aws_iam_role.glue_role.arn
  glue_version = "4.0"

  command {
    name            = "glueetl"
    script_location = "s3://${aws_s3_bucket.scripts.id}/${aws_s3_object.glue_job_extract_api.key}"
    python_version  = "3"
  }

  # Set the arguments in the default_arguments configuration parameter
  default_arguments = {
    "--enable-job-insights" = "true"
    "--job-language"        = "python"
    # Set "--api_start_date" to "2020-01-01"
    "--api_start_date"      = "2020-01-01"
    # Set "--api_end_date" to "2020-01-31"
    "--api_end_date"        = "2020-01-31"
    # Replace the placeholder with the value from the CloudFormation outputs
    "--api_url"             = "http://ec2-54-146-80-3.compute-1.amazonaws.com/users"
    # Notice the target path. This line of code is complete - no changes are required
    "--target_path"         = "s3://${data.aws_s3_bucket.data_lake.bucket}/landing_zone/api/users"
  }

  # Set up the timeout to 5 and the number of workers to 2.
  # The time unit here is minutes.
  timeout           = 5
  number_of_workers = 2

  worker_type = "G.1X"
}

# Complete the resource "aws_glue_job" "api_sessions_ingestion_etl_job"
resource "aws_glue_job" "api_sessions_ingestion_etl_job" {
  name         = "${var.project}-api-sessions-extract-job"
  role_arn     = aws_iam_role.glue_role.arn
  glue_version = "4.0"

  command {
    name            = "glueetl"
    script_location = "s3://${aws_s3_bucket.scripts.id}/${aws_s3_object.glue_job_extract_api.key}"
    python_version  = "3"
  }

  # Set the arguments in the default_arguments configuration parameter
  default_arguments = {
    "--enable-job-insights" = "true"
    "--job-language"        = "python"
    # Set "--api_start_date" to "2020-01-01"
    "--api_start_date"      = "2020-01-01"
    # Set "--api_end_date" to "2020-01-31"
    "--api_end_date"        = "2020-01-31"
    # Replace the placeholder with the value from the CloudFormation outputs
    "--api_url"             = "http://ec2-54-146-80-3.compute-1.amazonaws.com/sessions"
    # Notice the target path. This line of code is complete - no changes are required
    "--target_path"         = "s3://${data.aws_s3_bucket.data_lake.bucket}/landing_zone/api/sessions"
  }

  # Set up the timeout to 5 and the number of workers to 2.
  # The time unit here is minutes.
  timeout           = 5
  number_of_workers = 2

  worker_type = "G.1X"
}

I cannot find an error in this code, so I don't understand why the error is occurring. If I somehow manage to keep the terminal alive while running the terraform apply command, it starts asking many questions, prompting me for values such as a curated database name and various other names, usernames, and passwords whose answers I don't know. My whole specialization is about to be completed and is being delayed only because of this lab error. Kindly @mentors resolve this.

Hello @hamza_safwan
Please use the command terraform apply -no-color 2> errors.txt so that the Terraform logs are written to a text file. Then you can see what's going wrong and what's causing the issue.
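The `2>` in that command redirects the command's error stream (stderr) into a file, leaving normal output on the terminal. You can see the mechanism with any command; here is a small demonstration using `ls` on a path that does not exist (purely illustrative, nothing Terraform-specific):

```shell
# stderr goes to errors.txt instead of scrolling past in the terminal
ls /nonexistent-dir 2> errors.txt || true

# inspect the captured error message afterwards
cat errors.txt
```

The same pattern applies to `terraform validate -no-color 2> errors.txt`, which checks syntax and references without touching any infrastructure.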

@Amir_Zare I did. The logs say there is an issue with variables in main.tf, in module "extract_job", i.e.:
Error: Unsupported argument

on main.tf line 7, in module "extract_job":
7: private_subnet_a_id = var.private_subnet_a_id

An argument named "private_subnet_a_id" is not expected here.

Error: Unsupported argument

on main.tf line 14, in module "extract_job":
14: data_lake_name = var.data_lake_name

An argument named "data_lake_name" is not expected here.
But we were told not to edit this main.tf file, nor to add or remove anything; we are only supposed to uncomment some lines in main.tf. Kindly look into what is wrong with the current version of this lab and help me accordingly, so that I can complete my specialization, which is stuck only because of this lab. In this version, ./modules/extract_job/glue.tf is 119 lines; in the previous one it was 116 lines.
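For context: an "Unsupported argument" error on a module block means main.tf is passing an argument that the module itself never declares as an input variable, so the fix belongs in the module's files, not in main.tf. As a hedged illustration only (the actual lab files and types may differ), the declarations the errors above are asking for would look like:

```hcl
# Illustrative sketch of modules/extract_job/variables.tf — not the official
# lab content; shown only to explain what "Unsupported argument" means.
variable "private_subnet_a_id" {
  type        = string
  description = "ID of the private subnet used by the Glue connection"
}

variable "data_lake_name" {
  type        = string
  description = "Name of the data lake S3 bucket"
}
```

When these exist in the module, `module "extract_job" { private_subnet_a_id = ... }` in main.tf becomes valid.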

@hamza_safwan the current version of the main.tf file does not have any private_subnet_a_id in it. I suppose your lab version is old. Kindly request a lab refresh using this form and try again after your lab has been refreshed.

@Amir_Zare Now, in the current version, where main.tf no longer has any private_subnet_a_id in it, ./modules/extract_job/glue.tf gives the following errors:
Error: Reference to undeclared resource

on modules/extract_job/glue.tf line 15, in resource "aws_glue_connection" "rds_connection":
15: availability_zone = data.aws_subnet.private_a.availability_zone

A data resource "aws_subnet" "private_a" has not been declared in module.extract_job.

Error: Reference to undeclared resource

on modules/extract_job/glue.tf line 17, in resource "aws_glue_connection" "rds_connection":
17: subnet_id = data.aws_subnet.private_a.id

A data resource "aws_subnet" "private_a" has not been declared in module.extract_job.

Error: Reference to undeclared resource

on modules/extract_job/glue.tf line 60, in resource "aws_glue_job" "api_users_ingestion_etl_job":
60: script_location = "s3://${aws_s3_bucket.scripts.id}/${aws_s3_object.glue_job_extract_api.id}"

A managed resource "aws_s3_bucket" "scripts" has not been declared in module.extract_job.

Error: Reference to undeclared resource

on modules/extract_job/glue.tf line 60, in resource "aws_glue_job" "api_users_ingestion_etl_job":
60: script_location = "s3://${aws_s3_bucket.scripts.id}/${aws_s3_object.glue_job_extract_api.id}"

A managed resource "aws_s3_object" "glue_job_extract_api" has not been declared in module.extract_job.

Error: Reference to undeclared resource

on modules/extract_job/glue.tf line 75, in resource "aws_glue_job" "api_users_ingestion_etl_job":
75: "--target_path" = "s3://${data.aws_s3_bucket.data_lake.bucket}/landing_zone/api/users"

A data resource "aws_s3_bucket" "data_lake" has not been declared in module.extract_job.

Error: Reference to undeclared resource

on modules/extract_job/glue.tf line 93, in resource "aws_glue_job" "api_sessions_ingestion_etl_job":
93: script_location = "s3://${aws_s3_bucket.scripts.id}/${aws_s3_object.glue_job_extract_api.id}"

A managed resource "aws_s3_bucket" "scripts" has not been declared in module.extract_job.

Error: Reference to undeclared resource

on modules/extract_job/glue.tf line 93, in resource "aws_glue_job" "api_sessions_ingestion_etl_job":
93: script_location = "s3://${aws_s3_bucket.scripts.id}/${aws_s3_object.glue_job_extract_api.id}"

A managed resource "aws_s3_object" "glue_job_extract_api" has not been declared in module.extract_job.

Error: Reference to undeclared resource

on modules/extract_job/glue.tf line 108, in resource "aws_glue_job" "api_sessions_ingestion_etl_job":
108: "--target_path" = "s3://${data.aws_s3_bucket.data_lake.bucket}/landing_zone/api/sessions"

A data resource "aws_s3_bucket" "data_lake" has not been declared in module.extract_job.
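For context: a "Reference to undeclared resource" error means glue.tf refers to data sources and resources that are not defined anywhere in the module, usually because a companion file (e.g. a data-sources or S3 file) is missing or from a different lab version. Purely as an illustration of what such declarations look like (the variable names `db_sg_id` and the lookup style here are hypothetical, not from the official lab files), the module would need something along these lines:

```hcl
# Illustrative only — the lab's actual filters, names, and file layout may differ.
data "aws_subnet" "private_a" {
  id = var.private_subnet_a_id # hypothetical module input
}

data "aws_security_group" "db_sg" {
  id = var.db_sg_id # hypothetical module input
}

data "aws_s3_bucket" "data_lake" {
  bucket = var.data_lake_name # hypothetical module input
}
```

This is why a lab refresh is the right fix here: the declarations should come from the lab's own files rather than being hand-written.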

@hamza_safwan I guess your file versions are mixed up and there are conflicts between them. Please fill out the form and try again afterwards.

@Amir_Zare Yep, I have filled out the form. Can you tell me when I should try again, or will I receive an email when the workspace is refreshed?

Also, when I try again, will the version be refreshed and updated automatically, or will I need to pull it from the Help tab in Coursera's VS Code environment?

@hamza_safwan All the versions will be updated after the refresh. Please wait for 2 business days before trying again.