C2W3 - Practice Lab 1: Environment Bucket Access Maybe Broken - Implementing DataOps with Terraform

Hello

After completing all the required Terraform files and saving them, I get a bunch of these very curious error messages after running terraform init:

Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...

│ Error: Unsupported argument
│
│   on modules/bastion_host/variables.tf line 27, in variable "None":
│   27: None = None
│
│ An argument named "None" is not expected here.

I checked the entire file and there is not a single "None" left. The same goes for all the other files that are supposed to be affected.

It seems Terraform can't access my bucket, even though it is properly defined in S3.
I suspect something is missing in the IAM / security group policies, not in my configuration but in the defaults applied when the environment is created.

Earlier today, terraform init worked properly; I got some errors, but those were legitimate errors, not like this "None" error.

Now I am pretty sure that my files are correct, but I can't get past these "None" errors, where Terraform doesn't even seem to read my edited (and saved) files.

I tested by generating the solution, and I get exactly the same "None" error!

I got through the lab by submitting it in this state, as the grader seems to only check whether the files are completed correctly, not whether terraform init, plan, and apply succeed…

I would still like to pass this practice lab with Terraform actually producing the proper result!

Please let me know if you have also encountered this problem, or if it is only happening to me.

Your help would be greatly appreciated!

Thanks

Hello @Christophe_Lancien
It is strange. According to the error, line 27 of modules/bastion_host/variables.tf must have still contained None = None when you ran terraform init. Did you make sure to save the changes to your files before running the command? This has happened to me in the past when I had files open and had changed them, but forgot to save before moving on.
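To illustrate the difference, the placeholder in the starter file looks roughly like the first block below, and a completed definition like the second (the variable name, type, and values here are only an example, not the exact content of the lab file):

# Placeholder as shipped in the starter code - terraform init rejects the "None" argument
variable "None" {
  None = None
}

# Example of a completed variable definition (illustrative name and values only)
variable "bastion_instance_type" {
  type        = string
  description = "EC2 instance type for the bastion host"
  default     = "t3.micro"
}

If terraform init still reports the placeholder after you have replaced it and saved, then it is reading a different copy of the file than the one you edited.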

Thank you for your reply @Amir_Zare

The problem affects all the Terraform configuration files, not only variables.tf.
I saved all the files multiple times after editing…
Even when I deployed the solution, Terraform still seemed to read only the original unedited files…

My S3 bucket was present, so there must be an access issue stemming from the environment configuration.

If I am the only one affected, I think the problem may come from starting a second lab attempt too quickly after the first, instead of waiting about 30 minutes to give AWS enough time to provision a clean environment. I will try again, starting fresh this time without stressing AWS!

Sorry for the inconvenience @Christophe_Lancien

I think you are the first one facing this issue, as I haven't seen it reported in the past. It shouldn't behave like this. When you cd terraform and run the terraform commands there, Terraform looks for configuration files in that directory. Could you please check this time whether there are any Jupyter notebook checkpoint folders in there? They can mess up Terraform actions too, as they contain copies of the files with the same names. You can search for folders named .ipynb_checkpoints to see if any are present.
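From inside the terraform directory, something along these lines should show whether any checkpoint folders exist and remove them if found (assuming the lab's standard shell; adjust the paths as needed):

# List any Jupyter checkpoint folders under the current directory
find . -type d -name ".ipynb_checkpoints"

# If any show up, remove them and then re-run terraform init
find . -type d -name ".ipynb_checkpoints" -exec rm -rf {} +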

Please let us know how it goes on your next attempts.
