How to pass a token environment variable into a provider module?

I have written a Terraform configuration that successfully spins up 2 DigitalOcean droplets as nodes and installs a Kubernetes master on one and a worker on the other.



For this, it uses bash shell environment variables that are defined as:



export DO_ACCESS_TOKEN="..."
export TF_VAR_DO_ACCESS_TOKEN=$DO_ACCESS_TOKEN


It can then be used in the script:



provider "digitalocean" {
version = "~> 1.0"
token = "${var.DO_ACCESS_TOKEN}"
}
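
Terraform maps any environment variable named TF_VAR_<NAME> onto a declared input variable of the same <NAME>, so the two pieces that have to line up are, roughly (a minimal sketch reusing the names above):

# shell: the TF_VAR_ prefix is what makes Terraform pick the value up
export TF_VAR_DO_ACCESS_TOKEN=$DO_ACCESS_TOKEN

# vars.tf in the configuration being planned: the matching variable must be declared
variable "DO_ACCESS_TOKEN" {}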


Now, having all these files in one directory is getting a bit messy, so I'm trying to implement this as modules.



I thus have a provider module offering access to my DigitalOcean account, a droplet module spinning up a droplet with a given name, a Kubernetes master module and a Kubernetes worker module.



I can run the terraform init command.



But when running the terraform plan command, it asks me for the provider token (which it rightfully did not do before I implemented modules):



$ terraform plan
provider.digitalocean.token
The token key for API operations.

Enter a value:


It seems that it cannot find the token defined in the bash shell environment.



I have the following modules:



.
├── digitalocean
│   ├── droplet
│   │   ├── create-ssh-key-certificate.sh
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── vars.tf
│   └── provider
│       ├── main.tf
│       └── vars.tf
└── kubernetes
    ├── master
    │   ├── configure-cluster.sh
    │   ├── configure-user.sh
    │   ├── create-namespace.sh
    │   ├── create-role-binding-deployment-manager.yml
    │   ├── create-role-deployment-manager.yml
    │   ├── kubernetes-bootstrap.sh
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── vars.tf
    └── worker
        ├── kubernetes-bootstrap.sh
        ├── main.tf
        ├── outputs.tf
        └── vars.tf


In my project directory, I have a vars.tf file:



$ cat vars.tf 
variable "DO_ACCESS_TOKEN" {}
variable "SSH_PUBLIC_KEY" {}
variable "SSH_PRIVATE_KEY" {}
variable "SSH_FINGERPRINT" {}


And I have a provider.tf file:



$ cat provider.tf
module "digitalocean" {
  source          = "/home/stephane/dev/terraform/modules/digitalocean/provider"
  DO_ACCESS_TOKEN = "${var.DO_ACCESS_TOKEN}"
}


And it calls the digitalocean provider module defined as:



$ cat digitalocean/provider/vars.tf
variable "DO_ACCESS_TOKEN" {}

$ cat digitalocean/provider/main.tf
provider "digitalocean" {
  version = "~> 1.0"
  token   = "${var.DO_ACCESS_TOKEN}"
}


UPDATE: The provided solution led me to organize my project like this:



.
├── env
│   ├── dev
│   │   ├── backend.tf -> /home/stephane/dev/terraform/utils/backend.tf
│   │   ├── digital-ocean.tf -> /home/stephane/dev/terraform/providers/digital-ocean.tf
│   │   ├── kubernetes-master.tf -> /home/stephane/dev/terraform/stacks/kubernetes-master.tf
│   │   ├── kubernetes-worker-1.tf -> /home/stephane/dev/terraform/stacks/kubernetes-worker-1.tf
│   │   ├── outputs.tf -> /home/stephane/dev/terraform/stacks/outputs.tf
│   │   ├── terraform.tfplan
│   │   ├── terraform.tfstate
│   │   ├── terraform.tfstate.backup
│   │   ├── terraform.tfvars
│   │   └── vars.tf -> /home/stephane/dev/terraform/utils/vars.tf
│   ├── production
│   └── staging
└── README.md


With a custom library of providers, stacks and modules, layered like:



.
├── modules
│   ├── digitalocean
│   │   └── droplet
│   │       ├── main.tf
│   │       ├── outputs.tf
│   │       ├── scripts
│   │       │   └── create-ssh-key-and-csr.sh
│   │       └── vars.tf
│   └── kubernetes
│       ├── master
│       │   ├── main.tf
│       │   ├── outputs.tf
│       │   ├── scripts
│       │   │   ├── configure-cluster.sh
│       │   │   ├── configure-user.sh
│       │   │   ├── create-namespace.sh
│       │   │   ├── create-role-binding-deployment-manager.yml
│       │   │   ├── create-role-deployment-manager.yml
│       │   │   ├── kubernetes-bootstrap.sh
│       │   │   └── sign-ssh-csr.sh
│       │   └── vars.tf
│       └── worker
│           ├── main.tf
│           ├── outputs.tf
│           ├── scripts
│           │   └── kubernetes-bootstrap.sh -> /home/stephane/dev/terraform/modules/kubernetes/master/scripts/kubernetes-bootstrap.sh
│           └── vars.tf
├── providers
│   └── digital-ocean.tf
├── stacks
│   ├── kubernetes-master.tf
│   ├── kubernetes-worker-1.tf
│   └── outputs.tf
└── utils
    ├── backend.tf
    └── vars.tf









asked Dec 29 '18 at 18:39 by Stephane


1 Answer

The simplest option you have here is to not define the provider at all and just use the DIGITALOCEAN_TOKEN environment variable, as mentioned in the Digital Ocean provider docs.



This will always use the latest version of the Digital Ocean provider, but otherwise it will be functionally the same as what you're currently doing.
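
A minimal sketch of that approach (the token value is a placeholder):

# no provider "digitalocean" block is needed anywhere in the configuration;
# the provider reads its token straight from the environment
export DIGITALOCEAN_TOKEN="..."
terraform init
terraform plan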



However, if you do want to define the provider block (so that you can pin the version of the provider used, define a partial state configuration, or set the required Terraform version), then you just need to make sure that the provider-defining files are either in the directory you are applying or in a sourced module. If you're doing partial state configuration, they must be in the directory itself, not in a module, because state configuration happens before module fetching.
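
As a rough sketch of what that looks like in the directory you apply from (the module name and source path here are illustrative), the provider block sits next to the module calls and is inherited by them, so the module itself never needs a token variable:

# digital-ocean.tf, in the directory you run terraform from
provider "digitalocean" {
  version = "~> 1.0"
}

# kubernetes-master.tf, in the same directory; the child module inherits the provider
module "kubernetes_master" {
  source = "../modules/kubernetes/master"
}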



I normally achieve this by simply symlinking my provider file everywhere that I want to apply my Terraform code (so everywhere that isn't just a module).
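
For example, assuming the layout shown below, the symlinks could be created with something like:

cd production
ln -s ../providers/digital-ocean.tf digital-ocean.tf
ln -s ../stacks/kubernetes-master.tf kubernetes-master.tf
ln -s ../stacks/kubernetes-worker.tf kubernetes-worker.tf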



As an example, you might have a directory structure that looks something like this:



.
├── modules
│   └── kubernetes
│       ├── master
│       │   ├── main.tf
│       │   ├── output.tf
│       │   └── variables.tf
│       └── worker
│           ├── main.tf
│           ├── output.tf
│           └── variables.tf
├── production
│   ├── digital-ocean.tf -> ../providers/digital-ocean.tf
│   ├── kubernetes-master.tf -> ../stacks/kubernetes-master.tf
│   ├── kubernetes-worker.tf -> ../stacks/kubernetes-worker.tf
│   └── terraform.tfvars
├── providers
│   └── digital-ocean.tf
├── stacks
│   ├── kubernetes-master.tf
│   └── kubernetes-worker.tf
└── staging
    ├── digital-ocean.tf -> ../providers/digital-ocean.tf
    ├── kubernetes-master.tf -> ../stacks/kubernetes-master.tf
    ├── kubernetes-worker.tf -> ../stacks/kubernetes-worker.tf
    └── terraform.tfvars


This layout has 2 "locations" where you would perform Terraform actions (e.g. plan/apply): staging and production (given as an example of keeping things as similar as possible, with slight variations between environments). These directories contain only symlinked files, apart from the terraform.tfvars file, which lets you vary only a few constrained things while keeping your staging and production environments the same.



The symlinked provider file would contain any provider-specific configuration (for AWS this would normally include the region things should be created in; for Digital Ocean it is probably just pinning the version of the provider that should be used). It could also contain a partial Terraform state configuration, to minimise the configuration you need to pass when running terraform init, or simply set the required Terraform version. An example might look something like this:



          provider "digitalocean" {
          version = "~> 1.0"
          }

          terraform {
          required_version = "=0.11.10"

          backend "s3" {
          region = "eu-west-1"
          encrypt = true
          kms_key_id = "alias/terraform-state"
          dynamodb_table = "terraform-locks"
          }
          }
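
Since that backend block is only partial (it has no bucket or key), the remaining settings would be supplied at init time, for instance (the bucket and key names here are hypothetical):

terraform init \
  -backend-config="bucket=my-terraform-state" \
  -backend-config="key=production/terraform.tfstate"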





answered Dec 31 '18 at 11:22 by ydaetskcoR

• Your solution not only solved my issue, but it also provided me with guidelines on how to layer my project. On top of it, I discovered what CNC means and what CN Cloud does. Pretty cool! – Stephane, Dec 31 '18 at 12:47











• The key part for me was when you wrote that state configuration happens before module fetching. – Stephane, Dec 31 '18 at 12:54










