WintelGuy.com

Getting Started with Terraform MCP Server and VS Code


Introduction

Modern Infrastructure as Code (IaC) workflows increasingly rely on intelligent tooling to improve productivity, reduce errors, and accelerate delivery. While Terraform provides a powerful declarative model for provisioning infrastructure, writing and maintaining Terraform configurations can still involve frequent context switching between documentation, provider schemas, and validation tools.

This is where the Model Context Protocol (MCP) enters the picture.

MCP is a standardized way for AI-powered tools to securely interact with external systems through structured, tool-based communication. Instead of relying purely on text prediction, an AI assistant can invoke specialized tools, such as Terraform-aware services, to retrieve schemas, validate configuration logic, or generate accurate resource definitions based on provider metadata.

A Terraform MCP Server acts as a bridge between Terraform and AI-enabled development environments such as Visual Studio Code. It exposes Terraform-specific capabilities (for example, provider inspection or configuration generation) in a structured way that AI tools can safely consume. The result is a development workflow that is:

  • More accurate (schema-aware rather than guess-based)
  • Faster (less manual documentation lookup)
  • Safer (controlled tool execution instead of arbitrary code suggestions)
  • More productive (AI-assisted infrastructure authoring)
  • Less error-prone (suggestions grounded in real-time provider data)

In this tutorial, we will:

  • Explain what MCP is and how it works at a high level
  • Provide an overview of the Terraform MCP Server and its capabilities
  • Walk through prerequisites and local installation steps
  • Configure Visual Studio Code to use the Terraform MCP server
  • Demonstrate how to use it for Terraform code generation
  • Discuss MCP integration with the HCP Terraform development environment

By the end of this guide, you will have a working Terraform MCP setup integrated into your local development environment and understand how it enhances your Infrastructure as Code workflow.


What Is MCP?

The Model Context Protocol (MCP) is an open-source protocol developed by Anthropic that standardizes how AI applications interact with external tools, services, and systems.

At a high level, MCP enables AI assistants to move beyond simple text prediction and safely execute structured operations, such as reading files, inspecting schemas, validating configurations, or querying APIs, through clearly defined tool interfaces.

MCP allows AI clients to:

  • Discover available tools
  • Invoke them with structured input
  • Receive structured output
  • Incorporate real execution results into responses

MCP introduces a standardized communication layer between AI clients and executable tools. It is built around a simple but powerful architecture:

  • MCP Client - This is the AI-powered application (for example, an AI assistant in VS Code) that discovers tools through the MCP Server, invokes them with structured input, and incorporates their output into its responses.
  • MCP Server - This is a service that exposes tool capabilities in a structured way that AI clients can interact with. It defines the tool schema, handles execution requests, and returns results.
  • Tools - These are discrete, well-defined operations the server provides, such as retrieving documentation, inspecting local files, or generating configuration snippets.
  • Transport Layer - It handles the underlying communication between components. MCP commonly uses local process communication (such as standard input/output streams) to connect the client and server.

How MCP Works (Conceptual Flow):

  • Tool Registration - The MCP Server registers available tools (for example, Terraform provider inspection or configuration generation) and their input/output schemas.
  • Tool Discovery - The MCP Client (AI assistant) queries the server to discover available tools and their capabilities.
  • Tool Invocation - When the AI assistant determines that a tool should be used (for example, to generate a resource block), it sends a structured request to the MCP Server with the necessary input parameters.
  • Execution and Response - The MCP Server executes the requested tool operation, captures the output, and returns it to the client in a structured format.
  • Incorporation into AI Responses - The AI assistant incorporates the real execution results into its responses, providing accurate, context-aware suggestions or information to the user.
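Under the hood, these steps are carried out as JSON-RPC 2.0 messages defined by the MCP specification: the client first calls tools/list to discover what is available, then tools/call to invoke a specific tool. Below is a simplified sketch of an invocation request; the tool name matches one exposed by the Terraform MCP Server, but the argument name is illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "search_providers",
    "arguments": {
      "query": "random_pet"
    }
  }
}
```

The server replies with a result object containing structured content (typically text blocks), which the client then feeds back into the model's context.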

With MCP servers, AI assistants can provide much more accurate and useful responses by relying on real-time or domain-specific data sources. For example, a Terraform-aware AI assistant can use an MCP server to retrieve the exact schema for an AWS S3 bucket resource, ensuring that any generated configuration is valid and up-to-date with the latest provider version.
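As a sketch of what schema-aware generation looks like in practice, a minimal, currently valid S3 bucket definition (bucket name and tags below are placeholders):

```terraform
# Minimal aws_s3_bucket resource; in recent AWS provider versions,
# settings such as ACLs and versioning are managed through separate
# resources rather than inline arguments.
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket"

  tags = {
    Environment = "dev"
  }
}
```

This is exactly the kind of detail that real-time schema data helps an assistant get right, since older training data may still suggest deprecated inline arguments.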

In short, MCP transforms AI from a predictive assistant into a tool-aware development partner.


Overview of Terraform MCP Server

The Terraform MCP Server is designed specifically for the Terraform ecosystem. It enables AI models and MCP-aware tools to interact with Terraform provider documentation, modules, policies, and workspace information in real time, helping ensure that generated configurations are accurate, current, and based on up-to-date provider metadata rather than relying on potentially outdated training data.

Key Capabilities and Features

At its core, the Terraform MCP Server acts as a bridge between AI clients and Terraform-related data sources, such as the Terraform Registry, Terraform Enterprise, or HCP Terraform. By exposing a suite of specialized tools via MCP, the server allows AI clients (like Visual Studio Code, Claude Desktop, or other MCP-compatible editors) to request and receive precise, structured information about providers, modules, and other Terraform resources.

Its key capabilities include:

  • Real-Time Provider Documentation - AI clients can query the server for up-to-date documentation on Terraform providers, including resource types, arguments, and usage examples. This ensures that any generated configuration is based on the latest provider information.
  • Module Discovery and Metadata - The server can provide detailed information about Terraform modules, including input variables, outputs, and usage examples. This allows AI clients to generate module blocks with correct parameters and understand how to use modules effectively.
  • Terraform Cloud / Enterprise Support - The server can integrate with Terraform Cloud or Terraform Enterprise to provide insights into workspaces, runs, and policies, allowing AI clients to generate configurations that are aware of the current state of the Terraform environment.
  • Policy and Governance Awareness - For users of Terraform Enterprise or HCP Terraform, the server can expose information about Sentinel policies, workspaces, and runs. This enables AI clients to generate context-aware configurations that align with organizational policies and governance requirements.
  • Multiple Transport Modes - The server can support various transport mechanisms (for example, local process communication or network-based APIs) to interact with AI clients in different environments.

These capabilities allow AI assistants to dynamically query real provider data, significantly improving the reliability of generated Terraform configurations and reducing the risk of errors stemming from outdated or incorrect schema information.
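To make the module-metadata capability concrete: once the server returns a module's inputs, the assistant can emit a module block with valid parameters. Here is a sketch using the public terraform-aws-modules/vpc/aws module; the version constraint and all values are illustrative:

```terraform
# Module block an assistant could generate once it knows the module's
# documented inputs. Adjust the version and values for your environment.
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "demo-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]
}
```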

How It Works

When connected to an AI client that supports MCP, the Terraform MCP Server exposes a set of tools and resources that the model can invoke automatically based on the user's prompts. For example, when you ask about an AWS resource configuration, the AI client can use tools such as search_providers and get_provider_details to fetch the exact documentation for the AWS provider and include accurate schema details in the output.

A typical workflow involves:

  • The AI client discovers what MCP tools the server supports.
  • A user prompt is received that requires Terraform knowledge.
  • The MCP client translates the request into one or more tool invocations.
  • The server executes the tools, possibly querying the Terraform Registry, and returns structured results.
  • The AI model incorporates these results into a response.

Transport and Session Modes

To accommodate different deployment scenarios, the Terraform MCP Server supports:

  • Stdio Transport - Default mode for local environments; communicates over standard input/output using JSON-RPC messages, and is simple to integrate with local editors and development workflows.
  • StreamableHTTP Transport - HTTP-based transport that can be useful for remote or distributed setups; supports both direct HTTP requests and server-sent events (SSE).
  • Session Modes - When using HTTP transport, the server can operate in stateful mode (maintains context between requests) or stateless mode (each request is independent), giving flexibility in how state and context are managed.

Ecosystem Integration

Because it is built on the MCP standard, the Terraform MCP Server integrates with a variety of MCP-aware clients, editors, and AI assistants that support the protocol. This makes it easier for engineers to combine interactive AI workflows with Terraform authoring, whether locally in Visual Studio Code or in more advanced agent workflows.

Note: The feature remains in beta at the time of writing, and it’s recommended for development and experimentation environments rather than critical production infrastructure.

This overview should help readers understand what the Terraform MCP Server is, what it enables, and how it fits into AI-assisted Terraform development before diving into installation and usage steps.


Local Installation of Terraform MCP Server

This tutorial illustrates how to set up the Terraform MCP Server in a local Windows-based development environment with Visual Studio Code.

At a high level, you need:

  • An MCP-compatible AI client that supports the Model Context Protocol (such as the GitHub Copilot extension) installed in Visual Studio Code
  • Docker to run the Terraform MCP Server locally
  • A working local Terraform installation
  • An HCP Terraform environment (optional)

The MCP server acts as a bridge between GitHub Copilot (via MCP support in VS Code) and Terraform's documentation and registry APIs. GitHub Copilot detects when Terraform-specific context is needed and invokes MCP tools exposed by the Terraform MCP Server. The server retrieves structured, real-time provider and module data and passes it back to Copilot, which uses it to generate accurate Terraform configuration snippets. Docker simplifies the setup by allowing you to run the server without manually compiling or installing dependencies.


Prerequisites

First of all, ensure Docker is installed and running. If required, follow the steps outlined in the Install Docker Desktop on Windows guide to set up Docker on your machine.

Next, add the GitHub Copilot Chat extension to VS Code and sign up for GitHub Copilot Free.

GitHub Copilot Free imposes monthly limits on the number of AI-generated completions, but it should be sufficient for testing and experimentation with the Terraform MCP Server. If you find yourself hitting limits, consider upgrading to one of the paid GitHub Copilot plans for higher usage quotas and additional features.


Add MCP Server

You can install the Terraform MCP Server in your user profile or in the current workspace. The workspace configuration is stored in a .vscode/mcp.json file. The user profile configuration is stored in the VS Code user settings directory, which is typically located at %APPDATA%\Code\User on Windows.

In this example, we will set up the Terraform MCP Server in the current workspace. To do this, create a .vscode/mcp.json file in the root of your workspace with the following content:

{
  "servers": {
    "terraform": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "hashicorp/terraform-mcp-server:0.4.0"
      ],
      "type": "stdio"
    }
  }
}

After saving the file, go to the Extensions view, right-click the terraform server in the MCP SERVERS - INSTALLED section or select the gear icon, and then click Start Server.

In the Output panel, you should see logs indicating that the Terraform MCP Server has started successfully. You may see warning messages about TFE client issues, which is expected as we did not include any related configuration in the mcp.json file. The server will still function and provide provider documentation and registry data, but Terraform Cloud features will be unavailable until a valid token is provided.

Next, make sure the Terraform MCP Server tools are enabled. To complete this, click the tool icon (Configure Tools...) at the bottom of the Chat box and ensure that the tools from the terraform MCP server are selected. You should see search_providers, get_provider_details, search_modules, and a few others in the list of available tools. If they are not selected, click on terraform or select individual tools to enable them for use in the Chat.

At this stage the Terraform MCP Server is fully set up and ready to use in your VS Code workspace. In the next section we will show how to use it to get real-time provider information and generate Terraform configuration snippets based on the latest data from the Terraform Registry.


Test MCP Server

To validate that the Terraform MCP Server is working correctly, create a file, for example random_pet_example.tf, in your workspace with the following content:

/*
  File: random_pet_example.tf
  Purpose: A simple example illustrating how to use the 'random_pet' resource.
*/

Activate the Chat view in VS Code (Ctrl+Alt+I) and add the random_pet_example.tf file as context to the conversation. You can do this by dragging the file into the Chat view or by clicking the Add Context button in the Chat view.

In the Chat input box, type the following text and press Enter:

Using the official provider information from #terraform add an example illustrating how to use the 'random_pet' resource. Add detailed comments and include: - input variables for all supported parameters, - outputs, - Terraform CLI examples.

The AI assistant will use the Terraform MCP Server tools (as instructed by the #terraform tag) to fetch real-time data from the Terraform documentation and provide a response based on the latest provider information. If you are satisfied with the response, click the Keep button to use the generated code. Otherwise, modify the prompt and click Send to have the assistant generate a new response, which may yield a different example.

You can also issue follow-up questions or request changes to the generated code, for example, to modify the outputs:

Modify the "Outputs" section to include a single output block showing the "keepers" and the pet's "id" values.

Below is the resulting code generated by the AI assistant based on our prompts (your results may differ):

/*
  File: random_pet_example.tf
  Purpose: A complete example illustrating how to use the `random_pet`
  resource from the official HashiCorp `hashicorp/random` provider.

  This example includes:
  - Input variables for all supported parameters (keepers, length, prefix, separator)
  - Detailed comments explaining each setting
  - Outputs exposing the generated pet name
  - Example Terraform CLI commands showing common usage patterns

  Provider compatibility note:
  This example targets the `hashicorp/random` provider (version >= 3.8.1).
  Adjust the version constraint in the `terraform` block as needed.
*/

terraform {
  required_providers {
    random = {
      source  = "hashicorp/random"
      version = ">= 3.8.1"
    }
  }
}

/*
  Input variables for all supported `random_pet` parameters.
  - `keepers`  : A map of strings. When any value in this map changes, the
                 `random_pet` resource will be replaced. Use this to tie the
                 generated name to other resources (for example an AMI id).
  - `length`   : Number of words to generate for the pet name (default 2).
  - `prefix`   : Optional string to prefix the generated name with.
  - `separator`: The string used to separate name words. Defaults to "-".
*/

variable "keepers" {
  type        = map(string)
  description = "Arbitrary map of values that, when changed, will trigger recreation of the resource."
  default     = {}
}

variable "length" {
  type        = number
  description = "The number of words in the generated pet name."
  default     = 2
}

variable "prefix" {
  type        = string
  description = "A string to prefix the generated name with. If empty, no prefix is applied."
  default     = "test"
}

variable "separator" {
  type        = string
  description = "Character(s) used to separate words in the generated name."
  default     = "-"
}

/*
  Example `random_pet` resource using the variables above.
  Notes:
  - `keepers` is passed directly; changing any entry will force a new name.
  - `prefix`, `length`, and `separator` are passed in as-is. If `prefix` is
    an empty string, no visible prefix will be present in the generated name.
*/

resource "random_pet" "example" {
  keepers   = var.keepers
  length    = var.length
  prefix    = var.prefix
  separator = var.separator
}

/*
  Outputs: expose the generated pet name and a convenience output for
  tagging or other downstream uses.
*/

output "random_pet_info" {
  description = "Map containing the provided `keepers` and the generated pet `id`."
  value = {
    keepers = var.keepers
    id      = random_pet.example.id
  }
}

/*
  Terraform CLI examples

  1) Initialize the working directory (first run, or after changing providers):

     terraform init

  2) Quick plan using inline `-var` overrides (note map syntax for `keepers`):

     terraform plan \
       -var='prefix=web' \
       -var='length=3' \
       -var='separator=_' \
       -var='keepers={ami_id="ami-0123456789abcdef0"}'

  3) Apply with the same inline variable overrides:

     terraform apply -auto-approve \
       -var='prefix=web' \
       -var='length=3' \
       -var='separator=_'

  4) Use a variable definition file (`example.tfvars`):

     # example.tfvars
     prefix    = "web"
     length    = 3
     separator = "_"
     keepers = {
       ami_id = "ami-0123456789abcdef0"
     }

     # then run
     terraform plan -var-file="example.tfvars"
     terraform apply -var-file="example.tfvars"

  Notes on `keepers` usage:
  - Use `keepers` to tie the generated name to external inputs. For example,
    if you want the name to change each time the AMI changes, set
    `keepers = { ami_id = var.ami_id }`.
  - Keep values in `keepers` stable when you want the generated name to
    remain stable.
*/

Note: terraform apply may generate an error if the prefix attribute is assigned an empty string value (""). In such a case, provide a non-empty default value for var.prefix (for example, "test") or comment out the line with the prefix attribute in the random_pet.example resource block.
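Another option, shown here as a sketch, is to normalize the empty string to null inside the resource so the provider treats the prefix as unset:

```terraform
resource "random_pet" "example" {
  keepers = var.keepers
  length  = var.length

  # Convert "" to null so the provider omits the prefix entirely
  # instead of failing on an empty string.
  prefix = var.prefix != "" ? var.prefix : null

  separator = var.separator
}
```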


Integrating Terraform MCP with HCP Terraform

By default, Terraform MCP can retrieve information only from the public Terraform Registry. Integrating the Terraform MCP Server with HCP Terraform extends its capabilities to include organization and workspace-level data, private registry access, and other HCP-specific tools. This allows AI clients to generate configurations that are aware of the current state of your Terraform environment, including policies, runs, and private modules, rather than relying solely on public documentation and registry data.

To integrate Terraform MCP with HCP Terraform, follow these high-level configuration steps:

  • Modify the Terraform MCP configuration file (mcp.json) to include HCP-specific settings:
    {
      "servers": {
        "terraform": {
          "command": "docker",
          "args": [
            "run",
            "-i",
            "--rm",
            "-e", "TFE_TOKEN=${input:tfe_token}",
            "hashicorp/terraform-mcp-server:0.4.0"
          ],
          "type": "stdio"
        }
      },
      "inputs": [
        {
          "type": "promptString",
          "id": "tfe_token",
          "description": "Terraform API Token",
          "password": true
        }
      ]
    }
      
  • Create an HCP Terraform API token
  • Restart the MCP server and, when prompted, enter the HCP Terraform API token.

Once connected, the Terraform MCP Server will expose additional HCP Terraform-specific tools, for example:

  • list_workspaces - Retrieve a list of workspaces in your HCP Terraform organization.
  • get_workspace_details - Get detailed information about a specific workspace, including current runs, variables, and state versions.
  • search_policies - List Sentinel policies associated with your organization or specific workspaces.
  • get_policy_details - Retrieve details about a specific Sentinel policy, including its rules and enforcement levels.
  • search_private_modules - Search for modules in your private registry within HCP Terraform.
  • get_private_module_details - Get detailed information about a specific private module, including its input variables, outputs, and usage examples.

With HCP Terraform integration, AI clients can generate Terraform configurations that are not only accurate based on public provider data but also contextually aware of your organization's specific Terraform environment, policies, and private modules. This leads to more relevant and compliant configuration suggestions, improving both productivity and governance in your Terraform workflows.


Using Instruction Files

Instruction files allow you to define custom rules and guidelines for how AI assistants should use MCP tools in specific contexts.

VS Code supports various instruction files, such as:

  • .github/copilot-instructions.md - For GitHub Copilot. It is stored within the workspace's root directory and automatically applies to all chat requests in the workspace.
  • AGENTS.md - For VS Code AI agents. One or more of these files can be created in the workspace's root or its subdirectories. Useful if you work with multiple AI agents in your workspace. Automatically applies to all chat requests in the workspace or a specific subfolder.
  • *.instructions.md - For a specific MCP tool or a subset of files. It applies only to requests related to that tool or file type.

As a general guideline, start with a single .github/copilot-instructions.md file for project-wide coding standards. Add .instructions.md files when you need different rules for different file types or frameworks. Use AGENTS.md if you work with multiple AI agents in your workspace.

For example, you can describe the general documentation standard, naming conventions, and project structure in the .github/copilot-instructions.md file and add Terraform-specific coding and formatting standards in a separate terraform.instructions.md file.

In the instruction file, you can specify that the AI assistant should always use the Terraform MCP Server to fetch provider documentation and registry data, rather than relying on its training data.

Example: .github/copilot-instructions.md

---
applyTo: "**"
---

# Project General Coding Instructions

## Project Purpose

This is an **educational repository** demonstrating Terraform concepts through focused, self-contained examples.

## Module and Repository Structure

Organize Terraform modules and repositories as follows:

```
├── README.md
├── main.tf
├── variables.tf
├── outputs.tf
├── ...
├── modules/
│   ├── moduleA/
│   │   ├── README.md
│   │   ├── variables.tf
│   │   ├── main.tf
│   │   ├── outputs.tf
│   ├── moduleB/
│   ├── .../
├── examples/
│   ├── exampleA/
│   │   ├── main.tf
│   ├── exampleB/
│   ├── .../
```

## Documentation & Naming Conventions

**File Header Pattern**: Every `.tf` file starts with a documentation block:

```terraform
/* ---------------------------------------------------------------------------
File: <filename>
Purpose: <what concept/behavior is demonstrated>
Warning: <any cost/destructive operation warnings>
Assumptions & Requirements: <preconditions like AWS access, keys, etc.>
Workflow: <numbered steps: terraform init → plan → apply → [test] → destroy>
--------------------------------------------------------------------------- */
```

**Comment Structure**: Use consistent block separators:

```terraform
/* ---------------------------------------------------------------------------
Section: Variable/Resource/Output Name
Description/Purpose
--------------------------------------------------------------------------- */
```

... and so on, and so forth ...

Example: .github/instructions/terraform.instructions.md

---
applyTo: "**/*.{tf,hcl}"
description: "HashiCorp style guidelines for writing Terraform code"
---

# Terraform Code Style Guidelines

This project follows HashiCorp's official Terraform style guide for consistent, maintainable infrastructure-as-code.

## Project Context

When generating Terraform code, always use the Terraform MCP Server to fetch real-time provider documentation and registry data. Do not rely on training data for provider information.

## Code Formatting

- Indent two spaces for each nesting level, do not use tabs
- Align equals signs when multiple single-line arguments appear consecutively
- Place arguments at the top of blocks, followed by nested blocks with one blank line separation
- Put meta-arguments (count, for_each) first, followed by other arguments, then nested blocks
- Place lifecycle blocks last, separated by blank lines
- Separate top-level blocks with one blank line

## Comments and Documentation

- Use `#` for single-line comments and `/* */` for multi-line comments
- Write self-documenting code; use comments only to clarify complexity
- Add comments above resource blocks to explain non-obvious business logic

... and so on, and so forth ...

By creating instruction files in your repository, you can provide instructions that guide the AI's behavior when generating code or responding to prompts. This is especially useful for ensuring that the AI uses the Terraform MCP Server tools in a way that aligns with your project's conventions, policies, or specific requirements.


Summary and Key Takeaways

The Terraform MCP Server brings structured, tool-driven intelligence to Terraform development by connecting AI assistants to official Terraform data sources. Instead of relying solely on predictive text generation, AI tools can retrieve up-to-date provider documentation, module metadata, and, when integrated, workspace-level data from HCP Terraform.

By following this tutorial, you have learned to:

  • Understand the role of the Model Context Protocol (MCP)
  • Install and run the Terraform MCP Server locally using Docker
  • Configure VS Code to communicate with the MCP server
  • Use AI-assisted Terraform code generation backed by real provider data
  • Integrate Terraform MCP with HCP Terraform for workspace-aware context
  • Customize AI behavior using instruction files such as:
    • .github/copilot-instructions.md
    • AGENTS.md
    • Modular *.instructions.md files

When combined with Visual Studio Code and GitHub Copilot, Terraform MCP transforms AI from a general-purpose code assistant into a structured, Terraform-aware development partner. By providing real-time access to official documentation and registry data, it enables a more reliable and context-aware Infrastructure as Code workflow.

As AI-assisted Infrastructure as Code continues to evolve, Terraform MCP provides a practical and controlled foundation for adopting these capabilities in both individual and enterprise environments.
