Working with Existing Infrastructure Using Terraform Import
Introduction
When working with Terraform, it's common to start by creating resources from scratch using configuration files.
But in real-world environments, many resources already exist: perhaps they were created manually in the cloud console,
provisioned by scripts, or deployed long before Terraform was introduced. Instead of recreating these resources,
Terraform provides an import functionality that allows you to bring existing infrastructure under Terraform's
management.
Importing is a key step for teams that want to standardize infrastructure management without disrupting live systems.
It ensures that Terraform is aware of the current infrastructure state, enabling you to manage, update, and track
resources consistently by applying Infrastructure as Code (IaC) methods.
This tutorial introduces Terraform's two approaches to importing:
- the import command, an imperative method that links resources directly to Terraform's state, and
- the import block, a newer declarative approach that integrates smoothly into configuration files.
We'll explore how both methods work, their features and limitations, and typical use cases where imports are most useful.
You'll also see practical examples to understand the workflows step by step. Finally, we'll compare imports with related
options like data sources and outline best practices for managing imports effectively.
What is Terraform Import?
Terraform import is the process of bringing an existing infrastructure resource under Terraform's management by adding it
to the Terraform state. The state is Terraform's internal database that keeps track of which resources are managed and how
they map to your configuration.
When you import a resource, Terraform doesn't create or modify the resource itself; it simply records in the state file that
the resource already exists in your cloud provider and is now associated with a particular Terraform configuration block.
This means Terraform can begin managing that resource going forward, applying changes defined in your configuration and
tracking updates over time.
Key Points:
- Purpose: Import brings existing resources under Terraform management.
- No resource creation: Import does not provision infrastructure; it only updates the Terraform state.
- Configuration required: To fully manage an imported resource, you must still define the corresponding resource
block in your .tf files. Terraform uses this configuration to compare the desired state against the imported
state.
- Two approaches available:
- Import command (terraform import) - an imperative, CLI-based method.
- Import block - a declarative method added in newer Terraform versions (1.5+).
Let's explore the two Terraform import methods in more detail.
Terraform Import Methods
Terraform provides two main ways to import existing resources into its state: the import command and the import block.
Both achieve the same end result - associating an existing infrastructure resource with Terraform's state and
configuration - but they differ in how you define and execute the import.
Import Command
The terraform import CLI command is the traditional, imperative approach to importing resources.
It requires two parameters:
- the Terraform address of the resource (e.g., google_storage_bucket.my_bucket), and
- the provider-specific ID of the resource (e.g., my-bucket for a GCP storage bucket).
Features and Limitations:
- Requires a separate workflow (outside of the regular plan + apply process).
- Modifies the state upon execution.
- Does not generate Terraform configuration automatically; you must create or update .tf files yourself.
- Useful for quick, one-time imports.
- Can import only one resource at a time (bulk imports require scripting).
- Easy to lose track of imports if not documented separately.
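Since the CLI can import only one resource at a time, bulk imports are typically scripted. A minimal sketch that emits one import command per bucket into a reviewable script file - the bucket names below are placeholders; in practice they might come from `gcloud storage buckets list --format='value(name)'`:

```shell
# Sketch of a scripted bulk import: generate one "terraform import" command
# per bucket and write them to a script file for review before running.
# The bucket names are placeholders, not real resources.
for bucket in my-import-bucket-1 my-import-bucket-2; do
  echo "terraform import google_storage_bucket.${bucket} ${bucket}"
done > import-buckets.sh
```

Review the generated import-buckets.sh before executing it; each Terraform address must correspond to a placeholder resource block in your .tf files, and running the commands also documents exactly which imports were performed.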
Import Block
Introduced in Terraform v1.5, the import block is a declarative approach that defines imports inside configuration files.
It looks like this:
import {
  to = google_compute_instance.my_instance
  id = "projects/my-project/zones/us-central1-a/instances/instance-1"
}
Here, to specifies the Terraform resource address, and id specifies the provider's identifier for the resource.
Features and Limitations:
- Imports are part of the configuration and can be incorporated into the standard Terraform plan + apply workflow.
- Multiple resources can be imported with a single plan + apply action.
- Provides visibility into planned changes before applying them.
- Requires matching resource blocks defined in .tf files.
- Provides an option to auto-generate resource configuration.
Import Workflows
Successfully importing a resource into Terraform involves more than just running an import command or writing an
import block. A complete workflow includes preparation, the import process, and post-import validation to ensure
the resource is correctly managed by Terraform.
Below are the step-by-step workflows for both import methods.
Workflow with the Import Command
Preparation:
- Identify the resource you want to import.
- Look up the provider-specific ID (e.g., for GCP this might be projects/my-project/zones/us-central1-a/instances/my-instance).
- Add a placeholder resource block in your .tf configuration file with at least the resource type and name.
For example:
resource "google_compute_instance" "my_instance" {
  # Configuration to be filled after import
}
Run the Import Command:
- Use the CLI to import the resource into the Terraform state by running the following command:
terraform import <resource_address> <resource_id>
- Terraform records the resource in the state but does not change your configuration file.
Inspect the Imported Resource:
- Run terraform state show <resource_address> to review the attributes imported into the state.
- Compare these attributes with the resource's actual settings in the provider console.
Update Configuration:
- Manually update your .tf file to reflect the real resource configuration.
- Ensure the configuration matches the imported resource; otherwise Terraform may attempt to modify or recreate it.
Validate and Plan:
- Run terraform plan to confirm that the configuration and state align.
- Adjust the configuration code as needed until terraform plan shows no changes.
Post-Import Operations:
- From this point on, Terraform fully manages the resource.
- Future changes should be made only through Terraform to avoid drift.
Workflow with the Import Block
Preparation:
- Identify the resource you want to import and find its provider-specific ID.
- Add a corresponding resource block to your Terraform configuration.
Example:
resource "google_compute_instance" "my_instance" {
  name         = "my-instance"
  machine_type = "e2-medium"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
  }
}
- Then, add a matching import block. For example:
import {
  to = google_compute_instance.my_instance
  id = "projects/my-project/zones/us-central1-a/instances/my-instance"
}
Plan and Verify:
- Run terraform plan. Terraform shows the planned import operation along with any detected differences between
your configuration and the actual resource.
- If terraform plan indicates changes, reconcile your configuration with the actual resource settings
to avoid unintended changes.
- Continue iterating until there are no discrepancies.
Apply the Import:
- Run terraform apply to perform the import. Terraform updates the state file to include the resource.
- Run terraform state show <resource_address> or inspect the state file to confirm that the resource is tracked.
Post-Import Operations:
- The resource is now part of your configuration and state.
- Remove the import block once the import is complete.
- Any future changes should be managed through Terraform code.
Key Differences in Workflow
- The import command requires you to update configuration after importing.
- The import block requires you to write configuration before importing.
- Both methods ultimately converge on the same steady-state: the resource exists in both Terraform's state and
configuration, allowing Terraform to fully manage it.
Comparison: Import vs. Data Blocks
Terraform's data blocks provide another way to work with existing infrastructure. At first glance,
imports and data blocks may seem similar, since both involve resources that already exist outside of Terraform. However,
their purpose and behavior are quite different.
Import
- Purpose: Brings an existing resource under Terraform's management.
- Effect on State: The resource is added to the Terraform state file.
- Configuration: Requires a resource block in .tf files so Terraform knows the desired configuration.
- Lifecycle: Once imported, Terraform can modify, update, or destroy the resource as part of normal operations.
- When to Use:
- You want Terraform to take ownership of a resource.
- You need the resource to be fully managed and tracked.
Data Block
- Purpose: Reads information about an existing resource but does not manage it.
- Effect on State: Terraform records the data retrieved by data blocks in the state file but does not add
the underlying resources as managed objects.
- Configuration: Uses a data block instead of a resource block.
- Lifecycle: The resource remains managed outside of Terraform. Terraform only references its attributes.
- When to Use:
- You need to look up values for existing infrastructure without changing it.
- Useful for referencing resources (e.g., VPC IDs, image names, networks, or secrets) managed by other tools
or other teams.
Rule of thumb:
- Use import when Terraform should own and manage the resource.
- Use data blocks when Terraform only needs to read or reference the resource.
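As an illustration of the data-block side of this rule, the sketch below reads an existing network and feeds one of its attributes into a managed resource. The network name "default" and the VM parameters are assumptions for the example:

```hcl
# Look up an existing VPC network without taking ownership of it.
data "google_compute_network" "shared" {
  name = "default"
}

# A managed resource can reference the data source's attributes,
# e.g. attaching a new VM to the externally managed network:
resource "google_compute_instance" "app" {
  name         = "app-vm"
  machine_type = "e2-micro"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = data.google_compute_network.shared.self_link
  }
}
```

Here Terraform creates and manages only the VM; the network stays under whatever tool or team owns it.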
Examples: Terraform Import in Action
We'll walk through two examples:
- Using the import command to import a GCP Storage bucket.
- Importing a GCP compute instance with the help of an import block.
Each example includes:
- gcloud commands to create the resource manually (outside Terraform).
- Terraform code (resource blocks).
- Terraform commands.
- Step-by-step import workflow.
Import GCP Storage Bucket
This example illustrates how to use the terraform import command to import a GCP storage bucket into Terraform state.
Preparation
Create a GCP bucket manually:
# Variables - Replace with your project id and bucket name
PROJECT_ID="my-project-id"
BUCKET_NAME="my-import-bucket-582"
# Create the bucket
gcloud storage buckets create gs://$BUCKET_NAME --project=$PROJECT_ID --location=US --default-storage-class=NEARLINE
gcloud storage buckets list --project=$PROJECT_ID --format='text(name,storage_url)'
This creates a multi-regional GCP storage bucket with the default storage class set to "Nearline".
We will use this bucket for our import exercise.
Create Terraform Configuration with Bucket Placeholder
In an empty directory, create a file named main.tf with the following content:
# main.tf
variable "project_id" {
  type = string
}

provider "google" {
  project = var.project_id
}

# Placeholder for our bucket
resource "google_storage_bucket" "my_bucket" {
  name     = "my-import-bucket-582"
  location = "US"
}
Run Import Command
Use the following Terraform command to import the bucket's configuration into state:
terraform import google_storage_bucket.my_bucket my-import-bucket-582
Where:
- google_storage_bucket.my_bucket is the resource address in Terraform,
- my-import-bucket-582 is the bucket name (the resource ID for GCP).
$ terraform init
$ terraform import google_storage_bucket.my_bucket my-import-bucket-582
google_storage_bucket.my_bucket: Importing from ID "my-import-bucket-582"...
google_storage_bucket.my_bucket: Import prepared!
Prepared google_storage_bucket for import
google_storage_bucket.my_bucket: Refreshing state... [id=my-import-bucket-582]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
Verify and Adjust
Check the state by running terraform state show google_storage_bucket.my_bucket:
$ terraform state show google_storage_bucket.my_bucket
# google_storage_bucket.my_bucket:
resource "google_storage_bucket" "my_bucket" {
    default_event_based_hold    = false
    effective_labels            = {}
    enable_object_retention     = false
    force_destroy               = false
    id                          = "my-import-bucket-582"
    labels                      = {}
    location                    = "US"
    name                        = "my-import-bucket-582"
    project                     = "project_id"
    project_number              = 463183993749
    public_access_prevention    = "inherited"
    requester_pays              = false
    rpo                         = "DEFAULT"
    self_link                   = "https://www.googleapis.com/storage/v1/b/my-import-bucket-582"
    storage_class               = "NEARLINE"
    terraform_labels            = {}
    time_created                = "2025-08-20T15:39:19.798Z"
    uniform_bucket_level_access = true
    updated                     = "2025-08-20T15:39:19.798Z"
    url                         = "gs://my-import-bucket-582"

    hierarchical_namespace {
        enabled = false
    }

    soft_delete_policy {
        effective_time             = "2025-08-20T15:39:19.798Z"
        retention_duration_seconds = 604800
    }
}
Run terraform plan and look for any mismatches.
$ terraform plan
google_storage_bucket.my_bucket: Refreshing state... [id=my-import-bucket-582]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # google_storage_bucket.my_bucket will be updated in-place
  ~ resource "google_storage_bucket" "my_bucket" {
        id            = "my-import-bucket-582"
        name          = "my-import-bucket-582"
      ~ storage_class = "NEARLINE" -> "STANDARD"
        # (17 unchanged attributes hidden)
        # (2 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
If needed, update the resource arguments in main.tf to reflect the actual bucket settings (labels, versioning, storage class, etc.).
For example, add storage_class = "NEARLINE" to the google_storage_bucket resource block and re-run terraform plan.
Repeat the steps until everything matches and terraform plan shows "No changes".
$ terraform plan
google_storage_bucket.my_bucket: Refreshing state... [id=my-import-bucket-582]
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
At this point, the terraform import command has linked the bucket to Terraform's state file, and there are no
discrepancies between the bucket and our configuration.
From now on, any changes (like enabling versioning or adding labels) should be made through Terraform.
For example, you can use terraform destroy to remove the bucket.
Import GCP Compute Instance
This example demonstrates how to import a GCP Compute Instance into Terraform state with the help of an import block.
Preparation - Create a VM manually
First, create a default network if it does not exist in your project (it is required for the Terraform
config below to work):
# Variables - Replace with your project id
PROJECT_ID="my-project-id"
# Create a default network (some projects may not have one)
gcloud compute networks create default \
--subnet-mode=auto \
--project=$PROJECT_ID
Now create a VM:
# Variables - Replace with your project id and adjust VM parameters
PROJECT_ID="my-project-id"
ZONE="us-central1-a"
VM_NAME="my-import-vm"
# Create a small VM
gcloud compute instances create $VM_NAME \
--project=$PROJECT_ID \
--zone=$ZONE \
--machine-type=e2-micro \
--image-family=debian-11 \
--image-project=debian-cloud \
--network=default
This creates a small Debian VM in us-central1-a attached to the default VPC network.
Create Terraform Configuration with Import Block
In an empty directory, create a file named main.tf with the following content:
variable "project_id" {
  type = string
  # default = "my-project-id" # Replace with your project id
}

variable "zone_id" {
  default = "us-central1-a"
}

variable "vm_name" {
  default = "my-import-vm"
}

provider "google" {
  project = var.project_id
}

# Import block for the VM
import {
  to = google_compute_instance.my_instance
  id = "projects/${var.project_id}/zones/${var.zone_id}/instances/${var.vm_name}"
}
Add VM Resource Block to the Configuration
Use the -generate-config-out option of terraform plan to auto-generate a file with the VM configuration:

terraform plan -generate-config-out=diff.txt

Use this file to determine the required attributes and corresponding values for the VM's google_compute_instance
resource block. Add this resource block to the main.tf file.
For example:
# Resource block for the VM
resource "google_compute_instance" "my_instance" {
  name         = var.vm_name
  machine_type = "e2-micro"
  zone         = var.zone_id

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
  }

  service_account {
    scopes = [
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring.write",
      "https://www.googleapis.com/auth/pubsub",
      "https://www.googleapis.com/auth/service.management.readonly",
      "https://www.googleapis.com/auth/servicecontrol",
      "https://www.googleapis.com/auth/trace.append"
    ]
  }
}
Verify and Adjust
Run terraform plan and check for any planned configuration changes.
$ terraform plan
google_compute_instance.my_instance: Preparing import... [id=projects/project_id/zones/us-central1-a/instances/my-import-vm]
google_compute_instance.my_instance: Refreshing state... [id=projects/project_id/zones/us-central1-a/instances/my-import-vm]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # google_compute_instance.my_instance will be updated in-place
  # (imported from "projects/project_id/zones/us-central1-a/instances/my-import-vm")
  ~ resource "google_compute_instance" "my_instance" {
        ...
        name = "my-import-vm"
        zone = "us-central1-a"

        boot_disk {
            ...
        }

      ~ network_interface {
            ...
          - access_config {
              - nat_ip                 = "34.9.210.217" -> null
              - network_tier           = "PREMIUM" -> null
                public_ptr_domain_name = null
            }
        }

        scheduling {
            ...
        }

        service_account {
            ...
        }
        ...
    }

Plan: 1 to import, 0 to add, 1 to change, 0 to destroy.
If required, update the resource arguments in main.tf to match the actual VM settings.
For example, add access_config { } to the network_interface sub-block and re-run terraform plan.
Repeat the steps until everything matches and the plan shows "0 to change".
Apply and Verify
Run terraform apply to import the VM into the state.
Run terraform state list or cat terraform.tfstate to validate that the VM is now fully managed by Terraform.
To summarize:
- We used Terraform's import block to define the VM to be imported into state.
- The -generate-config-out CLI option helped us determine the exact attributes and values for the VM resource block.
- Running plan + apply brought the VM under Terraform management.
Unlike the command-based workflow, this import method is driven by the instructions included in the .tf files
and can be easily incorporated into the normal Terraform plan + apply process.
Best Practices
Importing resources into Terraform can be powerful, but it also introduces risks if not done carefully.
Following best practices helps ensure that resources are imported properly and remain under
Terraform management.
Back Up Your State
- Always back up your Terraform state file (terraform.tfstate) before performing imports.
A failed import can lead to mismatched state, and restoring from a backup is often the simplest recovery method.
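For a local backend, the backup can be as simple as copying the state file aside; this is a sketch only, since remote backends rely on their own versioning or snapshot mechanisms instead:

```shell
# Copy the local state file aside before importing (local backend only).
# A placeholder state file is created here just to keep the sketch
# self-contained; in a real working directory the file already exists.
test -f terraform.tfstate || echo '{}' > terraform.tfstate
cp terraform.tfstate terraform.tfstate.backup
```

If an import goes wrong, restoring is a matter of copying terraform.tfstate.backup back into place.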
Document Imported Resources
- Keep track of which resources were imported, the IDs used, and the resource addresses.
- For import commands, record the exact CLI commands you ran.
- For import blocks, version-control them in Git so the process is traceable and auditable.
Align Configuration With Actual Environment
- After importing, immediately inspect the Terraform state file and compare it with your configuration.
- Make sure the Terraform resource block fully reflects the resource's actual attributes.
- If they don't match, Terraform may try to recreate or alter the resource on the next apply.
Prefer Declarative Imports When Possible
- Import blocks let you create a systematic, version-controlled import process.
- Reserve terraform import for ad-hoc or one-time imports.
Validate With terraform plan
- After every import, run terraform plan to confirm that state and configuration are in sync.
- The goal is for the plan to show "No changes" before moving forward.
Handle Bulk Imports Strategically
- For multiple resources, script the generation of import blocks or CLI commands.
Automation prevents mistakes.
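For example, import blocks for a list of VMs can be generated with a small script. The project id, zone, and instance names below are placeholders; a real script might read the names from gcloud compute instances list:

```shell
# Sketch: write one import block per VM into imports.tf for review.
PROJECT_ID="my-project-id"   # placeholder project id
ZONE="us-central1-a"         # placeholder zone
for vm in vm-alpha vm-beta; do   # placeholder instance names
  cat <<EOF
import {
  to = google_compute_instance.${vm}
  id = "projects/${PROJECT_ID}/zones/${ZONE}/instances/${vm}"
}
EOF
done > imports.tf
```

Review the generated imports.tf, add the matching resource blocks, and run the usual plan + apply cycle to perform all imports in one pass.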
Avoid Drift by Using Terraform Exclusively
- Once imported, manage the resource only through Terraform.
- Manual changes in the cloud console can cause drift, leading to unexpected updates or conflicts.
Clean Up Unnecessary Imports
- Remove unused or temporary resources from Terraform to keep configurations clean.
- If an import block was only needed once, remove it after the import is complete (state already holds the resource).
In short: back up, document, validate, and use declarative imports whenever possible. These practices ensure that imported resources
transition smoothly into Terraform's lifecycle without surprises.
Conclusion
Terraform import is an essential feature for adopting Infrastructure as Code in environments where resources
already exist. Instead of recreating infrastructure from scratch, import allows you to bring resources under
Terraform's management and continue managing them consistently through code.
You've seen that there are two ways to perform imports:
- The import command, a quick, CLI-driven method useful for one-time or ad-hoc imports.
- The import block, a newer, declarative option that integrates directly into Terraform configuration and is better
suited for repeatable, version-controlled workflows.
Both approaches follow a similar lifecycle: prepare by identifying the resource, perform the import, align your
configuration with the resource's actual attributes, and validate with terraform plan before moving forward.
Once imported, the resource should be managed only through Terraform to avoid drift and ensure consistency.
We also compared imports with data blocks, highlighting that imports are for resources Terraform should own and manage,
while data sources are for resources Terraform should only reference.
With imports, you can gradually transition existing infrastructure into Terraform management, standardize workflows
across teams, and improve reliability by consolidating infrastructure state into code. Following best practices - backing
up state, documenting imports, and validating after each operation - ensures that the process is reliable and maintainable.
More Terraform Tutorials:
Understanding Terraform Variable Precedence
Terraform Value Types Tutorial
Terraform count Explained with Practical Examples
Terraform for_each Tutorial with Practical Examples
Exploring Terraform dynamic Blocks with GCP Examples
Working with External Data in Terraform
Terraform Modules FAQ