Emergency Email Migration: Scripted Tools to Move Users Off a Compromised or Deprecated Gmail
Email Migration · Scripting · Admin Tools

2026-02-19 · 9 min read

Scripted, zero-downtime recipes to bulk-export Gmail mailboxes, provision new mailboxes, sync incrementally, and cut over MX with verification.

If a provider policy change or account compromise has put mail delivery, compliance, or user privacy at risk, you don’t have time for manual, one-at-a-time exports. This guide gives IT teams and DevOps engineers repeatable, scriptable recipes to batch-export mailboxes, re-map addresses, and provision new mailboxes with minimal downtime and verifiable integrity.

Why this matters in 2026

Late 2025 and early 2026 saw major providers pushing aggressive AI/data integrations and stricter authentication (OAuth2-only, deprecation of legacy IMAP auth), along with policy shifts that forced admins to decide whether to remain on a platform or migrate. The takeaway for teams: build the capability to mass-migrate quickly. Expect more policy pivots, more AI-scanning opt-in defaults, and more mandatory API-driven authentication across providers.

Pro tip: treat migration like a deployment — automate, monitor, verify. One-off scripts will fail under scale and under attack.

Executive summary — what you can achieve with these scripts

  • Bulk-export Gmail mailboxes to mbox/IMAP using safe, resumable tools.
  • Batch remap addresses from old-provider@old to new-provider@new with CSV-driven mapping.
  • Provision large numbers of mailboxes on new providers via APIs (Microsoft 365, self-hosted, cloud providers).
  • Stage dual-delivery, continuously sync until cutover, then finalize a low-downtime switch of MX records.
  • Perform integrity checks, checksums, and message-count verification before and after migration.

Overview: Zero-downtime migration pattern

  1. Plan & inventory — export user list and mailbox sizes, tag critical accounts.
  2. Pre-provision — create mailboxes on target, assign aliases, and set sending policies.
  3. Pre-sync — do an initial full copy (mbox or IMAP sync) and then schedule frequent incremental syncs.
  4. Dual-delivery or forwarding — route new inbound to both old and new (when supported) to avoid misses.
  5. TTL & MX cutover — lower DNS TTLs before cutover, change MX, and finalize last incremental sync.
  6. Verification & rollback — run scripted checks, confirm counts and integrity, and keep a rollback window.
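The six phases can be sketched as an ordered, fail-fast pipeline. The phase names and handler hooks below are illustrative placeholders for the real scripts in the sections that follow, not a prescribed framework:

```python
# Sketch of a migration wave as an ordered pipeline; a wave never advances
# past a failed phase, which preserves the rollback window.
PHASES = ["inventory", "pre_provision", "pre_sync", "dual_delivery", "mx_cutover", "verify"]

def run_wave(users, handlers):
    """Run each phase for a wave of users; stop on the first failure."""
    completed = []
    for phase in PHASES:
        try:
            handlers[phase](users)  # each handler raises on failure
            completed.append(phase)
        except Exception as exc:
            return {"ok": False, "failed_phase": phase,
                    "completed": completed, "error": str(exc)}
    return {"ok": True, "completed": completed}

# Demo with no-op handlers:
result = run_wave(["alice@example.com"], {p: (lambda users: None) for p in PHASES})
print(result)
```

The point of the wrapper is the audit trail: every wave produces a machine-readable record of which phase it reached.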

Tools and choices (2026 context)

  • imapsync — battle-tested for IMAP-to-IMAP incremental syncs. Works well for mailboxes with millions of messages when tuned.
  • isync/mbsync — lighter-weight, good for Unix automation and mailbox snapshots.
  • Google Workspace Admin API and tools for admins — use for data export and user provisioning where available.
  • Microsoft Graph API / PowerShell — recommended for bulk provisioning Office 365 / Microsoft 365 mailboxes.
  • Cloud DNS APIs (Cloudflare, AWS Route 53, Google Cloud DNS) — script MX TTL changes and cutover.
  • Checksumming tools (sha256sum, borg, rclone checks) — verify exported artifacts.

Pre-flight: inventory and risk checklist

  • Collect a CSV: email,address_aliases,role,priority,mailbox_size,notes.
  • Identify accounts with MFA or devices that may block IMAP access after compromise.
  • Confirm legal holds, retention policies, and compliance requirements (store exports in WORM or secure object store if required).
  • Lower MX TTLs to 60–300 seconds at least 48 hours before cutover.
  • Confirm admin API access and service accounts for both source and target platforms with required scopes.
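One way to gate a wave on a clean inventory is a small validator over that CSV. The 20 GB "large mailbox" threshold and the sample rows below are assumptions to adapt:

```python
import csv, io

# Column names match the inventory CSV described in the checklist above.
REQUIRED = ["email", "address_aliases", "role", "priority", "mailbox_size", "notes"]

def validate_inventory(csv_text, large_gb=20):
    """Check required columns and flag mailboxes that need a dedicated worker."""
    reader = csv.DictReader(io.StringIO(csv_text))
    missing = [c for c in REQUIRED if c not in (reader.fieldnames or [])]
    if missing:
        raise ValueError("missing columns: %s" % ", ".join(missing))
    rows = list(reader)
    large = [r["email"] for r in rows if float(r["mailbox_size"]) >= large_gb]
    return {"users": len(rows), "large_mailboxes": large}

sample = ("email,address_aliases,role,priority,mailbox_size,notes\n"
          "a@old.example,,user,1,3.5,\n"
          "b@old.example,b2@old.example,vip,0,42,legal hold\n")
print(validate_inventory(sample))
```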

Step 1 — Bulk export strategy

For personal Gmail accounts that are compromised, full automation may be limited. For Google Workspace tenants, use admin APIs or Data Export. When API exports aren’t possible or are too slow, use IMAP-based tools.

Option A: Use imapsync for IMAP-to-IMAP transfer

imapsync is ideal when you can access both source and target via IMAP. Use OAuth2 for Gmail where basic auth is disabled. Pre-stage OAuth tokens or use impersonation on Workspace.

#!/usr/bin/env bash
# batch_imapsync.sh
# CSV format: old_email,old_host,old_user,old_pass,new_email,new_host,new_user,new_pass
CSV='users.csv'
LOGDIR='./logs'
mkdir -p "$LOGDIR"
while IFS=',' read -r OLD_EMAIL OLD_HOST OLD_USER OLD_PASS NEW_EMAIL NEW_HOST NEW_USER NEW_PASS; do
  OUT="$LOGDIR/${OLD_EMAIL//@/_}.log"
  echo "Starting imapsync for $OLD_EMAIL -> $NEW_EMAIL" | tee -a "$OUT"
  imapsync \
    --host1 "$OLD_HOST" --user1 "$OLD_USER" --password1 "$OLD_PASS" \
    --host2 "$NEW_HOST" --user2 "$NEW_USER" --password2 "$NEW_PASS" \
    --syncinternaldates --useuid --addheader --no-modulesversion \
    --exclude 'Trash|Spam' --fastio1 --fastio2 --buffersize 8192 \
    --tmpdir '/tmp/imapsync' >> "$OUT" 2>&1
  echo "Completed: $OLD_EMAIL" | tee -a "$OUT"
done < "$CSV"

Notes: store credentials securely (do not keep plain passwords in CSV in production). Use encrypted secrets (HashiCorp Vault, Azure KeyVault, GCP Secret Manager) and retrieve per-process. For Google Workspace, prefer service account impersonation and XOAUTH2 to avoid storing passwords.
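As a sketch of per-process retrieval, the helper below resolves a mailbox password from the environment at run time instead of from the CSV. The env-var naming scheme is an assumption; a production version would call your secret manager's API (Vault, Key Vault, Secret Manager) at this point instead:

```python
import os

def imap_credentials(email):
    """Resolve a mailbox password at run time rather than storing it in the CSV.

    The MAIL_PW_<LOCAL>_<DOMAIN> naming scheme is an assumption for this sketch;
    substitute a secret-manager lookup in production.
    """
    key = "MAIL_PW_" + email.replace("@", "_").replace(".", "_").upper()
    pw = os.environ.get(key)
    if pw is None:
        raise KeyError("no secret found for %s (expected env var %s)" % (email, key))
    return pw

os.environ["MAIL_PW_ALICE_OLD_EXAMPLE"] = "s3cret"  # demo only; never hardcode
print(imap_credentials("alice@old.example"))
```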

Option B: Use Google Takeout / Admin export (where permissible)

For Google Workspace admins: use the Vault or Admin Data Export flow to capture user mailboxes, then import mbox files into the target. Automation is limited unless using the Admin SDK; expect manual handoff for large archives. Automate the mbox upload and extraction to target IMAP using isync or offline tools.

Step 2 — Batch provisioning on target

Provisioning should be driven from a CSV mapping and a service account with rights. Below is a Microsoft Graph PowerShell example (2026 standard) to create users and assign basic mailbox licensing.

# Create-M365Users.ps1
# CSV: new_email,displayName,upn,password,skuId
Import-Module Microsoft.Graph.Users
Connect-MgGraph -Scopes 'User.ReadWrite.All','Directory.ReadWrite.All'
$csv = Import-Csv -Path 'users_new.csv'
foreach ($u in $csv) {
  $passwordProfile = @{forceChangePasswordNextSignIn=$false; password=$u.password}
  $body = @{accountEnabled=$true; displayName=$u.displayName; mailNickname=$u.new_email.Split('@')[0]; userPrincipalName=$u.new_email; passwordProfile=$passwordProfile}
  $user = New-MgUser -BodyParameter $body
  # Assign license (example):
  # Set-MgUserLicense -UserId $user.Id -AddLicenses @(@{SkuId=$u.skuId}) -RemoveLicenses @()
  Write-Host "Provisioned: $($u.new_email)"
}

For self-hosted systems (e.g., Postfix + Dovecot), script user creation via your distro's useradd or a management API. Ensure mailbox quotas and aliases match your CSV mapping.
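For the Postfix + Dovecot case, provisioning can be as simple as generating a Dovecot passwd-file from the mapping CSV. This sketch assumes passwd-file auth, a /var/vmail virtual-mail layout, and a vmail uid/gid of 5000; the password hashes would normally come from doveadm pw:

```python
import csv, io

def dovecot_passwd_lines(csv_text, vmail_root="/var/vmail"):
    """Emit one passwd-file line per user: user:hash:uid:gid::home.

    The vmail root, uid/gid 5000, and home layout are assumptions to adapt;
    generate the {SHA512-CRYPT} hashes with `doveadm pw -s SHA512-CRYPT`.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    lines = []
    for row in reader:
        local, domain = row["new_email"].split("@")
        home = "%s/%s/%s" % (vmail_root, domain, local)
        lines.append("%s:%s:5000:5000::%s" % (row["new_email"], row["password_hash"], home))
    return lines

sample = "new_email,password_hash\nalice@newmail.example,{SHA512-CRYPT}$6$examplehash\n"
print("\n".join(dovecot_passwd_lines(sample)))
```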

Step 3 — Continuous incremental syncs until cutover

Perform an initial full sync, then schedule incremental syncs at a fixed interval (every 15–60 minutes, tuned to mail volume). imapsync supports safe, resumable syncs: you can run it repeatedly, and when configured with --useuid it only copies messages missing on the target.

# Example cron job for incremental sync (runs every 15 minutes)
*/15 * * * * /usr/local/bin/batch_imapsync.sh >> /var/log/mail_migration/cron.log 2>&1
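One failure mode with a 15-minute interval is overlap: a large mailbox can make one run outlast the next cron tick. A hedge is a non-blocking lock around the batch script; this sketch uses POSIX flock, and the lock path is an assumption:

```python
import fcntl, subprocess

LOCK_PATH = "/tmp/mail_migration.lock"  # path is an assumption; any stable file works

def run_exclusive(cmd):
    """Run cmd only if no previous invocation still holds the lock.

    If the prior sync is still running, skip this tick instead of starting
    a second imapsync against the same mailboxes.
    """
    lock = open(LOCK_PATH, "w")
    try:
        fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        print("previous sync still running; skipping this tick")
        return 0
    try:
        return subprocess.call(cmd)
    finally:
        fcntl.flock(lock, fcntl.LOCK_UN)

# Demo with a no-op command; cron would call batch_imapsync.sh instead.
rc = run_exclusive(["true"])
print(rc)
```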

Dual-delivery: If the old provider supports routing (e.g., Google Workspace dual-delivery), configure new inbound to go to both mailboxes so any mail during migration is captured at the target.

Step 4 — DNS & MX cutover (low downtime)

  1. At least 48 hours (ideally 72) prior: lower MX TTL to 60–300 seconds.
  2. Pre-stage MX records on the target so they’re ready; do not remove old MX until final cutover.
  3. At cutover time: switch MX to target and perform a final incremental sync with imapsync (use --useuid --syncinternaldates --no-modulesversion).
  4. Keep the old provider’s mail routing in place for at least 48–72 hours post-cutover as a safety net.

Scripted MX update example (Cloudflare)

# update_mx_cf.sh
# Requires: CF_API_TOKEN, ZONE_ID
curl -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$MX_RECORD_ID" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"type":"MX","name":"example.com","content":"mx1.newmail.example.com","priority":10,"ttl":60}'

Step 5 — Verification and integrity checks

Never assume counts are equal. Verify per-user and per-folder counts plus checksums of exported archives.

Quick IMAP message count script (Python)

# imap_counts.py
import imaplib, sys

host, user, pwd = sys.argv[1:4]
M = imaplib.IMAP4_SSL(host)
M.login(user, pwd)
typ, boxes = M.list()
for b in boxes:
    # LIST responses look like: (\HasNoChildren) "/" "INBOX"
    decoded = b.decode()
    name = decoded.split('"')[-2] if '"' in decoded else decoded.rsplit(' ', 1)[-1]
    typ, data = M.select('"' + name + '"', readonly=True)  # readonly: no flag changes
    if typ == 'OK':
        print(name, int(data[0]))  # SELECT returns the EXISTS (message) count
M.logout()

Compare source and target counts. For exported mbox files, compute SHA-256 checksums:

sha256sum user@example.com.mbox > user@example.com.mbox.sha256
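Comparing the two sides can itself be scripted. The helper below diffs per-folder message counts from two imap_counts.py runs; the folder names and numbers are illustrative:

```python
def compare_counts(source, target):
    """Diff per-folder message counts (e.g. parsed from imap_counts.py output)."""
    problems = {}
    for folder, n in source.items():
        m = target.get(folder)
        if m is None:
            problems[folder] = "missing on target"
        elif m < n:
            problems[folder] = "target has %d of %d messages" % (m, n)
    return problems

src = {"INBOX": 1200, "Sent": 430, "Archive/2024": 9911}
dst = {"INBOX": 1200, "Sent": 429, "Archive/2024": 9911}
print(compare_counts(src, dst))  # flags the Sent shortfall
```

Anything the diff flags should block cutover for that user until a re-sync clears it.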

Troubleshooting: common migration problems and fixes

  • Auth errors: Legacy username/password auth is often blocked; use OAuth2/XOAUTH2 or app passwords. For Workspace, set up a service account and use impersonation for imapsync.
  • Rate limits: Throttle parallel jobs; tune imapsync with --maxsize and --buffersize; stagger jobs per mailbox size tier.
  • Missing labels: Gmail exposes labels as IMAP folders, and a message with multiple labels appears in multiple folders. Confirm the folder mapping (imapsync's --regextrans2 can rename folders on the fly) and expect cross-folder duplicates unless you deduplicate on the target.
  • UID collisions: use the --useuid flag to avoid duplicates and ensure incremental runs are idempotent.
  • Large attachments timeouts: increase socket timeouts and use --buffersize; split very large mailboxes into folder-by-folder syncs.
  • Post-migration bounces: verify SPF, DKIM, DMARC, and update sending domains at the same time as MX cutover.
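For the last point, a quick pre-cutover sanity check of the records you are about to publish can run offline. These minimal validators check SPF/DMARC string syntax only; a real check would also resolve the live TXT records via DNS:

```python
def check_spf(record):
    """Minimal syntax check of an SPF TXT record (not a full RFC 7208 parse)."""
    return record.startswith("v=spf1 ") and record.rstrip().endswith(("-all", "~all", "?all"))

def check_dmarc(record):
    """Minimal syntax check of a DMARC TXT record: version tag plus a valid policy."""
    pairs = (t.split("=", 1) for t in record.split(";") if "=" in t)
    tags = {k.strip(): v.strip() for k, v in pairs}
    return tags.get("v") == "DMARC1" and tags.get("p") in ("none", "quarantine", "reject")

print(check_spf("v=spf1 include:_spf.newmail.example -all"))
print(check_dmarc("v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"))
```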

Performance and capacity planning (practical numbers)

Expect throughput variance: 1–4 GB/hour per concurrent imapsync process under normal network conditions. For tenants with hundreds of users, parallelize by mailbox size: run many small ones concurrently and schedule large mailboxes on dedicated workers.
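Those numbers translate into a rough wall-clock estimate per wave. The sketch below greedy-packs mailboxes onto workers (a longest-processing-time heuristic); the worker count and per-process throughput are assumptions to tune against your own measurements:

```python
def wave_hours(sizes_gb, workers=8, gb_per_hour=2.0):
    """Rough wall-clock estimate: assign each mailbox, largest first,
    to the least-loaded worker, and return the busiest worker's hours."""
    loads = [0.0] * workers
    for size in sorted(sizes_gb, reverse=True):
        loads[loads.index(min(loads))] += size / gb_per_hour
    return max(loads)

sizes = [42, 20, 15, 8, 8, 5, 3, 3, 2, 1]  # GB, taken from the inventory CSV
print(round(wave_hours(sizes), 1))
```

Note the 42 GB mailbox dominates: the wave cannot finish faster than its largest single mailbox, which is why large mailboxes belong on dedicated workers started early.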

Security and compliance considerations

  • Encrypt exported archives at rest and transit (AES-256 GCM, use client-side KMS).
  • Maintain an audit trail of who initiated each migration and when.
  • Respect data sovereignty — if moving mail across regions, confirm legal requirements.
  • Rotate service account keys post-migration.

Experience notes & real-world tips (from migrations in 2025–2026)

  • Expect policy-driven migrations to be time-sensitive. Build lightweight runbooks that non-specialists can follow under pressure.
  • Use a staging team to validate a small pilot (5–20 accounts) and iterate scripts before mass run.
  • Keep final sync windows short (60–300s MX TTL) and validate a subset of VIP mailboxes first.
  • Build a monitoring dashboard (mail volume, sync success/fail rate, last sync times) so leadership can see progress.
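A minimal feed for such a dashboard can be scraped straight from the sync logs. This sketch matches the echo lines emitted by batch_imapsync.sh above:

```python
import re

def sync_summary(log_lines):
    """Aggregate per-user progress from batch_imapsync log lines."""
    joined = "\n".join(log_lines)
    started = set(re.findall(r"Starting imapsync for (\S+) ->", joined))
    done = set()
    for line in log_lines:
        m = re.match(r"Completed: (\S+)", line)
        if m:
            done.add(m.group(1))
    return {"started": len(started), "completed": len(done),
            "pending": sorted(started - done)}

logs = [
    "Starting imapsync for a@old.example -> a@new.example",
    "Completed: a@old.example",
    "Starting imapsync for b@old.example -> b@new.example",
]
print(sync_summary(logs))
```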

“Treat migration like disaster recovery — have scripted, tested automation and a rollback plan.” — Senior IT migration lead, 2026

Advanced strategies & future-proofing

  1. Adopt provider-agnostic tooling: Rely on IMAP+OAuth, Graph API wrappers, and standardized CSV mappings to avoid retooling per provider.
  2. Automated mailbox snapshots: schedule periodic snapshots to object storage so you always have a recent export.
  3. Use infrastructure-as-code for DNS and provisioning: Terraform modules for DNS, and IaC for user provisioning keep migrations repeatable.
  4. Implement staged deprecation: when deprecating an old provider, announce, dual-deliver, and run automated syncs for a defined grace period.

Checklist: Runbook for a single migration wave

  1. Export CSV: source user list + new mailbox mapping.
  2. Pre-provision new mailboxes using script; record credentials to secret manager.
  3. Run initial full imapsync for all users; capture logs and checksums.
  4. Enable dual-delivery or forwarding if possible.
  5. Lower DNS TTLs and stage MX records on new target.
  6. At cutover, change MX via API and run final incremental imapsync.
  7. Run verification scripts for counts and checksums; keep logs for 90+ days.
  8. Rotate keys and remove access from old platform only after confirmation window.

Final thoughts: preparing for the next wave of provider changes

Provider policy shifts in early 2026 underscore the need for an automated, auditable migration capability. Investing a few engineering days now in secure scripts, provisioning automation, and a verification pipeline saves weeks under pressure and ensures your org can respond quickly to policy, privacy, or security events.

Call to action

Need a tested migration runbook, custom imapsync orchestration, or Graph API provisioning scripts tailored to your environment? Spin up a pilot migration this week: use the CSV-driven scripts above as the starting point, and automate secrets with Vault or your cloud provider’s secret manager to get a secure, repeatable process in place.
