A complete cloud security assessment before critical cloud migration maps your assets, classifies data, checks identities and permissions, validates encryption and network controls, and tests against benchmarks and compliance rules. For teams in Brazil, it must explicitly consider LGPD, shared-responsibility gaps, and differences between AWS, Azure and Google Cloud services.
Critical Findings Snapshot
- Never migrate critical workloads without an explicit asset and data inventory, including shadow IT and unmanaged SaaS.
- Over-privileged identities and missing MFA are usually the fastest paths to compromise during and after migration.
- Data classification, encryption in transit/at rest and residency mapping are mandatory before selecting any cloud region.
- Flat networks, broad security groups and exposed management interfaces create high-impact risks in hybrid environments.
- Baseline every workload against CIS Benchmarks and native tools (AWS Security Hub, Microsoft Defender for Cloud, Google Security Command Center).
- Without centralized logging, alerting and runbooks, even good controls fail under real incident pressure.
Inventory and Attack Surface Mapping
A structured inventory phase decides whether a cloud security assessment before migration will add value or just slow you down.
This assessment is a strong fit when:
- You are moving critical or regulated data (e.g., financial, health, large volumes of Brazilian citizen data under LGPD).
- You are consolidating multiple data centers or SaaS apps into AWS, Azure or Google Cloud.
- You have unclear ownership of legacy systems and want to avoid migrating unknown vulnerabilities.
- Your board, auditor or cyber-insurance requires a documented cloud security assessment.
It is usually not the right move to run a full-scope assessment when:
- You are only testing non-sensitive proof-of-concept workloads with synthetic data and no internet exposure.
- You have a fixed go-live deadline in days, leaving no time to remediate even critical issues.
- You lack minimal access to current infrastructure; in this case, start by fixing visibility and CMDB quality first.
Minimum activities during inventory and attack surface mapping:
- List all applications and services in scope: web, APIs, batch jobs, messaging, data pipelines.
- Identify all data stores: databases, file shares, data lakes, SaaS exports, backups and archives.
- Map entry points: domains, VPNs, remote access tools, exposed APIs, third-party integrations.
- Confirm business owners and technical owners for every system, to approve later risk decisions.
- Detect shadow IT by checking DNS, identity providers and expense reports for unregistered SaaS usage.
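To make the ownership check concrete on AWS, a minimal sketch like the one below can enumerate tagged and previously tagged resources in one region and flag anything without an owner. The `owner` tag name and region are assumptions to adapt to your own tagging standard; Azure Resource Graph and Google Cloud Asset Inventory support equivalent queries.

```python
import boto3

# Assumes read-only credentials in the account being inventoried.
# The Resource Groups Tagging API returns tagged and previously tagged
# resources in a region, so this is a quick first pass rather than a
# complete inventory; never-tagged resources need other discovery paths.
client = boto3.client("resourcegroupstaggingapi", region_name="us-east-1")

unowned = []
paginator = client.get_paginator("get_resources")
for page in paginator.paginate():
    for resource in page["ResourceTagMappingList"]:
        tag_keys = {t["Key"].lower() for t in resource.get("Tags", [])}
        # Flag anything without an explicit owner for follow-up with the business.
        if "owner" not in tag_keys:
            unowned.append(resource["ResourceARN"])

print(f"{len(unowned)} resources have no owner tag:")
for arn in unowned:
    print(" -", arn)
```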
Identity, Access and Privilege Controls Review

This phase requires specific prerequisites so it can be completed safely and with useful outcomes.
Organizational and access requirements:
- Named contacts for identity management (IdP admin, AD/AAD admin, cloud account owners).
- Read-only access to current identity providers (AD/LDAP, Azure AD/Microsoft Entra ID, Google Workspace, Okta, etc.).
- Access to policies and procedures covering joiners/movers/leavers and privileged access management.
Technical tools and environments:
- For AWS: at least one read-only IAM role for assessment, with permission to list IAM users, roles, policies, groups and CloudTrail events.
- For Azure: Security Reader access in relevant subscriptions and Microsoft Entra ID (Azure AD), plus Microsoft Defender for Cloud and Identity Protection enabled where possible.
- For Google Cloud: viewer roles at the organization or folder level and access to Cloud Asset Inventory and Cloud Audit Logs.
- A password vault or privileged access management (PAM) solution, even if basic, to review how admin credentials are handled.
Evidence and tools that help produce measurable results:
- Export lists of all users, service accounts and roles, with last-login and MFA status where available.
- Use native analyzers (AWS IAM Access Analyzer, Azure AD access reviews, Google Cloud Policy Analyzer) to detect unused or over-privileged roles.
- Document every exception where MFA cannot be enabled and set a clear remediation deadline.
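On AWS, one way to produce the user and MFA evidence described above is the IAM credential report. The sketch below is a minimal example, assuming a read-only assessment role with the two IAM report permissions; Microsoft Entra sign-in reports and Google Cloud Identity provide comparable exports.

```python
import csv
import io
import time

import boto3

# Assumes a role with iam:GenerateCredentialReport and iam:GetCredentialReport.
iam = boto3.client("iam")

# Credential reports are generated asynchronously; poll until ready.
while iam.generate_credential_report()["State"] != "COMPLETE":
    time.sleep(2)

report = iam.get_credential_report()["Content"].decode("utf-8")
for row in csv.DictReader(io.StringIO(report)):
    # Flag console users without MFA as findings with a remediation deadline.
    if row["password_enabled"] == "true" and row["mfa_active"] == "false":
        print(f"{row['user']}: console access without MFA "
              f"(password last used: {row['password_last_used']})")
```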
Data Classification, Encryption and Residency Checks

Before detailing the steps, consider these risks and limitations of this phase:
- Incomplete data discovery means some sensitive records may move to the cloud without any controls.
- Wrong residency assumptions (for example, backups leaving Brazil) can create silent LGPD and contractual violations.
- Misconfigured encryption may give a false sense of safety while keys remain exposed or unmanaged.
- Manual classification processes may not scale for large data lakes and continuous ingestion.
Follow this ordered sequence to run a safe, repeatable classification and encryption review before migration.
- Define classification levels and examples
Create a simple, organization-wide model that people can apply consistently.
- Use 3-4 levels such as Public, Internal, Confidential, Restricted.
- Map each level to examples: customer PII, payment data, source code, logs, analytics datasets.
- Align definitions with LGPD concepts of personal and sensitive personal data.
- Discover and catalog all data stores
Build a register of every repository planned for migration and those indirectly connected to them.
- Include databases, object storage, shared folders, email archives, SaaS exports and backup systems.
- Record size, technology, owner, business process and current protection (encryption, access controls).
- Note cross-border data flows, especially data stored or replicated outside Brazil.
- Assign a classification to each data set
Use your model to label each repository and key tables or buckets.
- Start with the most critical systems that will migrate first.
- Confirm classifications with data owners to avoid under- or over-protection.
- Document justification for Restricted or high-risk data (e.g., health, biometrics, financial identifiers).
- Plan encryption standards for transit and at rest
Decide how each classification level must be protected in the target cloud.
- In AWS, define which data uses KMS-managed keys, CloudHSM or customer-managed keys for services like S3, RDS and EBS.
- In Azure, plan the use of Azure Key Vault keys for Storage, SQL and disk encryption, including customer-managed keys where required.
- In Google Cloud, map use of Cloud KMS and CMEK options for BigQuery, Cloud Storage and persistent disks.
- Require TLS for all connections, including internal services, with clear cipher and certificate policies.
- Verify residency and data sovereignty constraints
Match every data set against legal, contractual and regulatory obligations.
- Identify which data must remain in Brazil or specific regions and which can be global.
- Check each provider’s region and backup behavior to confirm effective residency, not just primary region labels.
- Document approved regions for every classification level and exception processes.
- Define measurable acceptance criteria before migration
Convert the above decisions into simple go/no-go checks, as in the sketch after this sequence.
- For each system, require documented classification, confirmed encryption configuration and approved region list.
- Require that all keys are in a managed key management system with clear ownership and rotation rules.
- Block cut-over if Restricted data would be stored unencrypted or in an unapproved region.
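As an example of turning these go/no-go checks into an automated gate on AWS, the minimal sketch below verifies default encryption and an approved-region list for every S3 bucket. `APPROVED_REGIONS` is a placeholder for your own residency policy; Azure Storage and Google Cloud Storage allow analogous checks.

```python
import boto3
from botocore.exceptions import ClientError

# Placeholder for your residency policy, e.g. keep Restricted data in Brazil.
APPROVED_REGIONS = {"sa-east-1"}

s3 = boto3.client("s3")
failures = []

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]

    # get_bucket_location returns None for us-east-1 by API convention.
    region = s3.get_bucket_location(Bucket=name)["LocationConstraint"] or "us-east-1"
    if region not in APPROVED_REGIONS:
        failures.append(f"{name}: stored in unapproved region {region}")

    try:
        s3.get_bucket_encryption(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            failures.append(f"{name}: no default encryption configured")
        else:
            raise

# Any failure here should block cut-over for the affected system.
for failure in failures:
    print("FAIL:", failure)
```

The table below maps the highest-priority risks in this phase to mitigations and the evidence you should collect.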
| Priority | Risk | Impact | Likelihood | Recommended mitigation and evidence |
|---|---|---|---|---|
| 1 | Sensitive data migrated without proper classification or encryption | Severe legal and reputational damage, LGPD penalties, breach disclosure obligations | High | Complete classification inventory, enforce encryption policies and require screenshots or IaC configs proving encryption at rest and TLS in transit. |
| 2 | Data stored or backed up in unapproved regions | Regulatory non-compliance and breach of customer contracts | Medium | Define allowed regions, restrict account creation, use policies or organization constraints; capture provider region configuration exports as evidence. |
| 3 | Unmanaged or shared encryption keys | Keys stolen or misused, making decryption of large datasets possible | Medium | Centralize keys in KMS or Key Vault, restrict key usage via IAM, enable key rotation; export key usage logs as verification. |
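For the region restriction in risk 2, one option on AWS is a service control policy (SCP) at the organization level. The sketch below is illustrative only: the approved region, policy name and exempted global services are assumptions, it must run from the organization's management account, and Azure Policy or Google Cloud organization policy constraints play the same role.

```python
import json

import boto3

# Deny requests outside approved regions. Global services (IAM, STS,
# Organizations, support) need exemptions; the list here is trimmed
# for brevity and should be extended for your environment.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideApprovedRegions",
        "Effect": "Deny",
        "NotAction": ["iam:*", "organizations:*", "sts:*", "support:*"],
        "Resource": "*",
        "Condition": {"StringNotEquals": {"aws:RequestedRegion": ["sa-east-1"]}}
    }]
}

orgs = boto3.client("organizations")
response = orgs.create_policy(
    Name="restrict-to-approved-regions",
    Description="Deny activity outside regions approved for data residency",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
print("Created policy:", response["Policy"]["PolicySummary"]["Id"])
```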
Network Architecture and Perimeter Protections

Use this checklist to validate that the target architecture and migration plan do not introduce unsafe exposure.
- Management interfaces (cloud consoles, bastion hosts, jump boxes, VPNs) are never exposed directly to the internet and always require MFA.
- Virtual networks and subnets are segmented by environment (prod, staging, dev) and sensitivity, not flat or shared indiscriminately.
- Security groups, NSGs or firewall rules are based on least privilege, using specific ports and CIDR ranges instead of broad any-any rules.
- Inbound access for administrators is limited to controlled IP ranges or VPN, never from arbitrary public networks.
- Outbound traffic controls and egress filtering exist for critical workloads to prevent data exfiltration and command-and-control traffic.
- Web-facing applications are protected by a web application firewall (AWS WAF, Azure WAF, Cloud Armor or equivalent) with logging enabled.
- DDoS protections are enabled or subscribed for critical public services, including DNS-level protection where possible.
- Hybrid connectivity (VPN, Direct Connect, ExpressRoute, Cloud VPN or Interconnect) has defined trust boundaries and route filters to avoid accidental full mesh.
- Network logs (AWS VPC Flow Logs, Azure NSG flow logs, Google Cloud VPC Flow Logs) are enabled and sent to a central log and SIEM environment.
- Change management exists for network rules, with approvals and periodic review of unused or overly broad access.
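To check the least-privilege rules in this list programmatically on AWS, a small sketch can scan security groups for sensitive ports exposed to the internet. The region and port list below are assumptions; the same query applies to Azure NSGs and Google Cloud firewall rules.

```python
import boto3

# Assumes ec2:DescribeSecurityGroups in each in-scope region; the region
# and sensitive-port list are illustrative assumptions.
SENSITIVE_PORTS = {22, 3389, 3306, 5432}  # SSH, RDP, MySQL, PostgreSQL

ec2 = boto3.client("ec2", region_name="sa-east-1")

for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group["IpPermissions"]:
        if not any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
            continue
        if rule.get("IpProtocol") == "-1":
            # Protocol -1 means all traffic on all ports: the classic any-any rule.
            print(f"{group['GroupId']} ({group['GroupName']}): "
                  f"ALL traffic open to 0.0.0.0/0")
        elif rule.get("FromPort") is not None and any(
            rule["FromPort"] <= p <= rule["ToPort"] for p in SENSITIVE_PORTS
        ):
            print(f"{group['GroupId']} ({group['GroupName']}): "
                  f"ports {rule['FromPort']}-{rule['ToPort']} open to 0.0.0.0/0")
```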
Workload and Configuration Assessment against Benchmarks
The following are common mistakes when teams try to align workloads with benchmarks such as CIS profiles and provider best practices.
- Relying only on manual review and screenshots, instead of using automated tools like AWS Security Hub, Microsoft Defender for Cloud or Google Security Command Center.
- Applying benchmarks blindly without considering workload context, leading to unnecessary impact on performance or availability.
- Ignoring infrastructure as code (Terraform, ARM/Bicep, Cloud Deployment Manager) and only checking live resources, so misconfigurations reappear after each deployment.
- Focusing only on production accounts and skipping non-production, where many breaches start via weak controls.
- Not separating shared services (logging, security tooling, directory services) from application workloads, complicating enforcement and monitoring.
- Disabling or downgrading security controls to pass short-term functional tests instead of fixing root causes.
- Failing to track benchmark exceptions and compensating controls, creating undocumented risk that management cannot see.
- Assuming that cloud-native services are secure by default and need no configuration review against benchmarks.
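To avoid relying on manual review alone, benchmark findings can be pulled through the providers' APIs. The minimal sketch below queries AWS Security Hub, assuming it is enabled with a subscribed standard such as CIS; Microsoft Defender for Cloud and Google Security Command Center expose similar query APIs.

```python
import boto3

# Assumes Security Hub is enabled in this region with at least one
# standard (e.g., CIS AWS Foundations Benchmark) subscribed.
securityhub = boto3.client("securityhub", region_name="us-east-1")

# Pull only active, unworked, critical findings for the remediation backlog.
filters = {
    "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    "SeverityLabel": [{"Value": "CRITICAL", "Comparison": "EQUALS"}],
    "WorkflowStatus": [{"Value": "NEW", "Comparison": "EQUALS"}],
}

paginator = securityhub.get_paginator("get_findings")
for page in paginator.paginate(Filters=filters):
    for finding in page["Findings"]:
        # Tie each finding back to the benchmark control and affected resource.
        resources = ", ".join(r["Id"] for r in finding["Resources"])
        print(f"[{finding['Severity']['Label']}] {finding['Title']} -> {resources}")
```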
Compliance Posture, Logging and Incident Readiness
Depending on your constraints, different approaches can help you reach an acceptable security level before migration without overloading the team.
- Full internal assessment with external validation
Your team runs the assessment using native tools and benchmarks, then requests a short external review to challenge assumptions.
Useful for organizations with mature security teams that still want an independent look before migrating workloads with critical data.
- Partnered approach with a specialized company
Engage a specialized cloud security assessment firm to co-design controls, execute sampling and build repeatable checklists.
This works well when you have limited cloud expertise in-house or must demonstrate independence to regulators or auditors.
- Focused assessment on high-risk systems only
Limit scope to a few critical applications or data sets, using targeted cloud security services for critical-data migration.
This is appropriate when time is constrained but you still need assurance over the riskiest assets before migration.
- Cloud-provider-aligned roadmap for gradual hardening
Use a cloud security assessment for AWS, Azure or Google Cloud migration as a roadmap rather than a gate, combining quick wins now and backlog items post-migration.
Often supported by external cloud security assessment consulting, this suits teams with strong delivery pressure and moderate risk tolerance.
Practical Migration Concerns
How early should I start a cloud security assessment before migration?
Start as soon as you have a preliminary application list and target cloud decision. This allows enough time to classify data, adjust architecture and remediate critical issues without blocking go-live at the last minute.
Can I migrate non-critical workloads without a full assessment?
Yes, you can use a lighter checklist for non-production or low-sensitivity systems. Still, apply basic controls such as MFA, minimum network exposure, encryption at rest and logging before moving any workload.
Do I need different assessments for AWS, Azure and Google Cloud?
The high-level methodology remains the same, but concrete controls and tools differ across providers. Plan one unified framework with provider-specific mappings for identity, logging, encryption and network features.
How does LGPD affect my migration security assessment?
LGPD emphasizes data minimization, purpose limitation and protection of personal and sensitive data. During assessment, pay special attention to data discovery, residency, access controls and logging for personal data related to Brazilian residents.
What if I do not have enough internal cloud security expertise?
Prioritize clear, short checklists and basic automation, then bring in external experts for design or review. A specialized partner can help you avoid common design flaws and train your team as part of the engagement.
Is penetration testing mandatory before every migration?
Penetration testing is highly recommended for internet-facing or high-risk systems but may not be practical for every minor change. Combine targeted tests on critical assets with continuous configuration and vulnerability scanning.
How do I know when the environment is "good enough" to migrate?
Define explicit acceptance criteria per system covering identity, network exposure, encryption, logging and backup. When all required criteria are met or accepted as documented risk by business owners, the environment is ready for migration.
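One way to keep those acceptance criteria auditable is to encode them as data. The sketch below is a hypothetical shape, with field names invented for illustration, in which a failed check passes only if it was explicitly accepted as documented risk by the business owner.

```python
from dataclasses import dataclass

# Hypothetical per-system acceptance criteria; the fields mirror the
# checks discussed above and are assumptions, not a standard.
@dataclass
class AcceptanceCriteria:
    system: str
    mfa_enforced: bool
    no_public_admin_interfaces: bool
    encryption_at_rest_verified: bool
    central_logging_enabled: bool
    backups_tested: bool
    accepted_risks: tuple = ()  # documented exceptions signed off by owners

    def ready_to_migrate(self) -> bool:
        checks = {
            "mfa_enforced": self.mfa_enforced,
            "no_public_admin_interfaces": self.no_public_admin_interfaces,
            "encryption_at_rest_verified": self.encryption_at_rest_verified,
            "central_logging_enabled": self.central_logging_enabled,
            "backups_tested": self.backups_tested,
        }
        # A failed check only passes if it was explicitly accepted as risk.
        return all(ok or name in self.accepted_risks for name, ok in checks.items())

crm = AcceptanceCriteria(
    system="crm-prod",
    mfa_enforced=True,
    no_public_admin_interfaces=True,
    encryption_at_rest_verified=True,
    central_logging_enabled=False,
    backups_tested=True,
    accepted_risks=("central_logging_enabled",),
)
print(crm.system, "go" if crm.ready_to_migrate() else "no-go")
```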
