
AWS, Naver Cloud, and GCP side by side through a DevOps lens (2025)

Compare AWS, Naver Cloud Platform (NCP), and GCP from a DevOps angle: startup/public-sector/global scenarios, Terraform provider skeletons, kubectl and SSO in practice, a one-week PoC checklist, and incident time zones.


Key takeaway

In one line: Cloud comparison is not a spec sheet—it starts from constraints (data residency, team skill, contracts). All three are usable; what differs is the reason you reach for one first.

| Situation | Common pick | One-line reason |
| --- | --- | --- |
| Domestic data / contract pins a region | More NCP | Easier accounting, docs, and tax-invoice flows in practice |
| Global SaaS, references, hiring pool | AWS or GCP | Tutorials, third parties, and partners are overwhelmingly there |
| “Just run Kubernetes well” | Whatever the team knows | Operating routines eat cost more than the logo |

Choosing from constraints first


Opening: why “which is best?” has no answer

The first question in the room is often “AWS, right?” or “Domestically, isn’t it Naver?”
I used to argue for one side; now I reframe: “What constraints must our team survive next quarter?”

At itemSCV we run web, APIs, batch, and occasional GPU workloads—often not a single vendor. CDN here, stateful DB there, experiments elsewhere. What matters is drawing boundaries so mixed names don’t break operations.

This article lines up AWS, Naver Cloud Platform (NCP), and Google Cloud (GCP). Azure competes too, but by discussion frequency and references in our team we skip it here (a separate post could cover it).


1. Start with the same picture: service name mapping

Cloud migration meetings derail most often because vocabulary differs. VPC, VMs, object storage, Kubernetes—the pattern is the same; only branding changes.

VPC, VM, object storage, K8s mapping

The diagram is for concept alignment. SKUs, generations, and prices change—re-check the console and official docs for quotes. Aligning Terraform module names and runbook titles this way still makes on-call calmer.
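As a starting point for that alignment, here is a rough concept-to-product map. The product names below reflect current branding as I understand it and do change; re-check each console before quoting them in a contract or runbook.

```yaml
# Concept-to-product mapping, useful for naming Terraform modules and
# runbook titles consistently. Verify current names in each console.
vpc:            { aws: VPC, ncp: VPC,                       gcp: VPC }
vm:             { aws: EC2, ncp: Server,                    gcp: Compute Engine }
object_storage: { aws: S3,  ncp: Object Storage,            gcp: Cloud Storage }
kubernetes:     { aws: EKS, ncp: Ncloud Kubernetes Service, gcp: GKE }
```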


2. Latency for Korean users: “Seoul region exists” is not the end

All three have Korea or nearby regions. In production, single-digit ms RTT can still matter—especially over mobile, public Wi‑Fi, or corporate VPN paths that never appear on slides.

Sketch: Korean users and regions

Practical tips:

  • Don’t stop at ping on the production traffic path—re-measure real API handshakes.
  • Put CDN in front of static assets; keep origin focused on compute and DB.
  • I’ve seen teams move overseas for cost and then bleed people cost when support time zones misalign.
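The first tip needs nothing beyond curl: its `-w` write-out timers split one request into DNS, TLS handshake, and time-to-first-byte, which is what you actually want instead of ICMP ping. The URL below is a placeholder; point it at your real API health check, from the real client path (mobile, VPN, office Wi‑Fi).

```shell
# Break a single request into phases instead of trusting ping.
# dns = name lookup, tls = handshake complete, ttfb = first byte received.
measure() {
  curl -so /dev/null --max-time 5 \
       -w "$1 dns=%{time_namelookup}s tls=%{time_appconnect}s ttfb=%{time_starttransfer}s\n" \
       "$1"
}

measure "https://example.com/healthz" || true   # placeholder endpoint
```

Run it a few times per candidate region and compare the spread, not just the best case.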

3. What people really mean when they choose Naver Cloud

Choosing NCP isn’t always “patriotism.” Recurring themes:

  • Contracts, tax invoices, domestic billing are easier for finance/procurement to explain
  • Korean support and training speeds onboarding (varies with team English level)
  • Some RFPs explicitly require domestic IDC wording

Conversely, if a global third party only “officially supports AWS,” NCP-only is hard. Then you see split designs: front/data on NCP, specific integrations on AWS.


4. Traits of orgs that default to AWS

Teams that treat AWS as default often:

  • Already have IAM, VPC, and org account structure—migration is expensive
  • Want reference architectures straight from marketplace/partners
  • Plan multi-region expansion with one pattern

Caveat: there are so many services that small teams get lost. “We turned it on” resources pile up until cost alerts fire. In retros we keep concluding that tag policy should have been enforced earlier.


5. Where GCP shows up (especially data / ML)

GCP often enters via BigQuery, GKE’s operational model, or a specific data pipeline. “You can do similar elsewhere” is true—but familiar query and permission models cut migration cost.

Korean material density and domestic partner pools can feel different from AWS/NCP, so you sometimes see split by team: “data on GCP, apps elsewhere.” Without documenting peering and cost allocation, that becomes political later (speaking from experience).


6. Don’t end cost comparison at a table

We avoid blogging “A is N% cheaper than B” because commitments, savings plans, spot, and credits make every reader’s answer different. A question list you can use internally is still worth sharing.

| Question | Why it matters |
| --- | --- |
| Is traffic steady or spiky? | Reserved vs. on-demand mix |
| What share of revenue does egress eat? | Origin, CDN, and cross-region traffic grow quietly |
| Can you use a support plan in incidents? | Can second-shift on-call handle English calls? |

7. Multi-cloud: a boundary problem, not bragging rights

Framing multi-cloud only as “high availability” is thin. What you usually need:

  • Identity boundaries — accounts don’t linger when people leave
  • Network boundaries — peering vs public endpoints
  • Observability — one pane of glass vs hopping consoles

With Terraform or Pulumi, three vendors in one repo without module ownership rules becomes PR hell. Even “only Alice merges NCP” helps.
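That lightweight rule can also be enforced mechanically. A sketch using a GitHub CODEOWNERS file; the paths and handles below are hypothetical, adjust to your repo layout:

```
# Hypothetical CODEOWNERS: route provider-specific modules to their owners.
/modules/ncloud/  @alice                  # "only Alice merges NCP"
/modules/aws/     @bob @platform-team
/modules/gcp/     @carol
```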


7-1. Three real-world profiles: which one are we closest to?

Compressed from recurring meeting profiles, not fiction. Numbers are illustrative—don’t copy-paste as budgets.

A. Series A startup (4 engineers, Korea only)

| Topic | Reality | Common choice |
| --- | --- | --- |
| Traffic | Under ~100k req/day, with spikes | Managed K8s or single-region VMs + autoscale |
| Data | Contract: Korean PII storage | NCP or AWS Seoul with explicit region lock |
| Integrations | Payment/email SaaS | Vendors with more AWS/GCP samples in docs win |

Compromises like VPC peering or API-only cross-cloud integration appear more often than a strict “NCP only” vs. “AWS only.”

B. Public / quasi-public procurement

| Topic | Reality | Common choice |
| --- | --- | --- |
| Contract | Domestic IDC, domestic billing | NCP share often large |
| Audit | Access logs, key-management evidence | CloudTrail-class + KMS documentation |
| Staff | Handoff with SI vendors | Korean runbooks help maintenance |

C. Global SaaS (US, EU, KR customers)

| Topic | Reality | Common choice |
| --- | --- | --- |
| Regions | EU residency + low latency in KR | Multi-region required; teams often know AWS or GCP |
| Billing | Stripe + multi-country tax | Single org bill via AWS Organizations / GCP Billing |

7-2. Terraform provider “skeleton” only (not copy-paste prod)

When you PoC provider swaps in a single repo, getting the block shapes right eats the most time. The skeleton below is shape-only, not production code.
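A minimal shape, assuming the official registry providers (`hashicorp/aws`, `NaverCloudPlatform/ncloud`, `hashicorp/google`); the version constraints and regions are placeholders, so pin to whatever your policy actually allows:

```hcl
# Shape-only skeleton: three providers side by side in one root module.
# Versions below are placeholders; pin to policy-approved versions.
terraform {
  required_version = ">= 1.5"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    ncloud = {
      source  = "NaverCloudPlatform/ncloud"
      version = "~> 3.0"
    }
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }
}

provider "aws"    { region = "ap-northeast-2" }  # Seoul
provider "ncloud" { region = "KR" }              # NCP Korea
provider "google" { region = "asia-northeast3" } # Seoul
```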

NCP has a Terraform provider too—pin allowed versions and modules in policy first. We broke prod once pasting “latest blog” examples and hitting a provider major upgrade; version pins are mandatory for us.


7-3. kubectl context switching—muscle memory for multi-cloud

Rotating among EKS, NCP Kubernetes, and GKE, the question “which cluster am I in?” causes incidents.

Rule of thumb: show ctx in your prompt or use kubectx with color. The joke about kubectl delete pod hitting the wrong company’s cluster happens for real.
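A minimal version of that rule, if you don’t want another tool: a prompt helper that always shows the active context (a lightweight stand-in for kubectx; context names like `ncp-prod` are examples).

```shell
# Show the active kubectl context so you always know which cluster a
# command will hit. Falls back gracefully if kubectl is not installed.
kube_ctx() {
  kubectl config current-context 2>/dev/null || echo "no-context"
}

# bash prompt, e.g. "[ncp-prod] ~/infra $"
PS1='[$(kube_ctx)] \w \$ '
```

Then switch explicitly with `kubectl config use-context <name>` and glance at the prompt before anything destructive.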


7-4. One-week PoC checklist (same scenario on all three)

| Day | Task | Example pass criteria |
| --- | --- | --- |
| 1 | VPC + subnets + NAT (if needed) | VM reaches internet and internal DB |
| 2 | Object storage upload + minimal IAM role | Read without app keys, role only |
| 3 | Observability: metrics + logs in one view | Fire one test alert |
| 4 | Click backup/snapshot restore | RTO matches the doc |
| 5 | Set cost alert thresholds | Confirm email on intentional overrun |
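Day 1’s pass criteria can be scripted as a tiny smoke test run from the new VM, so all three clouds are judged by the same check. The DB host and port below are placeholders for your environment; `check_tcp` assumes netcat is installed.

```shell
# Day-1 smoke test: outbound internet via NAT, internal DB reachability.
check_internet() { curl -fsS --max-time 5 "$1" >/dev/null; }
check_tcp()      { nc -z -w 3 "$1" "$2"; }   # requires netcat

check_internet "https://example.com" && echo "internet: ok" || echo "internet: blocked"
check_tcp "10.0.10.5" 5432           && echo "db: reachable" || echo "db: unreachable"  # placeholder DB
```

Commit the script with the PoC notes so the same row really is filled the same way three times.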

Going deep on only one vendor turns the PoC into a demo of a decision already made. Filling the same row three times is the point.


7-5. Incidents—time zone is operations cost

Real life: US-region outage, support window in PST → Korean on-call takes 3 a.m. English calls. Domestic vendors/partners can overlap business hours more comfortably.

| Question | Why it matters |
| --- | --- |
| Business support response SLA | Breathing room on P1 |
| Can on-call do English? | If not, weight domestic vendor/partner |

8. One-pager we fill when a new project starts

Half joking—if we fill these five lines, meetings often halve:

  1. Countries where data may live (agree up front)
  2. Mandatory SaaS integrations (any vendor lock-in?)
  3. On-call language (when English calls are OK)
  4. Budget cap and alert channel (which Slack?)
  5. Can we migrate in 3 months? (if not, accept lock-in consciously)
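If it helps, the five lines fit into a short template you can drop into the project repo; every value below is a placeholder.

```yaml
# Project one-pager: fill before the first architecture meeting.
data_residency: [KR]               # 1. countries where data may live
mandatory_saas: [payments, email]  # 2. integrations that may pin a vendor
oncall_language: ko                # 3. when are English calls acceptable?
budget:
  monthly_cap_usd: 3000            # 4. cap (placeholder)
  alert_channel: "#cloud-cost"     #    hypothetical Slack channel
migratable_in_3_months: false      # 5. if false, accept lock-in consciously
```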

Closing

There is no permanent #1 among AWS, Naver Cloud, and GCP—that’s marketing. In the field, constraints and team state win. This article is a starting point: “under these conditions, we often see X.”

Product specs change quarterly; before big decisions, a small PoC is still the cheapest proof. Walk the same scenario—VPC, backup, monitoring—on a tiny workload, score it including who actually operates it.
