Microservices Learning Resources

Navigate your architecture journey with resources built from real project experience

We've spent years breaking monoliths into services, debugging distributed systems at 3 AM, and learning what actually works beyond the textbook theories. These materials reflect that reality—not sanitized case studies, but honest insights from building systems that handle real traffic.

Common Questions We Actually Hear

Starting Out

Should I even be thinking about microservices yet?

Probably not if you're asking this question. And that's completely fine. Most systems don't need distributed architecture complexity until they hit specific scaling or team coordination problems. Start with understanding when monoliths actually become painful—usually around team sizes of 15+ developers or when deployment cycles create bottlenecks. Our foundation materials cover these inflection points with examples from systems we've actually migrated.

Mid-Journey

How do I handle data consistency across services?

This question keeps architects up at night. The short answer is you don't get the same guarantees as a single database. Our intermediate materials walk through saga patterns, event sourcing approaches, and compensating transactions—but more importantly, they explain when each pattern actually solves real problems versus adding unnecessary complexity. We include decision frameworks based on transaction volume, acceptable latency, and business criticality.
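To make compensating transactions concrete, here's a minimal orchestration sketch in Go. The step names and the saga-runner shape are our illustrative assumptions, not code from the materials:

```go
package main

import (
	"errors"
	"fmt"
)

// step pairs a forward action with the compensation that undoes it.
type step struct {
	name       string
	action     func() error
	compensate func() error
}

// runSaga executes steps in order. On failure, it runs the compensations
// of every completed step in reverse order, then reports the original error.
func runSaga(steps []step) error {
	var done []step
	for _, s := range steps {
		if err := s.action(); err != nil {
			for i := len(done) - 1; i >= 0; i-- {
				// A production system must retry or park failed
				// compensations; this sketch only logs them.
				if cerr := done[i].compensate(); cerr != nil {
					fmt.Printf("compensation %q failed: %v\n", done[i].name, cerr)
				}
			}
			return fmt.Errorf("saga aborted at %q: %w", s.name, err)
		}
		done = append(done, s)
	}
	return nil
}

func main() {
	err := runSaga([]step{
		{"reserve-inventory", func() error { return nil }, func() error { return nil }},
		{"charge-card", func() error { return errors.New("card declined") }, func() error { return nil }},
	})
	fmt.Println(err) // saga aborted at "charge-card": card declined
}
```

The shape is the point: every step that can fail downstream needs a defined, ideally idempotent, undo. Deciding what those undos mean for your business is where the real work lives.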

Advanced Territory

What about service mesh versus library-based approaches?

Here's where things get interesting. Service meshes solve specific problems around observability, security, and traffic management—but they also introduce operational complexity that smaller teams struggle to manage. Our advanced materials compare real implementations: when Istio makes sense, when Linkerd fits better, and honestly, when you're better off with well-configured libraries and good monitoring. We include actual performance benchmarks from production systems handling 50k+ requests per second.
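To ground the "well-configured libraries" option, here's a minimal Go sketch of client-side timeouts and retries with exponential backoff, the kind of policy a mesh sidecar would otherwise inject transparently. The URL, attempt count, and backoff values are illustrative assumptions:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// callWithRetry issues a GET with a per-request timeout and retries
// transient failures with exponential backoff (100ms, 200ms, 400ms, ...).
func callWithRetry(url string, attempts int) (*http.Response, error) {
	client := &http.Client{Timeout: 2 * time.Second}
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil
		}
		if err == nil {
			resp.Body.Close()
			lastErr = fmt.Errorf("server error: %s", resp.Status)
		} else {
			lastErr = err
		}
		time.Sleep(time.Duration(1<<i) * 100 * time.Millisecond)
	}
	return nil, fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}

func main() {
	resp, err := callWithRetry("http://inventory.internal/healthz", 3)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```

The tradeoff: a mesh gives you this for every service without touching code, while the library approach keeps the behavior visible and debuggable in one place. Which wins depends on how many teams and languages you're coordinating.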

Practical Concerns

How do I convince my team this is worth the effort?

You probably shouldn't try to convince them immediately. Instead, identify one specific pain point—maybe it's deployment conflicts, or database lock contention, or team coordination issues—and show how service boundaries could address that specific problem. Our materials include migration strategies that start small, with reversible decisions and measurable outcomes. The goal isn't to rebuild everything; it's to solve actual problems your team is experiencing right now.

Taiwan Context

Are these approaches relevant for smaller Asian markets?

Absolutely, though the scale considerations differ. Taiwan's market presents unique opportunities: lower latency requirements within the region, specific data residency considerations, and often tighter integration with mobile-first architectures. Our resources include examples from systems serving Taiwan, Hong Kong, and Southeast Asian markets, where network topology and regulatory requirements create different tradeoffs than US or European deployments.

What People Actually Say

"These materials saved us from making expensive mistakes. We were about to split our monolith into 27 services—turns out we really only needed 6. The decision frameworks helped us identify actual boundaries instead of arbitrary ones."

Linnea Valtonen

Engineering Lead, FinTech Startup

"The monitoring section alone was worth it. We thought we had observability figured out until we hit our first cross-service cascade failure. The debugging strategies from real incidents helped us build much better instrumentation."

Kazimierz Dvorský

Platform Engineer, E-commerce

"I appreciated the honest assessment of what doesn't work. Too many resources only show the success stories. Learning about failed approaches and antipatterns prevented us from repeating common mistakes."

Branimir Kjeldsen

Solutions Architect, Healthcare

Real Projects, Real Lessons

E-Commerce Platform Migration

Twelve-month journey from a Rails monolith to event-driven microservices. Started with an order-processing service, learned hard lessons about distributed transactions, and ended up with a hybrid approach that kept checkout as a managed monolith while splitting out inventory and fulfillment.

Event Sourcing · CQRS · Kafka

Key insight: Not everything needs to be a service

Real-Time Analytics System

Built for a Taiwan-based gaming company processing 2 million events per minute. The initial architecture used REST APIs between services, and performance was terrible. Rebuilt with gRPC and got a 70% latency reduction. Materials include the actual performance testing methodology and benchmarks.

gRPC · Kubernetes · TimescaleDB

Key insight: Protocol choice matters more than architecture diagrams suggest

Payment Processing Rebuild

Compliance requirements drove this one: the system needed audit trails, geographic data residency, and PCI DSS compliance. Used the saga pattern for transaction coordination. The case study walks through specific regulatory challenges in Asian markets and how service boundaries helped meet those requirements.

Sagas · Compliance · PostgreSQL

Key insight: Regulatory requirements can drive good architecture decisions

API Gateway Evolution

Started with Kong, then moved to a custom gateway when off-the-shelf solutions couldn't handle our rate limiting needs. This analysis covers when to build versus buy, performance considerations, and the operational overhead nobody talks about; a short sketch of the core idea appears below.

Rate Limiting · Auth · Go

Key insight: Sometimes custom code is simpler than configuring complex systems
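For a flavor of how little code a basic version takes, here's a minimal per-client token-bucket middleware sketch using Go's golang.org/x/time/rate package. The limits, the keying by remote address, and the missing bucket eviction are illustrative simplifications, not how the gateway above actually works:

```go
package main

import (
	"net/http"
	"sync"

	"golang.org/x/time/rate"
)

// clientLimiter hands out one token bucket per client key.
type clientLimiter struct {
	mu      sync.Mutex
	buckets map[string]*rate.Limiter
}

func (c *clientLimiter) get(key string) *rate.Limiter {
	c.mu.Lock()
	defer c.mu.Unlock()
	l, ok := c.buckets[key]
	if !ok {
		l = rate.NewLimiter(rate.Limit(10), 20) // 10 req/s steady, burst of 20
		c.buckets[key] = l
	}
	return l
}

// limit rejects requests from clients that have drained their bucket.
// A real gateway would key on an API token and evict idle buckets.
func (c *clientLimiter) limit(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !c.get(r.RemoteAddr).Allow() {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	cl := &clientLimiter{buckets: make(map[string]*rate.Limiter)}
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok\n"))
	})
	http.ListenAndServe(":8080", cl.limit(mux))
}
```

Fifty lines you fully understand can beat a configuration surface you don't.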

Mobile Backend Services

Supporting iOS and Android apps across Southeast Asia meant dealing with unreliable networks, offline-first design, and sync conflicts. Service boundaries followed mobile app features rather than backend domain logic—sometimes that's the right call.

Offline-First · Sync · GraphQL

Key insight: Client needs should influence service design

Observability Implementation

You can't debug what you can't see. This project added distributed tracing, metrics, and structured logging across 14 services. Materials cover tool selection, instrumentation strategies, and building dashboards that actually help during incidents at 2 AM; see the tracing sketch below.

Jaeger · Prometheus · ELK Stack

Key insight: Observability isn't optional, it's foundational
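To show what "instrumentation" means at the code level, here's a minimal Go sketch using the OpenTelemetry tracing API, which can export spans to Jaeger among other backends. The tracer name, span name, and pricing function are hypothetical:

```go
package main

import (
	"context"
	"fmt"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
)

// lookupPrice wraps its work in a span so this hop shows up as part of
// one end-to-end trace in the tracing backend.
func lookupPrice(ctx context.Context, sku string) (int, error) {
	ctx, span := otel.Tracer("pricing-service").Start(ctx, "lookupPrice")
	defer span.End()

	span.SetAttributes(attribute.String("sku", sku))
	_ = ctx // pass this ctx to downstream calls so trace context propagates
	return 4200, nil
}

func main() {
	// A real service first installs an SDK TracerProvider wired to an
	// exporter; without one, the API calls above are safe no-ops, which
	// keeps this sketch runnable as-is.
	price, _ := lookupPrice(context.Background(), "SKU-123")
	fmt.Println("price:", price)
}
```

The pattern to notice: the span travels through context.Context, so propagating a trace across 14 services is mostly a matter of consistently threading ctx, not heroics.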

Ready to Start Learning?

New cohorts begin September 2025. We keep groups small—usually 12-15 people—so there's room for questions and real discussion.

Ask About Upcoming Sessions