
Anatomy of a High-Demand Ticketing System: How Rosalía's LUX Tour sold 142,000 tickets without collapsing

The real case of December 9-11, 2025: 400,000 people, 8 concerts, 2 hours

On December 9, 2025, more than 400,000 people tried to buy one of the 142,000 tickets available for Rosalía's LUX Tour in Spain. What happened in those 2 hours is a fascinating case study on high-demand system architecture, virtual queues, and the fundamental trade-off between user experience and transactional consistency.

📋 Table of Contents

PART 1: The Case and General Architecture

PART 2: The Technical Core

PART 3: Resilience and Conclusions

PART 1: The Case and General Architecture

1. The Real Case: Rosalía's LUX Tour

The context

On December 5, 2025, Rosalía announced her world tour LUX Tour 2026 to promote her fourth studio album "LUX", released in November 2025. The tour would include 42 concerts in 17 countries, with 8 dates in Spain:

🏟️ Madrid - Movistar Arena (capacity ~17,500)

  • 30 March 2026
  • 1 April 2026
  • 3 April 2026
  • 4 April 2026

🏟️ Barcelona - Palau Sant Jordi (capacity ~18,000)

  • 13 April 2026
  • 15 April 2026
  • 17 April 2026
  • 18 April 2026

🎫 Total tickets available in Spain: ~142,000

📅 The chronology of chaos

┌─────────────────────────────────────────────────────────────────────┐
│              SALE CHRONOLOGY - LUX TOUR 2025                        │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│  📅 5 DECEMBER 2025                                                 │
│  └─ 12:00 - Rosalía announces the LUX Tour on social media          │
│  └─ Within minutes, #LuxTour is a worldwide trending topic          │
│                                                                     │
│  📅 9 DECEMBER 2025 - PRESALE                                       │
│  └─ 10:00 - Presale opens (Artist Presale + Santander)              │
│  └─ 10:05 - Virtual queues exceed 100,000 people                    │
│  └─ 10:15 - First reports of 503 and 500 errors                     │
│  └─ 10:30 - Reported queue positions: 33,000 and 65,000             │
│  └─ 12:00 - Most presale inventory SOLD OUT                         │
│  └─ 12:45 - Queues of 400,000+ people                               │
│                                                                     │
│  📅 11 DECEMBER 2025 - GENERAL SALE                                 │
│  └─ 10:00 - General sale opens                                      │
│  └─ 10:01 - Instant queues of 50,000+ people                        │
│  └─ 10:30 - Total collapse of sales channels                        │
│  └─ 12:00 - TOTAL SOLD OUT - all 8 concerts gone                    │
│  └─ Total general sale time: ~2 HOURS                               │
│                                                                     │
│  📅 POST-SALE                                                       │
│  └─ Resale tickets listed at €1,000+                                │
│  └─ OCU files a complaint with the Ministry of Consumer Affairs     │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘

💬 Real user testimonials (X/Twitter, translated)

"After securing a grand total of zero tickets and now sitting at positions 33,000 and 65,000 in the LUX Tour virtual queues, I will proceed to stop being a fan of all mainstream music and artists"

— @iris_angsan

"Please, artists of the world, find some other way to sell tickets that isn't the Ticketmaster scam"

— @Franqk1997

"By the time I got in, only platinum tickets were left… this is robbery"

— @laxista01

2. The Numbers of Chaos

┌─────────────────────────────────────────────────────────────────────┐
│                  REAL FIGURES - LUX TOUR SPAIN                      │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│  DEMAND                                                             │
│  ────────────────────────────────────────────────────────────────  │
│  • Users in virtual queue (peak):        400,000+                   │
│  • Estimated simultaneous users:         1,000,000+                 │
│  • Reported queue positions:             33,000 - 65,000 per date   │
│  • Average wait time:                    1-3 hours                  │
│                                                                     │
│  INVENTORY                                                          │
│  ────────────────────────────────────────────────────────────────  │
│  • Madrid tickets (4 dates x ~17,500):   ~70,000                    │
│  • Barcelona tickets (4 x ~18,000):      ~72,000                    │
│  • TOTAL TICKETS SPAIN:                  ~142,000                   │
│                                                                     │
│  OFFICIAL PRICES                                                    │
│  ────────────────────────────────────────────────────────────────  │
│  • Minimum standard ticket:              €45                        │
│  • Maximum standard ticket:              €115                       │
│  • VIP packages:                         up to €500                 │
│  • Reported service fees:                €36.50                     │
│                                                                     │
│  RESALE (post-sale)                                                 │
│  ────────────────────────────────────────────────────────────────  │
│  • Prices on Viagogo:                    €1,000+                    │
│  • Markup over original price:           +400% to +900%             │
│                                                                     │
│  INFRASTRUCTURE (estimates)                                         │
│  ────────────────────────────────────────────────────────────────  │
│  • Requests per second (peak):           30,000 - 50,000 req/s      │
│  • Simultaneous WebSocket connections:   500,000 - 2,000,000        │
│  • Reported HTTP errors:                 503, 500                   │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘

3. General Architecture: The View from 10,000 Meters

🔽 The funnel analogy

The fundamental problem: how do you process 400,000 requests when you can safely handle only ~1,000 transactions per minute?

              DEMAND
                │
                │    400,000+ users
                │    trying to buy
                │    AT THE SAME TIME
                ▼
        ┌───────────────┐
        │   ████████    │  ← Everyone tries to get in
        │   ████████    │
        │   ████████    │
        └───────┬───────┘
                │
                │  FUNNEL (the system)
                │
        ┌───────▼───────┐
        │       │       │  ← The system can only process
        │       │       │    a limited number at a time
        │       ▼       │
        └───────────────┘
                │
                │    ~500-1,000 transactions/minute
                │    processed safely
                ▼
        ┌───────────────┐
        │    TICKETS    │
        │     SOLD      │
        │    142,000    │
        └───────────────┘

🏗️ Complete Architecture Diagram

📱💻📱💻📱💻📱💻📱💻📱💻
USERS (400,000+)

LAYER 1: CDN / EDGE (Fastly)
  • DDoS protection
  • Bot detection
  • Static content caching
  • Rate limiting by IP

⭐ LAYER 2: VIRTUAL WAITING ROOM
THE CRITICAL LAYER that decides who enters and who waits
  • Queue Manager Cluster (6-12 nodes)
  • Redis Cluster (128-256 GB RAM)
  • WebSocket for real-time updates

↓ Only ADMITTED users pass (~500/min)

LAYER 3: APPLICATION (Kubernetes)
  • Catalog Service
  • Session Service
  • Cart Service
  • Inventory Service
  • Payment Service

LAYER 4: DATA
  • Redis: sessions, cache
  • Kafka: events, queues
  • PostgreSQL: inventory, transactions

PART 2: The Technical Core

4. The Virtual Queue: Much more than a number going down

❓ What is a Virtual Waiting Room really?

The virtual queue is NOT a traditional FIFO queue. It's an admission control system that acts as a pressure buffer between massive demand and the system's actual capacity.

🎉 The nightclub analogy

❌ WITHOUT doorman
  • Everyone pushes at once
  • The door collapses
  • No one gets in
  • Total chaos
✅ WITH doorman
  • Forms an orderly queue
  • Lets people in 10 at a time
  • When someone leaves, another enters
  • The club works!

The doorman is the Virtual Waiting Room. The club is the ticketing system.

📈 Why does your position sometimes GO UP?

REASON 1: Anti-bot verification

Detected bots are removed or sent to the back of the queue, and the recount that follows shifts every remaining user's displayed position.

REASON 2: Users re-queued for inactivity

Users who close the tab and return are re-queued at the end.

REASON 3: Batch admission

Between admission batches, new users keep joining, and the queue is periodically re-ranked (it is not strictly FIFO), so your displayed position can rise even while you wait.

REASON 4: Priority queues activated

Santander customers entered a priority queue ahead of the general one, pushing normal users further back.

🌊 The Hydroelectric Dam Analogy

                         DEMAND (a raging river)
                               │
                               │  400,000 users/second
                               ▼
                ┌──────────────────────────────────┐
                │                                  │
                │     VIRTUAL WAITING ROOM         │
                │         (the dam)                │
                │                                  │
                │   ████████████████████████████   │
                │   ████████████████████████████   │  ← Users waiting
                │   ████████████████████████████   │    (dammed water)
                │   ████████████████████████████   │
                │                                  │
                └──────────────┬───────────────────┘
                               │
                          ┌────┴────┐
                          │  VALVE  │  ← Rate limiter
                          │         │    (500 users/minute)
                          └────┬────┘
                               │
                               ▼
                ┌──────────────────────────────────┐
                │                                  │
                │         SALES SYSTEM             │
                │     (turbine/generator)          │
                │                                  │
                │   Capacity: 500 concurrent       │
                │   users buying                   │
                │                                  │
                │   If more water flows in than    │
                │   it can process: COLLAPSE       │
                │                                  │
                └──────────────────────────────────┘

The dam is NOT the problem. The dam is THE SOLUTION.
Without it, the system would collapse immediately.

🐍 Queue Algorithm in Code

"""
Pseudocode for the virtual queue algorithm.
Based on common patterns (Queue-it, Cloudflare Waiting Room, etc.)
"""
import redis
import time
import uuid

class VirtualQueueManager:
    """Virtual queue manager for high-demand events."""

    def __init__(self, redis_cluster, event_id: str):
        self.redis = redis_cluster
        self.event_id = event_id
        self.admission_rate = 500  # users/minute
        self.max_in_store = 5000   # max simultaneous users in the store

    def enqueue_user(self, user_fingerprint: str, ip_hash: str) -> dict:
        """Adds a user to the queue."""

        # Generate a unique token
        queue_token = f"qt_{uuid.uuid4().hex[:16]}"
        entry_time = time.time()

        # Add to a Sorted Set (score = timestamp)
        # ZADD queue:rosalia_lux_2025 1733738400.123 "qt_abc123"
        self.redis.zadd(
            f"queue:{self.event_id}",
            {queue_token: entry_time}
        )

        # Store user metadata
        self.redis.hset(f"user:{queue_token}", mapping={
            "entry_time": entry_time,
            "fingerprint": user_fingerprint,
            "status": "WAITING",
            "verified": "false"
        })

        # Compute the 1-based position
        position = self.redis.zrank(f"queue:{self.event_id}", queue_token) + 1

        return {
            "token": queue_token,
            "position": position,
            "estimated_wait": self._calculate_wait(position)
        }

    def get_position(self, queue_token: str) -> dict:
        """Returns the current position (polled via WebSocket every 1-5s)."""
        rank = self.redis.zrank(f"queue:{self.event_id}", queue_token)
        if rank is None:
            return {"status": "ADMITTED"}  # Already admitted to the store

        return {
            "position": rank + 1,
            "total": self.redis.zcard(f"queue:{self.event_id}"),
            "estimated_wait": self._calculate_wait(rank + 1)
        }

    def _calculate_wait(self, position: int) -> int:
        """Estimated wait in minutes, derived from the admission rate."""
        return max(0, round(position / self.admission_rate))
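
The sketch above enqueues users and reports positions, but never admits anyone. Here is a minimal, assumed version of the missing admission step; ZPOPMIN, the status flip, and the queue:<event_id>:admitted channel name are illustrative choices consistent with the Redis layout shown in the next section, not a vendor implementation.

# Hypothetical batch-admission job to pair with VirtualQueueManager.
# Would run on a schedule (e.g., every few seconds).
import json
import redis

def admit_batch(r: redis.Redis, event_id: str, batch_size: int = 50) -> int:
    """Atomically admit the oldest `batch_size` users and notify them."""
    # ZPOPMIN removes and returns the members with the lowest scores,
    # i.e., the earliest entry timestamps.
    admitted = r.zpopmin(f"queue:{event_id}", batch_size)
    tokens = [t.decode() if isinstance(t, bytes) else t for t, _ in admitted]

    pipe = r.pipeline()
    for token in tokens:
        pipe.hset(f"user:{token}", "status", "ADMITTED")
    pipe.publish(f"queue:{event_id}:admitted", json.dumps({"tokens": tokens}))
    pipe.execute()
    return len(tokens)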

🔴 Redis Data Structure

# ═══════════════════════════════════════════════════════════════════════
# REDIS DATA STRUCTURES FOR THE VIRTUAL QUEUE
# ═══════════════════════════════════════════════════════════════════════

# SORTED SET: main queue, ordered by timestamp
ZADD queue:rosalia_lux_2025 1733738400.123456 "qt_a1b2c3d4e5f6g7h8"
ZADD queue:rosalia_lux_2025 1733738400.123457 "qt_i9j0k1l2m3n4o5p6"
# ... x 400,000 users

# Typical queries:
ZRANK queue:rosalia_lux_2025 "qt_a1b2c3d4e5f6g7h8"  # → Position
ZCARD queue:rosalia_lux_2025                        # → Total in queue
ZRANGE queue:rosalia_lux_2025 0 49                  # → First 50

# HASH: per-user metadata
HSET user:qt_a1b2c3d4e5f6g7h8
    entry_time "1733738400.123456"
    device_fingerprint "fp_chrome_win10_1920x1080"
    status "WAITING"              # WAITING, ADMITTED, PURCHASED
    verified "true"
    priority "0"                  # 0=normal, 1=presale, 2=VIP

EXPIRE user:qt_a1b2c3d4e5f6g7h8 14400  # TTL 4 hours

# STRINGS: real-time counters
SET stats:rosalia_lux_2025:total_admitted "45000"
SET stats:rosalia_lux_2025:current_in_store "487"
SET stats:rosalia_lux_2025:admission_rate "500"

# PUB/SUB: notifications for WebSockets
PUBLISH queue:rosalia_lux_2025:updates '{
    "type": "POSITION_UPDATE",
    "batch": [
        {"token": "qt_abc...", "position": 45001, "wait_min": 85},
        {"token": "qt_def...", "position": 45002, "wait_min": 85}
    ]
}'

5. The Inventory Database

The inventory database is where the NO-OVERSELLING guarantee happens. Every seat reservation must be atomic - it either completes fully or not at all.

SEAT STATE MACHINE:

    ┌───────────────┐
    │   AVAILABLE   │
    └───────┬───────┘
            │ add to cart
            ▼
    ┌───────────────┐
    │  SOFT_LOCKED  │ ← TTL: 10 min
    │   (in cart)   │   If it expires → back to AVAILABLE
    └───────┬───────┘
            │ checkout
            ▼
    ┌───────────────┐
    │  HARD_LOCKED  │ ← TTL: 15 min
    │   (paying)    │   If payment fails → AVAILABLE
    └───────┬───────┘
            │ payment OK
            ▼
    ┌───────────────┐
    │     SOLD      │ ← FINAL - never goes back
    └───────────────┘

🐘 PostgreSQL Database Schema

-- Main seats table
CREATE TABLE seats (
    seat_id VARCHAR(50) PRIMARY KEY,
    event_id VARCHAR(36) NOT NULL,
    section VARCHAR(50) NOT NULL,
    row_name VARCHAR(10) NOT NULL,
    seat_number INT NOT NULL,
    base_price DECIMAL(10,2) NOT NULL,
    
    -- Availability status
    status VARCHAR(20) NOT NULL DEFAULT 'AVAILABLE',
    -- AVAILABLE, SOFT_LOCKED, HARD_LOCKED, SOLD
    
    -- Lock information
    locked_by VARCHAR(100),
    locked_until TIMESTAMP WITH TIME ZONE,
    
    -- Sale information
    sold_to VARCHAR(100),
    sold_at TIMESTAMP WITH TIME ZONE,
    order_id VARCHAR(36),
    
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
    version INT DEFAULT 0
);

-- Indexes optimized for high-frequency queries
CREATE INDEX idx_seats_event_status ON seats(event_id, status);
CREATE INDEX idx_seats_locked_until ON seats(locked_until) 
    WHERE status IN ('SOFT_LOCKED', 'HARD_LOCKED');

-- Transactions table
CREATE TABLE transactions (
    transaction_id VARCHAR(36) PRIMARY KEY,
    order_id VARCHAR(36) NOT NULL,
    session_id VARCHAR(100) NOT NULL,
    total DECIMAL(10,2) NOT NULL,
    payment_status VARCHAR(30) NOT NULL,
    -- PENDING, AUTHORIZED, CAPTURED, FAILED, REFUNDED
    payment_intent_id VARCHAR(100),
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
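
To make that atomicity concrete, here is a minimal sketch (assumed, not the vendor's code) of the AVAILABLE → SOFT_LOCKED transition against this schema. `conn` is any DB-API connection (e.g., psycopg2); the conditional UPDATE guarantees that only one buyer can win a given seat.

# Hypothetical seat-locking helpers against the seats table above.
# The WHERE clause makes each transition atomic at the database level.

def soft_lock_seat(conn, seat_id: str, session_id: str) -> bool:
    """Try to place a 10-minute soft lock on a seat. True if we won it."""
    with conn.cursor() as cur:
        cur.execute(
            """
            UPDATE seats
            SET status = 'SOFT_LOCKED',
                locked_by = %s,
                locked_until = NOW() + INTERVAL '10 minutes',
                version = version + 1,
                updated_at = NOW()
            WHERE seat_id = %s AND status = 'AVAILABLE'
            """,
            (session_id, seat_id),
        )
        conn.commit()
        return cur.rowcount == 1  # 0 rows means another buyer got there first

def release_expired_locks(conn) -> int:
    """Background sweep: return expired locks to AVAILABLE."""
    with conn.cursor() as cur:
        cur.execute(
            """
            UPDATE seats
            SET status = 'AVAILABLE', locked_by = NULL, locked_until = NULL,
                version = version + 1, updated_at = NOW()
            WHERE status IN ('SOFT_LOCKED', 'HARD_LOCKED')
              AND locked_until < NOW()
            """
        )
        conn.commit()
        return cur.rowcount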

📊 Comparison with Other Events

┌─────────────────────────────────────────────────────────────────────────────┐
│              TICKETING EVENT COMPARISON - SPAIN 2024-2025                   │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  EVENT                    │ DATES  │ TICKETS    │ SOLD OUT    │ MAX QUEUE   │
│  ─────────────────────────────────────────────────────────────────────────  │
│  Bad Bunny (May 2025)     │  12    │ ~200,000   │ Days        │ 500,000+    │
│  ─────────────────────────────────────────────────────────────────────────  │
│  Rosalía LUX Tour         │   8    │ ~142,000   │ ~2 hours    │ 400,000+    │
│  ─────────────────────────────────────────────────────────────────────────  │
│  Taylor Swift Eras        │   2    │ ~100,000   │ Minutes     │ 800,000+    │
│  (Madrid 2024)            │        │            │ (presale)   │             │
│                                                                             │
│  GLOBAL TICKETMASTER CONTEXT:                                               │
│  • Tickets sold annually:             500+ million                          │
│  • Unique annual visitors:            1,000+ million                        │
│  • Users in database:                 560+ million (2024 breach figure)     │
│  • Countries of operation:            19                                    │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘

6. Rate Limiting and Flow Control

Rate limiting acts as a valve that controls how many requests actually reach the backend. Without it, the system would collapse instantly.

🚿 The Faucet and Bathtub Metaphor

WITHOUT RATE LIMITING:

   ████████████████████████████████████████
   ████████████████████████████████████████ ← 50,000 req/second
   ████████████████████████████████████████
                    │
                    ▼
            ┌──────────────┐
            │   BATHTUB    │ ← Capacity: 5,000 req/second
            │  (backend)   │
            │ ████████████ │
            │ ████████████ │ ← OVERFLOW! COLLAPSE!
            │ ████████████ │
       ═════╧══════════════╧═════

WITH RATE LIMITING:

   ████████████████████████████████████████
   ████████████████████████████████████████ ← 50,000 req/second
   ████████████████████████████████████████
                    │
                    ▼
            ┌──────────────┐
            │    VALVE     │ ← Rate limiter: max 5,000 req/s
            │  (control)   │   The rest: 429 Too Many Requests
            └──────┬───────┘
                   │
                   ▼ (controlled flow)
            ┌──────────────┐
            │   BATHTUB    │ ← Receives exactly what it can handle
            │  (backend)   │
            │   ░░░░░░░░   │ ← Optimal level, no overflow
            └──────────────┘

🧮 The 3 Most Used Rate Limiting Algorithms

🪣 1. TOKEN BUCKET (most used for APIs)

Imagine a bucket with tokens. Each request consumes 1 token. The bucket refills at a constant rate (e.g., 10 tokens/second). If the bucket is empty, the request is rejected (429).

Capacity: 100 tokens    ████████░░░░░
Current tokens: 40           ▲
Refill: 10 tokens/sec        │ Constant refill
If bucket empty → 429

✅ Allows short bursts, O(1) per operation

📊 2. SLIDING WINDOW LOG (for abuse detection)

Keeps a log of timestamps for each request. Counts how many requests are in the last N seconds.

Window: last 60 seconds | Limit: 100 requests
t=0    t=20   t=40   t=60   t=80   t=100
├──────┼──────┼──────┼──────┼──────┤
│      │██████████████████│      │ = 100 req
       ◄───── active window ────►

✅ Very precise, ideal for abuse detection
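
A common way to sketch this pattern is a Redis sorted set used as the log, with timestamps as scores. The key name, limit, and window below are illustrative assumptions, not part of the article's stack.

# Hypothetical sliding-window-log limiter backed by a Redis sorted set.
# Every request is logged with its timestamp as the score; entries older
# than the window are trimmed before counting.
import time
import uuid
import redis

def allow_request(r: redis.Redis, client_id: str,
                  limit: int = 100, window_s: int = 60) -> bool:
    key = f"ratelimit:{client_id}"
    now = time.time()
    pipe = r.pipeline()
    pipe.zremrangebyscore(key, 0, now - window_s)           # drop expired entries
    pipe.zadd(key, {f"{now}:{uuid.uuid4().hex[:8]}": now})  # log this request
    pipe.zcard(key)                                         # count within window
    pipe.expire(key, window_s)                              # garbage-collect idle keys
    _, _, count, _ = pipe.execute()
    # Note: rejected requests stay logged here, a common simplification.
    return count <= limit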

💧 3. LEAKY BUCKET (for constant rate processing)

Unlike Token Bucket: processes at CONSTANT rate, not variable. Ideal for checkout/payments where we want predictable flow.

Variable input ──►  ┌─────────┐  ──► CONSTANT output
(demand peaks)      │ ░░░░░░░ │      (500 payments/min)
                    │ ░░░░░░░ │      always the same
                    └────┬────┘
                         ▼
                    ═════════ (constant drip)
✅ The PSP (e.g., Stripe) enforces strict rate limits; a leaky bucket keeps checkout traffic within them.
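
A leaky bucket can be as simple as a worker draining a queue at a fixed pace. A toy sketch, where the Redis list name, the rate, and process_payment are all illustrative assumptions:

# Hypothetical leaky-bucket worker: payment jobs queue up at any rate,
# but are drained at a constant pace regardless of demand spikes.
import time
import redis

def process_payment(job: bytes) -> None:
    ...  # hand the job to the PSP here (hypothetical)

def payment_drain_loop(r: redis.Redis, rate_per_min: int = 500) -> None:
    interval = 60.0 / rate_per_min        # seconds between "drips"
    while True:
        job = r.lpop("payments:pending")  # variable input, FIFO order
        if job is not None:
            process_payment(job)          # constant output rate
        time.sleep(interval)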

🔴 Token Bucket - Redis Lua Script

-- TOKEN BUCKET - Redis Lua script (atomic)
local key = KEYS[1]
local capacity = tonumber(ARGV[1])     -- e.g., 100
local refill_rate = tonumber(ARGV[2])  -- tokens/second
local now = tonumber(ARGV[3])          -- current time in milliseconds

local data = redis.call("HMGET", key, "tokens", "last_refill")
local tokens = tonumber(data[1]) or capacity
local elapsed = (now - (tonumber(data[2]) or now)) / 1000
local new_tokens = math.min(capacity, tokens + elapsed * refill_rate)

if new_tokens >= 1 then
    redis.call("HMSET", key, "tokens", new_tokens - 1, "last_refill", now)
    return {1, new_tokens - 1, 0}  -- [allowed, remaining, wait=0]
else
    return {0, new_tokens, ((1 - new_tokens) / refill_rate) * 1000}  -- [rejected, remaining, wait_ms]
end
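
Invoking it from redis-py could look like this; a sketch, where token_bucket_lua is assumed to hold the Lua source above and the bucket key naming is illustrative:

# Hypothetical redis-py caller for the Lua script above.
# register_script loads it once and reuses EVALSHA under the hood.
import time
import redis

token_bucket_lua = """ ...paste the Lua script above here... """

r = redis.Redis()
token_bucket = r.register_script(token_bucket_lua)

def is_allowed(user_id: str, capacity: int = 100, refill_rate: int = 10) -> bool:
    """Returns True if the user still has a token available."""
    now_ms = int(time.time() * 1000)
    allowed, _remaining, _wait_ms = token_bucket(
        keys=[f"bucket:{user_id}"],
        args=[capacity, refill_rate, now_ms],
    )
    return allowed == 1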

🪣 Token Bucket at a Glance

  • 🪣 Bucket with tokens (capacity: 100 tokens)
  • ⏱️ Refills over time (10 tokens/second)
  • 📨 Each request = 1 token; if the bucket is empty → 429 error

7. WebSockets: Millions of Connections

How do you keep 400,000+ users informed of their queue position in real-time? Traditional HTTP polling would mean 133,000 requests/second just to show positions. The solution: WebSockets.

❌ HTTP Polling

  • 400K users × 1 req/3s = 133K req/s
  • Massive overhead
  • High latency
  • Unsustainable

✅ WebSockets

  • 1 persistent connection per user
  • Server pushes updates
  • Minimal overhead
  • Low latency (~100ms)

🔌 WebSocket Architecture for 400,000+ Connections

┌─────────────────────────────────────────────────────────────────────┐
│                             USERS                                   │
│   📱💻📱💻📱💻📱💻📱💻📱💻📱💻📱💻📱💻📱💻📱💻📱💻📱💻          │
│   (400,000+ simultaneous WebSocket connections)                     │
└─────────────────────────────────────────────────────────────────────┘
                                   │
                                   │ wss:// (WebSocket Secure)
                                   ▼
┌─────────────────────────────────────────────────────────────────────┐
│                      LOAD BALANCER (L4 - TCP)                       │
│   • Sticky sessions by IP hash                                      │
│   • TCP health checks every 5s                                      │
│   • Connection draining on deploys                                  │
└─────────────────────────────────────────────────────────────────────┘
              │         │         │         │         │
              ▼         ▼         ▼         ▼         ▼
        ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
        │ WS Node │ │ WS Node │ │ WS Node │ │ WS Node │ │ WS Node │ ...x40
        │    1    │ │    2    │ │    3    │ │    4    │ │    N    │
        │10K conn │ │10K conn │ │10K conn │ │10K conn │ │10K conn │
        │32GB RAM │ │32GB RAM │ │32GB RAM │ │32GB RAM │ │32GB RAM │
        └────┬────┘ └────┬────┘ └────┬────┘ └────┬────┘ └────┬────┘
             │           │           │           │           │
             └───────────┴───────────┴───────────┴───────────┘
                                   │
                                   ▼
        ┌─────────────────────────────────────────────────────────────┐
        │                      REDIS PUB/SUB                          │
        │                                                             │
        │   Channels:                                                 │
        │   ├─ queue:rosalia_lux:positions  → Position broadcasts     │
        │   ├─ queue:rosalia_lux:admitted   → Admitted users          │
        │   └─ queue:rosalia_lux:alerts     → System messages         │
        │                                                             │
        │   Throughput: ~50,000 messages/second at peak               │
        │   End-to-end latency: < 100ms                               │
        └─────────────────────────────────────────────────────────────┘
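
A stripped-down sketch of one WS node under these assumptions: the websockets and redis.asyncio libraries, plus the channel and payload format from the diagram above. None of this is the actual vendor stack.

# Hypothetical WS node: bridges Redis pub/sub to connected browsers.
import asyncio
import json
import redis.asyncio as aioredis
import websockets

CONNECTIONS = {}  # queue_token -> open WebSocket

async def handle_client(ws):
    """Each client sends its queue token on connect, then just listens."""
    token = json.loads(await ws.recv())["token"]
    CONNECTIONS[token] = ws
    try:
        await ws.wait_closed()  # keep the connection open until the client leaves
    finally:
        CONNECTIONS.pop(token, None)

async def relay_positions():
    """Fan out position updates from Redis to the right sockets."""
    r = aioredis.Redis()
    pubsub = r.pubsub()
    await pubsub.subscribe("queue:rosalia_lux:positions")
    async for msg in pubsub.listen():
        if msg["type"] != "message":
            continue
        # Payload format matches the PUBLISH example in section 4.
        for update in json.loads(msg["data"])["batch"]:
            ws = CONNECTIONS.get(update["token"])
            if ws is not None:
                await ws.send(json.dumps(update))

async def main():
    async with websockets.serve(handle_client, "0.0.0.0", 8080):
        await relay_positions()

if __name__ == "__main__":
    asyncio.run(main())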

PART 3: Resilience and Conclusions

8. Circuit Breakers and Resilience

What happens when one service fails? Without protection, the failure cascades and takes down the entire system. Circuit Breakers prevent this.

💥 Cascade Failures

❌ WITHOUT Circuit Breaker

[Cart] ──► [Inventory] ──► [DB] 💀 SLOW
   │           │
   │  30s timeouts (blocked threads)
   ▼
[Cart] 💀 SATURATED
   │
   ▼
[API Gateway] 💀
[Queue Service] 💀
[Payment] 💀

RESULT: ENTIRE SYSTEM DOWN

✅ WITH Circuit Breaker

[Cart] ──► [CIRCUIT BREAKER] ──✗──► [Inventory]
              │
              │ OPEN: fail-fast
              ▼
Return: {"error": "SERVICE_UNAVAILABLE"}

[Queue Service] ✅
[Payment Service] ✅
[Catalog Service] ✅

REST OF SYSTEM WORKS

CIRCUIT BREAKER STATES:

   ┌──────────────┐     5 failures     ┌──────────────┐
   │    CLOSED    │ ─────────────────► │     OPEN     │
   │   (normal)   │                    │  (cut off)   │
   └──────────────┘                    └──────┬───────┘
           ▲                                  │ after 30s
           │ test OK                          ▼
           │                   ┌──────────────┐
           └───────────────────│  HALF-OPEN   │
                               │  (testing)   │── if it fails ──► back to OPEN
                               └──────────────┘
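
A toy implementation of that state machine, using the thresholds from the diagram (5 failures to open, 30 seconds before probing); everything else is illustrative:

# Minimal circuit breaker sketch implementing the states above.
import time

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 5, reset_timeout: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.state = "CLOSED"
        self.opened_at = 0.0

    def call(self, fn, *args, **kwargs):
        if self.state == "OPEN":
            if time.time() - self.opened_at >= self.reset_timeout:
                self.state = "HALF_OPEN"  # allow one probe request through
            else:
                raise RuntimeError("SERVICE_UNAVAILABLE")  # fail fast
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.state == "HALF_OPEN" or self.failures >= self.failure_threshold:
                self.state = "OPEN"       # cut off traffic (again)
                self.opened_at = time.time()
            raise
        else:
            self.failures = 0
            self.state = "CLOSED"         # probe succeeded, resume normal flow
            return result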

9. The Fundamental Trade-off

In high-demand ticketing systems, there are two conflicting objectives:

👤 User Experience

  • Fast responses (<500ms)
  • No waiting queues
  • Fluid, responsive UI
  • No errors

💰 Transactional Integrity

  • Zero overselling
  • All transactions complete
  • 100% inventory consistency
  • Secure payments

The industry chose:

TRANSACTIONAL INTEGRITY

At the cost of 2+ hour queues and frustrated users

10. Technologies Used

  • ☁️ AWS: EC2, EKS, RDS, ElastiCache
  • Fastly: CDN, Edge, DDoS protection
  • 🔴 Redis: cache, sessions, queues
  • 📨 Kafka: event streaming
  • 🐘 PostgreSQL: inventory, transactions
  • ☸️ Kubernetes: container orchestration

11. Conclusion

Rosalía's LUX Tour ticketing system did exactly what it was designed to do: sell all 142,000 tickets without selling a single seat twice, while processing every payment correctly.

┌─────────────────────────────────────────────────────────────────────┐
│                 SUMMARY: LUX TOUR ROSALÍA 2025                      │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│  WHAT THE SYSTEM ACHIEVED                                           │
│  ────────────────────────────────────────────────────────────────  │
│  ✅ 142,000 tickets sold without overselling                        │
│  ✅ All transactions completed correctly                            │
│  ✅ 100% inventory consistency                                      │
│  ✅ System operational throughout the event                         │
│                                                                     │
│  WHAT THE SYSTEM SACRIFICED                                         │
│  ────────────────────────────────────────────────────────────────  │
│  ⚠️ User experience (hours-long queues)                             │
│  ⚠️ Response speed (503 errors during peaks)                        │
│  ⚠️ Public perception (mass frustration)                            │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘

💭 The final reflection

The 2-hour virtual queue you saw during Rosalía's LUX Tour was NOT a failure. It was a FEATURE.

It's the system saying: "There are 400,000 people. There are only 142,000 tickets. We're going to process this in an orderly fashion so that every sale is valid."

The virtual queue is not the problem. It's the solution.

Need high-availability architecture?

If your business needs systems that handle extreme demand spikes without failing, this is exactly what we do.

Let's talk about your project
