Documentation Index
Fetch the complete documentation index at: https://maps.solvice.io/llms.txt
Use this file to discover all available pages before exploring further.
Solvice Maps: Technical Architecture
Architecture Overview
Solvice Maps implements a sophisticated, cloud-native microservices architecture designed for high performance, scalability, and reliability. The system combines multiple routing engines with intelligent request processing, caching, and optimization to deliver sub-50ms routing calculations at enterprise scale.
System Architecture Diagram
┌─────────────┐    ┌───────────────┐    ┌─────────────┐
│ Client Apps │────│ Load Balancer │────│ API Gateway │
└─────────────┘    └───────────────┘    └──────┬──────┘
                                               │
                          ┌──────────────────┬─┴──────────────────┐
                          │                  │                    │
                   ┌──────▼───────┐   ┌──────▼───────┐   ┌────────▼────────┐
                   │ MapR Gateway │   │ OSM Service  │   │  External APIs  │
                   │  (Quarkus)   │   │  (Node.js)   │   │ (TomTom/Google) │
                   └──────┬───────┘   └──────┬───────┘   └─────────────────┘
                          │                  │
       ┌──────────────────┼──────────────────┼──────────────────┐
       │                  │                  │                  │
┌──────▼───────┐   ┌──────▼───────┐   ┌──────▼───────┐   ┌──────▼───────┐
│  PostgreSQL  │   │   Pub/Sub    │   │   Storage    │   │  Monitoring  │
│   Database   │   │   (Events)   │   │  (Results)   │   │  & Metrics   │
└──────────────┘   └──────────────┘   └──────────────┘   └──────────────┘
Core Components
1. MapR Gateway (Primary API Service)
Technology Stack:
- Framework: Quarkus (Java/Kotlin)
- Language: Kotlin 1.9 with JVM 17
- Database: PostgreSQL with Hibernate ORM
- Authentication: JWT (HS256) tokens
- Messaging: Google Cloud Pub/Sub
Responsibilities:
- Primary API endpoint for all routing requests
- Request validation, authentication, and rate limiting
- Intelligent request splitting for large matrices
- Content-based caching and deduplication
- Multi-engine routing coordination
- Result aggregation and response formatting
Key Architectural Patterns:
- Layered Architecture: Controllers → Services → Repositories
- Event-Driven Processing: Pub/Sub for asynchronous operations
- Proxy Pattern: Dynamic client creation for external APIs
- Circuit Breaker: Fault tolerance for external services
2. OSRM Integration Service
Technology Stack:
- Framework: NestJS (Node.js/TypeScript)
- Runtime: Node.js 22 with clustering
- OSRM: Native C++ bindings (Project-OSRM)
- Load Balancing: Weighted round-robin
- Deployment: Google Kubernetes Engine
Responsibilities:
- Direct integration with OSRM routing engines
- Traffic slice management (time-dependent routing)
- Interpolation for smooth traffic transitions
- High-performance routing calculations
- Pub/Sub message processing for batch operations
Advanced Features:
- Multi-Instance Management: Load multiple OSRM instances per region
- Traffic Slice Interpolation: Decimal slice support (e.g., 2.3, 4.7)
- Memory Optimization: Memory-mapped files for OSRM data
- Batch Processing: Configurable concurrency limits
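The weighted round-robin balancing mentioned above can be sketched as a smooth weighted selector; the `OsrmInstance` shape and the weights are illustrative assumptions, not the actual service types:

```typescript
// Hypothetical instance descriptor -- field names are assumptions for illustration.
interface OsrmInstance {
  url: string;
  weight: number; // relative share of traffic this instance should receive
}

// Smooth weighted round-robin: on each pick, every instance earns its weight
// as "credit"; the instance with the most credit is chosen and pays back the
// total weight, spreading picks evenly over time.
class WeightedRoundRobin {
  private credit: number[];

  constructor(private instances: OsrmInstance[]) {
    this.credit = instances.map(() => 0);
  }

  next(): OsrmInstance {
    let totalWeight = 0;
    for (const instance of this.instances) {
      totalWeight += instance.weight;
    }
    let best = 0;
    for (let i = 0; i < this.instances.length; i++) {
      this.credit[i] += this.instances[i].weight;
      if (this.credit[i] > this.credit[best]) best = i;
    }
    this.credit[best] -= totalWeight;
    return this.instances[best];
  }
}
```

With weights 2:1, three consecutive picks interleave the instances (a, b, a) rather than bursting all traffic to the heavier one first.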
3. Infrastructure Layer
Cloud Platform: Google Cloud Platform (GCP)
- Compute: Google Compute Engine with Container-Optimized OS
- Container Orchestration: Google Kubernetes Engine (GKE)
- Load Balancing: Global HTTP(S) Load Balancer
- Storage: Cloud Storage for large results
- Messaging: Cloud Pub/Sub for event processing
- Monitoring: Cloud Monitoring with custom metrics
Data Flow Architecture
1. Synchronous Request Flow (Routes, Small Tables)
1. Client Request → API Gateway (Auth/Validation)
2. Gateway → MapR Gateway (Request Processing)
3. MapR Gateway → Routing Engine Selection
4. Engine Processing → Response Generation
5. Response → Gateway → Client
Timeline: 10-50ms end-to-end
2. Asynchronous Request Flow (Large Tables, Cubes)
1. Client Request → MapR Gateway (Validation)
2. Gateway → Request Splitting (if needed)
3. Child Requests → Pub/Sub Publishing
4. Pub/Sub → OSRM Service Processing
5. Results → Cloud Storage
6. Completion Event → Response Aggregation
7. Client Polling/Webhook → Final Results
Timeline: 30 seconds to 10+ minutes depending on size
3. Caching and Optimization Flow
1. Request → Hash Generation (Content-based)
2. Cache Lookup → PostgreSQL
3. Cache Hit → Direct Response (sub-10ms)
4. Cache Miss → Engine Processing → Cache Store
5. Future Identical Requests → Cache Hit
Cache Hit Rate: 60-80% for typical workloads
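The caching flow above can be sketched end to end. SHA-256 hashing uses Node's built-in `crypto` module; the in-memory `Map` stands in for the PostgreSQL cache table, and the function names are illustrative:

```typescript
import { createHash } from "crypto";

// In-memory stand-in for the PostgreSQL cache table.
const cache = new Map<string, number[][]>();

// Content-based key: identical requests hash to the same value,
// regardless of coordinate ordering.
function cacheKey(sources: string[], destinations: string[], profile: string): string {
  const content = [
    sources.slice().sort().join(","),
    destinations.slice().sort().join(","),
    profile,
  ].join("|");
  return createHash("sha256").update(content).digest("hex");
}

// Lookup-or-compute: a cache hit returns immediately; a miss computes,
// stores, and serves future identical requests from the cache.
function getTable(
  sources: string[],
  destinations: string[],
  profile: string,
  compute: () => number[][],
): { result: number[][]; hit: boolean } {
  const key = cacheKey(sources, destinations, profile);
  const cached = cache.get(key);
  if (cached) return { result: cached, hit: true };
  const result = compute();
  cache.set(key, result);
  return { result, hit: false };
}
```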
Routing Engine Integration
Engine Architecture
Multi-Engine Support:
interface RoutingEngine {
    fun calculateRoute(request: RouteRequest): RouteResponse
    fun calculateTable(request: TableRequest): TableResponse
    fun isAvailable(): Boolean
    fun getRegionSupport(): List<Region>
}
Implemented Engines:
- OSM/OSRM: Self-hosted, high-performance, free
- TomTom: Commercial API with real-time traffic
- AnyMap: European-focused routing service
- Google Maps: Global coverage with comprehensive data
Engine Selection Logic
Automatic Engine Selection:
fun selectEngine(request: RoutingRequest): RoutingEngine {
    val region = detectRegion(request.coordinates)
    return getAvailableEngines(region)
        .filter { it.isAvailable() }
        .minByOrNull { it.getPriority() }
        ?: error("No routing engine available for region $region")
}
Selection Criteria:
- Geographic Coverage: Engine support for request region
- Request Size: Engine limits and capabilities
- Performance Requirements: Response time vs. accuracy
- Cost Optimization: Usage-based routing decisions
Traffic Slice Management
Time-Dependent Routing:
- Integer Slices (0-12): Direct OSRM instance calls
- Decimal Slices (2.3, 4.7): Linear interpolation between adjacent slices
- Traffic Patterns: Different profiles for weekdays vs. weekends
- Real-time Selection: Current time-based slice selection
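As a sketch of real-time slice selection, assuming for illustration that slices 0-12 cover the day in 2-hour steps (so 04:36 maps to the decimal slice 2.3); the actual slice layout and weekday/weekend profiles are configuration not shown here:

```typescript
// Assumption for illustration only: slice i starts at hour 2*i, so the
// decimal slice is simply hours-since-midnight divided by 2, capped at
// the last slice. The real mapping may differ per traffic profile.
function timeToSlice(date: Date, maxSlice = 12): number {
  const hours = date.getHours() + date.getMinutes() / 60;
  return Math.min(hours / 2, maxSlice);
}
```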
Interpolation Algorithm:
function interpolateSlice(slice: number, lowerResult: Result, upperResult: Result): Result {
    const weight = slice % 1; // Decimal portion
    return {
        duration: lowerResult.duration * (1 - weight) + upperResult.duration * weight,
        distance: lowerResult.distance * (1 - weight) + upperResult.distance * weight,
        geometry: lowerResult.geometry // Use lower slice geometry
    };
}
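Worked through with concrete numbers (the function is restated so this snippet runs standalone, with an assumed minimal `Result` shape): a slice of 2.3 weights the lower slice at 0.7 and the upper at 0.3:

```typescript
// Assumed minimal result shape for illustration.
interface Result { duration: number; distance: number; geometry: string }

// Same interpolation as above, restated so this snippet is self-contained.
function interpolateSlice(slice: number, lowerResult: Result, upperResult: Result): Result {
  const weight = slice % 1; // decimal portion, e.g. 0.3 for slice 2.3
  return {
    duration: lowerResult.duration * (1 - weight) + upperResult.duration * weight,
    distance: lowerResult.distance * (1 - weight) + upperResult.distance * weight,
    geometry: lowerResult.geometry, // lower slice geometry is reused as-is
  };
}

// Slice 2.3: 70% of slice 2's result, 30% of slice 3's.
const blended = interpolateSlice(
  2.3,
  { duration: 100, distance: 1000, geometry: "poly2" },
  { duration: 200, distance: 2000, geometry: "poly3" },
);
// blended.duration is ~130, blended.distance is ~1300, geometry stays "poly2"
```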
Database Architecture
Schema Design
Core Entities:
-- Main request tracking
CREATE TABLE table_requests (
    id         BIGSERIAL PRIMARY KEY,
    hash       VARCHAR(64) UNIQUE,                    -- Content-based caching
    parent_id  BIGINT REFERENCES table_requests(id),  -- Request splitting
    status     VARCHAR(20),                           -- IN_PROGRESS, SUCCEEDED, FAILED
    created_at TIMESTAMP,
    updated_at TIMESTAMP
);

-- Cube (time-dependent) requests
CREATE TABLE cubes (
    id             BIGSERIAL PRIMARY KEY,
    hash           VARCHAR(64) UNIQUE,
    nr_time_slices INTEGER,
    status         VARCHAR(20),
    created_at     TIMESTAMP
);

-- External storage references
CREATE TABLE table_responses (
    id           BIGSERIAL PRIMARY KEY,
    table_id     BIGINT REFERENCES table_requests(id),
    storage_path VARCHAR(255),  -- Cloud Storage path
    size_bytes   BIGINT
);
Indexing Strategy:
- Hash-based lookups: B-tree index on content hash
- Status queries: Index on (status, created_at)
- Hierarchical queries: Index on parent_id for request splitting
Data Storage Strategy
Hot Data (PostgreSQL):
- Request metadata and status
- Small responses (< 1MB)
- User authentication and rate limiting data
Cold Data (Cloud Storage):
- Large matrix results (> 1MB)
- Binary OSRM data files
- Historical analytics data
Response Time SLAs:
- Simple Routes: < 50ms P95
- Small Tables (< 100 coords): < 100ms P95
- Large Tables: Asynchronous processing
- API Overhead: < 10ms for cached responses
Optimization Strategies
1. Request Splitting:
fun splitLargeRequest(request: TableRequest): List<TableRequest> {
    val engineLimit = getEngineLimit(request.engine)
    val sourceBatches = request.sources.chunked(engineLimit.sources)
    val destBatches = request.destinations.chunked(engineLimit.destinations)
    return sourceBatches.flatMap { sources ->
        destBatches.map { destinations ->
            request.copy(sources = sources, destinations = destinations)
        }
    }
}
2. Content-Based Caching:
fun generateCacheKey(request: TableRequest): String {
    val content = listOf(
        request.sources.sorted(),
        request.destinations.sorted(),
        request.engine,
        request.profile
    ).joinToString("|")
    return SHA256.hash(content)
}
3. Parallel Processing:
suspend fun processTableRequests(requests: List<TableRequest>): List<TableResponse> =
    coroutineScope {
        requests.map { request ->
            async { processRequest(request) }
        }.awaitAll()
    }
Scalability Architecture
Horizontal Scaling
Stateless Services:
- All services designed for horizontal scaling
- No local state storage
- Session data in external stores (PostgreSQL, Redis)
Auto-Scaling Triggers:
- CPU utilization > 70%
- Request queue depth > 100
- Response time P95 > SLA threshold
Load Distribution:
# Kubernetes HPA Configuration
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: mapr-gateway-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mapr-gateway
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
Event-Driven Scaling
Pub/Sub Message Processing:
- Dynamic subscription scaling based on queue depth
- Weighted round-robin message distribution
- Dead letter queues for failed processing
Asynchronous Processing Benefits:
- Decouples API response time from computation time
- Natural backpressure handling
- Enables batch optimization strategies
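The backpressure idea can be sketched as a bounded-concurrency processor: no more than `limit` handlers run at once, so a burst of queued messages is naturally throttled. The function name and shape are illustrative, not the actual service code:

```typescript
// Process items with at most `limit` concurrent handlers, preserving the
// order of results. Remaining items wait until a slot frees up -- this is
// the natural backpressure the asynchronous flow relies on.
async function processWithConcurrency<T, R>(
  items: T[],
  limit: number,
  handler: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  // Each worker repeatedly claims the next unprocessed index.
  const worker = async () => {
    while (next < items.length) {
      const i = next++;
      results[i] = await handler(items[i]);
    }
  };
  const workers: Promise<void>[] = [];
  for (let w = 0; w < Math.min(limit, items.length); w++) {
    workers.push(worker());
  }
  await Promise.all(workers);
  return results;
}
```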
Security Architecture
Authentication & Authorization
Multi-Layer Security:
1. API Gateway → API Key Validation
2. JWT Token → Claims Validation
3. Rate Limiting → Per-key limits
4. IP Whitelisting → Enterprise customers
JWT Token Structure:
{
  "iss": "solvice-maps",
  "sub": "user-id",
  "aud": "maps-api",
  "exp": 1640995200,
  "iat": 1640991600,
  "scope": ["routing:read", "tables:write"]
}
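Validating the decoded claims can be sketched as below. This checks only the payload fields shown above and assumes HS256 signature verification has already happened upstream; the function name is illustrative:

```typescript
// Decoded JWT payload, mirroring the token structure above.
interface JwtClaims {
  iss: string;
  sub: string;
  aud: string;
  exp: number; // seconds since epoch
  iat: number;
  scope: string[];
}

// Check issuer, audience, expiry, and a required scope. Signature
// verification (HS256) is assumed to have succeeded before this runs.
function validateClaims(claims: JwtClaims, requiredScope: string, nowSeconds: number): boolean {
  if (claims.iss !== "solvice-maps") return false;
  if (claims.aud !== "maps-api") return false;
  if (claims.exp <= nowSeconds) return false;
  return claims.scope.indexOf(requiredScope) !== -1;
}
```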
Data Protection
Encryption:
- In Transit: TLS 1.3 for all API communication
- At Rest: Google Cloud Storage encryption
- Database: PostgreSQL transparent data encryption
Access Controls:
- Service Accounts: GCP IAM with minimal permissions
- Network Segmentation: VPC isolation
- Secrets Management: Google Secret Manager
Monitoring and Observability
Metrics Collection
Custom Metrics:
// OSRM service metrics (excerpt from the metrics provider class)
class OsrmMetrics {
  @Counter('osrm_requests_total', ['method', 'status', 'team'])
  osrmRequestsTotal: Counter;

  @Histogram('osrm_request_duration_seconds', ['method', 'team'])
  osrmRequestDuration: Histogram;

  @Gauge('osrm_active_connections', ['engine'])
  osrmActiveConnections: Gauge;
}
Infrastructure Metrics:
- Request throughput and latency
- Database connection pool utilization
- Memory and CPU usage per service
- External API response times and error rates
Distributed Tracing
OpenTelemetry Integration:
@WithSpan("process-table-request")
suspend fun processTableRequest(request: TableRequest): TableResponse {
    val span = Span.current()
    span.setAttribute("table.size", (request.sources.size * request.destinations.size).toLong())
    span.setAttribute("routing.engine", request.engine.toString())
    return try {
        val result = routingEngine.calculateTable(request)
        span.setStatus(StatusCode.OK)
        result
    } catch (e: Exception) {
        span.recordException(e)
        span.setStatus(StatusCode.ERROR)
        throw e
    }
}
Health Monitoring
Health Check Endpoints:
/health/live: Basic service liveness
/health/ready: Service readiness (dependencies available)
/health/engines: Routing engine status
/health/database: Database connectivity
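An endpoint like /health/ready typically aggregates individual dependency probes into one verdict; sketched below with hypothetical probe names:

```typescript
// A dependency probe resolves true when the dependency is reachable.
type HealthCheck = () => Promise<boolean>;

interface ReadinessReport {
  ready: boolean;
  details: { [name: string]: boolean };
}

// Aggregate dependency probes into a single readiness verdict. The probe
// names used by callers ("database", "engines") are illustrative.
async function readiness(checks: { [name: string]: HealthCheck }): Promise<ReadinessReport> {
  const details: { [name: string]: boolean } = {};
  let ready = true;
  for (const name of Object.keys(checks)) {
    let up: boolean;
    try {
      up = await checks[name]();
    } catch (err) {
      up = false; // a throwing probe counts as unavailable
    }
    details[name] = up;
    if (!up) ready = false;
  }
  return { ready, details };
}
```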
Error Handling and Resilience
Circuit Breaker Pattern
@Component
class RoutingEngineCircuitBreaker {
    private val circuitBreaker = CircuitBreaker.ofDefaults("routing-engine")

    fun callEngine(request: RoutingRequest): RoutingResponse {
        return circuitBreaker.executeSupplier {
            externalRoutingEngine.process(request)
        }
    }
}
Retry Strategies
Exponential Backoff:
val retryConfig = RetryConfig.custom<RoutingResponse>()
    .maxAttempts(3)
    .intervalFunction(IntervalFunction.ofExponentialBackoff(Duration.ofSeconds(1), 2.0))
    .retryOnException { it is TemporaryRoutingException }
    .build()
Graceful Degradation
Fallback Mechanisms:
- Engine Fallback: Switch to alternative routing engine
- Cached Response: Return stale cached data with warnings
- Simplified Response: Return basic distance calculations
- Error Response: Structured error with retry guidance
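The fallback order above can be sketched as a chain that tries each mechanism in turn; the function and result shapes are illustrative:

```typescript
// Illustrative result shape; `source` records which mechanism answered.
interface RouteResult { distance: number; source: string }

// Try each fallback in order (e.g. primary engine, alternative engine,
// stale cache); the first that succeeds wins. If all fail, surface a
// structured error rather than an unhandled exception.
async function withFallbacks(attempts: Array<() => Promise<RouteResult>>): Promise<RouteResult> {
  let lastError: unknown;
  for (const attempt of attempts) {
    try {
      return await attempt();
    } catch (e) {
      lastError = e; // degrade to the next mechanism
    }
  }
  throw new Error("all fallbacks exhausted: " + String(lastError));
}
```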
Data Consistency and Reliability
Event Sourcing for Request Tracking
Event Log:
sealed class TableEvent {
    data class TableRequestCreated(val tableId: Long, val request: TableRequest) : TableEvent()
    data class TableProcessingStarted(val tableId: Long, val engine: RoutingEngine) : TableEvent()
    data class TableProcessingCompleted(val tableId: Long, val response: TableResponse) : TableEvent()
    data class TableProcessingFailed(val tableId: Long, val error: String) : TableEvent()
}
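Replaying such an event log to recover a request's current status can be sketched as a fold; this is an illustration, not the production reducer, with status names mirroring the `table_requests.status` values above:

```typescript
// Simplified event shapes for illustration.
type TableEvent =
  | { kind: "created" }
  | { kind: "started" }
  | { kind: "completed" }
  | { kind: "failed"; error: string };

// Fold the event log into the latest status; the returned values mirror
// the IN_PROGRESS / SUCCEEDED / FAILED column stored in table_requests.
function currentStatus(events: TableEvent[]): string {
  let status = "UNKNOWN";
  for (const event of events) {
    switch (event.kind) {
      case "created":
      case "started":
        status = "IN_PROGRESS";
        break;
      case "completed":
        status = "SUCCEEDED";
        break;
      case "failed":
        status = "FAILED";
        break;
    }
  }
  return status;
}
```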
Idempotency
Idempotent Request Processing:
@Transactional
fun processTableRequest(request: TableRequest): TableResponse {
    val existingResponse = findByHash(request.hash)
    if (existingResponse != null && existingResponse.isComplete()) {
        return existingResponse
    }
    return executeNewRequest(request)
}
Development and Deployment Architecture
CI/CD Pipeline
Automated Deployment:
# GitLab CI/CD Pipeline
stages:
  - test
  - build
  - deploy-staging
  - integration-test
  - deploy-production

test:
  script:
    - ./gradlew test
    - npm test

build:
  script:
    - docker build -t $IMAGE_TAG .
    - docker push $REGISTRY/$IMAGE_TAG

deploy-production:
  script:
    - kubectl set image deployment/mapr-gateway app=$IMAGE_TAG
    - kubectl rollout status deployment/mapr-gateway
Blue-Green Deployment
Zero-Downtime Deployments:
- Deploy the new version to the idle (blue) environment
- Run health checks and integration tests against blue
- Switch the load balancer from the live (green) environment to blue
- Monitor for issues; roll back to green if necessary
- Decommission the old green environment once blue is stable
This technical architecture provides a robust foundation for Solvice Maps, enabling high performance, scalability, and reliability while maintaining developer productivity and operational simplicity.