Infrastructure Provisioning

Provision your infrastructure to run RudderStack, guided by measured performance metrics.

This is a brief overview of the stats we captured by running Backend on different AWS machine configurations. We hope it gives you a rough idea for making decisions on provisioning infrastructure to host Rudder. All the numbers below were captured using the metrics described in Monitoring and Metrics.

Load Test Results

All tests were done using a db.m4.xlarge Amazon RDS instance to host the PostgreSQL database.

Gateway

| Machine | Load | Response Time (ms) gateway.response_time | Throughput gateway.write_key_requests | Dangling Tables |
| --- | --- | --- | --- | --- |
| m4.2xLarge (8 Core, 32 GB) | 2.5K/s | | 2.5K/s | |
| m4.2xLarge (8 Core, 32 GB) | 5K/s | | 3K/s | |
| m4.2xLarge (8 Core, 32 GB) | 3K/s | | 2.7K/s | |
| m5.xLarge (4 Core, 16 GB) | 2.5K/s | 3 | 1.9K/s | |
| m5.large (2 Core, 8 GB) | 2.5K/s | 4.2 | 1.7K/s | |
info
Backend migrates and drops tables once a threshold of their jobs has been processed. Gateway tables are backed up to object storage (S3, MinIO, etc.) if configured by the user. Dangling tables indicate that tables become ready to be dropped at a rate greater than the rate at which they are backed up to object storage. Concurrent uploads to object storage are on the roadmap for upcoming versions of Backend.

Transformer

| Machine | Gateway Throughput gateway.write_key_requests | Throughput processor.transformer_received |
| --- | --- | --- |
| m4.2xLarge (8 Core, 32 GB) | 2.7K/s | |
| m5.xLarge (4 Core, 16 GB) | 1.9K/s | |
| m5.large (2 Core, 8 GB) | 1.7K/s | |
warning
The transformer is a Node.js/Koa server launched as a cluster of Node processes, with the process count equal to the number of cores on the machine. Choosing an instance with fewer cores than the instance running the processor might reduce the throughput of the transformer.

Batch Router - S3 Destination

| Machine | Gateway Throughput gateway.write_key_requests | Throughput batch_router.dest_successful_events |
| --- | --- | --- |
| m4.2xLarge (8 Core, 32 GB) | 2.7K/s | |
| m5.xLarge (4 Core, 16 GB) | 1.9K/s | |
| m5.large (2 Core, 8 GB) | 1.7K/s | |

Below is a screenshot from CloudWatch Metrics showing the captured stats:

Figure: Gateway Requests and Batch Router Throughputs

Database Requirements

Rudder recommends using a database with at least 1 TB of allocated storage, as increasing storage on the fly could involve downtime depending on your database service provider.

Estimating Storage

If you want to dig deeper and figure out the right storage size, go through the following example.
Consider the following variables to arrive at the right storage size for your use case.

| Variable | Description | Production sample data |
| --- | --- | --- |
| numSources | Total number of sources | 2 |
| numEventsPerSec | Number of events per second for a given source | 2500 |
| avgGwEventSize | Event size captured at the gateway by Rudder | 2.1 KB |
| gwEventOverhead | Size of the extra metadata that Rudder stores at the gateway to process the event | 300 B |
| numDests | Number of enabled destinations for a given source | 3 |
| avgRtEventSize | Payload size that the router sends to the destination after applying transformations | 1.2 KB |
| rtEventOverhead | Size of the extra metadata that Rudder stores to process the event | 300 B |
$$gatewayStorage = numEventsPerSec * (avgGwEventSize + gwEventOverhead)$$
$$routerStorage = numEventsPerSec * numDests * (avgRtEventSize + rtEventOverhead)$$
$$totalStoragePerHour = 3600 * \sum_{firstSource}^{lastSource} (gatewayStorage + routerStorage)$$

In the above production example, substituting the values gives a totalStoragePerHour of roughly 120 GB.
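If it helps to sanity-check the arithmetic, here is a minimal Python sketch of the same calculation using the sample values above. The variable names simply mirror the table; they are not RudderStack configuration keys.

```python
# Storage estimate per hour, using the sample production values above
# (illustrative numbers only; decimal KB/GB for simplicity).
KB = 1_000
GB = 1_000_000_000

sources = [
    # (numEventsPerSec, avgGwEventSize, gwEventOverhead, numDests, avgRtEventSize, rtEventOverhead)
    (2500, 2.1 * KB, 300, 3, 1.2 * KB, 300),
    (2500, 2.1 * KB, 300, 3, 1.2 * KB, 300),
]

total_bytes_per_sec = 0
for events, gw_size, gw_overhead, dests, rt_size, rt_overhead in sources:
    gateway_storage = events * (gw_size + gw_overhead)          # bytes/sec stored at the gateway
    router_storage = events * dests * (rt_size + rt_overhead)   # bytes/sec stored at the router
    total_bytes_per_sec += gateway_storage + router_storage

total_storage_per_hour = 3600 * total_bytes_per_sec
print(f"totalStoragePerHour ≈ {total_storage_per_hour / GB:.0f} GB")  # ≈ 124 GB, i.e. roughly 120 GB
```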

Sample your peak load in production and substitute your values to estimate the storage needed per hour of data.

info

Event data and tables are ephemeral. On the happy path, only a few minutes of event data is stored at any time.

We recommend provisioning at least 10 hours' worth of the event storage computed above to gracefully handle destinations going down for a few hours (roughly 1.2 TB in the production example above).

If you want to prepare for a destination going down for days, accommodate that in your storage capacity.

Estimating Connections

Rudder batches requests efficiently when writing data. Under heavy load, the backend can be configured (batchTimeoutInMS and maxBatchSize) to batch more requests and limit concurrent connections to the database. If write latencies to the database are not within permissible thresholds, a new data set, i.e., a backend server and a database server, needs to be added.
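For a rough sense of how these two knobs interact, the sketch below estimates the rate of batched database writes under sustained load. It assumes the usual flush-when-full-or-on-timeout behavior that the parameter names suggest; treat it as a reasoning aid, not a description of the backend's exact internals.

```python
# Approximate gateway -> database write batches per second under sustained load,
# assuming a batch is flushed when it reaches max_batch_size requests or when
# batch_timeout_ms elapses, whichever happens first (an assumption, see above).
def batch_flushes_per_second(requests_per_second: float,
                             max_batch_size: int,
                             batch_timeout_ms: float) -> float:
    size_limited = requests_per_second / max_batch_size  # batches fill before the timeout
    timeout_limited = 1000.0 / batch_timeout_ms          # batches flush on the timeout
    # The larger of the two limits applies under load, but flushes can never
    # outpace incoming requests.
    return min(requests_per_second, max(size_limited, timeout_limited))

# Hypothetical example: 2.5K requests/s with a batch size of 128 and a 20 ms
# timeout results in ~50 batched writes per second instead of 2,500 single writes.
print(batch_flushes_per_second(2500, 128, 20))
```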

Rudder reads the data back from the database at a constant rate. A sudden spike in user traffic will not result in more DB read requests.

RAM Requirements

Rudder does not cache aggressively and hence does not need a huge amount of memory. Load tests were performed on 4 GB and 8 GB memory instances.

Rudder caches active user events by default to form configurable user sessions server-side. The length of any user session can be configured with sessionThresholdInS and sessionThresholdEvents. Once a user's session is formed, that user's events are cleared from the cache. If you don't need sessions, disable this by setting processSessions to false.

| Variable | Description | Sample data |
| --- | --- | --- |
| numActiveUsers | Number of active users during a session window (2 min) in your application during peak hours | 10000 |
| avgGwEventSize | Event size captured at the gateway by Rudder | 2.1 KB |
| userEventsInThreshold | Number of user events in the given threshold, e.g., 40 user events in 2 min | 40 |
$$memoryNeeded = numActiveUsers * userEventsInThreshold * avgGwEventSize$$

Memory required in the above example would be 840 MB.
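The same arithmetic as a short Python sketch, using the sample values from the table above (the names mirror the variables in the formula; they are not configuration keys):

```python
# Session-cache memory estimate with the sample values above (illustrative only).
num_active_users = 10_000       # active users within the session window
user_events_in_threshold = 40   # events per user within the window
avg_gw_event_size_kb = 2.1      # average event size at the gateway, in KB

memory_needed_kb = num_active_users * user_events_in_threshold * avg_gw_event_size_kb
print(f"memoryNeeded ≈ {memory_needed_kb / 1_000:.0f} MB")  # ≈ 840 MB
```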

info
The memory estimate does not include the default RAM required for running the OS and the required processes.

Questions? Contact us by email or on Slack