
7 posts tagged with "redis"


More Resilient and More Scalable: Upstash with Multi-Zone Replication

Enes Akar

CEO @upstash

We are happy to announce our Multi-Zone Replication capability. When enabled, data is replicated to multiple availability zones. Multi-zone replication gives you high availability and better scalability.

High Availability#

A multi-zone database is more resilient to failures because database replicas run in different zones. Even if an availability zone becomes unavailable, your applications are not affected, as requests are redirected to the healthy zones. The failover time for a single-zone database is several minutes, while it is seconds for a multi-zone database.

Better Scalability#

In a multi-zone database, your requests are distributed among the replicas in a round-robin fashion. New replicas are added to the cluster to meet your throughput needs.

Architecture#

We use the single-leader replication model. Each key is owned by a leader replica, and the other replicas act as backups of the leader. Writes on a key are processed by the leader replica first, then propagated to the backup replicas. Reads can be served by any replica or only by the leader, depending on the consistency configuration. This model gives better write consistency and read scalability.
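
To make the write and read paths concrete, here is a toy, in-memory JavaScript model of single-leader replication. It is illustrative only, not Upstash's actual implementation; all names are invented for the sketch.

class Replica {
  constructor(name) {
    this.name = name
    this.store = new Map()
  }
  apply(key, value) {
    this.store.set(key, value)
  }
  get(key) {
    return this.store.get(key)
  }
}

const leader = new Replica('leader')
const backups = [new Replica('backup-1'), new Replica('backup-2')]

function write(key, value) {
  // The leader processes the write first...
  leader.apply(key, value)
  // ...then it is propagated to the backups (asynchronously here,
  // mirroring the eventual mode).
  setTimeout(() => backups.forEach((b) => b.apply(key, value)), 0)
}

function read(key, consistency) {
  // Strong mode reads from the leader; eventual mode may hit any replica.
  if (consistency === 'strong') return leader.get(key)
  const all = [leader, ...backups]
  return all[Math.floor(Math.random() * all.length)].get(key)
}

write('greeting', 'hello')
console.log(read('greeting', 'strong')) // always "hello"
console.log(read('greeting', 'eventual')) // may briefly be undefined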

Each replica employs a failure detector to track the liveness of the leader replica. When the leader replica fails for any reason, the remaining replicas start a new leader election round and elect a new leader. This is the only unavailability window for the cluster, during which your requests can be blocked for a short period of time.

Multi-zone Architecture

Consistency#

We have two consistency modes: eventual and strong. In eventually consistent mode, the write request returns after the leader replica processes the operation, and the write is replicated to the backup replicas asynchronously. Read requests can be served by any replica, which gives better horizontal scalability but also means a read may return a stale value while a write to the same key is still being propagated to the backup replicas.

In strong consistency mode, the response to a write request is returned to the client only after at least one backup replica, in addition to the leader replica, has processed the write operation.

Strong consistency mode also guarantees that a write is synced to disk before the response is returned. Upon receiving the acknowledgement, the client can assume the data will be safe even if the leader replica fails. Read requests are served only by the leader replica, which provides stronger consistency but reduces the scalability of the cluster.
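
As a concrete illustration of what each mode means for a client, consider the snippet below (using ioredis; REDIS_URL is a placeholder for your connection string):

const Redis = require('ioredis')
const redis = new Redis(process.env.REDIS_URL)

async function main() {
  // The write returns once the leader acks the operation
  // (plus at least one backup, in strong mode).
  await redis.set('page:views', '100')

  // Strong mode: this read goes to the leader and is guaranteed to see "100".
  // Eventual mode: it may be served by a backup that has not yet applied
  // the write, so it can briefly return the previous value.
  const views = await redis.get('page:views')
  console.log(views)

  redis.quit()
}

main()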

Upgrades#

You can enable multi-zone replication for your database in the Upstash Console. Thanks to the replication model, there will be no downtime. You may experience a slight degradation in performance during the migration, which takes anywhere from a few seconds to several minutes depending on the size of your database.

Pricing#

Due to the increased infrastructure cost, the price of a multi-zone database is higher: $0.4 per 100,000 requests and $0.5 per GB.

Speed up your Next.js application with Redis

Noah Fischer

DevRel @upstash

Next.js is a very successful web framework which brings together server-side rendering and static site generation. SSG speeds up your website thanks to CDN caching, while SSR helps you with SEO and dynamic data.

Server-side rendering is a great feature which helps you write full-stack applications. But if you are not careful, the performance of your Next.js website can suffer easily. In this blog post, I will explain how to leverage Redis to speed up your Next.js API calls. Before that, I will briefly mention a simpler way to improve performance.

Use SWR on your API calls#

SWR is a very smart data-fetching library. It uses the stale-while-revalidate cache invalidation strategy described in HTTP RFC 5861. When you call an API with SWR, it instantly returns the cached data, then asynchronously fetches the current data and updates your UI. You can also set refreshInterval depending on your tolerance for staleness.

const { data: user } = useSWR('/api/user', { refreshInterval: 2000 })

In the code above, the user API will be refreshed every 2 seconds.
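
For completeness, here is a slightly fuller sketch with an explicit fetcher; the /api/user endpoint and the user.name field are placeholders:

import useSWR from 'swr'

// Minimal fetcher; SWR leaves the fetching function up to you.
const fetcher = (url) => fetch(url).then((res) => res.json())

function Profile() {
  // Returns cached data immediately, revalidates in the background,
  // and refreshes every 2 seconds.
  const { data: user, error } = useSWR('/api/user', fetcher, {
    refreshInterval: 2000,
  })

  if (error) return <div>Failed to load</div>
  if (!user) return <div>Loading...</div>
  return <div>Hello, {user.name}!</div>
}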

Caching with Redis#

SWR is very simple and effective. But there are cases where you will need server-side caching:

  • Client-side caching improves performance for the clients. But if the number of clients is high, you can experience high load on your server-side resources, which will eventually affect client-side performance too.
  • If you are consuming an external API with a quota, you will want to control API usage on the server side. Otherwise, too many clients will consume the quota quickly.
  • If you have resources that are calculated, fetched, or processed on the server side using dynamic inputs, client-side caching will not be very useful.

Example Project: Covid Tracker#

In this project, we will use Javier Aviles’ Covid API and find the 10 countries with the highest number of cases. Check the website and the source code.

We will use Redis to cache the responses from Covid API so:

  • The response will be much faster. If you check the website, you will see that calling the Covid API takes hundreds of milliseconds, while fetching from Redis takes 1-2 milliseconds.
  • We will not overwhelm the Covid API with too many requests.

API Code#

The code first checks whether we have the API result cached in Redis. If not, we fetch the full country list from the Covid API, sort it by the current day’s number of cases, and save the top 10 to Redis. While saving to Redis, we set the "EX" 60 parameter, which tells Redis to expire the entry after 60 seconds.

import Redis from 'ioredis'

let redis = new Redis(process.env.REDIS_URL)

export default async (req, res) => {
  let start = Date.now()
  // Check whether the API result is already cached in Redis.
  let cache = await redis.get('cache')
  cache = JSON.parse(cache)
  let result = {}
  if (cache) {
    console.log('loading from cache')
    result.data = cache
    result.type = 'redis'
    result.latency = Date.now() - start
    return res.status(200).json(result)
  } else {
    console.log('loading from api')
    start = Date.now()
    return fetch('https://coronavirus-19-api.herokuapp.com/countries')
      .then((r) => r.json())
      .then((data) => {
        // Sort by today's case count, descending.
        data.sort((a, b) => b.todayCases - a.todayCases)
        // Skip the worldwide aggregate at index 0 and keep the next 10 countries.
        result.data = data.slice(1, 11)
        result.type = 'api'
        result.latency = Date.now() - start
        // Cache the result; "EX" 60 makes Redis expire the key after 60 seconds.
        redis.set('cache', JSON.stringify(result.data), 'EX', 60)
        return res.status(200).json(result)
      })
  }
}

UI Code#

The UI is simple React code. We fetch the data from the API using SWR.

import Head from 'next/head'
import useSWR from 'swr'
// Assumed stylesheet path, per create-next-app defaults.
import styles from '../styles/Home.module.css'

// Minimal fetcher for SWR.
const fetcher = (url) => fetch(url).then((r) => r.json())

export default function Home() {
  function refresh(e) {
    e.preventDefault()
    window.location.reload()
  }

  const { data, error } = useSWR('api/data', fetcher)

  if (error) return 'An error has occurred.'
  if (!data) return 'Loading...'

  return (
    <div className={styles.container}>
      <Head>
        <title>Covid Tracker</title>
        <meta name="description" content="Generated by create next app" />
        <link rel="icon" href="/favicon.ico" />
      </Head>
      <main className={styles.main}>
        <h1 className={styles.title}>Covid Tracker</h1>
        <p className={styles.description}>
          Top 10 countries with the most cases today
        </p>
        <div className={styles.grid}>
          <div className={styles.card} onClick={refresh}>
            <table className={styles.table}>
              <thead>
                <tr>
                  <th>Country</th>
                  <th>Today Cases</th>
                  <th>Today Deaths</th>
                </tr>
              </thead>
              <tbody>
                {data.data.map((item) => (
                  // A stable key lets React track each row.
                  <tr key={item.country}>
                    <td>{item.country}</td>
                    <td>{item.todayCases}</td>
                    <td>{item.todayDeaths}</td>
                  </tr>
                ))}
              </tbody>
            </table>
            <br />
            <em>
              Loaded from {data.type} in <b>{data.latency}</b> milliseconds.
              Click to reload.
            </em>
          </div>
        </div>
      </main>
      <footer className={styles.footer}>
        This is a sample project for the blog post&nbsp;
        <a
          href="https://blog.upstash.com/nextjs-caching-with-redis"
          target="_blank"
          rel="noopener noreferrer"
        >
          Speed up your Next.js application using Serverless Redis for caching.
        </a>
      </footer>
    </div>
  )
}

External Links#

https://swr.vercel.app/docs/with-nextjs

https://brianlovin.com/writing/caching-api-routes-with-next-js

https://coronavirus-19-api.herokuapp.com/countries

https://github.com/javieraviles/covidAPI

Rate Limiting Your Serverless Applications

Noah Fischer

DevRel @upstash

One of the best things about serverless is its ability to scale even in the case of huge traffic spikes. But unfortunately, scaling is not free, either financially or technically. That’s why developers need to control their applications’ scalability. Here are the main reasons you might need a rate limiting mechanism in your serverless application:

1- Protect your resources: If you’re providing a public API, traffic spikes can degrade the quality of the service and may lead to a service outage for all your users. You need to protect your system against such cascading failures as well as self-inflicted DDoS incidents. A bug in your application can trigger such problems: an internal process which retries an endpoint indefinitely in case of failure can easily exhaust your resources.

2- Manage user quotas: You may want to define quotas for your users to ensure fair use of your services. You may also need quotas if you provide your services in different pricing tiers.

3- Control the cost: There are many real-life examples of how an uncontrolled system can cause large bills. This is a real risk for serverless applications due to their highly scalable nature. Rate limiting will help you control these costs.

Solutions#

There are multiple rate limiting solutions at different layers. I will list the 3 main ones with a brief pros/cons analysis.

1- Concurrency Level of Function:

Cloud providers create multiple containers to scale your serverless function executions. You can set a limit on the maximum number of concurrent containers/instances. Although this can help you limit concurrency, it does not control how many times your function is called per second.

Here is how you can limit concurrency for AWS Lambda and Google Cloud Functions; a short sketch using the AWS SDK follows below.
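
As a sketch of the AWS side, reserved concurrency can be set with the AWS SDK v3 (@aws-sdk/client-lambda); the function name below is a placeholder:

const {
  LambdaClient,
  PutFunctionConcurrencyCommand,
} = require('@aws-sdk/client-lambda')

async function capConcurrency() {
  const lambda = new LambdaClient({})
  // Cap the function at 10 concurrent containers.
  await lambda.send(
    new PutFunctionConcurrencyCommand({
      FunctionName: 'my-function',
      ReservedConcurrentExecutions: 10,
    })
  )
}

capConcurrency()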

Pros:

  • No overhead
  • Easy to configure

Cons:

  • Not a complete solution: it only controls concurrency; the number of executions per second is not limited.

2- Rate limiting on API Gateway

If you are accessing your functions through an API Gateway, you can apply your rate limiting policy at the API Gateway level. Both AWS and GCP have guides on how to configure their solutions.

Pros:

  • No overhead
  • Easy to configure

Cons:

  • Only applies if you are using API Gateway.
  • It does not support more sophisticated cases like quotas per user or per IP.

3- Rate limiting with Redis

This is the most complete and powerful solution. There are many Redis-based rate limiting libraries available. In his blog post, Jeremy Daly rejects ElastiCache as a possible solution, saying that it adds a “non-serverless” component and another thing to manage. Here, Upstash becomes a very good alternative with its serverless model and per-request pricing.

Pros:

  • Powerful: you can implement customized logic that fits your user model.
  • Scalable: see how GitHub uses Redis for rate limiting.
  • Rich ecosystem with many open-source libraries: redis_rate, redis-cell, node-ratelimiter.

Cons:

  • Overhead of using Redis.

Code: Rate Limiting with Redis#

Thanks to rate limiting libraries, it is very easy to apply rate limiting in your application code. The example below limits executions of an AWS Lambda function to one request per IP every 5 seconds:

const RateLimiter = require('async-ratelimiter')
const Redis = require('ioredis')
const { getClientIp } = require('request-ip')

// Allow at most 1 request per client IP in any 5-second window.
const rateLimiter = new RateLimiter({
  db: new Redis('YOUR_REDIS_URL'),
  max: 1,
  duration: 5_000,
})

module.exports.hello = async (event) => {
  const clientIp = getClientIp(event) || 'NA'
  const limit = await rateLimiter.get({ id: clientIp })

  // No remaining quota: reject with 429 Too Many Requests.
  if (!limit.remaining) {
    return {
      statusCode: 429,
      body: JSON.stringify({
        message: 'Sorry, you are rate limited. Wait for 5 seconds',
      }),
    }
  }

  return {
    statusCode: 200,
    body: JSON.stringify({
      message: 'hello!',
    }),
  }
}

Visit the tutorial for the full example.

Reading List#

https://cloud.google.com/architecture/rate-limiting-strategies-techniques

https://www.jeremydaly.com/throttling-third-party-api-calls-with-aws-lambda/

https://medium.com/google-cloud/rate-limit-your-api-usage-with-cloud-endpoints-quotas-1270da55d2bf

https://github.blog/2021-04-05-how-we-scaled-github-api-sharded-replicated-rate-limiter-redis/

https://redis.io/commands/incr#pattern-rate-limiter

https://stripe.com/blog/rate-limiters

Roadmap Application with Next.js, Redis and Auth0

Noah Fischer

DevRel @upstash

We have been developing example applications to showcase how easy and practical it is to develop serverless applications with Redis. So far, the most popular of those examples is the Roadmap Voting Application. As we started to use it in real life, two main problems emerged:

  • We started to see spam entries. The application does not have an admin dashboard, so one had to connect to Redis directly to delete an entry.
  • We released some of the features in the list, but there was no way to flag them as released and remove them from the voting list.

Latency Comparison Among Serverless Databases: DynamoDB vs FaunaDB vs Upstash

Noah Fischer

DevRel @upstash

In this article, I will compare the latencies of three serverless databases, DynamoDB, FaunaDB, and Upstash (Redis), for a common web use case.

I created a sample news website, and I record the database-related latency with each request to the website. Check the website and the source code.

I inserted 7,001 NY Times articles into each database. The articles were collected from the New York Times Archive API (all articles from January 2021). I randomly scored each article. On each page request, I query the top 10 articles under the World section from each database.

Challenge of Serverless: Database Connections

Noah Fischer

DevRel @upstash

When designing a database for serverless, the biggest challenge in our minds was building an infrastructure that supports per-request pricing in a profitable way. We believe Upstash has achieved this. After we launched the product, we saw that there was another major challenge: database connections!

As you know, serverless functions scale from 0 to infinity. This means that when your functions get a lot of traffic, the cloud provider creates new containers (Lambda functions) in parallel and scales out your backend. If you create a new database connection within the function, you can rapidly reach the connection limit of your database.

If you try to cache the connection outside the Lambda function, another problem occurs. When AWS freezes your Lambda function, it does not close the connection, so you may end up with many idle/zombie connections that still count against your database’s connection limit.
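
For reference, this is what caching the connection outside the handler looks like; a minimal sketch with ioredis (REDIS_URL is a placeholder). It avoids opening a connection per invocation, but it is subject to the idle-connection issue described above:

const Redis = require('ioredis')

// Created lazily and kept outside the handler so warm invocations of the
// same container reuse the connection instead of opening a new one.
let redis

module.exports.handler = async (event) => {
  if (!redis) {
    redis = new Redis(process.env.REDIS_URL)
  }
  const value = await redis.get('greeting')
  return { statusCode: 200, body: value || 'hello' }
}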

GraphQL API for Serverless Redis

Noah Fischer

DevRel @upstash

We’re excited to announce that Upstash now supports a GraphQL API for connecting to Serverless Redis. Now you can connect to Upstash from anywhere you can send HTTP requests (including WebAssembly and mobile apps).

Until now, the only way to connect to your database was the Redis protocol, which requires a TCP connection. GraphQL helps with database connection limits, especially in serverless environments. Further, GraphQL enables Upstash to provide features and APIs that Redis doesn’t support natively.
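
As a rough illustration of querying over HTTP: the endpoint URL, token, and the redisGet field below are hypothetical placeholders; consult the Upstash docs for the actual GraphQL schema:

async function queryUpstash() {
  // Send a GraphQL query as a plain HTTP POST -- no TCP Redis connection needed.
  const response = await fetch('https://YOUR_GRAPHQL_ENDPOINT', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: 'Bearer YOUR_ACCESS_TOKEN',
    },
    body: JSON.stringify({
      // Hypothetical query shape for reading a key.
      query: '{ redisGet(key: "greeting") }',
    }),
  })
  return response.json()
}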