
56 posts tagged with "serverless"


· 7 min read
Enes Akar

Computing at the edge is one of the most exciting capabilities of recent years. A CDN allows you to keep your files closer to your users. Edge computing allows you to run your applications closer to your users. Together, these help developers build globally distributed, performant applications.

Cloudflare Workers is the leading product in this space right now. It gives you a serverless processing environment without cold starts. You leverage Cloudflare's global network to minimize the latency of your applications. You can write your functions in JavaScript, Rust, C, and C++.

Similar to serverless functions (AWS Lambda etc.), Cloudflare Workers are stateless. As you can see in Cloudflare's survey, developers are asking for ways to connect to their databases from edge functions. Unfortunately, most databases are not designed for serverless environments; they require persistent connections. We developed a REST API over Redis to enable serverless edge functions to access Upstash in the simplest and fastest way possible.
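To illustrate, here is a minimal sketch of what calling Upstash from a Worker over the REST API might look like. The endpoint URL, token, and the page_views key are placeholders; since every command is a plain HTTPS request, the Worker never manages a connection.

```ts
// A minimal sketch: a Cloudflare Worker talking to Upstash over the REST API.
// UPSTASH_URL, UPSTASH_TOKEN, and the "page_views" key are placeholders.
const UPSTASH_URL = "https://us1-example.upstash.io";
const UPSTASH_TOKEN = "YOUR_REST_TOKEN";

addEventListener("fetch", (event) => {
  event.respondWith(handleRequest());
});

async function handleRequest() {
  // Each Redis command is a single HTTPS call; no persistent connection needed.
  const response = await fetch(`${UPSTASH_URL}/incr/page_views`, {
    headers: { Authorization: `Bearer ${UPSTASH_TOKEN}` },
  });
  const { result } = await response.json();
  return new Response(`Page views: ${result}`);
}
```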

· 4 min read
Noah Fischer

Next.js is a very successful web framework which brings together server-side rendering (SSR) and static site generation (SSG). SSG speeds up your website thanks to CDN caching, while SSR helps you with SEO and dynamic data.

Server-side rendering is a great feature that helps you write full-stack applications. But if you are not careful, the performance of your Next.js website can suffer. In this blog post, I will explain how to leverage Redis to speed up your Next.js API calls. Before that, I will briefly mention a simpler way to improve your performance.
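As a preview, here is a minimal sketch of the caching pattern, assuming ioredis, a REDIS_URL environment variable, and a hypothetical fetchArticlesFromDb() stand-in for the slow upstream query; the post's actual code may differ.

```ts
// pages/api/articles.ts -- a caching sketch, not the post's exact code.
import Redis from "ioredis";
import type { NextApiRequest, NextApiResponse } from "next";

const redis = new Redis(process.env.REDIS_URL!);

// Hypothetical stand-in for the real (slow) data source.
async function fetchArticlesFromDb() {
  return [{ title: "Hello World" }];
}

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  const cached = await redis.get("articles");
  if (cached) {
    return res.status(200).json(JSON.parse(cached)); // cache hit: skip the database
  }
  const articles = await fetchArticlesFromDb();
  await redis.set("articles", JSON.stringify(articles), "EX", 30); // expire in 30s
  return res.status(200).json(articles);
}
```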

· 4 min read
Noah Fischer

One of the best things about serverless is its ability to scale even under huge traffic spikes. Unfortunately, scaling is not free, either financially or technically. That's why developers need to control their applications' scalability. Here are the main reasons you will need a rate limiting mechanism in your serverless application (a minimal implementation sketch follows the list):

1- Protect your resources: If you're providing a public API, traffic spikes can degrade the quality of the service and may lead to a service outage for all your users. You need to protect your system against such cascading failures as well as self-inflicted DDoS incidents. A bug in your application can trigger such problems in your system. An internal process that retries an endpoint indefinitely in case of a failure can easily exhaust your resources.

2- Manage user quotas: You may want to define quotas for your users to ensure fair use of your services. You may also need quotas if you provide your services in different pricing tiers.

3- Control the cost: There are many real-life examples of how an uncontrolled system can cause large bills. This is quite a risk for serverless applications due to their highly scalable nature. Rate limiting will help you control these costs.
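Here is the sketch promised above: a minimal fixed-window rate limiter built on Redis INCR and EXPIRE, assuming ioredis; the limit, window, and key naming are illustrative, not from the post.

```ts
// A minimal fixed-window rate limiter sketch using Redis INCR/EXPIRE.
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL!);
const LIMIT = 10;          // illustrative: max requests per window
const WINDOW_SECONDS = 60; // illustrative: window length

export async function isAllowed(userId: string): Promise<boolean> {
  // One counter per user per window; the window id is derived from the clock.
  const windowId = Math.floor(Date.now() / 1000 / WINDOW_SECONDS);
  const key = `rate:${userId}:${windowId}`;
  const count = await redis.incr(key);
  if (count === 1) {
    // First request in this window: make the counter expire with the window.
    await redis.expire(key, WINDOW_SECONDS);
  }
  return count <= LIMIT;
}
```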

· 10 min read
Noah Fischer

In this article, I will compare the latencies of three serverless databases for a common web use case: DynamoDB, FaunaDB, and Upstash (Redis).

I created a sample news website, and I record the database-related latency with each request to the website. Check the website and the source code.

I inserted 7001 NY Times articles into each database. The articles are collected from the New York Times Archive API (all articles from January 2021). I assigned each article a random score. On each page request, I query the top 10 articles under the World section from each database.
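For the Redis case, such a query maps naturally onto a sorted set. Here is a minimal sketch assuming ioredis and a hypothetical articles:World key whose members were scored at insert time; the benchmark's real schema may differ.

```ts
// A sketch of the Redis leg of the benchmark: a top-10 lookup from a sorted set.
// The "articles:World" key name and member layout are assumptions.
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL!);

async function topWorldArticles(): Promise<string[]> {
  const start = Date.now();
  // Highest-scored members first; indexes 0..9 give the top 10 articles.
  const articles = await redis.zrevrange("articles:World", 0, 9);
  console.log(`Redis latency: ${Date.now() - start}ms`);
  return articles;
}
```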

· 8 min read
Noah Fischer

When designing a database for serverless, the biggest challenge in our minds was building an infrastructure that supports per-request pricing in a profitable way. We believe Upstash has achieved this. After we launched the product, we saw that there was another major challenge: database connections!

As you know, serverless functions scale from zero to infinity. This means that when your functions get a lot of traffic, the cloud provider creates new containers (Lambda functions) in parallel and scales out your backend. If you create a new database connection within the function, you can rapidly reach the connection limit of your database.

If you try to cache the connection outside the Lambda handler, another problem occurs. When AWS freezes your Lambda function, it does not close the connection. So you may end up with many idle/zombie connections that can still threaten your database's connection limit.
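For reference, the usual workaround looks roughly like this, assuming ioredis and Node.js on Lambda. It lets warm invocations reuse a client but, as noted above, does nothing about the idle connections held by frozen containers.

```ts
// A sketch of connection reuse on Lambda: the client is created once per
// container and shared across warm invocations. Frozen containers still keep
// their connections open, which is the zombie-connection problem above.
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL!); // one client per container

export async function handler() {
  const count = await redis.incr("invocations");
  return { statusCode: 200, body: `Invocation #${count}` };
}
```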