AWS Lambda pioneered the serverless space. Many developers think that serverless is the future of development. It gives you a true pay-per-use model and relieves you of maintaining and scaling backend infrastructure. But it also comes with challenges. One of them is statelessness: you need to keep state in an external data store. Unfortunately, most of the popular data stores are connection based. But as we explained in this post, managing connections can become painful in serverless. That's why we have developed a high performance REST API on top of Upstash Redis. In this blog post, I will implement a very basic stateful API (page counter) on AWS Lambda and Upstash Redis using the REST API.
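To give a feel for how simple the REST approach is, here is a minimal sketch of such a page counter. It is not the exact code from the post; it assumes a Node.js 18+ Lambda runtime (where `fetch` is global) and environment variables named `UPSTASH_REST_URL` and `UPSTASH_REST_TOKEN` pointing at your database.

```ts
// A minimal page-counter Lambda handler (sketch, not the post's exact code).
// Assumes a Node.js 18+ runtime (global fetch) and that UPSTASH_REST_URL and
// UPSTASH_REST_TOKEN environment variables point at your Upstash database.
export const handler = async () => {
  // INCR atomically increments the counter and returns the new value.
  const response = await fetch(`${process.env.UPSTASH_REST_URL}/incr/pageviews`, {
    headers: { Authorization: `Bearer ${process.env.UPSTASH_REST_TOKEN}` },
  });
  const { result } = (await response.json()) as { result: number };

  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ pageviews: result }),
  };
};
```

Since each invocation is a single stateless HTTP call, there is no connection to open, cache, or leak.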
We are thrilled to announce that our Upstash Terraform Provider is now publicly available. Our core principle is to always be developer friendly. We recently announced the REST API. Now it is time to expand our tooling with a Terraform provider plugin.
Terraform is a useful automation tool that lets you define your infrastructure as code. Collaboration becomes much easier this way, and every configuration change is persisted, so everybody knows what is going on with the infrastructure.
Following community requests, we developed our Terraform provider, and it is now publicly available in the Terraform Registry.
Upstash supports a REST API in addition to the native Redis API. The REST API lets developers access Redis from serverless and edge functions without connection issues. But if you execute multiple Redis commands in the same function, you end up making multiple calls to the database. One of our community members (@MasterGates) came up with a great suggestion in our Discord channel: a pipeline API that batches multiple commands into a single HTTP request.
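To illustrate the idea, here is roughly what batching commands into one request could look like. The `/pipeline` endpoint shape, the keys, and the environment variable names below are my assumptions for the sketch; check the Upstash REST API docs for the exact contract.

```ts
// Sending several Redis commands in one HTTP round trip (sketch, assumed /pipeline endpoint).
// Assumes UPSTASH_REST_URL / UPSTASH_REST_TOKEN environment variables and Node 18+ (global fetch).
async function runPipeline() {
  const commands = [
    ["SET", "user:1:name", "Jane"],
    ["INCR", "user:1:visits"],
    ["GET", "user:1:visits"],
  ];

  const response = await fetch(`${process.env.UPSTASH_REST_URL}/pipeline`, {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.UPSTASH_REST_TOKEN}` },
    body: JSON.stringify(commands),
  });

  // One entry per command, in the same order the commands were sent.
  const results = (await response.json()) as { result?: unknown; error?: string }[];
  console.log(results);
}

runPipeline();
```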
In this article, we will build a serverless, Next.js-based TODO application. We will try our best to make it minimalist. It will not have any database connection. It will not have any extra dependency other than Next.js. It will not have any buttons. Besides, minimalism is cool and clean; I love it because I am a lazy developer :)
Next.js is a modern framework that enables front-end developers to build full-stack applications. Serverless functions play an important role in simplifying backend development for Next.js developers. As you probably know, serverless functions do not like database connections due to their stateless nature. See here and here for examples of the problems database connections cause inside serverless functions.
Restricting access to your website to specific IPs is a common need. In this post, I will show how to implement an IP allow/deny list using edge computing. Let me first introduce Cloudflare Workers.
Computing at the edge is one of the most exciting developments of recent years. A CDN lets you keep your files closer to your users; edge computing lets you run your applications closer to your users. This helps developers build globally distributed, performant applications.
Similar to serverless functions (AWS Lambda, etc.), Cloudflare Workers are stateless. As you can see in Cloudflare's survey, developers are asking for ways to connect to their databases from edge functions. Unfortunately, most databases are not designed for serverless environments; they require persistent connections. We developed the REST API on top of Redis to enable serverless edge functions to access Upstash in the simplest and fastest way possible.
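As a rough sketch of what an IP allow list on a Worker can look like (not the post's exact code), the Worker below reads the client IP from the `CF-Connecting-IP` header and checks it against a Redis set over the REST API. The set name `ip-allowlist` and the deny-by-default policy are my own choices.

```ts
// Cloudflare Worker (module syntax) that only lets allow-listed IPs through (sketch).
// Assumes UPSTASH_REST_URL and UPSTASH_REST_TOKEN are configured as Worker secrets
// and that allowed IPs were added to a Redis set named "ip-allowlist".
export interface Env {
  UPSTASH_REST_URL: string;
  UPSTASH_REST_TOKEN: string;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const ip = request.headers.get("CF-Connecting-IP") ?? "unknown";

    // SISMEMBER returns 1 if the IP is in the set, 0 otherwise.
    const res = await fetch(`${env.UPSTASH_REST_URL}/sismember/ip-allowlist/${ip}`, {
      headers: { Authorization: `Bearer ${env.UPSTASH_REST_TOKEN}` },
    });
    const { result } = (await res.json()) as { result: number };

    if (result !== 1) {
      return new Response("Forbidden", { status: 403 });
    }
    return new Response("Welcome!", { status: 200 });
  },
};
```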
We are happy to announce our Multi-Zone Replication capability. When enabled, data is replicated to multiple availability zones. Multi-zone replication gives you high availability and better scalability.
Next.js is a very successful web framework that brings together server-side rendering (SSR) and static site generation (SSG). SSG speeds up your website thanks to CDN caching, while SSR helps you with SEO and dynamic data.
Server-side rendering is a great feature that helps you write full-stack applications. But if you are not careful, the performance of your Next.js website can easily suffer. In this blog post, I will explain how to leverage Redis to speed up your Next.js API calls. Before that, I will briefly mention a simpler way to improve your performance.
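To preview the idea, the sketch below shows a Next.js API route that serves a cached value from Redis when one exists and otherwise recomputes it and stores it with a TTL. The route name, key, TTL, and the `slowExternalCall` helper are illustrative assumptions, and it again relies on `UPSTASH_REST_URL` / `UPSTASH_REST_TOKEN`.

```ts
// pages/api/expensive.ts - caching an expensive computation in Redis (sketch).
// Assumes a Node 18+ runtime (global fetch) and UPSTASH_REST_URL / UPSTASH_REST_TOKEN.
import type { NextApiRequest, NextApiResponse } from "next";

const redis = async (path: string) => {
  const res = await fetch(`${process.env.UPSTASH_REST_URL}/${path}`, {
    headers: { Authorization: `Bearer ${process.env.UPSTASH_REST_TOKEN}` },
  });
  return (await res.json()) as { result: string | null };
};

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // Serve from cache if present.
  const cached = await redis("get/expensive-result");
  if (cached.result !== null) {
    return res.status(200).json({ source: "cache", data: cached.result });
  }

  // Otherwise do the slow work, then cache the result for 60 seconds.
  const data = await slowExternalCall();
  await redis(`setex/expensive-result/60/${encodeURIComponent(data)}`);
  return res.status(200).json({ source: "origin", data });
}

// Placeholder for whatever expensive API call or query you want to cache.
async function slowExternalCall(): Promise<string> {
  return "some expensive payload";
}
```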
One of the best things about serverless is its ability to scale even under huge traffic spikes. Unfortunately, scaling is not free, either financially or technically. That's why developers need to control their applications' scalability. Here are the main reasons you will need a rate-limiting mechanism in your serverless application (a minimal limiter sketch follows the list):
1- Protect your resources: If you're providing a public API, traffic spikes can degrade the quality of the service and may lead to a service outage for all your users. You need to protect your system against such cascading failures as well as self-inflicted DDoS incidents. A bug in your application can trigger such problems: an internal process that retries an endpoint indefinitely on failure can easily exhaust your resources.
2- Manage user quotas: You may want to define quotas for your users to ensure fair use of your services. You may also need quotas if you offer your services in different pricing tiers.
3- Control the cost: There are many real-life examples of how an uncontrolled system can run up large bills. This is a real risk for serverless applications due to their highly scalable nature. Rate limiting will help you keep these costs under control.
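As promised above, here is a minimal sketch of one such mechanism: a fixed-window limiter built on Redis INCR and EXPIRE over the REST API. The key naming, window size, and limit are arbitrary illustrative choices, not the post's exact implementation.

```ts
// Fixed-window rate limiter: allow at most `limit` requests per `windowSeconds`
// for a given identifier (e.g. an IP or API key). Sketch over the Upstash REST API;
// assumes UPSTASH_REST_URL / UPSTASH_REST_TOKEN and a Node 18+ runtime.
const restCall = async (path: string) => {
  const res = await fetch(`${process.env.UPSTASH_REST_URL}/${path}`, {
    headers: { Authorization: `Bearer ${process.env.UPSTASH_REST_TOKEN}` },
  });
  return (await res.json()) as { result: number };
};

export async function isAllowed(
  identifier: string,
  limit = 100,
  windowSeconds = 60
): Promise<boolean> {
  // One counter per identifier per time window.
  const window = Math.floor(Date.now() / (windowSeconds * 1000));
  const key = `ratelimit:${identifier}:${window}`;

  const { result: count } = await restCall(`incr/${key}`);
  if (count === 1) {
    // First request in this window: make the counter expire with the window.
    await restCall(`expire/${key}/${windowSeconds}`);
  }
  return count <= limit;
}
```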
In this article, I will compare the latencies of three serverless databases, DynamoDB, FaunaDB, and Upstash (Redis), for a common web use case.
I inserted 7001 NY Times articles into each database. The articles were collected from the New York Times Archive API (all articles from January 2021). I assigned a random score to each article. On each page request, I query the top 10 articles in the World section from each database.
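In the Redis case, this query maps naturally onto a sorted set. The sketch below shows the shape of that read; the key name `articles:World` is my own naming and not necessarily what the benchmark uses.

```ts
// Fetching the 10 highest-scored "World" articles from a Redis sorted set (sketch).
// Assumes articles were added with ZADD articles:World <score> <articleJson>,
// plus UPSTASH_REST_URL / UPSTASH_REST_TOKEN and a Node 18+ runtime.
export async function topWorldArticles(): Promise<string[]> {
  const res = await fetch(
    `${process.env.UPSTASH_REST_URL}/zrevrange/articles:World/0/9`,
    { headers: { Authorization: `Bearer ${process.env.UPSTASH_REST_TOKEN}` } }
  );
  const { result } = (await res.json()) as { result: string[] };
  return result; // the 10 members with the highest scores, best first
}
```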
When designing a database for serverless, the biggest challenge in our minds was building an infrastructure that supports per-request pricing in a profitable way. We believe Upstash has achieved this. After we launched the product, we saw that there was another major challenge: database connections!
As you know, serverless functions scale from zero to infinity. This means that when your functions receive a lot of traffic, the cloud provider creates new containers (Lambda instances) in parallel and scales out your backend. If you create a new database connection inside the function, you can rapidly reach the connection limit of your database.
If you try to cache the connection outside the Lambda handler, another problem occurs. When AWS freezes your Lambda function, it does not close the connection, so you may end up with many idle/zombie connections that still count against your connection limits.
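To make that concrete, here is a hedged sketch of the "cache the connection outside the handler" pattern, using ioredis purely as an example client. It avoids opening a connection on every invocation, but when the container is frozen the connection is left open on the server side, which is exactly how idle/zombie connections pile up.

```ts
// Reusing a Redis connection across Lambda invocations (sketch, ioredis as an example client).
// This avoids opening a new connection on every call, but when AWS freezes the container
// the connection stays open on the server, which is how idle/zombie connections accumulate.
import Redis from "ioredis";

// Created once per container, outside the handler, and reused while the container stays warm.
// REDIS_URL is an assumed environment variable holding the connection string.
const redis = new Redis(process.env.REDIS_URL!);

export const handler = async () => {
  const value = await redis.get("counter");
  return { statusCode: 200, body: JSON.stringify({ counter: value }) };
};
```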