Table of contents
- Server Vs. Serverless: Selecting the right architecture as per requirement
- Introduction to Serverless Computing in AWS
- AWS Lambda and its Invocation Models
- AWS Lambda Function Permissions
- Authoring a Lambda Function From Scratch
- Configurations in Lambda Functions
- AWS Lambda Integrations with Other Native AWS Services
- Serverless Architecture diagram explanation with a detailed component breakdown
- References
- Conclusion
Server Vs. Serverless: Selecting the right architecture as per requirement
In the dynamic realm of the digital landscape, organizations require a flexible, agile, and value-centric infrastructure that encompasses attributes such as adaptability, responsiveness, scalability, and cost efficiency. This has led to the rise of two prominent cloud computing models: Server and Serverless. While both offer immense potential, choosing the right model for your specific needs requires careful consideration.
This article first examines the core differences between server and serverless architectures to guide you toward the optimal solution for your project. It then looks at serverless computing on the AWS cloud in more depth: the key service behind it, its execution environment, invocation models, the permissions associated with it, authoring and configuring a Lambda function, and its native integrations with other AWS services. Finally, we walk through a serverless architecture with a detailed breakdown of each component.
To set the ground, let's first look at the advantages and disadvantages of both models (server and serverless). Later, we will see some serverless services that can be used to build dynamic solutions such as event-driven architectures (which use events to initiate actions and communication between decoupled services), API backends, microservices architectures, serverless batch processing, IoT applications, and mobile backends.
In this blog, we focus mainly on serverless computing, but which of the two models you should opt for depends entirely on your project or task requirements, as each has its own pros and cons.
Image source: https://hackernoon.com/what-is-serverless-architecture-what-are-its-pros-and-cons-cc4b804022e9
Server model (EC2): Advantages and Disadvantages
| Advantages of Server Model (EC2) | Disadvantages of Server Model (EC2) |
| --- | --- |
| Full control over the environment, including choice of OS, libraries, and server configurations. Servers (EC2) provide more flexibility. | Incurs management overhead: manual management of servers, including provisioning, scaling, and maintenance. |
| Well-suited for applications with legacy dependencies or specific hardware requirements. | Scaling can be more challenging, requiring manual configuration and potentially resulting in over- or under-provisioning. |
| Suitable for long-running applications with constant high loads and predictable traffic. | Can be costly for applications with variable workloads. |
| Use cases: web servers, databases, batch jobs, or any long-running applications. | Longer time to market, as developers must focus on infrastructure management in addition to application deployment. |
Serverless model (Lambda): Advantages and Disadvantages
| Advantages of Serverless Model (Lambda) | Disadvantages of Serverless Model (Lambda) |
| --- | --- |
| Abstracts away server management tasks, allowing developers to focus solely on writing code. | May experience latency during cold starts (initial function invocations) while the runtime environment is initialized. |
| Automatic scaling based on demand, providing optimal resource utilization and cost efficiency. | Limited control over infrastructure details such as the runtime environment, including choice of OS, libraries, and configurations. |
| Pay-as-you-go pricing: you are billed only for compute resources consumed during function invocation. | The stateless nature of serverless functions can be a limitation for applications requiring persistent state. |
| Shorter time to market, since developers can focus on application logic without worrying about infrastructure details. | Possible vendor lock-in, as serverless offerings are specific to cloud providers. |
| Use cases: event-driven applications, microservices architectures, and short-running tasks. | Serverless platforms impose concurrency limits on simultaneous function executions, which can cause performance issues under rapidly changing workloads or sudden traffic spikes. |
Introduction to Serverless Computing in AWS
In the context of AWS, serverless refers to a cloud computing model in which AWS handles all infrastructure management tasks, such as server provisioning, scaling, and maintenance. These tasks are abstracted from developers so they can focus on writing code in the form of serverless functions that respond to events on AWS infrastructure. Developers configure triggers in the execution environment to run that code, and AWS provides built-in code monitoring and logging via Amazon CloudWatch.
In AWS, the execution environment for running serverless code is AWS Lambda, and the code we run on Lambda is called a Lambda function. Lambda is natively integrated with various other AWS services such as S3, SNS, SQS, EventBridge, API Gateway, Step Functions, and DynamoDB. These integrations enable a Lambda function to be triggered automatically in response to specific events or changes occurring within the associated service.
The AWS serverless platform includes several fully managed services that are tightly integrated with AWS Lambda and well-suited for serverless applications. Developer tools, including the AWS Serverless Application Model (AWS SAM), help simplify the deployment of your Lambda functions and serverless applications.
This approach allows for rapid application development and deployment, cost efficiency, and high availability.
Below is a basic example of what the Serverless architecture looks like and its integrations with different services.
Let's talk more about AWS Lambda, its invocation model, and the permissions associated with it in the next section.
AWS Lambda and its Invocation Models
AWS Lambda, an AWS serverless computing service, necessitates the creation of Lambda functions - self-contained applications written in supported languages and runtimes. These functions are then uploaded to AWS Lambda, where they are executed with efficiency and flexibility, harnessing a high-availability computing infrastructure.
The primary purpose of Lambda Functions is to handle events triggered by specific actions or changes within the AWS ecosystem. Upon being invoked, Lambda Functions can execute additional actions or trigger further events.
Lambda functions are designed to be stateless, and Lambda can rapidly launch as many copies of a function as needed to scale to the rate of incoming events. Even though we don't manage the underlying infrastructure, we still need to configure execution parameters, including memory, timeout, and concurrency.
There are different invocation patterns available for running AWS Lambda functions. Let's delve into each of these methods below.
Synchronous Invocation -
Here, Lambda waits for the function to complete execution and return a response, which is ideal for short-running tasks where we need an immediate response. With this model, there are no built-in retries; you must manage your retry strategy within your application code.
In Lambda, the execution duration of a function is limited to 15 minutes. If dealing with a workload/requirement that requires more than 15 minutes for execution, it's necessary to explore other services for handling that requirement.
Associated services with Synchronous Invocation -
API Gateway: used for HTTP requests that trigger lambda functions.
Amazon Cognito: used for user authentication and authorization.
Amazon Lex: used for building chatbots.
Amazon Alexa: used for integrating lambda with Alexa skills.
AWS Step Functions: used for building serverless workflows.
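As a minimal sketch of the synchronous model using boto3 (the function name and payload below are hypothetical), `invoke()` with `InvocationType='RequestResponse'` makes the caller block until the function returns its result:

```python
import json


def build_sync_invoke_args(function_name, payload):
    """Build kwargs for boto3's lambda_client.invoke() in synchronous mode.

    'RequestResponse' tells Lambda to run the function and return its
    result in the same call, so the caller blocks until completion.
    """
    return {
        "FunctionName": function_name,
        "InvocationType": "RequestResponse",
        "Payload": json.dumps(payload).encode("utf-8"),
    }


def invoke_sync(function_name, payload):
    """Invoke a Lambda function and wait for its response.

    Requires boto3 and valid AWS credentials at runtime.
    """
    import boto3

    client = boto3.client("lambda")
    response = client.invoke(**build_sync_invoke_args(function_name, payload))
    return json.loads(response["Payload"].read())


# The caller blocks here until the (hypothetical) function finishes:
# result = invoke_sync("my-function", {"orderId": 42})
```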
Asynchronous Invocation -
Here, events are queued and the requester doesn't wait for the function to complete, which makes this model appropriate when the client doesn't need an immediate response. With the asynchronous model, you can use destinations to send records of asynchronous invocations to other services. This model provides 2 built-in retries by default.
Associated services with Asynchronous Invocation -
Amazon SQS: used for queuing messages that trigger Lambda functions.
Amazon SNS: used for publishing notifications that trigger Lambda functions.
Amazon CloudWatch Events: used for scheduling events that trigger Lambda functions.
Amazon S3: used for triggering Lambda functions based on file changes in S3 buckets.
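To sketch the asynchronous model with boto3 (function names here are hypothetical), `InvocationType='Event'` queues the event and returns immediately, and `put_function_event_invoke_config` makes the default retry behavior explicit:

```python
import json


def build_async_invoke_args(function_name, payload):
    """Build kwargs for lambda_client.invoke() in asynchronous mode.

    'Event' tells Lambda to queue the event and return immediately
    (HTTP 202); the caller does not wait for the function to finish.
    """
    return {
        "FunctionName": function_name,
        "InvocationType": "Event",
        "Payload": json.dumps(payload).encode("utf-8"),
    }


def build_retry_config(function_name, max_retries=2):
    """Build kwargs for lambda_client.put_function_event_invoke_config().

    Asynchronous invocations retry up to 2 times by default; this
    setting makes the retry count (0-2) explicit.
    """
    return {
        "FunctionName": function_name,
        "MaximumRetryAttempts": max_retries,
    }
```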
Event Source Mapping (Stream-Polling) Invocation -
In this model, Lambda polls a stream, such as a DynamoDB stream or Kinesis stream, to identify new records or events. Instead of depending on a push-based trigger, Lambda regularly reads the stream to detect any updates.
Example - A Lambda function can be configured to poll a DynamoDB stream at regular intervals to check for new events. If there are new records in the stream, the Lambda function processes them.
Associated services with Event Source Mapping Invocation -
Amazon DynamoDB: Lambda functions can promptly respond to modifications within a DynamoDB table by leveraging DynamoDB Streams.
Amazon Kinesis: Lambda functions possess the capability to process streaming data originating from Kinesis streams.
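A stream-polling setup is configured through an event source mapping. The sketch below (stream ARN and function name are placeholders) builds the arguments for boto3's `create_event_source_mapping` call:

```python
def build_event_source_mapping(stream_arn, function_name, batch_size=100):
    """Build kwargs for lambda_client.create_event_source_mapping().

    Lambda's poller reads the stream on your behalf and invokes the
    function with batches of records; StartingPosition controls where
    reading begins ('LATEST' or 'TRIM_HORIZON').
    """
    return {
        "EventSourceArn": stream_arn,
        "FunctionName": function_name,
        "BatchSize": batch_size,
        "StartingPosition": "LATEST",
    }


# Example (placeholder ARN):
mapping = build_event_source_mapping(
    "arn:aws:kinesis:us-east-1:123456789012:stream/orders",
    "process-orders",
)
```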
AWS Lambda Function Permissions
Let's now go through Lambda permissions. We can use AWS IAM to manage access to Lambda functions. There are 2 kinds of Lambda function permissions: the resource-based policy (permissions for other services to invoke the Lambda function) and the Lambda execution role (the permissions Lambda needs to call other services). Let's look at both of them in a bit of detail.
IAM Resource-based policy -
We use a resource-based policy when an AWS service invokes the Lambda function synchronously or asynchronously; simply put, these are permissions to invoke the Lambda function on your behalf.
Resource-based policies let you grant usage permission to other AWS accounts or organizations on a per-resource basis.
For example -
In synchronous invocation - if we have an API Gateway in place and have associated a Lambda function with it, we must add a resource-based policy so the Lambda function can be invoked from API Gateway.
In asynchronous invocation - S3 object actions, such as the creation or deletion of objects inside an S3 bucket, trigger the Lambda function.
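As a sketch of granting S3 that invoke permission with boto3 (statement ID, bucket ARN, and account ID below are hypothetical), `add_permission` appends a statement to the function's resource-based policy:

```python
def build_s3_invoke_permission(function_name, bucket_arn, account_id):
    """Build kwargs for lambda_client.add_permission().

    This appends a statement to the function's resource-based policy
    so the S3 service principal can invoke it; SourceArn and
    SourceAccount restrict which bucket is allowed to do so.
    """
    return {
        "FunctionName": function_name,
        "StatementId": "AllowS3Invoke",
        "Action": "lambda:InvokeFunction",
        "Principal": "s3.amazonaws.com",
        "SourceArn": bucket_arn,
        "SourceAccount": account_id,
    }


# Example with placeholder identifiers:
perm = build_s3_invoke_permission(
    "thumbnail-generator",
    "arn:aws:s3:::my-upload-bucket",
    "123456789012",
)
```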
Lambda IAM Execution Role -
This role grants the lambda function permission to access AWS services and resources. For example - you might create an execution role that has permission to send logs to Amazon Cloudwatch and perform some S3 bucket operations.
You provide an execution role when you create a function. When you invoke your function, Lambda automatically provides your function with temporary credentials by assuming this role. You don't have to call sts:AssumeRole in your function code. For Lambda to properly assume your execution role, the role's trust policy must specify the Lambda service principal (lambda.amazonaws.com) as a trusted service.
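A minimal sketch of that trust policy and the corresponding `iam_client.create_role()` arguments, built with boto3 conventions (the role name is hypothetical):

```python
import json

# Trust policy that lets the Lambda service principal assume the
# execution role via sts:AssumeRole.
LAMBDA_TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}


def build_create_role_args(role_name):
    """Build kwargs for iam_client.create_role() for a Lambda execution role."""
    return {
        "RoleName": role_name,
        "AssumeRolePolicyDocument": json.dumps(LAMBDA_TRUST_POLICY),
    }
```

After creating the role, you would still attach permission policies (for example, CloudWatch Logs and S3 access) before using it in a function.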
Authoring a Lambda Function From Scratch
By now, we know we can write our code using the supported AWS SDKs, and that AWS Lambda supports multiple programming languages such as Python, Node.js, Java, Go, and C#, letting us write and invoke automation code without worrying about infrastructure.
Different programming languages interact with AWS in different ways: in Python we import boto3, in JavaScript/Node.js we import aws-sdk, and so on, varying with the choice of programming language.
Here's a basic illustration of how to use AWS Lambda. In this example, I'm writing a Lambda function that uses the Boto3 library to interact with the AWS S3 service. Its purpose is to retrieve the names of the S3 buckets in the account and log them to CloudWatch Logs. You can customize the function to perform various tasks based on your requirements; depending on the complexity of the function, you then configure it as described in the next section.
import boto3
import json

def lambda_handler(event, context):
    # Create a Boto3 client for interacting with AWS services
    s3_client = boto3.client('s3')

    # Retrieve the list of S3 buckets in the account
    response = s3_client.list_buckets()

    # Extract bucket names from the response
    bucket_names = [bucket['Name'] for bucket in response['Buckets']]

    # Print the list of bucket names to CloudWatch Logs
    print("S3 Buckets:", bucket_names)

    # Return a response
    return {
        'statusCode': 200,
        'body': json.dumps('Lambda function executed successfully!')
    }
If you want to check out some other lambda functions, please navigate to my other blog - AWS Lambda Function Examples
Configurations in Lambda Functions
When building and testing a function, you must specify three primary configuration settings: memory, timeout, and concurrency. These settings are important in defining how each function performs. As you monitor your functions, you must adjust the settings to optimize costs and ensure the desired customer experience with your application.
Lambda Function Memory
You can allocate up to 10 GB of memory to a Lambda function. Lambda allocates CPU and other resources linearly in proportion to the amount of memory configured. Any increase in memory size triggers an equivalent increase in CPU available to your function.
Lambda Function Timeout
The duration for which an AWS Lambda function can execute before being terminated is determined by its timeout value. Presently, the maximum permissible timeout for a Lambda function stands at 15 minutes. This constraint implies that any individual invocation of a Lambda function is restricted to a maximum runtime of 15 minutes.
Lambda Function Concurrency
This is the number of in-flight requests that your AWS Lambda function is handling at the same time. For each concurrent request, Lambda provisions a separate instance of your execution environment. When the function code finishes running, it can handle another request. If the function is invoked again while the first request is still being processed, another instance is allocated.
As your functions receive more requests, Lambda automatically handles scaling the number of execution environments until you reach your account's concurrency limit. By default, Lambda provides your account with a total concurrency limit of 1,000 concurrent executions across all functions in an AWS Region.
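As a rough rule of thumb (Little's law), the steady-state concurrency a function needs is its request rate multiplied by its average duration. A small sketch of that estimate:

```python
import math


def estimated_concurrency(requests_per_second, avg_duration_seconds):
    """Rough steady-state concurrency: arrival rate x average duration
    (Little's law). Round up, since Lambda allocates whole execution
    environments.
    """
    return math.ceil(requests_per_second * avg_duration_seconds)


# e.g. 200 requests/second at an average of 0.5 s per invocation needs
# roughly 100 concurrent execution environments -- a tenth of the
# default account limit of 1,000.
```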
There are 3 types of concurrency associated with Lambda Function.
Unreserved concurrency
The amount of concurrency that is not allocated to any specific set of functions. A minimum of 100 units of unreserved concurrency is always maintained, which allows functions without reserved concurrency to still run. If you could allocate all your concurrency to one or two functions, no concurrency would be left for any other function; keeping at least 100 available allows all your functions to run when they are invoked.
Reserved concurrency
Guarantees the maximum number of concurrent instances for the function. When a function has reserved concurrency, no other function can use that concurrency. No charge is incurred for configuring reserved concurrency for a function.
Provisioned concurrency
Initializes a requested number of runtime environments so that they are prepared to respond immediately to your function's invocations. This option is used when you need high performance and low latency.
You pay for the amount of provisioned concurrency that you configure and for the period that you have it configured.
AWS Lambda Integrations with Other Native AWS Services
Let's delve into the seamless integration of AWS Lambda with different native services through triggers with a brief use case.
Amazon S3 (Simple Storage Service)
Trigger: Responds to events within the S3 environment, such as object creation, deletion, or updates.
Use Case: Automate the processing of files in Amazon S3, streamlining activities like generating thumbnails or initializing data workflows prompted by object events.

Amazon SNS (Simple Notification Service)
Trigger: Activated by messages published to SNS topics.
Use Case: Dynamically handle notifications, dispatch alerts, or kick off workflows based on events communicated through SNS topics.

Amazon SQS (Simple Queue Service)
Trigger: Responds to messages present in SQS queues.
Use Case: Implement serverless processing for messages within a queue, enabling scenarios such as order processing or executing background jobs.

Amazon EventBridge
Trigger: Events from various AWS services and custom sources.
Use Case: Build event-driven architectures, connect different services, and respond to changes in the AWS environment.

Amazon API Gateway
Trigger: HTTP requests via API Gateway.
Use Case: Build serverless APIs, handle RESTful requests, and integrate with backend services.

AWS Step Functions
Trigger: State transitions in Step Functions.
Use Case: Orchestrate and coordinate workflows by defining state machines. Lambda functions can be steps in these state machines.

Amazon DynamoDB
Trigger: DynamoDB Streams for changes in the table.
Use Case: React to changes in DynamoDB tables, update indexes, or trigger downstream processing.
By leveraging these native integrations, AWS Lambda becomes a powerful tool for building serverless applications that respond dynamically to events in the AWS ecosystem and beyond.
Serverless Architecture diagram explanation with a detailed component breakdown
Below is a basic serverless architecture and what it looks like as a whole.
Let's break down each of the components in this serverless architecture with detailed steps for each component:
Frontend Component:
Technology: React, Angular, or other frontend frameworks.
Functionality: User interface and interaction with the serverless backend.
Website:
Components:
S3: Houses static content for the website, encompassing files such as HTML, CSS, and JavaScript files.
API Gateway: Handles incoming HTTP requests and routes them to the relevant Lambda function.
GetProducts Lambda: Fetches product details from DynamoDB and sends information back to the front end.
Steps:
The user interacts with the React application on the front end.
Frontend sends an API request to the API Gateway.
API Gateway routes the request to the GetProducts Lambda function.
GetProducts Lambda fetches product data from DynamoDB.
GetProducts Lambda returns the product data to the API Gateway.
API Gateway sends the product data back to the front end.
Frontend displays the retrieved product information to the user.
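The GetProducts step above might look roughly like the sketch below. Everything here is illustrative: the table name, response shape, and injectable `table` parameter are assumptions, with the dependency injected so the handler can be exercised without AWS access.

```python
import json


def get_products_handler(event, context, table=None):
    """Sketch of a GetProducts Lambda behind API Gateway.

    'table' is a DynamoDB Table resource; injecting it keeps the
    handler testable with a stub. The table name is hypothetical.
    """
    if table is None:
        import boto3  # real AWS access only when no stub is supplied

        table = boto3.resource("dynamodb").Table("Products")

    items = table.scan().get("Items", [])

    # API Gateway (Lambda proxy integration) expects this response shape.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(items),
    }
```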
Watcher:
Components:
CloudWatch Events (Eventbridge): Triggers the Watcher Lambda function periodically or based on specific events.
Watcher Lambda: Watches for certain events to happen and then invokes the application logic written in the backend, for example performing tasks like data processing and generating output files.
Steps:
CloudWatch Events (Eventbridge) triggers the Watcher Lambda function at a specific interval or based on an event.
Watcher Lambda retrieves data from DynamoDB or other sources.
Watcher Lambda processes the data and generates an output file.
Watcher Lambda uploads the output file to a specific location, such as an S3 bucket.
Notifier:
Components:
DynamoDB Streams: Records modifications to DynamoDB tables.
Notifier Lambda: Responds to changes in DynamoDB, triggering email notifications.
SNS: Publishes email notifications to subscribers.
Steps:
A change occurs in a DynamoDB table.
DynamoDB Streams captures the change and sends an event to the Notifier Lambda function.
Notifier Lambda processes the event and retrieves relevant information from DynamoDB.
Notifier Lambda sends an email notification through SNS.
Subscribers receive the email notification.
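The Notifier step of parsing a DynamoDB Streams event can be sketched as below. The flattening of DynamoDB's typed attributes is a simplification for illustration, not a full deserializer:

```python
def extract_changed_items(stream_event):
    """Pull the new item images out of a DynamoDB Streams event so a
    notifier function can build a message body from them.

    Each record carries an eventName (INSERT/MODIFY/REMOVE) and, for
    inserts and modifies, a NewImage in DynamoDB's typed JSON format,
    e.g. {"price": {"N": "10"}}.
    """
    changes = []
    for record in stream_event.get("Records", []):
        new_image = record.get("dynamodb", {}).get("NewImage")
        if new_image is not None:
            # Flatten {"S": "value"}-style typed attributes to raw values
            item = {k: list(v.values())[0] for k, v in new_image.items()}
            changes.append((record["eventName"], item))
    return changes
```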
Integrations and Connections:
DynamoDB: All components interact with DynamoDB for data storage and retrieval.
API Gateway: Connects the front end to the GetProducts Lambda function.
CloudWatch Events (Eventbridge): Triggers the Watcher Lambda function periodically or based on specific events.
DynamoDB Streams: Notifies the Notifier Lambda about changes in DynamoDB tables.
SNS: Publishes email notifications to subscribers.
References
Configuring Destination for Event Source Mapping
AWS Lambda Event Source Mapping
AWS Lambda IAM Execution Role
AWS Lambda Integrations with Other Services
AWS Lambda Function Scaling (Concurrency)
AWS Serverless Application Model (SAM)
Conclusion
In conclusion, navigating the landscape of serverless and traditional server models requires a thoughtful consideration of their respective advantages and disadvantages. Choosing between server and serverless architectures hinges on factors like scalability, cost, and infrastructure management preferences. Delving into the server model, EC2 instances offer control and flexibility, but at the cost of manual infrastructure management. Conversely, the serverless model, exemplified by AWS Lambda, abstracts away infrastructure complexities, promoting agility and cost efficiency.
In our exploration, we uncovered the intricacies of serverless computing in AWS, understanding AWS Lambda's invocation models, permissions, and seamless integrations with native services. The exploration concluded by meticulously dissecting a serverless architecture diagram, revealing the interlinked components that drive contemporary, event-triggered applications. With the ongoing evolution of cloud computing, the choice between traditional server-based and serverless approaches emerges as a crucial factor in designing resilient, scalable, and economical solutions that align precisely with individual business requirements.
Thank you for dedicating time to read my blog! I trust you found it beneficial and insightful. If so, please show your appreciation with a like and consider subscribing to my newsletter for more content like this.
I'm continuously seeking ways to enhance my blog, so your comments or suggestions are always welcome.
Once again, thank you for your ongoing support!
Connect with me -
#aws #awscommunity #cloudcomputing #cloud