
Turbonomic Blog

Serverless Computing with AWS Lambda, Part 1

Posted by Steven Haines on Dec 20, 2018 2:15:12 PM

Serverless computing may be the hottest thing in cloud computing today, but what, exactly, is it? This two-part tutorial starts with an overview of serverless computing--from what it is, to why it's considered disruptive to traditional cloud computing, and how you might use it in Java-based programming.

Following the overview, you'll get a hands-on introduction to AWS Lambda, which many consider the premier Java-based solution for serverless computing today. In Part 1, you'll use AWS Lambda to build, deploy, and test your first Lambda function in Java. In Part 2, you'll integrate your Lambda function with DynamoDB, then use the AWS SDK to invoke Lambda functions in a Java application.

What is serverless computing?

Last year I was talking to a company intern about different architectural patterns and mentioned serverless architecture. He was quick to note that all applications require a server, and cannot run on thin air. The intern had a point, even if he was missing mine. Serverless computing is not a magical platform for running applications.

In fact, serverless computing simply means that you, the developer, do not have to deal with the server. A serverless computing platform like AWS Lambda allows you to build your code and deploy it without ever needing to configure or manage underlying servers. Your unit of deployment is your code; not the container that hosts the code, or the server that runs the code, but simply the code itself. From a productivity standpoint, there are obvious benefits to offloading the details of where code is stored and how the execution environment is managed. Serverless computing is also priced based on execution metrics, so there is a financial advantage, as well.

What does AWS Lambda cost?

At the time of this writing, AWS Lambda's pricing is based on the number of executions and execution duration:

  • Your first million executions per month are free, then you pay $0.20 per million executions thereafter ($0.0000002 per request).
  • Duration is computed from the time your code starts executing until it returns a result, rounded up to the nearest 100ms. The charge is based on the amount of RAM allocated to the function, at a rate of $0.00001667 per GB-second.

Pricing details and free tier allocations are slightly more complicated than this overview implies. Visit the AWS Lambda pricing page to walk through a few pricing scenarios.
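To make the arithmetic concrete, here is a small, hypothetical cost sketch in Java using the two rates quoted above. It deliberately ignores the free tier's duration allowance and assumes the quoted rates are still current, so treat it as an illustration rather than a billing calculator:

```java
// Hypothetical cost sketch using the rates quoted above; it ignores the
// free tier's GB-second allowance, so real bills may be lower.
class LambdaCostEstimate {
    static final double PER_REQUEST = 0.0000002;     // $0.20 per million requests
    static final double PER_GB_SECOND = 0.00001667;  // duration charge per GB-second

    // requests per month, billed duration per request in seconds (AWS rounds
    // up to the nearest 100ms), and allocated memory in GB
    static double monthlyCost(long requests, double billedSeconds, double memoryGb) {
        long billableRequests = Math.max(0, requests - 1_000_000); // first 1M free
        double requestCost = billableRequests * PER_REQUEST;
        double gbSeconds = requests * billedSeconds * memoryGb;
        return requestCost + gbSeconds * PER_GB_SECOND;
    }

    public static void main(String[] args) {
        // 5 million requests, billed at 100ms each, on a 128MB (0.125GB) function:
        System.out.printf("$%.2f%n", monthlyCost(5_000_000, 0.1, 0.125)); // prints $1.84
    }
}
```

Even at five million requests per month, this workload costs under two dollars, which is why the pay-per-execution model is so attractive for spiky or low-volume workloads.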

To get an idea for how serverless computing works, let's start with the serverless computing execution model, which is illustrated in Figure 1.


Figure 1. Serverless computing execution model


Here's the serverless execution model in a nutshell:

  1. A client makes a request to the serverless computing platform to execute a specific function.
  2. The serverless computing platform first checks to see if the function is running on any of its servers. If the function isn't already running, then the platform loads the function from a data store.
  3. The platform then deploys the function to one of its servers, which are preconfigured with an execution environment that can run the function.
  4. It executes the function and captures the result.
  5. It returns the result back to the client.

Sometimes serverless computing is called Function as a Service (FaaS), because the granularity of the code that you build is a function. The platform executes your function on its own server and orchestrates the process between function requests and function responses.
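The five steps above can be sketched as a toy model. This is an illustration of the request flow only, not of how AWS actually implements it:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy model of the execution flow above: functions live in a store, are
// "deployed" on the first request (a cold start), then executed directly.
class ToyServerlessPlatform {
    private final Map<String, Function<String, String>> functionStore = new HashMap<>();
    private final Map<String, Function<String, String>> running = new HashMap<>();

    void upload(String name, Function<String, String> fn) {
        functionStore.put(name, fn);
    }

    String invoke(String name, String input) {
        // Steps 2-3: if the function isn't already running, load it from the store.
        Function<String, String> fn = running.computeIfAbsent(name, functionStore::get);
        // Steps 4-5: execute the function and return the result to the client.
        return fn.apply(input);
    }
}
```

Notice that the second invocation of the same function skips the load-and-deploy step entirely, which foreshadows the cold-start behavior we'll observe when testing our real Lambda function later.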

Nanoservices, scalability, and price

Three things really matter about serverless computing: its nanoservice architecture; the fact that it's practically infinitely scalable; and the pricing model associated with that near infinite scalability. We'll dig into each of those factors.


You've heard of microservices, and you probably know about 12-factor applications, but serverless functions take the paradigm of breaking a component down to its constituent parts to a whole new level. The term "nanoservices" is not an industry-recognized term, but the idea is simple: each nanoservice should implement a single action or responsibility. For example, if you wanted to create a widget, the act of creation would be its own nanoservice; if you wanted to retrieve a widget, the act of retrieval would also be a nanoservice; and if you wanted to place an order for a widget, that order would be yet another nanoservice.
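In code, the widget example might break down into one handler per action. This is a hypothetical sketch with no AWS dependencies, using a local Handler interface that mirrors the shape of a Lambda request handler:

```java
// Hypothetical sketch: each nanoservice implements exactly one action.
interface Handler<I, O> {
    O handle(I input);
}

class CreateWidgetHandler implements Handler<String, String> {
    public String handle(String id) { return "created:" + id; }
}

class RetrieveWidgetHandler implements Handler<String, String> {
    public String handle(String id) { return "widget:" + id; } // would load from a store
}

class OrderWidgetHandler implements Handler<String, String> {
    public String handle(String id) { return "order-placed:" + id; }
}
```

Each class would be packaged and deployed as its own function, so creating, retrieving, and ordering widgets can scale, fail, and be versioned independently.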

A nanoservices architecture allows you to define your application at a very fine-grained level. Similar to test-driven development (which helps you avoid unwanted side-effects by writing your code at the level of individual tests), a nanoservices architecture encourages defining your application in terms of very fine-grained and specific functions. This approach increases clarity about what you're building and reduces unwanted side-effects from new code.

Microservices vs Nanoservices

The microservices approach encourages us to break an application down into a collection of services that each accomplish a specific task. The challenge is that no one has really quantified the scope of a microservice. As a result, we end up defining microservices as a collection of related services, all interacting with the same data model. Conceptually, if you have low-level functionality interacting with a given data model, then that functionality should go into one of the related services. High-level interactions should make calls to the service rather than querying the database directly.

There is an ongoing debate in serverless computing about whether to build Lambda functions at the level of microservices or nanoservices. The good news is that you can pretty easily build your functions at either granularity, but a microservices strategy will require a bit of extra routing logic in your request handler.
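The "extra routing logic" for a microservice-granularity function might look like this hypothetical sketch, in which a single entry point dispatches several widget actions by name (plain Java 8, no AWS dependencies):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch: one microservice-granularity function routing
// several widget actions through a single entry point.
class WidgetServiceRouter {
    private final Map<String, Function<String, String>> routes = new HashMap<>();

    WidgetServiceRouter() {
        routes.put("create", id -> "created:" + id);
        routes.put("get", id -> "widget:" + id);
        routes.put("order", id -> "order-placed:" + id);
    }

    String handle(String action, String id) {
        Function<String, String> route = routes.get(action);
        if (route == null) {
            throw new IllegalArgumentException("Unknown action: " + action);
        }
        return route.apply(id);
    }
}
```

The trade-off is fewer deployments in exchange for this dispatch table: every new action means editing and redeploying the one shared function, rather than adding an independent nanoservice.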

From a design perspective, serverless applications should be very well-defined and clean. From a deployment perspective you will need to manage significantly more deployments, but you will also have the ability to deploy new versions of your functions individually, without impacting other functions. Serverless computing is especially well suited to development in large teams, where it can help make the development process easier and the code less error-prone.


In addition to introducing a new architectural paradigm, serverless computing platforms provide practically infinite scalability. I say "practically" because there is no such thing as truly infinite scalability. For all practical purposes, however, serverless computing providers like Amazon can handle more load than you could possibly throw at them. If you were to manage scaling up your own servers (or cloud-based virtual machines) to meet increased demand, you would need to monitor usage, identify when to start more servers, and add more servers to your cluster at the right time. Likewise, when demand decreased you would need to manually scale down. With serverless computing, you tell your serverless computing platform the maximum number of simultaneous function requests you want to run and the platform does the scaling for you.


Finally, the serverless computing pricing model allows you to scale your cloud bill based on usage. When you have light usage, your bill will be low (or nil if you stay in the free range). Of course, your bill will increase with usage, but hopefully you will also have new revenue to support your higher cloud bill. For contrast, if you were to manage your own servers, you would have to pay a base cost to run the minimum number of servers required. As usage increased, you would scale up in increments of entire servers, rather than increments of individual function calls. The serverless computing pricing model is directly proportional to your usage.

AWS Lambda for serverless computing

AWS Lambda is a serverless computing platform implemented on top of Amazon Web Services platforms like EC2 and S3. AWS Lambda encrypts and stores your code in S3. When a function is requested to run, it creates a "container" using your runtime specifications, deploys it to one of the EC2 instances in its compute farm, and executes that function. The process is shown in Figure 2.


Figure 2. Execution process in AWS Lambda


When you create a Lambda function, you configure it in AWS Lambda, specifying things like the runtime environment (we'll use Java 8 for this article), how much memory to allocate to it, identity and access management roles, and the method to execute. AWS Lambda uses your configuration to setup a container and deploy the container to an EC2 instance. It then executes the method that you've specified, in the order of package, class, and method.

At the time of this writing, you can build Lambda functions in Node.js, Java, Python, and, most recently, C#. For the purposes of this article we will use Java.

What is a Lambda function?

When you write code designed to run in AWS Lambda, you are writing functions. The term functions comes from functional programming, which originated in lambda calculus. The basic idea is to compose an application as a collection of functions, which are methods that accept arguments, compute a result, and have no unwanted side-effects. Functional programming takes a mathematical approach to writing code that can be proven to be correct. While it's good to keep functional programming in mind when you are writing code for AWS Lambda, all you really need to understand is that the function is a single-method entry-point that accepts an input object and returns an output object.

Serverless execution modes

While Lambda functions can run synchronously, as described above, they can also run asynchronously and in response to events. For example, you could configure a Lambda to run whenever a file was uploaded to an S3 bucket. This configuration is sometimes used for image or video processing: when a new image is uploaded to an S3 bucket, a Lambda function is invoked with a reference to the image to process it.

I worked with a very large company that leveraged this solution for photographers covering a marathon. The photographers were on the course taking photographs. Once their memory cards were full, they loaded the images onto a laptop and uploaded the files to S3. As images were uploaded, Lambda functions were executed to resize, watermark, and add a reference for each image to its runner in the database.

All of this would take a lot of work to accomplish manually, but in this case the work not only processed faster because of AWS Lambda's horizontal scalability, but also seamlessly scaled up and back down, thus optimizing the company's cloud bill.

In addition to responding to files uploaded to S3, lambdas can be triggered by other sources, such as records being inserted into a DynamoDB database and analytic information streaming from Amazon Kinesis. We'll look at an example featuring DynamoDB in Part 2.

AWS Lambda functions in Java

Now that you know a little bit about serverless computing and AWS Lambda, I'll walk you through building an AWS Lambda function in Java.

Implementing Lambda functions

You can write a Lambda function in one of two ways:

  • The function can receive an input stream from the client and write its result to an output stream that goes back to the client.
  • The function can use a predefined interface, in which case AWS Lambda will automatically deserialize the input stream to an object, pass it to your function, and serialize your function's response before returning it to the client.

The easiest way to implement an AWS Lambda function is to use a predefined interface. For Java, you first need to include the aws-lambda-java-core library in your project. Using Maven, the dependency looks like this (the version number may differ; use the latest release):

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-lambda-java-core</artifactId>
    <version>1.1.0</version>
</dependency>

Next, have your class implement the following interface:

Listing 1. RequestHandler.java

public interface RequestHandler<I, O> {
    /**
     * Handles a Lambda function request
     * @param input The Lambda function input
     * @param context The Lambda execution environment context object.
     * @return The Lambda function output
     */
    public O handleRequest(I input, Context context);
}

The RequestHandler interface defines a single method: handleRequest(), which is passed an input object and a Context object, and returns an output object. For example, if you were to define a Request class and a Response class, you could implement your lambda as follows:

public class MyHandler implements RequestHandler<Request, Response> {
    public Response handleRequest(Request request, Context context) {
        // Process the request and build the response here
        return new Response();
    }
}

Alternatively, if you wanted to bypass the predefined interface, you could manually handle the InputStream and OutputStream yourself, by implementing a method with the following signature:

public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context)
        throws IOException {
    // Read the raw request from inputStream and write your response to outputStream
}

The Context object provides information about your function and the environment in which it is running, such as the function name, its memory limit, its logger, and the amount of time remaining, in milliseconds, that the function has to complete before AWS Lambda kills it.

With that overview out of the way, we'll spend the remainder of Part 1 building a simple Lambda function that returns a widget. For the purposes of our example application, the user requests a widget by ID (a String), which we'll wrap into a WidgetRequest object. The function will then return a Widget object as its response.

Building a Lambda function

Listing 2 shows the source code for Widget, which is a POJO with an id and name:

Listing 2. Widget.java

package com.javaworld.awslambda.widget.model;

public class Widget {
    private String id;
    private String name;

    public Widget() {
    }

    public Widget(String id, String name) {
        this.id = id;
        this.name = name;
    }

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

Listing 3 shows the source code for a WidgetRequest, which is a POJO that contains an id:

Listing 3. WidgetRequest.java

package com.javaworld.awslambda.widget.model;

public class WidgetRequest {
    private String id;

    public WidgetRequest() {
    }

    public WidgetRequest(String id) {
        this.id = id;
    }

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }
}

Next, we'll build our lambda request handler. Listing 4 shows the source code for the GetWidgetHandler class:

Listing 4. GetWidgetHandler.java

package com.javaworld.awslambda.widget.handlers;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.javaworld.awslambda.widget.model.Widget;
import com.javaworld.awslambda.widget.model.WidgetRequest;

public class GetWidgetHandler implements RequestHandler<WidgetRequest, Widget> {
    public Widget handleRequest(WidgetRequest widgetRequest, Context context) {
        return new Widget(widgetRequest.getId(), "My Widget " + widgetRequest.getId());
    }
}

The GetWidgetHandler class implements the RequestHandler interface, accepting a WidgetRequest and returning a Widget. The WidgetRequest includes an id parameter to indicate which Widget is being requested. For the purposes of this example, however, we won't load the Widget from a database. Instead, we'll build and return a new Widget instance on the fly, with the specified ID and the name "My Widget" followed by that ID.

Listing 5 shows the contents of the Maven POM file that can build a JAR file containing our Lambda function.

Listing 5. pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.javaworld.awslambda</groupId>
    <artifactId>aws-lambda-java</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-lambda-java-core</artifactId>
            <version>1.1.0</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>


The POM file includes the aws-lambda-java-core dependency and builds against Java 8. The maven-shade-plugin packages all of our dependent JAR files inside of our JAR file so that it can run standalone. (See the AWS Lambda documentation to learn more about creating a standalone JAR file with maven-shade-plugin and using it with AWS Lambda.)

To run this build, execute the following command:

mvn clean install

The command creates a file named aws-lambda-java-1.0-SNAPSHOT.jar in your target directory. We'll upload this file to AWS Lambda in the next section.

Creating a Lambda function in the AWS console

Now that we have a Lambda function written and packaged into a standalone JAR file, let's set it up in AWS Lambda. Before proceeding with this step, you'll need to setup a free AWS account. Go ahead and do that now.

Once you have your account, login and open the AWS console. Click on Services, then choose Lambda in the Compute section, as shown in Figure 3.


Figure 3. Accessing the Lambda page in AWS Lambda


Click the Create a Lambda function button and select Blank Function, as shown in Figure 4.


Figure 4. Creating a blank function


Configuring triggers

At the time of this writing, there are no blueprints for creating Java Lambda functions, so you'll need to start the function from scratch. The first screen that you will see after selecting a blank function is the Configure Triggers page, shown in Figure 5.


Figure 5. Configuring triggers


You will setup triggers to instruct AWS to call your Lambda function when something occurs. If you click on the rounded rectangle on the left side of this page, you'll see the types of things that can trigger Lambda functions, including:

  • API Gateway: Allows a call to a RESTful resource to be forwarded to your function.
  • AWS IoT: Calls your function in response to IoT (Internet of Things) events.
  • Alexa Skills Kit: Allows you to create Alexa voice-activated skills.
  • Alexa Smart Home: Handles Alexa Smart Home device events.
  • CloudFront: Calls your function in response to CloudFront events, allowing you to customize content delivered by CloudFront (AWS's CDN solution).
  • CloudWatch Events: Calls your function when a CloudWatch event or alert occurs.
  • CloudWatch Logs: Calls your function when specific messages are logged to CloudWatch.
  • CodeCommit: Calls your function when code is committed to CodeCommit, which is AWS's hosted source control service.
  • DynamoDB: Calls your function when data is inserted into a DynamoDB table.
  • Kinesis: Calls your function in response to analytics events.
  • S3: Calls your function when a file is uploaded to S3.
  • SNS: Calls your function when a notification is published to the Simple Notification Service.

We'll leave this blank for now, because we're going to call our Lambda function directly.

Configuring your function

Click Next and you'll see the Configure Function screen shown in Figure 6.


Figure 6. Configuring a function


Give your function a name (in this case it's "get-widget") and a description. Choose Java 8 as your runtime, which will add a dropdown that allows you to choose a JAR file to upload. Click the Upload button and find the JAR file that you built earlier in this tutorial. Below this, you'll see the page to configure your Lambda function handler and role, as shown in Figure 7.


Figure 7. Configuring the Lambda function handler and role


First, you'll configure the handler. The handler is specified as the fully qualified class name, followed by two colons and the name of the method to execute:

package.ClassName::methodName

For our example, the handler is:

com.javaworld.awslambda.widget.handlers.GetWidgetHandler::handleRequest

Next, you'll specify a role for the function. Before doing that, let's make sure you understand roles in AWS Lambda.

Configuring roles in AWS Lambda

Roles define policies that grant the executor (in this case a Lambda function) permission to interact with AWS Lambda and other AWS resources. For example, if your Lambda function was going to query a DynamoDB instance, then it would need access permissions for dynamodb:Scan and probably dynamodb:GetItem. If your Lambda was going to add an object to S3, it would need access permission to s3:PutObject on your S3 resource.

The important thing about roles is that they grant or deny permissions for your Lambda function to interact with other AWS resources. Roles have nothing to do with application users or your application's internal configuration. Rather, they define how your application can or cannot interact with AWS resources.

For this example, we don't need any special permissions because our Lambda function doesn't actually do anything. We do need to be able to access DynamoDB for the examples in Part 2, however. To setup that permission, we'll start by creating a new role from a template. We'll name it get-widget-role and add the policy template "Simple Microservice Permissions." The Simple Microservice Permissions role is part of an existing Lambda blueprint, and provides full access to DynamoDB. Its contents are shown in Listing 6.

Listing 6. Simple Microservices Permissions

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:DeleteItem",
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:Scan",
                "dynamodb:UpdateItem"
            ],
            "Resource": "arn:aws:dynamodb:us-east-1:YOUR_ACCOUNT_NUMBER:table/*"
        }
    ]
}

This policy reads as follows:

Allow the specified actions (DeleteItem, GetItem, etc.) on the resource arn:aws:dynamodb:us-east-1:YOUR_ACCOUNT_NUMBER:table/*, where YOUR_ACCOUNT_NUMBER is your account number and table/* means all of the tables in your DynamoDB instance.

If you wanted to refine the policy further, you could specify a single table name instead of all tables; or you could change the list of allowable actions. For example, a function that simply retrieves an item from DynamoDB does not need permissions to delete, put, or update items, so you might remove those.

Configuring Lambda's advanced settings

Finally, expand the Advanced Settings section, which is shown in Figure 8.


Figure 8. Advanced Settings for Lambda function


Once again, we can leave most of this alone. Because we pay for Lambda executions both by execution time and by the amount of RAM that we use, we can reduce the memory requirement from 512MB to 128MB. You may be thinking that 128MB is not a lot of memory for a Java application, but recall that Lambda functions are small and simple. A function is not like a servlet container that starts a web application and leaves it running for hours or days on end, so for most purposes 128MB is just fine.

Another important thing to notice on this screen is the timeout. Because Lambda functions are meant to be short-lived and you pay for execution time, AWS Lambda allows you to specify a timeout. If your function does not complete within the allotted time, it will be killed. This protects your cloud bill in case things go wrong.

When you're ready, press Next. Review the summary of your function, then press Create Function.

Testing your AWS Lambda function

Now that you have your function deployed in AWS Lambda, let's use the AWS console to test it out. After creating your function, you should have been dropped off on your function's page (Lambda > Functions > get-widget). If you need to, you can always navigate through Services to Lambda, then click on Functions and choose your function name (in this case, get-widget). Either way, you'll land on the page shown in Figure 9.


Figure 9. Lambda function page


 To test your function, click the blue Test button and you will be prompted to define a new test event. Recall that the Lambda function accepts a WidgetRequest, which has a single id parameter. Because you implemented the RequestHandler interface, AWS Lambda will deserialize the JSON that you pass in your test event into a WidgetRequest object. All of this is to say that to test the function you only need to specify an id field, as shown in Figure 10.
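Given the WidgetRequest class from Listing 3, a minimal test event needs only the id field, for example:

```json
{
  "id": "1"
}
```

With the handler from Listing 4, this event should produce a response whose body contains the id and the generated name, such as {"id": "1", "name": "My Widget 1"}.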


Figure 10. Lambda test event


 Click the Save and Test button and AWS Lambda will run your function with the sample event. If you did everything correctly, you should see a response similar to Figure 11.


Figure 11. Lambda test execution


 The top portion of the page shows the response body, which has an id and a name. The summary section shows the amount of time that the function took to execute. In this case, you should see 296.14 ms followed by 300 ms, which is the amount of time billed. You will also see the maximum amount of memory that was used, in this case 41 MB. The log output shows any logs that you may have written through a System.out.println() statement, or by accessing the Context object's logger.

For fun you might want to test the function again and notice the change in duration. When I ran it a couple more times, the durations that I observed were, respectively, 4.4 ms and 0.64 ms. The reason is that the first time the lambda runs, AWS needs to create a container with your JAR file and deploy it to an EC2 instance. Once it has been deployed on an EC2 instance, the function will run very quickly. Note, however, that if you do not access your function for an undetermined period of time, AWS Lambda will remove your container from the EC2 instance and you'll need to absorb that initial deployment overhead again.

If you've got all of this working so far, congratulations! You've built, deployed, and tested your first Lambda function.


In this article we've answered the question, "What is serverless computing, anyway?" You've learned how serverless architectures employ nanoservices (versus microservices) to increase application scalability, while lowering the price of delivery. I introduced the serverless computing execution model and the Function-as-a-Service concept, and explained the relationship of functions, as they are used in AWS Lambda, to functional programming. Finally, you built a Lambda function in Java, then deployed the function to AWS Lambda and tested it in the AWS Lambda console.

In Part 2, we'll add support for Amazon's DynamoDB. Using DynamoDB, you'll setup your Lambda function to manage widgets on the server, rather than creating them on-the-fly. You'll also leverage the AWS SDK to create a Java client application that can invoke your Lambda function from outside of AWS. Finally, you'll use the AWS Identity and Access Management service to create an IAM user, group, and custom policy for your example application, which we'll then build and run.

Editorial Note: This article was originally published by JavaWorld.


Check out part 2!

Topics: Nanoservices, Serverless
