Gestalt Laser

Gestalt Laser is a lambda engine supporting "serverless computing".

What are Lambdas?

Lambdas provide an execution model that abstracts developers away from the servers that execute their code. Code is deployed to a lambda as inline source, a library, or an executable (depending on the language). As requests come in (via REST calls or a message queue), they are sent to an executor, which loads the code if needed, executes it, and returns the result.
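As a sketch, an inline lambda is often just a single function that receives the request payload and returns a result. The (event, context) handler signature below is a common convention and an assumption here, not Laser's documented API:

```python
# Hypothetical handler shape for an inline Python lambda.
# The (event, context) signature is an assumption, not Laser's documented API.
def handler(event, context=None):
    # `event` carries the request payload (e.g. the parsed JSON body).
    name = event.get("name", "world")
    # The returned dict is serialized to JSON for a synchronous caller.
    return {"message": f"hello, {name}"}
```

A synchronous REST call to the lambda's endpoint would receive the returned object as the JSON response body.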

Lambda Basics

The containers executing lambdas can scale up or down depending on activity. This model decouples the requester from any single server or container, and allows requests to be load balanced and scaled to meet demand.

Laser dynamically shares resources across multiple lambda deployments, spinning containers up or down based on demand.

This makes lambdas an efficient method for deploying code into a cluster of containers.

Executors

Lambda executors are essentially the containers that execute the lambda. When a lambda request comes in, Gestalt Laser finds or starts a container equipped to execute it. If an available executor is found (called a hot or warm executor), Laser can skip loading the code and execute the request immediately. If none is available, it starts a new executor that can run the lambda.
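The find-or-start flow can be sketched as follows. The pool structure and names are illustrative, not Laser's internals:

```python
class ExecutorPool:
    """Illustrative sketch of warm-executor lookup; not Laser's implementation."""

    def __init__(self):
        self.warm = {}  # lambda name -> executor holding already-loaded code

    def invoke(self, name, load_code, request):
        executor = self.warm.get(name)
        if executor is None:
            # Cold path: start a new executor and load the lambda code into it.
            executor = {"code": load_code()}
            self.warm[name] = executor
            cold = True
        else:
            # Warm path: the code is already loaded, so execute immediately.
            cold = False
        return executor["code"](request), cold
```

The first request pays the cold-start cost of loading the code; subsequent requests for the same lambda hit the warm path.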

Lambda Executor

Executors are typically minimal containers configured for the language in use. Gestalt Laser currently has six executor types, supporting Java (and Scala), JavaScript, Go, Ruby, Python, and .NET Core. It also has a seventh executor for custom lambdas, which require the developer to bundle all needed libraries in the lambda deployment rather than depending on libraries installed in the executor. These custom executor containers, like most of the other available executors, are based on Alpine Linux to keep size and load time to a minimum.

Invoking a Lambda

Invoking a lambda is as simple as sending a REST request to the API endpoint configured for the lambda. If the lambda is configured as async, there is no return value; otherwise the result is returned as a JSON object.

http GET http://meta.mygestalt/myapp/mylambda
Status: 200 OK
{
    ...
}
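A caller can distinguish the two cases by checking for a response body. The helper below follows the convention described above (async returns nothing, sync returns JSON); the status-code handling is an illustrative assumption:

```python
import json


def parse_lambda_response(status, body):
    """Interpret a lambda invocation response.

    Follows the convention above: an async lambda returns no body, while a
    sync lambda returns a JSON object. Error handling here is an assumption.
    """
    if status >= 400:
        raise RuntimeError(f"lambda invocation failed with status {status}")
    if not body:
        return None  # async lambda: no return value
    return json.loads(body)  # sync lambda: result as a JSON object
```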

Lambda Lifecycle

Once the API endpoint triggers the executor, the lambda runs in the executor container, after which the executor reaches an IDLE state. The executor remains in a WARM state for a configurable period before terminating. For a lambda that receives frequent or simultaneous requests, this avoids reloading the lambda artifact into the executor container on subsequent requests. As a result, the performance cost of the lambda engine is limited to the load time of the first ("cold") request. Once requests go idle, the executor exits and its container becomes available for the next lambda request.
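The warm-then-terminate behavior can be modeled with an injected clock. The structure and timeout value below are illustrative, not Laser's code:

```python
class WarmExecutor:
    """Illustrative idle-timeout model for a warm executor; not Laser's code."""

    def __init__(self, idle_timeout, now):
        self.idle_timeout = idle_timeout  # configurable warm period (seconds)
        self.last_used = now

    def handle(self, request, now):
        self.last_used = now  # each request resets the idle clock
        return f"ran {request}"

    def should_terminate(self, now):
        # Once idle longer than the warm period, the container is released.
        return now - self.last_used >= self.idle_timeout
```

Each incoming request resets the idle clock, so a busy lambda keeps its executor warm indefinitely, while a quiet one releases its container after the configured period.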

This design scales active containers with incoming requests and eliminates the cost of inactive containers.

DevOps with Lambdas

Lambdas help reduce complexity and, in some cases, avoid a DevOps dependency altogether. You remain in control of how your lambda builds and deployments are managed. This can be as simple as defining a lambda and, in the case of inline deployment, pasting your function snippet into it and setting the endpoint. In most cases, though, your lambda will consist of a deployed artifact (e.g. a jar file, Go executable, or zip file). It is good practice to have a build that produces this artifact and to version the deployed artifact in an artifact repository (e.g. Artifactory). When an incident arises with a lambda deployment, you will usually want to roll back to a previous artifact, which an artifact repository makes straightforward. Because there is no dependence on a specific container between requests, this requires no sophistication in the cluster and results in a simple rolling update/rollback scenario.
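As a minimal sketch of the build step, the snippet below packages a function into a versioned zip artifact. The file names, version scheme, and repository destination are hypothetical:

```python
import zipfile

VERSION = "1.0.3"  # hypothetical version; in practice derived by your build
ARTIFACT = f"mylambda-{VERSION}.zip"

with zipfile.ZipFile(ARTIFACT, "w") as zf:
    # In a real build this file would come from your source tree.
    zf.writestr(
        "handler.py",
        "def handler(event, context):\n    return {'ok': True}\n",
    )

# Push the versioned artifact to an artifact repository (e.g. Artifactory)
# so any previous version can be redeployed for rollback.
```

Versioning every artifact this way makes a rollback a redeploy of a known-good zip rather than a rebuild.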