ServerLess Computing

This week I spent some time exploring Azure ServerLess Computing. As part of the Pi-Lights project which I detailed in my last post, I tried a “ServerLess” approach to demonstrate how the Cloud can be used to build a simple application that benefits from Cloud agility: it scales on demand and handles millions of requests without us worrying about high availability, security, hardware provisioning or maintenance of any server resources. Seems like a dream, but it is true. With ServerLess computing, we can focus only on the actual code that gets our work done without worrying about even the slightest bit of infrastructure required to run it, let alone about capacity and demand.

The idea behind “ServerLess” computing is a very simple one. Let me walk you through it with the project I created last time, “Pi-Lights”. In order to control the device/relays, I needed a way to store the device ids and the related messages in a central store somewhere on the internet so that I could access them from any device and any network. Traditionally, I would have to provision a server and run the server code (PHP with a MySQL database) on it. Along with provisioning the server, I would also have to worry about its security and its capacity (assuming my project kicks off and I sell these small devices to a lot of customers across the globe 😀). So instead of focusing on the core functionality of my project, I would have to worry more and more about security, infrastructure and capacity. As my business grows, these concerns grow too, and more and more effort is needed to ensure the smooth functioning of my application.

Now imagine the “ServerLess” world. Instead of servers and infrastructure, I just pick a language of my choice and start writing code. Once my code is ready, I push it to the cloud, where the cloud provider runs it on pre-configured hardware/software matching the language of the code (NodeJS, .NET Core, Python, etc.). I divide my code into “FUNCTIONS”, and each function on the cloud gets a unique URL. I could also write REST APIs instead of multiple functions, but the basic idea remains the same: the focus is on functionality rather than on infrastructure and its management. To secure the URLs, each URL/FUNCTION requires a unique key for its execution. For the purpose of this post, I have created 2 functions, namely “registerDevice” and “getMessages”.

As the name suggests, the “registerDevice” function expects the name, UUID and port details as part of the request body. Upon receiving these details, the function inserts them into a “Cloud Database”. Similarly, the “getMessages” function expects a UUID as input, searches the “Cloud Database” for all messages for this UUID, extracts them, deletes them from the “Database” and returns the messages to the device as the response to the request. With these 2 core functions deployed, all I have to do now is call them as regular web requests, providing the “unique” function key and the necessary request parameters, from any device I want. The setup can handle anything from a few requests to millions of requests without me worrying about the underlying infrastructure and availability. During all the development and deployment stages, nowhere did I worry about what “Operating System” and hardware my code would run on. I only need to ensure that the right server environment is available (NodeJS in my case), because in the end the runtime environment is what matters most.
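
To make this concrete, here is a minimal, hedged sketch of what the calls could look like from the device side (assuming Node 18+ for the built-in fetch; the function-app URL, keys and UUID are placeholders, and exactly how the UUID reaches getMessages depends on the trigger configuration — in my code further below it is read from a route parameter):

// Placeholder values; the real function URLs and keys come from the portal.
const APP = 'https://<my-function-app>.azurewebsites.net/api';
const REGISTER_KEY = '<registerDevice function key>';
const MESSAGES_KEY = '<getMessages function key>';
const DEVICE_UUID = '<device uuid>';

// Register the device once: name, UUID and port details in the request body.
async function registerDevice() {
    const res = await fetch(`${APP}/registerDevice?code=${REGISTER_KEY}`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
            name: 'living-room-lights',   // friendly device name (example)
            uuid: DEVICE_UUID,
            ports: [1, 2]                 // relay/port details (example)
        })
    });
    console.log('registerDevice status:', res.status);
}

// Poll for pending messages; the function deletes them after returning them.
async function getMessages() {
    const res = await fetch(`${APP}/getMessages/${DEVICE_UUID}?code=${MESSAGES_KEY}`);
    console.log(await res.json());
}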

Along with the absence of “Servers” to host my code, even for the database I used a “Cloud Database”: a NoSQL database called “CosmosDB” in Azure. This is a non-relational database built for speed. All I did was define the database name and the “containers” (tables). Using the Cosmos API, I was able to add, retrieve and remove records from these containers. My test database had 2 containers (devices and messages). The device registration function saved its data in the “devices” container and the message API pulled and removed data from the “messages” container.
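
As an illustration (not the project’s exact code), a minimal sketch of those three operations against the two containers with the @azure/cosmos SDK, assuming deviceId is the partition key as in the function code further below:

const { CosmosClient } = require('@azure/cosmos');

const client = new CosmosClient({
    endpoint: process.env.COSMOS_API_URL,
    key: process.env.COSMOS_API_KEY
});
const devices = client.database('iotDevices').container('devices');
const messages = client.database('iotDevices').container('messages');

async function demo(uuid) {
    // Add: insert a device record into the "devices" container.
    await devices.items.create({ id: uuid, deviceId: uuid, name: 'demo', ports: [1] });

    // Retrieve: query all pending messages for this device.
    const { resources } = await messages.items
        .query({
            query: 'SELECT * FROM c WHERE c.deviceId = @id',
            parameters: [{ name: '@id', value: uuid }]
        })
        .fetchAll();

    // Remove: delete each message by id and partition key (deviceId here).
    for (const m of resources) {
        await messages.item(m.id, m.deviceId).delete();
    }
}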

To achieve all of this, the number of lines of code was also reduced, as the cloud provides the containers that run the code and the execution environment parameters are defined through a one-time configuration in the graphical user interface. All the code needs now are references to these environment parameters, which are available for each of the runtimes on the cloud platform (NodeJS, .NET Core and so on). What would traditionally be an effort of 2-3 resources (Developer, Server Admin, DB Admin) is now possible with the developer alone, using these cloud resources at will to achieve much higher flexibility and resilience at the same time.
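
For instance, the getMessages code below reads two such parameters, COSMOS_API_URL and COSMOS_API_KEY, via process.env. They are defined once — as application settings in the portal, or, when running the Functions tooling locally, in a local.settings.json roughly like this sketch (all values are placeholders):

{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "AzureWebJobsStorage": "<storage connection string>",
    "COSMOS_API_URL": "https://<my-cosmos-account>.documents.azure.com:443/",
    "COSMOS_API_KEY": "<cosmos primary key>"
  }
}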

Another advantage which I would like to highlight here is that if I do not want to develop further code in NodeJS, I can very well switch to .NET Core or any other provided runtime without worrying about reinstalling the software stack on my servers. I just delete the parts of the runtime which are no longer required and provision new runtime engines with a few mouse clicks and a few parameter definitions. This is the true power of Cloud Computing, which supports Agile at its core. With the Cloud you no longer have to worry about wrong choices: you can make them, learn from them, start again from scratch with a better idea of what not to do, and push your way forward at a much faster and more confident pace.

To end the post, let’s recap why we call this “ServerLess” computing even though the cloud provider does manage the servers running behind all the code we publish. The term simply captures the fact that, as the application owner/developer, we do not have to worry about the servers running our code; they just run it as we intended, and hence the term “ServerLess” computing.

A quick glance at the code which made these two functions possible is given below:

registerDevice

module.exports = async function (context, req) {

    // Require the device name, UUID and port details in the request body.
    if (req.body.name && req.body.uuid && req.body.ports) {
        try {
            // The "newDevice" output binding writes the record to the
            // "devices" container in Cosmos DB.
            context.bindings.newDevice = {
                id: req.body.uuid,
                deviceId: req.body.uuid,
                name: req.body.name,
                ports: req.body.ports
            };
            context.res = {
                status: 200
            };
        } catch (err) {
            context.res = {
                status: 500
            };
        }
    } else {
        context.res = {
            status: 500
        };
    }
}
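
For completeness, the registerDevice code above relies on bindings declared in its function.json: the HTTP trigger/response pair (req and res) and a Cosmos DB output binding named newDevice pointing at the devices container. A rough sketch is below; the exact property names depend on the version of the Cosmos DB extension (older versions use collectionName and connectionStringSetting, newer ones containerName and connection), and CosmosDbConnection is just a placeholder for whatever app setting holds the connection string.

{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "function",
      "methods": [ "post" ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    },
    {
      "type": "cosmosDB",
      "direction": "out",
      "name": "newDevice",
      "databaseName": "iotDevices",
      "collectionName": "devices",
      "connectionStringSetting": "CosmosDbConnection"
    }
  ]
}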

getMessages

const cosmos = require('@azure/cosmos');
const endpoint = process.env.COSMOS_API_URL;
const key = process.env.COSMOS_API_KEY;
const { CosmosClient } = cosmos;

// Cosmos DB client and container references, created once per function host.
const client = new CosmosClient({ endpoint, key });
const allMessages = client.database("iotDevices").container("messages");
const allDevices = client.database("iotDevices").container("devices");

module.exports = async function (context, req) {

    // The device UUID arrives as a route parameter; the "messages" input
    // binding has already queried the pending messages for this device.
    const deviceId = req.params.deviceId;
    const messages = context.bindings.messages;
    // Only the first pending message is returned in the response.
    const message = messages && messages.length > 0 ? messages[0] : undefined;
    const aMessageIds = [];

    if (messages) {
        // Delete each delivered message so it is not sent to the device again.
        for (let i = 0; i < messages.length; i++) {
            const id = messages[i].id + '';
            const partitionKey = messages[i].deviceId + '';
            try {
                await allMessages.item(id, partitionKey).delete();
            } catch (err) {
                // Record any message that could not be removed.
                aMessageIds.push(id + ' -> ' + err);
            }
        }
    }

    // Output binding receiving the collected ids (here, those that failed to delete).
    context.bindings.messageId = aMessageIds;

    const responseMessage = message
        ? `{"status":"success","action":"${message.action}","value":"${message.value}","count":"${aMessageIds.length}"}`
        : `{"status":"fail"}`;

    context.res = {
        body: responseMessage
    };
}
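
getMessages has a similar function.json, sketched below with the same caveats: the route exposes the {deviceId} parameter read by req.params.deviceId, and the messages input binding pre-queries the pending documents for that device. I have left out the messageId output binding because where those ids go (a queue, another container, etc.) depends on how you wire it up; it would simply be one more entry here.

{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "function",
      "methods": [ "get" ],
      "route": "getMessages/{deviceId}"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    },
    {
      "type": "cosmosDB",
      "direction": "in",
      "name": "messages",
      "databaseName": "iotDevices",
      "collectionName": "messages",
      "connectionStringSetting": "CosmosDbConnection",
      "sqlQuery": "SELECT * FROM c WHERE c.deviceId = {deviceId}"
    }
  ]
}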