The Simplicity of Building with Serverless

A brief history of web development evolution:

More than a decade has passed since the 2004 launch of the Ruby on Rails framework, which disrupted the way software developers build web applications. Its emphasis on clean code, test-driven development, and design patterns allowed web applications to evolve very quickly and adapt faster to ever-changing user needs. The flexibility of well-designed applications and the growing popularity of agile development practices quickly attracted developers from other programming languages, who followed the path Ruby on Rails inspired. Soon, many frameworks appeared on the scene: Symfony, CodeIgniter, Zend Framework, Laravel, Flask, Express, and Sails, among others. Software development was easier and faster than in the 90s, and everybody was learning to adapt to this new world of opportunities…

Apparently, everything was easy and simple; developers had finally found the holy grail of web application development. But even with this evolution, considerable complexity and bottlenecks remained at the moment when the amazing software, crafted with love and patience, was ready to be launched to staging and production environments. To go live, a skilled sysadmin had to collect the list of dependencies the web application needed in order to function and scale on production servers. With this list at hand, the sysadmin would request a new server from a datacentre, wait a day or two, and finally start setting up all those dependencies. Once that was ready, they would install the web application following the instructions provided by the developers. The last step usually ran into some difficulties along the way, but with patience, dedication, and collaboration, a couple of days later the problems were finally resolved and the amazing application was ready for stakeholder validation.

Can we go faster?

Suppose that instead of a long planning process for a web application with hundreds of features, you start small with the essentials for your MVP and iterate from there step by step, improving over the course of the weeks in small sprints, in a truly agile way. In that case, it would be frustrating to spend a couple of days coordinating back and forth with a sysadmin to add, remove, or update dependencies, configure scheduled jobs, provision more foundational services, and so forth. That is why Docker containers became popular so fast and were rapidly adopted by most agile organizations that wanted to move faster with lean processes. But Docker containers still need a person well versed in service configuration, image creation, orchestration, networking, access rules, persistent storage, inter-container communication, monitoring, and all these fun things.

Can things be simpler?

Imagine something that provides you the basic building blocks (foundational services) you usually need to create a lean microservice that does one thing well, following the single responsibility principle. You could produce a quick prototype that uses simple storage or a document database, with ready-to-go logging and monitoring, load balancing, and so forth, and expose it through an HTTP web service endpoint on a production-ready server just by running one CLI command. The potential would be unlimited: you could start small and combine those microservices into something more complex through feature composition as your MVP evolves. You could move fast without depending on a skilled, multidisciplinary team that needs to know about Kubernetes, PostgreSQL, MongoDB, Memcached, Varnish, and many more.

Your dream is no longer an illusion: it is here, made possible by AWS Lambda functions and the cloud infrastructure where you can provision common building blocks for web applications, such as simple storage, queues, notification services, databases, caching, and many more. You just declare these services in a dependencies manifest. Each cloud provider used to have its own service formation definition syntax, but the Serverless Framework appeared on the scene to simplify this with an abstraction layer that supports a simple definition through a plain-text YAML file.

With Serverless you can create a microservice endpoint that uses persistent file system storage and is exposed on a production-ready server with just a few lines of code. Here I will show the example of a simple trading-signals-processing microservice that receives alerts from the TradingView platform and transforms, filters, and posts the signals to the Zignaly cryptocurrency trading platform. The definition of dependencies looks like this:
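A sketch of what such a serverless.yml manifest could look like, based on the description below (the service name, bucket resource name, and environment variable name are assumptions, not the article's exact code):

```yaml
# serverless.yml — illustrative sketch for the trading-signals microservice
service: trading-signals-processor

provider:
  name: aws
  runtime: nodejs12.x
  environment:
    SIGNALS_BUCKET: ${env:SIGNALS_BUCKET} # bucket ID taken from the ".env" file
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:GetObject # grant read access to the CSV indicator objects
      Resource: arn:aws:s3:::${env:SIGNALS_BUCKET}/*

functions:
  tradingViewStrategySignal:
    handler: handler.handler # entry point resolving the HTTP request
    events:
      - http:
          path: /trading-view-strategy-signal
          method: post

resources:
  Resources:
    SignalsBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${env:SIGNALS_BUCKET}

plugins:
  - serverless-offline # enables local emulation, mentioned later in the article
```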

This manifest indicates to serverless that the microservice will run in AWS provider using NodeJS v12, declare the AWS S3 bucket resource, grant the permission to get objects from an S3 bucket that has the ID defined in environment file “.env” so could be mapped to different buckets based on the environment the lambda function is currently running on (dev, test or prod). Additionally, it declares the function handler that will be the entry point to resolve any HTTP request to the “/trading-view-strategy-signal” path, this handler is a function that receives a request event object with the GET/POST parameters and headers passed through the request and contains all the data we need in order to do the signal processing for the composition of the Zignaly trading signal and submission. Here is the implementation of the handler:

The mapTradingViewSignalToZignaly function parses the source signal and maps it into the JSON object format expected by the Zignaly copy-trading signals endpoint. The signal then passes through a filtering stage that looks into a CSV indicator to validate whether the signal received from the TradingView technical indicators matches the entry criteria from a machine learning model. If all checks pass, the postSignal function performs an HTTP POST request to Zignaly, sending the trading signal and delegating to that platform to act as a broker and execute the position entry.

To speed up development, we do not need to test every step of our implementation in AWS, because this amazing framework includes the Serverless Offline plugin, which allows us to emulate AWS Lambda HTTP-exposed function requests in a local environment.
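A sketch of the local workflow, assuming the serverless-offline plugin is declared in the serverless.yml plugins list (by default it serves the endpoints on localhost):

```shell
# Install the plugin as a dev dependency, then start the local emulator
npm install --save-dev serverless-offline
sls offline start
```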

Once our implementation is ready and we want to move our endpoint to production, deployment is as simple as configuring our AWS developer credentials in our shell environment variables:

export AWS_ACCESS_KEY_ID=<your-key-here> 

export AWS_SECRET_ACCESS_KEY=<your-secret-key-here> 

And finally running our Serverless deploy from the console:
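The deploy invocation itself is a single command (it packages the service and provisions everything declared in serverless.yml):

```shell
sls deploy
```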

That’s it: the trading-signals-processing endpoint is now live in the cloud, ready to be used at the POST URL assigned by AWS, without any need to provision a server or manually install a web server. Moreover, the power doesn’t end here: if we want to create multiple environments for dev, test, or production, it is as simple as passing the --stage parameter when executing the “sls deploy” command, and we will have one HTTP endpoint per environment. A powerful solution for quick MVP creation.
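For example, per-environment deployments could look like this (the stage names follow the usual dev/test/prod convention mentioned above):

```shell
# Each stage deploys its own stack and gets its own HTTP endpoint
sls deploy --stage dev
sls deploy --stage test
sls deploy --stage prod
```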

All the request-handling and storage building blocks we needed are handled by the cloud provider out of the box, without any effort on our side, and provisioning and deployment are handled in a very elegant manner by the Serverless CLI tool. It is simply amazing!

Pablo Cerda

Senior Web Developer at EKKIDEN

