How to build Serverless .NET Core API with AWS and Vertica
Data storage sits at the heart of any big data analytics solution: how you transform, store and access your data is among the most important decisions you'll make when building such systems. These choices involve trade-offs, but with a smart approach we can build APIs that provide access to any type of data storage.
Let’s imagine you’ve created a platform that’s been working well for several years. But new technologies are released all the time, and someday you’ll want to modernize your architecture to take advantage of these improvements.
In our case, we have a Vertica distributed analytical database. It’s hosted on the AWS cloud along with the other components of our platform. We have a customer-facing business analytics portal, which uses an API to get data from Vertica. The API service was built with ASP.NET Web API technology and hosted on an EC2 instance.
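To make this concrete, a data-access call in such an API might look like the following C# sketch, using the generic `System.Data.Odbc` client. The connection string, table name and class names are illustrative assumptions, not taken from the actual codebase.

```csharp
using System;
using System.Data.Odbc;

// Hypothetical repository illustrating how the API might query Vertica over ODBC.
public class VerticaRepository
{
    // Illustrative connection string; real values would come from configuration.
    private readonly string _connectionString =
        "Driver=Vertica;Server=vertica.internal;Port=5433;Database=analytics;UID=api_user;PWD=secret";

    public long CountEvents()
    {
        using (var connection = new OdbcConnection(_connectionString))
        {
            connection.Open();
            using (var command = new OdbcCommand("SELECT COUNT(*) FROM events", connection))
            {
                // ExecuteScalar returns object; convert defensively since the
                // exact numeric type depends on the driver.
                return Convert.ToInt64(command.ExecuteScalar());
            }
        }
    }
}
```

Because `System.Data.Odbc` talks to whatever driver is registered with the ODBC manager, this pattern works the same on .NET Framework and .NET Core, which matters for the migration discussed below.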
Our goal was to make our API highly available (HA) and scalable, so we decided to go with a Serverless approach. At the same time, we wanted to keep the refactoring scope small. Our ideal case was to simply move the existing .NET source code to the newer technology.
AWS offers several options for creating and hosting Serverless APIs:

AWS Lambda Functions
AWS Fargate
Both of these technologies have their pros and cons, which we’ll discuss in more detail in another article. In our case, the main factor was to use .NET Core in order to simplify the migration from ASP.NET Web API. During our investigation, we found that the Vertica.Data package does not currently support .NET Core.
One option is to use the old but reliable Vertica ODBC driver in our API’s execution environment, but in Lambda there is no way to adjust the runtime environment to install it. This is where one of Docker’s main advantages comes in: you can build your own image with any libraries you want. So Docker on top of AWS Fargate was the option we selected.
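As a rough sketch, a Dockerfile for such an image could look like the one below. The base image tag, driver directory and file names are assumptions for illustration; the real image would depend on the .NET Core version and the Vertica driver distribution in use.

```dockerfile
# Illustrative sketch: base image tag, paths and package names are assumptions.
FROM mcr.microsoft.com/dotnet/core/aspnet:2.1

# Install unixODBC, the driver manager the Vertica ODBC driver plugs into.
RUN apt-get update \
    && apt-get install -y --no-install-recommends unixodbc \
    && rm -rf /var/lib/apt/lists/*

# Copy a pre-downloaded Vertica ODBC driver into the image
# and register it with the ODBC driver manager.
COPY vertica-odbc/ /opt/vertica/
COPY odbcinst.ini /etc/odbcinst.ini

# Copy the published .NET Core application and run it.
COPY publish/ /app/
WORKDIR /app
ENTRYPOINT ["dotnet", "Api.dll"]
```

The key point is the middle section: unlike Lambda, the image fully controls its own environment, so native dependencies like the ODBC driver can be baked in.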
To start with AWS Fargate, first prepare a Docker image with your application. Then, specify it in Fargate’s service and task configurations, along with other parameters like CPU, Memory and auto-scaling settings. You end up with an HA application running in Docker within its own auto-scaling group. On top of it sits a load balancer, which receives all incoming requests and forwards them to multiple Fargate tasks.
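The task configuration mentioned above can be sketched as a trimmed ECS task definition; the account ID, region, image name and resource values here are placeholders, not the actual settings.

```json
{
  "family": "vertica-api",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "containerDefinitions": [
    {
      "name": "api",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/vertica-api:latest",
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }],
      "essential": true
    }
  ]
}
```

A Fargate service then keeps the desired number of these tasks running behind the load balancer, and auto-scaling policies adjust that number with load.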