Deploy Angular 2 + node.js website using AWS

Mateusz Okulewicz · Published in codeburst · 10 min read · Nov 1, 2017

For a few months, I’ve been working as the sole developer on my hobby project called d20md, an interactive website for the well-known game system. At the beginning of my adventure I faced the first serious choice: “How do I deploy my application?”. As a newcomer, I just picked the simplest solution, so my website landed on a VPS. The all-in-one approach worked quite well at the start, but recently I realized that I was forced to write pieces of software that should be available out of the box, for example database backups, HTTP request caching and HTTPS support. There were also some problems during deployments, because a tiny VPS cannot handle serving static content, a backend server and a database all at once.

Luckily, I realized that there must be another way to get all of the above without many hours of work and extra resources, so I decided to migrate the project to AWS (Amazon Web Services). Even though achieving it wasn’t too hard, at the moment I wrote this article there wasn’t a single resource in which you could find all the information essential to deploy a frontend + backend solution using AWS components. That’s the reason why I decided to write this article.

Although the topic is very broad, in this article I would like to focus mainly on the configuration of AWS services, because there is already a ton of material about node.js and Angular 2 on the Internet.

Explaining the architecture

Let’s start with a presentation of the system from an architectural point of view. I think that an image is worth more than a thousand words, so I’ll start by showing you this simple diagram:

Diagram of AWS modules and relations between them

The picture shows all the AWS components involved, and I’ll try to briefly characterize each of them:

  • EC2 instance — a general-purpose node which can loosely be described as a VPS (Virtual Private Server): it has its own OS and you have direct access to it.
  • S3 bucket — simply, a data container. You can keep the assets needed by your website here: documents, images, videos and much more. It can also be used for serving static files (such as a bundled Angular 2 application!).
  • RDS database — AWS’s twisted name for a relational database instance.
  • VPC — a Virtual Private Cloud realizes the concept of keeping all components inside one network. In this particular case, we place the EC2 instance and the RDS instance in the same VPC so that they can communicate with each other.
  • CloudFront — a gateway component which offers worldwide content delivery. There are many CloudFront edge locations, and each user is redirected to the nearest one. If the content is available at that edge location, the user receives the answer immediately.

When users try to access the website, they are redirected to a CloudFront server which is configured to serve static content from the S3 bucket (our data container). If someone has requested the same content from the same edge server before, CloudFront’s cached object is returned instead of fetching the object from the S3 bucket again. Note that this caching behavior is not available when you serve static content directly from the S3 bucket.

In addition to serving static content, the CloudFront instance forwards queries to the node.js backend server, so a user has a single endpoint for both the backend and the frontend.

The last piece of the puzzle is establishing a connection between the EC2 instance and the database. Moreover, we should ensure that the DB instance is not exposed to the Internet to avoid security problems.

To summarize, we must resolve the following issues:

  1. Set up and expose an S3 bucket serving the Angular 2 application.
  2. Create an EC2 instance which will serve the node.js server.
  3. Create a database (RDS).
  4. Establish a connection between the EC2 instance and the RDS instance.
  5. Configure CloudFront to serve Angular 2 and node.js applications.

In the next paragraphs, I’ll try to explain how to resolve each of these. As a prerequisite, I assume that you have created an AWS account and that you have access to it using aws-cli.

S3 bucket setup

We will use the S3 bucket for serving the static content of the website. First, create an S3 bucket with public read access. Then go to the Properties page and make sure that Static website hosting is enabled. The last step needed to enable serving files from the bucket is adding a proper access policy on the Permissions/Bucket Policy page:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<your_bucket_name>/*"
    }
  ]
}
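If you prefer scripting over clicking through the console, both steps (enabling static website hosting and attaching the policy) can also be done with aws-cli. A rough sketch, assuming the policy above is saved locally as policy.json:

# enable static website hosting on the bucket
aws s3 website s3://<your_bucket_name>/ --index-document index.html
# attach the public-read policy
aws s3api put-bucket-policy --bucket <your_bucket_name> --policy file://policy.json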

Now your bucket can be read by everyone and you are ready to place your Angular 2 application in it, for example using aws-cli:

aws s3 cp ./build/ s3://<your_bucket_name>/ --recursive

Personally, I’m using a gulp plugin to automate the upload process, so after each release build a new version of the application is automatically placed in S3.
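If you don’t use gulp, a couple of aws-cli lines in your build script can achieve the same. A minimal sketch, assuming the production bundle ends up in ./build/ as above:

# build the production bundle (the exact command depends on your setup)
npm run build
# upload the bundle and delete files that no longer exist locally
aws s3 sync ./build/ s3://<your_bucket_name>/ --delete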

Website URL patterns

It may be obvious for some readers, but I would like to briefly elaborate on creating routes for a website. After building some websites, I’m convinced that this is the most common pattern for routing:
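  • / and /about (and any other path not starting with /api) — frontend routes handled by the Angular 2 router, served from the S3 bucket.
  • /api/* — backend endpoints handled by the node.js server.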

Such a convention is pretty straightforward and prevents mixing backend and frontend paths. This will be very helpful when we configure CloudFront.

EC2 instance for the node.js server

Let’s start by setting up the heart of our system. As I mentioned before, EC2 is a general-purpose component, and in our case we will set it up as the node.js backend server.

Be careful when selecting Network in the creation wizard, as it determines which VPC the newly created instance will be placed in. You can choose this option only once, so pause for a moment and consider which private cloud you will use.

The last thing you must pay attention to during creation is the Security Group, which can be described as the firewall settings for AWS components. For an EC2 Linux instance you should enable at least SSH and the HTTP port your node.js server listens on. These settings can be changed later, so it’s not a big issue if you make a mistake during configuration. Try to remember the name of that security group (or rename it to something easy to remember); it will be needed later for the database configuration. If you prefer the command line, a sketch of the equivalent rules is shown below.
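For reference, here is a rough aws-cli equivalent of those two rules. The security group id and the node.js port (8080 here) are placeholders you need to adjust:

# allow SSH from anywhere (consider restricting the CIDR to your own IP)
aws ec2 authorize-security-group-ingress --group-id <your-ec2-sg-id> --protocol tcp --port 22 --cidr 0.0.0.0/0
# allow HTTP traffic to the node.js server port
aws ec2 authorize-security-group-ingress --group-id <your-ec2-sg-id> --protocol tcp --port 8080 --cidr 0.0.0.0/0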

Phew! We’ve finished! Make a coffee as a reward and wait a few minutes while AWS creates the EC2 instance for you. Next, prepare yourself for the instance’s environment configuration. Make sure that your node.js server is exposed under the /api URL, because CloudFront will redirect requests there.
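Once the server is running, a quick sanity check from the instance itself doesn’t hurt. The port and the endpoint below are only examples; use whatever your node.js app really exposes:

# run on the EC2 instance; expect a response from your node.js app
curl http://localhost:8080/api/<some-endpoint>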

RDS instance

The majority of modern websites need a dedicated place to keep persistent data, and the most common approach is a database. So let’s see how to launch a DB instance using RDS (Relational Database Service).

Log in to your AWS console, choose RDS and launch a new instance. The parameters you must be careful about during configuration are:

  • VPC — make sure that you create the database in the same cloud you chose for the EC2 instance.
  • Public accessibility — for security reasons you should disable access to the database from the Internet.
  • Backup — (optional) select this option if you want to be able to restore the state of your DB.

After creating the instance it is essential to configure its Security Group. Go to the VPC console and choose Security Groups. Select the database security group (it should be named rds-xxxx), go to Inbound rules, then click Edit. You should be able to add new rules; we need to create All TCP, All UDP and All ICMP rules. While adding each rule, enter your EC2 security group as the Source. As a result, you should have 3 new inbound rules, each with your EC2 instance’s security group as its source.
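If you prefer aws-cli here as well, a minimal sketch for a MySQL instance (where only port 3306 is strictly needed, instead of the all-traffic rules above) is a single ingress rule; the group ids are placeholders:

# let instances in the EC2 security group reach MySQL on the RDS security group
aws ec2 authorize-security-group-ingress --group-id <rds-sg-id> --protocol tcp --port 3306 --source-group <ec2-sg-id>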

We are done with the configuration! Now let’s check if our setup works. Do the following:

  1. Extract the address of your database instance. Try to run:
aws rds describe-db-instances

If you have a problem with access, go to your IAM console and attach the AdministratorAccess policy to your AWS CLI user.

The output from that command will contain an Endpoint section, and this is basically the address of your database instance. I do not fully understand why there is no information about the endpoint of a DB instance in the AWS console. Let’s hope that the folks at AWS add such useful info in the near future.
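If you only need the address, the built-in --query filter can extract it directly; for example:

# print only the endpoint address of each DB instance
aws rds describe-db-instances --query "DBInstances[*].Endpoint.Address" --output text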

2. Use the gathered address to connect to your database:

  • Log in to your EC2 instance.
  • Try to connect:
mysql --host=<db-instance-address> --port=3306 --user=root -p

(This is an example command for a MySQL instance. If you created another type of database, you will need to find the equivalent command for connecting to it.)

CloudFront configuration

I’ve found a surprisingly good explanation of what CloudFront offers on the AWS website:

Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you’re serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance. If the content is already in the edge location with the lowest latency, CloudFront delivers it immediately. If the content is not in that edge location, CloudFront retrieves it from an Amazon S3 bucket or an HTTP server (for example, a web server) that you have identified as the source for the definitive version of your content.

So if we sum up all this cool stuff, we can imagine CF as a caching gateway for our website. In our configuration, we want to cache requests to the S3 bucket, because it contains static content that only changes when we deploy a new version of the Angular app, but we don’t want to do the same with requests to the backend, as those are considered dynamic.

Before we take off, I feel obligated to warn you about making frequent changes to the CF configuration — each save operation takes about 10 minutes to replicate, so try to batch all your changes into one.

We have almost finished! This is the last component we need in our setup: let’s create a CloudFront Web distribution. In the wizard, we will set up CF for serving the S3 bucket, and later we will add the redirection to the EC2 node.js instance. During this part you must set:

  • Origin Domain Name: choose your S3 bucket.
  • Viewer Protocol Policy: Redirect HTTP to HTTPS.
  • Allowed HTTP methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE.
  • Query String Forwarding and Caching: choose No if your Angular application uses query string routes.
  • Compress Objects Automatically: Yes.
  • Default Root Object: enter your index.html filename here.

If you want to know more about these settings, read this article.

Congratulations, you now have your new CloudFront distribution. At this moment you should be able to reach your website (the Angular part of it, at least) by entering the CloudFront address in the browser.
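One thing to keep in mind from now on: CloudFront keeps cached copies of your S3 objects, so after uploading a new Angular build you may want to flush the cache. A hedged example, assuming you have your distribution id at hand:

# invalidate everything; fine for a small site, use narrower paths for bigger ones
aws cloudfront create-invalidation --distribution-id <your-distribution-id> --paths "/*"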

So let’s go further and try to open a non-root URL like www.app.cloudfront.com/about. Whoops! It seems like CloudFront does not know about that page. To avoid such a situation, just tell CF to redirect all 404 and 403 errors to the Angular 2 application in a very simple way:

  • Select your CloudFront distribution.
  • Go to the Error Pages tab.
  • Select Create Custom Error Response.
  • Select 404 for HTTP Error Code.
  • Set TTL to 0.
  • Set Customize Error Response to Yes.
  • In Response Page Path, put the path to your index.html file.
  • Set the HTTP response code to 200.
  • Do the same for the 403 error code.

Now all 404 and 403 errors should be redirected to the Angular 2 application, so you should handle potentially incorrect URLs there.

The “we are almost there” step is to enable the redirection of the www.cloudfront.com/api URL to the node.js server. Moreover, it is useful to disable CloudFront’s caching behavior here because of the dynamic nature of the responses. To achieve that, follow the steps listed below:

  1. In the CloudFront console, edit the Distribution Settings.
  2. Go to the Origins tab and click Create Origin.
  3. Enter the address of your EC2 instance as the Origin Domain Name.
  4. Set the Origin Protocol Policy to HTTP Only.
  5. Enter your server’s HTTP port.
  6. Click Create.

This creates a custom CloudFront origin. At the moment I’m writing this article, CF has first-class support only for S3 buckets as origins; every other component needs to be configured as a custom one.

OK, we’ve registered the origin and our final step is to create the route for it. So:

  1. Go to the Behaviors tab.
  2. Click Create Behavior.
  3. As the Path Pattern, enter a wildcard of your backend base URL. In our case that will be /api/*.
  4. Choose the previously created EC2 origin as the Origin.
  5. In Viewer Protocol Policy choose: HTTP and HTTPS.
  6. Allowed HTTP methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE.
  7. Object caching: Customize.
  8. Set all TTLs to 0, as we don’t want to cache responses.
  9. You will probably need to disable Query String Forwarding too.
  10. Compress Objects Automatically: Yes.

And that’s it! Now we have a fully functional CloudFront setup, even for backend requests. Note that we disabled caching by setting the TTL of responses to 0. It means that all messages coming from the backend are “hot”, and the user’s browser will request fresh data from the backend whenever it’s needed.
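A quick end-to-end check of the routing never hurts. The API path below is only an example; use an endpoint your server actually exposes:

# should be answered by the S3/Angular origin
curl -I https://<your-cloudfront-domain>/
# should be proxied to the node.js origin on EC2
curl -I https://<your-cloudfront-domain>/api/<some-endpoint>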

But what about HTTPS?

A CloudFront instance is configured by default to use its own certificate, and it’s ready to use HTTPS. That’s it — you don’t need to generate and sign a certificate. The connection between a user and your gateway can use both HTTP and HTTPS. For the inner communication, you can choose the more secure option as well.

And we did it! After completing all the above steps you have a fully working cloud setup with a scalable inner organization. It can stand up to user load, it is safe, and it can be restored in case of failure. And most importantly for fans of free solutions: all of the AWS components presented here are available in the free tier.

I hope that this article helps you with the configuration of your own deployment. This is the first article I’ve written in English and I’m looking forward to any feedback from you. Cheers!
