A Beginner’s Guide to Scaling to 1 Million+ Users on AWS


Any developer, software engineer, or really anyone in the IT sector has been pulled into the buzz around Amazon Web Services at least a couple of times. AWS (Amazon Web Services), as the name suggests, is a collection of cloud computing services that together form an on-demand computing platform offered by Amazon. It is spread over 12 geographical regions around the globe, each with several Availability Zones (AZs). Every AZ has its own isolated power and internet connection, yet the AZs are linked by a low-latency network fast enough for a region to behave like a single data centre. AWS also operates numerous edge locations, used by Amazon services to deliver content to users at extremely low latency irrespective of where they are on the planet.

 

AWS provides multiple paid services such as CloudFront, Route 53, S3, DynamoDB, Elastic Load Balancing and many more. These services use multiple AZs internally to stay highly efficient and available, which lets you build a highly available architecture even while your own resources may sit in a single AZ.

 

Start humbly with a single user: yourself, your own design, your own architecture. Run the entire web stack on a single EC2 instance; instance types come in diverse combinations of CPU, RAM, storage and so on. Use Route 53 for DNS and a single Elastic IP attached to the instance. The next step is vertical scaling: switch to a more powerful instance type with hardware configurations of up to 244 GB of RAM or 40 cores, with choices ranging from High I/O instances to High Storage instances. DynamoDB can be used where a scalable managed database is needed. But vertical scaling gives you no failover and no redundancy, which leaves you exposed to high risk, and a single instance eventually reaches its limit.
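To make this concrete, here is a minimal sketch in Python using boto3 of the single-instance setup: allocate an Elastic IP, attach it to the one EC2 instance, and point a Route 53 record at it. The instance ID, hosted zone ID and domain name are placeholders, not values from this article.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
route53 = boto3.client("route53")

INSTANCE_ID = "i-0123456789abcdef0"   # placeholder: the single web/app/db instance
HOSTED_ZONE_ID = "ZEXAMPLE123456"     # placeholder: Route 53 hosted zone for example.com

# Allocate an Elastic IP and bind it to the instance.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(InstanceId=INSTANCE_ID, AllocationId=eip["AllocationId"])

# Create an A record so the domain resolves to that Elastic IP.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": eip["PublicIp"]}],
            },
        }]
    },
)
```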

 

To move up to 10 users, split the single host into multiple hosts: one for the website and one or more for the database, especially if the database needs more capacity than the website itself. Better still, hand the mundane work of backups, patching, high availability and operating-system maintenance to a managed database service such as Amazon RDS (Relational Database Service), Amazon Aurora or Amazon Redshift, as the requirements dictate. Use an SQL database to begin with: it is an established technology, with the comfort of existing code, communities and tools, and with clear patterns for scaling. To move up to 100 users, keep the web tier on separate hosts and let Amazon RDS manage the database.
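As a rough illustration of handing the database to RDS, the following boto3 sketch creates a small managed MySQL instance. The identifier, credentials and sizes are placeholders, not recommendations; in practice the password would come from a secrets store rather than source code.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="app-db",          # placeholder name
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,                    # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-please",  # placeholder; use a secrets manager in practice
    MultiAZ=False,                          # a single AZ is enough at this stage
    BackupRetentionPeriod=7,                # RDS handles backups, patches and maintenance
)
```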

 


 

It’s time to ramp things up to 1,000 users. Add another web instance in a different AZ, and add a slave database for RDS in another AZ, which gives you an automatic switch-over if something goes wrong. Put an ELB (Elastic Load Balancer) in front to balance users across the web instances in the two AZs.
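The sketch below shows the load-balancing piece with boto3, using the elbv2 client for an Application Load Balancer spanning two AZs. The subnet, VPC and instance IDs and the health-check path are assumptions made for illustration.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# One subnet per AZ so the load balancer spans both.
lb = elbv2.create_load_balancer(
    Name="web-elb",
    Subnets=["subnet-aaa111", "subnet-bbb222"],
    Scheme="internet-facing",
)

tg = elbv2.create_target_group(
    Name="web-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    HealthCheckPath="/health",   # assumed health-check endpoint on the web instances
)

# Register one web instance from each AZ behind the target group.
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroups"][0]["TargetGroupArn"],
    Targets=[{"Id": "i-0aaaaaaaaaaaaaaa1"}, {"Id": "i-0bbbbbbbbbbbbbbb2"}],
)

# Forward incoming HTTP traffic to the registered instances.
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"],
    }],
)
```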

 

To push the user count to somewhere between 10,000 and 100,000, you have to introduce horizontal scaling. Add more read replicas to the RDS database. Improve performance and efficiency by lightening the web tier: move fragments of traffic elsewhere, serving static and dynamic content through Amazon S3 and CloudFront, and shift session state into ElastiCache. Use Auto Scaling to resize the compute fleet automatically, defining the minimum and maximum size of your pools, and use CloudWatch metrics, including custom metrics, to drive the scaling.
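A minimal boto3 sketch of this stage might add a read replica and wrap the web tier in an Auto Scaling group driven by a CloudWatch target-tracking policy. The names, subnet IDs, pool sizes and the pre-existing launch configuration are assumptions.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Another read replica takes read traffic off the primary database.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",
    SourceDBInstanceIdentifier="app-db",
)

# Web-tier Auto Scaling group: define the minimum and maximum size of the pool.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-launch-config",   # assumed to exist already
    MinSize=2,
    MaxSize=20,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",
)

# CloudWatch drives scaling: keep average CPU across the group near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```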

 

Time to reach the 500,000 mark. Spread the Auto Scaling groups across several AZs (limited by the number of AZs available in the region), with anywhere from 10 to 1,000 instances per AZ, for both scalability and availability. ElastiCache and DynamoDB are now used to offload popular reads and session data respectively. Add monitoring, metrics and logging: understand what end users are actually receiving, attend to their latency and load errors, and squeeze the maximum performance out of the given configuration.
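As an illustration of those two points, the sketch below creates a DynamoDB table for session data and a CloudWatch alarm on load-balancer latency. The table name, alarm threshold and load-balancer dimension value are placeholders.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Session state lives outside the web tier, so any instance can serve any user.
dynamodb.create_table(
    TableName="sessions",
    AttributeDefinitions=[{"AttributeName": "session_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "session_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# Alarm when end users start to see slow responses from the load balancer.
cloudwatch.put_metric_alarm(
    AlarmName="web-high-latency",
    Namespace="AWS/ApplicationELB",
    MetricName="TargetResponseTime",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/web-elb/1234567890abcdef"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=5,
    Threshold=1.0,                      # seconds; placeholder threshold
    ComparisonOperator="GreaterThanThreshold",
)
```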

 

Now it’s time to introduce automation to the humongous infrastructure created so far, using higher-level services such as AWS Elastic Beanstalk, AWS OpsWorks, AWS CloudFormation and AWS CodeDeploy. Use a service-oriented architecture (SOA) or microservices to split the system into separate services; this multiplies flexibility for scaling and high availability. As a business, invest in what makes you unique: commodity concerns such as queuing, email, transcoding and logging can be handed to Amazon services like SQS and AWS Lambda.
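As a small example of that decoupling, the boto3 sketch below passes work between two services through an SQS queue. The queue name and message body are made up for illustration.

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# The producing service drops a job onto the queue...
queue_url = sqs.create_queue(QueueName="transcode-jobs")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody='{"video_id": "abc123"}')

# ...and a worker fleet (or a Lambda function) consumes it independently.
response = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10
)
for msg in response.get("Messages", []):
    # process the job here, then delete it so it is not redelivered
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```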

 

To hit the milestone of a million users, all the guidelines above remain essential, with the addition of Amazon SES to send email and CloudWatch for monitoring. These are a few of the strategies Promatics has applied while deploying many projects. Try them and make your site look like it is never going down. Happy scaling!
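For completeness, a minimal boto3 sketch of sending a transactional email through Amazon SES is shown below. The sender and recipient addresses are placeholders and would need to be verified SES identities.

```python
import boto3

ses = boto3.client("ses", region_name="us-east-1")

# Send a simple text email; both addresses are placeholders.
ses.send_email(
    Source="no-reply@example.com",
    Destination={"ToAddresses": ["user@example.com"]},
    Message={
        "Subject": {"Data": "Welcome aboard"},
        "Body": {"Text": {"Data": "Thanks for signing up!"}},
    },
)
```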


Deepti Manchanda

Content Writer

Deepti, HR Head at Promatics, carries a charming and energetic personality. She has more than five years of experience managing all HR functions at Promatics. Her expertise includes talent management, performance and compensation management, training and development, and employee engagement. She is passionate about helping the business make the most of its resources and talent, and helping individuals make the most of their inherent and latent potential. She works to foster trust among all and to provide an open and friendly environment for everyone. She enjoys interacting with people and listening calmly to her peers. She leaves no stone unturned to maintain healthy employee relations. Her focus and can-do attitude keep her from succumbing to challenges while working in a pressure-filled environment. When she is not at work, she loves travelling with her friends and family.

Still have your concerns?

Your concerns are legit, and we know how to deal with them. Get in touch for a discussion, no strings attached, and we will show you how we can add value to your operations!

+91-95010-82999 or hi@promaticsindia.com