5 Mistakes with Autoscaling That Defy Its Cost Benefits

Jun-21 | Neeraj Gupta

Introduction

Without autoscaling, it is almost impossible to predict server requirements. Imagine, for example, that Apple offers a one-day discount on its products but is unable to meet the demand: the systems come to a screeching halt because they were not provisioned for this surge in traffic.

Autoscaling helps handle surges and lulls in traffic by adding or removing server capacity automatically. This helps you avoid paying for processing power that sits unutilized. However, organizations often fail to leverage autoscaling to its full potential because of a few mistakes in scaling configuration and instance selection, which reduce the cost benefits. The five most common mistakes are listed below.

 

Auto Scaling Mistakes to Avoid

  1. Negligence in Monitoring of Scaling Activity

    Organizations fail to continuously monitor and optimize workloads and, as a result, end up spending more while scaling. Unless scaling activity is proactively monitored, auto-scaling’s cost benefits are only partially realized.

    TO THE NEW’s leading client, one of India's largest women's cosmetics and lifestyle e-commerce platforms, had over 200 servers operating 24 hours a day, seven days a week, and wanted to reduce their AWS costs by using reserved instances. It was observed that the organization had no straightforward mechanism to monitor the total number of running servers or their breakup by purchase option, i.e. on-demand vs. spot. Once the auto-scaling activities were optimized and a scaling schedule was implemented, the client saved $1,700 per month, and server boot-up time was also reduced.

    While AWS Auto Scaling allows you to specify how your fleet responds to changes in demand automatically, ensuring application efficiency and availability, it is important to track and maintain:

    • A regular schedule for instances
    • A scaling policy responding to dynamic demand
    • Maximum and minimum limits
    • Resource usage
    • Demand metrics
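A back-of-the-envelope calculation shows why a scaling schedule like the one above pays off. The sketch below compares a fleet running 24x7 against one scaled down outside business hours; all rates, fleet sizes, and hours are hypothetical examples, not figures from the case study.

```python
# Rough estimate of savings from scheduled scaling: instead of running a
# fixed fleet around the clock, scale down outside business hours.
# All figures below are assumed illustrative values.

HOURLY_RATE = 0.096          # assumed on-demand $/hour per instance
FLEET_SIZE = 20              # instances in the auto-scaling group
BUSINESS_HOURS_PER_DAY = 12  # hours the full fleet is actually needed
OFF_HOURS_FLEET = 4          # minimum instances kept outside that window

def monthly_cost(instances_by_hour, rate=HOURLY_RATE, days=30):
    """Cost of running `instances_by_hour` (a 24-entry list) every day."""
    return sum(instances_by_hour) * rate * days

always_on = [FLEET_SIZE] * 24
scheduled = ([FLEET_SIZE] * BUSINESS_HOURS_PER_DAY
             + [OFF_HOURS_FLEET] * (24 - BUSINESS_HOURS_PER_DAY))

savings = monthly_cost(always_on) - monthly_cost(scheduled)
print(f"Estimated monthly savings: ${savings:,.2f}")
```

Even with these modest assumptions, the schedule removes nearly 200 idle instance-hours per day from the bill.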
  2. Choosing an Incorrect Instance or Scaling Metric

    Choosing an instance solely to reduce time to market is a common mistake. If no scaling activity ever occurs, i.e. the minimum and desired instance counts always remain the same, the scaling metric is likely wrong. Not all AWS Auto Scaling choices are created equal, so think carefully about which one you choose. Each application's resource needs should be identified thoroughly, along with the key constraints that can hinder success.

    It is also a good idea to enable Auto Scaling group metrics, since real capacity data is otherwise not shown in the capacity forecast graphs that appear after the Create Scaling Plan wizard is completed. Select an appropriate instance family and instance type for the workload. This can be done based on prior knowledge and experience with the workload (e.g., having run a similar workload in the production environment) or on the workload's prerequisite specifications. At initial setup, the use of latest-generation, AMD/Graviton-based instance types and lower-configuration machines is strongly recommended; these can be scaled up or down depending on usage. Typically, companies over-provision instance types at first and then fail to review the workload, resulting in cost inefficiency and resource waste.
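To make the over-provisioning point concrete, here is a minimal sketch comparing a fleet sized for rare peaks against a smaller baseline that lets auto scaling add capacity only when needed. The prices, fleet sizes, and peak window are hypothetical, not AWS quotes.

```python
# Illustrative comparison: over-provisioning versus starting with smaller
# machines and letting auto scaling add capacity on demand.
# Prices and utilisation figures below are assumed examples.

PRICES = {           # assumed $/hour per instance type
    "large":  0.096,
    "xlarge": 0.192,
}

def monthly_cost(instance_type, count, hours=730):
    """Cost of running `count` instances of a type for `hours` per month."""
    return PRICES[instance_type] * count * hours

# Over-provisioned: 10 xlarge machines sized for rare peaks, mostly idle.
over_provisioned = monthly_cost("xlarge", 10)

# Right-sized: 10 large machines as a baseline, plus 5 extra large
# instances that auto scaling runs for roughly 4 hours a day at peak.
right_sized = monthly_cost("large", 10) + monthly_cost("large", 5, hours=4 * 30)

print(f"over-provisioned: ${over_provisioned:,.2f}/month")
print(f"right-sized:      ${right_sized:,.2f}/month")
```

The right-sized configuration serves the same peak while paying for the extra capacity only during the hours it is actually used.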

  3. Not Maintaining the Target Utilization Threshold

    Based on realistic conditions, we suggest carefully selecting the scale-out and scale-in thresholds. When the two thresholds are identical or too close together, the group can alternate between scaling in and scaling out as utilization hovers around them, so keep this behaviour in mind and leave enough space between the thresholds. Manually managing step scaling for 14-15 auto-scaling groups and tweaking each step-scaling threshold can be challenging; target tracking scaling policies make it possible to scale workloads based on request count and to manage dynamic, load-based scaling seamlessly.
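The flapping behaviour is easy to see in a toy simulation. The sketch below evaluates a made-up utilization trace against identical thresholds versus well-spaced ones; the trace and threshold values are illustrative assumptions only.

```python
# Tiny simulation of threshold flapping. When the scale-out and scale-in
# thresholds coincide, a load level hovering near them makes the group
# alternate between adding and removing capacity on every evaluation.
# The load trace and thresholds below are made-up illustrative numbers.

def scaling_actions(loads, scale_out_at, scale_in_at):
    """Return the scaling actions triggered by a per-instance
    utilisation trace (percent), given the two thresholds."""
    actions = []
    for load in loads:
        if load > scale_out_at:
            actions.append("out")
        elif load < scale_in_at:
            actions.append("in")
    return actions

# Utilisation hovering around 60%.
trace = [58, 62, 59, 63, 57, 61, 58, 62]

close = scaling_actions(trace, scale_out_at=60, scale_in_at=60)   # no gap
spaced = scaling_actions(trace, scale_out_at=70, scale_in_at=40)  # 30% gap

print("same thresholds:  ", close)   # churns on every data point
print("spaced thresholds:", spaced)  # no churn at all
```

With no gap, every sample triggers an action in alternating directions; with a 30-point gap, the same trace triggers none.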

  4. Working with Zombie Resources

    AWS zombies are idle (and sometimes hidden) resources that continue to add to the AWS Cloud's running costs without generating much value. Owing to a misconfiguration in an auto-scaling group's initial setup, it was discovered that a customer was overspending on EBS volumes. Multiple auto-scaling groups were present, each with the appropriate instance type and aggressive scaling policies. Workloads scaled 4-5 times a day, and about 25-30 instances were launched per day. The only mistake was failing to set ‘delete’ on termination for the attached EBS volumes when creating the AMI, so every terminated instance left an orphaned volume behind.
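A periodic sweep for unattached volumes catches this class of zombie. The sketch below filters records shaped like EC2's DescribeVolumes response; the sample data and the per-GiB price are fabricated for illustration (in practice you would feed in the real API response, e.g. via boto3).

```python
# Sketch of spotting zombie EBS volumes. The records below are fabricated,
# simplified samples in the shape of an EC2 DescribeVolumes response;
# the per-GiB-month price is an assumed example figure.

SAMPLE_VOLUMES = [
    {"VolumeId": "vol-001", "State": "in-use",    "Size": 100},
    {"VolumeId": "vol-002", "State": "available", "Size": 500},  # orphaned
    {"VolumeId": "vol-003", "State": "available", "Size": 200},  # orphaned
]

GIB_MONTH_PRICE = 0.08  # assumed $/GiB-month

def zombie_volumes(volumes):
    """Volumes left 'available' (unattached) after instance termination."""
    return [v for v in volumes if v["State"] == "available"]

zombies = zombie_volumes(SAMPLE_VOLUMES)
wasted = sum(v["Size"] for v in zombies) * GIB_MONTH_PRICE
print(f"{len(zombies)} zombie volumes, ~${wasted:,.2f}/month wasted")
```

At the scale described above, 25-30 launches a day, a few hundred GiB of orphaned volumes accumulates into a significant monthly bill within weeks.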

  5. Imbalance in using Spot Instances

    Spot Instances are not appropriate for workloads that are rigid, stateful, fault-intolerant, or tightly coupled between instance nodes. For suitable workloads, however, on-demand and spot instances should be used in balance. Thorough testing of the workloads in a non-production environment is a must before adopting spot instances in the production environment.
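The cost upside of a balanced fleet can be sketched with a simple blended-rate calculation: keep a small on-demand baseline for stateful or critical work and serve the rest from spot capacity. The rates and the spot discount below are assumed examples, not current AWS prices.

```python
# Back-of-the-envelope blended cost for a fleet that keeps a small
# on-demand baseline and serves the remainder from spot capacity,
# similar in spirit to an ASG mixed-instances setup.
# Rates and the spot discount are assumed illustrative figures.

ON_DEMAND_RATE = 0.096             # assumed on-demand $/hour
SPOT_RATE = ON_DEMAND_RATE * 0.3   # spot often trades at a steep discount

def blended_hourly_cost(fleet_size, on_demand_base):
    """Hourly fleet cost with `on_demand_base` on-demand instances
    and the rest of the fleet on spot."""
    spot_count = max(fleet_size - on_demand_base, 0)
    return on_demand_base * ON_DEMAND_RATE + spot_count * SPOT_RATE

all_on_demand = blended_hourly_cost(20, on_demand_base=20)
balanced = blended_hourly_cost(20, on_demand_base=4)

print(f"all on-demand: ${all_on_demand:.3f}/h, balanced: ${balanced:.3f}/h")
```

Under these assumptions the balanced fleet runs at well under half the all-on-demand rate, while the on-demand baseline still shields the fault-intolerant portion of the workload from spot interruptions.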

 

Conclusion

Setting up auto scaling is a complex and time-consuming task for any engineering team. In the coming years, more tools will become available to aid in setting up auto scaling, but such cloud management systems are still hard to come by. As a result, the efficiency of any environment's auto-scaling mechanism will be determined by the factors discussed above.

Learn the best practices that must be followed while implementing auto-scaling on the AWS cloud to save costs.

by Neeraj Gupta

Neeraj Gupta helps customers across multiple segments in adopting Cloud and building successful products on the AWS platform. As a Solution Architect at TO THE NEW, he is responsible for guiding customers in their AWS cost optimization initiatives. Over the past few years, he has helped our clients successfully bring down their AWS spend by 30%-40%.