Traditionally, applications were developed to scale up rather than scale out. When the workload or performance requirement increases, the first reaction is to add more RAM or CPU, or to upgrade to higher-performance storage – for both physical and virtualized servers.
With today’s technology and the shift towards the software-defined data center (SDDC), almost anything can be delivered as software and via software: configuration, automatic provisioning, reclamation, mobility, and management by policies and rules. This opens up opportunities for standardization, modularization and simplification in infrastructure and application designs, so that these can be put into templates and easily cloned for automatic provisioning, instantiation and scale-out.
Organizations can take advantage of the capabilities of SDDC and cloud to build scalable applications that can scale out when demand increases, and scale in when demand falls. The following are some key design considerations when architecting applications for the cloud.
Design for Availability and Continuity
Applications and the infrastructure they run on should be designed for failure. The key design principle is to expect failures anywhere and everywhere, and to put in place measures and controls to handle those failures and mitigate their impact. Figure 1 below shows factors and approaches for designing for availability and continuity.
- Cost and budget will determine your design for availability and continuity; this is a balance every organization has to strike. The cost of a design can rise quickly with the number of instances, bandwidth requirements, the complexity of the application and environment, and monitoring requirements. This is further elaborated in the following points.
- To ensure availability and continuity of services and applications, monitoring and automatic recovery solutions are required. Solutions such as VMware’s Operations Management suite (vCenter Operations Manager or vFabric Hyperic) help to proactively detect problems and prevent downtime. VMware HA and Site Recovery Manager automatically move workloads to an available host or an alternative site, with minimum disruption, should a failure strike.
- Redundancy needs to be incorporated into architecture designs to eliminate single points of failure. There are two approaches to this: local redundancy and site redundancy.
- While redundancy eliminates single points of failure and increases availability, resiliency takes it a step further: even if something breaks, the service remains uninterrupted and the failure is transparent to the user. An organization can deploy one or a combination of the methods shown in Figure 1 to increase resiliency and ensure business continuity in the worst situations.
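The redundancy-plus-resiliency idea above can be sketched in a few lines: a client retries the same request against redundant replicas, so a single-node failure never reaches the user. This is a minimal illustration only – the endpoint names and the simulated call are hypothetical, not part of any specific VMware product.

```python
# Minimal sketch of failover across redundant replicas.
# Endpoint names and the simulated service call are hypothetical.
ENDPOINTS = ["app-node-1", "app-node-2", "app-node-3"]

def call_service(endpoint: str, payload: str) -> str:
    """Simulate a service call; pretend the first node is down."""
    if endpoint == "app-node-1":
        raise ConnectionError(f"{endpoint} unreachable")
    return f"{endpoint} processed {payload}"

def resilient_call(payload: str) -> str:
    """Try each redundant endpoint in turn. The caller only sees an
    error if every replica has failed."""
    last_error = None
    for endpoint in ENDPOINTS:
        try:
            return call_service(endpoint, payload)
        except ConnectionError as err:
            last_error = err  # record the failure, try the next replica
    raise RuntimeError("all replicas failed") from last_error

print(resilient_call("order-42"))  # served by app-node-2 despite node 1 being down
```

In practice this retry logic lives in a load balancer or cluster manager rather than in application code, but the principle – mask individual failures behind a pool of replicas – is the same.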
Design for Scalability and Performance
Having a highly available and resilient infrastructure lays the foundation for designing applications for scalability and performance. As the saying goes, you are only as strong as your weakest link: there is no point running your best-designed application on poorly designed infrastructure.
The design principle behind scalability and performance is to eliminate single points of failure and apply one or more of these techniques:
- Horizontal vs. Vertical Partitioning
- Horizontal vs. Vertical Scaling
This is shown in Figure 2.
- The web or application tier is where you deploy load balancers to increase availability and distribute workloads across nodes to optimize capacity.
To improve scalability, applications can be designed to be:
- Partitioned vertically, i.e. split by function, so that each application performs a specific function or service
- Partitioned horizontally, i.e. split by data, so that identical application instances each serve a subset of the data or users
The application can then be scaled up or scaled out to meet increasing demand.
- At the database tier, the database can be horizontally scaled as shown in Figure 2. However, there is a limit to how far it can be scaled this way. To overcome this, the database can be further partitioned vertically or horizontally.
- As data grows, to allow faster access and better application performance, consider adding an in-memory data grid such as vFabric GemFire, which allows large volumes of data to be read and written in memory. The in-memory data grid can be clustered and scaled easily to meet changing demands.
- Lastly, a clustered storage system such as VPLEX or Isilon enables scalability and resiliency at the shared-storage level to support the applications.
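The horizontal database partitioning described above is commonly implemented as hash-based sharding: each record key is hashed and routed to one shard, so every shard holds a disjoint slice of the data and the same key always lands on the same shard. A hedged sketch, with hypothetical shard names:

```python
import hashlib

# Sketch of horizontal database partitioning (sharding).
# Shard names are hypothetical; a real system would map them to databases.
SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2", "db-shard-3"]

def shard_for(key: str) -> str:
    """Deterministically map a record key onto one shard.
    The same key always routes to the same shard, so each shard
    owns a disjoint subset of the data."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

for customer in ["alice", "bob", "carol"]:
    print(customer, "->", shard_for(customer))
```

Note that this simple modulo scheme reshuffles most keys when shards are added or removed; production systems typically use consistent hashing or a directory service to keep rebalancing cheap.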
To take full advantage of, and reap maximum benefit from, SDDC and cloud, organizations have to start architecting applications to be scalable and to run on highly available, resilient infrastructure.