There are lots of statistics – most of them probably apocryphal – about how much money a slow website can cost you. You only need to know this: slow web applications enrage your users.
Avoid enraging your users by following these tips:
First things first – make sure performance is up to scratch.
Web acceleration and regional distribution
Cache static resources on caching servers distributed regionally or worldwide. This brings those resources closer to your users and reduces the round-trip time between a request and its response. If your origin web servers go down, the regional nodes can continue to serve static and reusable dynamic content, ensuring an uninterrupted experience for your users.
Additionally, a web accelerator can speed up delivery via a number of techniques, including data compression, optimising code, filtering out undesirable objects and maintaining consistent TCP connections between the client and the proxy server.
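To illustrate why data compression is such an effective acceleration technique, here is a minimal Python sketch. The payload is made up, but real pages shrink similarly well because HTML markup is highly repetitive:

```python
import gzip

# A hypothetical HTML payload; markup repeats heavily, so it compresses well.
html = ("<div class='item'>Example product listing</div>\n" * 200).encode("utf-8")

compressed = gzip.compress(html)

print(f"original:   {len(html)} bytes")
print(f"compressed: {len(compressed)} bytes")
print(f"ratio:      {len(compressed) / len(html):.2%}")
```

Fewer bytes on the wire means fewer packets and faster delivery, which is why accelerators and most web servers apply gzip (or Brotli) to text content by default.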
Dynamic content caching
Going a step further than regional distribution alone, dynamic content caching stores the first instance of a dynamically generated webpage; further requests for that same page go to the cache rather than generating the page again.
In this context, an important metric is the cache-hit ratio: how much content is delivered from cache compared with origin servers. Dynamic cache control increases this ratio – and the higher the ratio, the faster your content reaches your users.
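Both ideas can be sketched in a few lines of Python. The class name, TTL and render function below are illustrative, not any particular product's API:

```python
import time

class DynamicPageCache:
    """Minimal sketch of dynamic content caching with hit-ratio tracking."""

    def __init__(self, render, ttl_seconds=60):
        self.render = render      # stands in for expensive page generation
        self.ttl = ttl_seconds
        self.store = {}           # url -> (rendered_page, expiry_time)
        self.hits = 0
        self.misses = 0

    def get(self, url):
        entry = self.store.get(url)
        if entry and entry[1] > time.monotonic():
            self.hits += 1        # served from cache; origin untouched
            return entry[0]
        self.misses += 1          # first request (or expired): generate and store
        page = self.render(url)
        self.store[url] = (page, time.monotonic() + self.ttl)
        return page

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

cache = DynamicPageCache(lambda url: f"<html>page for {url}</html>")
for _ in range(4):
    cache.get("/products")                     # 1 miss, then 3 hits
print(f"hit ratio: {cache.hit_ratio():.0%}")   # → 75%
```

The TTL matters: too short and the ratio falls, too long and users see stale pages – which is exactly what dynamic cache control tunes.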
Networks are often the least advanced segment of an IT estate. Yet your web applications can only ever be as fast and secure as the network down which they are delivered. Ensure that your network is reliable and secure.
If your web application is private, maximise its potential by delivering its content via a private MPLS network – avoid the vagaries of the public internet wherever possible.
You can have the fastest car in the world, but if the roads can’t take the traffic, you won’t be going anywhere. It doesn’t matter how optimised your web application is if you can’t scale correctly.
Here are a couple of ideas:
SSL optimization and offloading
SSL offloading relieves a web server of having to encrypt and decrypt SSL traffic – this is instead handled by a separate device designed specifically for the task. Freeing up web server resources in this way reduces costs and makes your solution more scalable.
Explore the public cloud
The public cloud is a hosting service that can scale up via APIs. New servers don’t need to be spun up and down manually – the process is automatic, making it the easiest way to scale.
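As a sketch of the kind of policy an autoscaler applies, here is a hypothetical capacity calculation in Python. The function name, thresholds and parameters are assumptions for illustration; in practice the result would be passed to your cloud provider's autoscaling API rather than computed by hand:

```python
import math

def desired_capacity(requests_per_sec, capacity_per_server, minimum=2, maximum=20):
    """Pick a server count for the current load (illustrative policy only).

    Keeps at least `minimum` servers for redundancy and caps the fleet
    at `maximum` to bound cost.
    """
    needed = math.ceil(requests_per_sec / capacity_per_server)
    return max(minimum, min(maximum, needed))

# 950 req/s with servers that each handle ~100 req/s:
print(desired_capacity(requests_per_sec=950, capacity_per_server=100))  # → 10
```

The point of the public cloud is that this loop – measure load, compute capacity, resize the fleet – runs continuously without anyone provisioning hardware.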
Ensuring that your content can be accessed as easily as possible is crucial.
Front your website using a highly-available content delivery network (CDN) service that supports multi-node replication. If someone in Germany accesses ‘domain.com’ the CDN will recognise the location of the user and redirect them to ‘domain.de’ accordingly – resulting in much better performance for the user.
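The geo-routing decision can be sketched as a simple lookup – the domains and function below are hypothetical stand-ins for what a CDN does internally after resolving the user's location:

```python
# Illustrative country-to-domain map; a real CDN derives the country
# from the client's IP address via a geolocation database.
REGIONAL_DOMAINS = {
    "DE": "domain.de",
    "FR": "domain.fr",
    "GB": "domain.co.uk",
}

def route_request(country_code, default="domain.com"):
    """Return the regional domain for a user's country, or the default."""
    return REGIONAL_DOMAINS.get(country_code, default)

print(route_request("DE"))   # → domain.de
print(route_request("US"))   # → domain.com (fallback)
```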
Local and global load balancing
Load balancers distribute traffic so that the burden of traffic is spread out equally amongst your available servers. This results in improved uptime, better response times and higher throughput.
Local load balancing distributes traffic between local servers. Global load balancing distributes traffic between data centres.
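As an illustration of the local case, here is a minimal round-robin balancer in Python; real load balancers add health checks, weighting and session affinity on top of this basic rotation:

```python
import itertools

class RoundRobinBalancer:
    """Minimal local load balancer: spreads requests evenly over servers."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)  # endless rotation over the pool

    def next_server(self):
        return next(self._cycle)

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print([lb.next_server() for _ in range(6)])
# → ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1', '10.0.0.2', '10.0.0.3']
```

Global load balancing applies the same idea one level up, choosing between data centres (usually via DNS) rather than between individual servers.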
Multi-node disaster recovery
If your content is distributed between multiple nodes, global load balancing between them also allows for Disaster Recovery benefits. Imagine a company with two data centres. If an individual server within one data centre fails, traffic can be redirected to a parallel server in the second site. If the whole data centre fails, likewise, traffic can be seamlessly redirected to the second data centre.
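The failover logic described above can be sketched as a priority-ordered health check – the data-centre names and health map here are illustrative:

```python
def pick_data_centre(centres, healthy):
    """Return the first healthy data centre, in priority order.

    `centres` is a priority-ordered list; `healthy` maps name -> bool,
    as reported by (hypothetical) health-check probes.
    """
    for dc in centres:
        if healthy.get(dc, False):
            return dc
    raise RuntimeError("no healthy data centre available")

centres = ["site-primary", "site-secondary"]
print(pick_data_centre(centres, {"site-primary": True,  "site-secondary": True}))
# → site-primary
print(pick_data_centre(centres, {"site-primary": False, "site-secondary": True}))
# → site-secondary (automatic failover)
```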
Beyond taking steps to improve performance, you can also put measures in place that prevent performance being negatively impacted by external forces.
Web application firewall
Use a web application firewall to filter out unwanted requests; this frees up web server resources and improves the performance of your application.
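A toy sketch of the filtering idea in Python; real WAFs use far richer rule sets (for example the OWASP Core Rule Set), so the two patterns below are only placeholders:

```python
import re

# Illustrative deny-list – placeholders, not production rules.
BLOCKED_PATTERNS = [
    re.compile(r"(?i)\bunion\s+select\b"),   # crude SQL-injection signature
    re.compile(r"(?i)<script\b"),            # crude XSS signature
]

def allow_request(path_and_query):
    """Return True if the request should be passed on to the web server."""
    return not any(p.search(path_and_query) for p in BLOCKED_PATTERNS)

print(allow_request("/products?id=42"))                       # → True
print(allow_request("/products?id=1 UNION SELECT password"))  # → False
```

Every request rejected here never touches your application servers, which is where the performance benefit comes from.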
DoS protection
Reroute your traffic through a purpose-built DoS protection platform, which monitors and analyses traffic patterns in real time. When a DoS attack is detected, traffic is directed to the nearest ‘scrubbing centre’, where the ‘good’ traffic is filtered from the ‘bad’ to minimise the attack’s impact. The clean traffic is then redirected back into the customer’s network.
This technique is not only highly effective; it also reduces the need for expensive hardware and saves you from spending time configuring complicated equipment such as routers and firewalls.
If you work with a provider that has access to a large network, they will be able to absorb more traffic and protect you from larger DoS attacks than you could have handled on your own.
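The real-time traffic-pattern analysis such a platform performs can be sketched as a sliding-window rate check per source; the limit and window below are illustrative, and real systems look at many more signals than request rate:

```python
from collections import deque

class RateMonitor:
    """Flag a source that exceeds `limit` requests within `window` seconds."""

    def __init__(self, limit=100, window=1.0):
        self.limit = limit
        self.window = window
        self.events = {}   # source -> deque of request timestamps

    def is_suspicious(self, source, now):
        q = self.events.setdefault(source, deque())
        q.append(now)
        while q and q[0] <= now - self.window:
            q.popleft()            # drop requests outside the window
        return len(q) > self.limit

monitor = RateMonitor(limit=5, window=1.0)
burst = [monitor.is_suspicious("attacker", 0.0) for _ in range(10)]
print(burst.count(True))   # → 5 (everything past the limit is flagged)
```

Sources flagged in this way are the ones whose traffic gets steered to a scrubbing centre, while everyone else's requests pass through untouched.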