Stack Infrastructure and Data Center Frontier continue their special report series by highlighting the role of network connections in data center site selection.
Network latency is bad for business and getting worse as customers grow increasingly intolerant of slow response times. The business impact can be substantial.
- Akamai reported in 2017 that a 100-millisecond delay in website load time can hurt conversion rates by 7% (an eyeblink takes about 300 milliseconds) and that a two-second delay in web page load time increases bounce rates by 103%.
- Google reported that as page load times increase from one to five seconds, the probability that users will leave increases by 90%.
- Mux estimated that a single buffering event reduces viewership of an online video by 39% and the amount of time visitors spend with a video by nearly 40%.
- Kissmetrics found that 47% of consumers expect a webpage to load in two seconds or less and 40% abandon a page that takes more than three seconds to load.
Data centers require reliable, robust and scalable network connections. These needs should be considered early in the planning process.
A common misconception is that latency is purely a function of the data center’s proximity to end users. In fact, proximity to exchanges and cloud on-ramps is often the more important factor. A data center may get better performance by interconnecting with a network-dense facility 500 miles away than with a resource-starved one 150 miles away.
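Some rough arithmetic illustrates why the extra distance matters less than it might seem. The sketch below estimates round-trip propagation delay alone, assuming light travels through fiber at roughly 200,000 km/s (about 5 microseconds per kilometer, a commonly used rule of thumb) and ignoring routing, queuing, and equipment delays:

```python
# Rough propagation-delay comparison for the 500-mile vs. 150-mile example.
# Assumption: ~5 us of one-way delay per km of fiber; real paths add
# routing, queuing, and equipment latency on top of this floor.
KM_PER_MILE = 1.609
US_PER_KM = 5  # approximate one-way propagation delay in fiber

def rtt_ms(miles):
    """Best-case round-trip time in milliseconds over a fiber path."""
    return 2 * miles * KM_PER_MILE * US_PER_KM / 1000

print(round(rtt_ms(500), 1), "ms")  # ~8.0 ms round trip
print(round(rtt_ms(150), 1), "ms")  # ~2.4 ms round trip
```

The gap between the two distances is only a few milliseconds of propagation time, which a congested or poorly connected facility can easily erase through extra hops and queuing.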
The bandwidth and latency needs of individual data centers vary by the type of traffic that traverses the network. For example, financial applications are likely to be more sensitive to latency issues than website hosting. Capacity and performance demands are also likely to change due to financial conditions, application availability needs and target markets. It is important to understand the nature of the traffic that will be passing over the network.
It’s also important to know what percentage of responses can be generated internally versus requiring queries over the public Internet.
For example, traditional OLTP workloads largely generate predictable north-south traffic between users and servers. With the growth of social media and mobile apps, those traffic patterns are more complex. A single query may now generate a significant amount of east-west traffic as user demographics, browsing history and recent purchases are factored into the response as well as ad-serving choices generated by real-time auctions. Hyperscale applications are also increasingly making use of microservices, which can cause a single query to generate hundreds of downstream requests to other servers. The slowest microservice can drag down overall response times.
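The fan-out effect described above can be sketched with a small simulation. The figures here are illustrative assumptions, not measurements: each downstream call is usually fast but occasionally slow, and the query must wait for the slowest one, so the odds of hitting at least one slow call grow quickly with the number of services:

```python
import random

random.seed(42)

def fan_out_latency_ms(n_services, p_slow=0.01, fast_ms=20,
                       slow_ms=500, trials=10_000):
    """Estimate mean response time when a query fans out to n_services
    parallel microservices and must wait for the slowest reply.
    All parameters are illustrative assumptions."""
    total = 0.0
    for _ in range(trials):
        # Each call is fast with probability 1 - p_slow, slow otherwise.
        worst = max(slow_ms if random.random() < p_slow else fast_ms
                    for _ in range(n_services))
        total += worst
    return total / trials

for n in (1, 10, 100):
    print(n, "services ->", round(fan_out_latency_ms(n)), "ms")
```

With a 1% chance of a slow call, a 100-service fan-out hits at least one slow service about 63% of the time (1 − 0.99^100), so the mean response time is dominated by the tail even though each individual service is almost always fast.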
The availability of dark fiber is also a key concern, because installing new fiber is expensive and time-consuming. At the right scale, dark fiber allows rapid, cost-effective deployment of network capacity by lighting additional channels on fiber already in the ground. Take the time to determine whether dark fiber is available in the area and what it would cost to activate it.
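A quick back-of-the-envelope calculation shows why lighting channels on existing dark fiber scales so well. The channel count and line rate below are illustrative assumptions (a fully populated C-band DWDM system at 100 Gbps per wavelength), not figures from the report:

```python
def fiber_capacity_tbps(channels, rate_gbps):
    """Total capacity from lighting DWDM channels on one fiber pair."""
    return channels * rate_gbps / 1000

# Assumed example: 96 C-band channels at 100 Gbps each.
print(fiber_capacity_tbps(96, 100), "Tbps")  # 9.6 Tbps
```

Each additional channel is an equipment change rather than a construction project, which is what makes dark fiber a fast path to new capacity once it is in the ground.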
Catch up on the first and second entry in the series. And stay tuned for more data center site selection information on stakeholders, preparing for the unexpected and more, in the coming weeks.
Get the new special report, “Five Things To Know About Data Center Site Selection,” courtesy of Stack Infrastructure, to learn more about how to successfully approach the data center site selection process.