The keys to successful, high-performing websites

December 02, 2017. Posted in Expertise

Web response times: it’s all about the browser… and other things!

It is useful to start with a comparative study of how browsers affect website response times. We have the tools to test one or more sites whose visitors use a variety of browsers. Testing only the top 3 (Chrome, Firefox, IE) is not enough, because the proportion of visitors using these 3 browsers varies greatly from one country to the next and from one website to the next. Those 3 may account for the majority of connections in some countries… or not. Depending on the country, there are major websites where these 3 browsers together account for less than half of all connections to the site.

Multiple browser web monitoring

All of this is to say that we really have to monitor response times across a wide range of browsers. Even if you test 10 browsers on a selection of sites, you only cover about 90% of the overall usage context. Still, the results show that there is no single ideal browser, because the speed of each also depends on other factors.

So we have to take a closer look. Characteristics other than those of the browser come into play. Response times may differ depending on where the user is, the design of the website being monitored, the version of the browser, the type of connection, and more. Response time results can also change fast: to take only the case of Chrome, measurements show that the situation evolves roughly every 6 weeks, with each new release. The same goes for Firefox. So for the same website, results can differ radically (much faster or much slower) depending on the version of the browser. Furthermore, and for a variety of reasons, on some sites the fastest browser can be an old version.
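To make version-to-version differences visible, raw timing samples can be aggregated per browser and per version. Here is a minimal sketch of that aggregation; the function name and the sample data are illustrative, not taken from any particular monitoring product:

```python
from collections import defaultdict
from statistics import median

def median_by_browser_version(samples):
    """Group raw timing samples by (browser, version) and return the
    median response time, in seconds, for each group."""
    groups = defaultdict(list)
    for browser, version, seconds in samples:
        groups[(browser, version)].append(seconds)
    return {key: median(times) for key, times in groups.items()}

# Made-up samples: the same site, measured on two Chrome versions.
samples = [
    ("Chrome", "62", 1.9), ("Chrome", "62", 2.1),
    ("Chrome", "63", 1.4), ("Chrome", "63", 1.6),
    ("Firefox", "57", 2.3), ("Firefox", "57", 2.5),
]
print(median_by_browser_version(samples))
```

With enough samples per group, a jump in the median between two versions of the same browser stands out immediately.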

Geolocation also accounts for differences: in a comparison of browsers on a single website, the performance differences were not very significant in Spain, while in Germany we saw a variation by browser of 0 to 1.7 seconds. And the fastest browser was not even the same one when connecting from France, Germany, or Spain! These are a few reasons why the increasing compliance of browsers with new standards does not eliminate the need to test across browsers in order to cover the entire web audience.
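The "fastest browser differs by country" observation can be reproduced by grouping samples on (country, browser) and keeping, per country, the browser with the lowest mean. A sketch, assuming hypothetical (country, browser, seconds) tuples:

```python
from collections import defaultdict
from statistics import mean

def fastest_browser_per_country(samples):
    """Return, for each country, the browser with the lowest mean
    response time. `samples` yields (country, browser, seconds)."""
    groups = defaultdict(list)
    for country, browser, seconds in samples:
        groups[(country, browser)].append(seconds)
    best = {}  # country -> (browser, mean seconds)
    for (country, browser), times in groups.items():
        avg = mean(times)
        if country not in best or avg < best[country][1]:
            best[country] = (browser, avg)
    return {country: browser for country, (browser, _) in best.items()}

# Made-up numbers echoing the article: close in Spain, spread in Germany.
samples = [
    ("DE", "Chrome", 1.7), ("DE", "Firefox", 0.9),
    ("ES", "Chrome", 1.1), ("ES", "Firefox", 1.2),
]
print(fastest_browser_per_country(samples))  # {'DE': 'Firefox', 'ES': 'Chrome'}
```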


Response times and performance

It is possible to get an overall picture by combining measurements from RUM (real user monitoring) and active monitoring. These show that the differences between browsers are sometimes due to the websites themselves. On some sites, very small differences between browser response times can be explained by the site's overall sluggishness. Measurements provide information for diagnostics. They can indicate, for instance, that a complex page includes a lot of calls to external domains, that content is delivered from a different server depending on which browser is used, or that there is a problem with how a certain browser handles JavaScript.
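One of those diagnostics, counting calls to external domains, is easy to sketch from the list of request URLs a measurement captures (for example, extracted from a HAR file). The URLs and hostnames below are hypothetical:

```python
from urllib.parse import urlparse

def external_domains(request_urls, site_host):
    """Return the set of third-party hosts a page contacts, given all
    request URLs and the site's own hostname (subdomains excluded)."""
    hosts = {urlparse(url).hostname for url in request_urls}
    hosts.discard(None)
    return {h for h in hosts
            if h != site_host and not h.endswith("." + site_host)}

urls = [
    "https://www.example.com/index.html",
    "https://cdn.example.com/app.js",
    "https://fonts.gstatic.com/font.woff2",
    "https://analytics.tracker.io/beacon",
]
print(external_domains(urls, "example.com"))
```

A large or growing result set is a hint that third-party content, not the browser, is driving the page's response time.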

For troubleshooting application performance, response time alerts can be configured for each browser. In this way you can detect a sudden drop in visitors on a certain browser, realize that a browser is having problems with your site, and explore the reasons for this loss of audience (for instance in the context of acceptance testing). Alerts can also be configured by location, browser version, and many other criteria. In addition, the results of active monitoring, presented in a waterfall chart, show you difficulties related to domains or JavaScript responsiveness (for front-end optimization, etc.).
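The per-browser alerting rule reduces to comparing each browser's latest measurement against its own threshold. A minimal sketch, with made-up browsers and limits; real monitoring tools expose equivalent rules through their own configuration:

```python
def check_alerts(latest_seconds, thresholds_seconds):
    """Return the sorted list of browsers whose latest response time
    exceeds their configured threshold. Browsers with no threshold
    never alert."""
    return sorted(
        browser
        for browser, seconds in latest_seconds.items()
        if seconds > thresholds_seconds.get(browser, float("inf"))
    )

latest = {"Chrome": 1.2, "Firefox": 3.4, "IE": 2.0}
limits = {"Chrome": 2.0, "Firefox": 2.0, "IE": 2.5}
print(check_alerts(latest, limits))  # ['Firefox']
```

The same shape of rule works for the other criteria mentioned above: keying the dictionaries on (browser, version) or (browser, location) instead of the browser name alone.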
