Big data: how to optimize the value of masses of data

December 02, 2014 · Posted in Expertise

Over 3 billion measurements – this is the volume of data ip-label processes every day across its two technological solutions.

  • Datametrie GX combines synthetic monitoring via robots with real-user monitoring, delivered in SaaS form, for customers who opt for an entirely external system for measuring the perceived quality of digital services.
  • Newtest (complete with real-user monitoring, coming this fall) for customers who wish to administer the solution autonomously, particularly in the case of intranets with stringent security constraints, as well as highly complex business applications.

Thus ip-label is confronted daily with what is now commonly known as BIG DATA.


How to optimize the value of masses of data

Information systems base their decisions on the rules they have been explicitly programmed to follow. When a problem arises, we can backtrack and understand why the system encountered it. For example, we can investigate why an airplane’s automatic pilot angles up five degrees when an external sensor detects a sudden increase in humidity. Data is plentiful, and people who know how to interpret it can track and understand the basis of these decisions, whatever their complexity.
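As a minimal sketch of that traceability (the rule, the humidity threshold, and the function below are invented for illustration, not an actual avionics system), an explicitly programmed system can log every input behind a decision so it can be audited afterwards:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("autopilot")

PITCH_UP_DEGREES = 5.0
HUMIDITY_SPIKE_THRESHOLD = 0.15  # invented threshold for a "sudden increase"

def adjust_pitch(previous_humidity: float, current_humidity: float) -> float:
    """Return the pitch correction dictated by an explicit, auditable rule."""
    delta = current_humidity - previous_humidity
    if delta > HUMIDITY_SPIKE_THRESHOLD:
        # The rule fires; logging the inputs lets us backtrack the decision later.
        log.info("Rule fired: humidity rose by %.2f -> pitch up %.1f degrees",
                 delta, PITCH_UP_DEGREES)
        return PITCH_UP_DEGREES
    log.info("No rule fired: humidity delta %.2f within tolerance", delta)
    return 0.0

adjust_pitch(0.40, 0.60)  # sudden humidity spike: the rule pitches up 5 degrees
```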

Today the focus at ip-label is on a different matter. Given the volume of data in ip-label’s repository, we have made a clear commitment to monitoring and expertise: understanding and processing that data, and supplying dashboards that help our customers make the right decisions, particularly on infrastructure cost-cutting and business impact.
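As a hedged illustration of what such processing can involve (the function, indicator names, and simulated data below are assumptions for the example, not ip-label’s actual pipeline), condensing thousands of raw response-time measurements into a few dashboard indicators might look like this:

```python
import random
import statistics

def dashboard_summary(response_times_ms: list[float]) -> dict[str, float]:
    """Condense raw measurements into the indicators a dashboard displays."""
    q = statistics.quantiles(response_times_ms, n=100)  # 99 percentile cut points
    return {
        "median_ms": q[49],   # 50th percentile
        "p95_ms": q[94],      # 95th percentile
        "worst_ms": max(response_times_ms),
    }

# Simulated measurements standing in for one day's monitoring of a transaction.
random.seed(1)
samples = [random.lognormvariate(5.5, 0.4) for _ in range(10_000)]
print(dashboard_summary(samples))
```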


Never mind causality; now it’s time for correlation.

Obviously it is crucial to detect technical problems and where they occur in order to pinpoint the cause quickly. But that battle has already been fought over the years. Today the point of big data is to surface correlations that we had never before examined in terms of causality.

For instance, when tracking airfares, it is not very compelling to know that they fluctuate with how full the plane is. On the other hand, a correlation that lets you predict whether the price will rise or fall has become a basic necessity for knowing when, or when not, to purchase a plane ticket. There is an algorithm that builds the price of a ticket as a function of the period and the capacity filled; by correlating masses of data, big data makes it possible to plan your ticket purchases at a lower price, as the sketch below illustrates.
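Here is a hedged sketch of that idea in Python; the pricing formula, the trend rule, and every parameter below are invented toy stand-ins for the proprietary algorithms alluded to above:

```python
def ticket_price(base_fare: float, days_to_departure: int, load_factor: float) -> float:
    """Toy pricing rule: fares climb as departure nears and as the plane fills."""
    urgency = max(0.0, 1.0 - days_to_departure / 90)  # 0 far out, 1 at departure
    return base_fare * (1 + 0.8 * urgency) * (1 + 0.5 * load_factor)

def should_buy_now(price_history: list[float]) -> bool:
    """Naive trend detector: buy now if recent prices are mostly rising."""
    recent = price_history[-5:]
    pairs = list(zip(recent, recent[1:]))
    rising = sum(b > a for a, b in pairs)
    return rising > len(pairs) / 2  # strictly more rises than falls -> buy now

# Fares quoted from 60 down to 21 days before departure, as the plane fills up.
history = [ticket_price(200.0, d, min(0.95, 0.3 + 0.01 * (60 - d)))
           for d in range(60, 20, -1)]
print("Buy now" if should_buy_now(history) else "Wait for a better fare")
```

Note that the predictor never asks why fares rise; it simply correlates the recent trend with a buy-or-wait decision, which is exactly the correlation-over-causality point.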

Developing new predictive algorithms has become a decisive issue, and ip-label has embarked on this path to make its data more pertinent. Developing intelligent tools has always been a cornerstone at ip-label. Nevertheless, the nec plus ultra can only be reached in conjunction with the advisory dimension, which, as always, remains an entirely human intervention.

As Henry Ford might have told the tale, had algorithms and data been asked what customers wanted, big data would have replied, “a faster horse”. Big data would not have invented the automobile! Big data is a resource and a tool; its purpose is to inform rather than to explain. However dazzling and almighty it may appear, we should not let ourselves be so captivated that we forget its limitations.

People will continue to be the key because that is where creativity lies.
