Competing with Big Algorithms: A 2017 Outlook on Data & Analytics



January 31, 2017

The recent shift in focus – from big data to big algorithms – stems from the notion that unprocessed data has inherently little value. For example, petabytes of healthcare case records mean nothing unless healthcare companies can translate that data into information that improves treatment effectiveness. The same holds true across industries: thirty years of securities trade history isn't valuable unless it can be used to identify potential securities fraud from active trading patterns. As the Internet of Things continues to bring an influx of data in 2017, data collection is now merely the price of entry in an increasingly competitive analytical landscape. Only companies whose data insights fuel better products, services, and capabilities will compete successfully. Big algorithms help make these insights possible.

There's nothing new about big algorithms. They rely on the same rule-based calculations and operations that computer scientists have used for nearly 30 years, yet as we enter 2017, the conditions for big algorithms have never been better. The falling cost of computing power and storage allows these time-tested algorithms to process enormous amounts of data rapidly. With resulting capabilities like deep learning, neural networks, and natural language processing, bridging the gap between data and insight is more attainable than ever before. Companies can now focus on the information that matters and use that insight to drive decisions still ultimately made by real people.

Unless you're one of the top technology companies in the world (Facebook, Google, Microsoft, and the like), you're probably not optimizing your use of data when it comes to generating the insights that drive better decisions. These companies are not only finding new ways to maximize the amount of big data they capture, but are continually strengthening their infrastructures and cultures to support processing and iterating on that data.

Those top technology companies have already hired hundreds, if not thousands, of researchers dedicated to iterating on and optimizing their efforts around big algorithms. Meanwhile, other Fortune 1000 companies are still in the most fundamental stages of collecting data and haven't yet begun to think about how to make use of it. Although the future may appear bleak for these companies, several prominent cloud providers are building infrastructure that makes the adoption of big algorithms less daunting.

Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure all offer machine learning services that give companies a starting point with existing, pre-trained models. Companies can get their feet wet and begin experimenting with developing and iterating on their own algorithms. These services scale with a company's needs until it can apply big algorithms to its own data and generate insights.
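To make this concrete, the sketch below shows roughly what "starting with a pre-trained model" can look like in practice, using AWS Rekognition's image-labeling service via the boto3 SDK. It assumes boto3 is installed and AWS credentials are configured locally; the file name and thresholds are hypothetical placeholders, not a prescription.

```python
# Illustrative sketch only: calling a pre-trained cloud model (AWS Rekognition via boto3).
# Assumes boto3 is installed and AWS credentials are configured;
# "product_photo.jpg" is a hypothetical input file.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("product_photo.jpg", "rb") as image_file:
    image_bytes = image_file.read()

# The pre-trained model returns labels (objects, scenes) with confidence scores --
# no model training or ML infrastructure is required on the company's side.
response = rekognition.detect_labels(
    Image={"Bytes": image_bytes},
    MaxLabels=10,
    MinConfidence=75.0,
)

for label in response["Labels"]:
    print("{}: {:.1f}%".format(label["Name"], label["Confidence"]))
```

The point is less the specific service than the pattern: a few lines against a managed API stand in for what would otherwise be a dedicated research and infrastructure effort, and the same pattern extends to the comparable services on GCP and Azure.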

With the massive volume of data made available over the past several years, big data alone is no longer a differentiator. Instead, the growing need to translate that data into meaningful information brings big algorithms front and center in 2017. Big algorithms are not a replacement for analytics or data visualization; they improve information quality and increase the value analytics delivers in generating insight. While very few companies are fully leveraging big algorithms today, major cloud providers offer services that both enable and scale these capabilities for companies that plan to compete in the future rather than be left in the past.
