To predict the future, one must look at the past, says the old adage. To determine what to expect in 2017, we thought it best to draw lessons from 2016, despite our industry's yearning for dramatic change. Laks Srinivasan, COO at Opera Solutions, shares his insights into the biggest Big Data trends of 2016 and reflects on where the market is going and how companies will react.
Artificial intelligence is no longer just evolving nomenclature in IT; everyone is taking an interest. With the mainstream press and bloggers from every corner weighing in, it is worth taking stock of the terminology and learning how to differentiate three overused key terms: artificial intelligence (AI), machine learning, and deep learning. The simplest way to think of their relationship is to visualize them as a concentric model (as depicted in the figure below), with AI — the idea that came first and has since been evolving — having the largest area. This is followed by machine learning, which blossomed later and is shown as a subset of AI. Inside both of those is deep learning, which is just one class of machine learning algorithms, but one that is currently driving today's AI explosion.
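To make the "machine learning as a subset of AI" idea concrete, here is a minimal sketch (not tied to any particular vendor or library) of the simplest building block that deep learning scales up: a single logistic neuron trained by gradient descent on a toy dataset. The training data, learning rate, and epoch count are all illustrative assumptions.

```python
import math

def sigmoid(z):
    # Squash a raw score into a probability between 0 and 1
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, epochs=5000, lr=0.5):
    """Learn weights for two inputs plus a bias term via gradient descent."""
    w = [0.0, 0.0, 0.0]  # w[0] is the bias
    for _ in range(epochs):
        for x1, x2, target in samples:
            pred = sigmoid(w[0] + w[1] * x1 + w[2] * x2)
            err = pred - target
            # Gradient of the log-loss with respect to each weight
            w[0] -= lr * err
            w[1] -= lr * err * x1
            w[2] -= lr * err * x2
    return w

# Logical AND as a toy training set: (x1, x2, label)
data = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
w = train(data)
predictions = [round(sigmoid(w[0] + w[1] * x1 + w[2] * x2)) for x1, x2, _ in data]
print(predictions)  # matches the AND targets: [0, 0, 0, 1]
```

Deep learning stacks many such neurons into layers, which is what lets it learn far richer patterns than this single unit can — but the learning mechanism shown here (adjust weights to reduce prediction error) is the same.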
“Didn’t you just go to a similar Big Data conference recently?” my wife asked me. “How much could have changed in a few months?” I was hesitating about attending another conference in a short time span. My wife is right about most things, but in this case, I am glad I didn’t listen and went anyway. I learned about many new advances in both commercial and open source tools and across the whole technology stack: new hardware, in-memory databases, and new-and-improved tools.
Everybody seems to be talking about machine learning these days, and a quick check of Google returns 21.2 million search results for the term. Clearly, this is a popular topic. Yet the term “machine learning” can have many different meanings, depending on the context in which it is being discussed. It is also associated with an equally lengthy list of data science techniques and technologies. Business leaders often feel overwhelmed by this rather bewildering array of terms, analytical approaches, and technology solutions. There are hundreds of algorithms, with new variants seemingly appearing every day. Researching them online does not seem to clarify the choices or point to an obviously superior decision, as most articles target deep experts and revolve around nuances of a particular model or open source package. As a result, business leaders can be reluctant to adopt something they don’t fully understand, resulting in missed opportunities.
The struggle is real — and it’s becoming increasingly apparent to companies that have dipped their toes into popular data science tools. As enterprises test the limits of their new tools, old technology, and data scientists’ time, their infrastructure is starting to show its cracks. Read on to see how these issues are revealing themselves — and more importantly — gather some ideas on what to do about it.
Over the past year, I have been averaging 2–3 customer meetings per week, resulting in over 100 customer and partner conversations around Big Data, analytics, and data science for the enterprise. From these conversations, I have found one key recurring theme: scale. Large enterprises no longer want to build one model quickly or implement just one use case in production. They all struggle with a large backlog of ideas. They need a way to rapidly turn these many ideas into real use cases that deliver tangible business value.
However, many companies simply can't find a pathway to make this happen. Across my numerous conversations, I noticed very similar patterns and identified five common obstacles that can prevent companies from achieving scale in data science.
As a company that works intensively with Fortune 500 companies, our finger is firmly on the pulse of the latest needs, wants, and aspirations of the world’s biggest Big Data drivers. Here are five key trends we’re seeing for 2016 and beyond.
Is now the right time to add machine-learning analytics into your charge-capture management system?
ICD-10 is here, and chances are, as a provider, you're as ready for it as you can be, knowing there could be some hiccups and impact on revenues. Most predict that the impact will be confined to inpatient revenues, driven by significant adjustment issues in grouper methodologies. But what if the impact extends well beyond that, to outpatient revenues?
Marketing automation's time has finally come, but it's been a long, tortuous road.
I’m a big fan of marketing automation. My experience goes back a few years. I was just getting settled at a new company when our CFO received a renewal invoice from a marketing automation vendor. She didn’t know how we were using the software, and neither did I.
The Hadoop Summit conference, hosted by Hortonworks and Yahoo, has become a must-attend Big Data event. The Hadoop distributed computing architecture is now an integral part of what it means to be a data scientist, and a few days of concentrated effort each year is enough to get a vision for where the industry is headed. The Hadoop Summit serves this purpose well by providing thought-provoking technical sessions, keynote addresses, and a vendor exhibition that brings many of the major players in the Hadoop ecosystem together under one roof.
Edited by Yan Zhang
Healthcare fraud, waste, and abuse (FWA) are national problems that affect all of us either directly or indirectly. National estimates project that hundreds of billions of dollars are lost to healthcare FWA on an annual basis. These losses lead to increased healthcare costs and subsequently increased insurance premiums.