Managers and executives at all levels are now expected to be at least familiar with how machine learning models are built and deployed. However, if you don’t have a formal data science education, reading through industry publications is not very helpful. High-level use case descriptions and marketing materials too often present machine learning as a kind of dark magic powering their products; technical publications tend to be incomprehensible to nonspecialists; and how-to guides simply list the steps without explaining why each step is needed, which limits understanding.
In this interview with PYMNTS, Laks Srinivasan discusses the challenges and opportunities Big Data brings to the world's largest enterprises.
Big Data may be everywhere, but that doesn’t mean that companies are able to actually get the most out of it in an efficient and scalable way.
With the conviction that the world’s flow of computable information would eventually become the oil of the twenty-first century, Opera Solutions was launched back in 2004. Its goal was to address the challenges and opportunities emerging from the influx of data generated as more people gained access to technology and the ability to produce even more data.
Artificial Intelligence (AI) is helping the enterprise create dynamic new applications and new ways to better serve customers, prevent and cure diseases, detect security threats, and more. We’re seeing how rapid advances in the field as a whole, as well as in the underlying technology, are leading to more real-world opportunities that are already making a big impact. Speaking at a 2017 panel discussion with The Wall Street Journal, AI luminary Andrew Ng observed, “Things may change in the future, but one rule of thumb today is that almost anything that a typical person can do with less than one second of mental thought we can either now or in the very near future automate with AI.” That’s a startling assessment of what we can expect.
To predict the future, one must look at the past, says the old adage. To determine what to expect in 2017, we thought it was best to draw lessons from 2016 despite our industry’s yearning for dramatic change. Laks Srinivasan, COO at Opera Solutions, shares his insights into the biggest Big Data trends of 2016 and reflects on where the market is going and how companies will react.
Artificial intelligence is no longer just evolving nomenclature in IT. With the technology’s recent progress, organizations of all shapes and sizes are taking interest. With the mainstream press and industry analysts from every corner weighing in, it is worth taking stock of the technology and learning how to differentiate between three arguably over-hyped terms: machine learning, artificial intelligence (AI), and deep learning.
It’s best to consider the concentric model depicted in the figure below. AI is shown as the superset, since it was the idea that came first, and it has been evolving and expanding ever since. A subset of AI is machine learning, which emerged early in the quest for AI. The innermost subset is deep learning, which is just one class of machine learning algorithms. Deep learning is a hot area right now, and the one most often associated with the current rise of AI.
“Didn’t you just go to a similar Big Data conference recently?” my wife asked me. “How much could have changed in a few months?” I was hesitating about attending another conference in a short time span. My wife is right about most things, but in this case, I am glad I didn’t listen and went anyway. I learned about many new advances in both commercial and open source tools and across the whole technology stack: new hardware, in-memory databases, and new-and-improved tools.
Everybody seems to be talking about machine learning these days, and a quick check of Google returns 21.2 million search results for the term. Clearly, this is a popular topic. Yet the term “machine learning” can have many different meanings, depending on the context in which it is being discussed. It is also associated with an equally lengthy list of data science techniques and technologies. Business leaders often feel overwhelmed by this rather bewildering array of terms, analytical approaches, and technology solutions. There are hundreds of algorithms, with new variants seemingly appearing every day. Researching them online does not seem to clarify the choices or point to an obviously superior decision, as most articles target deep experts and revolve around nuances of a particular model or open source package. As a result, business leaders can be reluctant to adopt something they don’t fully understand, resulting in missed opportunities.
The struggle is real — and it’s becoming increasingly apparent to companies that have dipped their toes into popular data science tools. As enterprises test the limits of their new tools, old technology, and data scientists’ time, their infrastructure is starting to show its cracks. Read on to see how these issues are revealing themselves — and more importantly — gather some ideas on what to do about it.
Over the past year, I have been averaging 2–3 customer meetings per week, resulting in over 100 customer and partner conversations around Big Data, analytics, and data science for the enterprise. From these conversations, I have found one key recurring theme: scale. Large enterprises no longer want to build one model quickly or implement just one use case in production. They all struggle with a large backlog of ideas. They need a way to rapidly turn these many ideas into real use cases that deliver tangible business value.
However, many companies simply can’t find a pathway to make this happen. Across my numerous conversations, I noticed very similar patterns and identified five common obstacles that prevent companies from achieving scale in data science.
As a company that works intensively with Fortune 500 companies, our finger is firmly on the pulse of the latest needs, wants, and aspirations of the world’s biggest Big Data drivers. Here are five key trends we’re seeing for 2016 and beyond.
Is now the right time to add machine-learning analytics into your charge-capture management system?
ICD-10 is here and chances are, as a provider, you’re as ready for it as you can be, knowing there could be some hiccups and impact on revenues. Most predict that the impact will be confined to inpatient revenues, driven by significant adjustment issues in grouper methodologies. But what if the impact extends well beyond that, to outpatient revenues?