There is an increasing realization that data platforms, when approached methodically, have a higher propensity to succeed. Ad-hoc implementations of advanced analytics, including exponential technologies such as AI, often deliver marginal benefits and therefore do not allow an organization to derive the best value from its data. The end-to-end functions of an analytics cycle cover much more than just the AI component: much of the heavy lifting happens while sourcing and preparing the data.
Similarly, the visualization of analytical data cannot be an afterthought. In a nutshell, effectiveness can only be realized through a comprehensive approach to deploying a data platform, one that spans several functions: from where the data is sourced, through the necessary transformations, to its consumption by business applications.
The AI Ladder
Deriving business benefits from data requires proper preparation before analysis. Insights can only be as effective as the underlying data. Therefore, collecting relevant data and organizing it in a way that brings out the business context makes advanced analytics far easier to conduct. The AI ladder depicts this methodical approach: collect, organize, analyze, and infuse. Whether the data platform is deployed on the cloud or on-premises, all of these functions need to be carried out in a programmatic way to ensure success.
As a first step, one needs to establish a robust Collection mechanism to make data simple and accessible. By definition, this function should have the capability to ingest multi-modal data: structured or unstructured, IoT streams, batch or real-time, organic or licensed. The notion of data virtualization can often be seen as an integral part of this function, as it allows data to be extracted from various sources without having to physically consolidate them. The complexity introduced by heterogeneous sources can be handled with a common query engine, so that the extraction mechanisms (such as SQL queries) can be written once and run anywhere.
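To make the "write once, run anywhere" idea concrete, the sketch below simulates a common query engine with SQLite: two independent sources are attached to one engine and joined in a single federated query, without consolidating the data. The engine choice, source names, and data are illustrative assumptions, not a specific virtualization product.

```python
import sqlite3

# SQLite stands in for a common query engine; each ATTACHed database
# stands in for a separate source system.
engine = sqlite3.connect(":memory:")
engine.execute("ATTACH DATABASE ':memory:' AS crm")      # e.g., a CRM system
engine.execute("ATTACH DATABASE ':memory:' AS sensors")  # e.g., an IoT feed

engine.execute("CREATE TABLE crm.customers (id INTEGER, region TEXT)")
engine.execute("CREATE TABLE sensors.readings (customer_id INTEGER, temp REAL)")
engine.executemany("INSERT INTO crm.customers VALUES (?, ?)",
                   [(1, "EU"), (2, "US")])
engine.executemany("INSERT INTO sensors.readings VALUES (?, ?)",
                   [(1, 21.5), (2, 23.0), (1, 22.0)])

# One query joins both sources; neither is physically copied into the other.
rows = engine.execute("""
    SELECT c.region, AVG(r.temp)
    FROM crm.customers c JOIN sensors.readings r ON r.customer_id = c.id
    GROUP BY c.region ORDER BY c.region
""").fetchall()
print(rows)  # [('EU', 21.75), ('US', 23.0)]
```

A production virtualization layer would push such queries down to the native engines of each source, but the contract to the consumer is the same: one SQL surface over many systems.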
The Organize step of the ladder addresses the need to create a trusted, business-ready analytics foundation. This begins with governing the data lakes and the data offloaded from other sources, such as a data warehouse. Standard industry models define the knowledge catalogue to be built, so that the data is profiled, cleansed, integrated, and catalogued. Typically, this step of the ladder is where data protection is introduced to comply with regulatory requirements (such as GDPR). For operational reporting, many organizations dip into this layer of organized (and catalogued) data to generate business-driven reporting, visualization, and discovery. As one can surmise, the Organize step defines the corpus of data upon which the various advanced analytics exercises are conducted, and it therefore determines the quality of downstream business insights.
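As a minimal sketch of the profiling side of this step, the function below computes basic quality statistics per column and collects them into a catalogue entry. The record shape, column names, and catalogue fields are assumptions made for illustration, not a particular cataloguing product.

```python
# Profile one column of a dataset into a simple catalogue entry.
def profile(records, column):
    values = [r.get(column) for r in records]
    present = [v for v in values if v is not None]
    return {
        "column": column,
        "rows": len(values),
        "nulls": len(values) - len(present),
        "distinct": len(set(present)),
    }

records = [
    {"customer_id": 1, "country": "DE"},
    {"customer_id": 2, "country": None},
    {"customer_id": 3, "country": "DE"},
]
catalogue = [profile(records, c) for c in ("customer_id", "country")]
print(catalogue[1])
# {'column': 'country', 'rows': 3, 'nulls': 1, 'distinct': 1}
```

Real catalogues add lineage, ownership, and policy tags on top of such profiles, which is what makes the data "business-ready" rather than merely stored.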
As we move to the next step of Analyzing the data, exponential technologies such as AI are put to use. To scale insights on demand, technologies like ML and DL must become more pervasive and, more importantly, trustworthy. This step focuses not only on the core data science activities of designing, building, training, and running various machine learning models, but also on predictive and prescriptive analytics, modeling, and statistical analysis. Corporate functions like dynamic planning, budgeting, and forecasting typically benefit from these activities.
A complementary visualization function helps represent the results of analysis visually, so they can be consumed as a unified and integrated set of business functions. This includes AI-assisted business intelligence dashboarding. Achieving trust and transparency in AI insights has become a significant factor in determining adoption levels. Modern AI platforms integrate functions such as automated fairness and issue detection, intelligent bias detection and mitigation, and decision auditability and traceability. Together, these help track the accuracy of analysis at the application level, along with an automated explanation of each outcome.
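One common fairness check of the kind such platforms automate is the disparate-impact ratio: the rate of favourable model outcomes for one group divided by the rate for another, with values far below 1.0 flagging potential bias. The predictions and group labels below are made up for illustration.

```python
# Rate of favourable outcomes (prediction == 1) for one group.
def favourable_rate(preds, groups, group):
    selected = [p for p, g in zip(preds, groups) if g == group]
    return sum(selected) / len(selected)

preds  = [1, 0, 1, 1, 0, 1, 0, 0]   # 1 = favourable outcome (assumed data)
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

ratio = favourable_rate(preds, groups, "b") / favourable_rate(preds, groups, "a")
print(round(ratio, 2))  # a value well below 1.0 warrants investigation
```

Production fairness tooling tracks many such metrics continuously and ties them back to the audit trail, rather than computing them once at evaluation time.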
The infusion of AI into business functions allows us to realize the business value and its impact. When automated, this scales AI across business processes and thus the enterprise. To help achieve this, various process-automation platforms have started integrating AI functions, deploying intelligent AI models across workflows. One important aspect of this infusion step is openness: by design, the framework should accommodate disparate AI models and support open model deployment and integration across ML runtimes. Extending this thought to the deployment level, the runtime should support a true hybrid environment: public, private, and multi-cloud.
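The openness requirement can be sketched as a common contract that workflows program against, so that disparate model types plug in interchangeably. The classes below are illustrative stand-ins for models from different runtimes, not a specific framework's API.

```python
from typing import Protocol

# The open contract: any model that exposes predict() can be infused
# into a workflow, regardless of how or where it was trained.
class Model(Protocol):
    def predict(self, features: list[float]) -> float: ...

class RuleModel:
    """A hand-written rule, standing in for a legacy decision system."""
    def predict(self, features: list[float]) -> float:
        return 1.0 if sum(features) > 1.0 else 0.0

class LinearModel:
    """A trained linear model, standing in for an ML-runtime artifact."""
    def __init__(self, weights: list[float]):
        self.weights = weights
    def predict(self, features: list[float]) -> float:
        return sum(w * f for w, f in zip(self.weights, features))

# The workflow is written once against the contract, not against a model type.
def run_workflow(model: Model, batch: list[list[float]]) -> list[float]:
    return [model.predict(x) for x in batch]

batch = [[0.5, 0.2], [1.0, 0.6]]
print(run_workflow(RuleModel(), batch))              # [0.0, 1.0]
print(run_workflow(LinearModel([2.0, 1.0]), batch))  # [1.2, 2.6]
```

Because the workflow depends only on the contract, the same process definition can run against models deployed in public, private, or multi-cloud runtimes.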
A comprehensive approach to realizing the value of data builds the right information architecture (IA), strengthening each phase of data transformation along its life cycle. A solid information architecture based on cloud-native principles acts as a catalyst, allowing quicker onboarding and realization of the end goal. In simple terms, there is no credible AI without a solid IA.