In part I of this article, we discussed the history of business intelligence (BI) adoption across organizations. We outlined the importance of establishing a foundation of trust and embracing a culture of change management to support data-driven initiatives. We discussed the growth of artificial intelligence (AI) and how its adoption faces challenges similar to that of BI. At the end of part I, we introduced the concept of an intelligent digital assistant to serve as a bridge between traditional BI (descriptive and diagnostic) capabilities and advanced analytics such as AI.
The adoption of AI and cognitive computing faces a myriad of challenges inside most organizations. Poor data quality, inadequate business processes, ethical concerns, and a lack of technical skills are among the many inhibitors cited by experts. However, even addressing all of these issues may not be enough to ensure the successful deployment of AI in the context of organizational decision-making.
Abdicating control of decisions to a machine is an extremely uncomfortable proposition for most business leaders. Since few of these leaders have a background in the mathematical sciences, it is difficult for them to believe a “black box” can achieve the same or better outcomes than intuition built on years of experience. Not surprisingly, many leaders, when confronted with the prospect of losing control of decisions, will challenge or second-guess the results of the AI model. The imperative is simple: we have to build a transparent environment that promotes trust and inspires confidence in AI models.
I heard this theme of balancing trust and control repeated in numerous discussions with retail industry technology leaders on the topic of AI adoption. As one leader put it, “We are talking about a drastic change in how people operate. They want to feel as though they will still have a place in the company and that they still add value.” These sentiments are what make the proposition of an intelligent digital assistant so palatable.
Let me begin by saying an intelligent digital assistant (IDA) is not the appropriate model for all implementations of AI. In cases where decisions and actions must happen in near real-time, such as an e-commerce product recommendation engine, more fully automated forms of AI are a better fit.
The sweet spot for IDA technology is complex decisions that require the inspection of many different variables and trends, resulting in a “best or optimal” choice. In the past, these decisions often necessitated an element of assumption or intuition on the part of the decision-maker. An IDA does not rely on intuition. Instead, the IDA collects and organizes relevant information and runs various scenarios to arrive at an “optimal” recommendation to a decision-maker.
An IDA consists of eight components or capabilities:
- An interface through which the decision-maker provides the parameters for the desired outcome.
- A natural interface, such as voice, that inspires higher degrees of trust and user engagement.
- Connectivity to the myriad of relevant data sources needed to make the targeted decision(s).
- An AI engine that leverages machine learning to run multiple scenarios and arrive at a recommendation (or set of recommendations).
- The ability for the decision-maker to override a recommendation by altering model assumptions.
- The ability to process recommended actions in the appropriate systems of record.
- A logging mechanism that records both the model’s recommended course of action and the actions the decision-maker actually implemented.
- Post-action audit reporting that shows the effectiveness of both the model and the decision-maker.
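To make the list above concrete, here is a minimal sketch of the eight capabilities collected into a single class. Every name, field, and value here (`IDA`, `recommend`, `override`, `enact`, `audit`, the sample allocation action) is a hypothetical illustration of the architecture, not a real product API; the ML scenario engine is reduced to a placeholder.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    action: str
    assumptions: dict       # model assumptions the decision-maker may override
    supporting_data: dict   # transparency: the data behind the recommendation

@dataclass
class IDA:
    data_sources: list = field(default_factory=list)  # connectivity to relevant data
    log: list = field(default_factory=list)           # logging mechanism

    def recommend(self, parameters: dict) -> Recommendation:
        """Interface + AI engine: take the decision parameters (via a natural
        interface in a real system) and run scenarios to produce a recommendation."""
        # Placeholder for ML scenario runs over the connected data sources.
        return Recommendation(
            action="increase northeast allocation",
            assumptions={"sf_allocation": 0.6},
            supporting_data={"northeast_sales_vs_plan": 1.35},
        )

    def override(self, rec: Recommendation, new_assumptions: dict) -> Recommendation:
        """Override capability: the decision-maker alters model assumptions."""
        rec.assumptions.update(new_assumptions)
        return rec

    def enact(self, rec: Recommendation, decided_by: str) -> None:
        """Systems of record + logging: push the action and record who decided."""
        self.log.append({"recommended": rec.action, "decided_by": decided_by})

    def audit(self) -> dict:
        """Post-action reporting: how often the model's recommendation was kept."""
        total = len(self.log)
        followed = sum(1 for e in self.log if e["decided_by"] == "model")
        return {"decisions": total, "followed_rate": followed / total if total else 0.0}
```

In practice each method would front a substantial subsystem (a voice interface, a data integration layer, an ML engine), but the shape of the collaboration between decision-maker and assistant is already visible in this skeleton.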
Let’s illustrate how this would work using an example from the retail industry. Sally is an apparel merchant for Company Z. On Monday, Sally arrives at work and begins her day by connecting to her digital assistant, “Jake.” Immediately, she hears, “Good morning Sally. Sales for the new launch of NFL licensed t-shirts exceeded the plan for the weekend. We have allocated 50% of the product already. Would you like to review my recommendations for the remaining inventory based on the weekend sales trends?”
Sally replies, “Sure, Jake.” Jake begins describing the recommendation as the computer screen in front of Sally fills with graphs showing performance, projections, and recommended actions. “Sales were extremely high in the Northeast region, with the best performance in the Philadelphia market. Sales in the Southeast were moderately above plan, on plan across the Midwest and Southwest regions, but behind plan in the West region. The San Francisco stores were well below plan. I recommend increasing the allocation levels for the Northeast region for this week, funding the increase with corresponding reductions in the West region.”
Sally reviews Jake’s recommendations. She decides that Jake’s recommended allocation cuts for San Francisco may be too much. Sally ups the allocation amount and asks Jake to rerun his projections, holding San Francisco quantities at the levels she specified. Jake returns an updated forecast, which Sally approves. “Do you want me to enact this recommendation?” Jake asks. Sally replies, “Yes,” and Jake begins sending the transactions to the appropriate systems of record.
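Sally’s override-and-rerun step can be sketched as a small function: hold the markets she pinned at her specified quantities and scale the remaining markets so the total still matches the available inventory. The market names, quantities, and proportional redistribution rule are all illustrative assumptions, a toy stand-in for the IDA’s scenario engine rather than a real allocation algorithm.

```python
def rerun_with_overrides(proposed: dict, pinned: dict, total_units: int) -> dict:
    """Re-run an allocation plan, holding pinned markets at user-specified
    quantities and scaling the rest proportionally to fit the inventory total.
    (A production system would also reconcile rounding drift.)"""
    remaining = total_units - sum(pinned.values())
    free = {m: q for m, q in proposed.items() if m not in pinned}
    scale = remaining / sum(free.values())
    plan = {m: round(q * scale) for m, q in free.items()}
    plan.update(pinned)
    return plan
```

For example, if Jake proposed 500 units for Philadelphia, 400 for Boston, and only 100 for San Francisco out of 1,000 total, Sally could pin San Francisco at 200 and rerun, shrinking the other markets proportionally to fund her change.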
At the end of the next week, Sally reviews an audit report that shows she followed Jake’s recommendations 85% of the time. Of the 15% of actions she overrode, 60% resulted in a better overall result, while 40% had worse outcomes. One of the overrides that delivered poorer performance had a significant negative margin impact. She drills down on the details of this decision to better understand how she could improve future outcomes.
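The audit numbers Sally reviews fall out of the decision log with simple arithmetic. The log schema below (`followed_model`, `outcome_vs_model`) is an assumption for illustration; any record of what the model recommended, what was done, and how it turned out would serve.

```python
def audit_report(log: list) -> dict:
    """Summarize a decision log: how often the model's recommendation was
    followed, and how the decision-maker's overrides fared against the model."""
    total = len(log)
    overrides = [e for e in log if not e["followed_model"]]
    wins = [e for e in overrides if e["outcome_vs_model"] > 0]  # override beat model
    return {
        "follow_rate": (total - len(overrides)) / total,
        "override_rate": len(overrides) / total,
        "override_win_rate": len(wins) / len(overrides) if overrides else None,
    }
```

With 100 logged decisions, 85 following the model and 9 of the 15 overrides outperforming it, this yields Sally’s 85% follow rate and 60% override win rate.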
As the scenario above illustrates, the IDA (Jake) makes recommendations to the ultimate decision-maker (Sally). The IDA is transparent, showing the information used to arrive at each recommendation. In addition, the IDA fosters a sense of accountability because the outcomes of decisions are measured and reported. Human decision-makers can see how to improve their decisions in cases where the IDA’s recommendation proved better. Conversely, the IDA can leverage cognitive computing to “learn” from the cases where the human decision-maker’s overrides outperformed the model.
As described above, the IDA is much less threatening than a “black box” that spits out decisions. With an IDA, decision-making becomes a dialog-driven, collaborative, and iterative exchange between machine and human. The high degree of transparency is another critical element in alleviating adoption concerns: the IDA prepares and presents the vital pieces of information that informed its recommendation. There is also shared accountability and learning for both the IDA and the decision-maker. The importance of accountability can’t be overstated; too often, we never review the outcomes of our actions.
As one retail executive stated, “No one is going to spend time to prove they made the wrong decision.” The IDA provides the decision audit process automatically.
Finally, IDA technology is extensible. As more and more new data sources become available, we have the opportunity to fine-tune our decision processes. For example, weather patterns and forecasts, zip code demographics, and fashion-trend data would be valuable information for someone making merchandising assortment decisions. However, too much information can result in “information overload” for decision-makers. The IDA can easily ingest the new information, incorporate it into model projections, and present a summary to the decision-maker.
IDA technology, as a whole, is only beginning to emerge in the mainstream. However, the components needed to build an IDA solution are battle-tested. The dialog-driven interface has seen many successful implementations in digital voice assistants such as Siri, Alexa, Cortana, and Google Assistant. Graphical dashboards and data-driven alerts have been with us for quite some time and can be repurposed within the IDA dialog to support its recommendations. AI and ML algorithms continue to improve with the advent of more and better data and more powerful processors.
As we discussed earlier, a lack of trust is a significant inhibitor to the adoption of new disruptive technologies such as AI. Finding ways to build trust and inspire confidence is an essential element of any AI deployment. The IDA presents a new way to build trust while driving substantial improvement in business outcomes.