How can you build trust in procurement AI when you can’t understand how it has come to a decision?
Procurement systems are evolving. Offerings that used to be referred to as “software packages” or “cloud-based solutions” have entered a new phase, with many of the big players, as well as startups and SMEs, now offering solutions with embedded Artificial Intelligence.
What is AI used for in procurement?
The best use of AI is to automate the repetitive, predictable tasks that swallow up so much of a procurement team’s time. These might include:
- Spend classification (see the sketch after this list)
- Spend analysis
- Supplier or market data capture and analysis
- Data anomaly detection
- Contract management
- Accounts payable automation
- Cognitive procurement.
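To make the first of these concrete, here is a minimal, rule-based sketch of spend classification in Python. The categories, keywords and invoice lines are invented for illustration; commercial procurement AI relies on trained models over far richer invoice, contract and supplier data.

```python
# Minimal sketch of rule-based spend classification. Categories and
# keywords are invented for illustration; real tools use trained models.

SPEND_CATEGORIES = {
    "IT Hardware": ["laptop", "monitor", "server", "keyboard"],
    "Travel": ["flight", "hotel", "taxi", "mileage"],
    "Professional Services": ["consulting", "audit", "legal", "advisory"],
}

def classify_spend(line_description: str) -> str:
    """Return the first category whose keywords appear in the invoice line."""
    text = line_description.lower()
    for category, keywords in SPEND_CATEGORIES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "Unclassified"  # left for a human analyst to review

invoice_lines = [
    "Dell laptop for new starter",
    "Return flight MEL-SYD for supplier workshop",
    "Quarterly legal advisory retainer",
]
for line in invoice_lines:
    print(f"{line} -> {classify_spend(line)}")
```

Even this toy version shows why the task suits automation: the logic is repetitive, the volume is high, and the exceptions can be routed to a human for review.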
Benefits of AI include its ability, when used intelligently, to help you make better decisions, automate manual tasks, optimise supplier relationships and free up procurement professionals’ time for more strategic initiatives.
But how can we trust something that we don’t understand? Research released this week by Avaya found that 42% of Australian enterprises lack understanding of AI technology.
Black Box: I don’t understand how this works
“Black Box learning” refers to the situation where humans cannot understand how or why an AI arrived at a decision because of the complex maths or algorithms involved. For many users of AI this requires an adjustment in mindset, as we belong to a culture where decisions need to be explained or justified. TNW compares the situation to a high school algebra student being asked by their teacher to “show their work” rather than jump straight to the final answer. Another analogy will be familiar to anyone who has played chess against a computer: the computer’s move often seems nonsensical to the human player, yet makes sense in the end, when the computer wins.
Trust and Explainability
This, understandably, can lead to a trust issue, particularly for those who are nervous about incorporating AI into procurement decision-making. For example, if an AI is directed to create a shortlist of the most appropriate suppliers for a particular sourcing event, some of its answers may take you by surprise. Without being able to see how it reached a decision, the best we can do is understand the rules under which it was operating. This takes a leap of faith, or trust in the AI, particularly when it may take months to find out whether the decision was right or wrong.
“Explainability” refers to slowing down an AI so that it translates its reasoning into human terms and works at a pace (and in a language) that humans can follow. This kind of human supervision can be effective for understanding how the AI works, building trust in the algorithm, and driving continuous improvement through machine learning. But slowing an AI down can negate the very reason the business invested in the technology in the first place. AI is about achieving previously impossible levels of speed in procurement and freeing people up to do other things, not about dedicating extra resources to checking every decision the AI makes.
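To make the shortlist example above concrete, here is a minimal sketch of what explainable output can look like: a deliberately transparent weighted-scoring model whose reasoning can be read line by line. The criteria, weights and supplier scores are invented for illustration, and real procurement AI is far more complex, which is exactly why its reasoning is harder to follow.

```python
# Minimal sketch of an "explainable" supplier shortlist: a transparent
# weighted-scoring model whose reasoning can be read line by line.
# Criteria, weights and supplier scores are invented for illustration.

WEIGHTS = {"price": 0.4, "quality": 0.3, "delivery": 0.2, "risk": 0.1}

suppliers = {
    "Supplier A": {"price": 0.9, "quality": 0.6, "delivery": 0.8, "risk": 0.7},
    "Supplier B": {"price": 0.5, "quality": 0.9, "delivery": 0.9, "risk": 0.9},
    "Supplier C": {"price": 0.8, "quality": 0.7, "delivery": 0.4, "risk": 0.5},
}

def explain_score(scores: dict) -> tuple[float, dict]:
    """Return the total score and each criterion's contribution to it."""
    contributions = {c: WEIGHTS[c] * scores[c] for c in WEIGHTS}
    return sum(contributions.values()), contributions

shortlist = sorted(
    suppliers, key=lambda name: explain_score(suppliers[name])[0], reverse=True
)

for name in shortlist:
    total, contributions = explain_score(suppliers[name])
    breakdown = ", ".join(f"{c}: {v:.2f}" for c, v in contributions.items())
    print(f"{name}: {total:.2f}  ({breakdown})")
```

Because every point of the total can be traced back to a criterion and a weight, a stakeholder can see at a glance why one supplier tops the shortlist. The trade-off, as noted above, is that models simple enough to explain themselves this readily rarely capture the subtle patterns that make black-box systems so powerful.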
What is the answer? The best thing you can do to support an AI implementation is to build your organisation’s trust in the technology. This may not seem easy at first, as the AI makes mistakes, learns from them and improves, but as it gains proficiency it will begin to rack up an impressive number of runs on the board (correct decisions), which will help you convince even the most distrustful stakeholders of its worth.