Artificial intelligence (AI) is hard to miss. It is a top topic in the media, at conferences, and at other events. Many larger companies already use AI to make processes more efficient, and the automation potential is enormous: just think of the many manual activities in your own organization.
There are often two basic challenges. The first is that an AI solution is highly specific to its application, leaving little room for easy scaling by transferring it to other use cases. Every AI solution is trained on data specific to its domain, so it applies only to the context it was built for.
A second challenge is the human response to the new robot colleague. People need to build confidence in the decision-making ability of their robot colleagues, and there are political and ethical questions as well, as Dr. Draeger (board member of the Bertelsmann Foundation) showed at the recent Handelsblatt AI Summit. The algorithms themselves will not help to overcome these challenges.
But even on a small scale, AI can cause irritation. Consider a situation in which AI is set up only to provide supportive recommendations for action instead of triggering the action itself. You will weigh the recommended action against your gut feeling and experience. What if your gut feeling and the recommendation contradict each other? What will you do then? The AI's recommendation is based on the data available for processing and decision-making. In most cases, that is enough for a small decision. However, available data, no matter how large or current, is data from the past. Most likely, the AI is also designed to adhere to old-fashioned business principles. But what if you have to make a decision with bigger consequences for the future? Will the AI's recommendation suffice?
Can you trust your robot colleague? And what if you decide otherwise: how will your human colleagues see your decision? Do you feel valued if you always follow the robot colleague's suggestion?
As Roger L. Martin emphasized earlier this year, it makes sense to enrich decision making with qualitative data that complements standard business metrics [1]. This is especially true in situations where we really want to shape the future and make an impact. At best, quantitative data reflects the past.
Another useful consideration comes from brain research. Friederike Fabritius emphasizes the value of following gut instincts in decision-making situations. Gut feeling condenses many experiences and learning effects, and it can decide quickly and well in complex situations [2].
This means that anyone confronted with an algorithm's proposed action that contradicts their gut should be brave enough to do what they believe is right in the situation, even if that goes against the AI recommendation. Only then can you learn. You will feel in control rather than marginalized, and you will strengthen confidence in your own decision-making ability (consider documenting these decisions). Organizations may even enshrine this in policy. See it as a fundamental act of humanity to give people learning and development opportunities in such situations.
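The suggestion to document such decisions can be made concrete. Below is a minimal sketch of a decision log in Python; all names and fields are hypothetical illustrations, not part of any specific tool. Each entry records the AI recommendation, the human decision, and the rationale, so that overrides can be reviewed later and turned into learning opportunities:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class DecisionRecord:
    """One documented decision: what the AI suggested vs. what was done."""
    day: date
    ai_recommendation: str
    human_decision: str
    rationale: str                 # why the human followed or overrode the AI
    outcome: Optional[str] = None  # filled in later, once results are known

    @property
    def overrode_ai(self) -> bool:
        return self.human_decision != self.ai_recommendation

@dataclass
class DecisionLog:
    records: list = field(default_factory=list)

    def add(self, record: DecisionRecord) -> None:
        self.records.append(record)

    def override_rate(self) -> float:
        """Share of decisions where the human went against the AI."""
        if not self.records:
            return 0.0
        overrides = sum(1 for r in self.records if r.overrode_ai)
        return overrides / len(self.records)

# Example usage with two illustrative decisions
log = DecisionLog()
log.add(DecisionRecord(date(2019, 3, 1), "reorder stock", "reorder stock",
                       "recommendation matched recent demand"))
log.add(DecisionRecord(date(2019, 3, 5), "cancel project", "continue project",
                       "gut feeling: strategic value not visible in past data"))
print(log.override_rate())  # 0.5
```

Reviewing such a log periodically shows how often overrides paid off, which is exactly the feedback loop that builds (or corrects) confidence in one's own judgment.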
[1] Webcast by Strategyzer: https://blog.strategyzer.com/posts/2019/1/17/stratchat-replay-roger-martin-on-making-better-decisions-in-todays-business-world
[2] Friederike Fabritius and Hans W. Hagemann, The Leading Brain, TarcherPerigee, 2018