How to lead AI projects

Artificial intelligence is becoming mainstream, and many organizations, startups and big corporates alike, are now starting internal AI projects or incorporating AI into existing IT. There's just one problem: leading AI projects is very different from leading traditional IT.

As a result, many AI projects either fail or ignite frustration among project participants, users of the AI, and the management involved. They are unaware that they are now in a new paradigm and should have different expectations than they usually have of IT. Even implementing off-the-shelf AI components or systems can cause problems that are unfamiliar to the organization.

This is very understandable. A few years ago AI was still something a few big techs and universities were engaged with, but for the masses a distant future. So very few leaders and project managers have experience in the AI field that they are suddenly thrown into.

So what is so different about AI projects? 

The keyword here is uncertainty. The big difference between traditional IT and AI is how much certainty is available. The low certainty is due to the experimental nature of AI. The AI paradigm is experimental in the sense that you can't predict the road to the finished product. You can't plan the inner workings of an AI model before you have created it, and you don't know exactly what data you will need or how much. Lastly, you don't know how well the AI will work, so setting expectations for the finished solution can be very difficult. In many ways AI development is comparable to vaccine development: it's impossible to know in advance whether your project will even be successful, and most of the insights you need will be acquired while developing.

This is very much in contrast to the IT paradigm we know. The consensus in traditional IT projects is to plan and estimate precisely and to achieve a preset list of business objectives as timely and accurately as possible. For that reason we have developed expertise in, for example, planning tools and estimation techniques. But with the entrance of AI, a lot of these skills are suddenly no longer useful. In fact, they can be downright damaging in an experimental paradigm. If management demands deadlines and accurate estimates, the project is bound to fail, as it will never be able to deliver.

The first step is admitting

In order to lead in the AI domain you must, first of all, acknowledge that this is a new paradigm, and you must speak about it openly. When working in a new paradigm, the most important tool is being vocal about the new rules of engagement. With very little certainty in AI, expectation management is already approaching an art. If you are not even in a dialogue with the project stakeholders about what the new way of working looks like, expectations will never align. So be very clear about this up front. Even as you're making the business case, you have to be clear that you cannot know either the costs or the revenue of an AI project. Not everyone will like it and you will see resistance, but it's much better to have these conflicts before the project starts.

It's a culture thing

The enabler for getting stakeholders to accept working under this kind of uncertainty is the right organizational culture. As a leader it is your responsibility to massage the culture and try to shift it in a direction that works with experimental AI projects. If there's a mismatch between the paradigm you work in and the culture, you will get into trouble in no time.

One of the important features of an experimental culture is the willingness to accept null results, exactly like the scientific community does. This is not just the usual preaching about accepting failure and mistakes. This is a culture that accepts that a lot of hard work may amount to no more than the knowledge that a specific solution is not viable.

The experimental culture that fits AI development is very much in line with the learning culture, one of the eight distinct culture styles of corporate culture. Other styles, such as the results culture and the safety culture, can be in sharp contrast to the learning style on crucial points when working with AI: they are, respectively, very keen on achieved results and on accurate planning. Since AI offers no certainty of either results or predictability, this can quickly lead to conflict.

Be visionary

When leading under uncertainty, leading through a strong vision is very effective. This is very much in line with the purpose culture style. I like to compare it to Columbus' journey to America. Not knowing what to expect on the way, or whether the journey would see any kind of success, Columbus still managed to get funding and a reliable crew. Columbus was well known for the strength of his vision, and I would attribute at least some of his voyage's success to it. The trick here is to be specific about what you envision on the other side of the uncertainty. How will everything look and feel when the project is done?

In conclusion, it's very effective to actively use culture to support AI development, since the alternative might be having it work against you. And organizational culture is definitely one of the stronger forces in the universe.
