With his hands clasped in front of him, Professor Michael Smets closed module four of the Executive Diploma in Artificial Intelligence for Business with: 'It's not the end, just the end of the beginning.' And that was it: all the classwork was concluded. There was no small amount of melancholy in the air, and people lingered, continuing conversations and seeking out one more interaction, not wanting the programme to end.
People could be heard asking: 'Hey, what are you going to do next?' or 'I can't believe the programme is over – what a journey!' I was asked: 'How do you think the programme impacted your start-ups?' That knocked me back for a second as I took in the gravity of the question. The past year has been tumultuous, as I helped launch two AI start-ups whilst studying on the programme. Yes, the diploma has affected me deeply, but what critical decisions can I directly credit to the learnings I have gained?
I thought about it for a moment and said: 'Yes! Let me give you three examples where I directly applied something we covered in class to a situation in one of these start-ups where I was stuck.'
1. Understanding the ethical implications
One of the first times I applied my learnings was while working on Probility AI – a start-up focused on predicting when professional athletes will get injured and how many games they will miss. We amassed over 20 years of data on every injury in the US National Football League and combined it with new data as well as computed fields. The goal was to predict player availability for the following season. After many months of effort, we had a highly accurate model. Initially, we were ecstatic, but then we recognised the ramifications these insights could have for the players, their agents, and the general managers making hiring decisions. We knew we needed to treat the data with care and respect, and to act as custodians of its impact. Because of our deep classroom discussions and readings on ethics in module one, 'The Landscape of Technological Disruption', we paused to think through the ethical implications. Ultimately, we moved forward with sharing the insights because we felt we were the appropriate custodians of them, and we followed the steps outlined in class to be sensitive to all those affected.
2. Setting boundaries in nascent markets
The other start-up, Budscout, is a novel robotic scanning system for indoor agriculture. While we were breaking new ground, nobody in the space had yet defined the business model or pricing, and we were concerned about setting the price too high or too low. Fortunately, I had just heard Professor Marc Ventresca's lecture in module three, 'Managing the Innovation Process', on entrepreneurs creating nascent markets. He noted that in new markets the boundaries are not yet fixed; it falls to the entrepreneur to define them. I realised it was going to be either us or a new competitor who set those boundaries, so we took our business model and pricing strategy to market first, instead of waiting and reacting to the positioning of others.
3. Creating successful rollout plans
The third example applies to both start-ups going forward. In module four, we heard from Professor Michael Smets about how new technologies gain legitimacy in the market to encourage adoption. We learned about Suchman's three types of legitimacy: pragmatic (what's in it for me?), moral (are the outcome, and the means of achieving it, morally sound?), and cognitive (is this solution seen as necessary and inevitable?) (Suchman, 1995, 'Managing Legitimacy: Strategic and Institutional Approaches'). This understanding is critical to any future AI adoption effort and will play a key role in the communications and rollout plans I am part of.
Whilst the coursework is complete and I won't be making regular trips to Oxford, the future is upon us, and I feel better prepared and more excited to take it head-on.