How Will AI Help Companies Post-Covid?
I was delighted to attend the AI webinar hosted by Austin Fraser. This short post looks at the main takeaways from the panel of AI experts.
Strategies for AI and Data Science
First of all, I'm not sure we will find ourselves in a "post-COVID-19" world anytime soon, and I think the panel of experts at Austin Fraser's Data & AI Strategy Post Covid-19 webinar felt the same, as the topics focused more generally on future strategies rather than specifically on the post-pandemic world.
The panel consisted of a variety of AI/Data Science expertise from Oracle, Sensyne Health, the University of Hertfordshire, CDP and Theory+Practice. The panel was hosted by my good friend Richard Foster-Fletcher.
There were lots of very interesting points and discussions, including a fantastic question about whether or not we're setting the bar too low in measuring AI's capability against human intelligence.
A big takeaway from that particular question was that the real power of AI will be in supporting human intelligence, in what was coined "Augmented Intelligence". Something I previously touched on in this talk.
AI is just another tool
The first of the big four takeaways came from a question about where AI is on the Hype Cycle. The general consensus was that the peak of the hype was twenty years ago (around the time I was studying Artificial Intelligence in Cybernetics), and that actually we're approaching the "plateau of productivity".
As the Gartner Hype Cycle goes, the plateau of productivity comes after the "slope of enlightenment", and during our enlightenment, we are learning several things about AI:
- AI cannot solve everything
- AI is a tool, a very powerful tool, that can be used very effectively for certain jobs
- The edge of the current research is both amazing and frightening, but in reality, when AI is successfully deployed it is mostly invisible
- Rockstars are dead. The age of the data science rockstar is coming to an end. AI and data science need to be part of the team, part of the solution, not the solution.
As such, companies should lean away from asking how to apply AI to their business and instead return to how they've been thinking for a while already: start with the customer and the end goal, then decide whether or not AI is part of the solution. In the vast majority of cases, AI will be useful for repetitive tasks and tasks involving huge amounts of data.
The rockstar is dead
Big-tech doesn't own all the data
It is easy to think that Google, Facebook, Tesla, Amazon etc have such a lead on data capture that it might be impossible for smaller businesses to catch up. This is not necessarily true. Whilst they do have a massive lead in search data, personal information, road-miles and consumer behaviour, the real needle-moving AI applications for a business are probably going to rely on the business's own data and knowledge.
If you don't have all the data you need, be creative. Conduct trials, pilots, proofs-of-concept and challenges. One example I heard on the webinar was of a company using AI to identify wine-bottle labels. They ran pilots and trials with the help of a few hand-picked companies that could provide labels, then, once they had trained their AI enough, they created a competition along the lines of, "So you think you know wine? We challenge you to beat our bot". In doing so, they attracted a bunch of wine experts to help train their algorithms even more, effectively crowdsourcing supervised learning. Very smart.
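As a rough illustration of how crowdsourced answers like that might be turned into training labels (the webinar didn't describe the company's actual pipeline, so all names and data below are hypothetical), a simple majority-vote aggregation could look something like this:

```python
from collections import Counter

# Hypothetical crowd guesses: each bottle image ID maps to the labels
# submitted by challenge participants. Purely illustrative data.
crowd_guesses = {
    "bottle_001": ["Chateau Margaux", "Chateau Margaux", "Margaux 2015"],
    "bottle_002": ["Barolo", "Barolo", "Barolo"],
}

def majority_label(guesses: list[str]) -> tuple[str, float]:
    """Pick the most common guess and report agreement as a rough confidence."""
    label, count = Counter(guesses).most_common(1)[0]
    return label, count / len(guesses)

# Aggregate the crowd's answers into labels suitable for supervised training.
training_labels = {
    image_id: majority_label(guesses)
    for image_id, guesses in crowd_guesses.items()
}
print(training_labels)
# {'bottle_001': ('Chateau Margaux', 0.666...), 'bottle_002': ('Barolo', 1.0)}
```

In practice you would want to weight contributors by track record and filter out low-agreement examples, but even a sketch like this shows how a competition can double as a labelling pipeline.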
AI in the post-COVID decentralised workplace
The panel was asked for examples of how AI is helping businesses that have suddenly had to scale out decentralised, remote teams. A practical example was financial institutions, which have rigorous regulatory requirements to adhere to. In office-based work it was easier to monitor compliance with these requirements; this has become much harder as businesses have become decentralised and fragmented across various systems. Natural language processing (NLP) techniques are being used in these scenarios to watch for keywords in both written and spoken communications, to ensure staff are keeping within guidelines.
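To make that concrete, here is a minimal sketch of what keyword watching might look like at its simplest. The panel didn't share implementation details, so the watch-list, phrases and function names are purely hypothetical; a real deployment would use trained NLP models and speech-to-text rather than a static pattern list:

```python
import re

# Hypothetical phrases a compliance team might flag for review.
WATCHLIST = [
    r"\bguaranteed returns\b",
    r"\boff the record\b",
    r"\bdelete this message\b",
]

def flag_message(text: str) -> list[str]:
    """Return the watch-list patterns found in a written or transcribed message."""
    return [p for p in WATCHLIST if re.search(p, text, flags=re.IGNORECASE)]

if __name__ == "__main__":
    sample = "Let's keep this off the record, I can promise guaranteed returns."
    hits = flag_message(sample)
    if hits:
        print("Flagged for compliance review, matched:", hits)
```

The point isn't the pattern matching itself, but that the same check can run across email, chat and call transcripts regardless of where staff are working.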
Don't do anything that you'd be embarrassed to tell your Mum.
Ethics and Diversity in AI
It was not a surprise that ethics was a big part of the discussion. Whilst ethics vary from country to country and culture to culture, a general rule of thumb was suggested: don't do anything with AI that you'd be embarrassed to tell your family or friends. If it doesn't feel right, it probably isn't.
With more broad-reaching implications, the topic of diversity came up too. One panelist mentioned that in one role he managed a very diverse team, yet something was amiss. He discovered that whilst there was great diversity, there was a lack of perceived "safety", which meant some people's voices were louder and some silent.
So, perhaps chiming back to the "rockstar is dead" comment, teams need to build an environment of equality and trust, giving equal weight to all voices and opinions. Perhaps a good model to follow here is the Tactical Meeting structure from the practice of Holacracy.
Tell me what you think
What do you think? Share your thoughts with me, and leave a comment below.