
10 Signs You Might Want To Walk Away from an AI Initiative

Here are a few occasions when you might want to rethink creating or using AI so you don’t end up frustrated.

#1 When you don’t have enough data or enough of the right data

Record numbers of arrests were made during New York City’s stop-and-frisk era, but up to 90% of those arrested were innocent. Counter-intuitively, those arrests remain in NYPD databases. Let’s say you want to build an AI solution that identifies policing tactics that lead to higher conviction rates. The last thing you need in your data set is the false arrests and intimidation bookings from that era. But given that stop-and-frisk tactics lasted from 2011 until as late as June 2020, you may not have enough data for statistical significance if you exclude those arrests. Do not be tempted. The context and volume of those arrests would taint your AI and lead it to recommend outdated policing tactics--something supporters of the old methods would use as validation. Instead, wait until you have enough data from newer policing tactics. This problem of letting old data, full of what you do not want, taint the outcomes you do want is not unique to policing. It applies to other fields too, such as hiring AI, where the desired candidate pool is more diverse than historical employment data often reflects.
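A minimal sketch of this kind of data curation, assuming a hypothetical pandas DataFrame of arrest records; the column names, dates, and threshold below are illustrative, not a real NYPD schema:

# Data-curation sketch (hypothetical column names; dates assumed ISO-formatted
# or datetime): exclude records from an era whose tactics you do not want the
# model to learn from.
import pandas as pd

def filter_training_records(arrests: pd.DataFrame) -> pd.DataFrame:
    """Drop records generated under policing tactics we do not want to reinforce."""
    tainted_era = (arrests["arrest_date"] >= "2011-01-01") & (arrests["arrest_date"] <= "2020-06-30")
    tainted_type = arrests["booking_type"].isin(["stop_and_frisk", "intimidation_booking"])
    cleaned = arrests[~(tainted_era & tainted_type)]
    # If too little data survives the filter, postpone the project rather than
    # quietly re-admitting the tainted records.
    if len(cleaned) < 10_000:  # illustrative threshold, not a statistical rule
        raise ValueError("Not enough clean records to train on; wait for newer data.")
    return cleaned

The point of the guard clause at the end is the same as the advice above: if excluding the bad data leaves you with too little good data, the honest answer is to wait, not to relax the filter.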

#2 When you will have no subject matter experts to train the AI

If you are building an AI system that requires deep subject matter expertise, and the group for whom you are building the solution will not provide a diverse set of their best experts to help train it, that is a pretty good sign they will not trust the tool once it’s complete. Hence, they probably won’t use it. Why would this happen? The whole reason they wanted the expert system in the first place was to avoid losing the productivity of their experts, usually a profession whose talent pool is scarce and dwindling. If you ask them for five of their best experts, let’s take actuaries as an example, to train an AI for six months, that is pretty much the situation they were hoping to avoid--not to mention the salary costs. Even though it’s counter-intuitive to the goal of creating an AI that can sustain the results of the best actuaries, the insurance company will see a junior actuary’s time as the better immediate investment. But this will set your expert AI system up to fail, so do not fall for it.

#3 When your IT systems cannot handle AI and machine learning

On one side you have an amazing new AI-driven social media command center that can detect trends among your customers and keep tabs on what’s important to them. It’s a cloud-based solution, and customer trends can be fed straight into your supply chain systems. At least, that’s how the demo worked. A glittery fashion-week favorite now has your fans clamoring on social media for a matching glittery tennis shoe. But in reality, you cannot get the solution to work like it did in the demo. Why? Your legacy on-premises supply chain systems cannot keep up with the changing trends your AI-driven social media center tries to send them. Your AI vendor tells you they don’t do the integration to the supply chain system. You call up your IT staff, and they have years-long backlogs. You did not build extra budget for this. Now what? There is not much you can do. Continue working the problem and plan for integrations going forward. Next time, investigate any internal IT systems that will require integration work to make the AI investment’s promise a reality. If those systems will require massive overhauls, then you may just need to say “no” to the AI project in advance.

#4 When new scenarios will be constant

AI is not meant for areas that face a constant stream of new scenarios. If there are constant outliers with no patterns to learn from, your AI will overfit to the past and give erroneous responses and outcomes on the new cases. AI works best when its training data accounts for at least 90% of the situations it will encounter. For the remaining 10% or so, it is imperative to have a human reviewer who can intervene (a “human in the loop”), as sketched below.
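One simple, hedged way to implement that human-in-the-loop step, assuming a scikit-learn-style classifier that exposes predict_proba; the 0.9 cutoff is illustrative, not a recommendation:

# Human-in-the-loop sketch: route low-confidence predictions to a reviewer
# instead of acting on them automatically.
def predict_with_review(model, X, threshold=0.9):
    probabilities = model.predict_proba(X)       # shape: (n_samples, n_classes)
    confidence = probabilities.max(axis=1)       # best-class probability per row
    predictions = probabilities.argmax(axis=1)   # the model's chosen class
    needs_human = confidence < threshold         # the likely "new scenario" cases
    return predictions, needs_human

Anything flagged in needs_human goes to a reviewer queue; only the high-confidence remainder flows straight through.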

#5 When you’re in a hurry and on a strict deadline

If you, or more importantly those funding your AI project, have no patience for trial and error and for testing ad nauseam in the real world before releasing an AI system “into the wild”, then this is a sign of more to come. AI is not a magic bullet for today’s short-term problems. It is a strategic, long-term competitive move that requires patience and a whole lot of work and investment of time, people, training hours, subject matter expertise, and IT systems.

#6 When your AI training experts are about to leave or retire

You need at least a month or two to get an AI system far enough along to start training it, and that assumes a smaller project. The bigger or more complex the data, the more time the machine learning project will need before it can be trained or reinforced by human experts. If those human experts are leaving the company or retiring before the fledgling system will be ready for their training, then consider either postponing the AI project until you have new experts or finding an alternative to AI.

#7 When you don’t have a long-term strategy, plan, or justification for AI

Strategic use of AI--use that will give a company a competitive advantage--requires planning, trial and error, patience, and financial investment. It’s a major commitment from executives at the highest level of your company. If you are unable to secure this kind of commitment, then start with small, trial AI implementations. Apply AI to internal areas of your business that are perceived as lower risk. Cost-cutting initiatives such as procurement projects, IT application testing, or even internal help desk use cases can help your executives gain familiarity with how AI works and what it requires. Then, once the small AI project is showing results and your executives feel comfortable with the possibilities, you can start having more strategic use case discussions.

#8 When the senior-most executives believe AI can replace everyone

Motivated by cutting costs quickly, executives at the highest levels want to use AI and automation to eliminate whole departments. They often have no understanding of how the process works and no plan for displaced workers. They may even have started eliminating people from those departments before understanding that the AI needs the humans--preferably the best--in those jobs to define the job’s main tasks and to train it. AI is extremely glitchy in the beginning, and many contextual scenarios arise in real-world situations that cause it to make questionable decisions. The offset is to require the AI to confirm its decisions with a human before it is allowed to kick off an automated script. Not only will humans need to stick around to train the AI, but it is also rare to be able to automate entire job roles fully enough to replace whole departments. Without advance expectation-setting, senior executives will perceive the AI project as a failure if they cannot eliminate large numbers of jobs right away. If you cannot get executives to see the light, you may want to walk away from the false expectations. IMPORTANT: As an AI developer whose work may displace workers, help executives think through the treatment of those workers, including reskilling and plenty of advance notice.
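The confirmation step can be as simple as a gate in front of the automation. Here is a minimal, runnable sketch with hypothetical names throughout; a real deployment would use a review queue or approval workflow rather than a console prompt:

# Confirmation-gate sketch: the AI proposes an action, but a human must approve
# it before any automated script is kicked off.
def execute_with_confirmation(proposed_action: str, run_script) -> bool:
    answer = input(f"AI proposes: {proposed_action}. Approve? [y/n] ").strip().lower()
    if answer == "y":
        run_script(proposed_action)   # automation runs only after human sign-off
        return True
    # Human overrides are valuable feedback; record them for retraining.
    print(f"Override recorded for: {proposed_action}")
    return False

# Example usage with a stand-in for the real automation:
execute_with_confirmation("cancel purchase order 1234", run_script=print)

Those recorded overrides are exactly the kind of expert signal the glitchy early system needs, which is another reason the humans cannot be eliminated first.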

#9 When there are no resources or willingness to train people to work with the AI

This point seems obvious, but you would be surprised how many times you reach the end of an AI initiative and there is no budget or willingness to train people on it. It’s one of the reasons many AI efforts do not scale, and it has nothing to do with technology. If people aren’t trained on the AI’s background, purpose, and use, the likelihood that your AI will be broadly accepted and used is short-changed. Many people don’t trust AI to understand what they do, so they simply won’t use it--even when it serves a critical function. As an AI developer, you must carefully consider how the AI will help your users achieve their goals in the environment in which they must use it. You must make an effort to understand whether there are circumstances under which they will not take its advice, and the reasons why. You must insist on training programs that address those reasons and show users results that help them understand the AI’s efficacy. If you cannot convince people in advance, then walk away from the project before you start.

#10 When the "users" have no intention of using it

Understanding what goes into an AI system, how it makes its decisions, and how often it gets critical decisions right in the real world (not just in accuracy testing) all contribute to the journey of trust in an AI solution. Just because you can build it does not mean users will do anything with the decisions it makes, especially in high-stakes, highly specialized fields. Duke University’s Sepsis Watch algorithm was trained by researchers on 32 million patient data points to improve detection of sepsis, a leading cause of patient deaths in hospitals. Yet doctors would not trust it or heed its warnings because they could not get explanations of why the algorithm decided a patient needed extra care. There are times when the barriers to earning human-machine trust are so high that you have to consider whether all the time, effort, and resources you pour into an AI solution will be worth it if the users reject it.
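One small thing a development team can do is ship an explanation alongside every prediction. The sketch below uses a plain logistic regression on synthetic data only because a linear model’s per-feature contributions are easy to read; it is not Duke’s method, and the feature names are made up for illustration:

# Explainability sketch: show which features pushed a single prediction, so a
# clinician sees *why* the model flagged a patient.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["heart_rate", "temperature", "lactate", "wbc_count"]  # illustrative
X = rng.normal(size=(500, 4))
y = (X[:, 2] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

patient = X[0]
contributions = model.coef_[0] * patient          # per-feature push toward "at risk"
ranked = sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1]))
print("risk score:", model.predict_proba([patient])[0, 1])
for name, value in ranked:
    print(f"  {name:12s} contribution {value:+.2f}")

An explanation like this does not guarantee trust, but without one, busy experts in high-stakes fields have little reason to act on an opaque score.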

