6 Tips for Better Artificial Intelligence Implementations

Most organizations fail at artificial intelligence (AI) implementations in the workplace or elsewhere because they lack the skills, staff and resources, and because of potentially unrealistic expectations, according to an IDC survey. It's not as if they're not trying. IDC found more than 60% of organizations reported changes in their business model in association with their AI adoption, and nearly half have a formalized framework to encourage consideration of ethical use, potential bias risks and trust implications. Further, a quarter of organizations have established a senior management position to make sure they get things right.

“AI can create better outcomes to highly variable problems by constantly changing the rules,” said Mike Orr, COO of Grapevine6. “The challenge for enterprises is, ‘How do we know it’s working?’ This is an important question when even a small failure can mean something as serious as introducing institutional bias or regulatory violations.”

Organizations need a good AI defense strategy regardless of where the AI implementation takes place, be it marketing, customer experience, employee experience or the digital workplace. In other words, build the things that can go wrong into your implementation planning.

So, how do you get that done? Here are a few quick tips if you're planning an AI implementation strategy for 2020.

Know Your Vendor

The first step may be obvious: work with solid vendors. How can you ensure that? Work with vendors with expertise and experience, Orr said. "The more clients they work with, the better, because you multiply the number of people thinking about risk and finding it," he said.

When it comes to using AI to improve your customer experience, for instance, ask your technology vendors where they think things can go wrong and what they have done to prevent it, according to Wayne Coburn, principal product manager at Iterable. “Every AI carries risk,” Coburn said, “and if your vendor can’t describe what their risks are and what their mitigation strategy is, then maybe it’s time to find a new vendor.” 

Build an AI Team That Audits the Program

Do your homework, Orr added. Put together a cross-functional risk management team that creates a test plan and then tests extensively with real data, he said. “Trust, but verify,” Orr said. “Commit resources to periodically audit the outcomes, which may include random and directed sampling and adding monitors that live outside of the AI. Case in point: ask candidates or customers if they felt the outcomes were fair.” 
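For teams that want a concrete starting point, the snippet below is a minimal Python sketch of this kind of audit, not anything Orr or Grapevine6 ships. It assumes the AI's decisions are logged as simple records (the `segment` and `approved` fields are hypothetical), pulls a random sample plus a directed sample for human review, and runs a crude outcome-rate check that lives outside the AI itself.

```python
import random
from collections import defaultdict


def sample_for_audit(decisions, random_n=50, directed_filter=None, directed_n=25, seed=42):
    """Pull a random sample plus a directed sample (e.g., a segment suspected to be higher risk)
    so reviewers outside the AI can check whether its outcomes look fair."""
    rng = random.Random(seed)
    random_sample = rng.sample(decisions, min(random_n, len(decisions)))
    directed_pool = [d for d in decisions if directed_filter and directed_filter(d)]
    directed_sample = rng.sample(directed_pool, min(directed_n, len(directed_pool)))
    return random_sample, directed_sample


def outcome_rates_by_group(decisions, group_key="segment", outcome_key="approved"):
    """Simple monitor that lives outside the AI: compare outcome rates across groups
    to spot potential bias before it becomes an institutional or regulatory problem."""
    totals, positives = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d[group_key]] += 1
        positives[d[group_key]] += int(d[outcome_key])
    return {group: positives[group] / totals[group] for group in totals}


if __name__ == "__main__":
    # Hypothetical decision log; in practice this would come from the AI system's audit trail.
    log = [{"id": i, "segment": "A" if i % 3 else "B", "approved": i % 2 == 0} for i in range(200)]
    random_batch, directed_batch = sample_for_audit(log, directed_filter=lambda d: d["segment"] == "B")
    print(len(random_batch), "random and", len(directed_batch), "directed records queued for review")
    print("Outcome rates by segment:", outcome_rates_by_group(log))
```

The sampled records would then go to the cross-functional risk team, alongside the direct feedback Orr suggests gathering from candidates or customers.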

Ensure Models Are Aligned to Company Objectives

Measurement is important because AI will have the same failings as people and sometimes over-respond to incentives, according to Orr. Periodically step back and think holistically about the outcomes to ensure the AI models are aligned with the company's objectives. "In many ways this is similar to managing people in any organization," Orr said. "You need to align on objectives, make sure incentives lead to desired outcomes and provide continuous feedback to your employees."
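As a rough illustration of what periodically stepping back and measuring could look like, here is a hedged Python sketch (the metric names, targets and guardrail values are invented for the example): it compares observed outcome metrics against business targets and guardrail ceilings, and raises plain-language alerts when the model appears to be over-responding to its incentives.

```python
def check_alignment(metrics, targets, guardrails):
    """Compare the AI's observed outcome metrics against business targets and guardrail ceilings.
    Returns human-readable alerts; an empty list means the model currently looks aligned."""
    alerts = []
    for name, floor in targets.items():
        value = metrics.get(name, 0.0)
        if value < floor:
            alerts.append(f"{name} below target: {value:.3f} < {floor:.3f}")
    for name, ceiling in guardrails.items():
        value = metrics.get(name, 0.0)
        if value > ceiling:
            alerts.append(f"{name} breached guardrail: {value:.3f} > {ceiling:.3f}")
    return alerts


if __name__ == "__main__":
    # Invented weekly numbers: conversions look healthy, but complaints have crept past the guardrail,
    # a sign the model may be over-responding to its incentive.
    observed = {"conversion_rate": 0.041, "complaint_rate": 0.012}
    print(check_alignment(observed,
                          targets={"conversion_rate": 0.035},
                          guardrails={"complaint_rate": 0.008}))
```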

Understand All AI Decisions…

