The people part of AI

According to Gartner, there are five areas in healthcare that are ripe for better COVID-19 decision-making through AI.

1. Early detection and epidemic analysis. Gartner names automated contact tracing, epidemic forecasting and monitoring the development of herd immunity as examples of this AI deployment category.

2. Containment. Lockdowns and similarly aggressive, one-size-fits-all measures carry enormous societal and economic costs, Gartner points out. For this reason, healthcare leaders should consult with experts in fields such as behavior analytics to optimize containment efforts.

3. Triage and diagnosis. Gartner notes that AI-enabled self-triage has already found a foothold in healthcare, as telehealth services and virtual health assistants have increasingly helped individuals get pre-diagnoses and decide what to do next.

4. Healthcare operations. Predictive staffing can help healthcare CIOs and chief data officers (CDOs) better align the supply of materials, equipment—and, not least, frontline healthcare workers—with the demand for care as it ebbs and flows, Gartner suggests.

5. Vaccine research & development. Gartner cites AI graphs and natural language processing as aids for medical researchers needing to quickly find connections across massive stacks of published clinical trials.
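To make that last point a bit more concrete, here is a minimal sketch of the simplest form of "finding connections" in a pile of trial literature: ranking abstracts against a research question with TF-IDF and cosine similarity. It is not Gartner's method, and the abstracts, query, and use of scikit-learn are all illustrative assumptions, not anything cited in this article.

```python
# Minimal sketch, not a production pipeline: rank hypothetical clinical-trial
# abstracts by similarity to a researcher's question using TF-IDF vectors.
# Assumes scikit-learn is installed; the texts below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "Phase 2 trial of an mRNA vaccine candidate measuring neutralizing antibody titers.",
    "Observational study of ICU staffing levels and patient outcomes during surge periods.",
    "Randomized trial of an antiviral agent in hospitalized adults with pneumonia.",
]

query = ["Which vaccine candidates report neutralizing antibody responses?"]

# Vectorize the corpus and the query in the same term space.
vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(abstracts)
query_vector = vectorizer.transform(query)

# Score each abstract against the query and print them best-first.
scores = cosine_similarity(query_vector, doc_vectors)[0]
for score, text in sorted(zip(scores, abstracts), reverse=True):
    print(f"{score:.2f}  {text}")
```

Real research tools would layer knowledge graphs, entity extraction, and far larger corpora on top of this idea, but the core task is the same: surface the handful of relevant studies out of thousands.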

Here is a webinar on AI in radiology.

Is your company ready for AI?


While most of the focus seems to be on the potential of emerging technologies and how to accelerate them, there are several other barriers to AI dissemination and implementation. Perhaps the most obstructive one is the people part.

Here is one version from Scalingup.



Authors of an HBR article noted that, contrary to popular belief, digital transformation is less about technology and more about people. You can pretty much buy any technology, but your ability to adapt to an ever more digital future depends on developing the next generation of skills, closing the gap between talent supply and demand, and future-proofing your own and others’ potential.

The secret to AI is people, but building AI at scale also requires standardized processes and tools. AI development used to be the responsibility of a single "data science" team, but AI at scale cannot be produced by one team alone; it requires a variety of distinct skill sets, and very few individuals possess all of them. For example, a data scientist creates algorithmic models that accurately and consistently predict behavior, while an ML engineer optimizes, packages, and integrates research models into products and monitors their quality on an ongoing basis. One individual will seldom fill both roles well. Compliance, governance, and risk require an even more distinct set of skills. As AI scales, more and more expertise is required.

The challenge is how to educate and train doctors and patients to win the fourth industrial revolution, and how to enable them to change their behavior.

Here are some ways to overcome the roadblocks:

  1. Put people first
  2. Harden the soft skills
  3. Drive change from both the top-down and bottom-up
  4. Change your culture through leaderpreneurship
  5. Lead innovators, don't manage innovation systems
  6. Use the right data to drive the right decisions when solving the right problems
  7. Think big, start small and stay small until you are ready to take the next steps
  8. Be sure you have the right structure and processes in place
  9. Empower AI champions and project teams to align with strategic priorities
  10. Know when to lead from the back instead of the front
  11. Teach data literacy
  12. Here's what to measure to see if you are ready

But developing people to create AI is one thing; convincing people to use AI is another, since you must move them from awareness to intention to decision to action, and that means explaining the why before the what and how. A failure to use design thinking will result in a failed AI or digital health initiative if your solution does not do the jobs clinicians want it to do.

Using AI to find and monitor people is another trend. However, businesses and their service providers are grappling with how to comply with New York City’s mandate for audits of artificial intelligence systems used in hiring.

A New York City law that comes into effect in January will require companies to conduct audits to assess biases, including along race and gender lines, in the AI systems they use in hiring. Under New York’s law, the hiring company is ultimately liable—and can face fines—for violations.

Focus on the people part. Otherwise, you'll wind up with a VCR clicker that no one knows how to program or wants to use.

Arlen Meyers, MD, MBA is the President and CEO of the Society of Physician Entrepreneurs, on Twitter @SoPEOfficial.
