Balancing the rewards and risks of AI tools

AI’s promise of time and money saved has captivated employees and business leaders alike. But the real question is: is it too good to be true? As enticing as these rewards may be, the risks of this new technology must also be seriously considered.

Balancing the risks and rewards of AI is giving many organizations pause as they grapple with the right way to adopt it. Every deployment in every organization is going to look different -- meaning that the balance of risk and reward will also differ from scenario to scenario. Here, we’ll talk through the promised rewards and the potential pitfalls of adopting generative AI technologies, along with some guiding questions to help determine whether it’s the right move for your business.

The Rewards

While it’s talked about plenty, it is worth noting that generative AI has in fact changed the game when it comes to making AI available to the masses. Large language models (LLMs) like ChatGPT have captivated everyday workers and consumers in a way that earlier AI and automation technologies never did. What’s more, almost anyone can use the technology without knowing anything about coding or statistics.

This means that anyone can now do research, write code, produce content, and more at new speeds, enhancing productivity and freeing up time. Because of this accessibility, McKinsey estimates that generative AI could contribute up to $4.4 trillion to the economy annually, drastically increasing labor productivity across sectors. The potential for the technology -- once unlocked -- is enormous.

The Risks

However, AI does come with a set of risks -- and depending on your business, the outcomes you want to see, and the kinds of data you use, it’s worth carefully considering whether the risks outweigh the benefits.

First and foremost, there’s the issue of data. For AI to act on a set of information, it must have visibility into that source data -- in other words, you can’t ask ChatGPT to write a blog summarizing the last six product updates unless it has access to information detailing those updates. This information, however it’s amassed, should be examined for accuracy, relevance, and, most importantly, confidentiality -- which becomes critical when dealing with public LLMs. If a user uploads private information to a public LLM, that information leaves the company’s control -- and under the End User License Agreements people routinely accept but rarely read, the provider may retain it and use it to train future versions of the model, meaning confidential company information could effectively end up in the public domain. Identifying areas where internal firewalls and permissions are lacking is imperative to avoiding data leakage and the loss of proprietary information.
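One practical safeguard against that kind of leakage is a redaction layer that scrubs obviously sensitive strings from prompts before they ever leave the firewall. The Python sketch below is a minimal, hypothetical example -- the pattern list and the redact/safe_prompt helpers are illustrative assumptions, and a real deployment would substitute its own data classification rules:

    import re

    # Illustrative patterns only -- a real deployment would tune these to its own data.
    CONFIDENTIAL_PATTERNS = [
        re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"),  # email addresses
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                               # US SSN-style numbers
        re.compile(r"(?i)\b(?:api[_-]?key|secret|password)\s*[:=]\s*\S+"),  # inline credentials
    ]

    def redact(text: str, placeholder: str = "[REDACTED]") -> str:
        """Replace anything matching a confidential pattern with a placeholder."""
        for pattern in CONFIDENTIAL_PATTERNS:
            text = pattern.sub(placeholder, text)
        return text

    def safe_prompt(user_text: str) -> str:
        """Gate every prompt bound for a public LLM endpoint through the redactor."""
        cleaned = redact(user_text)
        if cleaned != user_text:
            print("warning: confidential content was redacted before submission")
        return cleaned

    if __name__ == "__main__":
        prompt = "Summarize our Q3 updates. Contact jane.doe@example.com, api_key=sk-12345"
        print(safe_prompt(prompt))
        # -> Summarize our Q3 updates. Contact [REDACTED], [REDACTED]

The specific patterns matter less than the architecture: confidential data should be caught before submission to a public endpoint, not discovered afterward.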

However, it’s not just the data going into the LLM that companies must worry about -- it’s also the information they get back. No one should take the answers produced by generative AI tools at face value. Answers can be biased, inaccurate, or simply made up. It’s important to understand where the model is getting its information and to read every answer with a healthy degree of skepticism before it is used or promoted in any significant way. And remember that generative AI output may not be eligible for copyright protection -- be wary of putting your name on anything written by generative AI, as you could end up in a copyright nightmare.

How to Decide if You’re Ready

To determine what steps your organization should take -- and whether you’re ready to make the AI leap -- consider asking these questions:

  • Why do we want to adopt this technology? What results do we want to see from it?
  • What use cases would be best suited to seeing these results?
  • What generative AI tools will allow us to reach these goals?
  • How would our customers and partners feel about us using this technology?
  • What could go wrong and what would it mean for the business?

How you answer the questions above will likely inform your next step. A small bank holding lots of personal information might want to avoid generative AI until it determines how to better safeguard its data with the tool. A law firm, however, might find it helpful for summarizing the legal research needed for a case. It depends on the use case, the company, and ultimately, the users.

If an organization does decide to take the plunge and invest in AI, policy setting and employee education are crucial steps to mitigating risk. A company should develop and share an AI policy outlining acceptable tools, acceptable use, what information can and cannot be put into an LLM, a summary of the relevant End User License Agreements, and how violations of the policy will be handled. Employees should not only be familiar with the policy but also receive training on the tools and have a clear channel for questions.
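To make such a policy enforceable rather than purely aspirational, some teams encode its rules directly in their tooling. Here is a minimal, hypothetical sketch along those lines -- the AIUsagePolicy class, the tool names, and the restricted terms are assumptions for illustration, not a prescribed implementation:

    from dataclasses import dataclass, field

    @dataclass
    class AIUsagePolicy:
        """Hypothetical policy-as-code: approved tools plus restricted topics."""
        approved_tools: set = field(default_factory=lambda: {"internal-llm", "vendor-chat"})
        restricted_terms: set = field(default_factory=lambda: {"customer ssn", "source code", "m&a"})

        def check(self, tool: str, prompt: str) -> tuple[bool, str]:
            """Return (allowed, reason) for a proposed prompt to a given tool."""
            if tool not in self.approved_tools:
                return False, f"tool '{tool}' is not on the approved list"
            lowered = prompt.lower()
            for term in self.restricted_terms:
                if term in lowered:
                    return False, f"prompt touches a restricted topic: '{term}'"
            return True, "ok"

    if __name__ == "__main__":
        policy = AIUsagePolicy()
        allowed, reason = policy.check("vendor-chat", "Draft a blog post about our M&A plans")
        print(allowed, reason)  # False prompt touches a restricted topic: 'm&a'

Even a simple check like this turns the written policy into something the tooling can apply consistently, rather than relying on each employee to remember the rules.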

Generative AI adoption is going to look different for every company -- and it might not be an all-or-nothing scenario. A business might decide that employees can use these tools for research but not for writing, for example. Understanding the risks and rewards is the first step of any new technology deployment. By striking the right balance, developing policies, and mitigating risk, organizations can begin to reap the benefits of AI while also ensuring its responsible use.

Michael Gray is Chief Technology Officer at Thrive, where he is responsible for the company’s R&D and technology road-mapping vision, while also heading the security and application development practices.
