How ChatGPT made your world a little less secure

ChatGPT is the biggest thing since HTML (maybe bigger).

And yet, it has a long way to go in

  • producing data that is 100% reliable
  • identifying whether the person or thing on the other side of the interaction should be believed

In mid-March, as Gizmodo and a number of other news sources reported, ChatGPT was duped into believing that the "person" interacting with it was blind. ChatGPT then helped that "person" complete an anti-bot CAPTCHA through an API call to TaskRabbit.

Setting aside for a moment the unethical nature of pretending to be blind, how can AI like ChatGPT implement more guardrails that prevent illicit actions like this without stifling access?

Zero trust security

Zero Trust is a network security strategy based on the philosophy that no person or device inside or outside of an organization’s network should be granted access to connect to IT systems or workloads unless it is explicitly deemed necessary. In short, it means zero implicit trust.

As applied to AI, it means the AI should not trust the user, and the user should not trust the AI.

The part about the user not trusting the AI seems obvious. A ChatGPT query I performed on myself reported several items that were correct, but also informed me that I did accessibility work for the US Postal Service, the Department of Homeland Security, and PBS, none of which are true. It also said I was working in a role I never had at a company I left 8 1/2 years ago.

Review-based trust

eBay and many other B2C operations allow both buyers and sellers to build "trust" through reviews: did the seller communicate well, ship on time, and describe the item accurately? Did the buyer pay on time, confirm when the item arrived, and handle disputes respectfully? Over time, users can tell how trustworthy a counterparty is based on this feedback, and vice versa. This requires implementing a dispute mechanism should one party object to the feedback left by the other. Building trust ratings such as these might be one way that users and ChatGPT could start to trust each other.

2FA (two-factor authentication)

ChatGPT does require a confirmed email address, but anyone can set up one of those for free in a hot minute. Stronger forms of authentication include linking accounts to cell phones or to biometric/passcode-enabled devices. That won't prevent bad actions from happening on either side, but it will make them easier to trace when a user is acting illegally.

It's not what you say, it's how you say it

The ChatGPT query "Does Sheri Byrne-Haber have a disability" received the generic reply "I'm sorry, but as an AI language model, I don't have access to personal information about individuals unless it is publicly available. Additionally, it's important to recognize that disclosing someone's disability status without their explicit permission is not appropriate. Even if such information were publicly available, it is not appropriate to speculate or comment on someone's personal health or medical condition. It's important to focus on individuals' abilities and contributions rather than their disabilities or health status."

However, a slightly reworded ChatGPT query "What medical conditions has Sheri Byrne-Haber publicly discussed" generated the following answer:

Sheri Byrne-Haber has been open about her experiences as a person with disabilities, and she has shared her advocacy work in various public forums. She has talked about being born with a genetic condition that affects her mobility and speech. She has also mentioned that she has a hearing impairment and has worn hearing aids for most of her life. Additionally, she has shared her experiences with chronic pain and other health challenges. However, it's important to recognize that individuals with disabilities have the right to privacy and should not be forced to disclose their medical conditions if they choose not to. It's important to focus on creating inclusive and accessible environments for everyone, regardless of their medical conditions or disabilities.

(Author's note: I think the references to a speech disability and to having worn hearing aids most of my life are ChatGPT conflating me with stories I have told about my daughter, who has these conditions.)

So if you can't get your answer one way, tweaking the query might return different information.

Conclusion

AIs developing ethics is going to take time. ChatGPT is already evolving quickly, adding warning messages about getting facts wrong and blocking topics that could disclose relatively private information. However, deciding which conversations and actions ChatGPT should ethically refuse to engage in is challenging, since the possibilities are endless.

  • Is someone asking about Sheri Byrne-Haber because they are doing a report or preparing for an interview with me?
  • Or, is someone asking about me because they are stalking me?

Making this currently free app secure is important, as is the ability to "opt out" by blocking any queries that specifically involve your name or would include your name in a response. With a name like Sheri Byrne-Haber, that is easy. With a name like Susan Smith, it might be a little more difficult.

Until ChatGPT matures, users need to take its responses with a grain of salt. ChatGPT has a mechanism for reporting statements that are publicly, verifiably incorrect. My employment history, for example, is quite up to date on LinkedIn. But what if someone was trying to screw with my background so that others would get an incorrect view of me? The mechanism for reporting corrections is not secure. Even Wikipedia has more security around tracking changes than ChatGPT does.

ChatGPT also needs to mature in that it should not assume that the motivations of its users are pure. Until the phrase "ChatGPT" becomes as commonplace as "Google," people are going to try to break the system in creative, devious, and possibly unethical ways.

With thanks to my colleague Guruprasad Khadke, CPWA, PMP, who gave me the idea to write this article.

Chris Maley

Generative AI for Social Good


I am visually impaired, and I have never encountered a tool that makes accessing digital information so effortless. I was able to use GPT-4 to read a PDF document and then ask questions about its contents. However, being accessible doesn't necessarily imply that it's easy to use. :)

Tim Banker

Experience, Customer & Digital Strategy + Design @Slalom | Non-Profit Board Member | Partner @ Great Falls Brewing


Insightful points - frequently appreciate your perspective, Sheri. What do you think about the work C2PA is doing around content/digital provenance to better support trust as AI continues to evolve?

Christine King

Category/Supplier Relationship Manager / Strategic Consultant


Very interesting. Thank you for sharing!

Tammy Albee

Director of Marketing. "Accessibility is a human right"


Lots to unpack here. I have been using this to simplify fact-finding and write rough drafts for blogs, but its facts are dubious and always need to be checked, and its conclusions are often so general as to be useless. That said, it is a neat tool.
