Convenience, progress and ethics

I set up an AI assistant

I want to share a recent experience that touches on the intersection of technology and ethics. It’s an experience that raised important questions about how we integrate AI into our daily lives and professional routines.

A couple of weeks ago I finally did something I’d been researching for quite some time: I created an AI-powered virtual assistant to help me deal with my emails, which were getting a bit out of hand. I’d realised that a significant number of the emails I receive fall into predictable categories, such as “Can we meet?” If I could automate replying to that sort of email with a helpful, relevant, customised response, it would improve efficiency both for me and for the person contacting me.

Ultimately, I was letting people down by not getting back to them quickly enough on things that ought to have been relatively straightforward. An automated out-of-office response feels generic and isn’t suitable for everyone. A personalised response that politely acknowledges the email and provides relevant information without needing to rely on my availability adds value to everybody.

The benefits to all parties seemed clear to me but, naturally, there were risks. The AI could misunderstand the nuance of an email and produce a reply that was a bit annoying. It could get it completely wrong and write something I’d be ashamed to have my name associated with. Many other things could have gone wrong, so it was important to think it through properly.

I therefore started by testing the AI assistant myself until I was happy with what it was doing, and then extended the trial to include a couple of others. They all said they found it useful, and that it had an enjoyable novelty value, which gave me the confidence to switch it on for a couple of days to see how it worked in real time and learn some lessons.

Mostly, it did exactly what I wanted. There were a few quirks that needed ironing out, but on the whole people were understanding, particularly as I'd made sure it introduced itself in every email: “I’m James, Sam’s AI assistant.”

Not everybody liked James

While most people seemed to enjoy meeting James, with one person claiming it was the coolest thing she’d ever seen, one of its replies left a bad impression. Wendy-Ann Smith, author of several excellent books on coaching ethics and host of the Coaching Ethics Forum, posted publicly about her experience, describing it as “surprising and uncomfortable”, having left her “pretty annoyed”. She said she had “so many issues with this” and described it as “fraught with ethical issues”.

I don't think she liked it very much.

This was ultimately down to the fact that I was using AI to process emails from her “without consent”. She took the experience to a group of coaches that discusses ethics and said they were “all of the mind it is not appropriate”. In her words, “it has made [her] not want to engage with [me] via email,” because I “take so [much] for granted”.

Wendy-Ann has done a huge amount to facilitate global conversations around ethics in coaching. I have huge respect for her work in general, and I have respect for her concerns around my AI assistant. Receiving an email from someone’s email address but not from that person can always feel a bit off – I share Wendy-Ann’s discomfort when someone’s PA replies to an email I’ve sent to a client, for example.

Ethics is deeply important to me, and I've put a lot of thought into the topic. But I disagree with her on this point so thought it would be helpful to look at what's going on here. For the record, I sent this article to her in advance for her comments because it would feel unethical for me to talk about it publicly without including her in that process (isn’t ethics a fascinating topic?!).

Benefits, risks and ethics

As I mentioned earlier, I didn't implement “James” without having carefully considered the benefits and risks. I believe that Wendy-Ann’s primary issue is that her email was read by a system she didn’t realise would be involved. She wanted to be told in advance what would happen with the email she sent, and really she wanted to be asked, so that she could give permission for it to be used in that way.

I understand that position, but I think we need to be pragmatic and consider the broader context. When someone sends an email, the whole story isn't simply that Person A sends and Person B receives. Perhaps with the emergence of Web 3.0 that will become the case one day, but we’re not there yet. When someone sends an email, it has to be accessed by multiple systems, almost none of which the sender will know about.

Multi-component ecosystems

In sending the email to me Wendy-Ann needed to give access to its contents to her internet service provider, email provider, the application she wrote it in, the manufacturer of the device she wrote it on, potentially other applications running in the background on the device, and potentially other devices depending on the location she was accessing the internet from. Email isn't one-to-one communication, even if it might appear that way.
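You can see some of this machinery for yourself in any email's raw source: every `Received:` header records a server that handled the message in transit. A short sketch using Python's standard `email` module makes the point (the message below is invented for illustration):

```python
from email import message_from_string

# A fabricated raw email: each "Received:" header was stamped by one
# server that handled the message on its way to the recipient.
RAW = """\
Received: from mail-out.example.org (mail-out.example.org [203.0.113.7])
    by mx.recipient.example (Postfix) with ESMTPS id ABC123
Received: from sender-laptop ([198.51.100.4])
    by mail-out.example.org with ESMTP
From: wendy@example.org
To: sam@example.com
Subject: Quick question

Hello!
"""

msg = message_from_string(RAW)

# Each hop names a system that had full access to the message contents.
hops = msg.get_all("Received")
for hop in hops:
    receiving_host = hop.split(" by ")[1].split()[0]
    print(receiving_host)
```

Two headers, two servers, and that's before counting the sender's device, applications and network. Real messages routinely show half a dozen hops or more.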

In fact, it’s even more complex than that. Upon receiving the email I may have done all sorts of things with its content, depending on how I approach my typical day. I could have:

  • taken notes in my physical notepad.
  • stored images of those hand-written notes in cloud storage.
  • taken notes in a digital product, replicated across cloud servers.
  • activated plugins to detect potential calendar entries or tasks.
  • automatically processed them via a productivity tool.
  • used a browser plugin to get insights on LinkedIn profiles.
  • automatically tracked it in a CRM system.
  • automatically summarised and categorised the email using AI.

The list is virtually endless; I don’t do many of these things and you might do others. And as much as transparency over data processing is important, I’ve never heard anyone say it’s unethical to take notes on something somebody said. The nature of email is that it is fundamentally not a confidential platform for communication.

So the ethics of which systems should have access to which sorts of data is a bit more nuanced than we might want it to be. Where should we draw the line?

Maybe drawing the line before the first example feels the cleanest. If I send an email and, without permission, you use its content anywhere at all beyond reading it in its original context, is that wrong? That doesn’t feel sensible to me.

Maybe everything in that little list of examples is ok, but we should draw the line when the tool automatically replies to somebody, because then the veil is lifted! Perhaps it’s alright for me to use AI to automatically draft emails on my behalf so I can review and press send under my own name, but an AI being transparent about the fact it’s in use generates discomfort because (at the moment) it’s unfamiliar.

Where I’m landing

I’ve reflected on this a lot since reading Wendy-Ann’s post and having engaged with her over it since. When it comes down to it, emails are about as insecure as a communications technology can get, beaten perhaps only by SMS. The number of people and systems that might access any email is very high. I’ve never asked others what tools they use to know where my email’s going, and I’ve never proactively offered the information about what I use. Maybe we all need to change our habits on that.

When I set James up, I could easily have made it reply as if it were me. That felt ethically wrong, but ironically would probably not have generated the same level of discomfort. I believe ethical transparency means being transparent about the tools we use, especially when it involves communication.

If you’re curious about what systems I use and what data is sent between them, please do ask! In fact, I’ve carved out time in my diary specifically for coaches looking to engage with a consultant on this sort of topic to support adoption of the benefits of automation and so on. If you’d like to have a conversation about what that might look like for you, please do get in touch.

More than that, however, I think we need to increase our awareness of the technologies we rely on, even for something as simple as sending an email. James was Wendy-Ann’s first experience of interacting with an AI assistant...as far as she knows. If we’re not there already, I expect we soon will be at a point where there are only two groups of people: those who have received a reply from an AI, and those who haven’t realised they’ve received a reply from an AI.

I also think we need open dialogue about ethical concerns. It’s essential to embrace curiosity, experimentation and discomfort in order to step into the future together. I believe the ethical way to address these concerns is through open discussion, not by reinforcing fears in echo chambers.

I'll finish by returning to where I started. Every new technology brings with it new ethical questions that need answering, and they need weighing up against the benefits and the known risks. If an AI assistant increases the quality of interaction people have, through timely responses and reducing our addiction to email notifications, is that good enough? Or are there some boundaries we ought to never cross under any circumstances?

James was only ever going to be an experiment for a couple of days, so I’m afraid if you email me today you won’t get an AI reply (as far as you know! Joke.). Integrating any new technology into our daily routines and professional lives is a complex issue that requires careful consideration, and we have an ethical responsibility to approach the future of AI with a balanced perspective, ensuring that we make informed decisions that benefit every part of the systems we’re a part of.

Mandy Geddes PCC, PIECL

Director, Coach Education IECL (Institute of Executive Coaching & Leadership) * accredited/ credentialed org coach * coach educator, nurturing confident, competent org coaches * coach supervisor * writer * marketer *

4mo

Such an interesting experiment, thanks for sharing it with us, Sam Isaacson Plenty to think about here, and ethical considerations to reflect on. I'm so looking forward to your session at the IECL Leadership Summit in September!

Sam Isaacson I am happy to share that I really liked James and did think it was really cool. James responded to my mail quickly, paraphrasing the salient points, and I am extremely curious to see how I can use this technology moving forward. I also agree that James has his parameters and that he should be used carefully, as there could be room for ethical issues or misinterpretation. For general responding it worked exceptionally well and I enjoyed the experience.

Maria Newport

Global Executive/Board/Leadership/Team Coach | Coach Supervisor & Mentor | MSc HRM (Org.Psych.) | PCC ICF | Facilitator | Mediator | Non-Executive Director | GAICD | Former Lawyer | 🇦🇺 🇺🇸 🇬🇧

5mo

I love the way you experiment with these technologies and push our boundaries on these issues, Sam - but unfortunately, and to be honest, my user experience was not ideal. I too was a bit taken aback I wasn’t warned about the use of the technology beforehand (we all know nothing digital is necessarily private, but I thought I was writing to you!). I know a comment was made about PAs accessing emails, but a great executive PA is valued as much for their discretion as their skills, in my experience. I also didn’t like the long and clunky repetition of my prior email - it felt VERY canned! I would have preferred an OOO alert and personal response, at your convenience. I agree digital communication systems are complex and privacy not guaranteed but I think the potential damage vs efficiency of AI and the impact on human beings are ethical questions we do need to continue to discuss on an ongoing basis. All good reminders about privacy and data and balancing efficiency and expediency with trust and rapport, not only to be ethical and professional but provide a great user experience. Thanks for prompting the reflection and debate - always appreciated. 🙏

Tony Latimer, MCC

Training the next generation of Masterful Executive Coaches | Coaching Leadership Transitions | Building Leadership Teams | Rapid Organisational Change | Innovative Leadership Speaker

5mo

Sam, many of the senior execs I coach have a PA who goes through their email first before they touch it. Is that any different?

Arthur Jones

We transform stories of life and leadership into compelling strategies, tools, and tactics for driving personal and professional growth. Wayfinder, AI Coach Humanist, Business & Executive Coach

5mo

The automation platform Zapier was launched in 2012. It is a web automation tool that connects apps and services to automate workflows without coding. Today, over a dozen similar automation tools support business efficiency. I have never heard anyone challenge the ethics of using workflow automation tools. How is using AI different from the automation we have used for decades?
