What Ethical Considerations Should Coaches Keep in Mind When Using AI Tools?

The following article was written by a machine, but that doesn’t stop it being helpful. Its purpose is to get you here, and here you are! If you’d like to find out how it works, book a conversation with Sam Isaacson.

TL;DR: As AI tools become increasingly integral to coaching, ethical coaching with AI demands deliberate attention to principles such as transparency, confidentiality, fairness, and preserving the human connection at coaching’s heart. Coaches must clearly disclose AI’s role, ensure informed client consent, vigilantly protect data privacy, actively mitigate algorithmic bias, and balance AI capabilities with human empathy to maintain trust and psychological safety. This article unpacks core ethical considerations for responsible AI use in coaching, informed by authoritative codes, research, and thought leadership — including my work in The Digital and AI Coaches’ Handbook and guidance from the EMCC and ICF — to support coaches embracing ethical AI integration in their practice.


What Are the Core Ethical Principles in AI-Enabled Coaching?

Ethical coaching with AI builds on longstanding coaching ethics but faces novel challenges introduced by technology. The International Coaching Federation (ICF) and the European Mentoring and Coaching Council (EMCC) both emphasise principles vital for any coaching relationship:

  • Confidentiality: The client’s personal and sensitive information must be protected rigorously, especially when data passes through AI platforms.
  • Client Autonomy and Informed Consent: Clients need to understand how AI tools will be used, what data is collected, and retain full control over participation.
  • Transparency: Coaches must openly communicate AI’s role, showing honesty about its capabilities and limitations.
  • Fairness and Equity: AI systems can inherit biases that risk discriminatory or unfair treatment, which coaches have a responsibility to identify and mitigate.
  • Human-Centred Values: Empathy, intuition, and creativity are uniquely human and remain the foundation. AI should augment—not replace—these qualities.

These principles are not abstract ideals but practical guardrails requiring coaches to be vigilant and proactive. The EMCC has engaged leaders in coaching technology ethics, including me, to help define frameworks for these emerging challenges (see the EMCC Global Code of Ethics and related conversations).


How Should Coaches Approach Transparency and Informed Consent When Using AI?

Transparency and informed consent are cornerstones of ethical coaching with AI and serve to build, rather than erode, trust.

Coaches should consistently:

  • Explicitly Inform Clients about when AI tools are involved in coaching sessions, including the specific functions—whether AI is used for scheduling, data analysis, feedback generation, or conversational support.
  • Explain Data Uses and Limits: Detail what client data is collected, stored, and possibly shared, referencing compliance with standards such as GDPR.
  • Discuss AI’s Capabilities and Limitations: Avoid overstating what AI can do; be honest that AI provides supportive insights but cannot replace human judgement or empathy.
  • Obtain Clear Consent: Clients should agree voluntarily to the use of AI, with the option to opt out or limit AI involvement if they prefer.

One example of good practice is to provide clients with an AI coaching disclosure statement prior to engagement, outlining these key points clearly. Educating clients about AI fosters a partnership built on respect and choice.
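For coaches who already use digital intake or scheduling systems, the disclosure-and-consent conversation can be backed by a simple structured record. The sketch below is purely illustrative, assuming a Python-based system; the field names are hypothetical and not drawn from any specific platform:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIConsentRecord:
    """Illustrative record of a client's informed consent to AI use."""
    client_ref: str                    # pseudonymous reference, never a real name
    ai_functions_disclosed: list[str]  # e.g. ["scheduling", "session summaries"]
    data_uses_explained: bool          # data collection and sharing explained to the client
    opted_out_functions: list[str] = field(default_factory=list)
    consent_given_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# A client consents to scheduling and summaries, but opts out of conversational support.
record = AIConsentRecord(
    client_ref="client-0042",
    ai_functions_disclosed=["scheduling", "session summaries"],
    data_uses_explained=True,
    opted_out_functions=["conversational support"],
)
```

Keeping a record like this makes the client’s choices auditable and easy to revisit if the scope of AI use changes.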

Keeping clients informed also aligns with ethical guidance discussed within the CoachVox ethical AI framework, which stresses that trust depends on transparency.


What Risks Do AI Tools Pose to Confidentiality and Data Privacy in Coaching?

Data privacy concerns are heightened when coaching uses AI tools, because:

  • AI platforms often process client data on cloud servers that may sit outside the client’s own jurisdiction.
  • Automated data handling increases the risk of unintentional data leaks or cyberattacks.
  • The proliferation of third-party vendors in AI ecosystems complicates accountability.

To mitigate these risks, coaches should:

  • Choose AI providers with strong security protocols, end-to-end encryption, and clear privacy policies.
  • Control data access tightly, ensuring only authorised persons and systems engage with client information.
  • Comply with relevant data protection laws, such as the UK’s Data Protection Act and GDPR.
  • Minimise data collection to what’s essential, applying anonymisation or pseudonymisation where possible (see the sketch after this list).
  • Conduct due diligence and regular audits on AI tools for privacy compliance and security vulnerabilities.
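As a minimal sketch of pseudonymisation in practice, a coach or platform might replace client identifiers with salted, non-reversible tokens before any data reaches an AI service. The code below is a hypothetical Python illustration using only the standard library, not a production implementation:

```python
import hashlib
import hmac

# Secret key held only by the coaching practice and never shared with the AI vendor.
# In production this would live in a secrets manager, not in source code.
SECRET_KEY = b"replace-with-a-securely-stored-random-value"

def pseudonymise(client_id: str) -> str:
    """Map a client identifier to a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, client_id.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

# The AI tool sees only the token, never the client's real identity.
session_note = {
    "client": pseudonymise("jane.doe@example.com"),
    "note": "Explored confidence in stakeholder meetings.",
}
```

Because the same identifier always maps to the same token, session history stays linkable for the coach while remaining meaningless to the vendor.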

Ignoring these considerations risks breaching ethical codes and damaging client relationships. These steps are crucial for responsible AI use in coaching and form part of the extensive guidance in The Digital and AI Coaches’ Handbook (Passmore, Diller, Isaacson & Brantl, Routledge, 2024).


How Can Coaches Recognise and Mitigate Bias in AI Coaching Solutions?

AI bias occurs when coaching tools produce results that systematically disadvantage or misrepresent certain client groups due to biased training data or algorithm design flaws.

Common sources include:

  • Training datasets lacking diversity,
  • AI that reinforces societal stereotypes,
  • Algorithms not regularly updated or audited.

Coaches can play a crucial role in addressing AI coaching bias by:

  • Asking vendors about their bias mitigation strategies, data sources, and testing processes before adopting AI tools.
  • Engaging in ongoing monitoring of AI outputs to detect patterns of unfairness or systemic error (see the sketch after this list).
  • Advocating for inclusive AI design that accommodates diverse identities, cultures, languages, and coaching needs.
  • Supplementing AI insights with critical human judgement, recognising where AI might misinterpret or oversimplify complex human issues.
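One way to make that monitoring concrete is to log AI outputs alongside a coarse client-group tag and periodically compare group-level results. The following is a minimal sketch under assumed data; the log format, group labels, and the 0.1 threshold are all hypothetical and would need to reflect your own fairness criteria:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical log of AI-generated feedback scores, tagged by client group.
ai_feedback_log = [
    {"group": "A", "score": 0.82},
    {"group": "A", "score": 0.78},
    {"group": "B", "score": 0.61},
    {"group": "B", "score": 0.58},
]

# Collect scores per group, then compare group averages.
scores_by_group = defaultdict(list)
for record in ai_feedback_log:
    scores_by_group[record["group"]].append(record["score"])

averages = {group: mean(scores) for group, scores in scores_by_group.items()}
gap = max(averages.values()) - min(averages.values())

# Flag a persistent gap for human review; the threshold is illustrative only.
if gap > 0.1:
    print(f"Review needed: average score gap of {gap:.2f} across groups {averages}")
```

A flagged gap is not proof of bias, but it is exactly the kind of pattern a coach should raise with the vendor and examine with critical human judgement.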

A 2023 review published by Boldly highlights these risks and the practical steps coaches and organisations can take for fairer AI adoption in coaching settings (Boldly blog on AI ethics in coaching).


Balancing AI and the Human Element: Preserving Trust and Psychological Safety in Coaching Relationships

While AI brings scale, consistency, and new insights, the essence of coaching remains deeply human.

To preserve trust and psychological safety, coaches should:

  • Prioritise the human connection: Use AI as a supportive tool, not a substitute for empathy, presence, and intuitive understanding.
  • Maintain active listening and emotional attunement: AI cannot replicate nuanced human emotions or respond fully to contextual subtleties.
  • Set clear boundaries on AI use: Define what decisions are AI-assisted versus coach-led and communicate this clearly to clients.
  • Encourage co-creation of the coaching process: Involve clients in discussions about how AI supports their progress, creating a collaborative stance.
  • Continuously reflect on the impact of AI on rapport: Evaluate client comfort and adjust practices to maintain a safe and trusting environment.

The emerging “AI-enhanced coaching triad” model recognises AI as one participant alongside coach and client, with the human relationship remaining central to ethical and effective coaching (EPraxis article on AI coaching triad).
