Microsoft Message ID: MC1230459 – 2026-02-10 | Microsoft Teams: Voice tethering

Microsoft 365 Update

💡 Our Technical Review in summary

Summary

  • Microsoft Teams is introducing “Voice tethering,” an accessibility enhancement that attributes interpreted speech in meetings to the correct participant.
  • When a sign language interpreter speaks on behalf of a Deaf or hard‑of‑hearing (D/HH) participant, Teams will attribute the spoken words to the D/HH participant rather than the interpreter.
  • This attribution applies to live captions, meeting transcripts, and Microsoft 365 Copilot outputs (notes, summaries, and action items).
  • The update is tied to Microsoft 365 Roadmap ID 553223 and is scheduled for rollout between March and April 2026.

Impact

  • Meeting Intelligence Accuracy: AI-generated summaries and Copilot insights will now correctly identify the D/HH participant as the contributor of ideas and action items, ensuring accountability and accurate records.
  • Identity Preservation: The identity of the sign language interpreter is separated from the content of the message, ensuring the D/HH user is recognized as the active meeting participant.
  • Automatic Integration: The feature is enabled by default whenever Sign Language Mode and interpreter assignments are used. No manual activation is required for individual meetings.
  • User Experience: All participants viewing live captions or post-meeting transcripts will see the D/HH participant’s name associated with the interpreted speech. For example, if an interpreter (say, Alex) voices for a D/HH participant (say, Jordan), captions and transcripts show “Jordan,” not “Alex.”

Action Required

  • No Admin Configuration: There are no administrative toggles or controls required to enable this feature. It will be deployed automatically to all tenants.
  • Update Documentation: IT Admins should update internal training materials, accessibility guides, or documentation related to interpreter workflows in Microsoft Teams.
  • Stakeholder Communication: Notify D/HH employees and sign language interpreters about the change in speech attribution to ensure they are aware of how their contributions will appear in transcripts and Copilot.
  • Helpdesk Awareness: Inform support staff that this is a functional improvement and not a “sync error” or “spoofing” of participant identities in meeting logs.

Microsoft Official Update

Service: N/A
Category: stayInformed
Severity: normal


[Introduction]

Voice tethering builds on the recent introduction of Sign Language Mode in Microsoft Teams. When a sign language interpreter voices on behalf of a Deaf or hard‑of‑hearing (D/HH) participant, Teams will now attribute captions, transcripts, and meeting intelligence—such as Copilot notes, summaries, action items, and insights—to the D/HH participant rather than the interpreter. This update ensures accurate representation in meetings, clarifies who is contributing to the conversation, and improves downstream meeting accuracy and accountability.

This message is associated with Microsoft 365 Roadmap ID 553223.

[When this will happen]

  • Targeted Release (Worldwide): We will begin rolling out in mid-March 2026 and expect to complete by late March 2026.
  • General Availability (Worldwide, GCC): We will begin rolling out in early April 2026 and expect to complete by mid-April 2026.

[How this affects your organization]

Who is affected

  • Organizations with meeting participants who are Deaf or hard‑of‑hearing and use sign language interpreters in Teams meetings.
  • Any users who participate in meetings with Sign Language Mode enabled.

What will happen

  • Voice contributions made by interpreters will be attributed to the D/HH participant across:
    • Live captions
    • Meeting transcripts
    • Copilot notes, summaries, action items, and insights
    • Other Teams meeting intelligence
  • Meeting data becomes more accurate by ensuring the correct participant is represented.
  • Interpreter identity is no longer conflated with that of the D/HH participant they support.
  • Sign Language Mode is already available to all users; voice tethering enhances it automatically.
  • The feature is on by default when Sign Language Mode and interpreter assignment are used.
  • No admin controls are required to enable or manage the feature.

[What you can do to prepare]

No action is required.

Optional preparation steps:

  • Inform D/HH users and interpreters that speech attribution in meetings will change.
  • Update internal training or accessibility resources if you document interpreter workflows.
  • Notify helpdesk or support teams that captioning and transcript attribution will appear differently for interpreted meetings.

[Compliance considerations]

No compliance considerations identified. Review as appropriate for your organization.