The AI Notetaker: An Uninvited Party Guest Who May Need to Leave

AI-powered notetaking tools have quickly become common participants in the modern workplace. Tools such as Otter.ai, Fireflies and Zoom AI Companion can record conversations, transcribe them in real time, and summarize or extract key action items, all with minimal human intervention. As these tools become increasingly integrated into conferencing platforms, many companies are adopting them without fully appreciating the legal risks they introduce; indeed, the AI notetaker often joins a call before any human participant from the company does. While these tools can provide substantial benefits, it is essential for companies to understand the risks inherent in them as well and to adopt appropriate guardrails.
 
What Are AI Notetakers?
AI notetakers are voice-to-text tools that record meetings and calls, transcribe the conversation in real time, and generate summaries or action items from the discussion. They are available both as standalone applications and as built-in features of major video conferencing platforms. While they offer obvious productivity benefits, they also create a detailed, time-stamped record of conversations that might not otherwise exist. There is also no guarantee that the transcription is accurate, and transcripts are almost never reviewed and corrected by anyone who actually participated in the meeting. This permanent record brings with it a host of legal considerations.
 
Attorney-Client Privilege and the Risk of Waiver
Perhaps the most significant concern with AI notetakers is the potential to lose attorney-client privilege. When a privileged conversation is recorded and transmitted to or stored by a third-party AI provider, the confidentiality that underpins the privilege may be compromised. Cloud-based AI providers may not qualify as agents of the client or the attorney, which means that sending privileged communications through these tools could be treated as disclosure to an outside party. Additionally, many AI notetakers have auto-sharing features that can distribute transcripts to unauthorized recipients, potentially constituting a waiver of privilege. Also, depending on where and how the transcriptions are stored, they may be available to unintended participants. The key question companies and their counsel must consider is whether the use of a third-party AI transcription tool destroys the confidentiality necessary to maintain privilege.
 
Confidentiality and Security Risks
Beyond privilege, AI notetakers raise broader confidentiality and cybersecurity concerns. These tools may store transcripts on external servers, use meeting data to train their models unless the company has explicitly opted out, and lack adequate encryption or access controls. Many also lack robust data governance policies. The implications are significant: companies face exposure to internal leaks, cybersecurity vulnerabilities and potential breaches of NDAs.
 
Equally important are the questions that arise around access and internal filing. Companies should carefully consider where AI-generated notes are sent and stored, who has access to them, and whether the notes can be modified after they are stored. An all-encompassing, timestamped record of a meeting is a document that would not otherwise exist, and companies must be aware of how that document is managed and determine whether they want a record of the conversation to exist at all. If a company does choose to retain an AI-generated transcript, that transcript should be reviewed and corrected for accuracy before it is stored, as AI transcription tools tend to be imperfect. An uncorrected transcript may contain errors that create misleading or inaccurate records.
 
Data Privacy and Consent Considerations
AI notetakers also implicate a patchwork of state and international privacy laws. Many jurisdictions require notice or consent before a conversation can be recorded or transcribed. In all-party consent states (often called "two-party consent" states) such as California, Florida and Pennsylvania, every participant must consent to the recording. Internationally, the EU's General Data Protection Regulation requires a lawful basis for processing personal data, including voice recordings and transcripts. Critically, many AI tools are not configured to notify meeting attendees that recording or transcription is taking place, which can expose companies to civil and regulatory liability for noncompliance.
 
Regulatory and Compliance Risk
Transcripts generated by AI notetakers may become discoverable records in a range of legal contexts, including SEC and DOJ investigations, internal investigations and employment litigation. This introduces a significant risk of the unintended creation of records that become subject to document retention obligations, litigation holds or regulatory inquiries. Companies that do not have clear policies governing the retention and deletion of AI-generated transcripts may find themselves preserving far more information than intended, with real consequences in the event of litigation or a regulatory investigation.
 
Intellectual Property and Contractual Concerns
AI notetakers also raise questions about intellectual property ownership and contractual protections. In many cases, the terms of service governing these tools favor the vendor, raising questions about who actually owns the notes and transcripts the tool generates. When meetings involve discussions of sensitive strategy, trade secrets or proprietary code, companies face a heightened risk of IP leakage. Compounding the problem, many companies adopt AI notetaking tools without entering into a data processing agreement with the vendor and without negotiating service level agreements or warranties around confidentiality. These contractual gaps leave companies exposed.
 
The Boardroom: A Heightened-Risk Context
While AI notetakers pose risks across all corporate settings, the stakes are particularly high when these tools are used in board and committee meetings. Board-level discussions routinely involve the most sensitive strategic, financial and legal matters a company faces. The introduction of AI transcription into that setting amplifies every risk discussed above.
 
The broader trend and benefits are clear—AI is poised to reshape how boards function, how they process information, and how they interact with management. A recent Stanford study found that AI has the potential to reduce information asymmetries between boards and management, allow for real-time analysis during meetings, and supplement or even replace work currently performed by outside advisors.
 
However, this increased capability comes with increased risk. AI-generated transcripts of board meetings could become discoverable in shareholder litigation, SEC enforcement actions or DOJ investigations. AI monitoring and analysis may generate red flags that, once documented, the board is obligated to investigate, potentially increasing the board's own liability if it fails to follow up. Directors' fiduciary duties to investigate potential red flags exist regardless of whether those issues are documented. The concern with AI transcription is not that it creates obligations that would otherwise not apply, but rather that it removes the board's discretion over what gets memorialized. Traditional board minutes reflect the board's judgment about what is material to the discussion. By contrast, an AI-generated transcript captures everything indiscriminately, including hypothetical scenarios, casual remarks and preliminary concerns, elevating them to the same status as the board's deliberate conclusions. The result is that AI monitoring and analysis may surface and permanently document issues that the board, exercising its judgment, might have determined were not material, potentially increasing the board's exposure if those documented issues are later scrutinized in litigation or regulatory proceedings.
 
A related concern arises when directors independently use AI tools to conduct research or analyze board materials. These tools may surface issues that a director would not otherwise have encountered, and once those issues appear in a transcript, AI-generated summary or search history, they may be treated as known to the board. This expands the universe of documented information that could be scrutinized in litigation or regulatory proceedings. For this reason, companies should provide directors with clear guidance on which AI tools have been vetted and approved for board use, including tools that offer appropriate confidentiality protections, access controls and data segregation. Directors using unapproved AI tools risk conducting searches and generating outputs that are stored on third-party servers without adequate security, effectively converting confidential board deliberations into unprotected, externally accessible information. Protecting board data from cybersecurity threats and unauthorized access should be a central consideration in how AI is adopted and governed in the boardroom.
 
Given these risks, companies should approach the use of AI notetakers in board settings with particular caution, applying all of the best practices outlined below and considering whether certain board discussions, particularly those involving privileged legal advice, pending litigation or M&A strategy, should be excluded from AI transcription entirely.
 
Best Practices
Companies looking to manage the risks of AI notetakers should consider the following steps:
 
  • Review vendor terms and data processing agreements. Carefully review the terms of use and data processing agreements of all AI notetaking tools in use across the organization. Understand what data the vendor collects, how it is stored and used, and whether the vendor has the right to train its models on your data.
 
  • Ensure adequate security controls. Require encryption, access controls and secure storage for all AI-generated transcripts and recordings.
 
  • Restrict use in privileged settings. Prohibit or carefully restrict the use of AI notetakers in meetings involving attorney-client privileged communications, unless the company has confirmed that adequate protections are in place to preserve confidentiality.
 
  • Draft or update acceptable use policies. Establish clear, company-wide policies governing the use of AI transcription tools, including procedures for use, storage, retention, and deletion of recordings and transcripts.
 
  • Train employees. Provide training on consent requirements, confidentiality obligations, and the risks of auto-sharing features. Employees should understand both the legal requirements and the practical steps they need to take before activating an AI notetaker.
 
  • Implement opt-in protocols. Consider requiring sign-off from legal counsel before AI notetakers can be used in certain categories of meetings, particularly those involving sensitive or privileged subject matter.
 
Conclusion
AI notetakers offer real productivity benefits, but they can also create a tremendous amount of legal exposure that many companies have not yet fully considered or addressed. From privilege waiver to privacy compliance to the unintended creation of discoverable records, the risks are significant and span multiple areas of law. As these tools become more deeply embedded in everyday business operations and as AI adoption accelerates at the board level, companies that take a proactive approach to policy, training and vendor management will be best positioned to capture the benefits of the technology while minimizing their legal risk.
 