Using an AI notetaker (like Otter, Teams, Fireflies or Laxis... there are plenty of them) to take automatic notes of meetings is probably one of the least controversial uses of AI. After all, using pen and paper is so... 2020, isn't it? And tapping away at a keyboard can be genuinely distracting for everyone.
But there are a few key issues to watch out for.
Depending on how you set up your notetaker, you may find that a transcription of the call is sent to people you might not have expected - not just to attendees, but also to those who were invited and didn't show up.
That might include people not involved in the matter or, where events are re-scheduled, even the "other side" of the negotiation.
In this article, we explore how AI notetaking tools can compromise legal privilege and create data protection liabilities, particularly when transcripts are distributed beyond their intended recipients or used to train AI systems without proper consent.
Legal privilege
For lawyers in particular this poses an additional risk. If the recipients were those meant to receive legal advice, does the transcript still attract "legal privilege" (the principle that legal advice about contentious issues is confidential by its very nature and should not be disclosable in a dispute)?
If a lawyer was advising a client to, for instance, notify a data breach to the ICO, or to immediately report a health and safety breach, then losing legal privilege would mean the transcript was disclosable in court - a real problem for a company that received that advice and ignored it.
And if those who didn't attend were not involved in the dispute then, for those recipients at least, the advice was arguably not legally privileged in nature.
Data protection: Brewer v Otter.ai case
Then there are the data concerns. Do the delegates at the meeting know they are being recorded and transcribed? What about those who turn up late? Did those who signed up for the call know, or expect, that their contact details and comments would be shared with everyone else on the call?
Most providers of notetaking solutions put the onus firmly on their customer (the person setting up the call) to comply with all relevant legislation. This is one instance where the "human in the loop" carries a fair bit of responsibility: ensuring that all of the right people are informed and that only the right people receive the transcript.
A recent claim against Otter.ai alleges that the record of the call is used not only by the delegates but also to continuously train the AI solution itself. In other words, in some cases the AI company itself benefits from the data being processed.
Issues for AI providers
This is an issue for many AI providers: on what basis can they use the data their users collect for their own purposes, to continually improve their solutions? I have seen plenty of AI solution providers seek to "fudge" this in their terms and conditions, with wording suggesting that use of an AI solution allows both the AI provider and its customer to use the inputted data.
When done clumsily, this can risk the following:
- the end user discovering their information has been used by the AI provider in a way they weren't expecting (and hadn't agreed to);
- the AI provider's customer finding itself exposed to claims from those end users; and
- the AI provider itself not being able to use the data.
Key lesson for using AI notetakers
If you're recording your calls, make sure both you and the others on the call know what will happen to the recording. And if you're an AI provider, you should probably consider how you might make that task easier for your customers.
Contact

Richard Nicholas
Partner
richard.nicholas@brownejacobson.com
+44 (0)121 237 3992