
Technology-assisted review (“<span class="news-text_medium">TAR</span>”) emerged as a response to the exponential growth of electronic documents in commercial litigation. As disclosure volumes expanded into the hundreds of thousands, and often millions, of documents, manual review by large legal teams became increasingly costly, slow and vulnerable to inconsistency.
Early e-disclosure tools relied heavily on keyword searching. While effective to a degree, keywords proved to be a blunt instrument, frequently missing relevant material or capturing large volumes of irrelevant documents. TAR represented a significant advance, using examples of relevant and irrelevant documents to train systems to identify patterns across datasets. Although adoption was initially cautious, TAR became widely accepted by the late 2010s and is now firmly embedded in mainstream disclosure practice.
Generative AI builds on this foundation. Rather than replacing TAR, it adds a further layer of capability, offering enhanced document analysis, thematic understanding and summarisation at scale. As Reuben Vandercruyssen of Hogan Lovells observes, TAR has long been encouraged by the courts and reflected in procedural guidance; GenAI is now extending what is considered normal and proportionate in large disclosure exercises.
Practice Direction 57AD, which came into force in October 2022 alongside the Disclosure Review Document (“<span class="news-text_medium">DRD</span>”), remains the cornerstone of disclosure in the Business and Property Courts. The DRD provides a structured mechanism for parties to agree disclosure methodologies at an early stage and creates accountability if disputes arise later.
While Practice Direction 57AD expressly addresses TAR, it does not deal directly with generative AI, reflecting the pace at which the technology has developed since the practice direction was drafted. This gap has contributed to a degree of hesitation among litigation teams, particularly given Practice Direction 57AD’s emphasis on transparency, cooperation and proportionality.
In September 2025, the International Legal Technology Association published guidance for litigators in England and Wales on the responsible use of generative AI in disclosure. Originally issued as an addendum to its Active Learning guidance, the Generative AI Best Practice Guide now stands alone as a practical playbook for court-ordered disclosure.
Co-chaired by Fiona Campbell of Fieldfisher and Tom Whittaker of Burges Salmon, the guide responds directly to the absence of procedural guidance on GenAI within Practice Direction 57AD. It is designed to operate within existing disclosure frameworks, offering guardrails rather than prescriptions. The guidance is deliberately modular, allowing teams to adopt its recommendations in full or selectively, depending on the use case.
The emphasis is on early engagement between parties, avoiding unnecessary court intervention and ensuring that generative AI is deployed in a way that is understandable even to those without deep technical expertise.
In practice, disputes about generative AI are unlikely to be resolved by abstract arguments about risk or efficiency. Instead, success will turn on whether a proposed approach can be explained, justified and defended under Practice Direction 57AD.
The underlying message is straightforward: Practice Direction 57AD already expects serious parties to use technology to achieve proportionality. Generative AI is now becoming part of that expectation.
Despite its growing role, generative AI is not a decision-maker. Fiona Campbell stresses that while generative AI can perform exceptionally well in first-level review and data analysis, legal judgment remains indispensable. Recent authority reinforces that AI should support, not replace, human decision-making.
In practice, generative AI is often most effective when used to understand datasets holistically: identifying themes, constructing chronologies and summarising complex material. Used in this way, it enhances established methodologies rather than displacing them.
Looking ahead, it is widely anticipated that AI will increasingly handle initial review stages that are currently performed by teams of paralegals, accelerating timelines and reducing costs. This shift will place greater importance on documenting workflows and quality controls within the DRD, with expectations around transparency continuing to evolve.
Market attitudes towards generative AI have shifted rapidly over the past 18 months. Major eDisclosure platforms are integrating generative AI features into existing systems, rather than offering standalone solutions. Pushback from opposing parties is increasingly rare, reflecting Practice Direction 57AD’s emphasis on cooperation and early engagement.
As several practitioners note, generative AI has the potential to democratise disclosure by enabling smaller practices to conduct sophisticated disclosure exercises that were previously cost-prohibitive. For lower-value cases in particular, generative AI’s ability to identify thematically relevant documents, build timelines and surface connections beyond keyword searches can deliver outsized benefits. This capability reduces the need for repeated amendments to the DRD and supports a more accurate assessment of what disclosure is truly required.
Recent judicial decisions suggest a pragmatic approach to disclosure technology. Rather than mandating specific tools, courts are increasingly focusing on accountability, transparency and defensibility, leaving methodological choices to the lawyers responsible for the exercise.
This approach aligns with the work of the Disclosure Review Working Group, which has recently sought feedback on how Practice Direction 57AD operates in practice. The prominence of questions relating to TAR and AI in that consultation indicates that these technologies are no longer peripheral but central to modern disclosure.
Further refinements to Practice Direction 57AD are anticipated, particularly around the level of detail required when explaining disclosure methodologies. The aim is likely to reduce recurring friction over technology choices and to establish clearer expectations of what good practice looks like.
Generative AI is moving decisively from experimentation to expectation. By 2026, it is set to become a standard component of disclosure best practice. Parties who can clearly explain how generative AI has been used, supported by a robust audit trail and human oversight, will be best placed to realise its efficiency gains—while avoiding unnecessary procedural disputes. Ultimately, generative AI is a tool. Its value depends not on novelty, but on how thoughtfully and responsibly it is integrated into established legal processes.
