Selected Work

Governance Frameworks

Evidence-based frameworks and published research designed to help courts, legal institutions, and policymakers navigate AI adoption without sacrificing accountability, legitimacy, or public trust. Each framework is grounded in research infrastructure and oriented toward the institutional design questions practitioners actually face.

Distributional Analytics — distributional analysis reveals who the justice system is actually serving, rather than how many and how quickly (throughput analytics). See Beyond Disposal Rates: Why Distributional Analytics Will Define the Legitimacy of Digital Courts

Taxonomy of AI Tools — a structured classification of AI tools deployed in court and legal institutional contexts, by function, risk profile, and proximity to official output. See AI arrives in the Courts

Authority Exercise Test — a method for measuring how much public authority an AI tool exercises in practice, and what governance obligations follow

Proportional Governance Frameworks — governance requirements calibrated to the authority-exercise profile of each tool category

AI Procurement Framework for Justice Sector Actors — a risk-aligned procurement methodology for courts and legal institutions evaluating AI tools

Born-Digital Courts and Process Proportionality — design principles and evidence drawn from digitally native court models, examining the structural advantages of governance-first institutional design

The Architecture of Trust — institutional design as the hidden infrastructure of the AI era; why legitimacy, governance, and accountability must be built in before policy is debated

Research Data Sets & Insights

Structured datasets built to give justice sector actors evidence-based visibility into the AI tools, deployments, controversies, and governance frameworks shaping their operating environment. These datasets replace speculation with structured evidence usable for procurement, risk analysis, and policy design.

NPAI Tools Tracker — a structured database of AI tools relevant to the justice sector, cataloguing tool categories, capability fields, pricing models, and deployment context across the full range of applications from case management to decision support

Controversies Database — 253+ documented AI controversies relevant to justice sector actors, structured for risk analysis and procurement due diligence

AI Deployment Tracker — tracks experimental and operational AI deployments across justice sector subcategories; current data includes 86 experimental and 13 operational court AI deployments globally

AI Policy Frameworks Library — a structured index of mandatory and voluntary governance frameworks applicable to justice sector AI adoption, mapped by jurisdiction and instrument type

Case File Test Pack — research-grade synthetic corpus, fully customisable

The JusticeData Case File Test Pack is a research-grade synthetic matter corpus designed for justice technology, digital court, and legal AI testing. It combines a canonical reference matter with configurable distributions of ordinary and edge-case files so teams can evaluate drafting engines, case-management systems, workflow automation, analytics pipelines, and AI models under conditions that more closely resemble live operational environments.

The dataset includes richly structured court and party metadata, pleadings-level fact patterns, event-based chronologies, procedural schedules derived from commencement dates, and machine-readable fields for document assembly and system validation. It also supports separately indexed outlier matters for robustness testing, allowing teams to vary the proportion of atypical files, stress-test failure modes, and verify expected behaviour against known scenarios across different procedural settings and jurisdictional profiles.

Matters can be generated to reflect the conventions of different jurisdictions, with configurable court labels, procedural assumptions, and filing conventions layered onto a consistent core schema. This makes the pack suitable for product demonstrations, QA environments, model evaluation, regression testing, and comparative benchmarking where realistic but non-live justice data is needed.

The JusticeData Case File Test Pack is the only commercially available dataset specifically designed for justice-tech testing and evaluation, rather than adapted from generic enterprise examples or public case-law corpora built for retrieval alone. Existing legal datasets and benchmarks tend to focus on case retrieval, citation analysis, or general legal tasks, whereas this pack is designed for operational testing of justice workflows, AI deployment, filings, and system behaviour.
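The configurable mix of ordinary and edge-case matters described above can be sketched as follows. This is a minimal illustration only: the field names (matter_id, court_label, is_edge_case, and so on) and the generation logic are assumptions made for demonstration, not the actual Test Pack schema or tooling.

```python
import random
from dataclasses import dataclass

# Illustrative sketch: these fields are assumptions for demonstration,
# not the actual JusticeData Case File Test Pack schema.
@dataclass
class SyntheticMatter:
    matter_id: str
    court_label: str
    commencement_date: str
    is_edge_case: bool  # flags outlier matters used for robustness testing

def generate_corpus(n: int, edge_case_ratio: float,
                    court_label: str = "Demo Court", seed: int = 0):
    """Generate n synthetic matters with a configurable share of outliers.

    A fixed seed makes the corpus reproducible, so expected behaviour can
    be verified against known scenarios across test runs.
    """
    rng = random.Random(seed)
    corpus = []
    for i in range(n):
        corpus.append(SyntheticMatter(
            matter_id=f"M-{i:05d}",
            court_label=court_label,
            commencement_date=f"2024-{rng.randint(1, 12):02d}-{rng.randint(1, 28):02d}",
            is_edge_case=rng.random() < edge_case_ratio,
        ))
    return corpus
```

Varying `edge_case_ratio` is the lever for stress-testing failure modes: the same pipeline can be run against a 5% and a 30% outlier mix to compare system behaviour.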

Justice Data Standard v2.2 Commercial Guide

Courts and tribunals remain among the least digitally mature major public institutions. While finance, health, trade and statistics have all undergone systematic digital transformation underpinned by shared data standards, justice systems still rely on fragmented, vendor‑specific schemas and manual data exchange. The result is non‑interoperable case‑management systems, limited visibility of performance and access‑to‑justice outcomes, and significant barriers to responsible deployment of AI and analytics across the justice lifecycle.

JusticeData v2.2 is a modular data standard for courts and tribunals. It specifies how core justice data (cases, parties and identity, documents and evidence, hearings and procedure, decisions and reasons, fees and costs) is structured, exchanged and secured across systems and jurisdictions. The standard comprises ten interlocking modules that operate as a single interoperable framework, spanning case metadata, parties, documents, hearings, decisions, security, integration, analytics, governance and ethics.

For early adopters, the Minimum Viable Justice Data (MVJD) Profile defines a lean conformance slice of the standard focused on case metadata, parties and baseline security/privacy. It is designed to be implementable within existing case‑management systems while still delivering immediate interoperability and analytics gains. The MVJD profile also underpins the Justice Data Test Pack, allowing vendors and court IT teams to exercise real systems against realistic but non‑live matters before moving toward full‑module implementation.
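A conformance slice like the MVJD Profile can be exercised with a simple completeness check. The sketch below is illustrative only: the required-field lists are assumptions chosen to mirror the three areas named above (case metadata, parties, baseline security/privacy), not the normative JusticeData v2.2 field set.

```python
# Illustrative only: these field lists are assumptions for demonstration,
# not the normative MVJD Profile of JusticeData v2.2.
MVJD_REQUIRED_FIELDS = {
    "case": {"case_id", "case_type", "commencement_date", "status"},
    "parties": {"party_id", "role", "name"},
    "security": {"classification", "access_policy"},
}

def check_mvjd_conformance(record: dict) -> list:
    """Return a sorted list of missing required fields; empty means conformant."""
    missing = []
    for section, required in MVJD_REQUIRED_FIELDS.items():
        payload = record.get(section, {})
        if section == "parties":
            # parties is a list of party records; each must carry the required keys
            for idx, party in enumerate(payload or [{}]):
                for f in required - set(party):
                    missing.append(f"parties[{idx}].{f}")
        else:
            for f in required - set(payload):
                missing.append(f"{section}.{f}")
    return sorted(missing)
```

A check of this shape lets a vendor or court IT team validate exported records against the lean profile first, before attempting full-module implementation.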

JusticeData v2.2 is technology‑neutral and jurisdiction‑agnostic. It is designed to guide and support technology developers, vendors and justice IT teams, and can be adopted in full by public institutions and suppliers without prescribing substantive law or local procedure. Courts can retain full control over legal rules and workflows, while converging on a common data layer that makes digital transformation, interoperability and AI governance tractable.

AI-Powered Products Under Development

Commercial governance technology products designed to close the gap between rapid AI adoption and the accountability standards justice institutions require.

Products are built directly from the evidence base in the research infrastructure — structured datasets and governance frameworks informing tools that work at the point where AI enters justice institutions.

Governance tools that sit between AI tool deployment and official institutional outputs, enabling certification, chain-of-custody, and accountability at the point where AI enters official records

Justice sector AI governance layer applicable across the full range of AI tools deployed within a court or legal institution

Born-digital court platforms, with governance as a critical feature of the architecture

Sector Expertise: Law & Justice

Technology companies and legal tech players building into justice markets face a distinct challenge: institutions are risk-averse by design, legitimacy-constrained in ways that don't appear in standard market analysis, and structurally resistant to procurement pathways that work elsewhere.

Nicolas supports technology and legal technology players as a sector expert — providing insight on justice sector institutions, legitimacy constraints, governance design integration, access to justice principles, fairness, transparency & bias, and proportionality to improve product alignment and unlock market opportunity.

Market Insights

Global, structured datasets, updated weekly, produce game-changing market insights for justice tech developers, vendors, regulators, systems integrators, courts, police, parole, corrections, law firms, insurers and other justice sector and justice sector adjacent actors.

The datasets are analysed by AI-enabled tools to identify market patterns, anomalies, implications, risks and opportunities bespoke to the subscriber.

  • Data Insight #25: Together, ‘data rights + data protection’ emerge as the de facto constraint layer for AI adoption in justice, shaping what can be built, bought, and deployed.

    Model performance may take a back seat as procurement and design choices focus increasingly on privacy/data protection, contestability, and security requirements.

  • Data Insight #26: Low-Risk, High-Throughput: Courts are clustering AI adoption around guided e-forms, rule enforcement, and filing defect reduction. The pattern suggests institutional risk appetite is shaped less by capability than by proximity to official output and contested decision-making.

  • Data Insight #102: Adoption controversy is converging on one strategic question:

    “Is AI responsible for part of a decision that affects rights, and can the public meaningfully challenge it?”

    This is one of the key adoption issues driving transparency litigation and governance demands.

Boards

Boards confronting AI governance obligations rarely have directors who bring both the institutional design depth and the sector-specific experience to support governance decision-making.

As regulatory expectations rise and stakeholder scrutiny of AI risk intensifies, the gap between nominal and genuine board-level AI governance capability is widening.

With more than two decades of experience as a lawyer and senior partner in a global law firm, as a sustainability and responsible business professional, and as a board member and board chair, Nicolas is a trusted advisor to, and member of, boards navigating the social impact risk created by AI deployment, reputational risk, AI regulation, evolving stakeholder expectations, and responsible business practices.

“System-level thinking combined with a strategic approach to innovation.”

Board Chairperson, Global Professional Services Firm.