Selected Work

Governance Frameworks

Evidence-based frameworks and published research designed to help courts, legal institutions, and policymakers navigate AI adoption without sacrificing accountability, legitimacy, or public trust. Each framework is grounded in research infrastructure and oriented toward the institutional design questions practitioners actually face.

  • Distributional Analytics — distributional analysis reveals who the justice system is actually serving, rather than how many cases it handles and how quickly (throughput analytics). See Beyond Disposal Rates: Why Distributional Analytics Will Define the Legitimacy of Digital Courts

  • Taxonomy of AI Tools — a structured classification of AI tools deployed in court and legal institutional contexts, by function, risk profile, and proximity to official output. See AI arrives in the Courts

  • Authority Exercise Test — a method for measuring how much public authority an AI tool exercises in practice, and what governance obligations follow

  • Proportional Governance Frameworks — governance requirements calibrated to the authority-exercise profile of each tool category

  • AI Procurement Framework for Justice Sector Actors — a risk-aligned procurement methodology for courts and legal institutions evaluating AI tools

  • Born-Digital Courts and Process Proportionality — design principles and evidence drawn from digitally native court models, examining the structural advantages of governance-first institutional design

  • The Architecture of Trust — institutional design as the hidden infrastructure of the AI era; why legitimacy, governance, and accountability must be built in before policy is debated

Research Data

Structured datasets built to give justice sector actors evidence-based visibility into the AI tools, deployments, controversies, and governance frameworks shaping their operating environment. These datasets replace speculation with structured evidence usable for procurement, risk analysis, and policy design.

  • NPAI Tools Tracker — a structured database of AI tools relevant to the justice sector, cataloguing tool categories, capability fields, pricing models, and deployment context across the full range of applications from case management to decision support

  • Controversies Database — 253+ documented AI controversies relevant to justice sector actors, structured for risk analysis and procurement due diligence

  • AI Deployment Tracker — tracks experimental and operational AI deployments across justice sector subcategories; current data includes 86 experimental and 13 operational court AI deployments globally

  • AI Policy Frameworks Library — a structured index of mandatory and voluntary governance frameworks applicable to justice sector AI adoption, mapped by jurisdiction and instrument type

AI-Powered Products Under Development

Commercial governance technology products designed to close the gap between rapid AI adoption and the accountability standards justice institutions require.

Products are built directly from the evidence base in the research infrastructure — structured datasets and governance frameworks informing tools that work at the point where AI enters justice institutions.

  • Governance tools that sit between AI tool deployment and official institutional outputs, enabling certification, chain-of-custody, and accountability at the point where AI enters official records

  • Justice sector AI governance layer applicable across the full range of AI tools deployed within a court or legal institution

  • Born-digital court platforms, with governance as a critical feature of the architecture

Sector Expertise: Law & Justice

Technology companies and legal tech players building into justice markets face a distinct challenge: institutions are risk-averse by design, legitimacy-constrained in ways that don't appear in standard market analysis, and structurally resistant to procurement pathways that work elsewhere.

Nicolas supports technology and legal technology players as a sector expert — providing insight into justice sector institutions, legitimacy constraints, governance design integration, access to justice principles, fairness, transparency and bias, and proportionality, to improve product alignment and unlock market opportunity.

Market Insights

Global, structured datasets, updated weekly, produce game-changing market insights for justice tech developers, vendors, regulators, systems integrators, courts, police, parole, corrections, law firms, insurers, and other justice sector and justice-adjacent actors.

The datasets are analysed by AI-enabled tools to identify market patterns, anomalies, implications, risks and opportunities bespoke to the subscriber.

  • Data Insight #25: Together, ‘data rights + data protection’ emerges as the de facto constraint layer for AI adoption in justice, shaping what can be built, bought, and deployed.

    Model performance may take a back seat as procurement and design choices increasingly focus on privacy and data protection, contestability, and security requirements.

  • Data Insight #26: Low-Risk, High-Throughput: Courts are clustering AI adoption around guided e-forms, rule enforcement, and filing defect reduction. The pattern suggests institutional risk appetite is shaped less by capability than by proximity to official output and contested decision-making.

  • Data Insight #102: Adoption controversy is converging on one strategic question:

    “Is AI responsible for part of a decision that affects rights, and can the public meaningfully challenge it?”

    This is one of the key adoption issues driving transparency litigation and governance demands.

Boards

Boards confronting AI governance obligations rarely have directors who bring both the institutional design depth and the sector-specific experience to support governance decision-making.

As regulatory expectations rise and stakeholder scrutiny of AI risk intensifies, the gap between nominal and genuine board-level AI governance capability is widening.

With more than two decades of experience as a lawyer and senior partner in a global law firm, a sustainability and responsible business professional, and a board member and board chair, Nicolas is a trusted advisor to, and member of, boards navigating the social impact risks created by AI deployment, reputational risk, AI regulation, evolving stakeholder expectations, and responsible business practices.

“System-level thinking combined with a strategic approach to innovation.”

Board Chairperson, Global Professional Services Firm.