
How Robin AI handles uncertainty with N/A answers

Reports Learning Path
  1. Getting Started: Run your first Report  |  Run a report on your stored contracts
  2. Review & Verify: Verify insights using citations  |  Understanding N/A Responses
  3. Customize & Refine: Customize report templates  |  Prompt writing tips  |  Leverage answer types and answer previews

Introduction

When creating reports in Robin, you can assign a different Answer Type to each topic. Answer types are instructions that tell Robin how to format its answer. You can select the more detailed "Summary" type for comprehensive explanations, or more concise answer types that format answers as dates, numbers, or yes/no responses.

When using these more concise answer types, you may occasionally see "N/A" responses in your report.

What "N/A" means

For these more structured answer types, "N/A" is standardized shorthand indicating one of three things: the relevant information isn't found in the contract, the answer to your question doesn't neatly fit the formatting constraints you've given the AI, or Robin isn't confident in its answer.

With longer "Summary" answer types, Robin instead uses plain language to explain why information couldn't be found, as these formats allow for more detailed explanations.
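To make the behavior above concrete, here is a minimal sketch of how a constrained answer type can collapse to "N/A." This is a hypothetical illustration only, not Robin's actual implementation: the format patterns, confidence score, and threshold are all illustrative assumptions.

```python
import re

# Hypothetical format constraints for concise answer types.
# These patterns are illustrative assumptions, not Robin's real rules.
ANSWER_PATTERNS = {
    "date": r"^\d{4}-\d{2}-\d{2}$",    # e.g. 2026-06-30
    "number": r"^-?\d+(\.\d+)?$",      # e.g. 500000
    "yes/no": r"^(Yes|No)$",           # e.g. Yes
}

def format_answer(raw_answer, answer_type, confidence, threshold=0.7):
    """Return the extracted answer only if something was found, it fits
    the requested format, and confidence is high enough; else "N/A"."""
    if raw_answer is None:
        return "N/A"  # nothing relevant found in the contract
    if confidence < threshold:
        return "N/A"  # model isn't confident in its answer
    pattern = ANSWER_PATTERNS.get(answer_type)
    if pattern and not re.match(pattern, raw_answer):
        return "N/A"  # answer doesn't fit the formatting constraint
    return raw_answer

print(format_answer("2026-06-30", "date", 0.95))    # fits: returned as-is
print(format_answer("next quarter", "date", 0.95))  # wrong format: N/A
print(format_answer(None, "number", 0.99))          # not found: N/A
```

The point of the sketch is that all three failure modes funnel into the same standardized "N/A" output, which is why a single "N/A" in a report can have any of the three meanings listed above.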

N/A in practice

Imagine you've asked about liability caps in a supplier agreement, but the contract contains no liability language at all. After analyzing the contract, Robin responds with "N/A."

This response means the supplier agreement doesn't contain liability provisions. Rather than speculating or providing an ambiguous answer, Robin has clearly communicated this absence.

Why it matters

This approach reflects a fundamental principle for using AI in legal contexts: effective AI systems must have appropriate ways to handle uncertainty and absence of information. Legal professionals should be cautious of systems that always provide definitive answers, as these may generate misleading information when faced with ambiguity.

More trustworthy legal AIs recognize their boundaries, communicate clearly when they cannot find requested information, and enable 'human in the loop' workflows to ensure accuracy. Transparency about limitations goes hand in hand with capability and performance.