The TVCR Method

A 7-step framework for measuring AI adoption quality — from raw interaction data to actionable benchmarks.

TVCR = (Aggregate BVS × PQS Weight) / Total Tokens

where TVCR is the Token Value Conversion Rate, Aggregate BVS is the sum of Business Value Scores across all interactions, PQS Weight is the Prompt Quality Score expressed as a decimal weight (0–1), and Total Tokens is the total tokens consumed across all interactions.
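The formula can be sketched in a few lines. One assumption is flagged in the code: the source defines the PQS weight only as "a decimal weight (0–1)", so this sketch normalizes the composite PQS (out of 25) to that range.

```python
def tvcr(bvs_scores, pqs, total_tokens):
    """Token Value Conversion Rate: (Aggregate BVS x PQS Weight) / Total Tokens."""
    aggregate_bvs = sum(bvs_scores)   # sum of per-interaction BVS (each out of 30)
    pqs_weight = pqs / 25             # PQS composite scaled to 0-1 (assumption)
    return (aggregate_bvs * pqs_weight) / total_tokens

# Example: 10 interactions averaging BVS 18, PQS 20/25, 4,000 tokens total
print(round(tvcr([18] * 10, 20, 4000), 4))  # prints 0.036
```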

Three Interaction Types

TVCR measures AI token consumption at the evaluation layer — not just during live AI sessions. There is no zero-token TVCR evaluation: the moment TVCR analyzes any interaction, tokens are consumed in the act of scoring it. The AI is always present at the evaluation layer.

| Type | Tokens at Interaction | Tokens at Evaluation | Scoring Model |
| --- | --- | --- | --- |
| Type 1 — Pure AI Session | Yes — real time during exchange | Yes — during scoring | Full TVCR on both layers |
| Type 2 — AI-Augmented Human Work | No — human exchange only | Yes — AI processes content | TVCR on evaluation layer |
| Type 3 — Human Interaction Review | No — human exchange only | Yes — AI scores the content | HIS score + downstream token cost |
Type 1 — Pure AI Session
Direct prompting — ChatGPT, Claude, Copilot, and similar. Tokens are consumed during the live exchange and again at the TVCR evaluation layer. Full scoring applies to both layers.
Type 2 — AI-Augmented Human Work
Email chains, meeting transcripts, documents processed by AI. The original exchange is human-only, but the TVCR evaluation layer consumes tokens when the AI scores the content. Includes Meeting Intelligence.
Type 3 — Human Interaction Review
Purely human correspondence — email, legal letters, sales communications — submitted to TVCR for performance scoring. The AI evaluation layer consumes tokens to generate a Human Interaction Score (HIS). Enables HR annual reviews, legal correspondence audit, and executive communication assessment.

Key principle: The distinction between types is not "AI vs. no AI" — it is where in the workflow tokens are consumed. TVCR always measures AI token consumption. Evaluation-layer token consumption is itself a novel measurement surface: TVCR scores the ROI of AI evaluating human work, not just AI producing work.

Seven steps to measurement


1. Collect AI interaction data
Gather emails, conversations, documents, and any other AI-assisted outputs. Each interaction becomes a measurement unit.
2. Score Prompt Quality (PQS)
Evaluate the human input across five criteria: Specificity, Contextual Completeness, Strategic Framing, Clarity of Intent, and Iteration Efficiency. Each is scored 1–5, yielding a composite PQS out of 25.
3. Score Business Value (BVS)
Rate the AI output across six rubric categories: Content Summary Quality, Thread Clarity, Solutions vs. Analysis, Resolution/Agreement, Escalation Tracking, and Decision & Outcome. Each is scored 1–5, yielding a BVS out of 30.
4. Estimate token consumption
Calculate or retrieve the total tokens consumed for each interaction, including both input and output tokens from the AI model.
5. Calculate TVCR
Apply the formula: TVCR = (Aggregate BVS × PQS Weight) / Total Tokens. The result is a decimal representing value-per-token efficiency.
6. Benchmark against scale
Compare the computed TVCR against the benchmark scale: Very High (≥0.08), High (≥0.05), Moderate (≥0.03), Low (≥0.01), Minimal (<0.01).
7. Compare across dimensions
Compare TVCR scores across persons, time periods, departments, or AI tools to identify patterns and optimize adoption strategy.
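Steps 4, 5, and 7 can be sketched end to end. The interaction records, departments, and field names below are hypothetical, invented purely for illustration:

```python
# Hypothetical per-interaction records: BVS (out of 30), PQS weight (0-1),
# and the input/output token counts from the model.
interactions = [
    {"dept": "Sales", "bvs": 24, "pqs_weight": 0.84, "in_tok": 120, "out_tok": 180},
    {"dept": "Sales", "bvs": 18, "pqs_weight": 0.72, "in_tok": 150, "out_tok": 300},
    {"dept": "Legal", "bvs": 21, "pqs_weight": 0.60, "in_tok": 400, "out_tok": 500},
]

by_dept = {}
for it in interactions:
    tokens = it["in_tok"] + it["out_tok"]              # Step 4: input + output tokens
    score = it["bvs"] * it["pqs_weight"] / tokens      # Step 5: TVCR formula
    by_dept.setdefault(it["dept"], []).append(score)

for dept, scores in sorted(by_dept.items()):           # Step 7: compare a dimension
    print(f"{dept}: {sum(scores) / len(scores):.4f}")
# prints:
# Legal: 0.0140
# Sales: 0.0480
```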

Decision Valuation Model

TVCR extends beyond token ROI to quantify the full cost of every business decision — and measure whether that cost was justified by the value produced.

Decision Cost = Σ (Attendee Hourly Rate × Time Spent) + AI Token Cost

where Decision Cost is the total cost of the decision process, Attendee Hourly Rate is the fully loaded cost per hour per participant, Time Spent is the duration of the full decision cycle including prep and follow-up, and AI Token Cost covers tokens at both the interaction and TVCR evaluation layers.
| Decision Category | Typical Participants | Key Cost Driver | TVCR Scoring Lens |
| --- | --- | --- | --- |
| Hiring | HR, hiring manager, panel | Interview time + sourcing AI usage | Candidate quality per token consumed |
| Vendor | Procurement, finance, business unit | RFP analysis + evaluation sessions | Selection quality vs. process cost |
| Process Change | Operations, IT, impacted teams | Design workshops + AI workflow modeling | Implementation value per planning token |
| Risk / Compliance | Legal, risk, compliance officers | Review cycles + AI document analysis | Risk mitigation value per token |
| Strategic Direction | C-suite, board, senior leadership | Executive time + strategic AI modeling | Highest multiplier — longest value horizon |

Key principle: The Decision Valuation Model combines TVCR scoring with fully loaded labor cost to answer the question most organizations never ask: was the cost of reaching this decision justified by the value it produced?
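A minimal sketch of the decision cost formula. The attendee figures are hypothetical, and `decision_cost` is an illustrative name, not part of any TVCR tooling:

```python
def decision_cost(attendees, ai_token_cost):
    """Decision Cost = sum(hourly rate x hours) + AI token cost.

    `attendees` is a list of (fully_loaded_hourly_rate, hours_spent) tuples,
    covering the full decision cycle including prep and follow-up.
    """
    labor = sum(rate * hours for rate, hours in attendees)
    return labor + ai_token_cost

# Hypothetical hiring decision: three interviewers plus AI screening token cost
cost = decision_cost([(120, 6), (95, 4), (95, 4)], ai_token_cost=18.50)
print(cost)  # prints 1498.5
```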

Improvement Coaching Engine

Every TVCR evaluation generates an Improvement Report — transforming each score into actionable coaching guidance that drives measurable adoption improvement over time.

1. Score Breakdown
Dimension-by-dimension PQS and BVS scores with individual ratings and explanations — so you know exactly where value was generated and where it was lost.
2. Strength Identification
Specific dimensions where performance exceeded benchmark — reinforcing effective patterns so they become repeatable practice.
3. Gap Analysis
Dimensions scoring below potential, with root-cause framing — insufficient context, unclear intent, weak resolution — so improvement is targeted, not generic.
4. Targeted Coaching Prompts
Specific, actionable recommendations for closing the identified gaps in the next interaction or decision cycle.
5. Trend Tracking
Longitudinal TVCR, PQS, and BVS history — enabling measurement of coaching effectiveness and AI adoption maturity progression over time.
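The first three report components amount to partitioning dimension scores against a benchmark. A minimal sketch, where the 3-out-of-5 threshold and the function name are assumptions for illustration:

```python
def improvement_report(dim_scores, benchmark=3):
    """Split 1-5 dimension scores into strengths and gaps (threshold assumed)."""
    return {
        "breakdown": dim_scores,                                          # 1. Score Breakdown
        "strengths": [d for d, s in dim_scores.items() if s > benchmark], # 2. Strengths
        "gaps":      [d for d, s in dim_scores.items() if s < benchmark], # 3. Gap Analysis
    }

report = improvement_report({"specificity": 5, "clarity_of_intent": 2,
                             "strategic_framing": 3})
print(report["strengths"], report["gaps"])  # prints ['specificity'] ['clarity_of_intent']
```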

NOS Method — Core Philosophy
“AI improves critical thinking — it doesn’t replace it.”

Prompt Quality Score (PQS)

PQS measures the quality of the human side of the interaction. Five criteria, each scored 1–5.

1. Specificity
How precise and detailed is the prompt? Vague requests score low; targeted, well-scoped prompts score high.
2. Contextual Completeness
Is sufficient background provided? The AI needs context to produce relevant output — missing context degrades results.
3. Strategic Framing
Is the prompt goal-oriented or generic? Prompts framed around a clear business objective generate more measurable value.
4. Clarity of Intent
Is the desired outcome unambiguous? When the AI knows exactly what you want, it delivers more actionable results.
5. Iteration Efficiency
How well do follow-up prompts narrow toward the goal? Efficient iteration means fewer tokens spent reaching the right answer.
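A sketch of the PQS composite: the five criterion scores summed to a total out of 25, with the derived 0–1 weight. The normalization by 25 is an assumption (the source says only that the weight is a decimal in 0–1), and the dictionary keys are shortened from the criterion titles:

```python
PQS_CRITERIA = ["specificity", "contextual_completeness", "strategic_framing",
                "clarity_of_intent", "iteration_efficiency"]

def pqs_composite(scores):
    """Sum the five 1-5 criterion scores into a composite PQS out of 25."""
    assert set(scores) == set(PQS_CRITERIA)
    assert all(1 <= v <= 5 for v in scores.values())
    return sum(scores.values())

scores = {"specificity": 4, "contextual_completeness": 3, "strategic_framing": 5,
          "clarity_of_intent": 4, "iteration_efficiency": 3}
pqs = pqs_composite(scores)
print(pqs, pqs / 25)  # prints 19 0.76
```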

Business Value Score (BVS)

BVS measures the quality of the AI's output. Six rubric categories, each scored 1–5.

1. Content Summary Quality
How well does the AI distill and communicate the key information from the interaction?
2. Thread Clarity
Is the conversation flow logical and easy to follow? Clear threads indicate organized, useful AI output.
3. Solutions vs. Analysis
Does the AI provide actionable solutions, or just restate the problem? Higher scores for outputs that drive action.
4. Resolution / Agreement
Did the interaction reach a clear conclusion or agreement? Unresolved threads reduce business value.
5. Escalation Tracking
Were escalation needs identified and flagged? Proper escalation handling prevents downstream problems.
6. Decision & Outcome
Did the interaction result in a clear decision or measurable outcome? The ultimate measure of business value.
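The BVS composite mirrors the PQS one: six 1–5 ratings summed to a score out of 30. A minimal sketch, with dictionary keys shortened from the category titles:

```python
BVS_CATEGORIES = ["content_summary", "thread_clarity", "solutions_vs_analysis",
                  "resolution", "escalation_tracking", "decision_outcome"]

def bvs_composite(scores):
    """Sum the six 1-5 rubric scores into a composite BVS out of 30."""
    assert set(scores) == set(BVS_CATEGORIES)
    assert all(1 <= v <= 5 for v in scores.values())
    return sum(scores.values())

scores = dict(content_summary=4, thread_clarity=3, solutions_vs_analysis=5,
              resolution=4, escalation_tracking=2, decision_outcome=4)
print(bvs_composite(scores))  # prints 22
```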

TVCR Benchmark Scale

Where does your score fall?

VERY HIGH: ≥ 0.08
HIGH: ≥ 0.05
MODERATE: ≥ 0.03
LOW: ≥ 0.01
MINIMAL: < 0.01
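The scale reduces to a threshold lookup. A sketch using the thresholds above; note that the real-world aggregate of 0.0495 reported below lands in Moderate, just under the High cutoff:

```python
# Benchmark tiers, highest floor first; scores below 0.01 fall to Minimal.
TIERS = [(0.08, "Very High"), (0.05, "High"), (0.03, "Moderate"),
         (0.01, "Low"), (float("-inf"), "Minimal")]

def benchmark(tvcr):
    """Return the first tier whose floor the TVCR score meets or exceeds."""
    return next(label for floor, label in TIERS if tvcr >= floor)

print(benchmark(0.0495))  # prints Moderate
```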

Real-World Test: 20 Business Emails

Results from the March 31, 2026 reduction to practice — the first formal TVCR measurement.

Aggregate TVCR: 0.0495
Total BVS: 313 / 600 (52.2%)
Avg BVS per Email: 15.7 / 30
Total Tokens: 5,704

Breakdown by Category

Sent: 0.0789
Draft: 0.0694
Inbox: 0.0351

Key finding: SENT emails consistently outperformed INBOX, confirming TVCR captures who generates value — not just who receives it.
