The TVCR Framework

Patent Pending

A standardized instrument for measuring business value per AI token consumed.

Author: Atomic Armstrong
Published: April 2026
Status: Patent Pending
Revised: April 9, 2026

Abstract

As enterprise AI spending accelerates past $644 billion globally, organizations face a critical measurement gap: they can track how many AI tokens they consume, but not whether those tokens produced business value. Existing tools monitor cost and volume. None answer the fundamental question investors, CFOs, and boards are asking: what did we get for what we spent?

This paper introduces the Token Value Conversion Rate (TVCR) — a patent-pending scoring framework that quantifies the business value generated per AI token consumed. TVCR comprises three novel components: the Prompt Quality Score (PQS), which evaluates the quality of human input into AI systems; the Business Value Score (BVS), which measures the tangible and strategic value of AI outputs; and the TVCR composite score, which normalizes both against token consumption to produce a single, comparable metric.

TVCR is designed for universal adoption across industries, AI platforms, and use cases — providing the standardized instrument that enterprise AI adoption has been missing.

01 — The Measurement Gap in Enterprise AI

  • $644B projected global enterprise AI spend in 2025 (Gartner)
  • 60% of AI projects fail to deliver expected value, not because the AI doesn't work
  • 85% of enterprises miss their AI infrastructure forecasts by more than 10% (CloudZero)

1.1 The Scale of the Problem

Enterprise AI investment has moved well beyond experimentation. Gartner projects global enterprise AI spending at $644 billion in 2025, with a decisive shift confirmed by Futurum's 2026 survey of 830 IT decision-makers: direct financial impact — combining top-line revenue growth and bottom-line profitability — nearly doubled as the primary ROI metric, while productivity-based justifications declined sharply.

The message from the C-suite is clear: AI must connect to the P&L.

Yet according to industry analyses, 60% of AI projects fail to deliver expected value — not because the AI doesn't work, but because organizations cannot measure whether it does. CloudZero's research found that 85% of enterprises miss their AI infrastructure forecasts by more than 10%, and 80% miss by more than 25%.

1.2 What Exists Today

The current measurement landscape includes the following categories of tools — each valuable for its stated purpose, but none answering the core business value question:

  • Token consumption trackers (CloudZero, Larridin, Vantage): Monitor how many tokens are consumed, by whom, and at what cost. Valuable for budgeting, but silent on value.
  • AI observability platforms (Datadog, Arize, Weights & Biases): Track model performance — latency, accuracy, drift. Useful for engineering teams, not for business value assessment.
  • Traditional ROI frameworks: Apply standard financial formulas but struggle with AI's compound, probabilistic, and often intangible returns.
  • Enterprise AI maturity models: Assess organizational readiness but don't score individual interactions or aggregate operational value.

None of these instruments answer the core question: per token consumed, how much business value was generated?

1.3 Why This Gap Matters Now

Token consumption is becoming the core unit of AI economics. The industry is shifting from charging for access (subscriptions) to charging for activity (tokens). Revenue rises and falls with usage, making token consumption a direct measure of demand — but not of value.

This creates a dangerous asymmetry: enterprises can quantify what they spend on AI with precision, but cannot quantify what they receive in return with any standardization. TVCR closes this gap.

02 — The TVCR Framework

2.1 Core Formula
TVCR = (Business Value Score × Prompt Quality Weight) / Total Tokens Consumed

  • BVS (Business Value Score): the tangible and strategic value of AI outputs, on a 0–100 scale
  • PQW (Prompt Quality Weight): derived from the Prompt Quality Score (PQS), normalizing for input quality
  • Tokens (Total Tokens Consumed): the sum of input and output tokens across the interaction or session

The result is a normalized score that enables comparison across interactions, sessions, teams, departments, and organizations. TVCR is the first metric designed to answer not just "how much AI did we use?" but "how much value did our AI use generate?"
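The core formula can be expressed as a short sketch. This is illustrative only: the paper does not publish the PQS-to-weight mapping or the normalization algorithm, so the linear `pqs / 100` weight and the `scale` constant below are assumptions, not the licensed method.

```python
# Illustrative TVCR sketch. Assumed (not published): a linear PQS-to-weight
# mapping (pqs / 100) and a placeholder `scale` constant that rebases the
# per-token ratio onto the 0-10 TVCR band.

def tvcr(bvs: float, pqs: float, tokens: int, scale: float = 1_000) -> float:
    """Return a 0-10 TVCR composite for one interaction or session."""
    pqw = pqs / 100.0                       # assumed prompt-quality weight
    value_per_token = (bvs * pqw) / tokens  # the core formula's ratio
    return min(value_per_token * scale, 10.0)

# Strong output (BVS 85), proficient prompting (PQS 80), 12,000 tokens:
score = tvcr(bvs=85, pqs=80, tokens=12_000)  # ~5.67, the "Moderate" band
```

Whatever the real normalization, the structural point survives: halving token consumption for the same BVS doubles the score, which is what makes TVCR comparable across sessions of different sizes.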

2.2 Prompt Quality Score (PQS)

PQS evaluates the quality of human input into AI systems. The premise is straightforward: AI output quality is bounded by input quality. An enterprise investing millions in AI infrastructure but providing poor prompts is leaving value on the table.

Dimension | What It Measures | Weight
Clarity | Specificity, unambiguity, and completeness of the prompt | 25%
Context | Relevant background, constraints, and domain framing provided | 25%
Objective Definition | Explicitness of the desired outcome, format, and success criteria | 25%
Efficiency | Conciseness relative to complexity, avoiding over-prompting or under-prompting | 25%

Range | Rating | Interpretation
90–100 | Expert | Prompts consistently maximize AI capability
75–89 | Proficient | Strong input quality with minor optimization potential
60–74 | Developing | Adequate but leaving measurable value unrealized
Below 60 | Foundational | Significant input quality improvement needed

Why PQS matters commercially: Organizations with low PQS scores are consuming tokens without maximizing value — this identifies a training and process improvement opportunity that consulting firms and AI platforms can monetize directly.
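A PQS roll-up under the published equal weights can be sketched as follows. The per-dimension rubric scoring (how a prompt earns its 0–100 marks) is the proprietary part and is not modeled here; the inputs are assumed rubric scores.

```python
# Sketch of the PQS roll-up using the equal 25% dimension weights.
# Inputs are assumed 0-100 rubric scores per dimension.

PQS_WEIGHTS = {"clarity": 0.25, "context": 0.25,
               "objective_definition": 0.25, "efficiency": 0.25}

def pqs(dimension_scores: dict[str, float]) -> float:
    """Weighted sum of 0-100 per-dimension rubric scores."""
    return sum(dimension_scores[d] * w for d, w in PQS_WEIGHTS.items())

def pqs_rating(score: float) -> str:
    """Map a PQS score onto the published rating bands."""
    if score >= 90:
        return "Expert"
    if score >= 75:
        return "Proficient"
    if score >= 60:
        return "Developing"
    return "Foundational"

example = pqs({"clarity": 80, "context": 70,
               "objective_definition": 90, "efficiency": 60})
# example == 75.0, which lands in the "Proficient" band
```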

2.3 Business Value Score (BVS)

BVS measures what the AI interaction actually produced — not whether the model performed technically, but whether the output generated business value.

Dimension | What It Measures | Weight
Task Completion | Did the AI output fulfill the stated objective? | 30%
Decision Impact | Did the output inform, enable, or accelerate a business decision? | 25%
Deliverable Quality | Professional readiness of the output (accuracy, formatting, depth) | 20%
Strategic Alignment | Does the output advance organizational goals beyond the immediate task? | 15%
Efficiency Gain | Time, cost, or resource savings relative to non-AI alternatives | 10%

Range | Rating | Interpretation
85–100 | High | Significant, measurable business value generated
70–84 | Good | Meaningful value with clear operational impact
50–69 | Moderate | Some value, but below potential given tokens consumed
Below 50 | Low | Minimal value: tokens consumed without proportionate return
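The BVS roll-up follows the same pattern as PQS but with the unequal published weights. As above, the per-dimension scoring itself is proprietary; this sketch only shows the weighted composition.

```python
# Sketch of the BVS roll-up using the published dimension weights.
# Inputs are assumed 0-100 rubric scores per dimension.

BVS_WEIGHTS = {"task_completion": 0.30, "decision_impact": 0.25,
               "deliverable_quality": 0.20, "strategic_alignment": 0.15,
               "efficiency_gain": 0.10}

def bvs(scores: dict[str, float]) -> float:
    """Weighted sum of 0-100 per-dimension rubric scores."""
    return sum(scores[d] * w for d, w in BVS_WEIGHTS.items())

def bvs_rating(score: float) -> str:
    """Map a BVS score onto the published rating bands."""
    if score >= 85:
        return "High"
    if score >= 70:
        return "Good"
    if score >= 50:
        return "Moderate"
    return "Low"

example = bvs({"task_completion": 90, "decision_impact": 80,
               "deliverable_quality": 85, "strategic_alignment": 70,
               "efficiency_gain": 60})
# example is 80.5: strong completion carries the most weight,
# landing the output in the "Good" band
```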
2.4 The Composite TVCR Score

The TVCR composite normalizes value against consumption, producing a score that answers: for every token I consumed, how much value did I get?

TVCR Range | Rating | Interpretation
8.0–10.0 | Exceptional | AI adoption producing outsized value per token
6.0–7.9 | Good | Healthy value conversion; AI investment is justified
4.0–5.9 | Moderate | Value is being generated but not optimized
Below 4.0 | Low | Token consumption exceeds value production; requires intervention

Comparative use cases:

  • Compare TVCR across teams to identify which departments extract the most AI value.
  • Benchmark TVCR month-over-month to track adoption maturity.
  • Score individual sessions to distinguish training opportunities (low PQS) from misaligned use cases (low BVS).
  • Aggregate TVCR across an organization for board-level AI ROI reporting.
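The team-level comparison is ordinary aggregation once per-session scores exist. The session data, team names, and scores below are hypothetical.

```python
# Sketch of team-level TVCR benchmarking over hypothetical session scores.
from collections import defaultdict
from statistics import mean

sessions = [
    {"team": "Legal", "tvcr": 7.2}, {"team": "Legal", "tvcr": 6.8},
    {"team": "Sales", "tvcr": 4.1}, {"team": "Sales", "tvcr": 3.5},
]

by_team = defaultdict(list)
for s in sessions:
    by_team[s["team"]].append(s["tvcr"])

# Mean TVCR per team: one team converts tokens to value well (7.0, "Good");
# the other sits below 4.0 ("Low") and is a candidate for intervention.
benchmarks = {team: round(mean(scores), 2) for team, scores in by_team.items()}
```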

03 — Applications

TVCR is designed for integration into the platforms and workflows where AI measurement matters most. The revised interaction model expands TVCR's addressable market significantly: it covers not only direct AI sessions, but any workflow where the AI evaluation layer consumes tokens to score human-generated content.

3.1 Interaction Type Model

There is no zero-token TVCR evaluation. The moment TVCR analyzes any interaction — even a purely human email chain — tokens are consumed in the act of scoring it. The AI is always present at the evaluation layer. The distinction between interaction types is not “AI vs. no AI” — it is where in the workflow tokens are consumed.

Type | Tokens at Interaction | Tokens at Evaluation | Scoring Model
Type 1 — Pure AI Session | Yes (real time during exchange) | Yes (during scoring) | Full TVCR on both layers
Type 2 — AI-Augmented Human Work | No (human exchange only) | Yes (AI processes content) | TVCR on evaluation layer
Type 3 — Human Interaction Review | No (human exchange only) | Yes (AI scores the content) | HIS score + downstream token cost

Type 3 and the Human Interaction Score (HIS): When TVCR evaluates purely human correspondence — email threads, legal letters, sales communications, executive memos — the AI evaluation layer still consumes tokens to score the content. The output is a Human Interaction Score (HIS): a structured assessment of communication quality, decision clarity, resolution effectiveness, and professional impact. The HIS does not measure AI output; it measures the value of human communication as assessed by AI. Evaluation-layer token consumption is itself a novel and measurable ROI surface.

This model directly enables new use cases that TVCR previously could not address: HR annual performance reviews scored against communication quality benchmarks, legal correspondence audited for clarity and outcome effectiveness, sales team communications scored for pipeline impact, and executive communications assessed for strategic alignment. Each of these workflows consumes tokens at the evaluation layer — making them quantifiable TVCR measurement events.
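The accounting rule above (the evaluation layer always consumes tokens, and both layers count toward consumption) can be sketched as a small data model. Field and function names here are illustrative, not part of the licensed framework.

```python
# Sketch of the three-type interaction model's token accounting.
from dataclasses import dataclass

@dataclass
class Interaction:
    interaction_tokens: int   # tokens consumed during the exchange itself
    evaluation_tokens: int    # tokens consumed by the TVCR scoring pass

def interaction_type(i: Interaction) -> str:
    # Core rule from this section: the evaluation layer always consumes
    # tokens, so a zero-token TVCR evaluation cannot exist.
    if i.evaluation_tokens <= 0:
        raise ValueError("no zero-token TVCR evaluation exists")
    if i.interaction_tokens > 0:
        return "Type 1: pure AI session"
    # Types 2 and 3 both have human-only exchanges; they differ in the
    # scoring model applied (TVCR vs. HIS), not in token accounting.
    return "Type 2/3: human exchange, AI at the evaluation layer"

def total_tokens(i: Interaction) -> int:
    """Both layers count toward consumption."""
    return i.interaction_tokens + i.evaluation_tokens
```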

3.2 Platform Applications

Enterprise AI ROI Reporting
TVCR provides a single metric for board and investor communication. Instead of "we spent $X on AI" or "we saved Y hours," organizations can report a precise, comparable TVCR score — and track it quarter over quarter.

AI Platform Differentiation
SaaS platforms, AI cost management tools, and observability platforms can integrate TVCR to differentiate from competitors. A platform that shows customers not just what they spent but what they got reduces churn and increases enterprise contract value.

AI Training and Adoption Programs
PQS identifies prompt quality gaps at the individual, team, and department level — creating a measurable foundation for AI training programs. Organizations can track PQS improvement over time and correlate it with TVCR gains.

Meeting Intelligence (Patent Expansion)
TVCR extends to real-time meeting environments: microphone-captured audio is transcribed by AI, with decisions, action items, and value-generating moments identified and auto-scored. The Meeting TVCR score quantifies AI-assisted efficiency in live conversations. Meetings are a Type 2 interaction — human conversation, AI evaluation layer.

HR Performance Management
Performance management platforms (Workday, BambooHR, Lattice) can integrate TVCR's Human Interaction Score to evaluate employee communications at scale. Annual reviews, peer feedback threads, and management correspondence are scored for clarity, decision quality, and professional impact — consuming tokens only at the evaluation layer.

Legal Communication Audit
Document and matter management platforms (iManage, NetDocuments) can deploy TVCR to audit legal correspondence for communication quality and outcome effectiveness. External counsel letters, negotiation threads, and regulatory filings are scored as Type 3 interactions — no AI in the original exchange, but AI at the scoring layer.

Sales Communication Scoring
Sales teams generate significant written communication volume. TVCR scores email threads, proposal correspondence, and negotiation chains for business value signals — decisions advanced, commitments secured, objections resolved. The HIS enables sales managers to benchmark rep communication quality against pipeline outcomes.

Executive & Compliance Communications
Board communications, compliance filings, and regulatory correspondence require measurable quality standards. TVCR's Type 3 framework enables organizations to score executive communication quality against objective rubrics — with full auditability and zero retention of the underlying content.
04 — Decision Valuation Model

Every business decision carries a cost that most organizations never calculate: the aggregate time of everyone present, multiplied by their hourly rate, plus any AI token cost consumed in the decision-making process. TVCR's Decision Valuation Model makes this cost explicit — and measurable.

4.1 The Decision Cost Formula
Decision Cost = Σ (Attendee Hourly Rate × Time Spent) + AI Token Cost

  • Attendee Hourly Rate: fully loaded cost per hour for each meeting participant
  • Time Spent: actual duration of the decision process, including preparation and follow-up
  • AI Token Cost: cost of AI tokens consumed at both the interaction layer and the TVCR evaluation layer

When applied against the TVCR composite score for the decision event, this produces a complete picture of decision ROI: the total cost of the decision process, normalized against the business value the decision delivered. Organizations can now compare decision quality per dollar spent — not just per token consumed.
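The decision cost formula can be computed directly. All rates, hours, and token prices below are hypothetical inputs for illustration, not benchmarks from the framework.

```python
# Sketch of the Decision Cost formula. Inputs are hypothetical.

def decision_cost(attendees: list[tuple[float, float]],
                  ai_tokens: int, price_per_1k_tokens: float) -> float:
    """Sum of (hourly rate x hours) per attendee, plus AI token cost."""
    labor = sum(rate * hours for rate, hours in attendees)
    return labor + (ai_tokens / 1000) * price_per_1k_tokens

# Five participants (fully loaded hourly rate, hours including prep and
# follow-up) and 40,000 tokens at a hypothetical $0.01 per 1k tokens:
cost = decision_cost(
    attendees=[(150, 1.5), (120, 1.0), (95, 1.0), (95, 1.0), (200, 1.5)],
    ai_tokens=40_000,
    price_per_1k_tokens=0.01,
)
# Labor dominates: $835.00 in people time vs. $0.40 in tokens.
```

The worked numbers illustrate the section's point: the unmeasured cost of a decision is overwhelmingly the attendees' time, which is exactly the component most organizations never calculate.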

4.2 Three Cost Layers
Layer | What It Captures | Measurement Method
Process Cost | Fully loaded labor cost of everyone involved in reaching the decision | Attendee Rate × Time Spent
Opportunity Cost | Value of alternative work foregone by participants during the decision process | Derived from department TVCR benchmarks
Decision Impact | Strategic and operational value unlocked or destroyed by the decision outcome | TVCR BVS applied to decision output
4.3 Decision Category Table

Different decision categories carry different cost profiles and value multipliers. The Decision Valuation Model applies category-specific context to produce accurate cost-to-value analysis:

Decision Category | Typical Participants | Key Cost Driver | TVCR Scoring Lens
Hiring | HR, hiring manager, panel | Interview time + sourcing AI usage | Candidate quality per token consumed
Vendor | Procurement, finance, business unit | RFP analysis + evaluation sessions | Selection quality vs. process cost
Process Change | Operations, IT, impacted teams | Design workshops + AI workflow modeling | Implementation value per planning token
Risk / Compliance | Legal, risk, compliance officers | Review cycles + AI document analysis | Risk mitigation value per token
Strategic Direction | C-suite, board, senior leadership | Executive time + strategic AI modeling | Highest multiplier; longest value horizon
05 — Improvement Coaching Engine

TVCR does not just measure — it teaches. Every TVCR evaluation generates an Improvement Report: a structured coaching output that translates scores into actionable development guidance for the individual, team, or organization. This transforms TVCR from a scoring tool into a continuous training platform.

5.1 Five Components of the Improvement Report
# | Component | Output
1 | Score Breakdown | Dimension-by-dimension PQS and BVS scores with individual ratings and explanations
2 | Strength Identification | Specific dimensions where the user or team performed above benchmark, reinforcing effective patterns
3 | Gap Analysis | Dimensions scoring below potential, with root-cause framing (e.g., insufficient context, unclear intent, weak resolution)
4 | Targeted Coaching Prompts | Specific, actionable recommendations for improving the identified gaps in the next interaction or decision cycle
5 | Trend Tracking | Longitudinal view of TVCR, PQS, and BVS scores over time, enabling measurement of coaching effectiveness and adoption maturity progression
5.2 The NOS Method Philosophy

“AI improves critical thinking — it doesn’t replace it.”

The Improvement Coaching Engine is built on the principle that AI scoring should make humans better decision-makers, better communicators, and better operators — not automate those functions away. Every report is designed to close the loop between measurement and behavior change.

From scoring tool to training platform: This is the strategic shift TVCR makes with the Improvement Coaching Engine. A score alone is a judgment. A score with structured coaching guidance is a development system. Licensees integrating TVCR gain not only a measurement layer but a continuous improvement platform — differentiating their products as AI adoption training infrastructure, not just AI ROI reporting.

06 — Prior Art & Uniqueness

In preparing the TVCR patent application, a prior art search identified adjacent tools across several categories. None combines the elements that define TVCR's novel contribution.

6.1 Adjacent Tools Identified
  • Token cost trackers (CloudZero, Larridin, Vantage) — measure token consumption volume and cost; do not score business value
  • ROI frameworks (McKinsey AI ROI, Forrester TEI) — qualitative and survey-based; not scored per interaction or normalized against token consumption
  • Quality scoring patents in customer service and NLP domains — score output quality for domain-specific tasks; not composited with prompt quality or token cost
  • Meeting analytics platforms (Gong, Otter.ai, Fireflies) — transcription and keyword extraction; no business value scoring or token normalization
  • HR communication tools (Culture Amp, Lattice) — measure engagement and sentiment; not scored against a rubric for decision quality or communication effectiveness
6.2 Five Unique TVCR Elements

No existing tool combines all five of the following elements — each is present in TVCR and absent from any single prior art instrument:

# | Unique Element | Why It Is Novel
1 | Business value normalized against token consumption | No existing tool computes a per-token business value ratio across interaction types
2 | Prompt quality as a weighted scoring dimension | Input quality is typically ignored in output scoring; TVCR makes it a first-class measurement layer
3 | Three-type interaction model with evaluation-layer token accounting | No prior art measures AI token consumption at the evaluation layer as a distinct ROI surface
4 | Human Interaction Score (HIS) for purely human correspondence | First framework to apply AI scoring to human-only communications and account the evaluation-layer token cost
5 | Integrated coaching output tied to scored dimensions | Prior scoring tools produce ratings without structured, dimension-level improvement guidance

Conclusion on novelty: TVCR is novel. The combination of per-token business value normalization, input quality weighting, three-type interaction accounting, human correspondence scoring, and integrated coaching output has no precedent in the prior art. Each element individually has partial analogues; the composite framework does not.

07 — Licensing & Integration

TVCR is designed for platform integration, not standalone deployment. The framework is available for non-exclusive licensing to organizations that need a standardized value measurement layer in their existing AI infrastructure. The three-type interaction model expands the eligible licensee base beyond AI-native platforms to any system that processes human communication for performance measurement:

  • AI cost management and token analytics platforms
  • AI observability and monitoring tools
  • Meeting intelligence and collaboration platforms
  • Enterprise AI consulting firms
  • AI governance and compliance tools
  • HR performance management platforms (Workday, BambooHR, Lattice)
  • Legal document and matter management systems (iManage, NetDocuments)
  • Sales enablement and communication analytics platforms
  • Executive communication and compliance audit tools

Licensing includes the complete scoring methodology, dimension weights, normalization algorithms, and integration guidance. The inventor retains full IP ownership. All licensees receive non-exclusive usage rights — enabling TVCR to become an industry-standard measurement layer across multiple platforms simultaneously.

For licensing inquiries: atomicarmstrong@proton.me | tvcrpro.com

08 — Conclusion

The enterprise AI market has a measurement problem. Organizations can track what they spend but not what they receive. Current tools measure cost, volume, and technical performance — none measure business value per token consumed.

TVCR is the instrument that closes this gap. It is standardized, composable, and designed for the way AI is actually used: interaction by interaction, token by token. As the industry matures from "how much AI are we using?" to "is our AI producing value?", TVCR provides the answer.

The three-type interaction model extends TVCR's reach beyond AI sessions to any workflow where the evaluation layer consumes tokens to assess human-generated content. This means TVCR is not exclusively an AI ROI tool — it is a full-suite communication and value scoring system applicable to HR, Legal, Sales, Compliance, and Executive communications. Every evaluation consumes tokens. Every token consumption is a TVCR measurement event.

AI adoption is inevitable and necessary. The question is whether your adoption — and the human work your AI evaluates — is producing value. TVCR is the instrument that answers both questions.

About the Author
Atomic Armstrong is the inventor of the Token Value Conversion Rate (TVCR) framework and the founder of TVCRpro. With a background in operational process improvement and workforce planning, Armstrong identified the AI value measurement gap through direct observation of enterprise AI adoption patterns. TVCR is patent-pending.

© 2026 Neil William Armstrong. All rights reserved. Patent pending. This document is published for informational purposes and to establish prior art. The TVCR framework, including PQS and BVS methodologies, is proprietary intellectual property. Licensing inquiries: atomicarmstrong@proton.me
