The TVCR Framework
Patent Pending
A standardized instrument for measuring business value per AI token consumed.
As enterprise AI spending accelerates past $644 billion globally, organizations face a critical measurement gap: they can track how many AI tokens they consume, but not whether those tokens produced business value. Existing tools monitor cost and volume. None answer the fundamental question investors, CFOs, and boards are asking: what did we get for what we spent?
This paper introduces the Token Value Conversion Rate (TVCR) — a patent-pending scoring framework that quantifies the business value generated per AI token consumed. TVCR comprises three novel components: the Prompt Quality Score (PQS), which evaluates the quality of human input into AI systems; the Business Value Score (BVS), which measures the tangible and strategic value of AI outputs; and the TVCR composite score, which normalizes both against token consumption to produce a single, comparable metric.
TVCR is designed for universal adoption across industries, AI platforms, and use cases — providing the standardized instrument that enterprise AI adoption has been missing.
The Measurement Gap in Enterprise AI
Enterprise AI investment has moved well beyond experimentation. Gartner projects global enterprise AI spending at $644 billion in 2025, with a decisive shift confirmed by Futurum's 2026 survey of 830 IT decision-makers: direct financial impact — combining top-line revenue growth and bottom-line profitability — nearly doubled as the primary ROI metric, while productivity-based justifications declined sharply.
The message from the C-suite is clear: AI must connect to the P&L.
Yet according to industry analyses, 60% of AI projects fail to deliver expected value — not because the AI doesn't work, but because organizations cannot measure whether it does. CloudZero's research found that 85% of enterprises miss their AI infrastructure forecasts by more than 10%, and 80% miss by more than 25%.
The current measurement landscape includes the following categories of tools — each valuable for its stated purpose, but none answering the core business value question:
- Token consumption trackers (CloudZero, Larridin, Vantage): Monitor how many tokens are consumed, by whom, and at what cost. Valuable for budgeting, but silent on value.
- AI observability platforms (Datadog, Arize, Weights & Biases): Track model performance — latency, accuracy, drift. Useful for engineering teams, not for business value assessment.
- Traditional ROI frameworks: Apply standard financial formulas but struggle with AI's compound, probabilistic, and often intangible returns.
- Enterprise AI maturity models: Assess organizational readiness but don't score individual interactions or aggregate operational value.
None of these instruments answer the core question: per token consumed, how much business value was generated?
Token consumption is becoming the core unit of AI economics. The industry is shifting from charging for access (subscriptions) to charging for activity (tokens). Revenue rises and falls with usage, making token consumption a direct measure of demand — but not of value.
This creates a dangerous asymmetry: enterprises can quantify what they spend on AI with precision, but cannot quantify what they receive in return with any standardization. TVCR closes this gap.
The TVCR Framework
TVCR combines the Prompt Quality Score and the Business Value Score, normalized against token consumption. The result is a single score that enables comparison across interactions, sessions, teams, departments, and organizations. TVCR is the first metric designed to answer not just "how much AI did we use?" but "how much value did our AI use generate?"
Prompt Quality Score (PQS)
PQS evaluates the quality of human input into AI systems. The premise is straightforward: AI output quality is bounded by input quality. An enterprise investing millions in AI infrastructure but providing poor prompts is leaving value on the table.
| Dimension | What It Measures | Weight |
|---|---|---|
| Clarity | Specificity, unambiguity, and completeness of the prompt | 25% |
| Context | Relevant background, constraints, and domain framing provided | 25% |
| Objective Definition | Explicitness of the desired outcome, format, and success criteria | 25% |
| Efficiency | Conciseness relative to complexity — avoiding over-prompting or under-prompting | 25% |
| Range | Rating | Interpretation |
|---|---|---|
| 90–100 | Expert | Prompts consistently maximize AI capability |
| 75–89 | Proficient | Strong input quality with minor optimization potential |
| 60–74 | Developing | Adequate but leaving measurable value unrealized |
| Below 60 | Foundational | Significant input quality improvement needed |
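The weights and rating bands above translate directly into a scoring routine. A minimal sketch in Python (the dimension names, equal 25% weights, and band cutoffs come from the tables above; the function names and the 0–100 sub-score inputs are illustrative assumptions, not the proprietary methodology):

```python
# Illustrative only: equal 25% weights per the PQS dimension table.
PQS_WEIGHTS = {
    "clarity": 0.25,
    "context": 0.25,
    "objective_definition": 0.25,
    "efficiency": 0.25,
}

def pqs(sub_scores):
    """Weighted sum of 0-100 sub-scores, one per PQS dimension."""
    return sum(PQS_WEIGHTS[dim] * sub_scores[dim] for dim in PQS_WEIGHTS)

def pqs_rating(score):
    """Map a PQS value to its rating band."""
    if score >= 90:
        return "Expert"
    if score >= 75:
        return "Proficient"
    if score >= 60:
        return "Developing"
    return "Foundational"
```

With equal weights the PQS reduces to a simple average of the four sub-scores; the band mapping mirrors the rating table (90–100 Expert, 75–89 Proficient, 60–74 Developing, below 60 Foundational).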
Why PQS matters commercially: Organizations with low PQS scores are consuming tokens without maximizing value — this identifies a training and process improvement opportunity that consulting firms and AI platforms can monetize directly.
Business Value Score (BVS)
BVS measures what the AI interaction actually produced — not whether the model performed technically, but whether the output generated business value.
| Dimension | What It Measures | Weight |
|---|---|---|
| Task Completion | Did the AI output fulfill the stated objective? | 30% |
| Decision Impact | Did the output inform, enable, or accelerate a business decision? | 25% |
| Deliverable Quality | Professional readiness of the output (accuracy, formatting, depth) | 20% |
| Strategic Alignment | Does the output advance organizational goals beyond the immediate task? | 15% |
| Efficiency Gain | Time, cost, or resource savings relative to non-AI alternatives | 10% |
| Range | Rating | Interpretation |
|---|---|---|
| 85–100 | High | Significant, measurable business value generated |
| 70–84 | Good | Meaningful value with clear operational impact |
| 50–69 | Moderate | Some value, but below potential given tokens consumed |
| Below 50 | Low | Minimal value — tokens consumed without proportionate return |
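Because the BVS weights are unequal, the composite is a true weighted sum rather than an average. A sketch under the same illustrative assumptions as the PQS example (dimension names, weights, and band cutoffs from the tables above; function names assumed):

```python
# Illustrative only: weights per the BVS dimension table (sum to 1.0).
BVS_WEIGHTS = {
    "task_completion": 0.30,
    "decision_impact": 0.25,
    "deliverable_quality": 0.20,
    "strategic_alignment": 0.15,
    "efficiency_gain": 0.10,
}

def bvs(sub_scores):
    """Weighted sum of 0-100 sub-scores, one per BVS dimension."""
    return sum(BVS_WEIGHTS[dim] * sub_scores[dim] for dim in BVS_WEIGHTS)

def bvs_rating(score):
    """Map a BVS value to its rating band."""
    if score >= 85:
        return "High"
    if score >= 70:
        return "Good"
    if score >= 50:
        return "Moderate"
    return "Low"
```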
The TVCR Composite Score
The TVCR composite normalizes value against consumption, producing a score that answers: for every token I consumed, how much value did I get?
| TVCR Range | Rating | Interpretation |
|---|---|---|
| 8.0–10.0 | Exceptional | AI adoption producing outsized value per token |
| 6.0–7.9 | Good | Healthy value conversion — AI investment is justified |
| 4.0–5.9 | Moderate | Value is being generated but not optimized |
| Below 4.0 | Low | Token consumption exceeds value production — requires intervention |
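This paper does not publish the normalization algorithm itself (it is part of the licensed methodology), but the shape of the computation can be sketched. Everything below is a hypothetical stand-in: the equal PQS/BVS blend, the `baseline_tokens` per-task budget, and the rounding are assumptions made for illustration only, not the proprietary formula:

```python
def tvcr(pqs_score, bvs_score, tokens_consumed, baseline_tokens=1000):
    """Hypothetical composite: blended 0-100 quality, scaled to the 0-10
    TVCR range and discounted when consumption exceeds an assumed
    per-task token baseline."""
    quality = (pqs_score + bvs_score) / 2            # blend input and output quality
    efficiency = min(baseline_tokens / max(tokens_consumed, 1), 1.0)
    return round(quality / 10.0 * efficiency, 1)     # 0-10 TVCR scale
```

Under this stand-in, a session with PQS 88 and BVS 80 at the token baseline scores 8.4 (Exceptional band), while the same quality at double the token consumption scores 4.2 (Moderate): identical outputs, half the value conversion.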
Comparative use cases:
- Compare TVCR across teams to identify which departments extract the most AI value.
- Benchmark TVCR month-over-month to track adoption maturity.
- Score individual sessions to distinguish training opportunities (low PQS) from misaligned use cases (low BVS).
- Aggregate TVCR across an organization for board-level AI ROI reporting.
Applications
TVCR is designed for integration into the platforms and workflows where AI measurement matters most. The three-type interaction model expands TVCR's addressable market significantly: it covers not only direct AI sessions, but any workflow where the AI evaluation layer consumes tokens to score human-generated content.
There is no zero-token TVCR evaluation. The moment TVCR analyzes any interaction — even a purely human email chain — tokens are consumed in the act of scoring it. The AI is always present at the evaluation layer. The distinction between interaction types is not “AI vs. no AI” — it is where in the workflow tokens are consumed.
| Type | Tokens at Interaction | Tokens at Evaluation | Scoring Model |
|---|---|---|---|
| Type 1 — Pure AI Session | Yes — real time during exchange | Yes — during scoring | Full TVCR on both layers |
| Type 2 — AI-Augmented Human Work | No — human exchange only | Yes — AI processes content | TVCR on evaluation layer |
| Type 3 — Human Interaction Review | No — human exchange only | Yes — AI scores the content | HIS score + downstream token cost |
Type 3 and the Human Interaction Score (HIS): When TVCR evaluates purely human correspondence — email threads, legal letters, sales communications, executive memos — the AI evaluation layer still consumes tokens to score the content. The output is a Human Interaction Score (HIS): a structured assessment of communication quality, decision clarity, resolution effectiveness, and professional impact. The HIS does not measure AI output; it measures the value of human communication as assessed by AI. Evaluation-layer token consumption is itself a novel and measurable ROI surface.
This model enables use cases that extend well beyond direct AI sessions: HR annual performance reviews scored against communication quality benchmarks, legal correspondence audited for clarity and outcome effectiveness, sales team communications scored for pipeline impact, and executive communications assessed for strategic alignment. Each of these workflows consumes tokens at the evaluation layer, making them quantifiable TVCR measurement events.
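The three-type model is, in effect, a small classification schema. A sketch encoding the table as data (the type names, token flags, and scoring models come from the table above; the dataclass layout is an illustrative choice):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InteractionType:
    name: str
    tokens_at_interaction: bool  # consumed during the exchange itself
    scoring_model: str
    # Tokens are always consumed at the evaluation layer:
    # there is no zero-token TVCR evaluation.
    tokens_at_evaluation: bool = True

INTERACTION_TYPES = {
    1: InteractionType("Pure AI Session", True, "Full TVCR on both layers"),
    2: InteractionType("AI-Augmented Human Work", False, "TVCR on evaluation layer"),
    3: InteractionType("Human Interaction Review", False, "HIS score + downstream token cost"),
}
```

Note that `tokens_at_evaluation` is a constant `True` by design: the types differ only in where, not whether, tokens are consumed.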
Decision Valuation Model
Every business decision carries a cost that most organizations never calculate: the aggregate time of everyone present, multiplied by their hourly rate, plus any AI token cost consumed in the decision-making process. TVCR's Decision Valuation Model makes this cost explicit — and measurable.
Setting this total cost against the TVCR composite score for the decision event produces a complete picture of decision ROI: the total cost of the decision process, normalized against the business value the decision delivered. Organizations can now compare decision quality per dollar spent, not just per token consumed.
| Layer | What It Captures | Measurement Method |
|---|---|---|
| Process Cost | Fully loaded labor cost of everyone involved in reaching the decision | Attendee Rate × Time Spent |
| Opportunity Cost | Value of alternative work foregone by participants during the decision process | Derived from department TVCR benchmarks |
| Decision Impact | Strategic and operational value unlocked or destroyed by the decision outcome | TVCR BVS applied to decision output |
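The Process Cost layer above is simple arithmetic: attendee rate times time spent, summed over participants, plus any AI token spend during the decision process. A sketch (the function name, the `(rate, hours)` input shape, and the `ai_token_cost` parameter are illustrative assumptions):

```python
def decision_process_cost(attendees, ai_token_cost=0.0):
    """Process Cost layer: fully loaded labor cost (hourly rate x hours,
    summed over everyone involved in reaching the decision) plus AI token
    spend consumed during the decision process."""
    labor = sum(hourly_rate * hours for hourly_rate, hours in attendees)
    return labor + ai_token_cost
```

For example, a 90-minute vendor review with four attendees at fully loaded rates of $250, $150, $150, and $120 per hour, plus $18 of token spend, carries a process cost of $1,023 before any opportunity cost is counted.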
Different decision categories carry different cost profiles and value multipliers. The Decision Valuation Model applies category-specific context to produce accurate cost-to-value analysis:
| Decision Category | Typical Participants | Key Cost Driver | TVCR Scoring Lens |
|---|---|---|---|
| Hiring | HR, hiring manager, panel | Interview time + sourcing AI usage | Candidate quality per token consumed |
| Vendor | Procurement, finance, business unit | RFP analysis + evaluation sessions | Selection quality vs. process cost |
| Process Change | Operations, IT, impacted teams | Design workshops + AI workflow modeling | Implementation value per planning token |
| Risk / Compliance | Legal, risk, compliance officers | Review cycles + AI document analysis | Risk mitigation value per token |
| Strategic Direction | C-suite, board, senior leadership | Executive time + strategic AI modeling | Highest multiplier — longest value horizon |
Improvement Coaching Engine
TVCR does not just measure — it teaches. Every TVCR evaluation generates an Improvement Report: a structured coaching output that translates scores into actionable development guidance for the individual, team, or organization. This transforms TVCR from a scoring tool into a continuous training platform.
| # | Component | Output |
|---|---|---|
| 1 | Score Breakdown | Dimension-by-dimension PQS and BVS scores with individual ratings and explanations |
| 2 | Strength Identification | Specific dimensions where the user or team performed above benchmark — reinforcing effective patterns |
| 3 | Gap Analysis | Dimensions scoring below potential, with root-cause framing (e.g., insufficient context, unclear intent, weak resolution) |
| 4 | Targeted Coaching Prompts | Specific, actionable recommendations for improving the identified gaps in the next interaction or decision cycle |
| 5 | Trend Tracking | Longitudinal view of TVCR, PQS, and BVS scores over time — enabling measurement of coaching effectiveness and adoption maturity progression |
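The five report components map naturally onto a structured output record. A sketch (the field names paraphrase the table above; the container itself and the value shapes inside it are illustrative assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class ImprovementReport:
    score_breakdown: dict   # dimension -> (score, rating, explanation)
    strengths: list         # dimensions performing above benchmark
    gaps: list              # (dimension, root_cause) pairs scoring below potential
    coaching_prompts: list  # actionable recommendations for the next cycle
    trend: list = field(default_factory=list)  # (period, tvcr, pqs, bvs) over time
```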
“AI improves critical thinking — it doesn’t replace it.”
The Improvement Coaching Engine is built on the principle that AI scoring should make humans better decision-makers, better communicators, and better operators — not automate those functions away. Every report is designed to close the loop between measurement and behavior change.
From scoring tool to training platform: This is the strategic shift TVCR makes with the Improvement Coaching Engine. A score alone is a judgment. A score with structured coaching guidance is a development system. Licensees integrating TVCR gain not only a measurement layer but a continuous improvement platform — differentiating their products as AI adoption training infrastructure, not just AI ROI reporting.
Prior Art & Uniqueness
In preparing the TVCR patent application, a prior art search identified adjacent tools across several categories. None combines the elements that define TVCR's novel contribution.
- Token cost trackers (CloudZero, Larridin, Vantage) — measure token consumption volume and cost; do not score business value
- ROI frameworks (McKinsey AI ROI, Forrester TEI) — qualitative and survey-based; not scored per interaction or normalized against token consumption
- Quality scoring patents in customer service and NLP domains — score output quality for domain-specific tasks; not composited with prompt quality or token cost
- Meeting analytics platforms (Gong, Otter.ai, Fireflies) — transcription and keyword extraction; no business value scoring or token normalization
- HR communication tools (Culture Amp, Lattice) — measure engagement and sentiment; not scored against a rubric for decision quality or communication effectiveness
No existing tool combines all five of the following elements — each is present in TVCR and absent from any single prior art instrument:
| # | Unique Element | Why It Is Novel |
|---|---|---|
| 1 | Business value normalized against token consumption | No existing tool computes a per-token business value ratio across interaction types |
| 2 | Prompt quality as a weighted scoring dimension | Input quality is typically ignored in output scoring; TVCR makes it a first-class measurement layer |
| 3 | Three-type interaction model with evaluation-layer token accounting | No prior art measures AI token consumption at the evaluation layer as a distinct ROI surface |
| 4 | Human Interaction Score (HIS) for purely human correspondence | First framework to apply AI scoring to human-only communications and account the evaluation-layer token cost |
| 5 | Integrated coaching output tied to scored dimensions | Prior scoring tools produce ratings without structured, dimension-level improvement guidance |
Conclusion on novelty: TVCR is novel. The combination of per-token business value normalization, input quality weighting, three-type interaction accounting, human correspondence scoring, and integrated coaching output has no precedent in the prior art. Each element individually has partial analogues; the composite framework does not.
Licensing and Integration
TVCR is designed for platform integration, not standalone deployment. The framework is available for non-exclusive licensing to organizations that need a standardized value measurement layer in their existing AI infrastructure. The three-type interaction model expands the eligible licensee base beyond AI-native platforms to any system that processes human communication for performance measurement:
- AI cost management and token analytics platforms
- AI observability and monitoring tools
- Meeting intelligence and collaboration platforms
- Enterprise AI consulting firms
- AI governance and compliance tools
- HR performance management platforms (Workday, BambooHR, Lattice)
- Legal document and matter management systems (iManage, NetDocuments)
- Sales enablement and communication analytics platforms
- Executive communication and compliance audit tools
Licensing includes the complete scoring methodology, dimension weights, normalization algorithms, and integration guidance. The inventor retains full IP ownership. All licensees receive non-exclusive usage rights — enabling TVCR to become an industry-standard measurement layer across multiple platforms simultaneously.
For licensing inquiries: atomicarmstrong@proton.me | tvcrpro.com
Conclusion
The enterprise AI market has a measurement problem. Organizations can track what they spend but not what they receive. Current tools measure cost, volume, and technical performance — none measure business value per token consumed.
TVCR is the instrument that closes this gap. It is standardized, composable, and designed for the way AI is actually used: interaction by interaction, token by token. As the industry matures from "how much AI are we using?" to "is our AI producing value?", TVCR provides the answer.
The three-type interaction model extends TVCR's reach beyond AI sessions to any workflow where the evaluation layer consumes tokens to assess human-generated content. This means TVCR is not exclusively an AI ROI tool — it is a full-suite communication and value scoring system applicable to HR, Legal, Sales, Compliance, and Executive communications. Every evaluation consumes tokens. Every token consumption is a TVCR measurement event.
AI adoption is inevitable and necessary. The question is whether your adoption — and the human work your AI evaluates — is producing value. TVCR is the instrument that answers both questions.
Atomic Armstrong is the inventor of the Token Value Conversion Rate (TVCR) framework and the founder of TVCRpro. With a background in operational process improvement and workforce planning, Armstrong identified the AI value measurement gap through direct observation of enterprise AI adoption patterns. TVCR is patent-pending.
© 2026 Neil William Armstrong. All rights reserved. Patent pending. This document is published for informational purposes and to establish prior art. The TVCR framework, including PQS and BVS methodologies, is proprietary intellectual property. Licensing inquiries: atomicarmstrong@proton.me