Methodology: Keep, Fix, or Kill

Every AI tool in the firm earns its place. Or it leaves.

Keep, Fix, or Kill is the working framework HIP applies inside every AI Operating Audit. Each tool is scored against four criteria and assigned one verdict. The output is not a long list; it is a short, sequenced plan leadership can execute. This page is the framework, the criteria, and how the verdict gets reached.

The four criteria

Each tool is scored against four criteria.

The criteria are deliberately few: four, each binary or near-binary. A verdict becomes a written decision in minutes per tool, not a workshop per tool. Speed matters because firms typically have 15 to 50 tools to triage.

Scoring is documented per tool so the decision survives the next leadership rotation. A future operator can reread the Audit and understand why the verdict landed where it did.
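Documenting the score per tool can be as simple as a structured record. A minimal sketch follows; the field names and the example tool are assumptions for illustration, not HIP's actual schema.

```python
from dataclasses import dataclass

# Illustrative record of one tool's scoring. Field names are assumptions,
# not HIP's published schema; each criterion is binary by design.
@dataclass
class ToolScore:
    tool: str
    data_class_fit: bool     # contract permits the data class in use
    throughput_impact: bool  # measurable lift on a workflow leadership cares about
    vendor_posture: bool     # DPA, sub-processors, residency, audit trail in writing
    owner_clarity: bool      # a named, accountable owner inside the firm
    reason: str              # written rationale that survives leadership rotation

record = ToolScore(
    tool="ExampleDraftAssist",  # hypothetical tool name
    data_class_fit=True,
    throughput_impact=True,
    vendor_posture=False,
    owner_clarity=True,
    reason="Consumer tier; no DPA on file. Enterprise upgrade available.",
)
```

The written `reason` field is what lets a future operator reread the Audit and understand the verdict.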

Criterion
01

Data-class fit

Does the tool’s data-handling contract permit the data class the firm is putting through it? Public marketing copy is one answer; client PII, fund data, or privileged work is another. Mismatch is fatal regardless of how useful the tool is.

Criterion
02

Throughput impact

Does the tool measurably move a workflow leadership cares about? Drafting cycles, response times, deal-team capacity, RM throughput. Without a measurable lift, the tool is not earning its place even when the data-class fit is clean.

Criterion
03

Vendor posture

Does the vendor offer the DPA, sub-processor disclosure, sovereign-residency option, and audit-trail features a regulated firm needs? Or is it consumer-grade with no upgrade path? The contract has to be in writing, not aspirational.

Criterion
04

Owner clarity

Is there a named owner of the tool inside the firm who is accountable for its use, its renewal, and its retirement? Tools without owners drift. The Audit either names an owner or the tool moves to the kill pile.

The three verdicts

Three verdicts: Keep, Fix, or Kill, each with a written reason. Fix splits into two paths.

01

Keep

Scores cleanly on all four criteria. Stays in production under the governance line, in the inventory, with a named owner and a quarterly review trigger.

02

Fix: vendor upgrade

Score fails on vendor posture but the tool is otherwise valuable. Path: upgrade to enterprise tier, sign a tighter DPA, switch to sovereign-residency option, or amend the contract. Verdict ships with a price tag and a turnaround commitment.

03

Fix: replacement

Tool is doing useful work but the vendor cannot reach the bar. Replace with a sanctioned alternative that does the same job under the firm’s governance line. Verdict ships with a named replacement and a migration plan.

04

Kill

No combination of fixes resolves the data-class fit, throughput impact, or vendor posture. The tool comes out. Sanctioned alternatives are named where they exist. Where they do not, the workflow returns to manual until something fit-for-purpose lands.
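The verdict descriptions above can be read as a decision procedure. A hedged sketch, assuming a precedence inferred from those descriptions rather than a published HIP rule:

```python
# Sketch of how the four criteria might map to a verdict. The exact
# precedence is an assumption drawn from the verdict descriptions,
# not a published HIP rule.
def verdict(data_class_fit, throughput_impact, vendor_posture, owner_clarity,
            vendor_fixable=False):
    if all([data_class_fit, throughput_impact, vendor_posture, owner_clarity]):
        return "Keep"
    if data_class_fit and throughput_impact and not vendor_posture:
        # Vendor is the only gap: upgrade if a path exists, else replace.
        return "Fix: vendor upgrade" if vendor_fixable else "Fix: replacement"
    if data_class_fit and throughput_impact and vendor_posture:
        # Only the owner is missing; tools without owners drift.
        return "Kill (unless an owner is named)"
    return "Kill"

print(verdict(True, True, False, True, vendor_fixable=True))
# prints "Fix: vendor upgrade"
```

Note that a failed data-class fit or missing throughput impact falls straight through to Kill; no vendor fix resolves those.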

Where the framework applies

Where Keep, Fix, or Kill applies cleanly, and where it does not.

Applies cleanly

  • Regulated, fiduciary, or privileged-work firms with 50 to 500 employees and an active AI footprint to triage.
  • Firms with 15 or more tools, accounts, or embedded features running across the operation.
  • Leadership that wants written, defensible verdicts per tool rather than blanket policy.
  • Firms whose next regulator review, LP DDQ, or major client RFP is within 12 months.

Does not apply

  • Firms with three or fewer tools; a workshop suffices.
  • Firms whose plan is to ban all AI outright; the framework triages, it does not block.
  • Pre-revenue startups still discovering which tools work. Keep, Fix, or Kill is a governance framework, not an experimentation framework.
Common questions

What leadership asks about the framework.

Why three verdicts and not five?

Three is the maximum number of categories leadership can hold in working memory while making 30 decisions in a row. Five categories degrade into hedged, one-of-five non-decisions. Three forces a clean decision per tool. Where a verdict feels ambiguous, the framework treats that ambiguity as a Fix outcome with a named subcategory.

Can a tool be partial-keep, partial-kill?

Yes, indirectly. A tool can be Keep for one data class and Kill for another. The Audit produces verdicts per tool by data class, not per tool alone. The output is still binary at the cell level so the operating posture remains executable.
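The per-tool, per-data-class output described above can be pictured as a verdict matrix. A minimal sketch, with hypothetical tool and data-class names:

```python
# Sketch of per-tool, per-data-class verdicts. Tool and class names are
# hypothetical; each cell stays binary so the posture remains executable.
verdicts = {
    ("ExampleDraftAssist", "public marketing copy"): "Keep",
    ("ExampleDraftAssist", "client PII"): "Kill",
    ("ExampleDraftAssist", "privileged work"): "Kill",
}

def allowed(tool, data_class):
    # Anything other than an explicit Keep at this cell is not sanctioned.
    return verdicts.get((tool, data_class)) == "Keep"

print(allowed("ExampleDraftAssist", "client PII"))  # prints False
```

The same tool is thus Keep for one data class and Kill for another, while every cell remains a clean yes or no.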

How long does Keep, Fix, or Kill take to apply across a firm?

The triage itself happens during the Audit week. The full Audit, including inventory, scoring, and roadmap, takes two to six weeks depending on firm size. The roadmap to execute the verdicts is sequenced over 60 to 90 days post-Audit, run by the firm’s own team or by HIP under Integration Oversight.

Is the framework only used inside HIP engagements?

HIP applies it inside every AI Operating Audit. Other operators are welcome to use the framework directly; the criteria and verdicts are public and named precisely for that reason. Where a firm wants the discipline applied by an outside operator with cross-firm pattern recognition, that is what the Audit provides.


Start

Keep, Fix, or Kill is applied inside the Audit. Apply to work with HIP.

The framework is public. The discipline of applying it across 15 to 50 tools in two to six weeks, with written verdicts that survive a regulator question, is what the Audit delivers. Apply to work with HIP to begin.