SFT Part V
Computing Systems Built from SFT
Part V turns SFT into families of computing problems and tool systems: field reconstruction, operation support inference, ConsequenceEnvelope generation, AI proposal governance, lifecycle decision, and calibration benchmarks.
SFT computational problems
SFT defines problem families instead of remaining a metaphor. Each problem should name inputs, outputs, soundness boundary, and claim level.
- Field reconstruction: Build a bounded `SoftwareFieldEstimate` from artifacts, codebase, review, CI, and incident traces.
- Operation support inference: Estimate which operation families are natural, possible, risky, or low cost in the current field.
- ConsequenceEnvelope generation: Translate PRDs, specs, issues, or AI proposals into reachable path classes and boundary reports.
- AI proposal governance: Keep generated proposal support inside a bounded field model and record shortcut witnesses.
- Lifecycle decision: Diagnose repair, migration, contraction, or end-of-life choices from signatures, costs, risk, and capacity.
Problem contract
input: selected field estimate, artifact, boundary, support, horizon
output: bounded estimate, report, intervention, or posterior update
soundness boundary: what the estimate records and what it does not claim
claim level: conceptual, trace-grounded, formal schema, calibrated, deployed
This contract is what makes the computing-system layer falsifiable. A result that cannot name its boundary is only an explanatory heuristic, not a mature SFT computation.
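The contract can be made machine-checkable. The sketch below is a minimal illustration, not a defined SFT API; the dataclass names, field choices, and the five claim levels are taken from the contract fields listed above, while `ProblemContract` and `is_mature` are hypothetical names introduced here.

```python
from dataclasses import dataclass
from enum import Enum

class ClaimLevel(Enum):
    CONCEPTUAL = 1
    TRACE_GROUNDED = 2
    FORMAL_SCHEMA = 3
    CALIBRATED = 4
    DEPLOYED = 5

@dataclass
class ProblemContract:
    """One SFT computational problem, forced to name all four contract parts."""
    inputs: list[str]          # e.g. field estimate, artifact, boundary, horizon
    outputs: list[str]         # e.g. bounded estimate, report, posterior update
    soundness_boundary: str    # what the estimate records and does NOT claim
    claim_level: ClaimLevel

    def is_mature(self) -> bool:
        # A result that cannot name its boundary is only an explanatory heuristic.
        return bool(self.soundness_boundary.strip())

contract = ProblemContract(
    inputs=["SoftwareFieldEstimate", "ArtifactDescriptor", "horizon"],
    outputs=["ConsequenceEnvelope"],
    soundness_boundary="reachable path classes only; no single-future prediction",
    claim_level=ClaimLevel.FORMAL_SCHEMA,
)
```

A report missing its soundness boundary would fail `is_mature`, which is exactly the falsifiability check the contract is meant to enforce.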
PRD-to-ConsequenceEnvelope simulator
The simulator is a way of making SFT computational. It is not a PRD-to-PR predictor; it generates bounded consequence envelopes rather than a single future.
PRD / Spec / Issue / AI Proposal
-> ArtifactDescriptor
-> OperationSupport
-> ForecastCone / ConsequenceEnvelope
-> Signature Axis and Witness Candidate report
-> Review / CI / Issue Decomposition recommendation
-> Calibration by observed outcome
Practical value starts when the tool reports affected axes, witness candidates, missing boundaries, and review or issue decomposition recommendations. Empirical prediction claims depend on later weighting and calibration.
- Level 3 (minimum): A bounded `ForecastCone` or reachable path class is generated from explicit support and horizon.
- Level 4 (practical report): `ConsequenceEnvelope` adds signature axes, witness candidates, missing boundaries, and recommendations.
- Level 6 (scientific loop): Observed issue, PR, review, CI, or incident outcomes are compared against the report and used for field update.
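The pipeline stages above can be sketched end to end. This is a toy, Level 3 to 4 illustration under loud assumptions: the stage record names follow the diagram, but the extraction heuristics (a word ending in `.py` counts as a touched module, anything containing `auth` counts as risky) are placeholders for whatever real support extractor a deployment would use.

```python
from dataclasses import dataclass

@dataclass
class ArtifactDescriptor:
    source: str                 # PRD / spec / issue / AI proposal text
    touched_modules: list[str]

@dataclass
class OperationSupport:
    natural: set[str]
    risky: set[str]

@dataclass
class ConsequenceEnvelope:
    path_classes: list[str]          # reachable path classes, not one future
    affected_axes: list[str]
    witness_candidates: list[str]
    missing_boundaries: list[str]
    recommendations: list[str]

def describe(prd: str) -> ArtifactDescriptor:
    # Toy extractor: any whitespace-separated token ending in ".py".
    mods = [w for w in prd.split() if w.endswith(".py")]
    return ArtifactDescriptor(source=prd, touched_modules=mods)

def infer_support(desc: ArtifactDescriptor) -> OperationSupport:
    risky = {m for m in desc.touched_modules if "auth" in m}
    return OperationSupport(natural=set(desc.touched_modules) - risky, risky=risky)

def envelope(desc: ArtifactDescriptor, sup: OperationSupport) -> ConsequenceEnvelope:
    return ConsequenceEnvelope(
        path_classes=[f"edit:{m}" for m in desc.touched_modules],
        affected_axes=sorted(sup.risky),
        witness_candidates=[f"test::{m}" for m in sorted(sup.risky)],
        missing_boundaries=[] if sup.risky else ["no risk axis named"],
        recommendations=[f"request focused review on {m}" for m in sorted(sup.risky)],
    )

desc = describe("Refactor auth.py and ui.py login flow")
env = envelope(desc, infer_support(desc))
```

Note that the output is an envelope of path classes and review recommendations, never a single predicted PR; the calibration stage would later score these envelopes against observed outcomes.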
AI-agent governance
SFT reads an AI coding agent as a field participant: artifact interpreter, proposal generator, local pattern amplifier, shortcut generator, review load shifter, and hidden assumption reproducer.
SFT does not prove general AI safety. It bounds proposal support with prompt, policy, theorem boundary, review, CI, shortcut witness reports, and posterior field update.
bounded proposal support
+ shortcut witness report
+ review / CI intervention
+ posterior field update
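The governance loop above can be reduced to a small decision sketch. This is a hypothetical illustration, assuming proposal support and the bounded field model are both representable as operation sets; `govern` and `GovernanceRecord` are names introduced here, not part of a defined SFT API.

```python
from dataclasses import dataclass

@dataclass
class GovernanceRecord:
    accepted: bool
    shortcut_witnesses: list[str]   # ops the proposal uses outside the bound
    posterior_note: str             # what the field update should record

def govern(proposal_ops: set[str], bounded_support: set[str]) -> GovernanceRecord:
    """Keep generated proposal support inside the bounded field model."""
    shortcuts = sorted(proposal_ops - bounded_support)
    accepted = not shortcuts
    note = ("bounded support confirmed" if accepted
            else f"review/CI intervention: {len(shortcuts)} shortcut witness(es)")
    return GovernanceRecord(accepted, shortcuts, note)

record = govern({"edit:core", "skip:ci"}, bounded_support={"edit:core"})
```

Here `skip:ci` falls outside the bounded support and is recorded as a shortcut witness rather than silently accepted, which is the whole point of the loop: the agent's proposals are not trusted or banned wholesale, they are bounded and observed.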
Review, CI, and type systems
Review, CI, type systems, architecture rules, ownership boundaries, and runtime guards are governance interventions. They shape future operation support and selection policy, not just pass or fail a change.
- Restrictive: Remove unsafe support from the accepted operation set.
- Redirective: Raise shortcut cost and lower lawful path cost.
- Instrumenting: Add observation axes, tests, runtime checks, or evidence records.
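The three intervention families can be given toy semantics over a support set, a cost map, and an observation-axis set. The classification is from the list above; the concrete effects (string prefixes, the doubling and halving factors) are illustrative placeholders, not calibrated values.

```python
from enum import Enum

class Intervention(Enum):
    RESTRICTIVE = "remove unsafe support"
    REDIRECTIVE = "reprice shortcut vs lawful paths"
    INSTRUMENTING = "add observation axes"

def apply(kind: Intervention, support: set[str],
          costs: dict[str, float], axes: set[str]):
    """Return the (support, costs, axes) triple after one intervention."""
    if kind is Intervention.RESTRICTIVE:
        support = {op for op in support if not op.startswith("unsafe:")}
    elif kind is Intervention.REDIRECTIVE:
        costs = {op: (c * 2.0 if op.startswith("shortcut:") else c * 0.5)
                 for op, c in costs.items()}
    elif kind is Intervention.INSTRUMENTING:
        axes = axes | {"runtime-check", "evidence-record"}
    return support, costs, axes
```

The key design point survives the toy scale: each family acts on future operation support and selection policy, not on a single change's pass/fail verdict.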
Lifecycle and benchmarks
The SFT lifecycle model covers birth, growth, stabilization, drift, repair, migration, contraction, and end-of-life. End-of-life can be a deliberate field reconfiguration decision rather than a failure.
Benchmarking is required before calibrated empirical status: PRD-to-issue, issue-to-PR signature, review mediation, incident feedback, AI shortcut, and lifecycle decision benchmarks give the workbench something falsifiable to test.
birth
+ growth
+ stabilization
+ drift
+ repair / migration
+ contraction
+ end-of-life
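The phases above admit a transition-table sketch. The table itself is a hypothetical reading of the diagram (the source does not fix which transitions are legal), so treat the specific edges as assumptions; the point is only that lifecycle decision becomes a checkable problem once phases and transitions are explicit.

```python
# Hypothetical lifecycle transition table; phase names follow the diagram above.
TRANSITIONS: dict[str, set[str]] = {
    "birth": {"growth"},
    "growth": {"stabilization", "drift"},
    "stabilization": {"drift", "contraction"},
    "drift": {"repair", "migration", "contraction"},
    "repair": {"stabilization", "drift"},
    "migration": {"stabilization", "end-of-life"},
    "contraction": {"end-of-life", "stabilization"},
    "end-of-life": set(),   # a reconfiguration decision, not a failure state
}

def can_move(src: str, dst: str) -> bool:
    """True if the lifecycle model admits a direct src -> dst transition."""
    return dst in TRANSITIONS.get(src, set())
```

Under this table a drifting system can choose repair, migration, or contraction, but a system cannot jump straight from birth to end-of-life: the decision has to pass through observable intermediate phases, which is what the lifecycle benchmarks would score.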
Systems boundary and non-conclusions
Part V describes computing systems that can be built from SFT, but their outputs remain bounded estimates. A simulator is not a PRD-to-PR predictor, an AI governance report is not general AI safety, and benchmark calibration is required before empirical forecast claims are promoted.
- Input discipline: Reports must state the selected artifact, field estimate, observation boundary, support extractor, and horizon.
- Output discipline: Outputs are envelopes, affected axes, witness candidates, missing boundaries, recommendations, and update records.
- Claim discipline: Heuristic path classes, formal reachability, calibrated weights, and deployed governance outcomes remain separate statuses.
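The three disciplines can be enforced mechanically at report boundaries. A minimal sketch, assuming reports are plain dicts; the field and status names are transcribed from the discipline lists above, while `check_report` is a name introduced here.

```python
REQUIRED_INPUTS = {"artifact", "field_estimate", "observation_boundary",
                   "support_extractor", "horizon"}
CLAIM_STATUSES = {"heuristic-path-class", "formal-reachability",
                  "calibrated-weights", "deployed-governance"}

def check_report(report: dict) -> list[str]:
    """Return discipline violations; an empty list means the report passes."""
    problems = [f"missing input: {k}"
                for k in sorted(REQUIRED_INPUTS - report.keys())]
    if report.get("claim_status") not in CLAIM_STATUSES:
        problems.append("claim status must be exactly one of the separate statuses")
    return problems
```

Rejecting a report that omits its observation boundary or blends claim statuses is the executable form of the boundary rule from the problem-contract section: an output that cannot name its inputs and status is a heuristic, not an SFT computation.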