# The Killer App

A case study in using SFRS for longitudinal AI-powered financial analysis.
The "killer app" for SFRS isn't just a better validator: it's longitudinal AI analysis.
Because SFRS is token-optimized and JSON-native, you can perform deep historical analysis that is technically impossible or economically prohibitive with legacy XBRL.
## The Problem: The "Token Tax"
Financial analysts increasingly use Large Language Models (LLMs) to scan filings. However, XBRL (XML) carries a massive "token tax":
- Verbose tags: `<us-gaap:NetIncomeLoss contextRef="ctx_2023" unitRef="USD" decimals="-6">` is 42 tokens.
- JSON equivalent: `"netIncome": 3400` is 4 tokens.
When analyzing a single company's 10-year history, this difference is the difference between fitting in a context window and failing.
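You can ballpark the gap yourself. The sketch below uses the rough 4-characters-per-token heuristic (an assumption; real tokenizers typically split verbose XML into even more pieces, so the XBRL estimate here is conservative):

```javascript
// Rough token estimate using the common ~4 chars/token heuristic.
// Real tokenizers split XML tags more aggressively, so this
// understates the true XBRL cost.
const estimateTokens = (text) => Math.ceil(text.length / 4);

const xbrlFact =
  '<us-gaap:NetIncomeLoss contextRef="ctx_2023" unitRef="USD" decimals="-6">3400</us-gaap:NetIncomeLoss>';
const sfrsFact = '"netIncome": 3400';

console.log(estimateTokens(xbrlFact)); // 26
console.log(estimateTokens(sfrsFact)); // 5
```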
## Comparison: 10 Years of 10-Ks
| Metric | XBRL (XML) | SFRS (JSON) | Benefit |
|---|---|---|---|
| Payload Size | ~80 MB | ~250 KB | 99% Reduction |
| Est. LLM Tokens | ~20,000,000 | ~60,000 | Fits in 1 prompt |
| Context Window | Exceeds GPT-4o | 95% Free Space | Room for analysis |
## Try the Demo
We've provided a script to synthesize a 10-year longitudinal SFRS record and estimate the token savings.
```shell
node scripts/killer-app-demo.mjs
```

## Ticker Explorer: Real-World Data
You can also explore real-world company data by ticker. The Ticker Explorer maps a public ticker to its SEC CIK, fetches recent filing metadata, and demonstrates how that data is condensed into SFRS.
```shell
node scripts/ticker-explorer.mjs NVDA
```

This script will:
- Map NVDA to CIK 0001045810.
- Fetch the metadata for the last five 10-K filings.
- Generate a sample longitudinal SFRS file at `dev/active/nvda-history.sfrs.json`.
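For reference, the ticker-to-CIK step relies on public SEC endpoints: the ticker map at `https://www.sec.gov/files/company_tickers.json` and the submissions API at `data.sec.gov`, which expects the CIK zero-padded to 10 digits. A minimal sketch of that padding step (helper names are illustrative, not the actual internals of `ticker-explorer.mjs`):

```javascript
// EDGAR's submissions API expects a 10-digit zero-padded CIK.
const padCik = (cik) => String(cik).padStart(10, "0");

// Build the submissions URL for a numeric CIK. Live requests should
// also send a descriptive User-Agent, per the SEC's fair-access policy.
const submissionsUrl = (cik) =>
  `https://data.sec.gov/submissions/CIK${padCik(cik)}.json`;

console.log(padCik(1045810));         // "0001045810"
console.log(submissionsUrl(1045810)); // ...CIK0001045810.json
```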
## Real-World AI Analysis via API
You can programmatically build longitudinal datasets for any public company using the SFRS API and the SEC's EDGAR archive.
### 1. Identify the Filing Archive
Map a ticker to its SEC CIK and locate the .xml XBRL instance for the desired periods.
```shell
# Example for NVDA 2024 10-K
export XBRL_URL="https://www.sec.gov/Archives/edgar/data/1045810/000104581024000029/nvda-20240128_htm.xml"
```

### 2. Convert to SFRS via API
POST the XBRL URL to the SFRS conversion endpoint to get a dense JSON payload.
```shell
curl -X POST https://api.sureshake.org/api/v1/convert/from-xbrl \
  -H "Content-Type: application/json" \
  -d '{
    "sourceUrl": "'$XBRL_URL'",
    "options": { "taxonomy": "us-gaap-2024" }
  }' > nvda-2024.sfrs.json
```

### 3. Aggregate and Analyze
Repeat for 5 years and combine the payloads. The resulting file will be ~98% smaller than the original XBRL, small enough to send the entire 5-year history to an LLM in a single request.
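A sketch of the "combine" step, assuming each converted payload carries a top-level `period` field (the exact SFRS payload shape may differ):

```javascript
// Merge already-parsed per-year SFRS payloads into one longitudinal
// record, ordered by fiscal period.
const buildHistory = (payloads) =>
  [...payloads].sort((a, b) => a.period.localeCompare(b.period));

// Stub payloads standing in for the files produced by the curl step:
const history = buildHistory([
  { period: "2024", netIncome: 29760 },
  { period: "2023", netIncome: 4368 },
]);
console.log(history.map((p) => p.period)); // [ '2023', '2024' ]
```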
```javascript
// Sample Node.js script using Google Generative AI
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-3.1-pro-preview" });

const history = [/* Combined SFRS payloads from 5 years of filings */];
const prompt = `You are a financial analyst specializing in SFRS data.
Analyze the following 5-year longitudinal financial history for trends in
profitability and asset efficiency: ${JSON.stringify(history)}`;

const result = await model.generateContent(prompt);
console.log(result.response.text());
```

## Sample Analyst Prompt
Once you have the longitudinal SFRS data, you can feed it to an LLM with a prompt like this:
> "I am providing 10 years of financial data for Acme Corporation in SFRS format. Please identify the trend in R&D intensity relative to revenue and predict the 2025 net income based on the 8% historical CAGR."
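The arithmetic behind such a prompt is easy to sanity-check locally. A sketch of the two calculations it asks for, with illustrative figures (not real Acme data):

```javascript
// R&D intensity: research & development expense as a share of revenue.
const rdIntensity = (rdExpense, revenue) => rdExpense / revenue;

// Project next-period net income from a historical CAGR:
// projected = latest * (1 + cagr)^years
const projectIncome = (latest, cagr, years = 1) =>
  latest * (1 + cagr) ** years;

console.log(rdIntensity(20, 100));     // 0.2
console.log(projectIncome(100, 0.08)); // ~108
```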
## Why This Matters
- Benchmarking: Compare a company against 50 peers in a single context window.
- Anomaly Detection: Find "drift" in accounting policies across a decade in seconds.
- Real-time Valuation: Feed dense financial data into an agentic workflow for instant valuation modeling.
SFRS makes financial data machine-useful, not just machine-readable.