From Spreadsheet to System: Automating Deal Flow in 48 Hours

Day 0: Monday, 9:47 AM
Sarah, VP of Investments at a mid-market PE firm, is staring at a spreadsheet with 127 rows.
Each row is a potential deal. Each one came in via email, LinkedIn, referral, or cold inbound. Each one needs to be:
- Logged with key details
- Scored against investment thesis
- Categorized by stage and sector
- Assigned to a partner for review
- Followed up with next steps
Her analyst team spends 15 hours per week just maintaining this spreadsheet. Meanwhile:
- 40% of inbound deals never get reviewed (too much volume)
- Partners complain they see opportunities 3-4 days after they come in
- High-potential deals get lost in the noise
- The spreadsheet formula broke last week and nobody knows how to fix it
Sarah's thought: "There has to be a better way. But we can't afford to spend 6 months building custom infrastructure."
Day 0: Monday, 4:32 PM
Sarah books a demo with StratumIQ.
Day 1: Tuesday, 10:00 AM - Discovery Call
The StratumIQ team asks Sarah six questions:
1. "Where do deals come in?" Sarah: "Everywhere. Email mostly—three different addresses. Our website form. LinkedIn DMs. Partner forwards. Referrals via personal email."
StratumIQ note: Need to consolidate multiple inbound sources into single stream.
2. "What information do you need from each deal?" Sarah: "Company name, founder/CEO, sector, stage, revenue, funding raised, what they're looking for, who referred them if anyone, and the pitch deck if attached."
StratumIQ note: Standard schema with custom fields for sector/stage/referral source.
3. "How do you decide what's high-priority?" Sarah: "We like SaaS and healthcare companies, Series A to C, $2-10M revenue, ideally with a warm intro. If they're outside that, they go to a 'review later' pile that... we never review."
StratumIQ note: Clear scoring criteria. Can build multi-factor model.
4. "Who reviews deals and how are they assigned?" Sarah: "We have three partners. Each focuses on different sectors. If it's unclear, it goes to me first to triage."
StratumIQ note: Sector-based routing with fallback to Sarah for edge cases.
5. "What happens after a deal is assigned?" Sarah: "Ideally, the partner reviews within 48 hours and either schedules a call or declines. In reality, deals sit in the queue because nobody's sure what's in there."
StratumIQ note: Need visibility layer + SLA tracking.
6. "What does success look like?" Sarah: "Every inbound deal gets logged, scored, and routed to the right partner within 2 hours—not 2 days. Partners can see their queue with context. We catch high-potential deals before they talk to 5 other firms."
StratumIQ note: Speed + coverage + transparency.
---
Day 1: Tuesday, 2:00 PM - Build Begins
The StratumIQ team doesn't write custom code. They configure.
Step 1: Connect Data Sources (30 minutes)
Email inboxes:
- deals@firm.com (main intake)
- portfolio@firm.com (portfolio updates, but sometimes new deals)
- sarah@firm.com (personal referrals)
StratumIQ sets up:
- Secure email connectors via API (Gmail/Outlook)
- Webhook from website form
- CSV upload option for manual entries from LinkedIn
Result: Every inbound deal, regardless of source, flows into one system.
Step 2: Define Schema (20 minutes)
StratumIQ creates a unified deal object:
Deal {
  company_name: string
  founder_name: string
  sector: enum [SaaS, Healthcare, Fintech, Other]
  stage: enum [Seed, Series A, Series B, Series C, Later]
  revenue_range: string
  funding_raised: string
  ask: string (what they want from you)
  referral_source: string
  pitch_deck_url: string (if attached)
  inbound_channel: enum [Email, Form, LinkedIn, Referral]
  received_at: timestamp
}
Extraction logic:
- Parse email body/form submission for key fields
- Use LLM to extract company name, founder, sector, stage from unstructured text
- Flag missing critical fields for human review
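The extraction step can be sketched in a few lines of Python. This is a deliberately naive keyword/regex version for illustration only; the actual system uses an LLM for the unstructured parsing, and the field names and patterns here are assumptions, not StratumIQ's implementation.

```python
import re

CRITICAL_FIELDS = ("company_name", "sector", "stage")

def extract_deal(email_body: str) -> dict:
    """Pull key fields from an inbound email; flag for review if criticals are missing."""
    deal = {"company_name": None, "sector": None, "stage": None, "revenue_range": None}

    # Company name: look for an explicit "Company:" label (illustrative heuristic)
    m = re.search(r"Company:\s*(.+)", email_body)
    if m:
        deal["company_name"] = m.group(1).strip()

    # Sector: first keyword match wins
    for sector in ("SaaS", "Healthcare", "Fintech"):
        if sector.lower() in email_body.lower():
            deal["sector"] = sector
            break

    # Stage: Seed or Series A-C
    m = re.search(r"Series [A-C]|Seed", email_body)
    if m:
        deal["stage"] = m.group(0)

    # Revenue: e.g. "$5M ARR"
    m = re.search(r"\$[\d.]+M\s*(?:ARR|revenue)", email_body, re.I)
    if m:
        deal["revenue_range"] = m.group(0)

    # Any missing critical field routes the deal to human review
    deal["needs_review"] = any(deal[f] is None for f in CRITICAL_FIELDS)
    return deal
```

The key design point survives the simplification: extraction never silently drops a deal; incomplete records are flagged, not discarded.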
Step 3: Build Scoring Model (45 minutes)
Sarah's investment thesis becomes a scoring algorithm:
Scoring factors:
1. Sector match (30 points max)
- SaaS or Healthcare: +30
- Fintech: +20
- Other: +5
2. Stage match (25 points max)
- Series A-C: +25
- Seed: +15
- Later stage: +10
3. Revenue range (20 points max)
- $2-10M: +20
- $1-2M or $10-20M: +15
- Outside range: +5
4. Warm intro (15 points max)
- Referral from partner/portfolio: +15
- Referral from known contact: +10
- Cold inbound: +0
5. Completeness (10 points max)
- Has pitch deck: +5
- All fields populated: +5
Total possible score: 100
Thresholds:
- 70+: High priority (immediate partner review)
- 50-69: Medium priority (review within 1 week)
- <50: Low priority (hold for batch review)
StratumIQ implementation:
- Template scoring logic pre-built
- Customized with Sarah's specific weights and thresholds
- Confidence scoring added (flags deals where data extraction was uncertain)
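The five factors and thresholds above translate directly into a small scoring function. A minimal Python sketch follows; the weights mirror Sarah's thesis as described, but the function signature and referral labels are illustrative assumptions, not StratumIQ's actual code.

```python
def score_deal(sector, stage, revenue_m, referral, has_deck, complete):
    score = 0
    # 1. Sector match (max 30)
    score += {"SaaS": 30, "Healthcare": 30, "Fintech": 20}.get(sector, 5)
    # 2. Stage match (max 25)
    if stage in ("Series A", "Series B", "Series C"):
        score += 25
    elif stage == "Seed":
        score += 15
    else:
        score += 10
    # 3. Revenue range in $M (max 20)
    if 2 <= revenue_m <= 10:
        score += 20
    elif 1 <= revenue_m < 2 or 10 < revenue_m <= 20:
        score += 15
    else:
        score += 5
    # 4. Warm intro (max 15)
    score += {"partner": 15, "portfolio": 15, "known_contact": 10}.get(referral, 0)
    # 5. Completeness (max 10)
    score += (5 if has_deck else 0) + (5 if complete else 0)
    return score

def priority(score):
    if score >= 70:
        return "high"
    if score >= 50:
        return "medium"
    return "low"
```

A perfect-fit deal (SaaS, Series B, $5M revenue, portfolio referral, deck attached) maxes out at 100; a cold, off-thesis deal lands well under the 50-point routing floor.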
Step 4: Define Routing Rules (30 minutes)
Routing logic:
IF sector = SaaS → Route to Partner A
ELSE IF sector = Healthcare → Route to Partner B
ELSE IF sector = Fintech → Route to Partner C
ELSE → Route to Sarah (for manual triage)
IF score < 50 → Skip partner routing, send to "Review Later" list
IF confidence < 75% → Flag for Sarah to review before routing
Actions triggered on routing:
1. Update CRM with deal details + score
2. Post in Slack: "#deals - New [sector] deal: [company], scored [X]/100, assigned to [Partner]"
3. Send email to assigned partner with deal summary and pitch deck link
4. Create task in partner's work management system
5. Start 48-hour SLA timer
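As a sketch, the routing rules above fit in one small function. The partner names, action labels, and the choice to check confidence before score are illustrative assumptions; the thresholds come straight from the rules.

```python
SECTOR_PARTNERS = {"SaaS": "Partner A", "Healthcare": "Partner B", "Fintech": "Partner C"}

def route(deal):
    """Return (destination, triggered_actions) for an extracted, scored deal."""
    # Low extraction confidence: a human checks before anything is routed
    if deal["confidence"] < 75:
        return "Sarah", ["flag_for_review"]
    # Below the priority floor: hold for batch review, no partner routing
    if deal["score"] < 50:
        return "Review Later", []
    # Sector-based routing, with Sarah as the fallback for unclear sectors
    partner = SECTOR_PARTNERS.get(deal["sector"], "Sarah")
    actions = ["update_crm", "post_slack", "email_partner", "create_task", "start_sla_timer"]
    return partner, actions
```

Keeping routing as pure data-in, decision-out logic is what makes the "configure, don't code" claim plausible: changing a partner assignment is a dictionary edit, not a deploy.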
Step 5: Add Human-in-the-Loop Safeguards (15 minutes)
For edge cases:
- Deals scored 45-55 (borderline): Sarah reviews before discard
- Deals with incomplete data: Flagged for manual enrichment
- Deals from VIP referral sources: Auto-escalate regardless of score
StratumIQ adds:
- "Review needed" queue for Sarah
- Quick approve/reject interface
- Option to override score or routing with one click
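The three safeguards reduce to a flagging pass that runs before routing. A minimal sketch, assuming hypothetical VIP source labels and field names (none of these identifiers are from StratumIQ):

```python
VIP_SOURCES = {"portfolio_ceo", "top_accelerator"}  # hypothetical labels

def safeguard_flags(deal):
    """Return the human-in-the-loop flags a deal picks up before routing."""
    flags = []
    if deal.get("referral_source") in VIP_SOURCES:
        flags.append("auto_escalate")      # VIP referral: escalate regardless of score
    if 45 <= deal.get("score", 0) <= 55:
        flags.append("sarah_review")       # borderline score: a human makes the call
    if deal.get("missing_fields"):
        flags.append("manual_enrichment")  # incomplete data: enrich before scoring counts
    return flags
```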
---
Day 2: Wednesday, 11:00 AM - Testing
StratumIQ loads the last 30 days of Sarah's spreadsheet deals into the system as a test.
Results:
- 89% of deals extracted correctly
- 11% flagged for review (missing data or unclear sector)
- Scores match Sarah's manual priorities 84% of the time
- 3 deals received scores that diverged sharply from Sarah's manual calls
Sarah reviews the 3 discrepancies:
- Deal #1: Scored high because revenue was $8M, but Sarah knows this sector is oversaturated → adjusts sector weight down
- Deal #2: Scored low because no referral, but it was from a top-tier accelerator → adds "accelerator_name" as bonus factor
- Deal #3: Scored medium but should be high—founder was a repeat entrepreneur → adds "founder_background" field
StratumIQ updates the model in 10 minutes.
Re-test: 94% match rate.
---
Day 2: Wednesday, 3:00 PM - Go Live
Sarah flips the switch.
What happens next:
- 3:17 PM - New email comes into deals@firm.com
- Subject: "Series B SaaS - $5M ARR - Warm intro from [Portfolio CEO]"
- Body: Pitch with company details
- Attachment: Pitch deck PDF
- 3:18 PM - StratumIQ processes it
- Extracts: Company name, founder, sector, stage (SaaS), revenue ($5M), referral source (portfolio CEO)
- Scores: 88/100 (high priority)
- Confidence: 94% (high confidence)
- 3:19 PM - StratumIQ routes it
- Posts in Slack: "🔥 New HIGH PRIORITY deal: [Company], SaaS, Series B, $5M ARR, scored 88/100, assigned to [Partner A]"
- Emails Partner A: "You have a new deal from [Portfolio CEO]. Key details: [summary]. Pitch deck: [link]. Please review by Friday 3pm."
- Updates CRM with full deal details
- Creates task in partner's work management system
- Starts the 48-hour SLA timer
3:20 PM - Partner A sees Slack notification, clicks link, reviews deal summary
3:47 PM - Partner A replies to founder directly: "Thanks for the intro from [CEO]. Let's set up a call this week."
Total time from inbound to partner response: 30 minutes.
Under the old system: This deal would have been added to the spreadsheet Friday, reviewed Monday, assigned Tuesday, and contacted Wednesday. More than a week later.
---
Day 3: Thursday - First Full Day Live
Morning stats:
- 8 deals came in overnight (email, form submissions)
- All 8 processed and routed by 7:00 AM
- 5 high-priority (partners had them in queue before office opened)
- 2 medium-priority (went to weekly review list)
- 1 flagged for Sarah (unclear sector)
Sarah's morning routine:
- Old way: 90 minutes updating spreadsheet, triaging, emailing partners
- New way: 8 minutes reviewing the 1 flagged deal, approving its routing
Partner feedback:
- "I actually know what's in my queue now."
- "The Slack notifications with scores help me prioritize."
- "Can we add a field for 'founder previously raised from top VCs'? That's a signal we care about."
StratumIQ adds the field in 5 minutes.
---
Day 5: Monday - One Week Later
Sarah pulls the numbers:
Deal volume:
- 47 deals processed in 1 week
- Old system: 18-22 deals made it into the spreadsheet (rest were missed)
- New system: 47 deals logged, scored, and routed (100% coverage)
Partner engagement:
- Old system: Partners reviewed 60% of assigned deals within 1 week
- New system: Partners reviewed 91% of assigned deals within 48 hours
Time savings:
- Analyst team: 15 hours/week → 2 hours/week (reviewing edge cases)
- Sarah: 5 hours/week → 30 minutes/week
- Total time saved: 17.5 hours/week
Deals caught that would have been missed:
- 3 high-priority deals came in over the weekend → routed and actioned Monday morning
- 1 deal from a rarely monitored email address would never have been noticed → scored 82/100, led to intro call
ROI calculation:
- Time saved: 17.5 hrs/week × $150/hr (blended rate) = $2,625/week = $136,500/year
- Deals saved: 1 additional deal per month × $500K average check size × 20% close rate = $1.2M/year in additional portfolio value
- Cost of system: $30K/year
Return: 45x in year one.
---
Why This Worked (And Why Most Automation Projects Fail)
Sarah's project succeeded because:
1. She started with a clear process. She knew exactly how deals should flow. The system just automated what was already defined.
2. She didn't try to boil the ocean. She didn't automate due diligence, portfolio management, or LP reporting. She automated one workflow: inbound deal triage.
3. She used infrastructure, not custom code. No engineers. No 6-month dev project. Just configuration of pre-built components.
4. She kept humans in the loop. The system flagged edge cases. Sarah still made judgment calls. Partners still talked to founders.
5. She measured the right things. Not "how accurate is the AI?" but "are we catching more deals and moving faster?"
Most automation projects fail because:
1. No clear process to automate. "We need AI to help with deals" isn't a process. It's a wish.
2. Trying to automate everything at once. They want the system to source deals, score them, do diligence, write memos, and recommend investments. That's a 2-year project.
3. Custom-building from scratch. They hire engineers to build connectors, parsers, scoring models, routing logic, and UIs. By month 8, requirements have changed and the code is already legacy.
4. Removing humans entirely. "The AI will handle it" → the AI makes a bad call → deal lost → team stops trusting the system.
5. Not measuring impact. They track "number of deals processed" instead of "deals we would have missed" or "time to first response."
How to Replicate This in Your Business
You don't need to be a PE firm. The pattern applies to any high-volume inbound triage:
- Sales teams: Inbound leads → score → route to right rep
- Customer success: Support tickets → categorize → assign by expertise + workload
- Partnerships: Inbound partnership inquiries → qualify → route to BD team
- Recruiting: Inbound applications → screen → route to hiring manager
The 48-hour template:
Day 1 Morning: Map your current process
- Where does stuff come in?
- What do you need to know about each item?
- How do you decide what's important?
- Who should handle what?
- What happens next?
Day 1 Afternoon: Define your schema and scoring
- Standardize the data structure
- Weight the factors that matter
- Set thresholds for priority levels
Day 2 Morning: Build routing logic
- If X, then Y rules
- Fallbacks for edge cases
- Actions triggered on routing
Day 2 Afternoon: Test with historical data
- Load last 30 days of items
- See what the system would have done
- Tune weights and rules
Day 3: Go live and measure
- Coverage rate (% of items processed)
- Speed (time from inbound to action)
- Accuracy (% routed correctly)
- Time saved (hours per week)
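The four go-live metrics above are straightforward to compute from a processed-items log. A sketch, assuming a hypothetical per-item record with timestamps in hours:

```python
def triage_metrics(items):
    """Coverage, average time-to-action (hours), and routing accuracy over a log of items.

    Each item is a dict with: processed (bool), received_at and actioned_at
    (hours since some epoch), routed_correctly (bool). Field names are illustrative.
    """
    processed = [i for i in items if i["processed"]]
    coverage = len(processed) / len(items)
    avg_hours = sum(i["actioned_at"] - i["received_at"] for i in processed) / len(processed)
    accuracy = sum(i["routed_correctly"] for i in processed) / len(processed)
    return coverage, avg_hours, accuracy
```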
The StratumIQ Difference
We didn't build a tool that "helps you triage deals."
We built infrastructure that triages deals for you.
Pre-built components:
- Connectors for email, forms, CSVs, APIs, webhooks
- Schemas for common objects (leads, tickets, events)
- Scoring templates for momentum, fit, urgency, and risk
- Routing logic with confidence bands and fallbacks
You configure, don't code:
- Define your scoring factors and weights
- Set your routing rules
- Choose your actions (CRM, Slack, email, task)
Live in hours, not months:
- Day 1: Connect sources and define schema
- Day 2: Build scoring and routing
- Day 3: Go live with real data
Full visibility:
- See every item processed, scored, and routed
- Audit logs on every decision
- Export everything as JSON
The Bottom Line
Sarah went from drowning in a spreadsheet to running a system that processes deals while she sleeps.
It took 48 hours.
Your team doesn't need more analysts. You need infrastructure that scales without headcount.
Stop triaging manually. Start routing intelligently.
Ready to build lead scoring that actually works?
See how StratumIQ helps revenue teams deploy self-correcting scoring in hours, not months.
See How It Works