I Cross-Referenced Lobbying Disclosures, Congressional Trades, and Earnings Calls to Find a Story Nobody Else Was Covering
Follow the Money, Follow the Words
The best investigative stories start with a pattern nobody else noticed. The problem is that the data lives in five different government databases, each with its own format and search interface.
I built a pipeline that cross-references three public data sources through one API:
- Lobbying disclosures — Who's spending money to influence policy?
- Congressional stock trades — Who's trading stocks in areas they regulate?
- Earnings call transcripts — What are companies saying to investors about regulation?
When all three align around a single topic — that's a story.
Step 1: Search Lobbying Activity
Start with an industry. Who's lobbying on AI regulation?
curl "https://api.gettrawl.com/api/lobbying/search?issue_area=science%2Ftechnology"
Step 2: Check Congressional Trading
Now check if any committee members are trading stocks in the same sector they're legislating.
curl "https://api.gettrawl.com/api/congress-trading/search?ticker=NVDA"
Step 3: Listen to What Companies Tell Investors
Earnings calls are where executives reveal their real priorities. Search for what they're telling Wall Street about the same regulation.
curl "https://api.gettrawl.com/api/earnings/search?ticker=NVDA"
The Investigation Pipeline
from trawl import TrawlClient

client = TrawlClient()

# 1. Who's lobbying on AI?
lobbying = client.lobbying.search(issue_area="science/technology")

companies = set()
for filing in lobbying.results:
    companies.add(filing.client_name)
    print(f"{filing.registrant_name} lobbying for {filing.client_name}")

# 2. Are legislators trading in AI stocks?
ai_tickers = ["NVDA", "MSFT", "GOOGL", "META", "AMD"]
suspicious_trades = []
for ticker in ai_tickers:
    trades = client.congress_trading.search(ticker=ticker)
    for trade in trades.results:
        suspicious_trades.append({
            "politician": trade.politician,
            "ticker": ticker,
            "amount": trade.amount,
            "date": trade.transaction_date,
        })

# 3. What are these companies saying about regulation?
for ticker in ai_tickers:
    calls = client.earnings.search(ticker=ticker)
    if calls.results:
        latest = calls.results[0]
        transcript = client.earnings.get_transcript(
            ticker, latest.year, latest.quarter
        )
        # Search transcript for regulation keywords
        for segment in transcript.segments:
            if any(kw in segment.text.lower() for kw in [
                "regulation", "legislation", "policy", "compliance", "government"
            ]):
                print(f"\n{ticker} Q{latest.quarter} — {segment.speaker}:")
                print(f"  {segment.text[:300]}")
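Once the three searches have run, the cross-referencing step is just a join on ticker and client name. Here is a minimal sketch with hard-coded sample data standing in for the API responses; the company names, politicians, and the ticker-to-client mapping are all hypothetical placeholders you would replace with your own results:

```python
# Signal 1: companies that appeared as clients in lobbying filings
lobbying_clients = {"NVIDIA Corporation", "Microsoft Corporation"}

# Signal 2: congressional trades collected from the trading search
trades = [
    {"politician": "Rep. Example", "ticker": "NVDA"},
    {"politician": "Sen. Sample", "ticker": "XOM"},
]

# Signal 3: tickers whose earnings transcripts mentioned regulation
regulation_mentions = {"NVDA", "MSFT"}

# A ticker -> lobbying-client-name mapping you maintain by hand,
# since lobbying filings list legal entity names, not tickers
ticker_to_client = {
    "NVDA": "NVIDIA Corporation",
    "MSFT": "Microsoft Corporation",
}

# A lead is any trade whose ticker shows up in BOTH other signals
leads = [
    t for t in trades
    if t["ticker"] in regulation_mentions
    and ticker_to_client.get(t["ticker"]) in lobbying_clients
]
for lead in leads:
    print(f"LEAD: {lead['politician']} traded {lead['ticker']}")
```

With the sample data above, only the NVDA trade survives the join; the XOM trade drops out because it matches neither the lobbying nor the earnings signal.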
Why This Matters for Journalism
The public data is all there — on EDGAR, on Senate.gov, in the Lobbying Disclosure Act (LDA) database. But no journalist has time to manually cross-reference three databases every day.
This pipeline runs in 30 seconds and surfaces patterns that would take a human researcher days to find. The journalist still does the hard work — verification, context, narrative. But the lead generation is automated.
The Pattern to Watch For
- Company lobbies on specific regulation (lobbying disclosures)
- Legislator on relevant committee trades the stock (congressional trading)
- Company tells investors regulation is favorable (earnings call)
When all three happen within the same quarter — that's your lead.
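The "same quarter" test is easy to automate once each signal carries a date. A small sketch, using only the standard library; the three example dates are hypothetical stand-ins for a filing date, a trade date, and an earnings-call date:

```python
from datetime import date

def quarter(d: date) -> tuple[int, int]:
    """Map a date to its (year, quarter) bucket."""
    return (d.year, (d.month - 1) // 3 + 1)

def same_quarter(*dates: date) -> bool:
    """True when every event falls in the same calendar quarter."""
    return len({quarter(d) for d in dates}) == 1

# Hypothetical example dates for the three signals:
filing_date = date(2024, 2, 12)  # lobbying disclosure filed
trade_date = date(2024, 3, 4)    # committee member's stock trade
call_date = date(2024, 1, 25)    # earnings call mentioning regulation

print(same_quarter(filing_date, trade_date, call_date))  # True: all Q1 2024
```

Bucketing by calendar quarter is deliberately coarse; you could tighten the window to 30 or 60 days if quarterly alignment produces too many coincidental matches.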
Non-Code Options
You don't need to write Python to run this investigation. Trawl's MCP server works inside Claude Desktop — describe what you're looking for and it handles the API calls:
"Cross-reference NVDA lobbying activity with congressional stock trades and recent earnings calls. Are there any patterns?"
Claude pulls from all three sources, aligns the timelines, and surfaces the overlaps. You review the leads, not the code.
For longer investigations, the Obsidian plugin lets you save results directly into your research vault — lobbying filings, trade records, and earnings excerpts as linked notes you can cross-reference later.