Every SaaS founder knows they should be watching what their competitors charge. Almost none actually do it — at least not systematically. I know because I tried.
For 30 days in Q1 2026, I manually tracked 100 SaaS pricing pages. Every morning, I checked for changes. I kept a spreadsheet. I screenshotted anything interesting. By the end of it I had 3,000+ data points, a lot of cold coffee, and some genuinely surprising findings.
Here's everything I found.
The setup
I picked 100 SaaS products across 10 categories: productivity, marketing, developer tools, analytics, support/CX, HR/recruiting, finance, design, sales, and project management. I weighted toward products in the $20–$200/mo range — the segment where pricing decisions matter most to founders making buy-vs-build decisions.
I checked each page daily using a combination of manual review and page-change alerts. My criterion for "a change" was anything that affected pricing information: plan prices, plan names, feature inclusions, limits, trial lengths, or the removal/addition of entire tiers.
Methodology note: I excluded changes to testimonials, blog links, or general marketing copy on pricing pages. I only counted changes to the pricing information itself. Some products use A/B testing, which can look like a change — I flagged those, and excluded them where I could confirm a test.
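The daily check can be sketched as a snapshot diff. This is a minimal illustration, not the actual tooling I used: normalize the pricing-relevant text of a page, hash it, and compare against yesterday's hash, so cosmetic edits don't count as changes.

```python
import hashlib
import re

def normalize_pricing_text(html: str) -> str:
    """Strip tags and collapse whitespace so cosmetic edits don't trigger alerts."""
    text = re.sub(r"<[^>]+>", " ", html)           # crude tag removal
    return re.sub(r"\s+", " ", text).strip().lower()

def snapshot_hash(html: str) -> str:
    return hashlib.sha256(normalize_pricing_text(html).encode()).hexdigest()

def changed(yesterday_html: str, today_html: str) -> bool:
    return snapshot_hash(yesterday_html) != snapshot_hash(today_html)

# A price edit is caught; a whitespace-only edit is not.
old = "<div class='plan'>Pro  $49/mo</div>"
new_price = "<div class='plan'>Pro $58/mo</div>"
same = "<div class='plan'>Pro\n$49/mo</div>"
```

A real monitor would fetch the page on a schedule and persist the hash; the diff logic itself stays this small.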
The headline numbers
One in three SaaS companies changed something on their pricing page in a single month. That was higher than I expected. I'd estimated maybe 15–20%. When I described the study to other founders, they guessed 10–15%. We were all wrong.
"One in three SaaS companies changed something on their pricing page in a single month."
What kinds of changes happened?
Price increases (21 events)
The most common change by far. Of the 21 price increases I recorded, here's how they broke down:
- 10 were straight increases to existing plan prices (median increase: +22%)
- 7 were "new tier" additions that effectively pushed existing users into lower-featured plans
- 4 were limit reductions on lower tiers (same price, fewer seats/API calls/features)
The "new tier" pattern is particularly sneaky, and I've started calling it the tier squeeze: you add a new tier above the current top tier, repackage the old Pro as a new "Business" tier with slightly fewer features, and suddenly your $49/mo plan has fewer features than it did last month — even though the price didn't change. Technically not a price increase. Effectively a price increase.
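The tier-squeeze criterion can be made concrete with a small classifier. The plan shapes and feature names below are illustrative, not any real product's:

```python
def classify_change(old_plan: dict, new_plan: dict) -> str:
    """Classify a plan change: sticker-price moves vs. quiet value cuts.

    Plans are {'price': monthly USD, 'features': set of feature names}.
    """
    price_up = new_plan["price"] > old_plan["price"]
    features_cut = not new_plan["features"] >= old_plan["features"]
    if price_up:
        return "price increase"
    if features_cut:
        return "tier squeeze"  # same sticker price, fewer features
    return "no effective increase"

old_pro = {"price": 49, "features": {"api", "sso", "custom reports"}}
new_pro = {"price": 49, "features": {"api", "sso"}}  # custom reports moved up a tier
```

The same shape extends to limits (seats, API calls) by treating each limit as another comparable field.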
Price decreases (4 events)
Far rarer, but they happened. Three were from newer entrants clearly in acquisition mode — dropping prices aggressively to compete with an established player. One was from a mid-size tool that had apparently mispriced its annual plan (they corrected a calculation error that had made annual subscriptions more expensive than monthly ones, which is... not intentional).
Price decreases were all announced. The companies that cut prices wanted you to know about it. This is the opposite of price increases, which were almost all done quietly, with no announcement.
Feature and limit changes (11 events)
These are the changes that are hardest to catch and have the biggest impact on perceived value. Examples I documented:
- An email marketing tool moved "remove branding" from Starter to Pro (effectively a ~$30/mo forced upgrade for anyone who cared about branding)
- A project management tool cut the free tier from 5 projects to 3, down from the 10 it launched with two years ago
- An analytics platform reduced data retention from 12 months to 6 months on the $49 plan — with a note in their changelog that most users wouldn't notice
- Three products quietly removed features from their public pricing table that were still technically available — a form of quiet deprecation testing
The feature changes are where the most interesting competitive intelligence lives. A company moving a key feature up a tier often signals either (a) that feature became a conversion lever, or (b) they're trying to force upgrades ahead of a fundraise.
Free plan removals (3 events)
This is the "freemium is a liability" trend in action. Three products removed their free plans entirely. One was a design tool. One was a scheduling product. One was a CRM.
All three replaced their free plans with a 14-day trial. All three made the announcement sound like an upgrade: "we're investing more in product quality and can no longer offer the free plan — but here's a trial to get started." None of them were wrong — free plans are expensive to support and notoriously hard to convert.
The free plan removal playbook: It's becoming standardized. Remove free tier → replace with time-limited trial → emphasize what trial users get → don't mention what they're losing. If you have a free tier and your competitors are dropping theirs, now you have a differentiator — or a cost center, depending on how you look at it.
The data: selected changes by category
| Category | Type of change | Direction | Magnitude | Announced? |
|---|---|---|---|---|
| Email marketing | Plan price increase | ↑ +31% | Starter: $19 → $25 | No |
| Developer tools | Free tier limit cut | ↑ Effective increase | API calls: 10k → 5k/mo | No |
| Project mgmt | Price decrease | ↓ −17% | Annual plan correction | Yes |
| Analytics | Feature downgrade | Feature moved up tier | Custom reports: Starter → Pro | No |
| CRM | Free plan removed | Free → Trial only | 14-day trial replacement | Yes |
| Support/CX | New tier added | ↑ Tier squeeze | Enterprise tier, Pro features moved | No |
| Marketing | Price increase | ↑ +18% | Pro: $49 → $58 | No |
| Design tools | Free plan removed | Free → Trial only | 7-day trial replacement | Yes |
| HR/Recruiting | Seat limit reduced | ↑ Effective increase | Starter: 10 users → 5 users | No |
| Sales | Competitor response | ↓ −25% | New Solo tier at $15/mo | Yes |
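Aggregating a change log like this is straightforward once each event is a record. The sample events below are illustrative, not the full dataset:

```python
from collections import Counter

# Each tracked change becomes one record: what happened and whether it was announced.
events = [
    {"type": "price increase", "announced": False},
    {"type": "price increase", "announced": False},
    {"type": "price decrease", "announced": True},
    {"type": "free plan removed", "announced": True},
    {"type": "limit cut", "announced": False},
]

def announced_share(events):
    """Fraction of changes the company publicized."""
    return sum(1 for e in events if e["announced"]) / len(events)

by_type = Counter(e["type"] for e in events)
```

With the full log, the same two lines reproduce the announced-vs-quiet split discussed below.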
I'm not naming specific companies here β that's not the point and frankly some of these are clearly still being tested. The patterns matter more than the specifics.
The patterns that surprised me
1. Unannounced changes are the norm, not the exception
Of the 34 changes, only 11 were announced — and all 11 were price decreases or free plan replacements (i.e., things the company wanted to make noise about). Every single price increase happened quietly. No email to existing customers. No blog post. No changelog entry. Just: one day the price was different.
This means if you're not actively monitoring, you're not finding out about competitor price changes until a prospect mentions it in a sales call — if ever.
2. Changes cluster midweek
I tracked the day of week for each change. 23 of 34 changes (68%) happened between Tuesday and Thursday. Almost nothing changed on Fridays or weekends. This makes intuitive sense — you don't push pricing changes on a Friday when support is winding down — but it means if you're only checking weekly, check midweek.
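Given a date per change, the midweek share is a one-line check with Python's `datetime` (the dates below are invented, not my dataset):

```python
from datetime import date

change_dates = [date(2026, 1, 6), date(2026, 1, 7), date(2026, 1, 8),
                date(2026, 1, 13), date(2026, 1, 16)]  # Tue, Wed, Thu, Tue, Fri

def midweek_share(dates):
    """Fraction of changes landing Tuesday (weekday 1) through Thursday (weekday 3)."""
    midweek = sum(1 for d in dates if 1 <= d.weekday() <= 3)
    return midweek / len(dates)
```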
3. The "quiet rollout" pattern is real
Several of the changes I caught weren't immediate — they appeared to roll out to different users at different times. I'd see a price change on one browser session but not another. This suggests A/B testing of pricing (common) or a gradual rollout to new vs. existing visitors. This makes automated monitoring both more important (you need to catch the "variant" before it becomes permanent) and more complex (you need to distinguish testing from real changes).
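One way to separate a test from a committed change (a sketch of the idea, not how I verified it) is to take several same-day snapshots with fresh sessions and count distinct normalized versions; here the fetches are simulated as a list of strings:

```python
import hashlib
import re

def _norm(html: str) -> str:
    # Strip tags and collapse whitespace so only content differences count.
    return re.sub(r"\s+", " ", re.sub(r"<[^>]+>", " ", html)).strip().lower()

def looks_like_ab_test(snapshots: list[str]) -> bool:
    """More than one distinct version across same-day fetches suggests a test,
    not a committed change."""
    versions = {hashlib.sha256(_norm(s).encode()).hexdigest() for s in snapshots}
    return len(versions) > 1

fetches = ["<p>Pro $49</p>", "<p>Pro $49</p>", "<p>Pro $58</p>"]
```

If every fetch agrees for a few consecutive days, the change is probably permanent; persistent disagreement is the A/B signature.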
4. Limit changes happen more than price changes
When you combine feature downgrades and limit reductions with outright price increases, the effective-price-increase rate was much higher than 21%. The companies playing the limit game are making the same economic move as raising prices, but with less backlash risk β fewer customers notice, and those who do are usually the heaviest users (who might just upgrade anyway).
5. Competitor pricing changes are a signal, not just noise
Several of the changes I captured came in clusters: within days of each other, two competitors in the same category made similar moves. A project management tool raised its per-seat price, and within 72 hours, its direct competitor raised theirs by almost the same amount. Either they were watching each other, or they both read the same Lenny's Newsletter about pricing benchmarks. Either way — pricing signals travel fast, and if you're not watching in real time, you're always the last to know.
What this means for your pricing strategy
If you're a SaaS founder, here's what I'd take from this:
- Your competitors are raising prices more often than you think. If you haven't looked at their pricing page in 3 months, you don't know what they're charging.
- You're probably underpriced relative to your market. If 21 out of 100 competitors raised prices in a single month and you haven't, the gap is widening.
- Limit and feature changes are the sneakiest competitive move. Watch for them. They're often more impactful than outright price changes.
- Price decreases are always announced — use that. When a competitor cuts prices, they'll tell you. When they raise them, they won't. Your monitoring should be designed around catching the silent moves.
- Midweek is when changes happen. If you're checking manually, check Tuesday–Thursday.
Why I built PricePulse
Doing this manually for 30 days with 100 companies was exhausting. And I only checked once a day — a team running pricing tests could push a change, roll it back, and iterate before I ever noticed.
More importantly: most founders don't have time to do this at all. You're building a product. You're not checking 20 competitor pricing pages every morning. But you need to know when they change.
That's the problem PricePulse solves. You add your competitors' pricing URLs. We monitor them 24/7. You get an email the moment anything changes — price, feature, plan name, limit, anything.
If I'd had PricePulse running during this study, I would have spent 30 minutes setting it up instead of 30 days checking manually.
Get alerted when competitor pricing changes
Add your competitors' pricing pages. We'll watch them 24/7 and email you the instant anything changes. Early access is free.
Methodology appendix
A few questions I've already been asked:
Why 100 companies? Large enough to see patterns, small enough to manually verify every change. At 500 companies this would have required automation (which defeats the purpose of a manual study).
How did you pick the 100? I started with Product Hunt's top-rated tools in each category, then cross-referenced with G2 rankings and a handful of communities (Indie Hackers, SaaS-specific subreddits) to make sure I wasn't missing bootstrapped products. I tried to avoid weighting toward VC-backed companies.
What counts as a "pricing page"? Any public URL that lists plans and prices. Some companies price only in their app β I excluded those. Some companies have multiple pricing pages for different segments (annual vs. monthly, SMB vs. Enterprise) β I tracked the primary "Pricing" link from their nav.
Is 30 days a representative sample? Probably not. It's a snapshot. My hypothesis is that the rate would be consistent month-over-month, but Q1 tends to see more pricing activity as annual budgets reset and companies re-evaluate their go-to-market. I'd expect Q3 and Q4 to be somewhat lower.
If you have questions about the methodology or want to discuss specific findings, reach out via the site.