The Next Wave of SaaS Is Trust
AI will make it easier than ever to launch software. The products that last will come from people who actually have the problem and care enough to solve it well.
AI is going to create a flood of software.
That is not a bold prediction anymore. It is already happening.
The cost of starting a product is collapsing. You can describe an idea, generate a landing page, scaffold the app, wire up auth, add a database, create a demo video, and get something online faster than ever.
That is exciting.
It is also going to make a lot of products feel disposable.
When everyone can build, the question changes. It is no longer just:
Can this be built?
It becomes:
Why should anyone trust this?
That is the part I keep coming back to.
More software means less default trust
There are going to be more AI apps, more SaaS experiments, more polished landing pages, more little tools, more wrappers, more demos, more half-products, and more attempts to cash in on whatever wave is currently moving.
Some of that will be useful.
A lot of it will not.
The strange part is that many of those products will look good. They will have clean design. They will explain the category well enough. They will have screenshots, pricing pages, changelogs, and maybe even a decent first version.
But polish is getting cheaper too.
That means users are going to look for other signals.
Does the person building this actually understand the problem?
Do they have the problem themselves?
Do they care enough to keep solving it after the launch energy wears off?
Do they stand for something more specific than “this market looks big”?
That trust layer is going to matter more, not less.
The founder story is not fluff
I used to think founder stories could sound a little performative.
Sometimes they do. A lot of product storytelling is just a clean narrative wrapped around a market thesis.
But I am starting to understand the useful version.
The useful version is not “here is my dramatic origin story.”
It is:
Here is the problem I actually have. Here is why it matters to other people too. Here is why I care enough to keep going.
That matters because users are not only buying software. They are buying judgment.
They are trusting that the product will make the right tradeoffs. They are trusting that the person behind it understands the edge cases. They are trusting that the company will not disappear after the first shallow version.
That is especially true for AI products.
AI makes products feel powerful quickly. It also makes products feel suspicious quickly. Users can tell when something is a thin layer over a trend. They might not say it that way, but they feel it.
The more AI-generated everything becomes, the more people will care about what is real.
This changed how I think about product ideas
I wrote recently about how I choose projects to work on. The short version is that I use four questions:
- Do I actually have this problem?
- Would other people use it or find it helpful?
- Does the problem matter to enough people?
- Would I still work on it for a long time if it never paid off?
I still believe that filter is right.
But I think there is another layer underneath it now.
If the AI market is going to be full of products, trust becomes part of the product itself.
It is not enough to say “this is a pain point.”
I want to be able to say:
This is my pain point. I have lived it. I have taste around it. I care about the outcome. I am building for myself first, but not only for myself.
That does not guarantee a good product.
But it is a much stronger starting point than chasing a category because it looks fundable.
This is where AI makes the bar higher
AI makes it easier to build quickly.
It also makes it easier to sound convincing before the product has earned that credibility.
That is the tension.
You can generate the pitch before you have talked to users. You can generate the UI before you understand the workflow. You can generate the brand before you know what the product should stand for.
None of that is automatically bad.
It just means the visible surface of a product is less reliable as a signal.
So the deeper signal matters more.
What problem led you here?
What have you learned by being close to it?
What tradeoffs would you make differently because you understand the pain personally?
What would you keep improving even if the first launch did not get attention?
Those questions are harder to fake than a landing page.
That is why I think trust will become a larger part of product discovery. People will not only ask whether the tool works. They will ask whether they believe the people behind it are solving something real.
Trust is a product feature
I think a lot of the next wave of SaaS will be judged on trust.
Not only security trust.
Not only uptime trust.
A more human kind of trust:
- Do I believe this builder understands the problem?
- Do I believe they care about the people using it?
- Do I believe the product is pointed at a real pain, not just a market opportunity?
- Do I believe they will make better decisions because they are close to the problem?
That kind of trust is hard to fake for long.
You can fake polish. You can fake a launch. You can fake a positioning statement. You can fake momentum for a while.
But it is much harder to fake sustained care.
The products that win will still need good execution. They will need distribution, quality, design, reliability, and timing.
But I think the products people actually want to stick with will also need a believable reason to exist.
The point
AI is making it easier to build software.
That means more people will build.
It also means users will have to decide what is worth trusting.
My current belief is simple:
The best products will come from people who have the problem, can prove they understand it, and care enough to keep solving it for others.
That is not a guarantee.
It is a filter.
And in a world full of AI-generated products, filters matter.