
We've published a series of essays about the future of mortgage origination and what it will take for AI to meaningfully change how work gets done. We wrote about why the hardest problems in lending aren't speed or scale, but ambiguity; why rules alone don't scale; why transparency and trust are prerequisites, not nice-to-haves; and why deploying AI into real operations is as much about system design as it is about models. And in The Road Ahead, we described a vision of autonomous agents that coordinate and act across the loan lifecycle.
Those posts laid out a point of view. This one is about what that point of view looks like in production, because that vision is no longer theoretical.
Our AI agent is now live in production, operating on real loans and performing operational tasks as part of everyday mortgage workflows.
In mortgage tech, "AI" often means narrow automation driven by brittle rules, and "live" often means a limited pilot. Both can be valuable starting points, but neither is the destination we envision.
Going live with artificial intelligence means autonomously executing tasks in production where the desired outcomes are specified, but the exact means of accomplishing them are not. Doing this requires intelligence and reasoning, general knowledge and specific context, and the tools to take action toward the desired outcome.
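To make that distinction concrete, here is a minimal, hedged sketch of an outcome-driven loop: only the goal is fixed, and the sequence of actions is chosen at run time. The `Tool` type, state shape, and loop below are invented for illustration and do not reflect the actual system:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    applies: Callable[[dict], bool]   # can this tool make progress on the state?
    run: Callable[[dict], dict]       # take one action, return the new state

def run_agent(goal_met: Callable[[dict], bool], tools: list, state: dict,
              max_steps: int = 10):
    """Outcome-driven loop: the goal is specified, the exact steps are not."""
    for _ in range(max_steps):
        if goal_met(state):
            return state, True                  # desired outcome reached
        tool = next((t for t in tools if t.applies(state)), None)
        if tool is None:
            return state, False                 # stuck: escalate to a human
        state = tool.run(state)
    return state, False                         # out of budget: escalate
```

Contrast this with a rules engine, where the sequence of steps itself is hard-coded; here only `goal_met` is fixed, which is what lets the same loop absorb new tools and new tasks.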
This is not a chatbot. This is not RPA. And this is not a deterministic rules engine. All of those things have their place, but they can only change how work gets done at the margin.
Our AI agent actually does the messy work of mortgage operations: the work that is difficult to encode in rules.
Today, lenders are enabling the agent for operational tasks where a second set of eyes drives higher file quality and efficiency downstream.
One example is early file quality review, before a loan reaches processing. The agent can examine a file, check that required data is present, ensure aliases are fully captured from all documents, and reconcile conflicting information across sources. When it can resolve issues directly, which it usually can, it does. When it can't, it clearly explains what it found and why a human needs to step in.
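Sketched very loosely in code, the review-then-escalate pattern looks something like the following. The field names, file shape, and `review_file` helper are invented for illustration; this is not the production logic:

```python
from dataclasses import dataclass, field

# Hypothetical required fields for an early file-quality check.
REQUIRED_FIELDS = ["borrower_name", "property_address", "loan_amount"]

@dataclass
class ReviewResult:
    resolved: list = field(default_factory=list)      # issues fixed directly
    escalations: list = field(default_factory=list)   # issues needing a human

def review_file(loan: dict) -> ReviewResult:
    """Check required data, merge aliases from documents, escalate the rest."""
    result = ReviewResult()
    for fld in REQUIRED_FIELDS:
        if not loan.get(fld):
            result.escalations.append(f"missing required field: {fld}")
    # Reconcile borrower aliases gathered across all documents in the file.
    doc_aliases = {a for doc in loan.get("documents", [])
                   for a in doc.get("aliases", [])}
    known = set(loan.get("aliases", []))
    if doc_aliases - known:
        loan["aliases"] = sorted(known | doc_aliases)
        result.resolved.append("merged borrower aliases from documents")
    return result
```

The key design point the sketch illustrates is the split between what the agent resolves directly and what it hands back with an explanation.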
To users, it doesn't feel like a new tool or system to manage. It feels like a really fast colleague who quietly takes work off their plate and brings humans in only when judgment is required.
That said, these tasks only scratch the surface of what the agent can handle. The agent is designed to operate across the loan origination lifecycle, with new capabilities added every week.
Below is a short video showing the agent performing tasks inside a representative workflow. This is a non-production environment so that no sensitive data is exposed, but the behavior is the same as what's running in production today.
This isn't a conceptual mockup or a prototype; it's the agent doing real operational work. And this is the worst it will ever be. The underlying models will continue to improve, and even if they don't, our agent will, through tighter integration with our AI-native platform, expanded tooling, and the continuous feedback loop we've built into every workflow.
As noted in our previous posts on this topic, it is essential that the agent explains its reasoning, records a complete audit trail of every action it takes, and allows humans to review, override, and provide feedback on its actions. This helps build confidence in its abilities and improve its performance.
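One hedged sketch of what such a record could look like in practice follows; the `AuditTrail` class and its fields are hypothetical, not our actual schema:

```python
import json
import time

class AuditTrail:
    """Append-only log of agent actions, each with reasoning and review status."""

    def __init__(self):
        self.entries = []

    def record(self, action: str, reasoning: str) -> dict:
        # Every action carries its own explanation at the moment it is taken.
        entry = {"ts": time.time(), "action": action, "reasoning": reasoning,
                 "reviewed_by": None, "overridden": False}
        self.entries.append(entry)
        return entry

    def override(self, entry: dict, reviewer: str, note: str) -> None:
        # Humans can review and override any action, and the override is logged too.
        entry.update(reviewed_by=reviewer, overridden=True, override_note=note)

    def export(self) -> str:
        # The full trail is exportable for audit and compliance review.
        return json.dumps(self.entries, indent=2)
```

The point of the sketch is that reasoning and override status live on the same record as the action itself, so the trail answers "what happened and why" in one place.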
That level of transparency isn't optional, especially in a regulated environment. It's what allows an autonomous system to operate safely, earn trust, and improve over time.
It's also why this kind of agent can't be bolted onto a legacy LOS or built separately from the system of record. It has to be part of an AI-native system of record, with a data model and tools built as primitives for AI to operate on.
Today, operators spend time tracking down missing information, reconciling discrepancies, and manually validating work that follows familiar patterns.
With our AI agent, those things happen autonomously and asynchronously behind the scenes, leaving humans to focus on exceptions, decisions, and edge cases.
The result is higher throughput per person. Teams can scale to originate more loans without adding headcount.
This is a concrete step toward the long-term shift to more autonomous mortgage operations: AI-native systems that coordinate and execute work, with humans specifying and supervising outcomes, not execution mechanics. We'll continue expanding what the agent can do, adding new capabilities and pushing intelligence deeper into the core of origination workflows. By the end of the year we expect it to handle the vast majority of the work that currently sits on operators' plates.
We'll share more about its growing capabilities and the outcomes for lenders in the coming months.
For now, the takeaway is simple: operational AI agents aren't just theoretical anymore. With the right system design, they can operate safely and transparently in production and drive real operational efficiency.
If you're a lender curious what AI-native mortgage operations actually look like, we'd love to show you. And if you've been skeptical, that's reasonable β this industry has heard big promises before. We waited to write this until the system was live in production.