
Top AI Questions Every Real Estate Executive Is Asking Right Now

Stackpoint team

This article is a guest post written for ThesisDriven by Stackpoint general partners Chris Kelly and Adam Pase. 

Stackpoint builds vertical AI companies from the ground up. To validate ideas for new companies, we've had in-depth conversations over the past few years with nearly a hundred real estate leaders, including owners, brokers, lenders, developers, and operators, identifying the most pressing operational pain points that AI could solve. 

Across these conversations, one pattern stands out. Real estate owners and developers see AI’s potential, but translating that interest into meaningful action isn’t straightforward. The same core questions echo across firm types, roles, and markets: How do I start? Where does AI actually help? What’s hype and what’s real?

This post brings those most pressing questions and answers into one place. The insights reflect both our direct experience in the field and the thinking captured in our recent white paper, "Real Estate AI: A CEO's Guide to What Matters Now." 

1. What exactly is "AI", and what do real estate leaders actually need to understand about it?

You don't need to become a computer scientist to understand AI, but a simple mental framework helps you ask the right questions: 

Think of AI like building architecture. Most AI applications today share the same core structure: a compute layer (the hardware, like the chips produced by Nvidia), models (the "brain," like GPT-4 or Claude), frameworks and tooling (the developer infrastructure), and applications (what you actually interact with). An AI tool's performance depends on every layer, not just the top-level features. 


The application layer is what you see: for example, the interface for lease abstraction or investment memo generation. But the quality depends on the foundation: how well the data is structured, whether the system can swap in better models as they emerge, and if the architecture can scale reliably.

The best AI tools feel like they were impossible to build before AI matured because they were designed around AI's strengths from day one. AI enables a new way of building software that wasn’t possible before.

Take LoanLight, for example. It reimagines non-QM mortgage underwriting from the ground up by applying purpose-built AI to navigate complexity that previously required layers of human expertise, judgment, and back-and-forth. Instead of relying on rigid forms and static checklists, LoanLight dynamically ingests borrower documents, matches them to evolving investor guidelines, and flags conditions in real time, accelerating decisions with confidence and clarity. Before recent advances in AI, this level of contextual understanding, adaptability, and real-time decision support simply wasn’t feasible.

SurfaceAI's Lease Audit AI Agent is another example. Traditional lease auditing requires scheduled, manual review by internal or external auditors to spot inconsistencies across leases. It's slow, expensive, and leaves room for revenue leakage between audits. Surface's Lease Audit AI Agent flips this model: it continuously audits 24/7 in the background, adapting to each property's specific lease formats and terms, flagging discrepancies the moment they occur, and automatically routing them to onsite teams for resolution. This isn't just faster; it's an entirely new model of autonomous, real-time, AI-native risk mitigation that wouldn't be possible without the current generation of AI. 

Ask your vendors these questions to understand whether they are the real deal: Can you upgrade to better models down the line? How is your data actually structured for AI? What happens when the underlying technology gets better? Does your solution provide an incremental improvement to my workflow, or does it improve the workflow 10x? 

2. Which workflows are actually ready for AI—and which still need a human touch?

AI works best not where headcount is high, but where friction is high, volume is consistent, and outcomes are repeatable. Look for processes involving pattern recognition, data extraction, content generation, or structured decision-making.

At Stackpoint, we use a simple framework to evaluate and build vertical AI tools—one that maps most real-world applications into four core capability buckets: 

  • Retrieve – Find and surface relevant information from documents and systems

  • Predict – Forecast outcomes based on historical or real-time data

  • Generate – Create content, text, summaries, or recommendations

  • Act – Take actions or trigger workflows based on outputs

The most powerful AI systems don’t live in a single bucket. They combine multiple capabilities such as data retrieval, reasoning, content generation, and task execution into integrated, multi-step workflows. This makes them agentic. They do not just respond to prompts; they initiate, plan, and act autonomously to achieve goals. 


These AI agents go far beyond chatbots. They function more like digital employees or teams, capable of managing complex tasks from start to finish without human micromanagement. Once deployed, they can identify what needs to be done, decide how to do it, and carry it out. They retrieve information, make decisions, generate outputs, and take real-world action with minimal oversight. For example, SurfaceAI has multiple AI agents handling different aspects of multifamily operations from lease audits to delinquency to due diligence. 
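To make the four-capability framework concrete, here is a toy sketch of an agent loop in the spirit of the lease-audit example above. All class and method names are hypothetical illustrations, not Stackpoint or SurfaceAI code; a production agent would wrap LLM calls and live property-management systems rather than plain dictionaries.

```python
from dataclasses import dataclass, field

@dataclass
class LeaseAuditAgent:
    """Toy agent combining the four buckets: Retrieve, Predict, Generate, Act."""
    leases: dict                      # lease_id -> rent actually billed
    expected: dict                    # lease_id -> rent per the signed lease
    flagged: list = field(default_factory=list)

    def retrieve(self, lease_id):
        # Retrieve: surface the relevant figures from documents and systems
        return self.leases[lease_id], self.expected[lease_id]

    def predict(self, billed, contracted):
        # Predict: decide whether this looks like a billing discrepancy
        return abs(billed - contracted) > 0

    def generate(self, lease_id, billed, contracted):
        # Generate: draft a human-readable discrepancy summary
        return f"Lease {lease_id}: billed {billed}, contract says {contracted}"

    def act(self, summary):
        # Act: route the finding to the onsite team (here, just record it)
        self.flagged.append(summary)

    def run(self):
        # The agentic part: one multi-step workflow chaining all four buckets
        for lease_id in self.leases:
            billed, contracted = self.retrieve(lease_id)
            if self.predict(billed, contracted):
                self.act(self.generate(lease_id, billed, contracted))
        return self.flagged

agent = LeaseAuditAgent(
    leases={"A-101": 2400, "B-202": 1850},
    expected={"A-101": 2500, "B-202": 1850},
)
print(agent.run())  # flags the under-billed lease A-101
```

The point of the sketch is the shape, not the logic: each capability is a small, swappable step, and the `run` loop is what turns four isolated functions into an autonomous workflow.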

But while AI excels at consistency and scale, it still struggles with nuance, ambiguity, and strategic judgment. That’s why deployment should be targeted: automate where you need speed and standardization, and keep people involved where experience, empathy, or discretion are essential.

Ready for AI: Document processing, lease abstraction, property descriptions, tenant screening, maintenance scheduling, basic customer service inquiries, market research compilation.


Keep humans involved: Complex negotiations, relationship management, strategic planning, novel deal structures, crisis management, final approval on high-stakes decisions.

3. Should I wait for my current vendors to catch up, or work with newer players?

Many vendors have added AI to their legacy tools as surface-level features: an autocomplete here, a chatbot there. But these additions are often layered on top of rigid software architectures that weren't built to support AI in the first place.

It's like trying to retrofit a skyscraper's foundation and internal systems while the building is still standing: duct-taping in new wiring without upgrading the structure. It might look modern on the surface, but it won't scale, flex, or perform reliably over time.

AI introduces a new way of building that never existed before. The best AI vendors think differently about the problem. Instead of asking "How do we add AI to what we already have?", look for vendors who start by asking, "If we built this workflow today, knowing what AI can do, how would we design it?"

1. Workflows are designed around AI's strengths, not old constraints. For example, rent delinquency was traditionally a slow and error-prone process. Staff had to track overdue payments manually, send reminders through multiple channels, post notices in person, and manage legal steps, all while trying to stay compliant with complex regulations.

SurfaceAI has built an AI agent to transform this workflow. It continuously monitors for late payments, automatically sends reminders via email, SMS, and chat, escalates to legal when needed, and ensures every required step happens on time. Humans step in only when absolutely necessary (like posting a notice on the door). What was once a manual and risky process is now a continuous, autonomous system, built for scale, precision, and compliance from the start. 

2. Data pipelines are structured for real-time retrieval, validation, and feedback, not just static storage.

3. Interfaces are built for human-AI collaboration, with control points and oversight layers built in from Day 1. 

Truelist has an AI-powered listing coordinator for residential listing agents—automating listing prep, vendor scheduling, and seller communication through a voice- and text-based AI assistant. It also includes a collaborative seller dashboard. The AI assistant handles timeline creation, vendor coordination, and seller dashboard updates, while the listing agent stays in control of the entire process.

4. Infrastructure is modular and flexible enough to upgrade over time, allowing you to swap models as better ones emerge, without re-architecting the whole product. 

For example, LoanLight is designed to support continuous improvement without requiring constant rebuilds. Its infrastructure is modular and adaptable, making it easy to deploy AI agents across different lending verticals such as non-QM, conforming, and home equity loans.

The system is also model-flexible, meaning it can integrate newer, more capable models from providers like OpenAI or Anthropic depending on the specific task each agent is designed to perform. This flexibility allows LoanLight to evolve quickly as technology advances or lending requirements shift, without disrupting the integrity of core workflows. 
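The model-flexibility described above usually comes down to one design choice: workflows depend on a narrow interface rather than a specific vendor SDK, so a better model can be swapped in without re-architecting. A minimal sketch, with hypothetical class names standing in for real model integrations:

```python
from typing import Protocol

class UnderwritingModel(Protocol):
    """The only contract the workflow knows about."""
    def summarize(self, document: str) -> str: ...

class BaselineModel:
    # Stand-in for whatever model shipped on day one
    def summarize(self, document: str) -> str:
        return f"[baseline] {document[:20]}"

class NewerModel:
    # Stand-in for a more capable model released later
    def summarize(self, document: str) -> str:
        return f"[newer] {document[:40]}"

class LoanWorkflow:
    """Depends on the interface, never on a specific provider."""
    def __init__(self, model: UnderwritingModel):
        self.model = model

    def review(self, document: str) -> str:
        return self.model.summarize(document)

doc = "Borrower bank statements, 12 months, self-employed income"
workflow = LoanWorkflow(BaselineModel())
print(workflow.review(doc))

# Upgrading the model is a one-line change; the workflow is untouched.
workflow.model = NewerModel()
print(workflow.review(doc))
```

This is the question to put to vendors: if swapping the model requires touching the workflow code, the architecture is coupled to today's technology rather than built to absorb tomorrow's.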

5. Performance is optimized for scale: not just "it works for a few users," but "it works reliably across a portfolio, a region, and beyond." SurfaceAI's product strategy embeds centralization at its core. It's engineered to support the needs of corporate, regional, and on-site teams alike, ensuring consistent performance across diverse properties and geographic locations. 


AI-native systems don’t feel like they’re using AI as a feature. They feel like they were impossible to build before AI matured. That’s the difference. That’s why Stackpoint partners with real estate operators to build AI companies from the ground up: to make the most of what AI uniquely enables.

4. How do I know if an AI tool is actually useful—and how do I evaluate it if I’m not technical?

Not all AI tools are built the same. Some fundamentally change how work gets done; others simply layer AI onto outdated systems. The most useful tools don’t just automate individual tasks—they rethink entire workflows, unlocking speed, scale, and precision that were previously out of reach.

You don’t need a technical background to evaluate whether a tool is genuinely valuable. The key is to focus on whether the tool delivers real transformation. Ask yourself: Does it meaningfully reduce friction in a high-impact process? Can it operate end-to-end with minimal human oversight? Is the tool adaptable—designed to improve as models evolve and new data becomes available?

Focus on outcomes, not just architecture. Ask about their approach to accuracy, error handling, and how they ensure outputs are based on reliable data rather than model specifics.

Here's what to ask for:

  • Request a complete workflow demonstration; look for tools that enable new ways of working, not just faster versions of the old ones

  • Ask how they handle mistakes and edge cases

  • Understand their data requirements and security practices

  • Evaluate their update frequency and development velocity

  • Test with your actual documents and use cases

  • Clarify data ownership and whether your data will be used to train the model

Treat vendors as operating partners, not just feature providers. The best AI companies go beyond delivering software. They immerse themselves in your business challenges and co-design solutions that improve workflows rather than simply showcase technology. Their goal is not just to sell you the car. It is to help you become an expert driver. That means guiding implementation, enabling adoption, and ensuring long-term impact.

Warning signs: Vendors who can't explain their approach clearly, deflect questions about accuracy, or promise solutions that sound too good to be true. Mature AI companies acknowledge limitations and build safeguards accordingly.

5. What are the risks of moving too fast—or too slow—on AI?

Let's reframe this question: where can you afford to experiment now, versus where do you need to watch closely? The real risk isn't the technology; it's the organizational response.

Moving too fast: Implementing AI in business-critical workflows without proper testing, human oversight, or error handling. This can lead to compliance issues, customer service problems, or operational disruptions.

Moving too slow: Competitors gain efficiency advantages while you deliberate. AI adoption creates compounding benefits: early momentum builds organizational capability, data advantages, and operational leverage.

The balanced approach: Run controlled experiments in non-critical workflows while carefully monitoring results. Celebrate learning speed over perfect outcomes. Help your team get comfortable using AI through hands-on experience rather than theoretical planning.

Early momentum compounds not just in tools, but in organizational muscle. Teams that develop AI fluency early become better at identifying opportunities, implementing solutions, and adapting to new capabilities.

6. How do I implement AI internally without derailing other priorities?

Start small and scale smart. The goal isn’t to transform everything at once, but to build internal fluency through manageable experiments. Begin with high-friction, low-stakes workflows. Test an AI solution in a focused environment, measure the impact, and build from there. This gradual approach helps teams learn while showing real results.

It’s important to move in steps. AI tools get better with use, and your team needs time to understand where AI performs well and where human judgment still plays a critical role. Jumping into full automation too quickly can create unnecessary risk. A better approach is to keep people involved early on—reviewing, guiding, and approving AI output—and then gradually increasing autonomy as the technology proves itself.

The goal isn’t for every employee to become an AI expert. What matters is that people across the organization begin to shift how they work. That looks like:

  • Asking “Could AI help here?” when they hit points of friction

  • Building quick prototypes before requesting a new system

  • Knowing when AI can be trusted, and when human oversight is needed

  • Iterating more freely, because the cost of testing new workflows is lower

As AI fluency spreads, so does operational leverage. The teams that identify opportunities early and act on them quickly will gain a meaningful advantage.

One of the biggest mistakes companies make right now is over-planning. Waiting until you have a fully baked, enterprise-wide AI strategy might feel safe. But the reality is: by the time you finish it, you’ll already be behind. You don’t need a 12-month AI roadmap to start. You need a 2-week experiment. Pick a team, a process, a tool, then try it and learn from it. Then try again, a little faster, and a little better. The momentum builds from there.

7. What's my role as an executive in all this?

Position yourself as the tone-setter for curiosity, iteration, and internal permission, not as the AI evangelist or technical architect.

Your primary responsibilities: clearing red tape for experiments, celebrating learning speed, and making strategic decisions about where to invest in AI capabilities. The biggest barriers to AI adoption are usually organizational, not technical.

Behaviors that unlock progress:

  • De-risking early tests by setting clear parameters and expectations

  • Protecting early adopters from organizational antibodies ("that's not how we do it here")

  • Making AI fluency part of performance expectations for managers

  • Streamlining approval processes for low-risk AI pilots

Strategic decisions only you can make: When to partner with AI-native vendors versus building internally, how to reallocate capital toward AI-enabled processes, and which workflows to prioritize for transformation.

The companies moving fastest have CEOs who personally drive critical decisions, not by dictating technology choices, but by owning the speed and scale of organizational change.

More Resources
