AI Software User Research on Gen Z Personal Financial Management

Budgetwise.ai is an AI-powered personal finance app that simplifies money management for young users

Context: The product was undergoing a major AI-powered update

My Role: Lead UX researcher

Methods: Market exploration / Survey / In-depth interviews / Usability testing / Feature prioritization / Competitive Analysis

Goals

Product Goal

  • To revolutionize how young users approach personal finance by leveraging AI-powered technology.

Research Goal:

  • Find the market gaps that could make this app stand out

  • Understand what users truly need, what frustrates them, and how they manage their money with budget plans

Challenges

When I first joined, the startup faced several significant hurdles:

01

A lack of clear product direction during the redesign phase, compounded by the PM lead's unplanned leave

02

Competition from big tech companies in the crowded AI market

03

Operating within a constrained budget

04

A team inexperienced in AI product development

Research Solution

To tackle these challenges effectively, I designed a multi-phased, mixed-method research approach, sequenced so that each phase answered specific questions and produced actionable insights:

I organized the work around a "4C" framework tailored to this problem space: Clarify, Collect, Connect, and Collaborate, later extended with Refine and Communicate phases as the project evolved

Step 1: Clarify – Survey

At the beginning of the project, our stakeholders needed clarity on the market size and target user segments. The survey was designed to:

    • Provide quantitative data to validate pricing models and identify which user segments were most interested in an AI-powered finance tool

    • Serve as a cost-effective method, allowing us to collect over 800 responses within five days, which was crucial given our limited budget of $500

Key Questions Addressed:

    • What are the primary pain points for potential users in managing their finances?

    • Which features and functionalities are most desirable in a personal finance app?

    • How can AI capabilities enhance the user experience?

Based on survey findings, I identified two key user groups by analyzing budgeting habits, frustrations, and willingness to adopt new tools:

  • Current Budgeters: Use budgeting apps but find them frustrating due to usability issues or missing features

  • Potential Budgeters: Interested in budgeting but haven’t developed the habit due to complexity or lack of motivation

Step 2: Collect – In-Depth Interviews

The PM and stakeholders needed research to tackle an important problem space in the budgeting app ecosystem.

They wanted to explore monthly budgeting habits and automation features in more depth, which required me to apply behavioral psychology principles.

Surveys provided useful information but lacked details on income, AI views, and community needs. We recognized the need for in-depth interviews to explore:

  • Why do competitor apps focus on fresh monthly budgets instead of replicating previous ones?

  • Would an automated budget based on past spending habits truly benefit users?

  • Are budgets transferable, or do users prefer starting fresh each month?

  • What communication style do users want from an AI assistant?

Ten participants (5 per persona) were recruited online through User Interviews to ensure a diverse range of experiences and financial literacy levels.

INITIAL RESULTS

Following the survey and user interviews, the results showed that:

  • Survey insights convinced stakeholders to pivot toward an AI-driven strategy, realigning the product’s roadmap and strategy

  • However, outcomes from interviews showed that users were cautiously optimistic about AI assistants but wanted transparency and control over suggestions (e.g., tailored financial advice, automated categorization, and predictive insights)

  • Interesting validation about the community feature — seen as useful for shared tips, accountability, and learning from others

THE PLOT TWIST

As the team gained momentum, unforeseen challenges arose…

  • Early user engagement showed a different story: people weren’t using AI-driven budgeting tools as expected.

    • Some didn’t trust it, while others felt it didn’t adapt to their needs or were confused by how the AI assistant communicated

  • Misaligned expectations from stakeholders required revisiting the product's design and strategy

These new challenges pushed me to rethink our approach

-> Here’s how we turned things around:

Instead of assuming AI was the problem, I applied Human-Centered AI (HCAI) principles to understand how users perceived, interacted with, and relied on AI-generated financial recommendations.

Step 3: Refine – Researching AI Trust & Adoption

The new problem wasn’t just about what the AI was doing, but how it was doing it.

If AI was going to be the core of the product, it needed to earn users’ trust and feel like a real financial assistant—not just an automated script.

So, I shifted focus to understanding why users were hesitant about AI-driven budgeting and what would make them trust it.

✓ Understanding the AI Adoption Barriers

  • User Interviews – I ran a second round of interviews to further explore how users currently manage money and whether they trust AI for financial decisions

  • Behavioral Observation – During the remote interviews, I also tracked where users dropped off when interacting with AI-generated suggestions

What I found:

  • Lack of transparency – Users didn’t understand why the AI generated particular budgets and alerts, leading to distrust

  • Unclear value proposition – Users weren’t sure how AI actually helped them beyond what they could already do manually

✓ Designing AI to Work With Users, Not For Them

Instead of replacing human decision-making, AI needed to act as a collaborative tool.

The goal was simple—let users feel in control of AI, not the other way around.

I recommended the following key changes for the next phase of development:

✧ AI Confidence Scores – A transparency layer that explains why AI makes budgeting suggestions, similar to explainable AI (XAI) in fintech

✧ Personalization Controls – Users could adjust automation settings, choosing between fully AI-driven, hybrid, or manual budgeting
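To make the first recommendation concrete, here is a minimal sketch of how a budgeting suggestion could carry its confidence score and explainability tag together. The data shape, field names, and values are hypothetical illustrations, not the app's actual implementation.

```python
# Minimal sketch of an "AI Confidence Score" with an explainability tag.
# All fields and values here are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class BudgetSuggestion:
    category: str            # spending category the suggestion applies to
    suggested_amount: float  # proposed monthly budget
    confidence: float        # 0.0-1.0 model confidence in the suggestion
    explanation: str         # plain-language reason surfaced to the user

def render_suggestion(s: BudgetSuggestion) -> str:
    """Format a suggestion with its confidence score and explainability tag."""
    pct = round(s.confidence * 100)
    return (f"{s.category}: ${s.suggested_amount:.0f}/month "
            f"({pct}% confidence) - {s.explanation}")

suggestion = BudgetSuggestion(
    category="Dining out",
    suggested_amount=220,
    confidence=0.82,
    explanation="Based on your last 3 months of spending trends",
)
print(render_suggestion(suggestion))
```

Pairing the score and the explanation in one structure keeps the transparency layer attached to every suggestion the user sees, rather than bolted on afterward.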

Step 4: Connect – Competitive Analysis

Before prioritizing features, we needed to understand how competitors positioned themselves in the market.

Key Questions Addressed:

  • What are the gaps in competitor products that our product can address?

  • How do competitors implement AI features, and what can we do differently?

  • Which aspects of competitors’ user experience resonate most with our targeted users?

Here is my process:

  • Identify opportunities for differentiation by analyzing strengths and weaknesses in competitors like Wally and Cleo

  • Use thematic analysis of competitor reviews to pinpoint areas for improvement in our own product design
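As a schematic of the thematic-analysis step, the sketch below tallies how often each theme surfaces across competitor reviews. The reviews, theme names, and keywords are invented; the actual coding of reviews was a manual qualitative process, and this only illustrates the tallying idea.

```python
# Toy sketch of tagging competitor reviews with themes and tallying them.
# Reviews, theme names, and keywords are invented for illustration.
import re
from collections import Counter

reviews = [
    "The budgeting categories are confusing and I can't customize them",
    "Love the chat but the AI advice feels generic",
    "Syncing my bank account failed twice this week",
    "Wish I could customize categories; the AI tips are generic too",
]

themes = {
    "customization": ["customize", "categories"],
    "ai_quality": ["ai", "generic", "advice", "tips"],
    "reliability": ["failed", "sync"],
}

counts = Counter()
for review in reviews:
    text = review.lower()
    for theme, keywords in themes.items():
        # Whole-word match so "ai" doesn't fire on words like "failed"
        if any(re.search(rf"\b{k}\b", text) for k in keywords):
            counts[theme] += 1

print(counts.most_common())
```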

How did I deliver the findings to the designers?

  • Utilized screenshots to visualize concepts and facilitate designers' comprehension

  • Regular workshops and check-ins ensured everyone stayed aligned at critical decision points and remained invested in the process

  • Proactively engaged team members in research sessions and shared interim insights in one-on-ones to maintain alignment


Evaluation Process:

  1. Analyzed Wally & Cleo to determine their strengths and weaknesses

  2. Conducted a side-by-side comparison of our prototype's AI Chat feature against competitors' designs

  3. Extracted insights and areas of improvement for AI features for a better user experience

With insights from surveys, interviews, and competitive analysis, we found that “Features and Functionality” and “Ease of Use” were the top factors influencing users to switch tools.


Step 5: Collaborate – Feature Prioritization Workshop

Building on these insights, we organized a collaborative workshop to define the Minimum Viable Product (MVP). The goal was to:

  • Align the team by prioritizing features that directly address user pain points and business objectives

  • Evaluate technical feasibility and resource limitations to pinpoint must-have features for the MVP while deferring less critical ones

My Approach:

I divided the workshop into 5 stages:

01. Introduction:

Reviewing user insights and establishing the purpose

02. Idea Generation and Grouping:

Brainstorming features based on user feedback

03. Evaluation and Prioritization:

Ranking features based on criteria (e.g., user impact and feasibility)

04. Next Steps:

Summarizing decisions and defining action plans

05. Closing:

Finalizing timelines and check-ins with the engineering team
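The evaluation-and-prioritization stage can be sketched as a simple weighted score over user impact and feasibility. The feature names, 1-5 scores, and weights below are hypothetical placeholders, not the workshop's real data.

```python
# Sketch of impact-vs-feasibility ranking as a weighted score.
# Feature names, 1-5 scores, and weights are hypothetical placeholders.
features = {
    "AI budget suggestions":    {"impact": 5, "feasibility": 3},
    "Explainability tags":      {"impact": 4, "feasibility": 4},
    "Community tips feed":      {"impact": 3, "feasibility": 2},
    "Manual/hybrid/auto modes": {"impact": 4, "feasibility": 5},
}

IMPACT_WEIGHT, FEASIBILITY_WEIGHT = 0.6, 0.4  # slightly favor user impact

def priority(scores: dict) -> float:
    """Combine the two criteria into one sortable number."""
    return IMPACT_WEIGHT * scores["impact"] + FEASIBILITY_WEIGHT * scores["feasibility"]

ranked = sorted(features, key=lambda name: priority(features[name]), reverse=True)
for name in ranked:
    print(f"{name}: {priority(features[name]):.1f}")
```

A transparent formula like this helps a workshop group argue about weights and scores explicitly instead of ranking features by gut feel.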

Step 6: Communicate – Insights and User Flow

After prioritizing key features, we designed a user flow to:

  • Visualize how users interact with spending trends, category tracking, and Favorites

  • Help stakeholders understand the user journey at a glance

  • Promote clear communication between teams, eliminating guesswork and streamlining the design process to save valuable time and resources

Final Recommendation

    • Introduce AI Confidence Scores: clear, user-friendly explanations of why the AI suggests certain budgets or spending insights

    • Display explainability tags (e.g., "This budget is based on your last 3 months of spending trends") to make AI reasoning more accessible

    • Allow users to toggle AI involvement, and provide options for fully automated, hybrid, or manual budgeting modes

    • Implement AI learning preferences, letting users adjust how often AI suggests changes to their budget

    • Add a “Teach AI” feature where users can provide quick feedback (e.g., “This suggestion isn’t relevant to me” or “Not helpful”)

    • Allow users to customize categories for AI-generated budgets, so it adapts to their unique financial habits over time

    • Frame AI as an advisor, not a decision-maker—marketing and UX copy should emphasize “smart suggestions” instead of rigid automation

    • Provide AI insights with human explanations, such as “This recommendation is based on past spending, but you can adjust it as needed”

Final Outcomes & Impact:

The research effort ultimately delivered:

  • User trust in AI-driven budgeting increased, leading to higher engagement with automated financial tools

  • Stakeholders shifted from treating AI as a secondary feature to making it a core strategy

  • The feedback loop improved AI accuracy; consequently, recommendations became more relevant over time

What would I do differently next time?

Although the refinement cycle moved quickly, I wish we had tested and measured the AI's effectiveness. Next time, once the proposed refinements were implemented, I would run usability tests to measure their impact:

  1. A/B Testing: Engagement rates before and after adding transparency and personalization features

  2. Sentiment Analysis: Did trust in AI recommendations change before and after the design update?

  3. Adoption Tracking: How many users customized AI settings vs. relying on default automation?
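The adoption-tracking question could be answered from product event logs roughly as sketched below; the event names and log entries are invented for illustration.

```python
# Sketch of the adoption-tracking metric: share of users who customized
# AI settings vs. kept the default automation. Event log is invented.
events = [
    {"user": "u1", "action": "changed_ai_mode"},
    {"user": "u2", "action": "kept_default"},
    {"user": "u1", "action": "adjusted_suggestion_frequency"},
    {"user": "u3", "action": "kept_default"},
    {"user": "u4", "action": "changed_ai_mode"},
]

CUSTOMIZATION_ACTIONS = {"changed_ai_mode", "adjusted_suggestion_frequency"}

# Count each user once, no matter how many settings they changed
customizers = {e["user"] for e in events if e["action"] in CUSTOMIZATION_ACTIONS}
all_users = {e["user"] for e in events}
adoption_rate = len(customizers) / len(all_users)
print(f"{adoption_rate:.0%} of users customized AI settings")  # 50% here
```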

My Learnings:

  • Spend more time at the start aligning with stakeholders on goals, metrics, and expectations. I learned to maintain a dashboard of team goals and the questions research is working to answer, which reduces misaligned priorities later in the project

  • Incorporate higher-fidelity prototypes earlier in usability testing to capture deeper insights into user behavior and decision-making

  • Engage engineers earlier in the process to address technical feasibility and ensure research recommendations align with development capabilities

  • Develop scalable strategies to recruit a more diverse participant pool without exceeding budget constraints

Next

02. Sleep Earbuds & AI-driven Planner App