Designing Trust in AI-Generated Code

As design lead on AlgoBuilder, an AI-powered fintech product, I owned the UX strategy, user flows, and interaction design across the full Build → Code → Backtest → Results pipeline, taking traders from a plain-English idea to downloadable, production-ready trading code they trust enough to run live.

Role: Design Lead | Scope: UX · Interaction & Flow Design · Design System · Branding | Company: AlgoBuilder (ThinkHuge)

Problem

The gap between a trading idea and working code

Most retail traders have strategy ideas but can't code. Hiring freelancers is expensive, buying pre-built EAs (Expert Advisors) requires blind trust, and learning MQL5 takes weeks.

Milestone 1 tried a form-based approach: dropdowns, condition rows, Boolean operators. It reduced the barrier, but still required users to think in technical abstractions.

Reviewing usage data and support conversations from Milestone 1 surfaced a consistent pattern: users who completed strategy setup rarely moved on to backtest, and most abandoned around indicator configuration. The form modelled the code, not the way a trader thinks.

Milestone 1 entry conditions, form-based strategy builder
Milestone 1: condition rows with Boolean logic, still felt like code

Key Decision

A four-step trust pipeline

Trust in AI-generated code is a UX problem. I designed a four-tab flow where each step reduces uncertainty and builds the confidence needed to take the next one.

Step 1

Build: "Will the AI understand me?"

The first barrier is getting started. No forms, no dropdowns.

Natural language input. Users describe their strategy in plain English.
Conversational AI. Explains its interpretation and asks clarifying follow-ups.
Real-time summary. Entry, exit, and risk rules build as the conversation progresses. Intent is visible before any code exists.
Chat on the left, structured strategy summary on the right. Intent is visible before code
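The structured summary on the right can be thought of as a small typed record that fills in as the conversation resolves each rule, and that gates code generation until every section is covered. A minimal sketch, with field names invented for illustration rather than taken from the product's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class StrategySummary:
    """Illustrative shape of the live strategy summary (hypothetical fields)."""
    entry_rules: list[str] = field(default_factory=list)
    exit_rules: list[str] = field(default_factory=list)
    risk_rules: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        # Generation only starts once every section holds at least one rule.
        return all([self.entry_rules, self.exit_rules, self.risk_rules])

summary = StrategySummary()
summary.entry_rules.append("Buy when RSI(14) crosses above 30")
summary.exit_rules.append("Close at +2% profit or -1% loss")
print(summary.is_complete())  # False: risk rules still missing
summary.risk_rules.append("Risk 1% of balance per trade")
print(summary.is_complete())  # True
```

The point of the structure is the same as the UI's: intent is explicit and inspectable before any code exists.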

Step 2

Code: "What did it actually generate?"

In Milestone 1, users couldn't see the code being generated. They were expected to download a file and run it on a live trading account without knowing what was inside. That's a big ask when real money is on the line. I redesigned this step to show the full MQL5 source as it generates, line by line. No black box, no leap of faith.

Progress indicator. Three visible steps: generating, checking for errors, ready.
Full source visible. Every line of code stays in view throughout generation.
One click to backtest. Keeps the momentum of the flow.
Code streams in line by line. No black box, no leap of faith
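The three-step progress indicator maps naturally onto a tiny state machine: lines stream while generating, a check pass runs, and the flow ends ready or errored. A hedged sketch of that flow (function and state names are my own, not the shipped implementation):

```python
# Hypothetical model of the Code step's visible states.
STATES = ("generating", "checking", "ready", "error")

def run_code_step(lines, syntax_ok):
    """Stream generated lines into view, then gate on a syntax check."""
    shown = []
    state = "generating"
    for line in lines:
        shown.append(line)  # every streamed line stays in view
    state = "checking"
    state = "ready" if syntax_ok else "error"
    return state, shown

state, shown = run_code_step(["int OnInit()", "{", "}"], syntax_ok=True)
print(state, len(shown))  # ready 3
```

Keeping the full `shown` buffer, rather than a spinner, is what removes the black box: the user watches the artefact accumulate.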

Step 3

Backtest: "Will it work?"

The gap between "built" and "tested" was where users dropped off. They'd complete a strategy but never backtest it: the cognitive cost of configuring a separate test was too high. So backtesting lives one click away, inside the same flow.

Guided inputs. Symbol, date range, deposit. Nothing intimidating.
Years of tick-level data. Simulate against real market conditions.
One action. Configure once, click run, get results.
Backtest configuration with full history of previous runs
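"Guided inputs" in practice means catching obvious mistakes before a run starts. A minimal sketch of that validation, assuming three illustrative fields (symbol, date range, deposit) rather than the product's actual API:

```python
from datetime import date

def validate_backtest_config(symbol: str, start: date, end: date, deposit: float):
    """Return a list of human-readable problems; empty list means ready to run."""
    errors = []
    if not symbol:
        errors.append("Symbol is required")
    if start >= end:
        errors.append("Start date must be before end date")
    if deposit <= 0:
        errors.append("Deposit must be positive")
    return errors

print(validate_backtest_config("EURUSD", date(2020, 1, 1), date(2023, 1, 1), 10_000.0))  # []
print(validate_backtest_config("", date(2023, 1, 1), date(2020, 1, 1), 0))  # three errors
```

Surfacing all problems at once, instead of failing on the first, keeps the configure-once promise intact.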

Step 4

Results: "How did it perform?"

Numbers alone didn't create confidence. Even after good backtest metrics, traders wanted to watch the strategy execute on real price data before trusting it with real money. The results step gives them everything they need to make that decision.

Data visualisations. Balance curves, performance metrics, trade analysis by session and time.
Deep analysis. Profit & loss breakdowns and candlestick replay with trade annotations.
Download and go live. One click exports the EA file ready to run on MetaTrader 5.
Balance curve with full metrics. Download EA activates when backtest completes
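The balance curve and drawdown figures shown on this screen derive mechanically from the trade list. A small sketch of those two computations, using standard definitions (the per-trade P&L values here are made up for the example):

```python
def balance_curve(deposit, trade_pnls):
    """Cumulative balance after each trade: the line the chart plots."""
    curve = [deposit]
    for pnl in trade_pnls:
        curve.append(curve[-1] + pnl)
    return curve

def max_drawdown(curve):
    """Largest peak-to-trough drop along the balance curve."""
    peak, worst = curve[0], 0.0
    for value in curve:
        peak = max(peak, value)
        worst = max(worst, peak - value)
    return worst

curve = balance_curve(10_000, [+150, -80, +220, -300, +90])
print(curve[-1])           # final balance: 10080
print(max_drawdown(curve)) # worst peak-to-trough drop: 300
```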

Designing for the model, not around it

What happens when the AI is wrong

Natural language is ambiguous and LLMs hallucinate. In a fintech product where the output runs on real money, the UI has to assume the model will sometimes misinterpret intent or generate broken code. Three design decisions handle that directly.

Ambiguity

Clarify before committing

When the prompt is underspecified, the AI asks a follow-up question instead of guessing. The strategy summary stays empty until the user confirms — no silent assumptions baked into code.
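The clarify-before-committing rule can be sketched as slot-filling: if the parsed intent is missing a required part of the strategy, the flow returns a follow-up question rather than a guessed summary. Slot names below are invented for illustration:

```python
# Hypothetical sketch of the clarification gate; not the production logic.
REQUIRED_SLOTS = ("entry", "exit", "risk")

def next_action(parsed_intent: dict):
    """Ask a follow-up for the first missing slot; commit only when all are present."""
    for slot in REQUIRED_SLOTS:
        if slot not in parsed_intent:
            return ("clarify", f"Can you describe your {slot} rule?")
    return ("commit", parsed_intent)

action, payload = next_action({"entry": "Buy when RSI(14) < 30"})
print(action, payload)  # clarify: asks about the exit rule
```

The guarantee this encodes is the one stated above: nothing reaches the summary, let alone the code, until the user has confirmed every rule.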

Misinterpretation

Editable summary

The strategy summary on the right of the Build screen is the AI's interpretation made visible and editable. Users can correct a misread rule before a single line of MQL5 is written.

Broken output

Compile gate before backtest

Generated code runs through a syntax check before the "Run backtest" button becomes active. If it fails, the error surfaces inline and the user is routed back to refine the prompt — the flow never hands off a broken EA.
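The compile gate reduces to a simple rule: the backtest action is only enabled on a clean check, and a failure carries its error back inline. A hedged sketch, with `compile_fn` standing in for the real MQL5 compiler call:

```python
def gate_backtest(source: str, compile_fn):
    """Enable 'Run backtest' only if the syntax check passes."""
    ok, error = compile_fn(source)
    if ok:
        return {"backtest_enabled": True, "inline_error": None}
    # Failed compiles route the user back to the prompt with the error shown inline.
    return {"backtest_enabled": False, "inline_error": error}

# Toy stand-in compiler: rejects empty source.
fake_compiler = lambda src: (True, None) if src.strip() else (False, "empty source")
print(gate_backtest("int OnInit() { return INIT_SUCCEEDED; }", fake_compiler))
print(gate_backtest("", fake_compiler))
```

Structurally, the button's enabled state and the inline error come from the same check, so the UI can never offer a backtest on code that did not compile.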

Outcome

Idea to algo, instantly.

The product launched and is live at algobuilder.com.

Full autonomy. Traders who couldn't code can now build, test, and export their own strategies without relying on freelancers or pre-built tools they can't inspect.
Trust through transparency. Visible code, real-time summaries, and backtesting against real data let users verify every step before risking real money.
Complete flow adoption. Users consistently finish the entire Build → Code → Backtest → Results flow in one session. With the form-based approach, most abandoned before reaching backtest.

Beyond the product flow, I led the brand identity and built the design system from the ground up on atomic design methodology, with tokens, components, and patterns structured to scale. The result: a cohesive visual language the product shipped with from day one.
