Leadership
Good design leadership is not about having the best taste in the room.
It's about building the conditions where the best ideas survive.
Discovery Process
01
Stakeholder Mapping
I identify: who has the most context, who has decision authority, who will quietly block progress, and who is the most user-empathetic person in the room. I build a relationship with all four.
02
User Interviews
Structured but conversational. Recorded with permission. Synthesised with AI assistance, then every theme challenged against the raw transcripts before it reaches the team.
03
Workflow Mapping
End-to-end process documentation with pain points annotated at every step. Validated in a facilitated session with stakeholders and SMEs — not produced in isolation.
04
Problem Framing
A 'How Might We' statement that includes: business context, user need, current failure mode, and a measurable success criterion. This becomes the north star for the entire project.
05
Alignment Session
Research findings and proposed design direction presented to all stakeholders before any high-fidelity design work begins. Changes are far cheaper here than after delivery.
Design Reviews
I run design reviews as hypothesis tests, not showcases.
Every review I lead starts with this frame: Here is the problem I was trying to solve. Here is what I decided, and here is why — including the alternatives I rejected. Here is what I'm not confident about, and what I'd want to test.
This approach invites substantive critique rather than stylistic feedback, demonstrates strategic thinking rather than visual execution, and models intellectual honesty for the junior designers watching how I respond to challenge.
'The question I hate most in a design review: Does it look right? The question I want: Does this solve the problem we said we were solving?'
AI in UX Strategy
01
Map the Workflow
Where are the highest-friction moments? Where is manual effort being spent on things that don't require judgment?
02
Identify Decision Moments
Where do users need information to act? Is that information currently available, or is it buried or delayed?
03
Assess Data Availability
What signals exist in the system? What can be derived? What would need to be built?
04
Score Opportunities
Use a 2×2: User Impact vs. Implementation Feasibility. Focus on high/high first. Build the AI roadmap from that matrix (a scoring sketch follows this list).
05
Design the Trust Layer
For every AI feature: how do users know what drove this? How confident is the system? What's the action pathway? What happens when the output is wrong?
06
Define the Feedback Loop
How does the AI learn from user behaviour? How does the system improve over time? Design for the second version, not just the first.
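To make step 04 concrete, here is a minimal sketch of the 2×2 expressed as a scoring pass over a backlog of candidate AI features. Every score, threshold, and helper name is a hypothetical illustration rather than anything taken from either product; TypeScript is used only because it is a common product-team language.

```typescript
// Hypothetical scoring model for the User Impact vs. Implementation Feasibility 2x2.
type Score = 1 | 2 | 3 | 4 | 5;

interface Opportunity {
  name: string;
  userImpact: Score;                 // friction or judgment removed for the user
  implementationFeasibility: Score;  // data availability, model maturity, build effort
}

// Quadrant assignment: high/high leads the roadmap.
function quadrant(o: Opportunity): "build-now" | "plan-for" | "quick-win" | "park" {
  const highImpact = o.userImpact >= 4;
  const highFeasibility = o.implementationFeasibility >= 4;
  if (highImpact && highFeasibility) return "build-now";
  if (highImpact) return "plan-for";        // impactful, but needs data or model work first
  if (highFeasibility) return "quick-win";  // cheap, but marginal on its own
  return "park";
}

// Roadmap order: quadrant first, combined score as the tiebreaker.
function roadmap(opportunities: Opportunity[]): Opportunity[] {
  const rank = { "build-now": 0, "plan-for": 1, "quick-win": 2, park: 3 } as const;
  return [...opportunities].sort(
    (a, b) =>
      rank[quadrant(a)] - rank[quadrant(b)] ||
      b.userImpact + b.implementationFeasibility -
        (a.userImpact + a.implementationFeasibility)
  );
}

// Example backlog with invented scores.
const backlog: Opportunity[] = [
  { name: "Inventory reorder alerts", userImpact: 5, implementationFeasibility: 4 },
  { name: "Dynamic pricing suggestions", userImpact: 4, implementationFeasibility: 2 },
  { name: "Automated return categorisation", userImpact: 2, implementationFeasibility: 5 },
];
console.log(roadmap(backlog).map((o) => `${o.name} -> ${quadrant(o)}`));
```

The exact thresholds matter less than the discipline: nothing enters the roadmap without an explicit impact and feasibility score that stakeholders have agreed on.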
Designing Humanist AI Experiences
The AI features that fail aren't the ones with bad models. They're the ones that make users feel like they're being managed by the product, not helped by it.
I've worked on AI features across two enterprise products — an eCommerce ERP and a financial services platform. In both, the same pattern held: the technical capability was never the hard part. The hard part was designing for trust, transparency, and control in moments where the stakes were real and the user had no tolerance for opaque outputs.
01
Explain the Signal, Not Just the Recommendation
Every AI output I design surfaces its reasoning — not in a tooltip buried behind an icon, but inline, at the point of decision. "Low stock risk based on 14-day sales velocity and supplier lead time" is more actionable than "Reorder recommended." Users who understand the signal can override it intelligently. Users who don't will either ignore it or over-trust it. A sketch of the payload shape that makes this possible follows this list.
02
Design for Confident Disagreement
The most important interaction in any AI feature is the dismiss. If dismissing a recommendation feels like overriding the system, users will avoid it — and start deferring to outputs they don't believe. I design the 'Not for me' action to be as visually accessible as 'Accept.' The model should learn from both (see the second sketch after this list).
03
Surface AI at the Decision Moment, Not Before
AI nudges that appear when a user isn't facing a relevant decision are noise. I map the workflow first — where does the user need to act? — and place the AI output there, contextually, at the exact moment it is relevant. Not in a separate dashboard. Not in a weekly digest. At the row, the screen, the field where the decision happens.
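To ground item 01, here is a minimal sketch of a recommendation payload that carries its own reasoning. The field names and the explain helper are assumptions for illustration; the point is that the signals and confidence travel with the output, so the reasoning can be rendered inline at the point of decision rather than hidden behind a tooltip.

```typescript
// Hypothetical payload for a recommendation that explains itself.
interface Signal {
  label: string;   // e.g. "14-day sales velocity"
  value: string;   // e.g. "3.2 units/day"
}

interface Recommendation {
  id: string;
  headline: string;         // e.g. "Low stock risk"
  suggestedAction: string;  // e.g. "Reorder 120 units"
  signals: Signal[];        // what drove the output
  confidence: "low" | "medium" | "high";
}

// Inline explanation rendered next to the recommendation, e.g.
// "Low stock risk based on 14-day sales velocity and supplier lead time".
function explain(rec: Recommendation): string {
  const drivers = rec.signals.map((s) => s.label);
  const joined =
    drivers.length > 1
      ? `${drivers.slice(0, -1).join(", ")} and ${drivers[drivers.length - 1]}`
      : drivers[0] ?? "available signals";
  return `${rec.headline} based on ${joined}`;
}
```

A user who can read the drivers can also override them intelligently, which is exactly the behaviour the design is trying to protect.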
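For item 02, and the feedback loop in step 06 of the strategy process above, a companion sketch: dismissal captured with the same weight as acceptance. Again, the names are assumptions; what matters is that 'Accept' and 'Not for me' flow through the same path and both reach whatever learns from user behaviour.

```typescript
// Hypothetical feedback event: both dispositions are first-class signals.
type Disposition = "accepted" | "dismissed";

interface FeedbackEvent {
  recommendationId: string;
  disposition: Disposition;
  reason?: string;      // optional structured reason, e.g. "supplier on hold"
  occurredAt: string;   // ISO timestamp
}

// 'Accept' and 'Not for me' call the same function; neither is a dead end.
function recordDisposition(
  recommendationId: string,
  disposition: Disposition,
  reason?: string
): FeedbackEvent {
  const event: FeedbackEvent = {
    recommendationId,
    disposition,
    reason,
    occurredAt: new Date().toISOString(),
  };
  // In a real system this would be queued for the model and analytics pipeline;
  // here it is returned so the sketch stays self-contained.
  return event;
}

recordDisposition("rec-123", "dismissed", "supplier on hold");
recordDisposition("rec-456", "accepted");
```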
Applied across
Inventory reorder alerts · Dynamic pricing suggestions · Demand forecasting · Customer churn risk scoring · AI catalogue search · Order anomaly detection · Automated return categorisation
In every case, the question I asked before any visual design began was not 'how do we show this?' It was 'what does the user need to believe about this output in order to act on it?' The answer to that question determined the design.
Mentorship
I give feedback on the thinking, not just the output. 'Why did you place the CTA here?' is a better question than 'move the CTA here.' The first builds a designer's judgment. The second builds dependency.
I share my own process explicitly — including the decisions that didn't work. Design confidence comes from understanding that every experienced designer gets it wrong regularly, and from having a vocabulary for how to iterate out of it.