
Role
Product Designer (owned the manager experience)
Duration
12 weeks
Industry
Financial Services (Fraud & Risk)
Team
2 Product Designers
(Scoped across Manager and Agent workflows)
OVERVIEW
Designing a manager experience to improve visibility, prioritization, and operational strategy
Phoenix is a conceptual credit card fraud detection and risk monitoring platform for managers overseeing fraud investigators. Investigators, referred to internally as agents, review AI-flagged transactions and determine case outcomes.
This project designs a manager experience that centralizes operational oversight while also reinforcing sustained performance across teams.
PROBLEM
Addressing structural and behavioral gaps
Janine, a fraud team supervisor, must balance detection accuracy, operational efficiency, and executive reporting across multiple queues and investigators.
Current fraud management workflows present two core challenges, one structural and one behavioral.
STRUCTURAL GAP
Fragmented visibility
Operational metrics were spread across multiple reports and system views, requiring manual interpretation to understand overall performance
No unified risk view
There was no consolidated dashboard connecting investigator workload, queue health, and fraud model effectiveness in one place
Manual workload balancing
Reassigning investigators across queues required synthesizing multiple metrics without clear prioritization cues
Limited strategic reporting
Performance insights lacked visual clarity for communicating trends and risk exposure to executive stakeholders
BEHAVIORAL GAP
Limited performance reinforcement
Metrics show performance but do not consistently influence motivation, improvement, or long-term engagement.
As a result, oversight becomes reactive. Janine spends time navigating disconnected dashboards instead of adjusting fraud strategy early.
GOALS
Strategic goal
The objective is to structure fragmented operational data into a cohesive system that supports informed, forward-looking decisions rather than reactive oversight.
FUNCTIONAL REQUIREMENTS
The manager dashboard needs to translate strategic oversight into actionable system behavior.
Centralized visibility
Provide a unified workspace that connects investigator performance, queue health, and fraud model effectiveness within a single operational view
Workload prioritization
Enable managers to rebalance investigator workloads using clear prioritization cues rather than manually interpreting multiple reports
Strategic reporting
Support structured visual summaries that communicate fraud trends and operational performance clearly to executive stakeholders
Real-time risk awareness
Surface emerging risk patterns and operational bottlenecks early to support faster and more confident decisions
Conceptual Model
Defining actors and system relationships
Before designing screens, I mapped the core objects within Phoenix and the actions that connect them. The goal was to structure operational complexity into a coherent system model that reflects how managers reason about fraud operations.
Object–Action Matrix

The object–action matrix clarified what actions could be performed across cases, investigators, queues, and fraud models. This defined the scope of managerial oversight.
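To make the matrix concrete, here is a minimal TypeScript sketch of how the object–action relationships could be expressed. The object and action names mirror the matrix, but the type structure and the specific mappings are illustrative assumptions, not the Phoenix implementation.

```typescript
// Illustrative sketch only: object and action names mirror the
// object–action matrix; the specific mappings are assumptions.

type PhoenixObject = "case" | "investigator" | "queue" | "fraudModel";

type ManagerAction =
  | "view"      // inspect current state and metrics
  | "reassign"  // move cases or investigators between queues
  | "escalate"  // push a case to a higher review tier
  | "report";   // include in an executive summary

// The matrix marks which actions apply to which objects,
// defining the scope of managerial oversight.
const objectActionMatrix: Record<PhoenixObject, ManagerAction[]> = {
  case:         ["view", "reassign", "escalate", "report"],
  investigator: ["view", "reassign", "report"],
  queue:        ["view", "report"],
  fraudModel:   ["view", "report"],
};
```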
Attributes & Metrics

An attributes table documented transactional and analytical properties such as resolution velocity, backlog size, escalation rate, and false positive trends. These metrics later informed chart selection and dashboard hierarchy.
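For illustration, a possible shape for these attributes in code, assuming hypothetical field names and simple trailing-week formulas; the real attribute table and metric definitions live in the specification, not in this sketch.

```typescript
// Hypothetical shapes for the attributes table; field names and
// formulas are assumptions used to make the metrics concrete.

interface QueueSnapshot {
  openCases: number;            // current backlog size
  resolvedLast7d: number;       // cases closed in the trailing week
  escalatedLast7d: number;      // cases pushed to a higher review tier
  falsePositivesLast7d: number; // AI-flagged cases cleared as legitimate
}

interface QueueMetrics {
  backlogSize: number;
  resolutionVelocity: number; // resolved cases per day
  escalationRate: number;     // share of resolved cases escalated
  falsePositiveRate: number;  // share of resolved cases that were false flags
}

function deriveQueueMetrics(s: QueueSnapshot): QueueMetrics {
  const resolved = Math.max(s.resolvedLast7d, 1); // avoid divide-by-zero
  return {
    backlogSize: s.openCases,
    resolutionVelocity: s.resolvedLast7d / 7,
    escalationRate: s.escalatedLast7d / resolved,
    falsePositiveRate: s.falsePositivesLast7d / resolved,
  };
}
```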
Prioritization Framework

The prioritization matrix mapped actions by frequency and impact. High-frequency workflows shaped the primary dashboard structure, while lower-frequency tasks were treated as contextual actions.
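A small sketch of how the frequency-and-impact mapping could drive that split; the 1–3 scale, the threshold, and the example scores are assumptions for illustration only.

```typescript
// Illustrative scoring of actions by frequency and impact; the scale
// and threshold are assumptions, not the actual prioritization matrix.

interface ActionPriority {
  action: string;
  frequency: 1 | 2 | 3; // 3 = performed many times per day
  impact: 1 | 2 | 3;    // 3 = directly affects risk exposure
}

// High-frequency, high-impact actions anchor the primary dashboard;
// the rest surface as contextual actions.
function isPrimaryWorkflow(p: ActionPriority): boolean {
  return p.frequency * p.impact >= 6;
}

const examples: ActionPriority[] = [
  { action: "monitor queue health", frequency: 3, impact: 3 },
  { action: "reassign cases", frequency: 2, impact: 3 },
  { action: "export executive report", frequency: 1, impact: 2 },
];

examples.forEach((p) =>
  console.log(p.action, isPrimaryWorkflow(p) ? "primary" : "contextual"),
);
```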
JOBS TO BE DONE
Defining the manager’s core responsibilities
Janine’s responsibilities fall into two connected layers: operational oversight and performance reinforcement.

OPERATIONAL OVERSIGHT
Monitor queue health
Track backlog, resolution velocity, and escalations to maintain balance across queues
Evaluate investigator performance
Assess productivity, accuracy, and workload distribution to identify gaps
Review model effectiveness
Monitor false positive trends and escalation rates to ensure detection quality
Reallocate workload
Transfer investigators or cases to reduce bottlenecks and improve throughput
Share structured reports
Communicate operational and risk performance to executive stakeholders
PERFORMANCE REINFORCEMENT
Define performance challenges
Create structured milestones aligned with key operational metrics
Assign targeted training
Address performance gaps through competency-based modules
Monitor performance impact
Track whether reinforcement mechanisms improve productivity, accuracy, and backlog reduction
DESIGN SPECIFICATIONS
Establishing system standards before interface design
Before moving into screen design, I developed supporting specifications to ensure structural and visual consistency.

Data visualization specification
Defined comparison logic, categorization standards, normalization rules, micro-interactions, and chart selection criteria
Information architecture specification
Outlined browse, search, and taxonomy logic to support scalable navigation and drill-down behavior
Game Definition Document
Defined goals, mechanics, motivators, rules, rewards, penalties, and engagement loops.
These specifications grounded both operational dashboards and behavioral mechanics in consistent system logic.
DESIGNS
Translating system logic into a unified dashboard
The manager dashboard is structured from overview to detail, allowing drill-down exploration while reducing cognitive load and supporting informed decision-making. All screens were built using the IBM Carbon Design System to maintain consistent enterprise interaction patterns and scalable component structure.

Overview
High-level visibility into queue health, investigator metrics, and model trends

Queue Performance
Backlog monitoring, resolution velocity, and queue-level analysis

Investigator Performance
Comparison of productivity, accuracy, and case distribution

Transferring Investigators
Previews operational impact before transferring investigators across queues

Case Reassignment
Simulates projected changes when redistributing cases between agents, as sketched after the screen walkthrough below

Game Behavioral Controls
Manager interface for creating marathons, defining challenges, assigning rewards, and monitoring game ROI

Training Management
Assign training modules based on performance signals and track competency progression
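As referenced in the Case Reassignment screen above, here is a simplified sketch of the kind of projection that preview could run, assuming hypothetical agent load data and a naive backlog-over-velocity estimate rather than the production logic.

```typescript
// Simplified projection sketch: estimates how each agent's backlog and
// clearance time shift when cases are moved. Field names and the
// formula are assumptions used to illustrate the preview behavior.

interface AgentLoad {
  id: string;
  openCases: number;
  casesPerDay: number; // historical resolution velocity
}

interface TransferPreview {
  from: AgentLoad;
  to: AgentLoad;
  daysToClearFrom: number;
  daysToClearTo: number;
}

function previewCaseTransfer(
  from: AgentLoad,
  to: AgentLoad,
  casesMoved: number,
): TransferPreview {
  const fromAfter = { ...from, openCases: from.openCases - casesMoved };
  const toAfter = { ...to, openCases: to.openCases + casesMoved };
  return {
    from: fromAfter,
    to: toAfter,
    daysToClearFrom: fromAfter.openCases / Math.max(fromAfter.casesPerDay, 0.1),
    daysToClearTo: toAfter.openCases / Math.max(toAfter.casesPerDay, 0.1),
  };
}
```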
REFLECTIONS
What I learned
Think in systems, not just dashboards
Clear system modeling reduces arbitrary UI decisions and makes prioritization defensible.
Strategic oversight and drill-down workflows require different structures; hierarchy shapes decision clarity.
Design data with intent
Visualization choices must support decisions, not decoration. Comparison and normalization rules matter.
Define standards early
IA and data specifications prevent inconsistency and reduce rework as systems scale.