Strategic UX Research + Design Case Study

Secure Text: making privacy understandable, visible, and testable

A concept-led messaging experience that reframed “secure chat” as a systems problem: not just encryption, but state clarity, trust, recovery, and user comprehension across ambiguous moments.

Mixed methods research
Concept strategy
Mental model design
Prototype evaluation
Trust + safety UX
Role
Strategic UX researcher-designer
  • Framed the problem before jumping to UI.
  • Led synthesis from research → concept → prototype.
  • Defined the mental model, core flows, and evaluation criteria.
Problem
Users wanted privacy, but security states were invisible, confusing, and easy to misuse.
  • Users could not tell when they were protected.
  • Entering or exiting secure states felt risky.
  • Recovery paths were unclear, so trust broke quickly.
Why it fits strategic UXR
  • Ambiguity: high. Started with a fuzzy “secure messaging” problem, not a defined feature brief.
  • Scope: system-level. Looked at trust, messaging behavior, mental models, and state transitions together.
  • Methods: mixed. Used qual + quant signals, competitive review, and iterative testing to guide decisions.
  • Output: decision-ready. Produced a concept strong enough to evaluate, communicate, and refine with evidence.
This project is strongest as proof that I can map uncertainty, create a clear research-backed direction, and move from concept ambiguity to testable product decisions.

Context

The actual problem was not “how do we add a secure mode to chat?” It was: how do we make privacy legible enough that users trust it, use it correctly, and recover when they make mistakes?

That shift matters. Many products technically protect users, but fail at the experience layer because the system state is invisible, the language is vague, and the transitions feel fragile. I approached Secure Text as a product strategy problem disguised as a UI problem.

Early alignment. Before designing screens, I clarified three things: what “secure” should mean in the interface, what the user must always understand, and which moments had to be impossible to misread (enter, verify, exit, and recover).
Whiteboard session: captured decisions, constraints, and system assumptions before committing to flows.
User need: confidence and control without needing to decode security jargon or hidden settings.
Design goal: turn “secure mode” from an abstract feature into a visible mental model users could understand at a glance.

Why this matters strategically

This case study aligns with strategic UX research work because it sits in the zone where ambiguity is high, internal alignment is incomplete, and the value of research is not just validation, but direction setting.

  • Pre-alignment work: the question was still open. I was not handed a polished requirement; the research had to define what the right product behavior should be.
  • Systems thinking: state + behavior + trust. The work considered user intent, interface states, copy, and failure recovery as one connected system.
  • Research strategy: insight → decision. Each research artifact was tied to a design or product decision, not created as a stand-alone deliverable.
  • Prototype use: a learning vehicle. The prototype was not decoration; it was the fastest way to test comprehension, confidence, and misuse risk.
Strategic research question
  • What makes users believe a message is actually private?
  • What moments break trust fastest?
  • Which state changes must be unmistakable?
What this demonstrates
  • Comfort operating with incomplete information.
  • Ability to frame a complex problem before it is fully defined.
  • Ability to produce a concept, not just findings.

Process

This was an iterative loop: frame the uncertainty, identify the highest-risk assumptions, prototype the concept, and use research to refine the model rather than just polish the interface.

Design process
The process was intentionally cyclical: hypothesis → prototype → test → refine.
1) Frame the uncertainty: clarified which parts of the experience were conceptual, which were behavioral, and which were communication problems.
2) Build a decision model: created the first structure for how “Shut Mode” should be entered, read, exited, and recovered from.
3) Test understanding: used artifacts and prototypes to see whether users understood what was protected and when.
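The entered/read/exited/recovered structure in step 2 can be sketched as a small state machine. This is a minimal illustration of the decision model, not the product's actual implementation; state and event names are assumptions invented for this sketch.

```python
from enum import Enum, auto

class ChatState(Enum):
    STANDARD = auto()    # regular messaging
    SECURE = auto()      # "Shut Mode" active
    RECOVERING = auto()  # user is in a guided recovery path

# Allowed transitions; anything outside this table is a potential misstep.
# Event names are hypothetical placeholders.
TRANSITIONS = {
    (ChatState.STANDARD, "enter_secure"): ChatState.SECURE,
    (ChatState.SECURE, "confirm_exit"): ChatState.STANDARD,
    (ChatState.SECURE, "error"): ChatState.RECOVERING,
    (ChatState.RECOVERING, "recover"): ChatState.SECURE,
}

def transition(state: ChatState, event: str) -> ChatState:
    """Return the next state; disallowed events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```

Making every transition an explicit, enumerable edge is what allows "which state changes must be unmistakable?" to be tested one edge at a time.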

Research that drove the direction

This section matters because the work was not “I researched, then I designed.” It was: I used research to shape the product model. Each artifact answered a different uncertainty.

  • Research charts: quantitative signal; mapped willingness, concerns, and what users actually valued in privacy workflows.
  • Quotes: qualitative signal; user language helped shape naming, explanations, and how the state needed to be surfaced.
  • Competitive analysis: many products had secure functionality, but weak state communication and poor trust affordances.
  • Feature model: privacy, auth, messaging, and recovery were treated as one experience system rather than separate features.
  • Prioritization matrix: focused first on trust builders and comprehension drivers rather than overloading the concept with features.
Key research insight
  • Users do not trust “security” by default.
  • They trust what they can verify in the moment.
  • That made state visibility the center of the experience.
Decision that came out of it
  • “Shut Mode” needed a clear, persistent presence.
  • Entering and exiting needed explicit confirmation.
  • Recovery needed to feel safe, not punitive.

The solution

The design direction was to make secure messaging feel less like an invisible technical mode and more like a clearly understood state with visible rules, explicit transitions, and reassuring recovery.

  • Design move: persistent state visibility, so users always know whether they are currently protected.
  • Design move: recovery-first design, so errors do not feel catastrophic or irreversible.
  • State clarity: users should never have to infer secure mode from memory or hidden settings.
  • Trust cues: language and UI worked together to explain not just “you are secure,” but what that actually means.
  • Guardrails: critical transitions were made explicit to reduce false assumptions and accidental exits.

Ideation (Figma)

The low-fidelity phase was used to work through structure, sequencing, and mental model choices before visual polish. This is where the concept became tangible enough to challenge.

Low-fidelity concept exploration
Used to test whether the basic model made sense: secure entry, state visibility, and exit logic.
Open in Figma →

Wireframes

The wireframe phase pushed the concept from abstract structure into concrete states, pathways, and recovery conditions. This is where the design became specific enough to evaluate for misuse and comprehension.

Wireframes montage
Wireframe coverage across secure states, settings, and recovery paths.

Prototype evolution

Showing both versions matters. The first prototype is proof of exploration. The second is proof of learning. That progression is more useful to a hiring manager than pretending the first answer was right.

Final redesign (official prototype)

This version improved clarity, trust, and perceived safety by making secure state cues more legible and reducing ambiguity in key transitions.
Open prototype →
High-fidelity Version 1 (tested poorly)
Included intentionally. This version exposed where the mental model was still weak and where trust cues were not landing.
Open prototype →

Results

The redesign improved performance because it addressed the real issue: not lack of functionality, but lack of clarity around security state. Once the model became understandable, the experience became more usable.

  • Completion improved (task success: 58% → 92%). Users could consistently enter secure mode, send a protected message, and exit or recover without getting lost.
  • Faster, lower-friction behavior (median time: 1:42 → 0:56). Persistent state cues reduced second-guessing, settings detours, and repeated checking behavior.
  • Misuse decreased (missteps: 2.0 → 0.7). Clearer transitions reduced wrong taps, accidental exits, and confusion around what was currently protected.
  • Mental model improved (comprehension: 46% → 90%). Users could accurately answer “Am I secure right now?” without external explanation.
What changed in the product
  • Security state became visible, not implied.
  • Entry and exit were made explicit and legible.
  • Microcopy clarified what was protected and when.
Why this is meaningful
  • The concept moved from technical promise to user trust.
  • The prototype served as evidence, not presentation polish.
  • The work shows how research can change product direction, not just validate UI.

Next steps

If this moved forward as a product direction, the next phase would focus on scaling confidence in the concept: broader validation, measurement strategy, and stronger system guidance.

1) Broaden the sample: validate with more participants across privacy familiarity levels and different mental models of secure messaging.
2) Instrument the concept: track entry, exit, recovery, detours, and failure points so trust and comprehension can be measured in product.
3) Strengthen guardrails: add clearer warnings, safer defaults, and just-in-time explanations for ambiguous or risky actions.
4) Improve onboarding: teach the mental model in under 30 seconds so users do not rely on assumptions the first time they use it.
5) Expand accessibility review: validate screen reader behavior, contrast, interrupted sessions, and low-connectivity moments.
6) Create a decision framework: package research findings into product recommendations, tradeoffs, and quality bars for implementation planning.
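The instrumentation step above could start as simple structured events emitted at each trust-critical moment. This is a sketch under assumptions: the event names, fields, and print-based emitter are hypothetical stand-ins, not a real analytics schema.

```python
import json
import time

def log_event(name: str, **props) -> dict:
    """Build and emit one structured analytics event.

    Event names and properties here are hypothetical placeholders,
    not a real product schema.
    """
    event = {"event": name, "ts": round(time.time(), 3), **props}
    print(json.dumps(event))  # stand-in for a real analytics pipeline
    return event

# Example events covering entry, exit, and recovery moments.
log_event("secure_mode_entered", source="chat_header")
log_event("secure_mode_exit_confirmed", detours=0)
log_event("secure_mode_recovered", from_error="accidental_exit")
```

Naming events after the mental-model moments (enter, exit, recover) rather than raw taps is what would let comprehension and trust be measured in product, per the step above.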