Wound Wizard AI App

Industry: Health Tech / AI Product Design

Executive Summary

Developed as part of the Hustle Badger: Build with AI course, Wound Wizard is an AI-assisted concept designed to support non-specialist carers with wound assessment. The project focused on identifying responsible AI use cases, designing human-in-the-loop experiences, and framing AI as a supportive tool rather than a clinical authority in a high-risk domain.

My Role: Product Designer (AI Use Case Definition, Prompt Engineering, Experience Design, Risk Mapping).


Context and Challenge

Wound assessment is frequently performed by non-specialists or carers who lack formal medical training. Current guidance is often overly clinical, fragmented, and difficult to interpret, leading to anxiety and delayed action.

  • The Design Challenge: How do we use AI to provide clarity without users over-relying on it as a diagnostic tool?

  • The High-Risk Barrier: In healthcare, AI must be framed with extreme care to maintain safety, user trust, and clinical boundaries.


My Approach: Responsible AI Discovery

I followed a risk-aware discovery process to ensure the AI added value without crossing into “medical advice.”

Activities included:

  • Problem Definition: Setting clear boundaries to avoid diagnostic claims, focusing instead on “confidence and understanding.”

  • Risk & Over-reliance Mapping: Explicitly mapping where a user might “blindly trust” the AI and designing friction points to prevent it.

  • Prompt Engineering: Designing and iterating on structured prompts that gather symptoms and context while ensuring the output remains cautious and explainable (a sketch of this approach follows the list).

  • Human-in-the-Loop Design: Ensuring every AI output was paired with an escalation path to a human professional.
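
The project’s actual prompts aren’t reproduced here, but a minimal Python sketch gives a flavour of the approach: cautious output rules and an escalation path encoded directly in the system prompt, plus a fixed intake flow. The system rules, intake questions, and “build_messages” helper are illustrative assumptions, not the project’s real artifacts.

```python
# Illustrative sketch only: the project's actual prompts are not published.
# Shows one way to encode cautious, explainable output rules and a mandatory
# escalation path directly in the system prompt.

SYSTEM_PROMPT = """You are a supportive wound-care assistant for non-specialist carers.
You are NOT a clinician and you must never diagnose.

Rules:
- Gather answers to the intake questions before commenting on the wound.
- Use hedged language ("this might suggest", "consider checking").
- State explicitly what you cannot tell from the information given.
- Always end by explaining when and how to contact a healthcare professional."""

# Structured dialogue: fixed questions so every session gathers consistent data.
INTAKE_QUESTIONS = [
    "How long has the wound been present?",
    "Is there redness, swelling, or warmth around it?",
    "Is there any discharge, and what colour is it?",
    "Does the person have a fever or feel generally unwell?",
]


def build_messages(answers: dict[str, str]) -> list[dict[str, str]]:
    """Pair each intake question with the carer's answer in a chat payload."""
    summary = "\n".join(
        f"Q: {q}\nA: {answers.get(q, '(not answered)')}" for q in INTAKE_QUESTIONS
    )
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Here are my observations:\n{summary}"},
    ]


if __name__ == "__main__":
    demo = {INTAKE_QUESTIONS[0]: "Three days", INTAKE_QUESTIONS[1]: "Some redness"}
    for message in build_messages(demo):
        print(f"[{message['role']}]\n{message['content']}\n")
```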


The Solution: AI as Decision Support

Wound Wizard is a guided experience that uses AI to surface patterns and considerations in plain language, rather than providing a “final answer.”

  • Structured Dialogue: Guides users through specific questions to gather consistent data.

  • Pattern Recognition: Uses AI to highlight potential risk signals (e.g., “This pattern often indicates inflammation”) without making a definitive diagnosis.

  • Uncertainty Framing: The UI was designed to highlight what the AI doesn’t know, encouraging users to seek professional help when signals are unclear (see the schema sketch after this list).

  • Calm & Reassuring Tone: Following the “CosimaCreates” philosophy of replacing anxiety with confidence through clear, human language.
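
One way to make the uncertainty framing concrete is in the data model itself: the response schema has no “diagnosis” field, only hedged signals, explicit unknowns, and an escalation note. The field names below are hypothetical, chosen for illustration.

```python
# Hypothetical response schema: no "diagnosis" field exists, only hedged
# signals, explicit unknowns, and an escalation recommendation.
from dataclasses import dataclass, field


@dataclass
class RiskSignal:
    observation: str   # what the carer reported
    hedged_note: str   # e.g. "This pattern often indicates inflammation"


@dataclass
class WoundAssessment:
    signals: list[RiskSignal] = field(default_factory=list)
    unknowns: list[str] = field(default_factory=list)  # surfaced prominently in the UI
    escalation: str = "If anything here is unclear, contact a clinician."

    def render(self) -> str:
        """Plain-language summary that leads with hedges, not conclusions."""
        lines = ["Things worth noting (not a diagnosis):"]
        lines += [f"- {s.observation}: {s.hedged_note}" for s in self.signals]
        lines.append("What this tool cannot tell you:")
        lines += [f"- {u}" for u in self.unknowns]
        lines.append(self.escalation)
        return "\n".join(lines)


assessment = WoundAssessment(
    signals=[RiskSignal("Redness and warmth", "This pattern often indicates inflammation")],
    unknowns=["Whether the wound is infected", "How it is healing below the surface"],
)
print(assessment.render())
```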


The Iteration: Reframing Authority

During the validation phase, I iterated on the “voice” of the AI. Initial prompts felt too “authoritative,” which increased the risk of user over-reliance.

  • The Feedback: Users were taking AI suggestions as absolute facts rather than “considerations.”

  • The UX Improvement: I adjusted the prompt structures and UI components to use “hedging” language (e.g., “This might suggest…” or “Consider checking…”) and added persistent disclaimers; a tone-check sketch follows this list.

  • The Result: Testing showed that users were more likely to use the tool as a secondary check while still planning to consult a clinician.
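
The shift from an authoritative to a hedged voice can be reinforced in the output layer as well as in the prompt. Below is a small, hypothetical tone check that flags authoritative phrasing before it reaches the UI; the phrase lists are illustrative guesses, not the project’s production rules.

```python
# Illustrative guardrail: flag authoritative phrasing in AI output so hedged
# alternatives can be requested or substituted before display.
import re

AUTHORITATIVE_PATTERNS = [
    r"\bis (definitely|certainly)\b",
    r"\byou (have|need)\b",
    r"\bthis is\b",
    r"\bdiagnos(is|ed|e)\b",
]

HEDGED_OPENERS = ("this might suggest", "consider checking", "it may be worth")


def audit_tone(text: str) -> list[str]:
    """Return a list of tone problems; an empty list means the text passes."""
    problems = [p for p in AUTHORITATIVE_PATTERNS if re.search(p, text, re.IGNORECASE)]
    if not any(text.lower().startswith(opener) for opener in HEDGED_OPENERS):
        problems.append("missing hedged opener")
    return problems


print(audit_tone("This is an infection and you need antibiotics."))
# -> flags authoritative phrasing and a missing hedged opener
print(audit_tone("This might suggest inflammation; consider checking with a nurse."))
# -> []
```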


The Outcome

The project resulted in a robust, testable product concept that demonstrates how to navigate the ethics of AI in sensitive environments.

  • Strategic Impact: Demonstrated the ability to identify where AI shouldn’t be used, not just where it should.

  • Design Impact: A strong example of responsible AI design, prompt engineering, and the practical application of “human-in-the-loop” principles.

  • Technical Application: Demonstrated a structured approach to prompt design and the framing of AI-generated insights in a high-stakes UX.
