Transparency in Tech
Making AI in short-form video legible — labels, explainability, and consent in the places people actually use.
Discover
Desk research on scale and regulation, plus four lightweight interviews on how people notice AI in the feed.
Primary research
Four lightweight interviews (30 min each)
Short-form video users, ages 19–34, US. An exploratory study—not affiliated with any platform—on how people notice (or don't) AI in the feed.
- P1
“I think of For You as ‘the app,’ not really as AI—I wouldn’t call it that out loud.”
Naming gap — feed ranking felt invisible, not “AI.”
- P2
“If a video is AI-made, I want it on the actual video, not five menus deep.”
Surface-level labeling beat long policy pages.
- P3
“I’m fuzzy on when AI is helping a creator versus making the whole thing.”
Assisted vs generated needed plain language, not jargon.
- P4
“If ‘why am I seeing this?’ was one tap from the clip, I’d use it.”
Explainability had to stay in-flow, not only in settings.
Desk research
TikTok has nearly 2 billion active users worldwide.
Public confidence that companies will protect personal data fell to 47% in 2024.
The EU's Digital Services Act pushes platforms to publish transparency reports—but that accountability mostly lives in PDFs and dashboards, not next to the scroll where users choose what to watch.
Hidden processes
Assisted vs generated — different expectations, different labeling.
AI-assisted content
Leverages AI tools for drafting, structuring, editing, or brainstorming, but requires significant human expertise and editing to ensure uniqueness and quality.
AI-generated content
Produced solely by AI from existing information, with minimal human input—think push-button prompts or automated SEO-style production; often weaker on originality and insight.
Reference: AI-generated vs AI-assisted (Clearscope)
Selecting AI UX patterns
Four dimensions from GitLab’s framework — useful for choosing the right intervention on a surface (P3 Transparent Tech).
Mode
The emphasis of the AI–human interaction (focused, supportive, or integrated).
Approach
What the AI is improving (automate or augment tasks).
Interactivity
How the AI engages with users (proactive or reactive).
Task
What the AI system can help the user with (classification, generation, or prediction).
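The four dimensions above can be sketched as a small type and used as a scoping checklist. This is an illustrative sketch, not GitLab's code; the `AiPattern` name and the `ribbon` scoring are assumptions based on how the case study describes the "Why this video?" ribbon.

```typescript
// Sketch of GitLab's four dimensions as a type; the union members
// mirror the values named in the list above. Names are illustrative.
interface AiPattern {
  mode: "focused" | "supportive" | "integrated";
  approach: "automate" | "augment";
  interactivity: "proactive" | "reactive";
  task: "classification" | "generation" | "prediction";
}

// Hypothetical scoring: the in-feed ribbon is a reactive, supportive
// augmentation, which is why it fits a short sprint better than an
// integrated "how the algorithm works" hub.
const ribbon: AiPattern = {
  mode: "supportive",
  approach: "augment",
  interactivity: "reactive",
  task: "classification",
};
```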
Decision story
From interview themes and GitLab’s dimensions to three surfaces—what shipped, what waited.
Primary research kept returning to the same tensions: the feed didn't feel like “AI” to people (P1), labels had to sit on the content (P2), assisted vs generated was fuzzy (P3), and explainability had to be one tap away (P4). GitLab's dimensions let me cut scope: prioritize reactive, in-flow interventions (ribbon, star) over a big integrated “how the algorithm works” hub.
Shipped in 3 weeks
- FYP star — consistent mark for AI-assisted ranking/surfaces.
- “Why this video?” ribbon — bridge to settings + topics.
- Media + caption labels — generated content + consent paths.
Deferred (tradeoff)
- Full in-app algorithm explainer / whitepaper flow.
- Download/offline and creator-studio transparency.
- Deep personalization dashboards beyond topic sliders.
High value for trust, but wrong shape for a short sprint—interviews pointed to surface signals first.
UX Practices
GitLab — AI and human interaction
- Clearly mark feature maturity when an AI feature is an experiment or beta.
- Add an explicit AI disclaimer so users know the feature is AI-powered.
- Use plain language to explain what the system is doing and why.
- Explain training context, including what data informs recommendations.
- State data use clearly so consent is informed, not implied.
Full pattern library: GitLab — AI–human interaction
UI Practices — Labels
IBM Carbon — when to use AI labels
- Mark AI-generated content. The label should be accessible, consistent, and always visible where AI output appears.
- Use one repeatable visual reference. Recognition improves when the same AI marker appears across surfaces.
- Make labels pathways to explainability. The marker should open clear context, not function as decoration only.
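Carbon's three rules can be captured in one small data model: a single repeatable mark, two vocabularies (assisted vs generated), and a label that always links out to explainability. A minimal sketch, assuming hypothetical names (`AiSurface`, `aiLabel`, the route strings); none of this is Carbon's or TikTok's actual API.

```typescript
// One visual system, two vocabularies. Surface names are assumptions.
type AiSurface = "ranking" | "caption" | "audio" | "media";

interface AiLabel {
  text: string;                    // copy shown on the chip or pill
  kind: "assisted" | "generated";  // the two vocabularies
  explainHref: string;             // label is a pathway, not decoration
}

function aiLabel(surface: AiSurface): AiLabel {
  // Assisted: AI helped rank or surface content a human made.
  // Generated: AI produced the media itself.
  const kind = surface === "ranking" ? "assisted" : "generated";
  return {
    text: kind === "assisted" ? "AI-assisted" : "AI-generated",
    kind,
    explainHref: kind === "assisted" ? "/why-this-video" : "/about-ai-media",
  };
}
```

Keeping the mapping in one function is what makes the mark "one repeatable visual reference": every surface gets its label from the same place.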
Design Solutions
Three system moves: icon, ribbon, and media labels.
FYP Star
One icon for AI-assisted surfaces. Keep wording distinct from generated-media labels.
Why this video?
One-tap explainability ribbon. Route users to details, settings, and controls without leaving flow.
Media Labels
Label AI-generated captions, audio, and content. Pair with consent and data-use disclosures.
Context: TikTok — Introducing auto captions · AI for subtitles (guide)
FYP Star
A single mark for AI-assisted surfaces vs generative media — star + labels + ribbon copy from the case study.
Visual fidelity — do this before final crit
The blocks below are CSS stand-ins so the page reads in the browser. For the portfolio and interviews, replace them with exported Figma frames (PNG or WebP): at minimum For You + star, the ribbon, and one caption/settings row—same layouts, real pixels from your file.
Star label icon
- Simple, clear designated icon.
- Include labeling for AI-assisted systems.
Purple orb + star — same family as FYP & ribbon.
UI mock (CSS)
In-frame copy (hi-fi)
Ribbon: “Why this video? Learn how TikTok recommends content” (chevron)
Music row: pill AI Artist + track title. Chip: AI Generated.
- For You: AI-assisted algorithm
- FYP ribbon: AI-assisted algorithm
- Music label: AI-generated content
- Content label: AI-generated content
Why this video?
AI explainability ribbon — from surface signal to settings + topic sliders.
Easy access to algorithm settings and explainability for why specific content is suggested—grounded in data the product already collects.
- 1
Ribbon appears
When the algorithm is making new recommendations.
- 2
Pop-up
Short explanation + link to how the algorithm works; a prominent control that leads to settings.
- 3
Adjust your For You
Clear options (not interested, report, filter keywords, manage topics) + optional “helpful?” feedback.
- 4
Manage topics
Sliders for categories: creative arts, current affairs, dance, fashion & beauty, food, health & fitness, humor…
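The four steps above can be sketched as a tiny state machine, which makes the "one tap away" constraint explicit: each step is a single transition. State and event names are assumptions for illustration, not a real TikTok API.

```typescript
// Illustrative state machine for the ribbon flow described above.
type RibbonState = "hidden" | "ribbon" | "popup" | "adjust" | "topics";

function next(state: RibbonState, event: string): RibbonState {
  switch (state) {
    case "hidden": // step 1: ribbon appears on a new recommendation
      return event === "newRecommendation" ? "ribbon" : state;
    case "ribbon": // step 2: one tap opens the pop-up
      return event === "tap" ? "popup" : state;
    case "popup":  // step 3: large control leads to "Adjust your For You"
      return event === "openSettings" ? "adjust" : state;
    case "adjust": // step 4: manage topics via sliders
      return event === "manageTopics" ? "topics" : state;
    default:
      return state;
  }
}
```

Modeling it this way also surfaces the design constraint from the interviews: there is no path that dumps the user into settings without the in-flow ribbon and pop-up first.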
Auto-generated captions & content
Purple as the system accent for AI consent, playback, and settings — distinct from translation toggles.
AI-generated captions
Use color to signify AI-related options—especially when opting into data collection for caption generation—and to highlight settings tied to AI-generated captions vs other toggles (e.g. translation).
- Onboarding / consent: turning on captions triggers a disclosure about collecting and using audio to generate captions.
- Playback: subtitle line with an affordance that signals AI-generated captions.
- Settings: “Show captions (auto-generated)” and related controls visually distinct so AI is never confused with generic display options.
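The consent rule in the onboarding bullet can be expressed as a guard: captions cannot turn on until audio-use consent is explicit. A minimal sketch with hypothetical field names; the real flow would also record when and how consent was shown.

```typescript
// Hedged sketch: consent-gated caption toggle. Field names are assumptions.
interface CaptionSettings {
  audioConsentGiven: boolean; // explicit opt-in to audio collection
  captionsOn: boolean;        // "Show captions (auto-generated)"
}

function enableCaptions(s: CaptionSettings): CaptionSettings {
  // Consent must be informed, not implied: without the opt-in,
  // the caller is expected to show the disclosure flow first.
  if (!s.audioConsentGiven) {
    return s;
  }
  return { ...s, captionsOn: true };
}
```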
Reflections
What stuck after three weeks — specifics, not slogans.
Assisted vs generated needed one visual system, two vocabularies.
Carbon’s label pattern gave a consistent mark, but interviews showed people conflated “AI helping the algorithm” with “AI made this clip.” The star had to stay tied to assisted surfaces; chips and media pills carried “generated” so the distinction didn’t collapse in the feed.
Purple had to mean “AI consent,” not “translation.”
Caption flows sit next to language toggles. Using the same neutral gray for both would hide opt-in to audio-for-captions. Reserving the purple accent for AI-specific consent and playback kept accessibility paths honest.
Three weeks forced a ruthless surface list.
GitLab’s dimensions (Mode, Approach, Interactivity, Task) were a cut list: ship FYP mark, in-feed ribbon, and media/caption labels first; defer a full algorithm explainer and download flows.
Explainability only counts at one tap.
Interview quotes kept coming back to “don’t send me to settings.” The ribbon’s job wasn’t education for its own sake—it was a bridge to “Adjust your For You” and topic sliders with minimal friction.