
Holding the line

At Facebook, my team faced a challenge when preparing for the U.S. launch of Reels. Leadership was eager to ship, but the quality of the creation experience wasn’t there. Users in our early test markets struggled to create and share high-quality Reels, and the experience was riddled with small usability pain points that added up. The tap targets for the trimmer tool handles were slightly off. The preview window was too small to check the timing of transitions. Clips were difficult to align perfectly.

I had seen this happen before. A team, caught in the momentum of shipping, convinces itself that minor usability issues will be quickly fixed post-launch. But they rarely are. Instead, bad first impressions kill adoption, and by the time fixes roll out (if they ever do), it’s too late.

For Reels sharing and editing tools, the quality issues weren’t the result of bad design choices, ignored research, or poor engineering. Most features and flows worked well in isolation. QA testing revealed no major issues. Facebook employees were active dogfooders, but the videos they created weren’t at the same level of complexity or quality as those made by top content creators on other platforms. They simply didn’t encounter many of the issues experienced by real video creators because they lacked the exposure and expertise.

Plus, many of the key metrics from our test markets were within acceptable ranges. The experience didn’t feel high quality, but there wasn’t enough hard ‘evidence’ to justify delaying the launch.

Instead of relying only on dashboards and research reports, our design and research team tried two things:

  1. We gathered a random sample of hundreds of public Reels created with our editing tools. We held a viewing party with the team, and the reaction was visceral. Most of the videos were low quality—single-shot clips with poor lighting, people staring into the camera unsure what to do. The team cringed at the output but also realized why the behavioral data looked fine. Users were creating videos, but they weren’t videos that would help them reach their audience.

  2. We ran an internal dogfooding challenge, asking employees and leadership to create and share high-quality Reels that mirrored the process used by top short-form video creators. Their frustration was impossible to ignore. Employees encountered the same issues as real users, making it clear that the product needed fixes before launch. Seeing real frustration convinced leadership in a way no report ever could. It completely changed the conversation.

We were fortunate to have top product leaders participate in these two activities. The team was ready to act, but pushing back a launch still required someone at the top to back the decision. Because those leaders had firsthand experience with the issues, they became fierce advocates for improving quality before release.

We didn’t need a long delay. The designers and researchers had already identified a focused set of usability fixes, based on research and dogfooding, that could be implemented quickly. Engineers had experienced the issues themselves and deeply understood how to fix them. As a result, leadership agreed to delay the launch just long enough to address the critical usability gaps, ensuring a stronger rollout.

UX teams are often the last line of defense against bad product decisions. Data alone won’t always change minds. Holding the line means making the problem tangible.

  • Make leadership feel the pain. Instead of just presenting findings, find ways to make decision-makers experience the problem firsthand.

  • Tie insights to business risk. Show how usability failures will impact retention, engagement, or revenue—not just user satisfaction.

  • Present solutions, not just problems. Leadership is more likely to listen if you suggest a clear, specific path forward that minimizes delays.

  • Build allies across disciplines. Engineering, data science, and marketing partners can add weight to your argument.

Bad products don’t ship by accident. They ship when UX teams see a problem but don’t have the right tools to make the organization understand it well enough to weigh the trade-offs.

2025 - Ryan Finch
