Growing a Media Platform from Zero to Product-Market Fit in Four Months
Why sequencing matters more than execution speed—and how the UX fixes, product validation, and investor conversations had to happen in a specific order.
I took on Wattl as a growth project in mid-2020. The platform had built something different: an infinite scroll that let users curate, discover, and share media in a way that didn’t exist elsewhere. The product was interesting. The challenge was getting enough people to care about it.
The brief covered three areas: fix what was broken in the user experience, prove the concept could drive engagement at scale, and build a credible path to investment. None of those could wait for the others to finish. Progress on all three had to happen in parallel.
The Situation
When I arrived, the platform had traction with early users but couldn’t retain them. The sign-up flow had too many steps before anyone could see what made Wattl worth using. Every extra click between “I’m curious” and “I’m in” was losing people who might have stayed.
Navigation was broken across all surfaces. On the app, the path to discovering worlds was not where users expected to find it; someone who had never used the platform had little chance of finding it without being told where to look. On the web, the interface felt slow, as if it were running on a throttled connection, and the button-based navigation was easy to forget between sessions. Android was nearly unusable. The world appeared as a static image with no pinch-to-zoom, no interactivity, nothing that told a first-time user what they were looking at or how to engage with it.
Upload was broken too. Video orientation did not always process correctly on the first attempt, requiring manual adjustment. After several uploads the platform slowed down noticeably. And if something went wrong during a direct URL upload and you refreshed the page, you got stuck in a loop of upload prompts with no clean exit.
These were not cosmetic problems. Each one was a reason for someone to leave and not come back.
Beyond the product, I was responsible for proving the platform could work as a media experience at scale, not just a product with potential. I needed to coordinate investor outreach. And I had to navigate the regulatory and partnership complexities that came with growth.
What I Saw
The first thing I noticed was that the team was treating this as a series of independent tasks. UX fixes were one workstream. Product validation was another. Investor outreach was a third. Each had its own owner, its own timeline, its own set of assumptions. Nobody had mapped how these pieces connected, where a delay in one area would block progress in three others.
Complex programmes with multiple dependencies don’t fail because of bad ideas. They fail because of bad sequencing. Get the order wrong, even by a week, and the downstream consequences cascade. The UX fixes, for instance, were the most constrained element in the entire programme. You couldn’t validate the product on a broken interface. You couldn’t pitch investors without validation. You couldn’t close a round without credible proof that the platform could hold attention. Every step depended on the one before it, and any delay at the top of the chain pushed everything below it.
The second insight was about the test case itself. I needed proof that the infinite scroll could anchor user behavior in a real-world scenario, not just capture early curiosity. March Madness offered that: a known event with a built-in audience, against which I could measure whether the platform could sustain attention across a defined period. If it failed there, the entire hypothesis was wrong. If it held, the numbers would speak.
The third was about the investor conversation. The founding team had strong credentials, but credentials alone don’t move investors. I needed to frame the platform not as a social app but as a new media consumption interface, one where the infinite scroll was the mechanism and curation was the product. That reframe would determine which investors could actually see what Wattl was building.
What I Did
My first action was to map the entire programme as a dependency chain. I identified the critical path: the sequence of events that had to happen in order, where a delay in any link would delay everything downstream. The UX fixes had to come first. Product validation had to come second. Investor outreach had to come third. But within each phase, parallel workstreams could run independently.
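To make the dependency-chain idea concrete, here is a minimal sketch of finding the critical path through a set of workstreams. The task names, durations, and dependencies are illustrative placeholders, not the actual Wattl programme plan.

```typescript
// Task names, durations, and dependencies below are illustrative placeholders.
type Task = { id: string; weeks: number; deps: string[] };

const tasks: Task[] = [
  { id: "ux-signup", weeks: 2, deps: [] },
  { id: "ux-nav", weeks: 3, deps: [] },
  { id: "validation", weeks: 4, deps: ["ux-signup", "ux-nav"] },
  { id: "investor-deck", weeks: 2, deps: ["validation"] },
];

// The critical path is the longest chain through the dependency DAG:
// any slip on it delays everything downstream.
function criticalPath(all: Task[]): { path: string[]; weeks: number } {
  const byId = new Map(all.map(t => [t.id, t] as [string, Task]));
  const memo = new Map<string, { path: string[]; weeks: number }>();

  // Longest chain ending at this task (assumes the graph is acyclic).
  function chainTo(id: string): { path: string[]; weeks: number } {
    const cached = memo.get(id);
    if (cached) return cached;
    const task = byId.get(id)!;
    let best = { path: [] as string[], weeks: 0 };
    for (const dep of task.deps) {
      const r = chainTo(dep);
      if (r.weeks > best.weeks) best = r;
    }
    const result = { path: [...best.path, id], weeks: best.weeks + task.weeks };
    memo.set(id, result);
    return result;
  }

  return all.map(t => chainTo(t.id)).reduce((a, b) => (b.weeks > a.weeks ? b : a));
}

// ["ux-nav", "validation", "investor-deck"], 9 weeks end to end.
console.log(criticalPath(tasks));
```

The point of the exercise isn’t the code; it’s that the longest chain, not the sum of all tasks, sets the programme’s floor, which is why the UX fixes gated everything else.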
I proposed collapsing the sign-up and sign-in into a single flow and adding social media authentication so users could join without starting from zero. The next step was offering to pull media from their existing social accounts to populate a first private board. The point was simple: get people to the good part faster, and they’d discover what made the platform different.
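As a sketch of what a collapsed flow can look like: a single entry point that signs an existing user in and creates an account for a new one, then seeds a first board. All of the names here (continueWithSocial, seedFirstBoard, the in-memory store) are hypothetical, not Wattl’s implementation.

```typescript
// Hypothetical names throughout; this is a sketch, not Wattl's implementation.
type SocialIdentity = { provider: "google" | "twitter"; providerUserId: string; email: string };
type User = { id: string; email: string };

const usersByProviderKey = new Map<string, User>(); // stand-in for a database

// One entry point: sign in if the account exists, create it if not.
async function continueWithSocial(identity: SocialIdentity): Promise<User> {
  const key = `${identity.provider}:${identity.providerUserId}`;
  const existing = usersByProviderKey.get(key);
  if (existing) return existing; // returning user: no extra steps

  // New user: the account is created inside the same flow,
  // so there is no separate multi-step sign-up form.
  const user: User = { id: `${usersByProviderKey.size + 1}`, email: identity.email };
  usersByProviderKey.set(key, user);

  // Seed a first private board from the user's existing social media,
  // so they land on content rather than an empty screen.
  await seedFirstBoard(user, identity);
  return user;
}

async function seedFirstBoard(user: User, identity: SocialIdentity): Promise<void> {
  // Placeholder: a real flow would call the provider's media API here.
  console.log(`Seeding a first board for ${user.email} from ${identity.provider}`);
}
```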
For navigation, I worked with the design and product teams to restructure how worlds were discovered. The path became explicit rather than hidden. On mobile, the interface needed speed and clarity. On Android, we had to rebuild the interaction model entirely to make the world responsive rather than static.
With the UX work underway, I selected NCAA March Madness as the validation test. The logic was straightforward: a known event with a built-in audience would give me a natural baseline for measuring whether the infinite scroll could hold attention in a real-world scenario. I structured the test to track everything: sign-ups, sessions, bounce rate, session duration, and user retention across the event period.
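To make the measurement concrete, here is a minimal sketch of how metrics like these can be computed from raw session records. The Session shape and the single-page-view definition of a bounce are my assumptions for illustration, not Wattl’s actual analytics schema.

```typescript
// The Session shape and the single-page-view bounce definition are assumptions.
type Session = { userId: string; durationSec: number; pageViews: number };

// Bounce rate: share of sessions that ended after a single page view.
function bounceRate(sessions: Session[]): number {
  const bounces = sessions.filter(s => s.pageViews <= 1).length;
  return bounces / sessions.length;
}

// Average session duration in seconds across all sessions.
function avgSessionDurationSec(sessions: Session[]): number {
  return sessions.reduce((sum, s) => sum + s.durationSec, 0) / sessions.length;
}

// Example: three sessions, one of them a bounce.
const sample: Session[] = [
  { userId: "a", durationSec: 1055, pageViews: 12 },
  { userId: "b", durationSec: 166, pageViews: 4 },
  { userId: "c", durationSec: 20, pageViews: 1 }, // bounce
];
console.log(bounceRate(sample)); // 0.33...
console.log(avgSessionDurationSec(sample)); // ~413.7
```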
For the investor conversation, I built a positioning framework that translated Wattl’s technical approach into language that investors could evaluate. Not “we’re building a social platform.” Instead: “We’re building a new interface for media consumption where curation is the product and the infinite scroll is the mechanism.” That reframe opened conversations with accelerators and attracted interest from the Discovery Channel, which saw potential in the second-screen approach, where users could interact with media content while watching broadcast television.
What Happened
Average users went from 267 in October to 3,529 by March. Sessions jumped from 363 to 8,902. Bounce rate dropped from 70.7% in October to 31.7% during March Madness. Average session time climbed from two minutes and forty-six seconds to seventeen minutes and thirty-five seconds. People were not just arriving. They were staying. The hypothesis that a familiar cultural event could anchor user behavior on the platform held up with room to spare.
The UX fixes had compressed the sign-up flow from five steps to two. The navigation restructure made discovery explicit. Uploads stopped failing. The interface felt responsive instead of laggy. These were small changes. They compounded.
With validation in hand, the investor conversations shifted. I could show numbers instead of promises. The platform had moved from “interesting product” to “product that works.” That distinction changed everything. The Discovery Channel saw a partnership opportunity rather than a speculative bet.
The parallel workstream approach reduced the overall timeline by compressing what could have been sequential into simultaneous execution. Being first to market with proof in hand is a credibility signal for partners and investors, not just a commercial advantage.
What I’d Do Differently
The biggest lesson was about sequencing confidence. I mapped the dependency chain early, which was the right call, but I didn’t invest enough time in getting every stakeholder to internalise it. Some teams treated the dependency map as a project artifact rather than a living constraint. When a team doesn’t fully understand that their delay blocks three other teams, they optimise for their own timeline rather than the programme’s. I’d spend more time in week one building shared understanding of the critical path, not just showing the map, but making everyone feel the consequences of a broken link.
I’d also push for earlier clarity on the investor positioning. We landed on the right frame—media consumption interface, not social app—but we discovered it by iteration rather than by thinking it through at the start. If I’d started with a clearer hypothesis about how investors would evaluate the platform, we’d have structured the validation test to surface those specific metrics from day one.
Finally, I’d document the decision-making process more aggressively. In fast-moving programmes, decisions get made in conversations, Slack threads, and hallway discussions. Six months later, nobody can remember why something was done a particular way. That’s fine until you need to replicate the programme or explain it to a new partner. Write it down. Every time. You’ll thank yourself later.


