Intuition vs data-driven product flow
Because you needed more gray areas in your startup life.
Tech in 2024 is a brave new world where data is king. Governance models that once validated ideas and outcomes primarily through the voice of the customer now prioritize data instead, sometimes in healthy ways and sometimes not. Now more than ever, charts and reports dictate our futures.
I woke up this morning wanting to share how our data-driven decision-making process at Post News suffered from a few unhealthy protocols, but I wasn’t sure how to express it with respect for nuance and for the undisputed value of good data analytics. Thankfully, @andrewchen recently published an opinion piece full of nuance, so I’ll jump straight into practical experience.
Data should never be the loudest voice in the room
At Post News, almost every employee had an innate sense of what our users really wanted from the platform. After we launched our alpha, we began building our beta roadmap in public while using Post itself to communicate with our members. This helped us grow very close with our community, providing us with a constant feedback loop that helped us identify (and act on) growth-minded product enhancements. Our users adored our transparency and the speed at which we delivered, and we moved mountains in a very short period of time.
However, given the high stakes of our mission and the diligence we owed our investors, our second phase of development (the product-maturity phase) adopted a decision-making process that queued new ideas behind data-validation and experimentation protocols. Although we eventually excelled at executing this new model, there were too many moments when the process made it harder to green-light larger, more innovative ideas. Perhaps we debated less once data became our loudest voice. In any case, here are a few more reasons why we probably should have waited for growth before adopting this process:
It was more difficult and costly to run experiments than to iterate rapidly backed by high-level KPI review.
More than 90% of our experiments made it into the product. We had great ideas.
We burned a significant amount of time maintaining experiments that ran alongside the experiences they were designed to replace. With a 90% success rate, we really didn’t need to.
In some cases, a medium-sized feature would grow 300% in scope if it had to run in parallel with the feature it would eventually replace. Maintenance bloat.
Don’t get me wrong, we also had positive experiences with this process… but the net result is that it slowed us down significantly when we really needed to be fast. It also made some members of the team feel like their ideas were being overlooked, which had its own set of downstream consequences. Ultimately this process wasn’t a deathblow for us, but it didn’t change the game like we hoped it would.
If I could turn back the clock, we’d have held fast to a process that trusted our team’s expertise and experience.
@andrewchen’s post sums it up nicely: “sometimes when you become an expert about the customer, and have great intuition, you just know how to do the right thing. And not just the right thing that can move the needle in the short-term, but also the thing that will create better, long-term value. […] make decisions based on your expert qualitative opinion about the market, your customers, and your competition. And only optimize based on data later on, when it’s more appropriate”
Early-stage data is often shallow and incomplete. Sometimes it’s even erroneous and too dirty to be trusted. At this stage, the most valuable data you can get is already freely offered to you by your most experienced teammates. Let their voices be the loudest in the room.