
Pipeline & Deal Velocity

Hypotheses – test them

20/09/23, 00:00

Who is it for?

Founders and revenue leaders at B2B SaaS companies trying to improve pipeline quality and deal momentum.

When to use?

When pipeline exists but movement is slow, deals stall, or the cycle time is creeping up.

When working with start-ups, I’m frequently asked questions like “should we target market X or Y?”, “which is the best marketing channel to engage with persona Z?” or “should we build a BDR team or outsource?”. On each of these subjects I am happy to offer an opinion, but the honest answer is usually “I don’t know”. Plenty of people have relevant experience and can advise on these topics, but ultimately every company is different, markets are different, and the messages are different – and even when the situation looks the same as a previous one, that previous experience was, by definition, at a different time.

The way to find a path through the uncertainty is to test. Don’t assume you know anything; instead, formulate clear hypotheses with clear actions and clear targets. The trick is to ensure you get useful information within a short period – no more than two months, ideally 4-6 weeks. Some things clearly take longer than that – if you have a six-month sales cycle, you can’t know that your new target segment is going to generate revenue within a few weeks. However, you can look for leading indicators – data which shows you are moving in the right direction.

For example, if you decide to target retailers with more than 20 locations within the UK with your solution, you will get a good indication of likely success by the level of interest you can generate in 4-6 weeks, even if the actual revenue will be several months later.

To be effective with hypothesis testing, you have to be very specific about what you’re going to test and how, and you need clear goals that you will measure against. In the above example, you would identify how many potential targets there are, decide how many to address in your initial hypothesis test (say 50), and determine which persona you will target, what messaging you will use, through which channels, and with what expected outcome. You then follow this plan carefully and track the data – both the activities and the outputs – rigorously.

If this goes well – say your goal was that 20% of those targeted engage with your content – then you double down: expand the group (add another 100) and repeat. If it did not go well – you didn’t get the engagement you were targeting – make a change and retest. For example, if you got some interest but less than you hoped, try different messaging or a different persona, with a new group of 50.
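To make that evaluate-and-iterate step concrete, here is a minimal sketch in Python of how a test round could be recorded and checked against its goal. The class, field names, group sizes and the 20% threshold are illustrative assumptions drawn from the example above, not a prescribed tool.

from dataclasses import dataclass

@dataclass
class HypothesisTest:
    description: str   # e.g. "UK retailers with 20+ locations, ops persona, email outreach"
    targeted: int      # accounts addressed in this round
    engaged: int       # accounts that engaged with the content
    goal_rate: float   # success threshold set up front, e.g. 0.20 for 20%

def next_step(test: HypothesisTest) -> str:
    # Compare the observed engagement rate to the goal defined before the test,
    # then either double down or change one variable and retest.
    rate = test.engaged / test.targeted
    if rate >= test.goal_rate:
        return (f"{test.description}: {rate:.0%} engagement met the {test.goal_rate:.0%} goal "
                f"- expand the group (e.g. add another 100) and repeat.")
    return (f"{test.description}: {rate:.0%} engagement missed the {test.goal_rate:.0%} goal "
            f"- change one variable (messaging or persona) and retest with a fresh group of 50.")

print(next_step(HypothesisTest("UK retailers, 20+ locations", targeted=50, engaged=12, goal_rate=0.20)))
print(next_step(HypothesisTest("UK retailers, 20+ locations", targeted=50, engaged=6, goal_rate=0.20)))

The point is not the code itself but the discipline it encodes: the goal is fixed before the test, the data is tracked, and the next action is decided by the outcome rather than by gut feel.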

None of this is rocket science – people have talked about A/B testing for as long as I can remember. It doesn’t require sophisticated tooling or any specific industry experience. It requires common sense, thoughtful upfront hypothesis definition, careful adherence to the plan, and rigorous evaluation of the data. It also requires working closely with your entire team – sales, marketing and success – as relevant.

I have some tools to support this type of thing – let me know if you’re interested.
