Scott is a Partner at Sproutward, a digital agency working primarily with e-commerce businesses. Over his career in analytics, he’s worked with big brands like Victoria’s Secret, Adidas, Staples, and Shoes.com.
A couple years back, his team at Sproutward set out to build truly universal reporting for their clients – pulling in *all* of their data into one suite of reports.
All of their web analytics, all of their paid channels, email platforms, CRMs – truly everything.
They evaluated every tool under the sun, but eventually decided to roll their own data pipeline in Google BigQuery, and report on that data in Looker.
This is their story…
Scott’s team works with e-commerce brands in the 8-to-9 figure revenue range.
Often they’ll come into a company as an “Acting CMO.” Getting off to a good start in those situations means having to digest *a lot* of data, quickly. As Scott puts it:
“Marketers at companies generally work in all the different tools separately, all the manual kind of Excel stuff. They’re used to seeing very siloed data structures.”
To be able to quickly get up to speed on a new client, Sproutward set out to build universal reporting – reporting that would unify these disparate data sources into one place, simply and repeatably.
This is, of course, a big lift.
They had lots of sources to pull data from. Outside of the usual suspects like Google/Adobe Analytics or Facebook Ads, Sproutward’s clients used many legacy Email + CRM providers that weren’t supported by standard tooling.
They needed to build a custom data pipeline process that they could fit like a glove to each client’s marketing stack.
Scott found us at CIFL while reading up on data pipelining approaches, and picked up our Build your Agency Data Pipeline course. As he told me later:
“I think what we saw from you guys was just sort of a clear and sort of approachable way to get into it, to get into that type of a stack.”
Later on, we helped Scott’s team at Sproutward kick-start their data pipeline build in BigQuery, by setting up their initial data feeds + data models.
One major technical challenge that we were able to solve together was mapping ad set-level data between ad platforms (like FB Ads) and Google Analytics, to show a full funnel view at the ‘Campaign Goal’ and other granular levels.
We built out some complex regex-mapping logic that allowed folks on the Sproutward team to enter mapping rules in a Google Sheet, and have those rules flow all the way through into reporting in Looker:
This might look like a humble screenshot of a dashboard, but that simplicity belies a *ton* of complexity under the hood.
As Scott put it, “Every client, everyone maps their stuff a little differently. So we had to solve that one. We’ve seen very few people be able to solve this.”
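To make the idea concrete, here’s a minimal sketch of how regex-based campaign mapping like this can work. The rule rows, pattern strings, and goal labels below are all hypothetical stand-ins for whatever the Sproutward team entered in their Google Sheet; the actual implementation ran as SQL logic inside BigQuery, not as Python.

```python
import re

# Hypothetical mapping rows, as a team might enter them in a Google Sheet:
# each row pairs a regex pattern with a 'Campaign Goal' label.
MAPPING_RULES = [
    {"pattern": r"(?i)prospect|cold",  "campaign_goal": "Acquisition"},
    {"pattern": r"(?i)retarget|rmkt",  "campaign_goal": "Retargeting"},
    {"pattern": r"(?i)brand",          "campaign_goal": "Brand"},
]

def map_campaign_goal(campaign_name: str, rules=MAPPING_RULES) -> str:
    """Return the goal of the first rule whose regex matches the
    campaign name, or 'Unmapped' if no rule matches."""
    for rule in rules:
        if re.search(rule["pattern"], campaign_name):
            return rule["campaign_goal"]
    return "Unmapped"

# A Facebook Ads ad set and a Google Analytics campaign that are named
# differently still resolve to the same goal, which is what makes the
# full-funnel join possible:
print(map_campaign_goal("US_Prospecting_Lookalike"))  # Acquisition
print(map_campaign_goal("cold-traffic-video"))        # Acquisition
```

Because every platform’s rows resolve to the same goal key, reporting can then aggregate spend, sessions, and revenue at the ‘Campaign Goal’ level regardless of how each platform names things, which is the per-client flexibility Scott describes.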
When we think about reporting automation, we tend to frame outcomes in terms of time (and therefore $) saved, or in terms of the fancy analyses we’ve unlocked the power to perform.
Of course, Sproutward banked these wins by building out their data pipeline on BigQuery + Looker.
What I loved hearing from Scott, though, was how investing in their data pipeline helped their agency in unexpected ways:
“It almost just elevated our agency from a competence perspective – where we could start working with a client, and within a week or two have the majority of their data sources connected in a really shiny, nice dashboard with a license, login, you know, all that kind of stuff.”
It was great to see Scott and the team at Sproutward taking their data + reporting game to this high level, and we’re happy to have helped them get there.
Thanks again to Scott Zakrajsek and Sproutward for sharing their data pipeline story.