Thread Reader
Tim Keen

@timkeenloopclub

Nov 24
17 tweets
Twitter

I run a $2,500,000 marketing agency. Hot take: Most marketers have no idea how to properly attribute ad spend. Let me explain:

One of the biggest problems I see with people running ads today is that they're over-indexing on the accuracy of attribution. Attribution has never worked perfectly, and it never will.
I see this problem most often with marketers who began their careers with FB ads. At one point, we had access to granular data about the entire customer journey, from ad impression to sale. Marketers became almost addicted to that level of insight into the customer.
Despite losing that granularity after iOS 14, people still use attribution software as if it were just as insightful as it has ever been. In reality, these insights aren't clear. They're very muddy.
That said, we use Triple Whale with nearly all of our customers. It's a fantastic tool, and we use it every day. More information is better than none. But most people misuse software like this. Here's how:
People are making ad-level decisions based on tiny differences from imperfect data. When someone is tweaking ads incessantly based on 5%, 10% or even 20% differences, there is a degree of blind faith in trusting that the data is accurate. It’s not.
As I said earlier, the data we have today is muddy. Some customers move from mobile to desktop to purchase. Some jump between browsers before they buy, or open a new tab to buy. Some have seen a TikTok ad 10 times, then buy from the first ad they see on FB.
All of these little customer buying nuances muddy the final numbers attributed to a specific ad's performance in your ad account.
One of the worst habits I see is stopping an ad after a few hundred dollars of spend because of 5% subpar performance. That isn't data-based decision making; it's blind faith in the accuracy of your data.
Instead of nitpicking these tiny 5-10% differences, look for large differences in performance. If one ad outperforms another by 30%, 50%, or 100%, then you've found something meaningful.
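The "only act on large differences" rule above can be sketched as a simple threshold check. This is a minimal illustration, not the agency's actual process; the specific thresholds (minimum spend, 30% gap) are assumptions chosen to match the numbers in the thread:

```python
def ad_decision(ad_roas: float, baseline_roas: float, spend: float,
                min_spend: float = 1000.0, min_gap: float = 0.30) -> str:
    """Illustrative decision rule: only act on large, well-funded gaps.

    min_spend and min_gap are hypothetical thresholds, not recommendations
    from the thread. Returns "keep", "kill", or "scale".
    """
    if spend < min_spend:
        return "keep"  # not enough spend to judge either way
    gap = (ad_roas - baseline_roas) / baseline_roas
    if gap <= -min_gap:
        return "kill"   # underperforms baseline by 30%+: meaningful signal
    if gap >= min_gap:
        return "scale"  # outperforms baseline by 30%+: meaningful winner
    return "keep"       # within the noise band; don't micro-tweak
```

A 5% gap on real spend falls into the "keep" band here, which is the point: small differences in muddy data are noise, not signal.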
Paying attention to (and trying to create) large performance differences is WAY more reliable than tweaking ads based on tiny differences from inaccurate data. If you want to be a data-based marketer, you need data you can trust.
There are a couple of pieces of data I trust: 1. Large differences in ad-level performance 2. Post-purchase surveys
We use the post-purchase survey as the gold standard of making channel-level attribution decisions. Why? Because your customers are giving you a direct answer to a direct question. Less mud.
It’s a simple question most companies should ask their customers: “Where did you first hear about us?” This question tells you everything you need to know about which channels are driving returns.
We go as far as building channel-specific ROAS from post-purchase surveys using the following data points:
- % of new customers
- New customer revenue
- $ spent on that channel
We're trying to find out how many dollars we put into each channel and what we get out, based on post-purchase surveys (extremely accurate data). This seems entry-level to me, but we're one of the few agencies I've seen build comparative post-purchase-survey ROAS.
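The three data points above combine into a simple ratio: survey-attributed revenue over channel spend. A minimal sketch, with hypothetical numbers (the thread doesn't publish its actual figures):

```python
def survey_channel_roas(survey_share: float,
                        new_customer_revenue: float,
                        channel_spend: float) -> float:
    """Survey-attributed ROAS for one channel.

    survey_share: fraction of surveyed new customers who answered
    "Where did you first hear about us?" with this channel.
    """
    attributed_revenue = survey_share * new_customer_revenue
    return attributed_revenue / channel_spend

# Hypothetical example: 40% of surveyed new customers cite TikTok,
# $100k in new-customer revenue, $20k spent on TikTok ads.
survey_channel_roas(0.40, 100_000, 20_000)  # -> 2.0
```

Running the same calculation per channel gives the comparative, survey-based ROAS the thread describes, without leaning on pixel-level attribution.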
For more nuanced marketing insights, feel free to click on my profile and give me a follow!