You have to realize when you're a marketer that in this day and age, and we're going to talk about the impact of AI, it's only going to get more and more like this, but you can't really outrun Google's algorithm, right? You used to be able to do that 10 years ago. There were things you could do to kind of trick the system and get an advantage. Those days are over.
Welcome to episode 49 of B2B SaaS Marketing Snacks. My name is Mike Northfield and I lead product marketing at Kalungi and T2D3. And I'm here together again with Stijn Hendrikse, who is a serial SaaS marketing executive and ex-Microsoft product marketing leader. Today we're talking about some of the previously confidential documents recently released by the Department of Justice through their antitrust suit against Google. There are a lot of really interesting insights in these documents, and we're summarizing them today and creating some analogies and abstractions to help us think about how Google's search ranking prioritization works. There is a link to the Search Engine Journal article in the show notes, and I also go into a little more detail there and spec out the exact factors that are in the documents.
So we talk about that first. And then after that we get into a little bit of the AI component: when you layer on all the artificial intelligence that's happening in the world right now, how do you make sure that, as a marketing leader or a founder, you can stay ahead of the game and not fall behind when it comes to pursuing an organic search channel, if you're trying to build that inbound engine for yourself? So we talk about all those things, and as always, thank you for choosing to spend your time with us. If you're interested in more content like this, go to T2D3.pro or kalungi.com. We've got a lot of guides, templates and other things about SaaS marketing kind of nestled into those websites. Alright, let's dive in. So there was a Search Engine Land article that was published earlier this month, and the premise is this:
So the US Department of Justice (DOJ) released a bunch of trial exhibits from the antitrust suit that is happening with Google right now. And what's really interesting is that a lot of these exhibits are about the inner workings of how Google interprets and handles ranking for different content on the search engine. A lot of the material has been redacted, but there are a lot of interesting insights to be gained from it. Nothing in it is really groundbreaking, but it confirms a lot of the things that I think we have guessed at as marketers over time in terms of how Google actually prioritizes content and decides where to place it in the rankings. And there are some really interesting things in here. I sent you the article, Stijn, and I think you've looked through it. I'm curious to hear your reaction; I think you had a really cool analogy for how Google ingests information and then serves it back up. But yeah, I just wanted to get your thoughts on it, and we can talk through some of the exhibits and the implications and see where it goes.
Really interesting topic. Think of a recipe for a dish that you really like at a restaurant. You try to go home and make it yourself. You have to, because the chef wouldn't give you the recipe; it's kind of a secret sauce. So you go home and try to make that salad yourself by testing some ingredients: a little more of this, a little less of that. And that's what companies like Moz and others did for years, right? Because Google wouldn't tell you how the rankings really worked, they tried to recreate the results by building content in a certain way, publishing it in a certain way, adding certain types of metadata, doing certain things with timing when things get published, and seeing if those had any impact on the results. And now with these documents being released, people can start to validate: hey, how much of the recipe did we get?
And what you also find is that some of the things people thought had an impact on the recipe, things that were basically correlating with results, may not have had any causality. Maybe it was chance, or maybe the causal relationship was in the reverse direction. It still means there's a relationship, and people were maybe optimizing for that for years because they thought it would drive the results, when it may actually have been the reverse: the results drove the way those things showed up. But anyway, it doesn't matter. Now we have some amazing things to learn from all these documents that were released. It's also great for competition, of course, that these things get exposed a little bit. Back to your point on the analogy that I used to make a lot: when you think of good SEO, good search engine optimization, for me it was always important to realize that at the fundamental basis of the algorithm, what Google tried to do was, whenever something new got shared with the world and crawled by their search engine, to see as fast as possible if it's any good, and for whom, with "for whom" being the equivalent of whatever someone was typing into a search query.
And I always believed that the first minutes, the first days after the crawl happened were really critical. And some of these documents are actually confirming that, right? When you publish something, Google will give you some form of artificial bump early on. I liken it to when you publish a book: you get one shot, maybe, at making it onto the table at the front of the small bookstore where people walk in, because the owner of the bookstore will basically watch whether people pick up the book, and when they pick it up, whether they hold onto it or put it back quickly. So imagine you had a bookstore where you actually put a webcam on that table to see what the foot traffic thinks about the book that's on it. The title of the book: is it going to get picked up?
That's the equivalent of people clicking on the search link. So Google measures those things: how many click-throughs do you get? And then someone paging through the book to see "is this for me?", that's the equivalent of time on page. And then of course whether someone holds onto the book and takes it to the register, that would be the equivalent of someone actually clicking through and starting to consume the content, engaging with it. And Google did something similar, basically, as we now learn from these documents. That's why some of the metrics we've always been optimizing for are very much the equivalent of that book being tested on that table in the bookstore. So when you know that, you can actually forget about all this gaming of metadata, just make sure you write a really good book and a really good title, and learn from the early results whether you did great or whether you could have done a little bit better.
And I might add one additional workflow behind your analogy. What was shown in some of these documents is that there's an additional element on top of that, which is that in the past, and things may change now that we have access to the large language models that we do...
Everybody can write a good sentence and a good paragraph.
...Google didn't in the past understand what documents meant. It didn't understand the context of things. It looked at user signals to understand how good something was. So in your analogy...
Because it couldn't interpret the content.
Exactly. So the only way it could learn how good content was, was to figure out how people interacted with it. So if you were to add an additional piece to your analogy, it would be the equivalent of the bookstore owner also taking into account which books were worth putting on the front table...
Actually reading the books before putting them out.
...or they would want the person who bought it to come back to the front desk and say, "Oh, I loved this book and I read the whole thing," and because of that feedback, they would also put it in the front. So there are two components to it. They put it out and see if people are willing to pick it up, and then they also look at user signals, which in the context of content is: Are people staying on the page? Are they clicking on things on the page? How are they interacting with it? And also what other websites say about it. Are there links to that content? Things like that.
The review would be the equivalent of the backlink, right?
Exactly. Exactly. So those are kind of the three pillars: what the document says about itself; what the web says about the document, which is the reviews and backlinks to that document; and then user interactions, which is what users say about the document when they actually click into something. The content itself is a black box to Google. It can't read the information, so it relies on seeing what people do with it, which is really interesting.
In general, with search engine optimization, and honestly the same goes for paid search when it's related to content, and I don't think this is limited to written content, you can apply the same principles to video and a lot of other media, you have to realize when you're a marketer that in this day and age, and we're going to talk about the impact of AI and it's only going to get more and more like this, you can't really outrun Google's algorithm. You used to be able to do that 10 years ago. There were things you could do to kind of trick the system and get an advantage. Those days are over.
So now, to outrun the competition, because that's really what you have to do to show up in the top three or so of the search results, or among the best, most displayed ads that people will be incented to click on, you just have to do a better job. You have to make better content, better headlines, better ad positioning, better timing for your ads. You have to be the one that shows up at the right times of the day or on the right day of the week. That mostly applies to paid, not so much to organic. But with organic, and we've talked about this in many of our podcasts in some way, shape or form, Mike, you have to have a question that you're answering that nobody else has really answered very well, or there are good answers but not as great as your answer. And there's a part of the world, a small audience, where you can actually add value that's unique. If you don't have that ability, then just go write about something else. Because the AI capabilities, and the algorithm as it has long been, will just be able to figure that out relatively fast, and you're just going to add noise to a world that already has far too much noise.
A hundred percent. I think that's a big learning. In the past, people have been able to get away with kind of gaming content by writing really long articles, or by looking at the "People also ask" questions on Google and putting those answers at the top of the article. But what we're learning now, especially through these documents, is that that may actually hurt you if you're not tailoring it to the specific person who's landing on your page. If someone clicks in and then immediately clicks back out because you're not answering their question as fast as possible, or you're not getting them to continue to read, then that hurts you way more than gaming it helps.
And there used to be utility value in making things more usable and useful, as you mentioned earlier. That's still the case, I think, if you're able to combine certain things in a new way. But even that's not going to be that valuable. It used to be really helpful for someone to write a top 10 of this or a summary of that, right? I did it with one of our best-performing blogs on the Kalungi website explaining BDR vs. SDR. The reason I wrote that article is that there was no really good answer to that question and people were confused, et cetera. But my blog article is nothing more than a summary of a lot of things that were already out there. I think you can still do that, but the value of that will diminish really fast, because guess what? The large language model chatbots will do that for you.
They will do that summary, right? They will basically tell you, hey, based on these five or six sources, this is the summary. So even that, adding value by just adding some structure or a little bit of color, is going to be less and less valuable. So the value of new ideas, of actual originality, goes up. By the way, when we post this, we should add a link to Eric Hoffer, the longshoreman philosopher. He was a writer who wrote very short books, sometimes bigger ones, but he always made sure that every book and every chapter had at least one original idea. Originality, and answering questions that are really not being answered by anyone else, is going to be key.
You can no longer win by aggregating or synthesizing information because there are tools for that. Now you have to take that a step further and interpret it in a specific context for a specific use case for a specific person in a very specific scenario. And that is the content that's going to win. Otherwise, you're going to fall behind.
And then there's still this whole notion of outrunning the other guys. Know who you're competing with; you're not competing with Google. Well, you are, but you're not going to win that one. So you look at the question you're trying to answer, look at who else is providing answers, and just make sure that you do a better job. That's the only way to do it.
Cool. Okay. So how do you think that AI and AI-generated content is going to influence how we should change our view and approach to SEO?
We talked a little bit about the aggregation and synthesis of information, et cetera, right? That's clear. So think of search engine optimization, or maybe a better term is the optimization of answering questions that people have. Because what is your job as a marketer? What is marketing? Marketing is changing someone's behavior or what someone believes, and we do that through communication. So as a marketer, you have new tools now, and you use all those tools to communicate things like value or belief systems, et cetera. Tools that are mostly powered by AI and AI-generated content. So there are two things that have changed. One is that aggregation and synthesis. The other is the interface, right? The interface is not going to be a search bar anymore. It might be a chat user interface; it could be an audio conversation that someone has with an AI.
It could be other new interfaces that we haven't even thought about. It could be your watch buzzing to tell you about something; that could be a new interface for us to interact with information and even with the questions that we have. I think what's really critical for a marketer is that we stay really on top of how content evolves, how people ask questions, and what type of content they expect as answers. I love to separate content into what I call high-fidelity and low-fidelity media. The difference between the two is that high-fidelity media is media you cannot really skip through, skim through, or fast-forward through. When you read a blog article, you can kind of scan the page: "What's the first paragraph? What are they saying at the end? Do I see some chapter headings? Is this interesting enough for me to dive into?" You do something similar with a book.
You can quickly go through a book, look at the index, look at the table of contents, and go to the pieces you're interested in. That would be low-fidelity media. High-fidelity media is a podcast: the moment you fast-forward through it, you'll miss stuff, and you don't know what you missed. So you have to be a hundred percent engaged. You can put it on faster playback, but you can't fast-forward. So it's high-fidelity content. Video is very similar, although on YouTube you can see the video frames, so you can do a little bit of skipping. But when you think of an SEO, search engine optimization, strategy, and now maybe content consumption optimization or whatever we want to call it, you need to really think in those two types, with the question and the answer both benefiting from all these new types of content, high-fidelity and low-fidelity.
When someone has a question that requires a low-fidelity answer, like "I just need a list of the questions to ask in a job interview for this type of role" or "I just want a starting point for my presentation," that can be answered really well with low-fidelity content. High-fidelity content is for when you want to really understand a very complex problem, where the color of the conversation, the examples, the details really matter. That's a different type of question that needs a different kind of answer. So as a marketer, thinking through those two types of question and answer in high- and low-fidelity media will make it a little easier to figure out what type of optimization you need to do for people to find those things, right? So yeah, that's one thought I have on where AI will play a huge role. AI will probably be able to summarize the high-fidelity content without you having to listen to the full hour, right?
Very cool. I love that. I'm tingling; I'm ready to go create some content. Yeah, super cool. I've never heard high-fidelity and low-fidelity applied to content; I usually hear it in the context of design, but that makes a lot of sense. And when you frame it that way, there is definitely a time and place for both, and probably value in taking every piece of content you have and seeing how you can adapt it for both formats too. Because high fidelity has nuance, and low fidelity is very much for someone just looking for a quick answer.
High fidelity is watching Oppenheimer, low fidelity is watching a TikTok video, and they both have a place, right? But knowing as a marketer what type of media is needed for what type of question is very important.
Thank you to Adriana Valerio for producing this episode and of course the Kalungi team for helping make this whole thing work and you for choosing to spend your time with us. We really appreciate it. As a reminder, you can see all the links we mentioned in this episode in the show notes, and if you want to submit or vote on a question you'd like us to answer, you can do that at kalungi.com/podcast. Every time we record, we take one of the top three topics, and we jam on it. So, if you're interested in having us do that, go to kalungi.com/podcast. Alright, see you in the next one.