
Google Explains Query Fan-Out Behind AI Search

Google's Nikola Todorovic explains how query fan-out works in AI Overviews and AI Mode, what it means for SEO, and why longer queries are reshaping search traffic.

A Google Search engineer reveals how AI Overviews and AI Mode decompose complex queries into parallel sub-queries, confirming that longer prompts are driving new search traffic.

Google Director of Software Engineering Nikola Todorovic explained how “query fan-out” works inside AI Overviews and AI Mode on the latest episode of Search Off the Record. The mechanism decomposes a single complex query into multiple parallel sub-queries, retrieves results for each, and then synthesizes them into one AI-generated answer. (13:34, 13:51, 13:58)

For SEO practitioners, the disclosure confirms something many have suspected: Google’s AI features are not simply rewriting your query. They are running an entirely separate retrieval layer on top of the traditional ranking stack. That means your content needs to be structured so individual sections can be extracted and cited by these sub-queries. (14:21, 14:42)

Todorovic also confirmed that average query length is growing as users discover AI search can handle more complex prompts, calling the shift “a revolution” and noting that Google is seeing “new traffic” as a direct result. (3:50, 3:57, 4:02)

How Query Fan-Out Works

Speaking on the podcast, Todorovic described fan-out as the process where Google identifies additional search queries that could yield results relevant to the user’s original prompt. The system then forks and runs retrieval for all of those queries in parallel, bringing the results back together to serve the original, more complex question. (13:34, 13:51, 13:58)

“We can fork and in parallel, do the retrieval for multiple search queries that can all come back into one original, more complex query that you gave in.”

Nikola Todorovic, Director of Software Engineering, Google Search (Search Off the Record)

AI Overviews then combines “an interesting selection of these results” and generates a summary from the snippets, titles, and additional page context it can access. Todorovic emphasized that the underlying retrieval and ranking system is still the traditional search infrastructure. AI Overviews operates as a feature layer on top of it. (14:21, 14:32, 14:42)
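The fork-and-join pattern Todorovic describes can be sketched in a few lines of Python. This is purely illustrative: the retrieve function, the toy corpus, and the sub-query list are all invented stand-ins, since Google's actual retrieval stack is not public.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical retrieval function standing in for the traditional
# ranking stack; the corpus and sub-queries are invented for this sketch.
def retrieve(query: str) -> list[str]:
    corpus = {
        "vegetarian restaurants zurich": ["veggie-guide.ch", "zuerich-eats.ch"],
        "restaurants open for lunch zurich": ["lunch-zurich.ch"],
        "gluten-free dining zurich": ["gf-dining.ch"],
    }
    return corpus.get(query, [])

def fan_out(original_query: str, sub_queries: list[str]) -> list[str]:
    # Fork: run retrieval for every sub-query in parallel.
    with ThreadPoolExecutor() as pool:
        result_sets = pool.map(retrieve, sub_queries)
    # Join: merge the per-sub-query results back together (deduplicated,
    # preserving order) to serve the original, more complex query.
    merged = []
    for results in result_sets:
        for url in results:
            if url not in merged:
                merged.append(url)
    return merged

sub_queries = [
    "vegetarian restaurants zurich",
    "restaurants open for lunch zurich",
    "gluten-free dining zurich",
]
print(fan_out("vegetarian lunch in zurich for dietary restrictions", sub_queries))
```

The practical implication is in the join step: each sub-query retrieves independently, so a page only enters the merged set if some individual chunk of it matches one of the narrower sub-queries.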

AI Mode takes this further. It also uses fan-out queries and includes linked results and citations, but Todorovic described it as having “bigger ownership” and a larger platform of its own compared to AI Overviews. Users can transition from an AI Overview into AI Mode for a longer, multi-turn conversation. (19:10, 19:35, 19:39, 19:42)

Why Queries Are Getting Longer

Todorovic confirmed that users are writing longer, more detailed queries because AI features have shown them that search can handle complex questions. He described the shift as a “revolution” in how people use Google. (3:39, 3:50, 3:57)

Host Martin Splitt illustrated the point with a practical example: users once searched with fragmented keywords like “restaurant, vegetarian, Zurich.” Now they type full natural-language prompts such as “based on dietary restrictions, which restaurants would you recommend now for a lunch in Zurich?” The fan-out system handles the decomposition automatically. (15:35, 15:50, 16:09)

Todorovic noted that this works in both directions. Whether a query is vague or highly detailed, the system now returns better results because the fan-out mechanism can interpret multiple dimensions of intent simultaneously. (17:42, 17:46)
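A toy decomposition shows why both query styles benefit. This is not Google's implementation; the intent dimensions and keyword lists below are invented to illustrate that a terse keyword query and a verbose prompt can both be mapped onto the same intent dimensions, which then become fan-out sub-queries.

```python
# Invented intent dimensions for illustration only.
INTENT_KEYWORDS = {
    "cuisine": ["vegetarian", "vegan"],
    "place": ["zurich"],
    "occasion": ["lunch", "dinner"],
}

def decompose(query: str) -> dict[str, str]:
    """Map a query onto intent dimensions via naive keyword matching."""
    tokens = query.lower().replace(",", " ").split()
    intents = {}
    for dimension, keywords in INTENT_KEYWORDS.items():
        for kw in keywords:
            if kw in tokens:
                intents[dimension] = kw
    return intents

# The fragmented keyword query and the natural-language prompt both
# yield usable intent dimensions; the verbose one simply yields more.
terse = decompose("restaurant, vegetarian, Zurich")
verbose = decompose("which vegetarian restaurants would you recommend for lunch in Zurich")
print(terse)
print(verbose)
```

Real systems use learned query understanding rather than keyword lists, but the shape of the outcome is the same: more expressed intent dimensions give the fan-out layer more sub-queries to launch.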

This aligns with what Google VP Liz Reid told Search Engine Land about query-length shifts in AI search, adding executive-level confirmation to Todorovic’s engineering perspective.

The Evaluation Process Hasn’t Changed

One of the most notable takeaways from the episode is that Google’s AI features go through the same evaluation process as every other search change. Todorovic described a system of side-by-side experiments where a new version of search is compared against the production baseline using human raters and published quality guidelines. (6:43, 6:55, 7:25)

Google runs thousands of changes per year through this process, and each must pass a formal launch review before going live. Even when overall metrics improve, engineers may be sent back to fix specific patterns of losses before a change ships. (5:07, 7:25, 7:53)
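The side-by-side gate can be sketched as a simple aggregation over rater votes. The protocol, vote labels, and margin threshold below are simplified assumptions, not Google's actual launch criteria.

```python
from collections import Counter

def evaluate_side_by_side(ratings: list[str], min_margin: float = 0.05) -> str:
    """Aggregate per-query rater votes ('experiment', 'baseline', 'tie')
    into a first-gate launch decision. Thresholds are illustrative."""
    counts = Counter(ratings)
    scored = counts["experiment"] + counts["baseline"]  # ties carry no signal
    if scored == 0:
        return "no signal"
    win_rate = counts["experiment"] / scored
    # Even an overall win can still be blocked at launch review if the
    # experiment shows specific patterns of losses, so this is only a
    # first gate, not the final decision.
    if win_rate >= 0.5 + min_margin:
        return "candidate for launch review"
    return "back to the engineer"

ratings = ["experiment", "experiment", "tie", "baseline", "experiment"]
print(evaluate_side_by_side(ratings))
```

The key design point mirrors the episode: the statistic only nominates a change for human launch review; it never ships a change on its own.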

Todorovic confirmed that AI features “absolutely” go through this same process, though he acknowledged that the competitive landscape has forced Google to adapt its pace. (8:48, 8:51)

For practitioners, this is a useful signal: the same quality standards that govern traditional ranking also apply to what appears in AI Overviews and AI Mode. Content that meets Google’s published quality rater guidelines is evaluated the same way regardless of the surface it appears on.

What This Means for Your Work

  • Structure long-form content into clearly delineated, self-contained sections (FAQ blocks, comparison tables, step-by-step guides). Fan-out sub-queries need to be able to extract and cite individual chunks of your page, not just the page as a whole.
  • Audit your keyword strategy for long-tail, multi-intent queries. Google confirms average query length is growing. Traditional short-head keyword lists miss the sub-queries that fan-out generates, which show zero volume in legacy keyword tools.
  • Prioritize topical depth and entity authority. The fan-out architecture cross-references multiple sources for consensus, so thin or duplicative content on a topic is less likely to be cited than comprehensive, authoritative coverage. See SEJ’s guide to <a href="https://www.searchenginejournal.com/the-new-structure-of-ai-era-seo/562116/">structuring content for AI-era search</a> for more on this.
  • Monitor Search Console data for AI-driven impression and click shifts on your highest-value pages. Early visibility into whether your content is being surfaced in AI Overviews or AI Mode can inform content updates quickly.
  • Do not mass-produce AI-generated content to fill gaps. Todorovic explicitly cautioned against multiplying content just because generation is cheap, noting it “is not going to provide a ton of value.” Focus on original insight and first-hand experience that AI models cannot replicate.
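One concrete way to make sections individually extractable, as the first bullet recommends, is structured data. The sketch below emits schema.org FAQPage JSON-LD, where each question/answer pair is a self-contained chunk; the helper function name and the sample question are placeholders, and FAQPage rich-result eligibility in Google is limited, so treat this as a structuring pattern rather than a ranking lever.

```python
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Build schema.org FAQPage JSON-LD from (question, answer) pairs,
    making each pair a discrete, machine-readable unit of the page."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }
    return json.dumps(data, indent=2)

print(faq_jsonld([
    ("What is query fan-out?",
     "Decomposing one complex query into parallel sub-queries."),
]))
```

The emitted JSON would go in a script tag of type application/ld+json alongside the visible FAQ content it describes.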

Looking Ahead

Todorovic’s disclosure fills in a key architectural detail that practitioners have been speculating about since AI Overviews expanded across industries earlier this year. The fan-out mechanism explains why some pages get cited in AI answers while others do not: if your content is not structured in a way that a sub-query can match and extract from, the AI layer may skip it entirely, even if you rank well in traditional results.

Several open questions remain. Google has not disclosed how many parallel sub-queries a typical AI Mode prompt triggers (third-party estimates range from 8 to 12, but these are unconfirmed). It is also unclear whether Google plans to surface fan-out query data in Search Console, which would give publishers visibility into which sub-queries their content was matched against. The state of AI search optimization in 2026 will depend heavily on whether that transparency materializes.

Meanwhile, third-party data on click-through rates from AI Overviews remains mixed. Search Engine Roundtable reports that CTR from organic results with AI Overviews is improving, which supports Todorovic’s claim that AI features drive new traffic rather than cannibalize existing clicks. But the practitioner community remains split, with some SEO consultants arguing that AI Mode becoming the default could reduce publisher click-through over time.

Full Transcript


0:00 Martin:
“Hello and welcome to a new episode of Search Off the Record, the podcast where we take you a little bit behind the scenes of Google search and hopefully have some fun along the way. Well, you probably have seen AI features in search and whenever I have to talk about AI features in search, I’m really, really happy that I got to see a presentation at Search Central Live in Zurich last year. And I think it’s time to open this up to more people. So I invited a guest today. My guest today is Nikola Todorovic and would you like to introduce yourself, Nikola?”

0:49 Nikola:
“Yes. Thank you, Martin. So I have joined Google about 15 years ago over here in the Zurich office. And for all of that time, I’ve been a part of the search organization, what used to be called Search Quality, nowadays it’s Search Intelligence. And I’ve been a part of the team that’s called SafeSearch. And for the last several years, I’ve been leading that team. And also in the last couple of years, I was more involved in the ecosystem work, working together with you, with the folks from Search Console, Google Trends, etc. And so have some more experience on that front as well.”

1:30 Martin:
“And we pushed you into the cold water of our stage in Zurich as well. And you had a really, really cool topic. You talked a bit more about AI in search. Would you like to tell us what led to that talk and what was the thinking behind it and what you want people to take away from that?”

1:47 Nikola:
“Yeah, well, clearly, AI is the topic that everybody’s talking about right now. A lot of people are wondering how is search evolving and what will be the future of search, the future of AI, etc. And from that perspective, I think it was valuable to bring that particular presentation. Now, the presentation that you refer to has showed a lot more things before the new wave of AI came in. I think that was the context that I felt it was helpful to present to the audience over here.”

2:21 Martin:
“Yeah, because I think everyone is talking about AI and search as if it’s a new thing, but it has been there behind the scenes, so to speak, before that, right? So what makes these AI features that people are using now and that are progressively enhancing the search experience for them so different from the features we had before? Would you consider these new features revolutionary and completely different from what we’ve been doing so far? Or is it more like an evolution of what we have been doing in the past?”

2:58 Nikola:
“I think the way they are being used, and I think it is a revolution that we’re speaking of right now. But clearly, in the whole process, there were small steps. But if you compare search now and search 10 years ago, it’s a very different product. So I would say, yes, it’s like a big step change. And it is absolutely changing the way the users are searching. So if you think about it, any feature is changing in some way. For example, if you bring more images, videos, et cetera, then it is bringing this kind of experience so people are going more to image search. For example, when we added what we call the image universal blocks on the main page. Now this new wave is also changing the way the users are searching because they are uncovering that search can actually answer to more complex questions. And for that reason, we do see that user queries, or if you call them prompts now, so they’re getting longer. They become more detailed, and the average query length is growing. So we do see the new traffic, and this new wave of traffic is a consequence of users being able to see, aha, there is something new I can do over here. So from that perspective, it is a revolution. But it is obviously a bunch of steps in between that happened and have been improving search all the time.”

4:22 Martin:
“Can you shed some light on the steps in between that you think are outstanding and probably have kind of paved the way for this?”

4:32 Nikola:
“Before I jump into that, maybe it will be interesting to tell you a little bit about the process, how the changes happen in search. And then I can add kind of what are the particular changes that reflected this AI revolution. So in principles, Google Search is a huge product. It has a lot of different components, and you, Gary Illyes, John Mueller, and others have been talking about this. It all starts with the web, with the crawling, indexing, the ranking components, and so on, the new features on top, et cetera. So we have thousands of changes in Google Search per year. I’m not sure how many, but certainly in thousands. We know that because we’re tracking all of them, and we are evaluating all of them. We’re measuring. Because the key point is, yes, we have new technology. We have things that are, for example, we know problems that happen. Very often, the changes that come to search are either a consequence of the new technology that’s coming up. And we say, oh, let’s use this new technology because it certainly will bring us some improvements. Or alternatively, we see how there is a problem. I’m typing this query, but I’m getting this result. It’s not optimal to see this. And when we do this, we, as engineers on search, we are making kind of an experimental version of Google Search that has something new, that has something different compared to the production version of Google Search. And we need some way to tell us, okay, what is better? We’re not just launching these 5,000 changes because some engineer or some product manager has an intuition. Ah, this probably will be better. So let me add this thing here, this thing there. No. So we have to start and see how I have to build a prototype of the new version. Thankfully, all the infrastructure at Google is really amazing. So it helps us run this very quickly once we have a good idea. 
So we can build a new version, run a comparison with the baseline, which is the production system, and we run those things called side-by-sides. So you’re getting random user queries that will see a difference between the production and your experiment. And we have published the guidelines that help human raters review those changes, those differences between the baseline and the experiment. And out of these reviews, out of these human reviews, we’re getting statistics. And this statistic is telling us the experiment is better than the baseline. And if it is, then, well, you will think, yeah, let’s submit it and commit the changes and go launch. No. We will have something called launch review. That is a process where the engineers are talking to the leads who have the decision-making power in the end and make a call, yes, this is better. And sometimes it can be that your overall statistics look like improving, but you have some really bad pattern of losses in your experiment. And well, if there’s a kind of reasonable way how to fix those patterns, we’re going to bring the engineer back and let them fix those patterns and make an improvement. So right now, I’m just talking about the kind of the standard good old process of the launch reviews and the new experiments and everything that goes in search. And this process has been going on and is still there. So let me know if this, what I was just explaining is clear.”

8:17 Martin:
“Do you have any sub-questions on that before I move into the kind of more AI territory? I’m just wondering if at some point we should break this out as a separate episode, because I think we’ve mentioned both the search quality rater guidelines and the experiments beforehand, but I don’t think we’ve ever gotten such a nice explanation of how the process works and how the different bits and pieces fit together. So that was really, really cool. Let’s take it back to AI now. So I’m guessing the AI features underwent more or less the same process, right?”

8:48 Nikola:
“Yeah, absolutely. They do. I have to say, yes, given that the world is obviously changing, the competitor landscape has changed as well. We also need to adapt to this new world. However, a lot of AI inside of Google has been developed for years before the generative AI came to play. As I mentioned in the beginning, I am responsible for the safe search engineering team, and we were one of the first places where Google was able to comfortably apply artificial intelligence slash machine learning models directly in search. The reason why it was not so easy to just apply it everywhere is because these models function like a kind of black box. You don’t always understand what’s happening underneath. It’s a complex set of, for example, neural networks or even the older, kind of simpler, the linear models are kind of the easiest ones to understand and to debug, right? Because it’s not just you can put your AI or ML system into search and you reap the most benefit from your side-by-side experiments that I just mentioned previously, and now you will get to something and launch it. But then you will have problems with that as well, because obviously the systems evolve, the searches evolve and so on, and then you will need to debug this and replace it. And this kind of replacement and changes is complicated. So the more you can understand how these things work, what signals are you using, what signals are important for the relevance, for the quality, for the safety of the results. So you do need to understand the system and kind of the more complex the AI or the ML systems are, then the more challenging it is. But a safe search has been one of the places where, you know, you could isolate outside of the main search ranking flow. You can isolate the systems that just do like process the images, process the videos, process the text, and just give you kind of a signal on its own, how explicit, for example, a result can be. 
And then the kind of the understanding of, let’s say that 10 years ago or 15, no, it’s more like 12 years ago when really the convolutional neural networks came in to help us understand the images better. And in many places, they were actually already doing things better than humans in understanding images. Then, you know, we could apply this as a kind of a standalone AI system that runs on a topic. And, you know, if we have problems, yes, the engineers in the safe search team had the intuition and could run an iteration and improve the neural network itself. But it’s a kind of a very isolated space, so you can more easily navigate. And then the rest of the search stack has still been on its own and running things. Along the way, there have been various new technologies. So starting with transformers, I think that that’s the biggest one like that in the end introduced all the gen AI world. But we were reaping the benefits of transformers on search long before all the stuff came in. And we were open about it. So we have announced publicly the systems like BERT, like MUM, and they have been able to transform the search and ranking into a much better place. And again, these systems were built in kind of an isolation as well, just like the safe search systems. The safe search systems were also built in isolation as the new signals. And these new signals were supporting the whole ranking infrastructure and was one more thing on top of everything else. Hopefully that makes sense.”

12:36 Martin:
“That makes sense. And I mean, if you look at it, the new AI features are kind of also, they are integrated, but they are also somewhat isolated, as in like there’s an AI overview that lives in its own space and AI mode is a completely different way of searching. So they are kind of also independent of the rest of the search, even though they use the rest of the search infrastructure and search stack and ranking systems, right? Would you say that’s the case as well? Or is that completely different from previous systems?”

13:08 Nikola:
“Yeah, let’s maybe start with AI overviews, because that’s where I think this holds the most still. Because if you think of AI overviews, like this is your normal search with perhaps a few fan outs. I just introduced a new term, I probably should. Please explain that. I think it’s for the experts out there, I don’t think it’s like probably many of them have heard about it. But anyway, a fan out is when you have your own search query, but then we might identify some additional search query that will yield the results that can be relevant for your original search query as well. And then we have like, we can fork and in parallel, do the retrieval for multiple search queries that can all come back into one original, more complex query that you gave in. And so, as I initially said previously, that we do see longer queries. This is also, we can help and understand the kind of more directions of what you were initially typing. So we launch multiple queries. Now we get all of this retrieved back. And then AI overviews is combining an interesting selection of these results and making a summary from what you can see in those results. So in a sense, the whole retrieval system, the whole ranking system is the old style, the old school. And that one is the AI overviews is a feature that stamps on top of this and operates on its own. This is the kind of the isolated space for the AI overview where it combines and it’s really fascinating what the language models have been able to do. But yes, it can combine like text that it sees on the sources, on the snippets, the titles, et cetera, and an additional context it can get out of those pages and then make a really nice summary in the end.”

15:19 Martin:
“I really like that. And I think that also goes back to what you said earlier, that the behavior changes and queries get longer and more complicated. Because I remember back in the days when, I don’t know, the word was still monochrome or something. When I searched, even on Google, I searched kind of keyword like restaurant, vegetarian, Zurich. And then over the years, that became more conversational as in like vegetarian restaurants in Zurich, which is already a change. And now, nowadays, I ask questions or I type in queries that are so much more vague and I still get usable results like based on dietary restrictions, which restaurants would you recommend now for a lunch in Zurich? And then you get like a bunch of stuff and it works because of these fan-out queries. It asks like a bunch of queries that I don’t have to ask myself anymore to get to the right result. And what I find myself doing is I’m asking questions where I don’t even know what a good question is. Right? Beforehand, you would sit in front of Google and think like, how do I even look for this? There’s an effect in, I don’t know, let’s say like there’s a physical effect. And I, ah, what was the name of that? So you would try to like find the name of the effect first and then Google for the specific effect once you had the name. And now you’re like, what is the physical effect that makes water glow when there’s radiation there? And then it kind of figures it out for you. And I think that’s one of the possibilities of features like AI overview, right? So from AI overviews, what was the motivation and the idea behind then going further towards AI mode?”

17:03 Nikola:
“Yeah. No, I agree completely. These are the, these are exactly, you know, the nice examples of the way how search has evolved with the AI overviews and eventually also AI mode, but all the capability of understanding your intention with some vagueness, or, I mean, even if you’re more detailed, yeah, I want a vegetarian restaurant that serves falafels and that has like, you should be able to get this, that’s open now, near me, like all the, all the kind of context that you’re getting it. True. I didn’t think of that, but yeah, even if you have like more details, you now get better results. Yeah. So, so either if you have vague, like a query or if you have actually more details, so you, both of this seems to work better and, you know, clearly this doesn’t stop there yet because what we’re seeing with the large language models, they’re able to gather a lot of information on their own. Right.”

18:03 Martin:
“And so they’re able to, like things like, what is the capital of France?”

18:08 Nikola:
“You don’t really need to kind of do the search for it. Right. You know, one part of like, it’s all in parametric memory of the, of the model. And so AI mode is able to communicate with you in, obviously it’s like even longer queries or longer discussions because it also enables you to do the multi-turn thing. And I mean, it’s, it’s, you have like different tools that do all that, right. So like with Gemini being like a Google’s version, but obviously others like ChatGPT, et cetera, have been there. And we do see that users like that. So the users like the kind of conversational aspect, the user like to, to communicate longer and so on. So AI mode is kind of search’s answer to that. And we have also seen, obviously not every user in the world is going to some of these chatbots. So, and obviously AI Mode is a part of search. So like the users of search might actually want to use that and see how it’s like. And you do have also the option to transition from the AI overviews to AI mode if you want to kind of explore more and have like a longer conversation and more detail. So I think it’s an overall really, really nice addition. And I like get myself like many times entering query and search or maybe directly into AI mode or like going to the AI overviews and say, maybe I want like a longer, longer conversation and then I’ll go to AI mode. AI mode is also still using the search, right? So, so it does have its own fan outs. It does have the linked results and citations as well. So it is kind of, in essence, still based on this kind of standard concept of how we do things on search, but it on its own, it has a kind of a bigger, well, like the infrastructure is new and I call the, it has kind of bigger ownership or like it’s no longer an isolation of it. It’s like the AI mode is kind of, it runs on search, but it’s also has like a bigger platform for its own. I’m still processing the fact that, yeah, of course it works in both directions.”

20:16 Martin:
“It also works with like, if you have more details and I just like the ability to have multimodal search and I think AI mode just adds to that really. And that’s pretty cool. But one thing that we keep hearing from the ecosystem pretty much at every event we do and, and it’s, it’s everywhere is how do we make sure that with AI features being part of search now that the ecosystem continues to thrive? And I think that’s an interesting challenge, but also there are like lots of opportunities thanks to AI features these days. And I know that we at Google try our best to go on this journey together with the ecosystem, but how, how do you see it from your perspective? What is it that we do to make sure the ecosystem thrives with, with these new features?”

21:16 Nikola:
“Yeah. I think the ecosystem impact and like, I think, as you said, I’ve been on two, three Search Central lives, like twice in Zurich, once in Madrid. This is clearly one of the, of the key question and you, you see them a lot on the social media as well. And I don’t think there is like a magic, magic wand that can clearly give the guidance. Okay. What do I do now? Like what would the SEO experts do now in the, in the new system? My kind of guiding principle or my like, the way I see here is that the site owners, I think they do need to continue making sure that their products, that their websites, their platforms are providing value to the user, because ultimately if you provide a particular value, then the users will continue coming to you and they will continue coming to you through Google as well. So if, you know, for example, you’re selling something, you have like a product or like platform, you have like some subscriptions, et cetera, you clearly will, if you are providing value to your clients, like they will continue coming to you. We were talking about restaurants, right? Obviously if you’re like putting a menu, et cetera, so yeah, we’ll, the users will eventually come as well to either your restaurant. So they will like go over and see. So in the AI-centric or AI-oriented system, I think those kind of bringing the values still continues. But just like in kind of the previous evolutionary or revolutionary steps, like on how the media has been disseminated, thinking about the newspapers, the radio, the TV, like the internet, all the, all the stuff, like all of these things also kind of remain to be in this world, but people needed to continue providing value because if you don’t provide value, nobody’s going to buy your like a newspaper or book or like nobody’s going to listen to the radio or to the podcast. But so it’s, I think everybody like, including all of us, like there, there’s a lot of question, right? Like is AI going to take our jobs and so on? 
I think we all need to continue thinking like, how do we provide value on top of all of this? And in many cases, this is about mastering the AI tools and being able to use them in the best possible way. So kind of, this is one of my recommendation to all the SEO professionals and site owners and like the whole ecosystem that they continue providing value, but then do not neglect the new technology and make sure you use it in the best possible way for you. Now obviously I don’t think we would over here recommend like the best possible way is to just multiply all the content and just generate because you know, it’s, it’s cheap and easy. But in general, like it’s not going to provide a ton of value. But if you know, you’re using it to like improve your grammar, to improve the style a little bit, make it kind of more interesting and so on. I don’t think that’s a wrong use of the technology, but then there’s plenty of ways. Okay. Maybe the AI can help you better understand your data. Maybe AI can help you understand the competition potentially better as well. And so on. So clearly this is something we can advise.”

24:34 Martin:
“I find that really interesting because I’m seeing a lot of excitement at the same time, a lot of wariness in the community and the ecosystem. And I think it is like that because on one hand it democratizes a lot of stuff that has been traditionally difficult to do or just cumbersome to do. At the same time, some people have misunderstood whatever it was that they are trying to accomplish. Or to provide, to be these cumbersome bits and only these cumbersome bits. So to give you an example, when it comes to, let’s say, writing articles about lifestyle or technical topics, because I’m more like a geek, so I’m reading more technical things. I really enjoyed when people were giving me like interesting details of technology from the days past, much older than I am. So I wouldn’t have any touching points with technology from the 60s or the 70s. And if someone was like, hey, did you know that the displays in old hi-fi devices worked like this? That was a really interesting article. But obviously they also went and explained what their experiences were with new technology as it came out and as they were provided with samples sometimes even. And that was interesting. And eventually that turned into them effectively, how do I put this nicely, putting words around spec sheets from manufacturers. And that wasn’t really the value that I was looking for. I’m not interested in knowing how many gigahertz a certain new processor has, because I can read that basically on the box. It says it on the box. You don’t have to tell me that this is now a three gigahertz processor. It says it on the box. Thank you. And I had a key moment when I was buying a joystick back in the days for a computer game. And I didn’t know what force feedback was. And that’s effectively like you have a different resistance and it might move and vibrate the device if there’s any shaking happening in the surroundings. And I didn’t know what that was. And it said on the box it has force feedback. 
And so I went to someone who worked at the shop and I anticipated them to be an expert on the topic. So I’m like, so this says force feedback. What does that mean? And he literally said to me, oh, that means that this joystick has force feedback, right? And this is funny, but I’m seeing this a lot in articles and on websites that they’re effectively not giving me any context. They’re just explaining what I can kind of glimpse and gather from the information that is right in front of me. And I think AI makes that easier. Like you don’t have to spend as much time to kind of like rattle off the spec sheets into a more readable human conversational form. But chatbots do that. So you don’t necessarily have to do that on your website anymore. But maybe you have tested it and you found it to be particularly good for your use case or particularly unfit for your use case. And then you can share this insight that AI doesn’t have. It doesn’t know. It hasn’t used the technology. It doesn’t know this. But you do. So you’re the expert. And I might be coming if you’re using your electronics the way that I use them, I might be interested in your opinion. I might not be interested in this other person’s opinion because they are using their electronics differently. But that’s fine because there are other people who are using their electronics the same way as they do, just not me. So I think there is still enough space online for different outlets and people and opinions and experiences. But I think we have to increase the level of our content to be useful and interesting for humans, from humans to humans. And I don’t think AI is going to take that away. I think AI is going to bridge that.”

28:23 Nikola:
“Yeah, I absolutely agree. I get often kind of nervous like when I see the kind of AI style reports like, you know, obviously internally we want to use these tools like they’re helping us. They help me understand the documentation more easily. Can we ask questions like, you know, notebook has been a fascinating tool that can like in a couple of minutes explain a complicated thing. So yeah, I do believe there is still a need for the human touch on top of all of that. I do think we need to understand the capabilities of the tools. But in the end, us providing the value, us making sure that yes, we’re bringing something to the table. And I think that’s like where we want to focus.”

29:09 Martin:
“But yeah, are you using the coding tools?”

29:13 Nikola:
“Interestingly enough, yes. And that’s exactly where this stuff comes in so handily.”

29:17 Martin:
“So the code base in Google is huge because it has a lot of stuff and you’ve seen it yourself. And just a couple of days ago, we stumbled upon a specific piece of code and it was going through like lots of layers of indirection and abstraction to do something. And we had a hypothesis where this is going in the end, but we didn’t know. So we asked our internal tool, it’s like, so we found this thing that does this thing, but where does the information actually go? So it was basically like we found this method that tells us how big an image is. Where does this information come from? Does it have to download it or does it use like the image in index for this? And we could have found that ourselves by going like 20, 30 minutes through abstraction layer after abstraction layer to finally get to where it’s coming from. Or we just ask the system and it’s like, oh, this is coming from here. And that was the right spot. And we’re like, oh yeah, okay. So it comes from where we expect it to come from. Cool. That’s good to know. So it is useful. It does help. It makes things faster, right? Doesn’t replace us making the effort of figuring out if what we’re doing makes sense in the first place and if it takes the right trade-offs and if it’s the right choice, those things I think are not that automatable or AI-able yet. Yet. Maybe yet. Yet. That might change, right? But I think these tools are useful, but yeah, you’re absolutely right. It depends on how you use these tools. But yeah.”

30:50 Nikola:
“And on top of that, I think there’s always risk of introducing a bug that you don’t understand and so on. I think the whole discussion of how is the software of the future is going to be maintained by the AI and will remain human maintainable or understandable. Right now, you still have a bunch of people who can understand what’s going on. We’ll see how that will evolve and will the system that is fully AI-run and AI-automated become at some point in the future more, we will have an edge over the current system architecture or style of building systems. We’ll see all that. But I think it’s important for now, at least for all of us in the engineering side to lean into the tools and make sure we continue using them and be capable with them.”

31:39 Martin:
“And I would love to hear from you all out there. Do you think, are you using AI for something that you wouldn’t have expected before you tried it out? Or are you skeptical? Have you made good experiences? Have you made bad experiences with AI? I’m just curious how you all out there are experiencing this shift and this time of exploration basically. Anyway, thank you so much, Nikola, for being here. I think that was really, really interesting. We touched upon so many interesting things from how we are running experiments to how AI evolved at Google into the thinking behind AI overviews and AI mode. And thank you so much for your time and thanks so much for being here.”

32:20 Nikola:
“Thank you, Martin. It was a pleasure joining you in the podcast.”

32:23 Martin:
“And all of you out there, if you’d like to hear more of this, please do subscribe. We are on all your podcast platforms out there, and we’re looking forward to hearing from you. So leave us a comment, leave us a like, leave us a subscription, and talk to you soon. Bye bye.”

32:38 Nikola:
“Bye, everybody.”

32:40 Martin:
“We’ve been having fun with these podcast episodes. I hope you, the listener, have found them both entertaining and insightful, too. Feel free to drop us a note on LinkedIn or chat with us at one of our next events we go to. If you have any thoughts, let us know. And of course, do not forget to like and subscribe. Thank you so much for listening and goodbye. Bye.”


Category SEO
SEJ STAFF Matt G. Southern Senior News Writer at Search Engine Journal

Matt G. Southern, Senior News Writer, has been with Search Engine Journal since 2013. With a bachelor’s degree in communications, ...