Ross and Scott interview Martin Splitt from Google’s Developer Relations Team. Martin answers SEO questions on JavaScript, the possible future of core web vitals, the likelihood of Web 3.0, and how easily his words are misconstrued – including a current mess he’s dealing with on LinkedIn.

 

 


 

Transcription of Episode 426

Ross: Hello, and welcome to SEO 101 on WMR.FM episode number 426. This is Ross Dunn, CEO of StepForth Web Marketing and my co-host is my company’s senior SEO, Scott Van Achte. Today, we have a special guest, Martin Splitt from Google! Just to give you a bit of an intro, Martin is a Developer Advocate at the Google Search Relations team in Zurich, Switzerland, and as he puts it, ‘a friendly internet fairy and code magician.’ I love that on your LinkedIn profile! I gotta say, first of all, thank you so much for being here in your evening. I know you have long days already, so thank you.

Martin: Oh, thanks so much for having me. It’s brilliant to be here and I’m really, really looking forward to our conversation.

Ross: Awesome! And you know, I’m gonna jump right into something silly…First of all, I love researching people before I have a call with them. I looked up a whole bunch of photos of you and stuff. I got to ask, I got a hell of a kick out of some of your photos. I like that you’ve got character, and I admire that. What is the story behind the unicorn outfits? 

Martin: That began in Montreal… I can’t remember, was it 2015 or something? It was a while ago, when someone said, “If I get you a unicorn onesie, will you do a talk in it?” And I said “Yes,” assuming that they couldn’t get a unicorn onesie on very, very short notice, because it was at the conference and my talk was basically the next day. Little did I know that there was a unicorn onesie available for me, so yeah. Then I realized that it’s quite a fun one, a fantastic icebreaker, because there are a few challenges for you as a speaker going to events. One challenge that I usually face is that oftentimes, especially in the developer world, there are these hyper-focused communities, like the Angular community, the React community, the back end… I don’t know, the Ruby community, the Python community, the PHP community, and these communities are usually relatively closely knit, and I have never been in one specific community only. I have been this weird traveler between worlds, if you want to say, and I have worked with so many different technologies, and I realized that the technology doesn’t matter for me. It’s mostly like, what problem am I trying to solve? What’s the right tool to solve that problem? Then I picked from a palette of different technologies to solve the problem at hand. Yet I solved problems that not that many people had solved before. At least I was the one who was willing to talk about it and the journey towards the solution, and so I ended up at a bunch of different events from very different communities. If they don’t know you, they don’t necessarily interact much with you. Also, if you’re a speaker in a community that already does know you, a lot of people feel intimidated, right?
So the unicorn onesie was fantastic and phenomenal because it just allowed me to just roam a community that didn’t know me and just get into conversations and meet people who were too shy to reach out to a speaker or who were not interested in talking to someone that they didn’t know and only wanted to hang out with the people they did know. I made a lot of connections at these events, thanks to the onesie,  so I kept doing that. 

Ross: I love it. That’s great, man. Well, there are brilliant photos out there. Love it. 

 

JavaScript

Ross: Let’s jump into this. As I understand it, one of your roles is to demystify Google’s treatment of JavaScript upon crawling, did I get that right?

Martin: Yes, not necessarily crawling, but fundamentally demystify JavaScript and its relation to Google search. 

Ross: Awesome. Can you elaborate on that for listeners who may not understand what JavaScript even does and why it is an important topic?

Martin: Absolutely. If you have been using the web in the last couple of years, the likelihood that you have been in touch with some sort of JavaScript web application is close to 100%. We are recording this podcast in a browser, so I see your face, you see my face, you hear my voice, the listeners hear my voice — that all happens in the browser, not in a traditional download-install kind of application or app on my phone or something. That is because our browsers have a programming language that allows us to access all these newfangled things such as microphones, cameras, the network, push notifications, make websites work offline; all this kind of behavior has been built using a technology called JavaScript. JavaScript, despite the name, has nothing to do with Java, so if you don’t want to make an idiot out of yourself, don’t talk about Java and JavaScript in the same sentence, unless you say, “JavaScript is not the same as Java.” That is, I think, the only acceptable sentence where that will work. But there is a high likelihood that you have used some sort of email program, calendar, or scheduling application, some sort of social network that allows you to do a bunch of stuff, like has these pop ups that pop in, allows you to post things without reloading or leaving the page, allows you to load more news while you scroll, these kinds of things. All this behavior, everything that is dynamic, has very, very likely been built in JavaScript.
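As a concrete illustration of the “load more news while you scroll” behavior described above, here is a minimal sketch. Everything in it is invented for illustration (the /api/news endpoint, the item shape), and the markup-building is kept as a pure function so the idea is visible without a browser:

```javascript
// Build the markup for newly fetched news items (pure, so it is easy to test).
function renderNewsItems(items) {
  return items.map(item => `<li>${item.title}</li>`).join('');
}

// Fetch one more "page" of items and turn it into markup. fetchFn is injected
// so a real page could pass a window.fetch-based function, while this sketch
// uses a stand-in.
async function loadMore(page, fetchFn) {
  const items = await fetchFn(`/api/news?page=${page}`);
  return renderNewsItems(items);
}

// Stand-in for the network call a real page would make.
async function fakeFetch(url) {
  return [{ title: `Story from ${url}` }];
}

// In a browser, a scroll listener near the bottom of the page would call
// loadMore and append the result to the list, with no page reload.
loadMore(1, fakeFetch).then(html => console.log(html));
// → <li>Story from /api/news?page=1</li>
```

The same shape underlies posting without reloading: JavaScript makes the network request and updates the page in place.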

The big question from SEOs then is, “But how does that impact my work on SEO on websites,” because JavaScript pretty much holds the keys to everything. It’s like the facility manager of the website, it can remove things from the website, it can add things that weren’t there before, it can modify the things that are on the website. So how does that get reflected in search engines? Is something that I add via JavaScript visible to search engines? How about, is something that I remove from the JavaScript actually being removed? What if the adding only happens when I scroll? What if it only happens when I click on a button? These kinds of questions were out there, I think, and they have been for a long time. At least Google’s answer has been, “It mostly works,” and that makes everyone very uncomfortable and very, very nervous, understandably so. If you go to a garage and get your car fixed, and you pick it back up, and you say, “Is it fixed now?” and they say, “Very likely yes.” That’s probably not great. So that’s when I got to work and basically started digging into what specifically works, what specifically doesn’t work? Where are the limits? Is that a technical limitation? Does that make sense? Do we need to change things? Is this a bug? And also, then catching up with documentation with our fantastic tech writer, Lizzi S., she’s absolutely amazing. She helped me to figure out what are the questions that people are asking out there. What are the answers to these questions? 

Martin Splitt’s Job

Ross: Wow. With all those questions, is this all you do? In your role, like, are you answering these questions all the time?

Martin: I do answer them all the time. Actually, literally on LinkedIn, someone is very, very angry with me right now. Not a LinkedIn employee, just a random person on the internet is very, very angry because I said something they misunderstood a little bit. Now, I tried to explain to them why what they believe is not quite right, but what they think I said is also not what I actually said. That’s a large part of my job and I really enjoy this. The other thing is, we often do a lot of work that is invisible to the external world, like the documentation changes. There’s a bunch of work with internal product teams when they want to launch a new thing or when they want to change something; we usually consult with them to make sure that this is fine for everyone in the community as well as Google. We do triage and fix bugs. I also run a few events, like the Virtual Unconference that we did last year, that we will do again this year. I used to help Gary from my team run the Webmaster Conferences around the world. Yeah, and then also often, I am just speaking at an event, or on a podcast like this. 

Ross: I feel for you. Again, I did a little research, listening to how often you have to deal with “they misunderstood me.” It must be very hard having to check every word you say. 

Martin: To be fair, if it is a complicated question, and the answer is complicated, I kind of get that. But it’s really, really unfortunate, because oftentimes there is a simple answer, like, this is a yes-or-no question, and I’m often like, “It’s not,” because if I say yes, for this context, for what you described to me, it is yes. But people will hear this and then think, “Oh, so generally speaking, the answer to this much broader question that we started from is ‘yes,’” and it’s not. The internet is a very diverse and wild place with so many different things working together at the same time to give us all these pretty colorful pixels on our screens. It very rarely is a clear yes-or-no answer, and that’s very, very hard to reflect. It’s unfortunate if I, for the sake of simplicity, say something broadly, and then people ignore all the details I put in this specific answer. It makes me sad. If it’s by mistake, at least, I’m kind of fine with it, and we can talk about it. If it’s by malice, because you’re trying to push an agenda or trying to mislead people into buying your product or service, then I have a problem with that. That’s why I don’t answer certain questions anymore, at all. 

Ross: That’s a good note, I forgot to mention that because you’re not in that area, you don’t answer questions about rankings, and that’s fine, this is good. It actually forces us to go into different areas, because our clients or customers and listeners all want ranking questions answered but there’s so much more. So, that’s what you’re here for. 

Martin: That’s great. 

Ross: Scott, you want to take the next one about JavaScript? 

JavaScript Mistakes

Scott: Yeah, let’s see here. I gotta find my notes here. What are some of the most common JavaScript mistakes that cause crawling issues for websites? Like what are some of the “Don’t do this if you’re a web developer.” I mean, it used to be, back in the day, if you had JavaScript-based header navigation, that was like, you’re dead right off the start. I know that’s not an issue anymore, necessarily but are there any things like that that we need to watch out for? 

Martin: There’s already multiple layers to your question, even though it sounds like a simple question, it’s not. When you say, “What are the most common JavaScript mistakes or problems or errors” versus “What are the most common things that JavaScript developers do wrong.” Those are two very different things. 

Scott: Yeah, not necessarily coding errors, like, missed semicolons and stuff, but like… uses, I suppose. 

Martin: What I often see is overly excited SEOs being like, “Our crawl budget is holy, so we need to protect it. So we are just going to completely disable and disallow our API.” Then they have some JavaScript that runs in the browser, and thus also runs in Googlebot, that actually needs to fetch data from the API to display any content, and then the content can’t be fetched, because you just disallowed Googlebot from doing that. So none of the content shows up. Then they’re like, “See, JavaScript doesn’t work.” And I’m like, “Actually it does. JavaScript tries to load content from your API and you have told us not to. It’s not JavaScript’s fault that that happened. That is actually kind of your fault.” That, I think, is the number one problem that we’re seeing with JavaScript. 
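The failure mode described here can be reproduced with a robots.txt as small as the following (the paths are invented for illustration):

```
User-agent: *
Disallow: /api/
```

If the page’s JavaScript fetches its content from, say, /api/products, Googlebot honors the Disallow while rendering, the fetch is blocked, and the rendered page comes out empty. The fix is to leave the API routes the client-side code actually needs crawlable, rather than disallowing the whole API path.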

New technology?

Ross: I understand that JavaScript is power- and resource-intensive… I have to admit, I have never been a programmer. I understand a tiny bit of code, other than, obviously, HTML and stuff that I know well, but I didn’t know that it was considered resource-intensive and that it, in many ways, is almost bad for the environment because it uses so much power now. Are there new technologies or languages coming that you think will supplant it or make it better? 

Martin: That is really, really speculative and really, really hard to answer because I don’t know. I think if you went to the year before the iPhone launched and asked anyone whether we would ever just carry around a phone, and people would not have laptops and desktops and would just work from the phone for what they’re doing, and they’d be traveling with it, and they’d do pretty much all their day-to-day activities with it, people would probably have said ‘no’. What I do know is that there have been various attempts at replacing JavaScript and none of them have ever even come close to that goal. Which is not to say that maybe our consumption, or maybe the way that we interact with the web, might change in the future. Was it 2010 when someone said something like, “Oh, yeah, the web is dead and we will not use a browser”? I am definitely the example of that not happening, and the fact that we are recording this, at least on my end, in the browser right now tells me that the web might not be fully dead. I also remember Taco Bell, at some point, stopped having a website, and then they very quickly revised that, if I remember correctly. The messages about the web’s death are vastly exaggerated, and I don’t know if JavaScript will be around in five years or ten years, but I’m pretty sure something will continue to allow us to build on the web as the application platform. It might as well be JavaScript.

Ross: So nothing’s on the radar, that’s good to know. I’m just curious. 

Scott: Flash isn’t making a comeback? No? Okay.

Martin: No.

General SEO Questions

Ross: I’ve seen very old SEO information persist online and we see a lot of misinformation. We always seem to have to correct this stuff from prospective clients. I’m wondering, is the same true for some of the Blackhat techniques you guys see? For example, do you guys still see cloaking? I can’t even imagine thinking like that anymore, but do you guys still see that stuff? 

Martin: Yeah, that still happens. Same with link farming or guest posting; all this kind of stuff still continues to happen. Everything takes a really, really long time to get out of people’s minds. And I think, especially with SEO, the problem is that people are like, “Oh, I heard someone, a friend of a friend of a friend, said that it’s X, Y, Z” and it’s very, very hard to figure out if it’s true or not, because then you go online, and you search for that particular piece of information, and there’s stuff from ten years ago, or even five years ago, or three years ago, or even yesterday, where someone’s like, “I still think that’s true.” As I said earlier, this specific exchange that I have with someone online right now is literally someone saying, “I don’t like what you said. This here, I think, is wrong. I completely disagree.” And I say, “That’s how Googlebot works.” Then they say, “No, that can’t be true. It can’t work like that. That’s wrong,” and I’m like, “What do you want to hear from me?” I can only tell you how it is. If you disagree with this, that’s fine by me, but then please don’t talk to me again, because what’s the point of this exchange? And then that keeps happening. 

Ross: Well, I’m sorry, on behalf of…

LinkedIn Mess

Martin: No, it’s fine. I can roll that out here while we’re at it, if you want to. I can elaborate on the specific problem and the specific point that was made here, if you’re interested in it. 

Ross: Sure. 

Martin: It has something to do with SEO, so there you go. In a recent podcast, we talked about a hypothetical search engine: what if Gary, John, and I from my team were building a search engine? That makes things easier, because then we can talk about hypothetical things without necessarily meaning what we actually do in Google Search. We call this new search engine “Steve.” With Steve, we were wondering how we would make sure that we are presenting good content to users. It’s actually kind of answering ranking questions, if I’m honest, because that’s what we try to talk about. The thing is, obviously, ranking is very, very complex and there’s no way that we can talk about how Google Search actually does it, but there are a few fundamental thoughts and principles that we can talk about that definitely ring true and, for the hypothetical search engine, also make sense. So if I were to build a search engine today, I would want all the good content, or the relevant content, to be available to my users, but that poses a range of problems. One of which is: what if there is a website with fantastic content, but terrible, terrible, terrible HTML, to the point where it’s even invalid HTML? Let’s say there’s a paragraph that hasn’t been closed, or everything is just a huge div with lots of text in it, and some of the text has been bolded, some of the text has been made larger, but none of it uses any reasonable HTML markup. If you looked at it as a person who knows what HTML is supposed to look like, you’d be like, “Oh, my God, this is utter garbage,” but the content is good. So as a search engine, if I wrote a search engine today, my search engine has two choices. 

Choice number one, it’s invalid HTML – I can’t parse this so we just throw it away and we’ll never show it to a user, full stop, done. 

Or I say, “Well, we can’t say for certain what is and isn’t a section, what is and isn’t a headline, because as far as we know, it’s all a huge blob of text, because someone has not taken care of actually writing proper HTML. It’s a body tag. Then in the body tag, all of it is just text. There’s not even a div or a p in there, it’s just a huge mess of text. Then there’s some font tags, or some strong and emphasis tags and stuff like that…”

Ross: So it’s built with FrontPage? 

Martin: Yeah, FrontPage 95. Fantastic, good choice. It looks great for the user and it has all the great information but it’s just like, “We can’t parse this, we don’t know, we throw it away, we don’t show it to the user.” Or as I said, the alternative is that we can say, “Well, we don’t know for sure, but at least the information in it is good. Like the content itself answers the query intent perfectly, and we have a certain amount of certainty that this might be a headline because there is this bold and bigger text that looks short and looks like a headline, and from the way of writing, it resembles a headline, and then there’s a bunch of non-bold text below it. Then there’s another bold piece of text below that, and then, you know,…” and then it continues, it looks like a site structure but it isn’t, and it’s terrible. But we can say, “With 10% certainty, we think this is the headline, and this is the section that belongs under the headline. And with 25% certainty, this other thing is also a headline. Then this other thing here is, again, a paragraph of text.” While the HTML does not allow us to be sure about certain things, which we could be if it was written properly, if it was programmed properly in valid HTML, we can still extract information, we can still make assumptions about what the author of this page meant, and we can still present this to the user. If I, for my search engine, had to choose between not showing good content and showing good content that is potentially not as well structured as it could be, just because I need to make predictions and assumptions and guesses, I’d rather go for “I guess this is good” and show it to you rather than just discarding it entirely, and that’s what I said. That’s it, fundamentally; I made it a lot shorter and more concise. 
I said, “So if you are writing an HTML page that’s not valid HTML, we still want to index it, because the content might be really, really good and useful for users.” That’s what I tried to say. That is what I meant, and what you just heard was the much longer explanation. Then there’s what I actually said, and I gave a follow-up on that sentence afterwards where I said: so we have to deal with that, but obviously, the moment we deal with something where, well, the semantics of the HTML aren’t clear, we might make mistakes, or we might misunderstand something. So you know, it might go wrong; we might not show the information that you wanted us to show, because we don’t know if it’s good or not, because we have to make guesses, which we wouldn’t have to make if you had given us proper HTML in the first place. This person’s like, “No, you have to write perfect HTML. Otherwise, Google can’t work with it.” I’m like, “First things first, I didn’t say that about Google. As the very first thing, I did not say that Google does this. In fact, fun fact, it does. If your page does have HTML validation errors, we will try to understand what you mean.” But if a machine tries to understand you… if you’ve ever used AutoCorrect and said “duck you” to someone, or worse, then you know that machines understanding things does not always go well. So obviously, you still want to write valid HTML, that should be your goal, but if something works really, really well in a search engine of your choice, then it’s not something to freak out about. “Oh, this validator shows these five errors.” Yeah, but that page gets the traffic and rankings that we want, so, moving on, there are better places to put your energy. But again, that assumes that everything seems to be working. Just because the tool says it’s a problem doesn’t mean that it’s actually a problem. That’s the point I’m trying to make. 
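The guessing Martin describes can be toy-modeled. The sketch below is not Google’s logic in any way; the confidence numbers just echo the made-up 10%/25% figures above, and it only shows the idea of extracting likely headings from a blob of invalid markup instead of discarding it:

```javascript
// Hypothetical lenient "indexer": guess heading candidates in tag soup
// by treating short bold/font runs as probable headlines.
function guessHeadings(html) {
  const headings = [];
  const re = /<(b|strong|font[^>]*)>(.*?)<\/(?:b|strong|font)>/gi;
  let m;
  while ((m = re.exec(html)) !== null) {
    const text = m[2].replace(/<[^>]+>/g, '').trim();
    if (!text) continue;
    // Short bold runs look more like headings than emphasized body text.
    const confidence = text.length < 40 ? 0.25 : 0.1;
    headings.push({ text, confidence });
  }
  return headings;
}

// A page with no real structure: one blob of text with bolded "headings".
const soup = '<body><b>Cat Food Guide</b> We review many brands of cat food ' +
  'here. <b>Wet vs. Dry</b> Wet food has more moisture, dry food keeps ' +
  'longer…</body>';
console.log(guessHeadings(soup));
// → [ { text: 'Cat Food Guide', confidence: 0.25 },
//     { text: 'Wet vs. Dry', confidence: 0.25 } ]
```

A strict parser would reject this page outright; the lenient reading recovers a plausible outline, at the cost of being a guess that can go wrong.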

Ross: That’s good to know. I don’t see anything wrong with that. I mean, it makes good sense. I think we’ve always assumed it… well, we know it because we see a lot of sites out there that are garbage, that do have good content and still show up. And we’re like, “Oh, it was made in ’97 but it’s got good content so.”

Scott: Okay, I think it makes sense. 

Martin: And yet it makes people angry on the internet but I think that everything makes people angry on the internet so there’s that.

Ross: There’s a lot of anger out there, unfortunately, in so many different ways. 

Google’s Crawling & Understanding Difficulty?

Ross: Is there any content out there that Google still has difficulty crawling?

Martin: Difficulty crawling — not really.

Ross: How about understanding?

Martin: Understanding – that’s a different question. For those out there who wonder why I’m very specific about this, there are distinct phases of crawling, indexing, and ranking in any search engine, really. Crawling is the intake part: we need to basically download the stuff from your website to process it. The processing, understanding part is indexing, where we try to figure it out: this website is about cat food and not dog food, so we probably want to show it for cat food queries and not for dog food queries. Then last but not least, we have a lot of documents, a lot of websites out there, talking about cat food. If you are asking for cat food, in some sort of capacity, we will have to look through all of these potential candidates, put them in an order, and present them to the users. Then there’s actually a last step, called serving. Once things have been ranked and put into a list, fundamentally, we need to show this list to the user; that’s the serving part. Each of these phases comes with its own challenges. With crawling, the content is not that much of a problem, because as far as crawling is concerned, we either download stuff from your website, or we don’t. There’s not that much that can… I mean, there’s a bunch of stuff that can go wrong, but most of it is transparent, and we handle it internally and you don’t have to deal with it. With understanding, it’s trickier, because there are so many factors that go into it. So I think one of the things that is really, really hard for Google Search, specifically, is when the intention of the content is unclear. If you have a long rambling website that talks about 10 different things on the same page, what is this about? We’re reasonably good at figuring it out, but sometimes it does go wrong and then people get very, very upset about it. 
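The four phases map onto a toy pipeline like this. All of the data, scoring, and names below are invented; real crawling, indexing, and ranking are vastly more complex, but the hand-offs between the phases look like so:

```javascript
// A fake "web" standing in for sites to crawl.
const web = {
  'https://example.com/cats': 'best cat food guide and cat food reviews',
  'https://example.com/dogs': 'dog food buying guide',
  'https://example.com/mixed': 'cat food dog food toys beds travel tips',
};

// Crawl: the intake part, downloading the content.
function crawl(url) { return web[url]; }

// Index: a crude stand-in for "understanding", counting words per page.
function index(url, text) {
  const counts = {};
  for (const word of text.split(/\s+/)) counts[word] = (counts[word] || 0) + 1;
  return { url, counts };
}

// Rank: score every indexed document against the query terms and sort.
function rank(docs, query) {
  return docs
    .map(d => ({
      url: d.url,
      score: query.split(' ').reduce((s, w) => s + (d.counts[w] || 0), 0),
    }))
    .sort((a, b) => b.score - a.score);
}

// Serve: present the ordered list to the user.
const docs = Object.keys(web).map(u => index(u, crawl(u)));
const results = rank(docs, 'cat food');
console.log(results.map(r => r.url));
// → [ 'https://example.com/cats',
//     'https://example.com/mixed',
//     'https://example.com/dogs' ]
```

Note how the rambling "mixed" page still scores in the middle: with such a crude understanding step, it is easy to see how a page covering ten topics confuses indexing.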

Duplicated content

Martin: Another thing is not a difficulty but a general problem; it’s actually not a technical problem, and not a Google problem per se: what happens if someone clones content? What if someone makes a copy of someone else’s content? There are very clear legal implications and I’d rather not go into more detail, because I’m not a lawyer. But fundamentally, the question then is: how do we determine who is the original author? We can’t. People attribute it to Google, like, “You should block this because they stole from me,” and we’re like, “We are not a lawyer. We are not a law enforcement agency. You need to figure this one out.” There’s the DMCA process that you can kick off, and then you can come back to us with the DMCA measures and we deal with that, but who are we to judge? We have no legal or ethical right to make judgments on that. So that’s a very, very tricky one. Again, it’s not a technical problem. That’s a very legal and societal issue with stolen content. I know that NFTs suffer the same, right? It’s like, “I bought this NFT of this thing.” “Yeah, but the person who just made the money actually didn’t make the piece of art. So good job.” Yeah, so it’s tricky. That’s the tricky one. 

Ross: If you’ve got duplicate content out there or someone has copied your content. If yours was indexed months earlier, there’s a certain amount of… I mean, yes, a person copied it but that’s a whole ‘nother thing, DMCA, all that stuff but Google will give credit… 

Martin: We’ll definitely try to figure that one out but what if we just discovered the copied version before your version? That’s unfortunate. That’s very, very unfortunate. If I give you two websites out of nowhere, out of the blue, I just show you two pages and I ask you which one came first, you’ll be like “I don’t know. I can’t say which one came first.” It’s very much guesswork for us. 

Google and 1-page websites

Ross: You were mentioning how there’s a whole bunch of stuff on one page and it’s sometimes hard for Google to figure out this muddle. Again, I know, people ask me questions all the time. Our favorite answer is “it depends.” It always depends. But single-page websites kind of frustrate me in that way, when they get given to us to market or optimize. I’m like, “No, no, no, let’s break this into pages. It’s just so much easier to understand contextually.” How does Google handle those? 

Martin: That’s a fantastic question. I will not answer “it depends” this time, I promise. The way that we handle it is we basically look at the website from two perspectives. One, obviously, is the HTML that you send us over. So if all the content is in the HTML, that’s one thing. But we also look at what a browser, or a user’s browser, or a user in the end would see. By ‘see,’ I don’t necessarily mean it’s on the screen, but more like, what I can potentially see on this page. In a single page application, at least if it’s done more or less right, the way that it should work is that not all the content is in the initial HTML that gets sent from the server. So you would see a certain URL generate certain HTML with certain content in it; whether the content is loaded via JavaScript or not doesn’t really matter for us, as long as we see some content in the final state that we go on with and index. Then you would probably see a single page application and say something like “About” and “Products” and “Contact us” or something like that, or “Our history” or whatever… “Our team,” “Testimonials.” If you have these on separate views, as they’re called in single page applications, and under separate URLs, then each URL would give us different HTML in the end. Again, whether that’s done by JavaScript or not, it doesn’t really matter. Then we would just figure out, “Okay, so this URL is this content, this URL is that content.” And that would be fine, that would just be a traditional page. It gets trickier if people are then like, “No, it’s actually not that much content,” so they load all the content into the HTML. But even then, it’s not as big of a problem as people assume, because we look at what does get shown, and if it’s implemented more or less correctly, then you see a bunch of content and you don’t see a bunch of other content. 
So we can say, “Well, relatively speaking, for this URL, this content seems to be more important; with the other URL, the other content seems to be more important, because it gets positioned differently and it’s actually visible on the screen. Thus, we can still…” Again, now we are guessing, right? Beforehand, we weren’t guessing: this URL yields the content about the products, this URL yields the content about the team, this URL yields the content about the testimonials. Now, each of these URLs yields the same content, at least in the HTML, but the rendering, the thing that you see on screen, does not. This URL shows the products, this URL shows the testimonials, but each contains all the content. So we then make guesses, and we might guess like, “Okay, so maybe this URL is actually this kind of content, that URL is actually that kind of content.” 
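The two situations Martin distinguishes can be sketched as two toy rendering strategies (all routes and markup are invented). In the first, each URL yields only its own content, so no guessing is needed; in the second, every URL carries all the content and only visibility differs, which forces a crawler to guess importance from what is shown:

```javascript
const views = {
  '/about': '<h1>About</h1><p>Our story…</p>',
  '/products': '<h1>Products</h1><p>What we sell…</p>',
  '/testimonials': '<h1>Testimonials</h1><p>Kind words…</p>',
};

// Style 1: the final HTML for a URL contains only that view's content
// (whether it was server-rendered or loaded by JavaScript does not matter).
function renderPerRoute(path) {
  return views[path] || '<h1>Not found</h1>';
}

// Style 2: all content ships on every URL; routing only toggles visibility,
// so a crawler must infer importance from what is actually visible.
function renderAllAtOnce(path) {
  return Object.entries(views)
    .map(([p, html]) =>
      `<section style="display:${p === path ? 'block' : 'none'}">${html}</section>`)
    .join('');
}

console.log(renderPerRoute('/about').includes('Products'));   // false
console.log(renderAllAtOnce('/about').includes('Products'));  // true, but hidden
```

In style 1, each URL unambiguously "is" its content; in style 2, the visible section is the main signal of what the URL is about.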

Carousel Content on Google

Martin: Then there’s a follow-up question that gets me into hot water again, and people say, “Okay, so you can figure out what content, or you try to guess the importance of content, based on whether you show it on screen or not, but what about carousels?” To which I say: that’s a fantastic question, but look at it from a user’s perspective. If you have content that you really, really, really care about, but I have to click on the little arrow or swipe on my phone three times to see it, how much importance will I give it as a user if you hide it away like that? The search engine does the exact same thing. If I come to this page and I don’t see it, it is maybe not as important as the other thing, and that sends SEOs into a spiral going, “Oh my God, all our carousel content will not be ranking.” It might. It might very well rank nonetheless; it might just not be considered as important as whatever is immediately visible. So it’s all a relative game. Obviously, if there is something that is relevant to one search query, we call it search query A, and that is visible immediately, and then there’s something that is relevant to search query B, and that is hidden in the carousel, then for search query A, you might still rank well with the page, and for search query B, surprise, surprise, you might still rank well as well, even though the content is hidden behind the carousel. If the content is good enough, and there’s not that much competition that makes it more prominent, then you have good chances that it’s not a problem, actually. Rankings change all the time, because people change their websites all the time. So even if you don’t do anything, other websites might disappear, they might change their content, they might change the structure of their website, they might do something that makes us not index a piece of their content anymore, and thus you keep moving around. Our algorithms keep changing on a day-to-day basis. 
Even if everyone just stops touching their websites for a week, you might still see ranking movements. So looking at ranking is not the best thing you can potentially do, I think. 

Ross: And I didn’t even bring up ranking!

Scott: There we go. There’s so many other metrics you have to look at. Ranking is certainly important in a lot of cases, but it’s certainly not the be-all and end-all. If your sales are going up, who cares how you rank? 

Ross: Yeah, exactly. 

Scott: Not entirely, but you know, you want to look at your ROI and your paycheck at the end of the month. 

Ross: Yeah. Alright Scott, go for it. We got another one. 

Overthinking Content Optimization

 

Scott: Do you find that the internet community is still overthinking a lot of things when it comes to optimizing content? 

Martin: Yeah, they do. There’s a lot of thought being put into the completely wrong thing and it’s like, “Oh, do I need to have a social media profile?” Well, not for ranking purposes, but it’s probably still useful because people see your stuff there and if they like it, they might click through to your website, and that’s a good thing, I guess.

  • Oh, my God, I have duplicate content warnings.
  • So what? Is that even content that you care about? Is the content indexed?
  • Yes. But there’s a duplicate and that doesn’t get indexed.
  • Yeah. But do you care about that URL?
  • No.
  • Well, then what’s the problem? Like, you don’t need to worry about that.
  • Oh, my God, crawl budget. I heard about crawl budget. I want to improve my crawl budget.
  • How large is your website?
  • It has 12 pages.
  • No, you don’t need to worry about crawl budget.
  • Or even like, I have 100,000 pages.
  • You probably don’t need to worry as much about crawl budget as you think you do.
  • Or like, oh, Martin, we have this huge SEO problem on our website, the rate of crawling has decreased.
  • So? Did you introduce as much new content as you used to?
  • No.
  • Okay, did you update any of your content?
  • No.
  • Then why would we crawl?

Why would we crawl? Crawling means we ingest your content. We take it into our pipeline and put it into our database, fundamentally, so that it can be shown in search results. Once we’ve done that and we have established this, take the Wikipedia article on marmalade: how often does that change? I actually don’t know the answer so I’m just gonna Google it real quick. I’m a huge fan of Eddie Izzard and they made this joke, “You know, these Wikipedia rabbit holes that you fall into?” You start like “Oh, what is this and that?” and then you see something related to marmalade and that’s the link, and then you click on it, and like, marmalade was invented by Mr. and Mrs. Marmalade. Then you keep clicking on things. Here we have the Wikipedia page and it has a history. There has been a change, surprisingly. Okay, that’s a terrible example, I realized, because it actually has a lot of changes, which I didn’t see coming. There’s one on the 20th of December. There’s a second one on the 20th of December last year. There’s one on the 12th, one on the 10th of November, the 30th of October, the 27th of October, the 3rd of October. That’s many more changes than I expected.

Scott: I think I’m going to have to check out trends on marmalade, maybe there’s some big marmalade news out there!

Martin: That’s maybe the ranking secret that everyone has looked for, I guess. 

Ross: We’re gonna have a great ranking for “marmalade” now on our show notes. 

Martin: I’m so sorry. I’m looking for something else, like a random Swiss town…There was one change in November last year, one in September, and then that’s pretty much it for last year, I think. Yeah, that’s pretty much it for last year. In 2020, there was a change in December, one in June and one in April. Why would we crawl this every week? It doesn’t make sense. It absolutely does not make sense. Why would we crawl it multiple times a day? It does not make sense. So we would not crawl that. If you have a website that has been around for a year and there’s nothing that changes, why would we crawl? So then people are like, “Oh, shit, that’s a problem. So we need to create a blog and we need to update that daily.” Yeah, but what if this blog is really, really, really, really irrelevant to everyone outside of your company, so no one will want to read it, so no one will search for the stuff that you put on the blog, so then what’s the point? 

  • Yeah, but the crawling has gone up. 
  • Yeah, but did your rankings change? Did your impressions change? Did your click through change? 
  • No. 

Well, then maybe that was a waste of your time. I’m not saying that a blog is a waste of your time. If you have really useful content and there are a lot of things that need to be discussed, and you have an audience that is really, really looking for all these answers, then by all means, create a daily blog post or a weekly blog post or a monthly blog post, because this might get you in. I don’t know, if the marmalade community has like smashing events where they get together and taste each other’s marmalade, then go for it, go write about every event that you can possibly go to to taste people’s marmalade, review every new marmalade that gets on the market. There might be people who are like, “Oh, I’ve seen this new marmalade. I wonder what it tastes like.” Then they find your review and they’ll be really, really happy and they might buy it from your store. Who knows? But it always depends, right? If I’m making this website for a cafe around the corner, then I don’t need a blog. I might, maybe I can get something with a blog, but maybe if I’m just putting in there “Today, our opening hours are..” then what’s the point? That should be on the website, that should be right there. This would be information that I can find more easily. That shouldn’t be like a daily blog post kind of situation. 
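
A rough way to picture the scheduling logic Martin is describing: crawl a page about as often as it has historically changed. This is a toy sketch, not Google’s actual scheduler; the function name, the halving, and the one-to-ninety-day bounds are all invented for illustration.

```javascript
// Toy illustration (not Google's actual scheduler): pick a recrawl
// interval from how often a page's content actually changed.
// `changeDates` is an array of timestamps (ms) of observed changes.
function suggestCrawlIntervalDays(changeDates, { minDays = 1, maxDays = 90 } = {}) {
  if (changeDates.length < 2) return maxDays; // no change history: crawl rarely
  const sorted = [...changeDates].sort((a, b) => a - b);
  let totalGap = 0;
  for (let i = 1; i < sorted.length; i++) totalGap += sorted[i] - sorted[i - 1];
  const avgGapDays = totalGap / (sorted.length - 1) / 86_400_000;
  // Recrawl somewhat more often than the page changes, within sane bounds.
  return Math.min(maxDays, Math.max(minDays, Math.round(avgGapDays / 2)));
}
```

Under this toy rule, a page that changed every day would get a one-day interval, while a page that changed once a year, like the Swiss town article, would be capped at the maximum.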

Ross: When we talk about blogs, we talk about authority plans and ensuring that whatever you write, it’s not a waste of time. If you’ve seen content online that’s proven to have had legs, people liked it, commented on it, take that. Get that idea and make an ultimate version of it, write that. Even if it takes you two months, it’s going to be kick ass, it’s going to do very well. 

Martin: Yeah. Also in general, figure out what your audience wants and needs, and then fill that need.

Ross: Exactly. So simple, isn’t it? That’s what I love about it. 

Martin: It’s absolutely not. That’s the problem. So I come from a developer background, and for me, at the very beginning of my career, SEO was a checklist. “Okay, so I’ve written valid HTML,” I’ve been one of these people, “I’ve written valid HTML. My website is fast. Caching is good. All images have alt texts. Yeah. SEO solved, big checkmark, tick that off the list, done.” Then no one ever came through search, and why is that? That’s weird. The website looks fantastic from a technical point of view. We have all the content that the customer asked us for. Then I talked to our SEO team and they explained to me, “No, that’s not it. The problem is that the content is wrong. The structure doesn’t make sense for people. We tested that with actual customers and potential customers, and we need to change the content. Yeah, sure, your work as a developer is done here. The technical foundation is solid, move on. Now it’s in our hands.” That’s okay, that’s fine, but it’s never that easy.

Core Web Vitals 

 

Scott: Can we talk just for a sec, even though everybody hates it, core web vitals. I know when Google first announced that it was coming, everyone freaked out. We all thought it was going to be this catastrophic, Earth-ending, ranking-shattering event, and really, it kind of turned out that if Google hadn’t told us it was coming, maybe nobody would have noticed that it happened. Then we see in the past, things like SSL and site speed and mobile friendliness, all kind of start like that, you know, they’re a signal that doesn’t do anything. It just grows and grows and grows and grows. Do you think core web vitals will continue to grow in importance and evolve? Do you have any insight into the future of core web vitals? Will we ever have to freak out about it? 

Martin: Hmm. That’s a really tricky one. If you think about it, a lot of people had their web experience turned sour by the steady decrease in web performance. As in, people were just cramming together technologies and making “hodgepodge solutions” that just basically shifted the burden of work to users’ devices. That is an ongoing trend in all software engineering, unfortunately. You see that we used to have a Pentium 2 processor at like 400 megahertz and were able to make 3D animated movies with that. Today, try to play a video on YouTube on an old iPhone or an old Android tablet, or something that is like seven years old, you might have a bad experience and that’s really, really unfortunate, because the devices are super powerful. The devices are like a million times as powerful as what we used to fly to the moon and yet, it seems that software gets slower and slower over time. That is because developers could get away with it, as computing power doubled every… what was it, 18 months or something, in the past. That hasn’t been true in the last couple of years, but you could get away with a lot of non-optimal things. And non-optimal software that runs on a user’s device is better than the perfect software that does not, because I haven’t finished it yet. So this mentality has also come to the web, I think. We are seeing that especially with JavaScript; you can do a lot of things inefficiently with JavaScript if you’re not careful enough. You didn’t have to be careful for a really long time, because everyone just browsed on their iMacs and on their latest desktops and Windows gaming PCs and whatnot, or Linux machines that they’ve fine-tuned to what they need and what the hardware can give them. Now we are on these phones, and we are stuck with the same phone for a longer period of time and now we actually notice it. We notice how hardware hungry the web and other applications have become. So there needed to be a counter initiative. 

We recently moved, and we needed to order furniture. When I saw a piece of furniture that I liked, I was very likely to buy it from the store that had a website that didn’t suck. Absolutely! Not because I’m a geek, not because I’m a nerd. My wife did the exact same thing. Our friends did the exact same thing. If I’m trying to buy a sofa, and I go to your website, and I see the sofa I want and I click on it, and while I move my finger towards the screen, the page shifts and then there’s a nightstand, and I tap on the nightstand and then the nightstand comes up, and then a pop up opens asking if I want their newsletter or not. Then another pop up opens asking if I want push notifications or not. Then another pop up opens saying, “Is it okay if we save your cookies on your device?” I am not going to buy that sofa from you. I’ll go somewhere else. But besides the interstitials thing, there hasn’t been much in terms of search metrics or anything that we could use to get that feeling, that potential experience of the user, into search and into ranking. AMP was an interesting experiment towards making that happen, by saying, “Okay, so we give you AMP, we give you a limited subset of what the web can do. Can we make that as fast as possible? And we even take away the server-side potential problems by caching it in between you and the user, so that we can make sure that it’s as fast as potentially possible.” We wanted to see how that goes and we noticed that there was a preference for the faster versions. I’m not saying it was perfect, but it did prove a point: performance is an important part of satisfaction or experience, whatever you want to call it. Obviously, it’s not just that, right? It’s also “Do I like the color scheme? Is it available in a language that I speak?” Because the most fast and beautiful website is pointless if I don’t speak the language that it’s in; then it doesn’t serve me either. 

But language, we already know, we can figure that one out. Server-side speed, we can figure out, because when we crawl, what we do is contact your server and then basically act as if we were a device from a user and receive the data, so we know that. What we don’t know is what happens afterwards. How quickly does the content show up? Does it jump around? Does it let me actually interact with it? Because that’s the other thing that is really, really, really annoying. If you’re in a web shop, and you said, “I want to order this” and nothing happens, and you tap it again, like, seriously, and then you tap it again, and then you tap it again, and then you see that it added five or whatever it is to your cart. It’s like, “No, I don’t want five, I wanted this one thing,” and then you remove one and nothing happens, and you remove one and nothing happens. Then afterwards, you’re left with an empty cart, because it has removed all five, because you clicked five times because it didn’t respond. Frustrating. So we came up with a set of metrics that reflects exactly that. How much do things move around… oh God, what’s the name? I keep forgetting the name of that. Sorry. 

Scott: The cumulative layout shift? 

Martin: Yeah, correct, CLS. Then we have: 

  1. How fast does the main content of the website pop up visually? That is the Largest Contentful Paint.
  2. How quickly can I interact with it? That is the First Input Delay.

These are the three metrics that we have today. When we launched this, I think two years ago at this point… or was it last year? The pandemic makes everything go weird. 

Scott: Yeah, I don’t know anymore. 

Martin: I can’t remember, it was launched at some point… in the past. We launched these three metrics, and we got feedback. We got positive feedback, people saying, “Oh, yeah, this actually matches more or less what I’m experiencing.” And we got negative feedback, people saying, “Oh, the CLS does this thing that I don’t think makes sense.” We got a lot of this kind of feedback. The way that we gather the CLS metric, the cumulative layout shift, how much things move around on the page after they load, has changed in order to reflect reality better. So will core web vitals keep evolving? Absolutely, yes. There’s a yearly cadence in which we want to be able to say, “This is going to change, and we don’t want to catch everyone off guard. We want to give everyone a heads up: hey, starting in six months or something, we might be changing the way that we collect this information, or we might be changing the metrics altogether. Maybe the metrics are no longer useful and relevant, maybe we need different metrics to look at.” It will evolve, because the goal is not to have random metrics, but to figure out how we can measure performance and how we can make sure that the user has a good experience once they click through, because that’s what users also want. It’s not that we say, “Oh, we need to have this.” No, it’s something that people are interested in; they actually want their results quicker and faster and more snappy, and not this weird, jarring experience that I had trying to buy a sofa. 
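
For reference, the three metrics come with published “good / needs improvement / poor” thresholds (current as of the original rollout; as Martin notes, they can change on the yearly cadence). The bucketing helper below is our own illustration, not an official API:

```javascript
// Published Core Web Vitals thresholds (subject to change over time):
//   LCP: good <= 2.5 s, poor > 4 s
//   FID: good <= 100 ms, poor > 300 ms
//   CLS: good <= 0.1, poor > 0.25
const THRESHOLDS = {
  lcp: { good: 2500, poor: 4000 }, // milliseconds
  fid: { good: 100, poor: 300 },   // milliseconds
  cls: { good: 0.1, poor: 0.25 },  // unitless layout-shift score
};

// Bucket a single measurement the way the reporting tools do.
function rateVital(metric, value) {
  const t = THRESHOLDS[metric];
  if (!t) throw new Error(`unknown metric: ${metric}`);
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs improvement';
  return 'poor';
}
```

In a real page you would collect the raw values in the browser, for example with the web-vitals library or `PerformanceObserver`; this only shows the bucketing.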

Will they become more important? That’s a really, really tricky question. Obviously, before we roll out changes, we check how they would affect search results and how they would affect the experience of the user on Google search. We know that for a lot of sites, the page experience change doesn’t make that much of a difference. But for a bunch of websites, larger as well as smaller ones, it did make a difference. It’s just really, really hard to say how much in proportion. I actually don’t know the actual numbers, but let’s say for 1% of the websites it makes a difference. That is a lot of websites out there, even though it is just 1%. Obviously, the SEOs working for all the bigger and smaller brands will say, “I don’t know anyone who had this.” Yes, but think of diseases: we are, what, 8 billion people at this point, and there are diseases that only 10,000 people have; that’s far less than a percent. There are more websites than people on this earth, so a percent is a lot and can have serious consequences. We had to walk this tightrope between understating the value and the impact of this and overstating it, because no matter what we would say, and it’s unfortunate, if we were not vague in our statements, someone would be angry and mad and misunderstood and unhappy with it. So we said that it will have an impact, but your mileage may vary. Trying to calm down the freakout period a little bit. But maybe in the future, as more websites are now paying attention to core web vitals, we might see a general improvement of performance on the web. Then those who don’t do it will be left behind. It’s the same with HTTPS. 
Obviously, when 90% of the websites in search results didn’t have HTTPS, and only the 10% whose content was not as great had HTTPS, why would we show them over the ones that had better content? But once a website that had good content also had HTTPS, it might have seen an improvement in its position, thus having impact. Then the others were like, “Wait, how did they overtake us? Oh, they have HTTPS, it must be that,” and then they started changing over, and then it became more important and more visible. It’s a chicken-and-egg kind of situation. But I do think core web vitals will continue to be relevant and important; just how important it is to you specifically might change, and it might not even be in our hands when that change will happen, and how much of a change it will be for you. 

Scott: As a user, I find that critically important, like you said, the sofa example. I hate that we have an Internet where every single person listening will relate to that. We’ve all had those experiences, and we shouldn’t. I mean, businesses want to make money, why are they doing that? Maybe this will just force everybody to make the internet better and easier and faster. 

Ross: Hopefully. I think business owners are finally understanding that, that’s the way it is. People are not going to put up with that anymore and that’s the one benefit of it growing so much, as much as it’s much more difficult to stand out, it’s also easier, in some ways, because you just got to deliver a great experience. When everyone else does that, then yeah, it’ll be a little more difficult again, but there’s always a way to get ahead. Anyway, we’ll dream of that future. 

Martin: It reminds me of this cartoon where there’s a scientist talking about how we can improve the way that we interact with our environment, and how we can use that to have more trees, have more green, take better care of the children and their health and their education, and make sure that there’s no one out there who is not able to buy food and have shelter and everything. Then someone in the audience gets up and is like, “But what if it’s a hoax, and we made the world a better place for nothing?” It feels a little like that. If you improve things and get nothing in return, you still improved things, and sometimes that just should be enough.

Ross: Exactly. 

Martin: But in this case, we want to proactively incentivize it. 

Ross: I think it’s an honorable direction to take and it definitely makes a difference. Love or hate it, Google says to do it, you don’t have a lot of choice. 

Scott: I’ve got a few websites I frequent and I’m hoping they do this and fix it, but they haven’t yet. So I won’t name them or point them out, but Ross knows a couple of them.

Page Speed 

 

Ross: To your absolute credit, you are very thorough in answering questions. I’m just so amazed that you really think these questions through, so thank you. I want to tie things up, I’ve got two questions. One is something everyone sees and it drives me a little bit crazy. Again, I’m not a programmer, so I don’t actually know all the answers to this. Okay, I’ll get to the question. We noticed that running PageSpeed tests, even Core Web Vitals tests, returns oddly different results from day to day and it’s frustrating; you’ll see a drop of 10 or 20 points on something and you didn’t change your site. Then we get a call from the client going “Why did this drop and why did that drop?” It is a bit of a… well, what’s up with that? 

Martin: That’s the next really, really tricky problem. So… 

Ross: I wish it could just pick the average. 

Martin: Well, you could build that yourself. The way that I think of these tests is very different. I am not interested in whether it’s 90 or 95 or 88. I’m more interested in what’s the general region on the spectrum that I’m in and what’s the general trend. Obviously, if I don’t change anything, and I usually don’t change anything, I have periods of not changing my website for a few days, then I can get a feeling for what that looks like and I can do a running average myself, for instance. But generally speaking, I want to see, “Okay, so I have this version A that is already live and performs anywhere between 80 and 95. What happens if I make this change?” Then I run the test a few times, maybe over a couple of days, in more or less controlled lab environments, and if I see that now this is between 60 and 80, then that gives me an idea that what I changed might not have been a good change; there might have been consequences that I didn’t anticipate and I want to dig in a little more as to where these problems might come from. Also, oftentimes you see the problem actually doesn’t come from you. Actually, it is some third party, some provider that provides you with a service, like a chat box or something, and that just happens to be slow today. Then you’re like, “Aha.” First things first, it shows you this thing has an impact. Then it shows you this might have a negative impact. It allows you to either reach out to your third party provider and ask what’s up with that, or it might allow you to rethink the way that you implement it. And this is wild: I do love the analytics team, they’re fantastic, but I did work for a company where performance was really, really important, because we were doing real time 3D and VR stuff in the browser. Then someone asked us to add analytics. Actually, Google Analytics wasn’t the biggest problem. The typical marketing department, they wanted us to add three different types of trackers to different kinds of pages. 
We didn’t have the agency as developers to say, “No.” If we were asked to do that, then we could argue that it made performance worse, but then they would come back to us and say, “Well then, fix that, make it so that it doesn’t.” We had to become very, very creative. Actually, Google Analytics was the easiest to solve that with, because they have an API. Even though it’s not super well documented, it’s easy enough to understand how it works, so we could make our own implementation. That way we didn’t have to deal with fluctuations in the library code that they provide you with; we basically just used the API. We had much, much more control over how that performed and when that loaded, because only we knew when the website was ready from a user’s point of view. Then we were like, “Now that the website is ready, and the user is already happy, we are very, very happy investing CPU time into doing analytics,” just not before, because that hurts our baseline. We had data that showed it hurt our baseline to just have the three analytics tools on the page, but we did understand that our marketing needed the insights that these tools provided, and they didn’t all provide the same insights. So okay, fine. We needed to figure out a way to do that without hurting performance, and we wouldn’t have done that if it wasn’t for tooling such as PageSpeed that showed us very clearly where the problem was coming from. Then we had to get creative. Yes, that’s not easy. Yes, it’s not always feasible. Yes, not everyone has developer resources to do that. But I don’t think that’s a good reason to not have the tools to understand where the problem is coming from. 
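
The pattern Martin describes, waiting until the site is ready before spending CPU time on tracking, can be sketched as a small event buffer. The names here are made up, and `send` stands in for whatever beacon or API call your analytics tool exposes:

```javascript
// Buffer analytics events until the page is ready, then flush.
// `send` stands in for whatever beacon/API call your tracker uses.
function createDeferredTracker(send) {
  const queue = [];
  let ready = false;
  return {
    track(event) {
      if (ready) send(event); // page is done: send immediately
      else queue.push(event); // otherwise hold it, don't compete with rendering
    },
    markReady() {
      ready = true;
      while (queue.length) send(queue.shift()); // flush the backlog in order
    },
  };
}
```

In a browser you would call `markReady()` from whatever signal means “the user is already happy,” for example after the load event or inside `requestIdleCallback`.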

The other issue with these tools, especially PageSpeed Insights and the Lighthouse tooling, is that they need to be as simple as possible to understand, but that also means that you need to break things down, right? We all probably have been introduced to the idea that, at a very high level, everything that we see and touch, and we ourselves, are made from tiny, tiny particles, and that’s it. That’s level one of realizing how the universe roughly works. Then you realize that actually, these atoms have protons, neutrons and electrons, and the electrons are orbiting the protons and neutrons, and that’s how that looks. That is a reasonably well working model for understanding how this is made up, but then you look closer and you’re like, “Actually, that’s not true. They don’t orbit. There’s this probability cloud around the nucleus and that’s where the electrons are, and you never really know where they are. And if you know where they are, then you don’t know this and that.” So it’s a lot more complicated, and it’s the same thing with Lighthouse or PageSpeed Insights. They try to be like, “Okay, we need to give you a number from 0 to 100 that roughly gives you an idea of where you’re at: 85.” But what the hell does that mean? Well, it’s a formula that takes a lot of metrics, including metrics that are not even core web vitals metrics; it takes these metrics, throws them into a formula that tries to weight them by importance, and all of that is guesswork, really. It’s scientific-ish guesswork. It puts that into a formula and out comes a number. That number is by no means that accurate. If I look at my bank account, I get a number that’s very accurate. That’s the amount of currency that I have available today and that’s it. But the PageSpeed Insights number really is just a rough indicator of where you are, and it’s tiny little differences. It can just be, am I running another program at the same time? 
Is this other program that I’m running actually fetching something from the network right now? Is my network jittery? If I’m on a mobile phone, run a tool that actually tests from my local device, and my mobile phone network breaks down because I’m going into a tunnel with a train, then yeah, I’ll get a different score. Does that mean that the tool is broken? No. It just means that there are so many factors, so much jitter, as it’s called in technical terms, so much noise that it is hard to get rid of. That’s why you will always see some differences. Then it’s important to look at where this is coming from. If it says, “You lost 20 points because this font took ages to load,” then you can do two things. You can say, “Well, okay, so this time the font load was slower than usual.” That might happen, I don’t care. Or you might go, “Hmm, a font not loading should not make my website this much slower. Can I look into why that made my website this much slower? Maybe there is a way to implement it so that if that happens again, it doesn’t make my website as slow.” But finding the difference between “that’s just a glitch in the network, or in the way that the service works” versus “that’s something that I can improve, and how easily can I improve this” is not easy to master. That’s something that requires a lot of experience and knowledge. Unfortunately, that’s something that is not very easy to put into any kind of tool.
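
One practical response to this jitter, in the spirit of the running average Martin mentioned: run the test several times and compare medians rather than reacting to a single score. The threshold below is a judgment call for illustration, not an official number:

```javascript
// Compare medians across repeated runs instead of reacting to one score.
function median(scores) {
  const s = [...scores].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Flag a change as a real regression only when the medians differ by
// more than the run-to-run spread you normally see (your call to tune).
function looksLikeRealRegression(beforeScores, afterScores, threshold = 5) {
  return median(beforeScores) - median(afterScores) > threshold;
}
```

With three runs before a deploy and three after, a drop from a median of 91 to 71 stands out as real, while 91 to 89 is probably just noise.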

Application Programming Interface (API)

 

Ross: Interesting. So one thing you mentioned piqued my curiosity. Something you said about Google Analytics and the API. Just to be clear, that was a way to replace the snippet that you put in there that typically slows things down? 

Martin: Yeah. 

Ross: Is there a product out there that you’re aware of, anything out there you know of, that makes it easy to implement that API? 

Martin: I don’t know, I actually don’t know. I also don’t know if what we did was actually technically cool with Analytics. It worked. I’m not saying it continues to work or anything. I don’t even know if that was fine, but it worked. It solved the problem. 

Ross: Because it is an annoyance of ours too. It’s painfully ironic when we see this but it’s Google’s darn code! 

Scott: That tells us Google’s own code is broken, yeah. 

Martin: Honestly, as ridiculous as that looks from the outside, I’m really, really happy about that, because it means that our separation works well enough. Search wants to be this neutral, fair platform for everyone, even Googlers. If the Analytics team comes to us and goes, “Can you tell us how we fix this?” we are like, “No, just use our public documentation, go figure it out.” It’s the same as people saying, “Oh, we’re using Angular because it must be better with Google search.” No, it’s not. The Angular team uses the exact same information available to everyone else out there. There’s no special treatment for Google products. If our Google products are not up to snuff, they will get pressure from the outside and they’ll be like, “Oh, shit, we’re Google, we need to fix this.” Then they’ll come to us and ask, “Can you help?” And we say, “No. There’s documentation. There’s information out there, you can ask public questions, we’ll happily help.” 

Web 3.0

 

Ross: Interesting. Well, for our last question, a very speculative one, kind of fun. I’ve heard so much lately about this, but what are your personal thoughts on the likelihood of Web 3.0 becoming a reality? 

Martin: Web 3.0 is a non-solution looking for a problem, I mean, there will be something, there will be people building the technologies for it. But I don’t see that as solving a real problem in a realistic, useful way. 

Ross: I know you’ve worked for a lot of big, big companies. For listeners who don’t know, Web 3.0 is the anticipated switch of the web to an underlying blockchain technology that’s decentralized, so you have control over your own privacy, all this stuff… 

Martin: That’s so funny, because most of the things that call themselves ‘blockchain and decentralized’ aren’t. It bugs me a lot, because there is a thing called the “unhosted web” or the “decentralized web” and there’s a lot of work being done in reasonable ways. For instance, IPFS is a great technology that hasn’t really taken off much. But it’s a way to break out from this classical server-client infrastructure where you have to have a central physical machine in the end, even if we call it a cloud. In the end, it’s physical machines that sit in a data center rather than under your desk. But that’s what it is: there’s a physical machine that has an actual physical file on it, and it serves these files to anyone asking. If that machine goes down, the file stops being available. That’s the problem, or the challenge, of the internet, right? For instance, if I make a website on this machine that I’m recording on, it’s connected to the internet, and I’m in Switzerland, so I have fast internet as well. If I have a power outage, anything that would be served from this machine is not available to anyone out there. How can I solve this? Well, I can buy another machine that sits in the US. Now there would have to be a power outage simultaneously on my computer here in Switzerland and on the one in the US, because if one of them goes down, the other one would continue to be available to requests and give the file out. Fantastic. But what if I am annoying someone with that, and they very specifically come to my house, break in, kill my server, and then fly to the US and kill that server too? That’s really, really unfortunate. That’s the idea of the decentralized web: you can’t do that, because everyone who is a participant in this network is also hosting part of the internet. So at any point in time, there are millions of potential computers that are making a resource available, which comes with this whole other host of problems. 
One of them is: where do I find things, right? The way the internet works, there’s fundamentally a huge phone book that says, “If you go to google.com, one of these thousands of machines will answer.” What if there is no such phone book? Because the moment you have a phone book, you have centralization – what if this phone book becomes unavailable? So there are a lot of challenges that are really, really hard to solve, which makes a lot of engineers really, really interested in solving them, because it’s interesting and fantastic and wow. But I don’t know if there is enough pain for the normal person to be like, “Oh, yeah, I really want a web where I am hosting part of the web myself and no one can ever take down a server, because I’m one of millions of partial servers that are making something available.” I don’t think we’ve seen this often enough. It’s not that the website you frequent every day goes down because the government takes it down or something; that just doesn’t normally happen. So I don’t know if that solves a big enough problem for the real-world people using the web. 

Ross: Thank you. That’s a great answer. Wow, this has been awesome. It’s gone over and I really appreciate that you’ve spent so much time. I’ve enjoyed this thoroughly, I love your answers and you’re very open about it. That’s great, thank you. 

Martin: I enjoyed this as much as you did. It’s fantastic. Thank you so much for having me. 

Ross: Well, thank you. We’d love to have you on again, we didn’t even scratch the surface, we got so many ideas and questions and wow. I just didn’t want to interrupt you. These are great. I know the listeners are going to love it as well. 

On behalf of myself, Ross Dunn, CEO of StepForth Web Marketing and my company’s senior SEO, Scott Van Achte, and our friend Martin Splitt, Developer Advocate of the Google Search Relations team, thank you very much for joining us. And Martin again, we’d love to have you back again. There’s always stuff to talk about and I hope this thing with LinkedIn clears up quickly, who needs that? Who needs it, right?

Martin: The thing is that I feel passionate about these topics, and I just want them to understand what I meant and to hopefully come out and not spread misinformation, because it is possible that they have been running around telling people, “Oh, stop everything you’re doing, please fix this markup first, because it’s so important,” when there have been bigger things that needed fixing instead. If they come out with more knowledge, and I come out with more knowledge – in this case, I actually already learned that I need to be very, very careful about this topic, apparently, because there are a lot of people who are really, really sensitive around it. So I learned something, and I’ll try to help them learn something as well. Then hopefully, we all come out smarter. 

Ross: Well, between you and John and – I haven’t met Gary yet – but you guys are doing a lot for the community, so thank you for putting yourself out there and doing such a great job. 

Martin: You’re putting a podcast out there, so thanks a lot to both of you, Scott and Ross, for having me and producing this podcast. 

Ross: Our pleasure. Well, listeners, if you have any questions you’d like to share with us, please feel free to post them in our Facebook group, easily found by searching “SEO 101 Podcast.” If you enjoyed the show, we appreciate any feedback on Apple Podcasts or on your favorite podcast platform. Have a great week, and remember to tune in to future episodes, which air every week on WMR.FM. 

Scott: Great, thanks for listening, everyone.