Bad Grammar or Broken HTML: Which Does Google Care about More?


Imagine this…

You’re checking out a site that has two flashing, neon-red issues that need fixing:

  • Multiple spelling errors and grammatical mistakes
  • Broken HTML causing some funky spacing issues

You know that both issues can spoil the reader’s experience. Broken HTML can mess up how a page renders — and grammatical boo-boos are just…bad.

But, are both issues equally “bad” in Google’s eyes? Or, is one of the issues a bigger SEO deal?

In the immortal words of BuzzFeed, the answer may surprise you.

Here’s what Google says about broken HTML.

Google’s John Mueller, during a recent hangout, made the distinction between HTML (a technical issue) and content (a quality issue).

Broken HTML is a pain, yes, but according to Mueller, “for the most part, we don’t care if HTML is broken or not.”

Sure, if your HTML is so bad that Google can’t crawl it, you WILL have Google issues.

But if you have the occasional broken code snippet and a weirdly rendering page? You’re probably OK.

Are grammar and spelling more important than HTML? Yes.

According to Mueller, “I would almost say …like… spelling and grammar is probably for most websites a higher priority than broken HTML.”

Why? Because bunches of spelling errors make a page read poorly. It’s a quality issue — and Google only wants to reward high-quality pages.

Mueller said, “we try to find really high-quality content on the web, and sometimes it can appear that a page is lower quality content because it has a lot of …kind of… grammatical and technical mistakes in the text.”

Here’s the write-up by Roger Montti for Search Engine Journal if you want to read the entire scoop.

Plus, think about typos and grammatical issues from your prospects’ experience. How much will someone trust your firm if they see mistake after mistake?

Especially if you’re trying to establish yourself as an expert?

What do other SEO writers think about this news?

The news seems to have put a smile on many writers’ faces.

I posted the Search Engine Journal article to my SEO Copywriting Certification training Facebook group. The feedback was fun.

Michelle Lowery, a digital content editor, posted the single word, “VINDICATION!”

Helen McCrone, a freelance translator, pointed out machine-translated content is often chock-full of huge grammatical mistakes. If you’re translating your content into multiple languages, working with a person is a smarter bet than running your copy through translation software. 

Deb Ferguson, the in-house writer for a law firm, commented, “Grammar and spelling issues make the company or writer seem less of an expert. They can ruin a company’s credibility in the industry and amongst its clients.”

That’s true. Can you imagine visiting a legal site with lots of typos? After all, if the legal firm can’t handle little details like site spelling and grammar — how can you feel good about them taking your case?

So yes, know that any broken HTML will eventually need a tune-up. But if it’s between rewriting pages with lots of spelling errors and fixing minor HTML issues, making the content better for your readers should win. 

Every time.

What do you think?

Does this latest bit of Google news surprise you? Leave a comment and let me know!

You can also keep up with the latest SEO writing news by signing up for my newsletter, or joining the SEO Writing Tips Facebook group. See you there!



11 SEO Tips & Tricks To Improve Search Indexation



The SEO game has so many moving parts that it often seems like, as soon as we’re done optimizing one part of a website, another part needs our attention again.

Once you’re out of the “I’m new here” stage and feel that you have some real SEO experience under your belt, you might start to feel that there are some things you can devote less time to correcting.

Indexability and crawl budgets could be two of those things, but forgetting about them would be a mistake.

I always like to say that a website with indexability issues is a site that’s in its own way; that website is inadvertently telling Google not to rank its pages because they don’t load correctly or they redirect too many times.

If you think you can’t or shouldn’t be devoting time to the decidedly not-so-glamorous task of fixing your site’s indexability, think again.

Indexability problems can cause your rankings to plummet and your site traffic to dry up quickly.

So, your crawl budget has to be top of mind.

In this post, I’ll present you with 11 tips to consider as you go about improving your website’s indexability.

1. Track Crawl Status With Google Search Console

Errors in your crawl status could be indicative of a deeper issue on your site.

Checking your crawl status every 30-60 days is important to identify potential errors that are impacting your site’s overall marketing performance.

It’s literally the first step of SEO; without it, all other efforts are null.

Right there on the sidebar, you’ll be able to check your crawl status under the index tab.

Screenshot of errors in Google Search Console by author, May 2022

Now, if you want to remove a certain webpage from search results, you can tell Search Console directly. This is useful if a page is temporarily redirected or has a 404 error.

A 410 status code will permanently remove a page from the index, so beware of using the nuclear option.

Common Crawl Errors & Solutions

If your website is unfortunate enough to be experiencing a crawl error, it may require an easy solution or be indicative of a much larger technical problem on your site.

The most common crawl errors I see are:

Screenshot of Semrush crawlability problems by author, May 2022

To diagnose some of these errors, you can leverage the URL Inspection tool to see how Google views your site.

Failure to properly fetch and render a page could be indicative of a deeper DNS error that will need to be resolved by your DNS provider.

Screenshot of Google Search Console URL Inspection by author, May 2022

Resolving a server error requires diagnosing a specific error. The most common errors include:

  • Timeout.
  • Connection refused.
  • Connect failed.
  • Connect timeout.
  • No response.

A server error is usually temporary, although a persistent problem could require you to contact your hosting provider directly.

Robots.txt errors, on the other hand, could be more problematic for your site. If your robots.txt file returns a 5xx error, it means search engines are having difficulty retrieving the file and may hold off on crawling your site until they can.

You could reference your sitemap from your robots.txt file (via a Sitemap: directive), or avoid the protocol altogether and manually noindex pages that could be problematic for your crawl.
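As a sketch, the robots.txt route looks like this: a minimal file that allows crawling and points crawlers at your sitemap (example.com is a placeholder for your own domain):

```text
# robots.txt — served at https://example.com/robots.txt
User-agent: *
Allow: /

# Point crawlers at your XML sitemap
Sitemap: https://example.com/sitemap.xml
```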

Resolving these errors quickly will ensure that all of your target pages are crawled and indexed the next time search engines crawl your site.

2. Create Mobile-Friendly Webpages

With the arrival of the mobile-first index, we must also optimize our pages to display mobile-friendly copies on the mobile index.

The good news is that a desktop copy will still be indexed and displayed under the mobile index if a mobile-friendly copy does not exist. The bad news is that your rankings may suffer as a result.

There are many technical tweaks that can instantly make your website more mobile-friendly including:

  • Implementing responsive web design.
  • Inserting the viewport meta tag in content.
  • Minifying on-page resources (CSS and JS).
  • Tagging pages with the AMP cache.
  • Optimizing and compressing images for faster load times.
  • Reducing the size of on-page UI elements.

Be sure to test your website on a mobile platform and run it through Google PageSpeed Insights. Page speed is an important ranking factor and can affect the speed at which search engines can crawl your site.

3. Update Content Regularly

Search engines will crawl your site more regularly if you produce new content on a regular basis.

This is especially useful for publishers who need new stories published and indexed on a regular basis.

Producing content on a regular basis signals to search engines that your site is constantly improving and publishing new content, and therefore needs to be crawled more often to reach its intended audience.

4. Submit A Sitemap To Each Search Engine

One of the best tips for indexation to this day remains to submit a sitemap to Google Search Console and Bing Webmaster Tools.

You can create an XML version using a sitemap generator, then submit it in Google Search Console, making sure it lists the canonical version of each page that contains duplicate content.
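A minimal XML sitemap looks like this (the URLs and dates are placeholders; most CMSs and sitemap generators produce this format automatically):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2022-05-01</lastmod>
  </url>
  <url>
    <loc>https://example.com/services/</loc>
    <lastmod>2022-05-01</lastmod>
  </url>
</urlset>
```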

5. Optimize Your Interlinking Scheme

Establishing a consistent information architecture is crucial to ensuring that your website is not only properly indexed, but also properly organized.

Creating main service categories where related webpages can sit can further help search engines properly index webpage content under certain categories when the intent may not be clear.

Screenshot by author, May 2022

6. Deep Link To Isolated Webpages

If a webpage on your site or a subdomain is created in isolation, or an error is preventing it from being crawled, you can get it indexed by acquiring a link on an external domain.

This is an especially useful strategy for promoting new pieces of content on your website and getting them indexed more quickly.

Beware of syndicating content to accomplish this, as search engines may ignore syndicated pages, and it could create duplicate content errors if pages are not properly canonicalized.

7. Minify On-Page Resources & Increase Load Times

Forcing search engines to crawl large and unoptimized images will eat up your crawl budget and prevent your site from being indexed as often.

Search engines also have difficulty crawling certain backend elements of your website. For example, Google has historically struggled to crawl JavaScript.

Even certain resources like Flash and CSS can perform poorly on mobile devices and eat up your crawl budget.

In a sense, it’s a lose-lose scenario where page speed and crawl budget are sacrificed for obtrusive on-page elements.

Be sure to optimize your webpage for speed, especially over mobile, by minifying on-page resources, such as CSS. You can also enable caching and compression to help spiders crawl your site faster.
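If your site runs on Apache, for example, compression and browser caching can be switched on with a few lines in an .htaccess file. This is only a sketch, and it assumes the mod_deflate and mod_expires modules are enabled on your server:

```apache
# Compress text-based resources before sending them
<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>

# Let browsers cache static assets instead of re-downloading them
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType image/jpeg "access plus 1 month"
  ExpiresByType text/css "access plus 1 week"
</IfModule>
```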

Screenshot of PageSpeed Insights by author, May 2022

8. Fix Pages With Noindex Tags

Over the course of your website’s development, it may make sense to implement a noindex tag on pages that may be duplicated or only meant for users who take a certain action.

Regardless, you can identify webpages with noindex tags that are preventing them from being indexed by using a free online tool like Screaming Frog.

The Yoast plugin for WordPress allows you to easily switch a page from index to noindex. You could also do this manually in the backend of pages on your site.
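In the page’s HTML, the noindex directive itself is a single meta tag; the X-Robots-Tag HTTP header is an equivalent server-side option:

```html
<!-- In the <head>: ask search engines not to index this page -->
<meta name="robots" content="noindex">

<!-- Or, as an HTTP response header set in server configuration:
     X-Robots-Tag: noindex -->
```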

9. Set A Custom Crawl Rate

In the old version of Google Search Console, you can actually slow or customize the speed of your crawl rates if Google’s spiders are negatively impacting your site.

This also gives your website time to make necessary changes if it is going through a significant redesign or migration.

Screenshot of the Google Search Console crawl rate setting by author, May 2022

10. Eliminate Duplicate Content

Having massive amounts of duplicate content can significantly slow down your crawl rate and eat up your crawl budget.

You can eliminate these problems by either blocking these pages from being indexed or placing a canonical tag on the page you wish to be indexed.

Along the same lines, it pays to optimize the meta tags of each individual page to prevent search engines from mistaking similar pages as duplicate content in their crawl.
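The canonical tag itself is a one-line link element in the head of each duplicate, pointing at the version you want indexed (the URLs are placeholders):

```html
<!-- On https://example.com/product?color=blue and similar variants -->
<link rel="canonical" href="https://example.com/product">
```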

11. Block Pages You Don’t Want Spiders To Crawl

There may be instances where you want to prevent search engines from crawling a specific page. You can accomplish this by the following methods:

  • Placing a noindex tag.
  • Disallowing the URL in your robots.txt file.
  • Deleting the page altogether.

This can also help your crawls run more efficiently, instead of forcing search engines to wade through duplicate content.
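A sketch of the robots.txt route, blocking crawlers from a directory and a single page (the paths are placeholders):

```text
User-agent: *
# Keep crawlers out of internal search results and a thank-you page
Disallow: /search/
Disallow: /thank-you.html
```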

Conclusion

How many crawlability problems your website has will more or less depend on how well you’ve been keeping up with your own SEO.

If you’re tinkering in the back end all the time, you may have identified these issues before they got out of hand and started affecting your rankings.

If you’re not sure, though, run a quick scan in Google Search Console to see how you’re doing.

The results can really be educational!



Featured Image: Ernie Janes/Shutterstock





Google Says Keywords In Domain Names Are Overrated



Google’s John Mueller said once again that “keywords in domain names are overrated.” He and other Googlers have made this point numerous times over the years. Instead, he advised: “pick something for your business, pick something for the long term.”

If you want, I can license variations of Rusty[words-go-here], if you need an excellent domain. Like RustySEO, RustyDesign, RustyFood – you know it makes for a good name. ;-P

Forum discussion at Twitter.





Googlebot Crawls & Indexes First 15 MB HTML Content



In an update to Googlebot’s help document, Google quietly announced it will crawl the first 15 MB of a webpage. Anything after this cutoff will not be included in rankings calculations.

Google specifies in the help document:

“Any resources referenced in the HTML such as images, videos, CSS and JavaScript are fetched separately. After the first 15 MB of the file, Googlebot stops crawling and only considers the first 15 MB of the file for indexing. The file size limit is applied on the uncompressed data.”

This left some in the SEO community wondering if this meant Googlebot would completely disregard text that fell below images at the cutoff in HTML files.

“It’s specific to the HTML file itself, like it’s written,” John Mueller, Google Search Advocate, clarified via Twitter. “Embedded resources/content pulled in with IMG tags is not a part of the HTML file.”

What This Means For SEO

To ensure it is weighted by Googlebot, important content must now be included near the top of webpages. This means code must be structured in a way that puts the SEO-relevant information within the first 15 MB of an HTML or supported text-based file.

It also means images and videos should be compressed, not encoded directly into the HTML, whenever possible.

SEO best practices currently recommend keeping HTML pages to 100 KB or less, so many sites will be unaffected by this change. Page size can be checked with a variety of tools, including Google PageSpeed Insights.
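As a rough sanity check (a sketch, not an official Google tool), you can compare a page’s uncompressed HTML size against the 15 MB cutoff in a few lines of Python:

```python
# Rough check of uncompressed HTML size against Googlebot's 15 MB indexing cutoff.
GOOGLEBOT_HTML_LIMIT = 15 * 1024 * 1024  # 15 MB, applied to uncompressed data


def within_googlebot_limit(html: str) -> bool:
    """Return True if the page's HTML (as UTF-8 bytes) fits inside the cutoff."""
    return len(html.encode("utf-8")) <= GOOGLEBOT_HTML_LIMIT


# A typical page is a few hundred KB at most, far under the limit:
print(within_googlebot_limit("<html><body>Hello, world</body></html>"))  # True
```

Remember that, per Google’s wording, the limit applies to the HTML file itself; separately fetched resources like images don’t count against it.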

In theory, it may sound worrisome that you could potentially have content on a page that doesn’t get used for indexing. In practice, however, 15MB is a considerably large amount of HTML.

As Google states, resources such as images and videos are fetched separately. Based on Google’s wording, it sounds like this 15MB cutoff applies to HTML only.

It would be difficult to go over that limit with HTML unless you were publishing entire books’ worth of text on a single page.

Should you have pages that exceed 15 MB of HTML, it’s likely you have underlying issues that need to be fixed anyway.


Source: Google Search Central
Featured Image: SNEHIT PHOTO/Shutterstock






Copyright © 2021 Liveseo.com