Screaming Frog Wins Double at the UK Search Awards – Fri, 29 Nov 2019

Last week the UK Search Awards were held at The Brewery in London, and I was lucky enough to attend with some of the Screaming Frog team. The ceremony celebrates the very best achievements in the search industry, and we were delighted to win not just one, but two search awards, doubling our success from last year.

Our night started off well as we won the award for ‘Best Use of Search – Finance’ alongside our client Moneybarn, for our creative SEO campaign and the results it delivered.

Screaming Frog Search Award Win

Just when we thought the night couldn’t get any better we were announced as winners for the ‘Best Low Budget Campaign’ with our client The Solar Centre, for our festive content campaign.

Screaming Frog Search Award Win

We were thrilled with our double win, especially competing against such tough competition. Well done to all the winners and nominees. We’re looking forward to what 2020 will bring and hope to be making a return!

Screaming Frog SEO Spider Update – Version 12.0 – Tue, 22 Oct 2019

We are delighted to announce the release of Screaming Frog SEO Spider version 12.0, codenamed internally as ‘Element 115’.

In version 11 we introduced structured data validation, the first for any crawler. For version 12, we’ve listened to user feedback and improved upon existing features, as well as introduced some exciting new ones. Let’s take a look.

1) PageSpeed Insights Integration – Lighthouse Metrics, Opportunities & CrUX Data

You’re now able to gain valuable insights about page speed during a crawl. We’ve introduced a new ‘PageSpeed’ tab and integrated the PSI API, which uses Lighthouse and allows you to pull in Chrome User Experience Report (CrUX) data and Lighthouse metrics, as well as analyse speed opportunities and diagnostics at scale.

PageSpeed

The field data from CrUX is super useful for capturing real-world user performance, while Lighthouse lab data is excellent for debugging speed-related issues and exploring the opportunities available. The great thing about the API is that you don’t need to use JavaScript rendering; all the heavy lifting is done off box.

You’re able to choose and configure over 75 metrics, opportunities and diagnostics (under ‘Config > API Access > PageSpeed Insights > Metrics’) to help analyse and make smarter decisions related to page speed.

(The irony of releasing pagespeed auditing, and then including a gif in the blog post.)

In the PageSpeed tab, you’re able to view metrics such as performance score, TTFB, first contentful paint, speed index, time to interactive, as well as total requests, page size, counts for resources and potential savings in size and time – and much, much more.

There are 19 filters for opportunities and diagnostics to help identify potential speed improvements from Lighthouse.

PageSpeed Tab Opportunity Filters

Click on a URL in the top window and then the ‘PageSpeed Details’ tab at the bottom, and the lower window populates with metrics for that URL, ordering opportunities by those that will make the most impact at page level based upon Lighthouse savings.

By clicking on an opportunity in the lower left-hand window panel, the right-hand window panel then displays more information on the issue, such as the specific resources with potential savings.

Page Speed Details Tab

As you would expect, all of the data can be exported in bulk via ‘Reports‘ in the top-level menu.

There’s also a very cool ‘PageSpeed Opportunities Summary’ report, which summarises all the opportunities discovered across the site, the number of URLs each affects, and the average and total potential saving in size and milliseconds, to help prioritise them at scale, too.

PageSpeed reporting

As well as bulk exports for each opportunity, there’s a CSS coverage report which highlights how much of each CSS file is unused across a crawl and the potential savings.

Please note, using the PageSpeed Insights API (like the interface) can currently affect analytics. Google are aware of the issue, and we have included an FAQ on how to set up an exclude filter to prevent it from inflating analytics data.
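
If you want to explore the same data outside the SEO Spider, the underlying PageSpeed Insights v5 API can be queried directly. Below is a minimal Python sketch; the endpoint and response fields follow Google's public PSI documentation rather than anything specific to the SEO Spider, and the URL and API key are placeholders.

```python
# pip install requests
import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

params = {
    "url": "https://www.example.com/",  # page to test (placeholder)
    "strategy": "mobile",               # or "desktop"
    "key": "YOUR_API_KEY",              # placeholder - supply your own key
}

data = requests.get(PSI_ENDPOINT, params=params).json()

# Lighthouse 'lab' data
performance = data["lighthouseResult"]["categories"]["performance"]["score"]
print("Lighthouse performance score:", performance)

# CrUX 'field' data, only returned when Google has enough real-user data for the URL
crux = data.get("loadingExperience", {}).get("metrics", {})
fcp = crux.get("FIRST_CONTENTFUL_PAINT_MS", {}).get("percentile")
print("CrUX first contentful paint (ms):", fcp)
```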

2) Database Storage Crawl Auto Saving & Rapid Opening

Last year we introduced database storage mode, which allows users to choose to save all data to disk in a database rather than just keep it in RAM, which enables the SEO Spider to crawl very large websites.

Based upon user feedback, we’ve improved the experience further. In database storage mode, you no longer need to save crawls (as an .seospider file), they will automatically be saved in the database and can be accessed and opened via the ‘File > Crawls…’ top-level menu.

Crawl Menu

The ‘Crawls’ menu displays an overview of stored crawls and allows you to open, rename, organise into project folders, duplicate, export, or delete them in bulk.

Crawl Menu Details

The main benefit of this switch is that re-opening stored crawls in database storage mode is significantly quicker than opening an .seospider crawl file – often it’s instant. You won’t need to load in .seospider files anymore, which previously could take some time for very large crawls.

You also don’t need to save anymore; crawls will automatically be committed to the database. It does mean you will need to delete crawls you don’t want to keep from time to time (this can be done in bulk).

You can export the database crawls to share with colleagues, or export them as an .seospider file for anyone still using memory storage mode. You can of course also still open .seospider files in database storage mode, which will take time to convert to a database (in the same way as version 11) before they are compiled and available to re-open almost instantly each time.

Export and import options are available under the ‘File’ menu in database storage mode.

Import or Export Crawls in DB Storage Mode

To avoid accidentally wiping crawls, the crawl is stored every time you ‘clear’, start a new crawl from an existing crawl, or close the program. This leads us nicely onto the next enhancement.

3) Resume Previously Lost or Crashed Crawls

Due to the feature above, you’re now able to resume from an otherwise ‘lost’ crawl in database storage mode.

Previously, if Windows had kindly decided to perform an update and restart your machine mid-crawl, there was a power cut or software crash, or you just forgot you were running a week-long crawl and switched off your machine, the crawl would sadly be lost forever.

We’ve all been there, and we didn’t feel this was user error; we could do better! So if any of the above happens, you should now be able to just open it back up via the ‘File > Crawls’ menu and resume the crawl.

Unfortunately this can’t be completely guaranteed, but it will provide a very robust safety net as the crawl is always stored, and generally retrievable – even when pulling the plug directly from a machine mid-crawl.

4) Configurable Tabs

You can now select precisely which tabs are displayed and how they are ordered in the GUI. Goodbye forever, meta keywords.

Goodbye ForEVER Meta Keywords!

The tabs can be dragged and moved in order, and they can be configured via the down arrow icon to the right-hand side of the top-level tabs menu.

This only affects how they are displayed in the GUI, not whether the data is stored. However…

5) Configurable Page Elements

You can now completely de-select specific page elements from being crawled and stored, to help save memory. These options are available under ‘Config > Spider > Extraction’. For example, if you wanted to stop storing meta keywords, that configuration could simply be disabled.

Configurable Page Elements

This allows users to run a ‘bare bones’ crawl when required.

6) Configurable Link Elements For Focused Auditing In List Mode

You can now choose whether to specifically store and crawl link elements as well (under ‘Config > Spider > Crawl’).

Configurable Link Elements

This makes the SEO Spider infinitely more flexible, particularly with the new configurable ‘Internal hyperlinks’ option. It becomes really powerful in list mode in particular, though it might not be immediately clear why at face value.

However, if you deselect ‘Crawl’ and ‘Store’ options for all ‘Resource Links’ and ‘Page Links’, switch to list mode, go to ‘Config > Spider > Limits’ and remove the crawl depth that gets applied in list mode, you can then choose to audit any link element you wish alongside the URLs you upload.

For example, you can supply a list of URLs in list mode, and crawl only those URLs and their hreflang links.

Crawl hreflang in list mode

Or you could supply a list of URLs and audit their AMP versions only. You could upload a list of URLs, and just audit the images on them. You could upload a list of URLs and only crawl the external links on them for broken link building. You get the picture.

Previously this level of control and focus just wasn’t available, as removing the crawl depth in list mode would mean internal links would also be crawled.

This advanced configurability allows for laser-focused auditing of precisely the link elements you require, saving time and effort.

7) More Extractors!

You wanted more extractors, so you can now configure up to 100 in custom extraction. Just click ‘Add’ each time you need another extractor.

More Custom Extractors

Custom extraction also now has its own tab for more granular filtering, which leads us onto the next point.

8) Custom Search Improvements

Custom Search has been separated from extraction into its own tab, and you can now have up to 100 search filters.

A dedicated tab allows the SEO Spider to display all filter data together, so you can combine filters and export combined.

Custom Search Improvements

We have more plans for this feature in the future, too.

9) Redirect Chain Report Improvements

Based upon user feedback, we’ve split up the ‘Redirect & Canonical Chains’ report into three.

You can now choose to export ‘All Redirects’ (1:1 redirects and chains together), ‘Redirect Chains’ (just redirects with 2+ redirects) and ‘Redirect & Canonical Chains’ (2+ redirects, or canonicals in a chain).

Redirect Reporting Improvements

All of these will work in list mode when auditing redirects. This should cover different scenarios when a variety of data combined or separated can be useful.

Other Updates

Version 12.0 also includes a number of smaller updates and bug fixes, outlined below.

  • There’s a new ‘Link Attributes’ column for inlinks and outlinks. This will detail whether a link has a nofollow, sponsored or ugc value. ‘Follow Internal Nofollow‘ and ‘Follow External Nofollow‘ configuration options will apply to links which have sponsored or ugc, similar to a normal nofollow link.
  • The SEO Spider will pick up the new max-snippet, max-video-preview and max-image-preview directives and there are filters for these within the ‘Directives‘ tab. We plan to add support for data-nosnippet at a later date, however this can be analysed using custom extraction for now.
  • We’re committed to making the tool as reliable as possible and encouraging user reporting. So we’ve introduced in-app crash reporting, so you don’t even need to bring up your own email client or download the logs manually to send them to us. Our support team may get back to you if we require more information.
  • The crawl name is now displayed in the title bar of the application. If you haven’t named the crawl (or saved a name for the .seospider crawl file), then we will use a smart name based upon your crawl. This should help when comparing two crawls in separate windows.
  • Structured data validation has been updated to use Schema.org 3.9 and now supports FAQ, How To, Job Training and Movie Google features. We’ve also updated nearly a dozen features with changing required and recommended properties.
  • ga:users metric has now been added to the Google Analytics integration.
  • ‘Download XML Sitemap’ and ‘Download XML Sitemap Index’ options in list mode have been combined into a single ‘Download XML Sitemap’ option.
  • The exclude configuration now applies when in list mode, and to robots.txt files.
  • Scroll bars have now been removed from rendered page screenshots.
  • Our SERP snippet emulator has been updated with Google’s latest change to a larger font on desktop, which has resulted in fewer characters being displayed before truncation in the SERPs. The ‘Over 65 Characters’ default filter for page titles has been amended to 60. This can of course be adjusted under ‘Config > Preferences’.
  • We’ve significantly sped up robots.txt parsing.
  • Custom extraction has been improved to use less memory.
  • We’ve added support for x-gzip content encoding, and content type ‘application/gzip’ for sitemap crawling.
  • We removed the descriptive export name text from the first row of all exports as it was annoying.

That’s everything. If you experience any problems with the new version, then please do just let us know via our support and we’ll help as quickly as possible.

Thank you to everyone for all their feature requests, feedback, and bug reports. We appreciate each and every one of them.

Now, go and download version 12.0 of the Screaming Frog SEO Spider and let us know what you think!

Small Update – Version 12.1 Released 25th October 2019

We have just released a small update to version 12.1 of the SEO Spider. This release is mainly bug fixes and small improvements –

  • Fix bug preventing saving of .seospider files when PSI is enabled.
  • Fix crash in database mode when crawling URLs with more than 2,000 characters.
  • Fix crash when taking screenshots using JavaScript rendering.
  • Fix issue with Majestic not requesting data after a clear/pause.
  • Fix ‘inlinks’ tab flickering during crawl if a URL is selected.
  • Fix crash re-spidering a URL.
  • Fix crash editing text input fields with special characters.
  • Fix crash when renaming a crawl in database mode.

Small Update – Version 12.2 Released 1st November 2019

We have just released a small update to version 12.2 of the SEO Spider. This release is mainly bug fixes and small improvements –

  • Improved performance of opening database crawls.
  • Aggregate Rating and Review Snippet property names updated.
  • Fix regression in parsing of XML sitemaps missing opening XML declarations.
  • Fix issue where loading a saved list mode crawl opened in Spider mode.
  • Remove API error pop-ups. The number of errors can still be seen in the API tab, but better reporting to come.
  • Fix crash sorting tables in some situations.
  • Fix crash displaying GSC configuration.
  • Fix crash changing custom filters in paused crawl.
  • Fix issue with PageSpeed details saying ‘not connected’, when you are.
  • Fix crash taking screenshots during JavaScript rendering.
  • Fix crash when renaming a database crawl with multiple SEO Spider instances open.
  • Fix crash starting a crawl with an invalid URL for some locales.
  • Fix crash showing PSI data.
  • Fix crash caused by illegal cookie names when using JavaScript rendering.

Small Update – Version 12.3 Released 28th November 2019

We have just released a small update to version 12.3 of the SEO Spider. This release is mainly bug fixes and small improvements –

  • You can now use a URL regular expression to highlight nodes in tree and FDD visualisations i.e. show all nodes that contain foo/bar.
  • PageSpeed Insights now show errors against URLs including error message details from the API.
  • A (right click) re-spider of a URL will now re-request PSI data when connected to the API.
  • Improve robustness of recovering crawls when the OS is shut down during a running crawl.
  • Fix major slow down of JavaScript crawls experienced by some macOS users.
  • Fix windows installer to not allow install when the SEO Spider is running.
  • Fix crash when editing database storage location.
  • Fix crash using file dialogs experienced by some macOS users.
  • Fix crash when sorting columns.
  • Fix crash when clearing data.
  • Fix crash when searching.
  • Fix crash undoing changes in text areas.
  • Fix crash adjusting sliders in visualisations.
  • Fix crash removing/re-spidering duplicate titles/meta descriptions after editing in SERP View.
  • Fix crash in AHREFs API.
  • Fix crash in SERP Panel.
  • Fix crash viewing structured data.

Small Update – Version 12.4 Released 18th December 2019

We have just released a small update to version 12.4 of the SEO Spider. This release is mainly bug fixes and small improvements –

  • Remove checks from deprecated Google Features.
  • Respect sort and search when exporting.
  • Improved config selection in scheduler UI.
  • Allow users without crontab entries to schedule crawls on Ubuntu.
  • Speed up table scrolling when using PSI.
  • Fix crash when sorting, searching and clearing table views.
  • Fix crash editing scheduled tasks.
  • Fix crash when dragging and dropping.
  • Fix crash editing internal URL config.
  • Fix crash editing speed config.
  • Fix crash when editing custom extractions then resuming a paused crawl.
  • Fix freeze when performing crawl analysis.
  • Fix issue with GA properties containing + signs not being parsed on a Mac.
  • Fix crash when invalid path is used for database directory.
  • Fix crash when plugging in and out multiple screens on macOS.
  • Fix issue with exporting inlinks being empty sometimes.
  • Fix issue on Windows with not being able to export the current crawl.
  • Fix crash on macOS caused by using accessibility features.
  • Fix issue where scroll bar stops working in the main table views.
  • Fix crash exporting .xlsx files.
  • Fix issue with word cloud using title tags from SVG tags.

How To Find Broken Links Using The SEO Spider – Tue, 22 Oct 2019

You can use the Screaming Frog SEO Spider for free (and paid) to check for broken links (the HTTP ‘404 not found’ error response) on your website.

Below is a very quick and easy tutorial on how to use the tool as a broken link checker. First of all, you’ll need to download the SEO Spider, which is free for crawling up to 500 URLs. You can download it via the green button in the right-hand sidebar.

You can crawl more than 500 URLs with the paid version. The next steps to find broken links within your website can be viewed in our video, and tutorial below.

1) Crawl The Website

Open up the SEO Spider, type or copy in the website you wish to crawl in the ‘Enter URL to spider’ box and hit ‘Start’.

Find Broken Links

2) Click The ‘Response Codes’ tab & ‘Client Error (4XX)’ Filter To View Broken Links

You can wait until the crawl finishes and reaches 100%, or you can just view 404 broken links while crawling by navigating to the ‘Response Codes’ tab and using the filter for ‘Client Error 4XX’.

There are two ways to do this. You can simply click on the tab at the top and use the drop-down filter –

View Broken Links

Alternatively you can use the right-hand window crawl overview pane and just click directly on ‘Client Error (4xx)’ tree view under the ‘Response Codes’ folder. They both show the same results, regardless of which way you navigate.

404 Errors Via Right Hand Window

This crawl overview pane updates while crawling, so you can see the number of client error 4XX links you have at a glance. In the instance above, there are 9 client errors, which is 0.18% of the links discovered in the crawl.

3) View The Source Of The Broken Links By Clicking The ‘Inlinks’ Tab

Obviously you’ll want to know the source of the broken links discovered (which URLs on the website link to these broken links), so they can be fixed. To do this, simply click on a URL in the top window pane and then click on the ‘Inlinks’ tab at the bottom to populate the lower window pane.

View Broken Links Source Pages

As you can see in this example, there is a broken link to the BrightonSEO website (https://www.brightonseo.com/people/oliver-brett/), which is linked to from this page – https://www.screamingfrog.co.uk/2018-a-year-in-review/.

Here’s a closer view of the lower window pane which details the ‘inlinks’ data –

‘From’ is the source where the 404 broken link can be found, while ‘To’ is the broken link. You can also see the anchor text, alt text (if it’s an image which is hyperlinked) and whether the link is followed (true) or nofollow (false).

It looks like the only broken links on our website are external links (sites we link out to), but obviously the SEO Spider will discover any internal broken links if you have any.

4) Use The ‘Bulk Export > Response Codes > Client Error (4XX) Inlinks’ Export

If you’d rather view the data in a spreadsheet you can export both the ‘source’ URLs and ‘broken links’ by using the ‘Bulk Export’, ‘Response Codes’ and ‘Client Error (4XX) Inlinks’ option in the top level menu.

Bulk Export Broken Links & Source Pages

There are a number of ways you can export data from the Screaming Frog SEO Spider, so I recommend reading our user guide on exporting.
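
If you want to prioritise fixes from that export, a few lines of Python can summarise which broken URLs have the most inlinks. This is just a sketch; the filename and the ‘Source’ and ‘Destination’ column names are assumptions, so check them against your own export and rename if they differ.

```python
# pip install pandas
import pandas as pd

# Load the 'Client Error (4XX) Inlinks' bulk export (filename is a placeholder).
df = pd.read_csv("client_error_4xx_inlinks.csv")

# Count the unique source pages linking to each broken URL, so the
# most-linked broken URLs can be fixed first.
summary = (
    df.groupby("Destination")["Source"]
      .nunique()
      .sort_values(ascending=False)
      .rename("linking_pages")
)

print(summary.head(20))
```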

Crawling A List Of URLs For Broken Links

Finally, if you have a list of URLs you’d like to check for broken links instead of crawling a website, then you can simply upload them in list mode.

To switch to ‘list’ mode, simply click on ‘mode > list’ in the top level navigation and you’ll then be able to choose to paste in the URLs or upload via a file.

Find broken Links in list mode

Hopefully the above guide helps illustrate how to use the SEO Spider tool to check for broken links efficiently.

Please also read our Screaming Frog SEO Spider FAQs and full user guide for more information.

The Beginner’s Guide to SEO Competitor Analysis – Thu, 03 Oct 2019

Unless you’re lucky enough to operate in a monopoly, competition will be an everyday part of your business. Online is no different. Knowing who you’re fighting for visibility in SERPs is the first step to building a watertight SEO strategy.

This post will explore how to find your competitors and analyse their backlink profiles. If all goes to plan, you’ll find weaknesses you can exploit and strengths you can use to inspire future campaigns.

To illustrate the process, we’ll be using an imaginary chemistry facts website as our ‘new’ site in need of a strategy. This site wants to rank for keywords including [chemistry facts], [chemistry revision] and [learn chemistry].

Finding the Competition

The first step in any competitor analysis is finding competitors to analyse. The easiest way to do this is to run a Google search for your target keywords.

SERP for chemistry facts

From these searches, we can identify our main SEO rivals. While these competitors may not necessarily offer the exact same things as our site, they all compete for the same key search phrases.

This is an important point: SEO competitors may not overlap with direct business competitors. Being aware of this will save you a lot of headaches further down the line.

Once you have your list of competitors, I recommend narrowing down to ten or fewer. This keeps the analysis manageable while still being in-depth enough to provide insight.

Competitor Visibility

Now you need to see which of these competitors are within reach and which are dominating currently. Splitting your competitors into two tiers can be useful; realistic ones you can target over the short to medium term, and ambitious competitors that’ll take sustained investment and effort to overtake.

The way to assess this is by looking at your competitors’ visibility in SERPs. SISTRIX is one tool that measures this metric, and the way it is calculated is as follows.

First, SISTRIX takes a sample of the top 100 search positions for one million keywords or keyword combinations. (As a comparison, the Oxford English Dictionary contains about 120,000 words). It then weights the results according to position and search volume for a keyword. (See here for more detail about the visibility index).

SISTRIX visibility for ZMEScience

Enter your first competitor into the SISTRIX toolbar (we’ve chosen ZMEScience) and scroll to their visibility index. Next, click on the cog in the top right corner and then ‘Compare Data in Chart’. This allows a comparison of up to 4 websites’ visibilities. Enter the rest of your first batch of competitors and hit ‘Compare’.

You may have one competitor whose visibility is so large that it doesn’t let you see the others clearly. If that’s the case, set it aside for now and replace it with another competitor until you have a graph that’s readable. These will be our realistic competitors. The ones you’ve set aside will be your ambitious, longer-term competition.

SISTRIX competitor visibility comparison
Clicking the cog again and selecting ‘Show More Pins’ allows SISTRIX to show the dates of known Google algorithm updates. It’s interesting to note if any competitors have surges or drops in visibility that coincide with these dates.

From the above graph we can see that RevisionWorld (in blue) surged after the second Medic Update (pin M). Conversely, ZMEScience (red) has dropped dramatically after the June 2019 Core Update (pin O).

You can use this to inform your strategy; is there anything surging competitors are doing that you aren’t? Or is there something you’re getting away with, but another competitor has been hit for?

You can also use other tools to measure your visibility. Searchmetrics has a nice feature that allows you to see how many keywords you share with your competitors. As before, we’ve chosen ZMEScience to be our representative example.

Competitors for ZMEScience on Searchmetrics

From this we can see that ZMEScience shares a lot of its keywords with a lot of high-authority sites such as the Encyclopedia Britannica and National Geographic. These would obviously be considered long-term competitors that we wouldn’t be able to target immediately.

Finally, SEMrush also shows something similar. Its Competitive Positioning Map shows competitors by organic traffic and the number of keywords they are ranking in the top 20 Google results for. The size of the bubble represents a website’s visibility in SERPs.

ZMEScience competitors SEMrush

SERP Analysis

If there is one particular keyword that you are targeting, it can be worth analysing the SERP for this keyword. For example, what is the type of content ranking for this query?

The results that Google shows can give you insight into what it thinks the intent behind that search is. If the results are all guides, blog posts and listicles, then it is fair to assume that the intent is informational. People are looking for information in this instance, so to rank for this query you’ll have to provide that information.

Looking at the SERP for [chemistry facts], this is exactly what we get. All ten organic results are information pages, which shouldn’t really be surprising. People aren’t generally looking to buy facts. (But if you know someone who is, send them my way. I’ve got some good ones).

Moz’s SERP Analysis section contains useful metrics such as overall Keyword Difficulty, as well as individual Domain Authority and Page Authority scores for each result. Keyword Difficulty estimates how easy it is to rank above the current competitors for that query; the lower the score, the better.

The SERP analysis results can also be used to get an idea of what you might need to achieve to compete.

Using minimum and average metrics for the top ten results can be one way to do this. In this Google Sheet, I’ve created a SERP analysis template (in the tab imaginatively named SERP Analysis). You will need to make a copy before you can do anything.

Fill this in with the Domain and Page Authority for each result, as well as the number of Referring Domains to both the page and overall domain. You should see something like the following:

competitor analysis template

This gives the minimum and average for each of the metrics mentioned above. As we should anticipate, both the number and quality of referring domains is important in order to rank well.

These numbers should be taken with a heavy pinch of salt; we will not need 62,000 referring domains just to compete. In this case the very high number for ThoughtCo skews the averages upwards.

Nevertheless, it remains useful to see all the numbers in one place to give an overview of where your competitors are.
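
If you’d rather calculate these figures in a script than in the sheet, the minimum and average are straightforward to produce with pandas. The CSV filename and column headers below are assumptions based on the template described above, so adjust them to match your own copy.

```python
# pip install pandas
import pandas as pd

# Export the filled-in SERP analysis tab as a CSV (filename is a placeholder).
df = pd.read_csv("serp_analysis.csv")

metrics = [
    "Domain Authority",
    "Page Authority",
    "Referring Domains (Page)",
    "Referring Domains (Domain)",
]

# Minimum and average across the top ten results, mirroring the template.
print(df[metrics].agg(["min", "mean"]).round(1))
```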

Backlink Profile Analysis

Now you can take a deeper dive into the backlink profile of your immediate competition. Using a mixture of metrics from three well-known SEO tools (Moz, Ahrefs and Majestic) allows a more detailed comparison than using any one alone.

When doing comparative work like this, it’s important to make it as efficient as possible. All three SEO tools have a comparison part where you can submit multiple URLs rather than doing it one-by-one.

Moz has its Compare Link Profile section under Link Research, Ahrefs has a Batch Analysis tool, and Majestic has a Comparator section. With Ahrefs, make sure you use the ‘Live’ index to make sure the data is as up to date as possible.

Note that you can’t directly compare numbers from different sources as they are calculated differently. It’s also worth noting that you should judge numbers relative to your site rather than in absolute terms.

Preparing a table like the below allows an overall look at each competitor’s backlink profile. It also makes it easy to note any outliers that you need to investigate further.

Backlink comparison table

Out of the metrics, Referring Domains and Domain Authority (DA) are particularly important. We often see a large correlation between these and SEO performance.

Referring Domains is the number of separate websites linking to a given site, while DA (a score out of 100) is an estimation of the quality of these links.

From this, we can see that ZMEScience has by far the highest-quality link profile. It has the highest number of Referring Domains and the highest DA. It’s therefore interesting to see from the visibility graph that it’s not as visible as RevisionWorld.

It is also worth noting that RevisionScience and RevisionWorld have nearly the same number of backlinks. These come from 200 and 1,500 referring domains respectively.

This implies that a large proportion of the backlinks to RevisionScience may be low-quality links, potentially due to mass submission or scraper sites. This is a competitor our site should look to challenge, but not replicate in terms of linking.

Link Quality

You can also compare competitors by link quality. In theory, links from domains with a higher DA (Moz) or Domain Rating (Ahrefs) score should pass more authority to the linked site.

Moz shows this by default, segmenting DA into batches of 10: 0-10, 11-20 and so on. You can see this in Moz’s Link Explorer. Simply input your competitor’s domain and hit enter. Scroll down and the bottom-right chart should look like the below (for ZMEScience).

Segmenting referring domains by DA

If you want to use Ahrefs to segment referring domains, it gets a little more involved. However, this has the advantage of being able to compare competitors side-by-side.

We segment the Domain Rating scores as follows:

  • 100 to 70 – Most Valuable
  • 69 to 50 – Valuable
  • 49 to 30 – Average
  • 29 to 0 – Low Value

To do this go back to your Ahrefs Batch Analysis and click on the Total number of Referring Domains for the top competitor. This will bring up the Referring Domains report for this website.

ahrefs batch analysis

Export this to a CSV file, then filter Column C by the Domain Rating segments shown above. (Filter dropdown > Number Filters > Between…)

Referring domains CSV export
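
If spreadsheet filtering feels fiddly, the same segmentation can be done in pandas with a single cut. The sketch below assumes the export has a ‘Domain rating’ column (Column C in the export above) and uses a placeholder filename; rename both to whatever your Ahrefs export actually uses.

```python
# pip install pandas
import pandas as pd

df = pd.read_csv("referring_domains_export.csv")  # Ahrefs referring domains export (placeholder name)

# Bucket Domain Rating into the four segments described above.
bins = [-1, 29, 49, 69, 100]
labels = ["Low Value (0-29)", "Average (30-49)", "Valuable (50-69)", "Most Valuable (70-100)"]
df["Segment"] = pd.cut(df["Domain rating"], bins=bins, labels=labels)

# Count referring domains per segment, ready to paste into the template.
print(df["Segment"].value_counts().reindex(labels))
```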

Make a copy of the Google Sheet template found here. Paste the number of overall Referring Domains for each segment into the template (Columns H onwards). Repeat this for all segments and all competitors until the chart is fully populated. For our science competitors, we see the following.

Link analysis graph from template
This visualisation allows a quick comparison of how many of each site’s referring domains falls under the segments described above. The proportions are also represented as a table in the template.

Link analysis proportions from template

In this case, the table is clearer due to the comparatively high number of referring domains to ZMEScience. We can see that the vast majority of the other sites’ referring domains are of the lowest quality. This would suggest that by targeting high-quality sites with our content, our new website would have an advantage.

Top Pages

Looking at a site’s most-linked to pages is a good way of understanding what linking work it’s been up to. If you can find out what works for your competitors, you can try something similar for yourself.

You can use the Best by Links report from Ahrefs to investigate this. (Enter domain or blog subfolder/subdomain > Pages > Best by links).

When looking at RevisionWorld, we see that one of its top pages is a revision calendar creator. This has 27 referring domains and over 1,700 dofollow backlinks.

Ahrefs best by links report for RevisionWorld

Therefore, our new site could look at creating something similar, but even better. We could then target those sites that link to the now inferior content and ask them to link back to our new piece.

To find these sites to target, simply click on the number of referring domains in the Top Pages report.

Ahrefs referring domains report for revision calendar creator

Link Growth

Finally, you can study competitors’ backlink growth. The rate at which they’re acquiring referring domains gives you a rough target to aim for with your site’s link building efforts.

Ahrefs’ Domain Comparison shows this in a visual way. Enter the URLs of your competitors into the boxes and hit ‘Compare’.

Competitor link growth chart Link growth chart legend

This shows at what rate competitors’ backlink profiles have been growing or declining.

Consistent growth, as seen for ZMEScience, can be natural or the result of long-term link building work.

RevisionWorld has experienced more inconsistent growth over the last five years. (I have removed ZMEScience for clarity – click its name in the legend to achieve this).

Link growth chart excluding ZMEScience Link growth chart legend excluding ZMEScience

From this we can see rapid growth between October 2015 and March 2016.

By looking at Ahrefs’ New Referring Domains report (Enter domain > Referring domains > New), you can work out what might have caused this.

If most of the links point to the same page, then it’s likely down to a piece of content going viral. But if most of the added domains look spammy, it’s more likely to be poor quality link building. It could also be due to low quality syndication sites, which are usually present (to an extent) in most sites’ backlink profiles.

In RevisionWorld’s case, a lot of the new links with high Domain Ratings are .ac.uk, .sch.uk and .edu domains. These link back to revision guides on the RevisionWorld site.

This suggests they’ve had success outreaching their revision guides as an educational resource to schools and universities. This could be something our new site could look to replicate.

As seen earlier, this could be one reason why RevisionWorld is currently more visible in SERPs than ZMEScience. This is despite RevisionWorld having only a tenth of the number of referring domains.

Conclusion

The analysis above will help you find your SEO competitors, replicate their successes and learn from their mistakes.

As your site changes and grows, so will your competitors. You’ll need to keep tabs on who you’re fighting for SERP space with. Hopefully one day you’ll be challenging those ambitious competitors you identified way back at the start.

The brightonSEO Crawling Clinic – Mon, 09 Sep 2019

For the first time last year we ran a crawling clinic at the legendary brightonSEO. The team had a lot of fun meeting everyone and chatting about crawling and technical SEO while just a little hungover from the pre-party, so we decided to do it again this year.

The idea of the crawling clinic is that you’re able to meet the Screaming Frog team and chat about any crawling issues you’re experiencing, how best to tackle them, and any feature requests you’d like to see for the software – or just pilfer some swag.

We’re also running our SEO Spider training course at brightonSEO on the Thursday (12th September). This is the same SEO Spider training course that we offer, aimed at those who are familiar with the basic uses of the SEO Spider but want to learn how to make more of the tool.

We’re looking forward to meeting everyone attending the course, and if you’d like to join the workshop there’s still a few places left.

Version 12.0 Sneak Preview

The team will also be running the new beta 12 version of the Screaming Frog SEO Spider at the Crawling Clinic. So if you’d like a sneak peek of some very cool new features that are coming soon before everyone else, then come on over to the clinic. We’ll be on the main exhibition floor (B7) on Friday (13th September) throughout the day.

If you’re attending the conference, then also make sure you pick up the latest edition of the Screaming Frog brightonSEO beer mats!

Come & Chat About Crawling

You don’t need to book anything; you can just come over and chat to us at our crawling clinic stand. We’ll be on hand to help with any crawling issues and will have a few machines to run through anything. So if you’d like to meet our team and chat about crawling, log files, or SEO in general, then please do come over and see us.

Alternatively, you can say hello in the bar before at the pre or after parties! See you all on Thursday and Friday.

How to Scrape Google Search Features Using XPath – Tue, 03 Sep 2019

Google’s search engine results pages (SERPs) have changed a great deal over the last 10 years, with more and more data and information being pulled directly into the results pages themselves. Google search features are a regular occurrence on most SERPs nowadays, some of the most common being featured snippets (aka ‘position zero’), knowledge panels and related questions (aka ‘people also ask’). Data suggests that some features such as related questions may feature on nearly 90% of SERPs today – a huge increase over the last few years.

Understanding these features can be powerful for SEO. Reverse engineering why certain features appear for particular query types and analysing the data or text included in said features can help inform us in making optimisation decisions. With organic CTR seemingly on the decline, optimising for Google search features is more important than ever, to ensure content is as visible as it possibly can be to search users.

This guide runs through the process of gathering search feature data from the SERPs, to help scale your analysis and optimisation efforts. I’ll demonstrate how to scrape data from the SERPs using the Screaming Frog SEO Spider using XPath, and show just how easy it is to grab a load of relevant and useful data very quickly. This guide focuses on featured snippets and related questions specifically, but the principles remain the same for scraping other features too.

TL;DR

If you’re already an XPath and scraping expert and are just here for the syntax and data types to set up your extraction (perhaps you saw me eloquently explain the process at SEOCamp Paris or Pubcon Las Vegas this year!), here you go (spoiler alert for everyone else!) –

Featured snippet XPath syntax

  • Featured snippet page title (Text) – (//span[@class='S3Uucc'])[1]
  • Featured snippet text paragraph (Text) – (//span[@class="e24Kjd"])[1]
  • Featured snippet bullet point text (Text) – //ul[@class="i8Z77e"]/li
  • Featured snippet numbered list (Text) – //ol[@class="X5LH0c"]/li
  • Featured snippet table (Text) – //table//tr
  • Featured snippet URL (Inner HTML) – (//div[@class="xpdopen"]//a/@href)[2]
  • Featured snippet image source (Text) – (//img[@id="dimg_7"]//@title)
Related questions XPath syntax

  • Related question 1 text (Text) – (//g-accordion-expander//h3)[1]
  • Related question 2 text (Text) – (//g-accordion-expander//h3)[2]
  • Related question 3 text (Text) – (//g-accordion-expander//h3)[3]
  • Related question 4 text (Text) – (//g-accordion-expander//h3)[4]
  • Related question snippet text for all 4 questions (Text) – //g-accordion-expander//span[@class="e24Kjd"]
  • Related question page titles for all 4 questions (Text) – //g-accordion-expander//h3
  • Related question page URLs for all 4 questions (Inner HTML) – //g-accordion-expander//div[@class="r"]//a/@href
You can also get this list in our accompanying Google doc. Back to our regularly scheduled programming for the rest of you… follow these steps to start scraping featured snippets and related questions!

    1) Preparation

    To get started, you’ll need to download and install the SEO Spider software and have a licence to access the custom extraction feature necessary for scraping. I’d also recommend our web scraping and data extraction guide as a useful bit of light reading, just to cover the basics of what we’re getting up to here.

    2) Gather keyword data

    Next you’ll need to find relevant keywords where featured snippets and / or related questions are showing in the SERPs. Most well-known SEO intelligence tools have functionality to filter keywords you rank for (or want to rank for) and where these features show, or you might have your own rank monitoring systems to help. Failing that, simply run a few searches of important and relevant keywords to look for yourself, or grab query data from Google Search Console. Wherever you get your keyword data from, if you have a lot of data and are looking to prune and prioritise your keywords, I’d advise the following –

  • Prioritise keywords where you have a decent ranking position already. Not only is this relevant to winning a featured snippet (almost all featured snippets are taken from pages ranking organically in the top 10 positions, usually top 5), but more generally if Google thinks your page is already relevant to the query, you’ll have a better chance of targeting all types of search features.
  • Certainly consider search volume (the higher the better, right?), but also try and determine the likelihood of a search feature driving clicks too. As with keyword intent in the main organic results, not all search features will drive a significant amount of additional traffic, even if you achieve ‘position zero’. Try to consider objectively the intent behind a particular query, and prioritise keywords which are more likely to drive additional clicks.
    3) Create a Google search query URL

    We’re going to be crawling Google search query URLs, so we need to feed the SEO Spider a URL to crawl using the keyword data gathered. This can either be done in Excel, using find and replace and the ‘CONCATENATE’ formula to change the list of keywords into a single URL string (replace word spaces with the + symbol, select your Google of choice, then CONCATENATE the cells to create an unbroken string), or you can simply paste your original list of keywords into this handy Google doc with the formula included (please make a copy of the doc first).

    google search query string URL

    At the end of the process you should have a list of Google search query URLs which look something like this –

    https://www.google.co.uk/search?q=keyword+one
    https://www.google.co.uk/search?q=keyword+two
    https://www.google.co.uk/search?q=keyword+three
    https://www.google.co.uk/search?q=keyword+four
    https://www.google.co.uk/search?q=keyword+five etc.
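
If you’d rather script this step than use Excel or the Google doc, here’s a minimal Python sketch that builds the same query strings (the keyword list is just an example – swap in your own).

```python
from urllib.parse import quote_plus

keywords = ["keyword one", "keyword two", "keyword three"]  # replace with your own list

# Encode spaces as '+' and prepend your Google of choice,
# mirroring the CONCATENATE approach described above.
urls = [f"https://www.google.co.uk/search?q={quote_plus(kw)}" for kw in keywords]

print("\n".join(urls))
```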

    4) Configure the SEO Spider

    Experienced SEO Spider users will know that our tool has a multitude of configuration options to help you gather the important data you need. Crawling Google search query URLs requires a few configurations to work. Within the menu you need to configure as follows –

  • Configuration > Spider > Rendering > JavaScript
  • Configuration > robots.txt > Settings > Ignore robots.txt
  • Configuration > User-Agent > Present User Agents > Chrome
  • Configuration > Speed > Max Threads = 1 > Max URI/s = 0.5
    These config options ensure that the SEO Spider can access the features and won’t trigger a captcha by crawling too fast. Once you’ve set up this config, I’d recommend saving it as a custom configuration which you can load up again in future.

    5) Setup your extraction

    Next you need to tell the SEO spider what to extract. For this, go into the ‘Configuration’ menu and select ‘Custom’ and ‘Extraction’ –

    screaming frog seo spider custom extraction

    You should then see a screen like this –

    screaming frog seo spider xpath

    From the ‘Inactive’ drop-down menu you need to select ‘XPath’. From the new dropdown which appears on the right-hand side, you need to select the type of data you’re looking to extract. This will depend on which element you’re targeting in the search results (a full list of XPath syntax and data types is included at the end of this guide), so let’s use the example of related questions –

    scraping google related questions

    The above screenshot shows the related questions showing for the search query ‘seo’ in the UK. Let’s say we wanted to know what related questions were showing for the query, to ensure we had content and a page which targeted and answered these questions. If Google thinks they are relevant to the original query, at the very least we should consider that for analysis and potentially for optimisation. In this example we simply want the text of the questions themselves, to help inform us from a content perspective.

    Typically 4 related questions show for a particular query, and each of these 4 questions has its own XPath syntax –

  • Question 1 – (//g-accordion-expander//h3)[1]
  • Question 2 – (//g-accordion-expander//h3)[2]
  • Question 3 – (//g-accordion-expander//h3)[3]
  • Question 4 – (//g-accordion-expander//h3)[4]
    To find the correct XPath syntax for your desired element, our web scraping guide can help, but we have a full list of the important ones at the end of this article!

    Once you’ve input your syntax, you can also rename the extraction fields to correspond to each extraction (Question 1, Question 2 etc.). For this particular extraction we want the text of the questions themselves, so need to select ‘Extract Text’ in the data type dropdown menu. You should have a screen something like this –

    screaming frog custom extraction

    If you do, you’re almost there!

    6) Crawl in list mode

    For this task you need to use the SEO Spider in List Mode. In the menu go Mode > List. Next, return to your list of created Google search query URL strings and copy all URLs. Return to the SEO Spider, hit the ‘Upload’ button and then ‘Paste’. Your list of search query URLs should appear in the window –

    screaming frog list mode

    Hit ‘OK’ and your crawl will begin.

    7) Analyse your results

    To see your extraction you need to navigate to the ‘Custom’ tab in the SEO Spider, and select the ‘Extraction’ filter. Here you should start to see your extraction rolling in. When complete, you should have a nifty looking screen like this –

    screaming frog seo spider custom extraction

    You can see your search query and the four related questions appearing in the SERPs being pulled in alongside it. When complete you can export the data and match up your keywords to your pages, and start to analyse the data and optimise to target the relevant questions.

    8) Full list of XPath syntax

    As promised, we’ve done a lot of the heavy lifting and have a list of XPath syntax to extract various featured snippet and related question elements from the SERPs –

    Featured snippet XPath syntax

  • Featured snippet page title (Text) – (//span[@class='S3Uucc'])[1]
  • Featured snippet text paragraph (Text) – (//span[@class="e24Kjd"])[1]
  • Featured snippet bullet point text (Text) – //ul[@class="i8Z77e"]/li
  • Featured snippet numbered list (Text) – //ol[@class="X5LH0c"]/li
  • Featured snippet table (Text) – //table//tr
  • Featured snippet URL (Inner HTML) – (//div[@class="xpdopen"]//a/@href)[2]
  • Featured snippet image source (Text) – (//img[@id="dimg_7"]//@title)
    Related questions XPath syntax

  • Related question 1 text (Text) – (//g-accordion-expander//h3)[1]
  • Related question 2 text (Text) – (//g-accordion-expander//h3)[2]
  • Related question 3 text (Text) – (//g-accordion-expander//h3)[3]
  • Related question 4 text (Text) – (//g-accordion-expander//h3)[4]
  • Related question snippet text for all 4 questions (Text) – //g-accordion-expander//span[@class="e24Kjd"]
  • Related question page titles for all 4 questions (Text) – //g-accordion-expander//h3
  • Related question page URLs for all 4 questions (Text) – //g-accordion-expander//div[@class="r"]//a/@href
    We’ve also included them in our accompanying Google doc for ease.
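
Before setting these up in the SEO Spider, it can be worth sanity-checking an expression against a saved copy of a SERP. The sketch below uses Python and lxml purely as a quick local test; the filename is a placeholder, and the class names come from the list above, so expect them to stop working whenever Google changes its SERP markup.

```python
# pip install lxml
from lxml import html

# Load a SERP saved locally from your browser (filename is a placeholder).
with open("serp.html", encoding="utf-8") as f:
    tree = html.fromstring(f.read())

# Featured snippet paragraph text, if present.
snippet = tree.xpath('(//span[@class="e24Kjd"])[1]')
if snippet:
    print("Featured snippet:", snippet[0].text_content())

# All related question headings.
for question in tree.xpath("//g-accordion-expander//h3"):
    print("Related question:", question.text_content())
```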

    Conclusion

    Hopefully our guide has been useful and can set you on your way to extract all sorts of useful and relevant data from the search results. Let me know how you get on, and if you have any other nifty XPath tips and tricks, please comment below!

    The Do’s and Don’ts of Chasing for a Link – Thu, 15 Aug 2019

    It’s happened to all of us. You bag another piece of coverage for your client’s content piece on a top-tier publication, which you’re ecstatic about. However, after a quick scroll through, your elation is suddenly offset by a small pang of disappointment. There isn’t a link to your client.

    Getting links these days is tough. Publications have enforced sitewide no-link policies, and journalists can be apprehensive about adding them. I’m not going to delve into the reasons why this is, or why they shouldn’t be apprehensive/strict with linking, but I will drop in this Tweet from Danny Sullivan, Google’s Public Search Liaison:

    I’m going to talk today about how best to approach journalists who have covered your content or story, but haven’t linked to your client. There are some obvious do’s and don’ts that you’d hope everyone was aware of, but unfortunately Tweets from journalists like this are a regular occurrence:

    Which leads nicely onto the first point.

    How Long After an Article Goes Live Is It Appropriate to Chase Up?

    When it comes to approaching someone to add a link into a piece of coverage, the quicker you do so after the time of publishing, the more likely they are to be receptive to your request. In my opinion it would be acceptable to approach someone for a link within 1 week of an article going live, and if the topic is still somewhat relevant.

    The chances of them doing so do start to tail off quite quickly, and once the 1-week period has gone it really is best to move on and let that one go. Otherwise you risk damaging your relationship with a journalist and/or making your client look bad.

    I ran a little poll over on Twitter to hear other people’s thoughts, and the majority voted that within a few days is an acceptable time-frame to chase up a link.

    Ensure That a Link Adds Value

    Before chasing someone to add a link, ask yourself if doing so actually adds value to the piece. Is there more data to be found on your client’s site? Is it a nice looking interactive that makes it easier to view, sort and filter data? If the answer to these questions is no, you’re hindering your chances of people naturally linking to your content, and the chances of them adding a link as a result of you approaching them.

When chasing up, make sure you include your reasons for why they should consider adding a link to the piece, as this is likely to increase your chances.

This is why it's super important to make sure the content you create is a linkable asset, a page it genuinely makes sense to point users and readers to (via a link), and this needs to be baked into the process during the early stages of ideation.

To give you an example, we put together this index for a client that ranks the world's best tourist attractions. It presents the data in a visual way, allowing users to click on each tourist attraction to view a picture of it and its location, and they can also sort the data as they see fit. With these features in mind, people would struggle to think of a reason why they shouldn't add a link through to the content piece.

    Ensure You’re Emailing the Right Person

A small point: make sure you are getting in touch with the right person. The majority of the time this will be the individual you emailed initially or who authored the article, though occasionally there may not be a name associated with the post. As well as this, the author of the article may not always be the one who decides whether a link can be added, as it can sometimes be the responsibility of the digital editor or a similar role.

    Use your best judgement and ensure you’re getting in touch with the right person, to avoid confusion and mild embarrassment.

    Agree up Front to Add a Link

    If you have the opportunity to discuss with a journalist prior to them covering the piece, for example if they respond positively to your original pitch email asking for more information, it can sometimes make sense to propose that a link is included at this point.

    Be polite and keep it simple, again highlighting the reasons why a link adds value to their article. To give a quick example:

    “If you do cover the piece, it would be great if you could add a link to it. All the data can be found on the aforementioned page, and users can filter and sort the data as they see fit. The methodology is also explained in-depth, as well as links to all the sources we used.”

    Sites That Don’t Link Out

There are some sites that never link out, and in this instance it may make sense to save yourself some time and effort and not chase up for links. You could try your luck, but use your judgement and previous experience here; for example, don't chase a publication that's already told you it doesn't link out, as you could potentially harm your relationship with the site.

    Where Should the Link Point To?

Generally speaking, if a journalist has covered your client's content piece, the best place for them to link to is the corresponding page on the client's domain. However, this may not always be the case.

If your client has provided a comment or is quoted within an article, you may find you have more success asking them to link to a bio page for the spokesperson on your client's site. If you don't have a bio page for your client's spokesperson, it's definitely worth creating one if they are regularly quoted or contribute to articles within the industry.

    On the subject of asking people to link to a specific page, it’s common for people to link to the client’s homepage instead of where the content sits on their site. Proceed with caution if you’re thinking of asking someone to change where a link points to. Generally speaking it’s best to be happy with another link in the bag, and it all helps add to a natural and diverse link profile.

    To Summarise

    To summarise the above, when chasing people to add a link to your content:

• It should be within a few days of the article going live. If it's outside that window, it's best to move on, otherwise you risk damaging your relationship with journalists.
• You should ensure that including a link actually adds value to the article and its readers (more data, methodology, sources etc.)
• Ensure you're emailing the right person!
• Consider proposing to a journalist that they include a link ahead of the article going live, if the opportunity for discussion arises.
• If you know a site doesn't link out, it may make sense to take it on the chin and move on.
• If your client is regularly quoted or contributes to industry news and articles, consider creating a bio page on their site. We've found that people are more receptive to adding links to one.

    Over to You

    I’d love to hear if you have any additional experiences or tips in regards to chasing links, so please do get involved in the comments. 👇

    The post The Do’s and Don’ts of Chasing for a Link appeared first on Screaming Frog.

    Reviving Retired Search Console Reports https://www.screamingfrog.co.uk/reviving-search-console/ https://www.screamingfrog.co.uk/reviving-search-console/#comments Mon, 08 Apr 2019 13:18:41 +0000 https://www.screamingfrog.co.uk/?p=14161 Since I started my journey in the world of SEO, the old Google Search Console (GSC) has been a mainstay of every campaign I’ve worked on. Together, we’ve dealt with some horrific JavaScript issues, tackled woeful hreflang implementation, and watched site performance reach its highest highs and lowest lows. Sadly,...

    Since I started my journey in the world of SEO, the old Google Search Console (GSC) has been a mainstay of every campaign I’ve worked on. Together, we’ve dealt with some horrific JavaScript issues, tackled woeful hreflang implementation, and watched site performance reach its highest highs and lowest lows.

    Sadly, all good things must come to an end, and in Jan ’19 Google announced most of the old Search Console features would be shut down for good at the end of March.

    But it’s not all doom and gloom. As a successor, we now have an updated Google Search Console v2.0 to guide us into the modern web. This new console has a fresh coat of paint, is packed with new reports, gives us 16 months of data, and provides a live link straight into Google’s index — it’s all rather lovely stuff!

Despite all this… I still can't help longing for a few of the old reports that sat neatly tiered on the left-hand side of the browser.

While we can't quite turn back time, using the trusty SEO Spider we can replicate a few of these reports to fill the void left by tabs now deleted or yet to be transferred over. Before jumping in, I should note this post mostly covers reports that have been deleted or not fully transferred across. If you can't find something here, chances are it's already available in GSC 2.0.

    Structured Data

The new GSC does indeed have some structured data auditing in the new 'Enhancements' tab. However, it only monitors a few select forms of structured data (like Products and Events markup). While I'm sure Google intends to expand this to cover all supported features, it doesn't quite match the comprehensiveness of the old report.

    Well, hot on the heels of the v11.0 release for the SEO Spider, we now have bulk structured data auditing and validation built in. To activate, just head over to Configuration > Spider > Advanced > Enable the various structured data settings shown here:

    Once your crawl is complete, there are two areas to view structured data. The first of which is in the main Structured Data tab and various sub filters, here:

    Or, if you just want to examine one lone URL, click on it and open the Structured Data Details tab at the bottom of the tool:

There are also two exportable reports found in the main 'Reports' menu: the 'Validation Errors & Warnings Summary' and the 'Validation Errors & Warnings'.

    For the full details, have a look at:
    https://www.screamingfrog.co.uk/seo-spider/user-guide/tabs/#structured-data

    HTML Improvements

The HTML Improvements report was a neat little tab Google used to highlight issues with page titles, meta descriptions, and non-indexable content. Mainly it flagged when they were missing, duplicated, short, long, or non-informative.

Unlike many other reports, rather than transferring over to the new GSC it's been completely removed. Despite this, it covers an incredibly important aspect of on-page optimisation, and in Google's own words: "there are some really good tools that help you to crawl your website to extract titles & descriptions too." Well, taking their hint, we can use the Spider and various tabs or filters for exactly that.

    Want page title improvements? Look no further than the filters on the Page Title tab:

    Or if you’re curious about your Meta Descriptions:

    Want to see if any pages reference non-indexable content? Just sort by the Indexability column on any tab/filter combo:
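
If you prefer to script these checks, the same sort of HTML Improvements flags can be recreated from a Spider export. Here's a minimal sketch, assuming the Page Titles tab has been exported to page_titles.csv with 'Address' and 'Title 1' columns (the filename and column names are assumptions, so check them against your own export first):

```python
# Minimal sketch: flag missing, duplicate and over-length page titles from a
# CSV export. Column names ('Address', 'Title 1') are assumptions; check your
# own export headers before running.
import csv
from collections import defaultdict

titles = defaultdict(list)  # title text -> list of URLs using it

with open("page_titles.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        url, title = row["Address"], (row.get("Title 1") or "").strip()
        if not title:
            print(f"Missing title: {url}")
        elif len(title) > 65:
            print(f"Long title ({len(title)} chars): {url}")
        titles[title].append(url)

for title, urls in titles.items():
    if title and len(urls) > 1:
        print(f"Duplicate title used on {len(urls)} URLs: {title}")
```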

    International Targeting

    Ahh, hreflang… the stuff of nightmares for even the most skilled of SEO veterans. Despite this, correctly configuring a multi-region/language domain is crucial. It not only ensures each user is served the relevant version, but also helps avoid any larger site or content issues. Thankfully, we’ve had this handy Search Console tab to help report any issues or errors with implementation:

    Google hasn’t announced the removal of this report, and no doubt it will soon be viewable within the new GSC. However, if for any reason they don’t include it, or if it takes a while longer to migrate across, then look no further than the hreflang tab of the SEO Spider (once enabled in Configuration > Spider > hreflang).

    With detailed filters to explore every nook and cranny of hreflang implementation — no matter what issues your site faces, you’ll be able to make actionable recommendations to bridge the language gap.

    There’s also a handful of exportable hreflang reports from the top ‘Reports’ dropdown. While I won’t go through each tab here, I’d recommend you check out the following link which explains everything involving hreflang and the spider in much more detail:
    https://www.screamingfrog.co.uk/how-to-audit-hreflang/

    Blocked Resources

Another report that's been axed — it was introduced as a way to keep track of any CSS or JavaScript files being blocked from search bots, helping flag anything that might break rendering, make the domain uncrawlable, or just straight up slow it down.

    While these issues have drastically decreased over the years, they’re still important to keep track of. Fortunately, after running a crawl as Googlebot (Configuration > User-Agent > Googlebot) we can find all blocked resources within the Response Codes tab of the Spider — or if you’re just looking for issues relating to rendering, have a look at the bottom Rendered Page details tab:
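
For a quick standalone check outside the Spider, you can also list a page's CSS and JavaScript resources and test them against the live robots.txt for Googlebot. Below is a rough sketch using Python's requests, lxml and the built-in robots.txt parser; the page URL is a placeholder, and note that urllib.robotparser doesn't understand Googlebot's wildcard extensions, so treat its verdicts as indicative only:

```python
# Rough sketch: list a page's CSS/JS resources and flag any disallowed for
# Googlebot by the live robots.txt. The page URL is a placeholder; note that
# urllib.robotparser doesn't support Googlebot's wildcard extensions.
from urllib.parse import urljoin, urlsplit
from urllib.robotparser import RobotFileParser

import requests
from lxml import html

page_url = "https://www.example.com/"  # placeholder
tree = html.fromstring(requests.get(page_url, timeout=10).content)

resources = set()
for src in tree.xpath("//script/@src"):
    resources.add(urljoin(page_url, src))
for href in tree.xpath("//link[@rel='stylesheet']/@href"):
    resources.add(urljoin(page_url, href))

# One robots.txt parser per host, since resources may sit on other domains/CDNs
parsers = {}
for url in sorted(resources):
    host = "{0.scheme}://{0.netloc}".format(urlsplit(url))
    if host not in parsers:
        rp = RobotFileParser(host + "/robots.txt")
        rp.read()  # fetch and parse the live robots.txt for this host
        parsers[host] = rp
    if not parsers[host].can_fetch("Googlebot", url):
        print(f"Blocked for Googlebot: {url}")
```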

    Fetch as Google

"But wait — you can just use the new URL Inspection tool…". Well, yes — you can indeed use the new URL inspection to get a live render straight from Googlebot. But I still have a few quibbles with this.

    For a start, you can only view your render from Googlebot mobile, while poor desktop is completely neglected. Secondly, the render is just a static above-the-fold screenshot, rather than the full-page scrollable view we used to get in Fetch As.

    While it’s not quite the same as a direct request from Google, we can still emulate this within the Spider’s JavaScript rendering feature. To enable JavaScript rendering head over to Configuration > Spider > Rendering and switch the drop down to JavaScript.

    Once your crawl is complete, highlight a URL and head over to the Rendered Page tab towards the bottom. Here you can view (or export) a screenshot of your rendered page, alongside a list showing all the resources needed:

    If you want to mimic Google as much as possible, try switching the User-Agent to Googlebot or Googlebot mobile (Configuration > User-Agent). This will make the Spider spoof a request as if it were Google making it.

    It’s also worth mentioning that Googlebot renders JavaScript based on v41 of Chrome, whereas the Spider uses the updated v64 of Chromium. While there aren’t many massive differences between the two, there may be some discrepancies.

As a bonus, if you still want a desktop render direct from Google (or don't have access to Search Console for a domain), the PageSpeed Insights tool still produces a static desktop image as a representation of how Googlebot is rendering a page. It's not the most high-res or detailed image, but it will get the job done!
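
As a rough approximation outside the Spider, you can also check what raw HTML a server returns to a request identifying itself as Googlebot, using Python's requests library. This only spoofs the User-Agent header: it won't render JavaScript, and servers that verify Googlebot via reverse DNS will still treat it as an ordinary client. The URL below is a placeholder:

```python
# Minimal sketch: fetch raw HTML with a spoofed Googlebot user-agent header.
# This only changes the User-Agent string; it does not render JavaScript, and
# servers that verify Googlebot via reverse DNS will still see a normal client.
import requests

GOOGLEBOT_UA = (
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
)

response = requests.get(
    "https://www.example.com/",  # placeholder URL
    headers={"User-Agent": GOOGLEBOT_UA},
    timeout=10,
)

print(response.status_code)
print(response.text[:500])  # first 500 characters of the returned HTML
```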

    Robots.txt tester

Another tab I'm hopeful Google will eventually migrate over — testing your robots.txt before deploying it is crucial to avoid accidentally disallowing or blocking half your site to search engines.

If for any reason they don't happen to transfer this across to the new GSC, you can easily test any robots.txt configuration directly within the SEO Spider (Configuration > Robots.txt > Custom).

    This window will allow you to either import a live robots.txt file or make your own custom one. You can test if an individual URL is blocked by entering it into the search at the bottom. Alternatively, run a crawl of your site and the spider will obey the custom crawl behaviour.
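
If you also want to sanity-check a draft robots.txt outside the tool before deploying it, Python's built-in parser can do a basic version of the same test. Here's a minimal sketch with placeholder rules and URLs (urllib.robotparser follows the original robots.txt spec, so wildcard rules aren't evaluated the way Googlebot would evaluate them):

```python
# Minimal sketch: test a draft robots.txt against a few URLs before deploying.
# Rules and URLs are placeholders; urllib.robotparser follows the original
# robots.txt spec, so Googlebot-specific wildcard rules aren't evaluated.
from urllib.robotparser import RobotFileParser

draft = """\
User-agent: *
Disallow: /checkout/
Disallow: /search
"""

rp = RobotFileParser()
rp.parse(draft.splitlines())

for url in [
    "https://www.example.com/checkout/basket",
    "https://www.example.com/search?q=frogs",
    "https://www.example.com/blog/",
]:
    allowed = rp.can_fetch("Googlebot", url)
    print(f"{'ALLOWED' if allowed else 'BLOCKED'}  {url}")
```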

    For a much more in-depth guide on all the robots.txt capabilities of the SEO Spider, look here:
    https://www.screamingfrog.co.uk/robots-txt-tester/

    URL Parameters

An extremely useful tab — the URL Parameters report helps highlight all of the various query parameters Google found on its journey through your site. This is particularly useful when examining crawl efficiency or dealing with faceted navigation.

    Currently, there’s no way of replicating this report within the Spider, but we are able to get a similar sample from a crawl and some Excel tinkering.

    Just follow these steps or download the macro (linked below) –

    1. Run a crawl of the domain, export the internal HTML tab
    2. Cut & Paste the URL list into Column A of a fresh Excel sheet
    3. Highlight Column A > Data > Text-to-Columns > Delimited > Other: ? > Finish
    4. Highlight Column B > Data > Text-to-Columns > Delimited > Other: & > Finish
    5. Highlight Column A > Right-click > Delete
    6. Home > Editing > Find & Select > Go to Special > Blanks > OK
    7. With these highlighted > Home > Cells > Delete
    8. CTRL+A to highlight everything > Find & Replace > Replace: =* with nothing
    9. Stack all columns into one & add a heading of ‘Parameter’
    10. Highlight this master column > Insert > Pivot Table > Recommended > Count of Parameter

    To save some time, I’ve made an Excel macro to do this all for you, which you can download here. Just download the spreadsheet > click Enable Content & Enable Editing then follow the instructions.

    If everything’s done correctly, you should end up with a new table similar to this:

It's worth noting there will be some discrepancies between this and Google's own URL Parameters report. This boils down to the fundamental differences between the Spider & Googlebot, most of which are explained in much greater detail here:
    https://www.screamingfrog.co.uk/seo-spider/faq/#why-does-the-number-of-urls-crawled-not-match-the-number-of-results-indexed-in-google-or-errors-reported-within-google-webmaster-tools
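
If you'd rather skip the spreadsheet work entirely, a short script over the exported URL list will produce the same parameter counts. Here's a minimal sketch, assuming the Internal HTML export has been saved as internal_html.csv with the URLs in an 'Address' column (adjust the filename and column name to match your export):

```python
# Minimal sketch: count query parameter names across a list of crawled URLs.
# Filename and 'Address' column are assumptions; adjust to match your export.
import csv
from collections import Counter
from urllib.parse import urlsplit, parse_qsl

counts = Counter()

with open("internal_html.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        query = urlsplit(row["Address"]).query
        for name, _value in parse_qsl(query, keep_blank_values=True):
            counts[name] += 1

for name, count in counts.most_common():
    print(f"{name}\t{count}")
```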

    The King Is Dead, Long Live the King!

    Well, that’s all for now — hopefully you find some of these reports useful. If you want a full list of our other how-to guides, take a look through our user guide & FAQ pages. Alternatively, if you have any other suggestions and alternatives to the retired Google system, I’d love to hear about them in the comments below.

As a side note: for many of these reports, you can also combine them with the Scheduling feature to keep them running on a regular basis. Or, if you'd like some automatic reporting, take a quick look at setting this up in the 'Crawl Reporting in Google Data Studio' section of my previous post.

    The post Reviving Retired Search Console Reports appeared first on Screaming Frog.

    Screaming Frog SEO Spider Update – Version 11.0 https://www.screamingfrog.co.uk/seo-spider-11/ https://www.screamingfrog.co.uk/seo-spider-11/#comments Tue, 05 Mar 2019 09:52:50 +0000 https://www.screamingfrog.co.uk/?p=13763 We are delighted to announce the release of Screaming Frog SEO Spider version 11.0, codenamed internally as ‘triples’, which is a big hint for those in the know. In version 10 we introduced many new features all at once, so we wanted to make this update smaller, which also means...

    We are delighted to announce the release of Screaming Frog SEO Spider version 11.0, codenamed internally as ‘triples’, which is a big hint for those in the know.

In version 10 we introduced many new features all at once, so we wanted to make this update smaller, which also means we can release it quicker. This version includes one significant, exciting new feature and a number of smaller updates and improvements. Let's get to them.

    1) Structured Data & Validation

Structured data is becoming increasingly important for providing search engines with explicit clues about the meaning of pages, and for enabling special search result features and enhancements in Google.

    The SEO Spider now allows you to crawl and extract structured data from the three supported formats (JSON-LD, Microdata and RDFa) and validate it against Schema.org specifications and Google’s 25+ search features at scale.

    Structured Data

    To extract and validate structured data you just need to select the options under ‘Config > Spider > Advanced’.

    Structured Data Advanced Configuration

    Structured data itemtypes will then be pulled into the ‘Structured Data’ tab with columns for totals, errors and warnings discovered. You can filter URLs to those containing structured data, missing structured data, the specific format, and by validation errors or warnings.

    Structured Data tab

    The structured data details lower window pane provides specifics on the items encountered. The left-hand side of the lower window pane shows property values and icons against them when there are errors or warnings, and the right-hand window provides information on the specific issues discovered.

    The right-hand side of the lower window pane will detail the validation type (Schema.org, or a Google Feature), the severity (an error, warning or just info) and a message for the specific issue to fix. It will also provide a link to the specific Schema.org property.

    In the random example below from a quick analysis of the ‘car insurance’ SERPs, we can see lv.com have Google Product feature validation errors and warnings. The right-hand window pane lists those required (with an error), and recommended (with a warning).

    Structured Data Details tab

    As ‘product’ is used on these pages, it will be validated against Google product feature guidelines, where an image is required, and there are half a dozen other recommended properties that are missing.

Another example from the same SERP is Hastings Direct, who have a Google Local Business feature validation error against the use of 'UK' in the 'addressCountry' schema property.

    Structured Data Details Tab Error!

    The right-hand window pane explains that this is because the format needs to be two-letter ISO 3166-1 alpha-2 country codes (and the United Kingdom is ‘GB’). If you check the page in Google’s structured data testing tool, this error isn’t picked up. Screaming Frog FTW.
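
To illustrate the kind of check involved (this is just a rough sketch, not the Spider's validator), you could pull the JSON-LD out of a page yourself and flag any addressCountry values that aren't two-letter codes. The URL below is a placeholder:

```python
# Rough sketch only: extract JSON-LD blocks from a page and flag any
# addressCountry values that aren't two-letter codes (e.g. 'UK' instead of 'GB').
# This is nowhere near a full Schema.org / Google feature validator.
import json
import requests
from lxml import html

page_url = "https://www.example.com/"  # placeholder
tree = html.fromstring(requests.get(page_url, timeout=10).content)

def walk(node):
    """Recursively yield (key, value) pairs from nested JSON-LD structures."""
    if isinstance(node, dict):
        for key, value in node.items():
            yield key, value
            yield from walk(value)
    elif isinstance(node, list):
        for item in node:
            yield from walk(item)

for script in tree.xpath('//script[@type="application/ld+json"]/text()'):
    try:
        data = json.loads(script)
    except ValueError:
        print("Unparseable JSON-LD block found")
        continue
    for key, value in walk(data):
        if key == "addressCountry" and isinstance(value, str) and len(value) != 2:
            print(f"addressCountry '{value}' is not a two-letter ISO 3166-1 alpha-2 code")
```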

    The SEO Spider will validate against 26 of Google’s 28 search features currently and you can see the full list in our structured data section of the user guide.

As many of you will be aware, frustratingly Google don't currently provide an API for their own Structured Data Testing Tool (at least not a public one we can legitimately use), and they are slowly rolling out new structured data reporting in Search Console. As useful as the existing SDTT is, our testing found inconsistency in what it validates, and the results sometimes just don't match Google's own documented guidelines for search features (it often mixes up required and recommended properties, for example).

    We researched alternatives, like using the Yandex structured data validator (which does have an API), but again, found plenty of inconsistencies and fundamental differences to Google’s feature requirements – which we wanted to focus upon, due to our core user base.

    Hence, we went ahead and built our own structured data validator, which considers both Schema.org specifications and Google feature requirements. This is another first to be seen in the SEO Spider, after previously introducing innovative new features such as JavaScript Rendering to the market.

    There are plenty of nuances in structured data and this feature will not be perfect initially, so please do let us know if you spot any issues and we’ll fix them up quickly. We obviously recommend using this new feature in combination with Google’s Structured Data Testing Tool as well.

    2) Structured Data Bulk Exporting

    As you would expect, you can bulk export all errors and warnings via the ‘reports’ top-level menu.

    Structured Data Validation Error & Warning Reports

    The ‘Validation Errors & Warnings Summary’ report is a particular favourite, as it aggregates the data to unique issues discovered (rather than reporting every instance) and shows the number of URLs affected by each issue, with a sample URL with the specific issue. An example report can be seen below.

    Structured Data Validation Summary Report

    This means the report is highly condensed and ideal for a developer who wants to know the unique validation issues that need to be fixed across the site.

    3) Multi-Select Details & Bulk Exporting

    You can now select multiple URLs in the top window pane, view specific lower window details for all the selected URLs together, and export them. For example, if you click on three URLs in the top window, then click on the lower window ‘inlinks’ tab, it will display the ‘inlinks’ for those three URLs.

    You can also export them via the right click or the new export button available for the lower window pane.

    Multi-Select Bulk Exporting

    Obviously this scales, so you can do it for thousands, too.

This should provide a nice middle ground between exporting everything in bulk via the 'Bulk Export' menu and then filtering in spreadsheets, and the previous singular option via the right click.

    4) Tree-View Export

    If you didn’t already know, you can switch from the usual ‘list view’ of a crawl to a more traditional directory ‘tree view’ format by clicking the tree icon on the UI.

    directory tree view

However, while you were able to view this format within the tool, it hasn't been possible to export it into a spreadsheet. So, we went back to the drawing board and worked on an export which makes sense in a spreadsheet.

    When you export from tree view, you’ll now see the results in tree view form, with columns split by path, but all URL level data still available. Screenshots of spreadsheets generally look terrible, but here’s an export of our own website for example.

tree-view export spreadsheet

This allows you to quickly see the breakdown of a website's structure.

    5) Visualisations Improvements

    We have introduced a number of small improvements to our visualisations. First of all, you can now search for URLs, to find specific nodes within the visualisations.

    Search visualisations

    By default, the visualisations have used the last URL component for naming of nodes, which can be unhelpful if this isn’t descriptive. Therefore, you’re now able to adjust this to page title, h1 or h2.

    Node Labelling In Visualisations

    Finally, you can now also save visualisations as HTML, as well as SVGs.

    6) Smart Drag & Drop

    You can drag and drop any file types supported by the SEO Spider directly into the GUI, and it will intelligently work out what to do. For example, you can drag and drop a saved crawl and it will open it.

    You can drag and drop a .txt file with URLs, and it will auto switch to list mode and crawl them.

    Smart Drag & Drop

    You can even drop in an XML Sitemap and it will switch to list mode, upload the file and crawl that for you as well.

    Nice little time savers for hardcore users.

    7) Queued URLs Export

    You’re now able to view URLs remaining to be crawled via the ‘Queued URLs’ export available under ‘Bulk Export’ in the top level menu.

    queued URLs export

This provides an export of URLs that have been discovered and are queued to be crawled, in the order they will be crawled, based upon a breadth-first crawl.
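
For context on that ordering, a breadth-first crawl simply works through URLs in the order they were discovered, level by level, using a first-in-first-out queue. Here's a toy illustration of the queueing behaviour (not the Spider's actual internals, and the link graph below is made up):

```python
# Toy illustration of breadth-first ordering (not the Spider's actual internals):
# URLs are crawled in the order discovered, level by level, via a FIFO queue.
from collections import deque

# Hypothetical link graph: page -> pages it links to
links = {
    "/": ["/about", "/blog"],
    "/about": ["/team"],
    "/blog": ["/blog/post-1", "/blog/post-2"],
}

queue = deque(["/"])
seen = {"/"}

while queue:
    url = queue.popleft()          # crawl the oldest queued URL first
    print("Crawling:", url)
    for linked in links.get(url, []):
        if linked not in seen:     # queue newly discovered URLs at the back
            seen.add(linked)
            queue.append(linked)
```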

    8) Configure Internal CDNs

    You can now supply a list of CDNs to be treated as ‘Internal’ URLs by the SEO Spider.

    CDN Configuration

    This feature is available under ‘Configuration > CDNs’ and both domains and subfolder combinations can be supplied. URLs will then be treated as internal, meaning they appear under the ‘Internal’ tab, will be used for discovery of new URLs, and will have data extracted like other internal URLs.

    9) GA Extended URL Matching

Finally, if you have accounts that use extended URL rewrite filters in Google Analytics to view the full page URL in the interface (converting /example/ to www.example.com/example), these filters break what is returned from the API and from shortcuts in the interface (i.e. they return www.example.comwww.example.com/example).

This obviously means URLs won't match when you perform a crawl. We've now introduced an algorithm which takes this into account automatically and matches the data for you, as it was really quite annoying.

    Other Updates

    Version 11.0 also includes a number of smaller updates and bug fixes, outlined below.

• The 'URL Info' and 'Image Info' lower window tabs have been renamed to 'URL Details' and 'Image Details' respectively.
    • ‘Auto Discover XML Sitemaps via robots.txt’ has been unticked by default for list mode (it was annoyingly ticked by default in version 10.4!).
    • There’s now a ‘Max Links per URL to Crawl’ configurable limit under ‘Config > Spider > Limits’ set at 10k max.
    • There’s now a ‘Max Page Size (KB) to Crawl’ configurable limit under ‘Config > Spider > Limits’ set at 50k.
    • There are new tool tips across the GUI to provide more helpful information on configuration options.
    • The HTML parser has been updated to fix an error with unquoted canonical URLs.
    • A bug has been fixed where GA Goal Completions were not showing.

    That’s everything. If you experience any problems with the new version, then please do just let us know via support and we can help. Thank you to everyone for all their feature requests, bug reports and general support, Screaming Frog would not be what it is, without you all.

    Now, go and download version 11.0 of the Screaming Frog SEO Spider.

    Small Update – Version 11.1 Released 13th March 2019

    We have just released a small update to version 11.1 of the SEO Spider. This release is mainly bug fixes and small improvements –

    • Add 1:1 hreflang URL report, available under ‘Reports > Hreflang > All hreflang URLs’.
    • Cleaned up the preset user-agent list.
    • Fix issue reading XML sitemaps with leading blank lines.
    • Fix issue with parsing and validating structured data.
    • Fix issue with list mode crawling more than the list.
    • Fix issue with list mode crawling of XML sitemaps.
    • Fix issue with scheduling UI unable to delete/edit tasks created by 10.x.
    • Fix issue with visualisations, where the directory tree diagrams were showing the incorrect URL on hover.
• Fix issue with GA/GSC case insensitivity and trailing slash options.
    • Fix crash when JavaScript crawling with cookies enabled.

    Small Update – Version 11.2 Released 9th April 2019

    We have just released a small update to version 11.2 of the SEO Spider. This release is mainly bug fixes and small improvements –

    • Update to schema.org 3.5 which was released on the 1st of April.
    • Update splash screen, so it’s not always on top and can be dragged.
    • Ignore HTML inside amp-list tags.
    • Fix crash in visualisations when focusing on a node and using search.
    • Fix issue with ‘Bulk Export > Queued URLs’ failing for crawls loaded from disk.
    • Fix issue loading scheduling UI with task scheduled by version 10.x.
    • Fix discrepancy between master and detail view Structured Data warnings when loading in a saved crawl.
    • Fix crash parsing RDF.
    • Fix ID stripping issue with Microdata parsing.
    • Fix crashing in Google Structured Data validation.
    • Fix issue with JSON-LD parse errors not being shown for pages with multiple JSON-LD sections.
    • Fix displaying of Structured Data values to not include escape characters.
    • Fix issue with not being able to read Sitemaps containing a BOM (Byte Order Mark).
    • Fix Forms based Authentication so forms can be submitted by pressing enter.
    • Fix issue with URLs ending ?foo.xml throwing off list mode.
    • Fix GA to use URL with highest number of sessions when configuration options lead to multiple GA URLs matching.
    • Fix issue opening crawls via .seospider files with ++ in their file name.

    Small Update – Version 11.3 Released 30th May 2019

    We have just released a small update to version 11.3 of the SEO Spider. This release is mainly bug fixes and small improvements –

    • Added relative URL support for robots.txt redirects.
    • Fix crash importing crawl file as a configuration file.
    • Fix crash when clearing config in SERP mode
    • Fix crash loading in configuration to perform JavaScript crawling on a platform that doesn’t support it.
    • Fix crash creating images sitemap.
    • Fix crash in right click remove in database mode.
    • Fix crash in scheduling when editing tasks on Windows.
    • Fix issue with Sitemap Hreflang data not being attached when uploading a sitemap in List mode.
    • Fix configuration window too tall for small screens.
    • Fix broken FDD HTML export.
    • Fix unable to read sitemap with BOM when in Spider mode.

    The post Screaming Frog SEO Spider Update – Version 11.0 appeared first on Screaming Frog.

    Learn To Crawl: SEO Spider Training Days https://www.screamingfrog.co.uk/learn-to-crawl/ https://www.screamingfrog.co.uk/learn-to-crawl/#comments Tue, 12 Feb 2019 14:59:44 +0000 https://www.screamingfrog.co.uk/?p=13712 On the 24th of January SEOs gathered in London for Screaming Frog’s inaugural SEO Spider Training Event. Attendees flew in from far-flung places such as France, Germany, and even… Cornwall. (If you’re British you’ll appreciate just how far away that is!) Their destination was Marble Arch, London. More specifically, room...

    On the 24th of January SEOs gathered in London for Screaming Frog’s inaugural SEO Spider Training Event. Attendees flew in from far-flung places such as France, Germany, and even… Cornwall. (If you’re British you’ll appreciate just how far away that is!)

Their destination was Marble Arch, London. More specifically, room 'Adjust' within the exquisite function centre we'd hired for the event. Other rooms on the same floor were named 'Accept', 'Action', 'Affirm', 'Assume', and 'Agree', so positive vibes (and jokes about adjusting crawl speed) were felt throughout the day.

    Veteran SEO Frog and all-round nice guy Charlie Williams was our expert for the day. Charlie’s day was targeted towards intermediate users who knew how to crawl sites, but wanted to get the most out of the plethora of extra features the SEO Spider ships with after nine years of continuous development and improvement.

His excellent sermon, which was frequently expanded on through engaging audience questions, was divided into the following topics:

    • Setup, configuration, & crawling
    • Advanced crawling scenarios
    • External data & API integration
    • Analysis & reporting
    • Debugging & diagnosis

(Spoiler Alert!) For more specific details on what was covered, Ian from Venture Stream, who attended, has put together this great roundup.

    https://twitter.com/VentureStreamUK/status/1090687591412887557?ref_src=twsrc%5Etfw

    We also had an exclusive live link (Twitter DMs) back to Frog HQ, so we could pass any questions straight back to the development team in real time. While they were super helpful on numerous queries, they remained tight-lipped when pressed on what might be included in the upcoming SF Version 11…

Hot actionable advice wasn't the only thing on the menu, though. There was food on the menu too! Frequent coffee breaks gave everyone time to refresh and network, and our venue provided a premium cooked lunch which was delicious, and crucially came with ample pudding. (No, frogs' legs were not an option.)

Another added bonus was a helping of branded swag to take home: bottles, pens, notebooks, and those elusive SF stickers that everyone wants for their laptops.

    We had some great feedback from attendees:

    • Of those surveyed, 88% rated the day as either ‘very good’ or ‘excellent’.
• 88% of those surveyed felt that the event was at the right skill level for the audience, something we were very keen to get right!
• 100% said they would recommend the training to a friend.

    Some other feedback included:

    “Charlie was a top bloke and it was a great place to learn more about the features of Screaming Frog I seldom use. It was a great place to voice very technical queries and issues. The live link to HQ via the SF helpers was also a great addition.”

    “The event was very well structured and thought out. Individual sessions were just long enough to keep the audiences attention without splitting the day in too many separate parts. I especially liked our host, as he was able to explain complex subjects very easily understandable.”

“I learned things that I didn’t know existed – for example what you can do with APIs from GA/SC… I also liked being surrounded by high level SEO people- it’s not often that you get to meet such experts.”

    If you consider yourself a budding technical SEO, and you want to gain total confidence using Screaming Frog’s SEO spider then you’re in luck. Our next training event will be on the 18th of March in London. You can get an early bird ticket here, though act fast, as our first event sold out quickly and we have very limited spaces!

We're also open to running more bespoke in-house training events; if you have an internal SEO team, or you're an agency, you're welcome to pop us an email via support.

    The post Learn To Crawl: SEO Spider Training Days appeared first on Screaming Frog.
