
Frequently asked questions

Keyword SERP scraping

Introduction

A quick introduction to keyword scraping


Web scraping is the process of automated data extraction from a website or service.

The best-known web scraping business is Google: search engines rely on visiting websites and scraping the most relevant information from them.

Keyword scraping is the process of extracting data from SERPs. A SERP, a "search engine results page", is the page a user of Google or Bing sees after entering a keyword.

 

The most important parts of the SERP

  • The SERP is usually separated into
    • paid results (creatives/advertisements)
    • organic results (normal search engine results)
  • Each result contains valuable data (see the data-structure sketch below)
    • Rank: the position of the result
    • URL: the URL/website of the result
    • Title: the website title
    • Description: meta information or a generated summary of the website content
    • (optional) Sitelinks: additional important links inside the page
  • Google Knowledge Graph
    An encyclopedia-style panel about attractions, people, businesses, books, movies, weather, etc.
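
To make this structure concrete, here is a minimal Python sketch of how a scraped result and page could be represented. The class and field names are our own illustrative choices, not a fixed format used by any particular tool.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SerpResult:
        """One result scraped from a search engine result page."""
        rank: int            # position of the result on the page
        url: str             # URL/website of the result
        title: str           # website title
        description: str     # meta description or generated summary
        sitelinks: List[str] = field(default_factory=list)  # optional deeper links

    @dataclass
    class SerpPage:
        """A full SERP for one keyword, split into paid and organic sections."""
        keyword: str
        paid_results: List[SerpResult] = field(default_factory=list)
        organic_results: List[SerpResult] = field(default_factory=list)

    # Example: a first organic result for the keyword "web scraping".
    example = SerpResult(
        rank=1,
        url="https://en.wikipedia.org/wiki/Web_scraping",
        title="Web scraping - Wikipedia",
        description="Web scraping is data scraping used for extracting data from websites.",
    )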

Scraping.services is a professional scraping business: our clients can scrape any amount of data without having to deal with the technical difficulties.

 

Defenses against scraping
Scraping, or any sort of automated access to a website, is often unwelcome. There are several possible layers of defense.

  • Filtering by geo-location or meta parameters (browser type, cookies)
  • Filtering by IP address or IP network, by the number of requests within a time range, and by the delays between requests (see the sketch after this list)
  • Filtering by the keywords used and/or unusual search parameters
  • Modification of the frontend HTML code
  • An artificial intelligence that takes all available input data and weighs a decision on whether the request comes from a human
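
To illustrate the IP/rate-based layer, here is a rough Python sketch of a per-IP sliding-window filter as a website might run it server-side. The window size and threshold are invented for illustration; real sites tune these values and combine the check with the other layers listed above.

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60              # illustrative value only
    MAX_REQUESTS_PER_WINDOW = 30     # illustrative value only

    _requests_by_ip = defaultdict(deque)

    def looks_automated(ip, now=None):
        """Return True if this IP exceeded the allowed request rate."""
        now = time.time() if now is None else now
        log = _requests_by_ip[ip]
        log.append(now)
        # Drop timestamps that fell out of the sliding window.
        while log and now - log[0] > WINDOW_SECONDS:
            log.popleft()
        return len(log) > MAX_REQUESTS_PER_WINDOW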

Google, for instance, uses all of the mechanisms mentioned above; it is probably the best-defended public website in the world against automated access.
Almost all public websites use one or more layers of scraping defense.

 

Methods of scraping: overcoming the defenses

To scrape successfully and overcome these defenses, a number of different challenges have to be met.

In the case of Google it is important to emulate an up-to-date browser as closely as possible, and to understand and handle cookies and URL parameters correctly. It is also important to have a large number of IP addresses without a prior abuse history, dedicated to the job.

Each IP address should be treated as its own identity, and the scraping tool needs to behave like a new, believable website user.
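
A minimal sketch of that idea in Python, using the requests library with one session per IP/identity, so each identity keeps its own cookie jar and browser-like headers. The proxy URL and header values are placeholders for illustration, not a recommended production setup.

    import requests

    def make_identity(proxy_url):
        """One session per IP/identity: its own cookies and browser-like headers."""
        session = requests.Session()
        session.proxies = {"http": proxy_url, "https": proxy_url}
        # Headers mimicking an up-to-date desktop browser (placeholder values).
        session.headers.update({
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                          "AppleWebKit/537.36 (KHTML, like Gecko) "
                          "Chrome/120.0.0.0 Safari/537.36",
            "Accept-Language": "en-US,en;q=0.9",
            "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        })
        return session

    # Each identity keeps its own cookie jar, so it behaves like a distinct user.
    identity = make_identity("http://user:pass@proxy.example.com:8080")
    response = identity.get("https://www.google.com/search",
                            params={"q": "keyword scraping", "hl": "en"})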

 

Depending on the Google domain (country) and the language used (localization), as well as on the type of search query, the scraping tool needs to adapt its request rates, insert delays at the right moments, sleep for a while after working through a set of keywords, and exchange its identity/IP before any scraping-detection mechanism might trigger.
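
The following simplified Python sketch shows such a schedule: a randomized delay between single queries, a longer pause after each keyword batch, and a switch of identity/IP at batch boundaries. All numbers are invented for illustration and would need tuning per domain, language, and query type; fetch stands for whatever function performs a single SERP request, for example the session-based request from the previous sketch.

    import random
    import time

    KEYWORDS_PER_BATCH = 20          # illustrative value only
    DELAY_RANGE = (8.0, 25.0)        # seconds between single queries, illustrative
    BATCH_PAUSE_RANGE = (300, 900)   # seconds of sleep after a batch, illustrative

    def scrape_keywords(keywords, identities, fetch):
        """Work through keywords with delays, batch pauses and identity rotation."""
        identity_index = 0
        for i, keyword in enumerate(keywords):
            fetch(identities[identity_index], keyword)
            # Randomized delay between single queries.
            time.sleep(random.uniform(*DELAY_RANGE))
            # After a full batch: sleep for a while and switch identity/IP.
            if (i + 1) % KEYWORDS_PER_BATCH == 0:
                time.sleep(random.uniform(*BATCH_PAUSE_RANGE))
                identity_index = (identity_index + 1) % len(identities)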

 

To scrape successfully in the long term, it is important to have some sort of self-learning, adaptive system implemented: if Google starts to detect the activity, the scraping tool should adapt to the new situation.
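
A crude Python sketch of such a feedback loop, reusing the per-identity session from the earlier sketch: an HTTP 429 status or a CAPTCHA page is treated as a detection signal, the tool backs off after each signal, and it stops entirely if the signals persist. The thresholds and markers are illustrative assumptions, not a description of Google's actual responses.

    import time

    def adaptive_fetch(identity, keyword, base_delay=10.0, max_strikes=3):
        """Fetch one SERP; back off on detection signals, stop if they persist."""
        delay = base_delay
        strikes = 0
        while strikes < max_strikes:
            response = identity.get("https://www.google.com/search",
                                    params={"q": keyword})
            # Crude detection signals: a rate-limit status or a CAPTCHA page.
            blocked = response.status_code == 429 or "captcha" in response.text.lower()
            if not blocked:
                return response
            strikes += 1
            delay *= 4               # slow down aggressively after each signal
            time.sleep(delay)
        # Persistent detection: abort rather than keep hammering the target site.
        raise RuntimeError("detection signals persist, stopping this identity")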

 

The most important feature of a scraping script might be its ability to adapt and even stop scraping if required; a scraping tool should never continue after triggering detection mechanisms.

By ignoring detection, a scraping project causes unwanted attention as well as trouble and administrative workload for the target website.

The aim should be to scrape in a way that does not harm the target website; ideally, the scraping stays undetected.

 


Our service removes all of these difficulties from the task, leaving our clients with a simple frontend or API to simply request and receive the data they want.