
Frequently Asked Questions





A quick introduction to data scraping


Web scraping is the process of extracting data from a website.

The most famous web scraping business is Google: search engines rely on visiting websites and scraping the most relevant information from them.

 

Scraping can be an easy task or a very challenging one, depending on the website or service targeted.

 

Defenses against scraping
Scraping, or any sort of automated access to websites, is often unwelcome, and there are several possible layers of defense against it.

Google, for instance, uses all of these defensive layers; it is likely the best-defended public website in the world against automated access.
Bing is a little easier: it likely does not run comparable AI-based detection and allows more automated access than Google.

 

Methods of scraping: overcoming the defenses

To scrape successfully and overcome these defenses, a number of different challenges have to be met.

In the example of Google, it is important to emulate an up-to-date browser as closely as possible and to understand and handle cookies and URL parameters correctly. It is also important to have a large number of IP addresses without a prior abuse history and to keep them dedicated to the job.
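
As a rough illustration, a single "browser-like" search request in Python could look like the sketch below. The q/hl/gl parameters are the commonly used Google search parameters, but the header strings are illustrative only and would need to track current browser releases.

import requests

# Minimal sketch: emulate an up-to-date browser for one search request.
# Header values are illustrative and must be kept in sync with real browsers.
session = requests.Session()
session.headers.update({
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/124.0.0.0 Safari/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
})

# q = query, hl = interface language, gl = country; the session keeps the
# cookies the site sets, so follow-up requests look like the same browser.
params = {"q": "example search", "hl": "en", "gl": "us"}
response = session.get("https://www.google.com/search", params=params, timeout=30)
print(response.status_code, len(response.text))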

Each IP address should be handled as its own identity, and the scraping tool needs to behave like a new, believable website user.
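
One way to model this is sketched below: each IP address is bundled with its own user agent and cookie session. The proxy addresses and user-agent strings are placeholders, not real infrastructure.

from dataclasses import dataclass, field
import requests

# Sketch: one identity = one dedicated proxy IP + one user agent + one cookie jar.
@dataclass
class Identity:
    proxy_url: str   # placeholder address, e.g. "http://203.0.113.10:8080"
    user_agent: str  # placeholder browser string
    session: requests.Session = field(default_factory=requests.Session)

    def __post_init__(self):
        self.session.proxies = {"http": self.proxy_url, "https": self.proxy_url}
        self.session.headers["User-Agent"] = self.user_agent

identities = [
    Identity("http://203.0.113.10:8080", "Mozilla/5.0 ... Chrome/124 ..."),
    Identity("http://203.0.113.11:8080", "Mozilla/5.0 ... Firefox/126 ..."),
]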


Depending on the Google domain (country), the language used (localization) and the type of search query, the scraping tool needs to adapt its request rates, insert delays at the right moments, sleep for a while after working through a set of keywords, and exchange identity/IP before any detection mechanism is likely to trigger.
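
A possible pacing loop is sketched below. It assumes a pool of Identity objects as above and a caller-supplied fetch(identity, keyword) function that performs one search; the delay values are illustrative, not tuned recommendations.

import random
import time

# Sketch: irregular delays between queries, a longer pause after each
# keyword set, and an identity/IP switch before detection becomes likely.
def scrape_keywords(identities, keywords, fetch, batch_size=10):
    identity = random.choice(identities)
    for i, keyword in enumerate(keywords, start=1):
        fetch(identity, keyword)                  # one search request + parsing
        time.sleep(random.uniform(5, 15))         # human-like delay between queries
        if i % batch_size == 0:
            time.sleep(random.uniform(120, 300))  # rest after a set of keywords
            identity = random.choice(identities)  # exchange identity/IP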

 

To scrape successfully long term, it is important to have some sort of self-learning, adapting system in place: if Google starts to detect the activity, the scraping tool should adapt to the new behaviour and alert the developers to look into it.

 

The most important feature of a scraping script might be the ability to adapt and even stop scraping entirely if required; a scraping tool should never continue after triggering a detection mechanism.
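
A minimal guard along these lines might look like the sketch below. It assumes the block signals Google is known to show (HTTP 429/503 responses and an "unusual traffic" interstitial), and the back-off times and stop threshold are illustrative only.

import logging
import time

log = logging.getLogger("scraper")

# Sketch: back off on suspected detection, alert developers, and stop
# completely once a threshold of triggers is reached.
class DetectionGuard:
    def __init__(self, max_triggers=3):
        self.triggers = 0
        self.max_triggers = max_triggers

    def check(self, response):
        blocked = (response.status_code in (429, 503)
                   or "unusual traffic" in response.text.lower())
        if blocked:
            self.triggers += 1
            log.warning("Possible detection (%d/%d), backing off",
                        self.triggers, self.max_triggers)
            if self.triggers >= self.max_triggers:
                raise RuntimeError("Detection threshold reached, stopping the scrape")
            time.sleep(60 * self.triggers)  # progressively longer pause, never push on
        return blocked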

By ignoring detection, a scraping project causes unwanted attention as well as trouble and administrative workload for the target website.

The aim should be to scrape in a way that does not harm the target website; ideally, the scraping stays undetected.

 


Our service is aimed at removing all of these difficulties from the task, leaving our customers with a simple frontend/API to get exactly the data they want.