ScrapeBox General Usage Instructions

ScrapeBox Videos & Tutorials

ScrapeBox Video Tutorials
ScrapeBox FAQ Tutorials
Trackback Submitter
The Comment Poster
Problems and Crashes

Other Third Party Resources
Ultimate Guide to ScrapeBox
Advanced Guide to Link Building
ScrapeBox White Hat SEO Guide

ScrapeBox Files and Folders

Firstly, ScrapeBox does not need installing; simply run it.

ScrapeBox has a file called scrapebox.ini in the /Configuration/ folder. This just remembers the various settings, so next time you run ScrapeBox it will retain all your settings such as which search engines you have ticked, the number of harvested results you last selected and so on. If you delete this, ScrapeBox will recreate another with default values.

There is also a file called footprint.ini which contains the footprints for your custom footprint library; we have included some examples to get you started. This is a simple text file you can open and edit in Notepad, adding or deleting entries as you like. Alternatively, when you enter custom footprints in the harvester, ScrapeBox will automatically add them to this file so you can reuse them without having to retype them.

You do NOT have to replace your footprint file when you update; keep your footprint.ini and do not overwrite it if you want to keep the footprints you have accumulated. You can of course delete footprint.ini if you like, and ScrapeBox will create another blank one ready to start a fresh footprint library.

Also, ScrapeBox has a number of folders, each containing simple text files with examples of what data to put in them in order to use the various features of ScrapeBox. They are logically arranged according to the feature they are used with.


You are free to change any of the file names and folder names except Addons, Blacklist and Configuration. The Addons folder stores all the free addons you can download from the Addons menu in ScrapeBox, and the Configuration folder contains various settings.

The Blacklist is a special folder containing a text file that houses blacklisted domains; the domains in this file will be automatically stripped out of the URL list when harvesting and when posting. You can add and remove domains from this list, however the file must stay in its present location with the same name in order to function. ScrapeBox looks for /Blacklist/Blacklist.txt, and if it's not present because you renamed or moved it, then no URLs will be removed.

To completely remove ScrapeBox from your computer you can just delete the entire ScrapeBox folder.
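As a rough sketch of what that stripping amounts to, here is a minimal Python version. The subdomain-matching behaviour and the default file path are assumptions for illustration, not ScrapeBox's actual code:

```python
from urllib.parse import urlparse

def load_blacklist(path="Blacklist/Blacklist.txt"):
    """Read one blacklisted domain per line, skipping blank lines."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def strip_blacklisted(urls, blacklist):
    """Drop any URL whose host is a blacklisted domain or a subdomain of one."""
    kept = []
    for url in urls:
        host = urlparse(url).netloc.lower()
        if not any(host == d or host.endswith("." + d) for d in blacklist):
            kept.append(url)
    return kept

urls = ["http://spam.example.com/page", "http://goodsite.org/post"]
print(strip_blacklisted(urls, {"example.com"}))  # ['http://goodsite.org/post']
```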
Custom Footprints


When you enter any “Custom Footprints”, upon closing, ScrapeBox will also save a file called footprint.ini so you can build a library of your custom footprints to save retyping them each time you run the application. We have included footprints for Wordpress and Movable Type built into the application, and some sample Custom Footprints in your footprint.ini to give you examples to use or modify.

The first time you run ScrapeBox, it will also create a file called scrapebox.ini in the directory it’s run from. This just remembers the various settings, so next time you run ScrapeBox it will retain all your settings.

You can edit these files with Notepad, or you can delete them and ScrapeBox will recreate them again with default values.

Using Custom Footprints


  • To use your Custom Footprints, just select your desired Footprint from the drop-down menu and select the keywords you want to use in conjunction with your Footprint.
  • Select the Search Engines you want to harvest URL’s from, Google, Yahoo or Bing.
  • Select the Proxies Checkbox if you wish to use random proxies from your list, otherwise leave this blank.
  • Select Threads, this is optional. The default value of 10 is suitable for the average PC and Internet Connection Speed, however you may experiment with this value by raising or lowering it to see what’s optimal for your system.
  • Click “Start Harvesting”

If you select the built-in “Wordpress” or “Movable Type” footprints, there is no need to populate the Footprint Box with anything, just enter your keywords and the rest of the process is the same.
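Conceptually, the harvester pairs your footprint with each keyword to form the search queries it sends. A minimal sketch of that combination (the footprint string below is just an example, not a built-in one):

```python
def build_queries(footprint, keywords):
    """Combine one footprint with each keyword, producing the
    search queries a harvester would send to the engines."""
    return [f"{footprint} {keyword}".strip() for keyword in keywords]

queries = build_queries('"powered by wordpress"', ["gardening tips", "fishing"])
print(queries[0])  # "powered by wordpress" gardening tips
```

The `strip()` also covers the built-in footprint case above, where the Footprint Box stays empty and the query is just the keyword.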



ScrapeBox lets you import Proxies to hide your IP when it connects to the internet. The “Proxies” Checkbox is a global switch; when this is ticked, every internet connected task (Harvesting, Pagerank Checking, Ping aka PRStorm Mode, Wordpress & Movable Type Commenting) is run through your proxies. Free Proxies found on various public lists can be unreliable: search engines place limitations on automated queries if they are accessed from the same IP too many times, it’s possible someone is running queries similar to yours through the same IP lists, or some proxy servers will detect you rapidly accessing them and time out. So we do advise you try to obtain your own Proxies for running ScrapeBox for the best results. You can use regular proxies, or private proxies which require authentication. IP’s can be in the following format:

IP:Port

When using Private Proxies, you receive a Username and a Password from your proxy provider. Input Private Proxies in ScrapeBox in the following format:

IP:Port:Username:Password

So if "Fred" had a private proxy and his password was "fred123", his Private Proxy setup would look like this:

IP:Port:Fred:fred123

ScrapeBox will do its best to test your proxies and ignore dead and non-responsive ones in a number of different ways. ScrapeBox has a built-in proxy harvester and tester, where you can gather free proxies and test proxies to see if they are blocked by Google or are non-responsive using the Proxy Manager.
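A quick sketch of how the two proxy list formats above can be told apart when parsing an entry; this is an illustrative assumption about the format, not ScrapeBox's internal parser:

```python
def parse_proxy(line):
    """Split one proxy list entry into its parts.
    Accepts IP:Port or IP:Port:Username:Password."""
    parts = line.strip().split(":")
    proxy = {"ip": parts[0], "port": int(parts[1])}
    if len(parts) == 4:  # private proxy with credentials appended
        proxy["username"], proxy["password"] = parts[2], parts[3]
    return proxy

print(parse_proxy("127.0.0.1:8080:Fred:fred123"))
```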


Also when running, ScrapeBox will randomly rotate proxies automatically and ignore any dead ones so the Harvesting or Posting operation isn’t interrupted.

Harvested URL’s and List Management


Harvested URL’s are available in the URL box after a harvesting run, and from here you can perform some basic list management and Pagerank checking. You can Import and Export lists of URL’s, remove any Duplicate URL’s, or remove Duplicate Domains. You can also click on the column headers to sort your list from the Highest Pagerank to the Lowest, or sort the URL’s in Alphabetical order, and you can export a list of URL’s complete with their Pagerank values. You can also Right Click any URL in the list to remove it.

Ping Mode


Ping Mode sends referrer hits, and works in the same fashion as PRStorm. It’s useful in a number of ways, for example:

  • Artificially inflate the traffic to your domains, so you're prepared when affiliate companies ask “Where is your traffic coming from?”. You can add your domain to one file and place a list of sites you want to show as your traffic sources in the other; this could be sites that do have a link to yours (collect them with the harvester using a link: footprint), or it could be Google search strings for phrases you rank for, to make it appear you receive a lot of organic traffic. You could also place the referrer URL’s that various advertising platforms send, to make it appear you obtained the traffic through media buys.
  • You can use Ping Mode to make your site appear on other websites that show a “Highest Referring Domains” list; these types of referrer lists or widgets often have default text which you can use as a harvester footprint to gather hundreds of domains that display referrers. Many of these have high Pagerank.
  • Also with Ping Mode, you can inflate the number of “Views” on something you have submitted, such as an Article. Sites that display user submitted content often list it in order of popularity, so the piece of content with the highest view count sits at the top of its category or on a special “Top 100 List” and receives even more exposure. You can make your content appear there too, but do it the fast way with Ping Mode.
  • Another example: published referrer logs like Awstats and Webalizer display clickable links to sites that have sent them hits or traffic. Ping Mode can make your site appear in those logs, which can send traffic and backlinks if the logs are published publicly.
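Under the hood, a referrer hit is just an ordinary HTTP request whose Referer header is set to the URL you want to appear as the traffic source. A minimal sketch (the URLs and user agent string here are made-up placeholders):

```python
import random
import urllib.request

USER_AGENTS = ["Mozilla/5.0 (Windows NT 10.0; Win64; x64)"]  # hypothetical list

def build_ping(target_url, referrer_url):
    """Build a GET request for target_url whose Referer header
    claims the visit came from referrer_url."""
    return urllib.request.Request(
        target_url,
        headers={
            "Referer": referrer_url,
            "User-Agent": random.choice(USER_AGENTS),
        },
    )

req = build_ping("http://stats.example.net/", "http://mysite.example.com/")
# urllib.request.urlopen(req)  # uncomment to actually send the hit
print(req.get_header("Referer"))  # http://mysite.example.com/
```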

Comment Posting


ScrapeBox can auto post to the Wordpress, Movable Type, BlogEngine and B2Evolution blog platforms with the Fast Poster, and additionally to Drupal and ExpressionEngine with the Slow Poster. To do this you load files for each variable: Name (which is hyperlinked as the anchor text), your Email, your Website URL, your Comment, and the last box is the list of Blogs you wish to post to. We have included a sample set of files in the Comment Poster folder so you can see how this works. You can place as many entries in the Name, Email, Website and Comment files as you like, and ScrapeBox will randomly fetch one from each file so the posted comments have a variety of info and each comment is somewhat different.

ScrapeBox also has a number of the internet’s most popular useragents built in, and it will randomly select a useragent from its internal list; this helps make each connection more varied. If you also use a list of proxies and tick the proxies checkbox, the tool becomes very hard to detect, with posts being made from different countries if you have a varied proxy list, along with different useragents and different comment details for every comment.
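The "one random entry from each file" behaviour described above can be sketched in a few lines; the field names and sample values here are illustrative, not ScrapeBox's internals:

```python
import random

def assemble_comment(names, emails, websites, comments):
    """Pick one random line from each detail list, mirroring how
    one entry is drawn from each loaded file per posted comment."""
    return {
        "name": random.choice(names),
        "email": random.choice(emails),
        "website": random.choice(websites),
        "comment": random.choice(comments),
    }

details = assemble_comment(
    ["Fred", "Mary"],
    ["fred@example.com", "mary@example.com"],
    ["http://example.com"],
    ["Nice post!", "Great article."],
)
print(details["name"] in ("Fred", "Mary"))  # True
```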

When posting to Wordpress or Movable Type, ensure the list of blog URL’s you want to comment on contains only Wordpress or only Movable Type blogs, and select the right checkbox for the platform you are posting to. In other words, don’t mix platforms in one file; create separate lists for each. The Threads value is set at 10 by default; as with all the tool’s operations, you may want to experiment with raising and lowering this value to tune it to your system, connection speed or proxy speed.

Mass Posting to the Same Blog

Wordpress contains built in "Flood Protection" as well as "Duplicate Comment Protection" that will impact your ability to mass post to a single Wordpress Blog using a similar Name, Email, IP Address or Comment. This does "not" impact making one comment on thousands of blogs; it only applies to making thousands of comments on the one blog.

You can modify a file in Wordpress to disable the Flood Protection; the code to edit is located on different lines depending on your Wordpress version. So if you do want to populate your own blog with comments, the Wordpress forum has posts regarding the removal of flood protection, or if you are stuck, contact us with your Wordpress version and we will let you know what to edit.

Also introduced in v1.0.0.5 is "Spinnable Comments", which uses tokens in the same format as many article spinners like Power Article Rewriter output. For example, in your Comments.txt file you can have:

Have a {great|good|excellent|fantastic} day!

<a href="">{Harvester|Scraper|Poster|Commenter}</a>

Every time ScrapeBox posts a comment, it will randomly fetch one option from inside the curly braces; this has the power to create many unique versions of the same comment, as well as add variation to your backlink anchor text.
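A minimal spintax resolver showing how such tokens expand; this is a generic sketch of the technique, not ScrapeBox's own implementation:

```python
import random
import re

# Matches one innermost {a|b|c} token (no nested braces inside it)
TOKEN = re.compile(r"\{([^{}]*)\}")

def spin(text):
    """Replace each {a|b|c} token with one randomly chosen option;
    innermost tokens are resolved first, so nesting also works."""
    while TOKEN.search(text):
        text = TOKEN.sub(lambda m: random.choice(m.group(1).split("|")),
                         text, count=1)
    return text

print(spin("Have a {great|good|excellent|fantastic} day!"))
```

With four options the example line alone yields four possible comments; several tokens in one comment multiply together quickly.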

Will my backlinks show up instantly?

Some will and some won’t, it just depends on the comment approval settings of the particular blog. Some blogs are set to auto-approve comments, some require the blog owner to approve them and others only moderate first time commenters so if you have a previously approved comment your new one will show up instantly.

ScrapeBox Version Numbers

The ScrapeBox version numbers feature 3 digits, for example:

Version 1.2.3

The first digit represents a Major Build, the second a Minor Build, and the third a Release, which is normally bug fixes and minor tweaks. The latest version can always be downloaded from the members page where you originally downloaded ScrapeBox after payment. The last digit is likely to increment very quickly; these Incremental Builds contain minor things such as tweaks, fixing our bad spelling (thanks Olden), and generally small adjustments and fixes as we go along.
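When checking whether a downloaded build is newer than the one you are running, the three parts should be compared as numbers, not as a string. A small sketch:

```python
def parse_version(version):
    """Split "Major.Minor.Release" into a comparable tuple of ints."""
    major, minor, release = (int(part) for part in version.split("."))
    return (major, minor, release)

# Comparing numerically avoids the string-sort trap where "1.10.0" < "1.9.0"
assert parse_version("1.10.0") > parse_version("1.9.0")
assert parse_version("1.2.3") == (1, 2, 3)
print("ok")
```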

You are free to visit the download page at any time and use the inbuilt "Check for Updates" feature, and download the latest incremental build if there is one higher than the one you are currently running. To update manually, just download and overwrite your old scrapebox.exe and run the application; or simply run the built-in updater. There is no need to complete any installation process.

We won't email you when Incremental Builds are pushed out to the download page, as this would likely become very annoying due to the frequency of releases. But if you wish to stay on the "bleeding edge" and always run the most up to date build, the best way is to bookmark the download page and visit it every couple of days.