How to customize Larbin
Where do the pages arrive?
In order to customize larbin according to your needs, you have to
create a useroutput file (see src/interf/useroutput.cc). This file must
define the 4 following functions:
- void loaded (html *page) : This function is called when the
fetch ends with success. From the page object, you can:
- get the url of the page by calling the method getUrl()
- get the content of the page by calling the method getPage()
- get the list of the sons by calling the method getLinks() (if
options.h includes "#define LINKS_INFO")
- get the http headers by calling the method getHeaders()
- get the tag with getUrl()->tag (if options.h includes "#define URL_TAGS")
For more details, see src/fetcher/file.h (for html) and src/utils/url.h.
- void failure (url *u, FetchError reason) : This function is
called when the fetch ends with an error. u describes the url of the
page; a description of its class can be found in src/utils/url.h.
reason explains why the fetch failed; enum FetchError is defined in src/types.h.
- void initUserOutput () : Function for initialising all your data,
called after all other initialisations
- void outputStats(int fds) : This function is called from the
webserver if you want to track some data. fds is the file descriptor
on which you must write to exchange with the net. This function is
called in another thread than the main one, with no lock at all, so be
careful.
In case of specificSearch, the functions loaded and failure are
only called for specific pages.
For examples of useroutput files, see src/interf/xxxuseroutput.cc.
There are several default modules you can use. For more details,
see the description of the compile-time options below.
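The four functions above can be sketched, outside larbin, like this. The html, url, and FetchError definitions below are minimal stand-ins written only so the sketch compiles on its own; in a real useroutput.cc you would include larbin's headers instead. Only the four function names and the accessor names (getUrl, getPage) come from the text above; everything else is illustrative.

```cpp
#include <cstdio>
#include <cstring>
#include <unistd.h>

// Minimal stand-ins for larbin's real classes (see src/fetcher/file.h for
// html and src/utils/url.h for url); larbin itself provides the real ones.
struct url {
  const char *u;
  const char *getUrl() { return u; }   // stand-in accessor
};
struct html {
  url *myUrl;
  const char *content;
  url *getUrl() { return myUrl; }
  const char *getPage() { return content; }
};
// Stand-in for enum FetchError from src/types.h.
enum FetchError { fetchTimeout = 1 };

static int pagesSeen = 0;  // illustrative counter reported by outputStats

// Called for every page fetched with success.
void loaded(html *page) {
  pagesSeen++;
  printf("fetched %s (%zu bytes)\n",
         page->getUrl()->getUrl(), strlen(page->getPage()));
}

// Called when a fetch ends with an error.
void failure(url *u, FetchError reason) {
  printf("failed %s (error %d)\n", u->getUrl(), (int)reason);
}

// Called once, after all other initialisations.
void initUserOutput() { pagesSeen = 0; }

// Called from the webserver thread: write whatever you want to track on
// fds. Remember that no locking is done for you here.
void outputStats(int fds) {
  char buf[64];
  int n = snprintf(buf, sizeof buf, "pages seen : %d\n", pagesSeen);
  if (n > 0) {
    ssize_t ignored = write(fds, buf, (size_t)n);
    (void)ignored;
  }
}
```

Inside larbin, the stubs disappear and the four functions are all you have to provide.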
The basic configuration is done in larbin.conf; this file is read
when larbin is launched, so you don't need to recompile after changing
it. Here are the different fields of this file :
- From : YOUR mail, sent with the http headers : very useful when someone
wants to complain about the robot :-(
- UserAgent : name of the robot (sent with each request)
- httpPort : port on which the http statistic webserver is launched
(see http://localhost:8081/ when larbin is running). If you set the port
to 0, no webserver is launched; this allows larbin to run without
launching any extra thread.
- inputPort : port on which you can submit urls to fetch. If this
line does not exist or if the port is 0, no input will be available.
- pagesConnexions : Number of pages you fetch in parallel (adapt
this to your network speed). Decrease it if you have too many
timeouts (see the stats) : 10% of timeouts seems to be a maximum.
- dnsConnexions : Number of dns calls in parallel. 10 should be ok.
- depthInSite : How deep do you want to go in a site.
- noExternalLinks : Only follow links which stay on the same site.
- waitDuration : time between 2 calls to the same server, in
seconds. It should never be less than 30 s. However, even with 60 s,
it won't slow the crawler down much, and it is much more respectful
of the servers you crawl.
- proxy : if you want to connect through a proxy (host port). Unless
you have no other way to connect to the internet, you should not use
this because it might slow the crawler a lot, and is probably also not
so good for the proxy (especially if it has a cache).
- StartUrl : Where the search starts. This does not appear to be very
important, as long as the page contains external urls.
- limitToDomain : with this option enabled, you will only crawl
pages of some specific domain (.fr and .dk for example).
- forbiddenExtensions : What are the extensions you don't want?
(write all of them and terminate your list with end)
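Putting some of the fields above together, a larbin.conf could look like the fragment below. All the values and the mail address are only examples; the exact syntax should be checked against the larbin.conf shipped with the sources.

```
# Illustrative larbin.conf (example values only; compare with the
# larbin.conf shipped with the sources for the exact syntax)
From             webmaster@example.com
UserAgent        larbin_example
httpPort         8081
inputPort        1976
pagesConnexions  50
dnsConnexions    3
depthInSite      5
waitDuration     60
StartUrl         http://www.example.com/
forbiddenExtensions
.gif .jpg .jpeg .png .tar .gz .pdf .mp3
end
```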
Compile-time options are defined in options.h; they change what larbin
will do, and you must recompile larbin after changing any of them.
- The first thing you can define is the module you want to use for
output. This defines what you want to do with the pages larbin
gets. Here are the different options :
These modules can be customized in src/types.h.
- DEFAULT_OUTPUT : This module mainly does nothing.
- SIMPLE_SAVE : This module saves pages on disk. It stores
2000 files per directory (with an index).
- MIRROR_SAVE : This module saves pages on disk with the
hierarchy of the site they come from. It uses one directory per site.
- STATS_OUTPUT : This module computes some stats on the
pages. The results can be seen on the statistic webserver
(http://localhost:8081/ by default).
If you want to define a new module, please have a look at
"src/interf/useroutput.cc", and do not hesitate to send me your work
for inclusion.
- SPECIFICSEARCH : If this option is set, larbin's goal is
to search for specific documents. You must then define 2 NULL-terminated
arrays of char *, contentTypes and privilegedExts, which define
respectively the content types which are looked for and the extensions
of the corresponding files (the extension is only used to speed up the
search; a page is recognized as specific only by looking at the
content-type in its http headers). You should also define another
option telling how you want to manage specific pages :
- DEFAULT_SPECIFIC : Default way of managing specific files :
they are treated as html (ie same size limitations...), except that
they are not parsed.
- SAVE_SPECIFIC : Specific pages are saved on disk. This
allows, in particular, specific pages to be much bigger (see
src/types.h for customizing this module).
- DYNAMIC_SPECIFIC : For big pages, larbin uses
dynamically allocated buffers.
If you want to define a new policy, please have a look at
"src/fetch/specbuf.cc" and "src/fetch/specbuf.h", and do not hesitate
to send me your work for inclusion.
- LINKS_INFO : Associates with each page the list of the
links it contains. This information can be used in
"useroutput.cc" with page->getLinks().
- FOLLOW_LINKS : If this option is not set, html pages
won't be parsed and links won't be followed. This can be useful when
you feed larbin through the input system.
- NO_DUP : if this option is set, larbin does not return
success when a page has the same content as a previously fetched one.
- URL_TAGS : if this option is set, an int is associated
with every url (0 by default). If you use the input system, you'll have
to give an int and the url instead of just the url. When the page is
fetched, you'll get it back with the int (redirections are followed).
- EXIT_AT_END : If this option is set, larbin exits when
there are no more urls to fetch.
- IMAGES : If set, larbin gets the images contained in
pages (ie follow img src links). Make sure to update
forbiddenExtensions in larbin.conf
according to your needs.
- ANYTYPE : If set, larbin fetches every page without caring
about its content type. Make sure to update forbiddenExtensions in
larbin.conf according to your needs.
- COOKIES : If set, larbin manages cookies. Up to now, it
is a very simple implementation, but it should be suitable in more
than 90% of the situations.
- CGILEVEL : This option is followed by an integer which
specifies how reluctant to cgis you are. 0 means you want all cgis, 1
means you refuse urls with '?' or '=' inside, 2 means you also
want to ban urls with 'cgi' inside.
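The CGILEVEL rule just described can be sketched as a small predicate. This is only an illustration of the behaviour stated above, not larbin's actual filtering code.

```cpp
#include <cstring>

// Sketch of the CGILEVEL rule: returns true when a url is accepted
// at the given level (0 = accept all, 1 = no '?' or '=', 2 = also no "cgi").
bool acceptUrl(const char *url, int cgiLevel) {
  if (cgiLevel >= 1 && (strchr(url, '?') != NULL || strchr(url, '=') != NULL))
    return false;                    // level 1: refuse '?' and '='
  if (cgiLevel >= 2 && strstr(url, "cgi") != NULL)
    return false;                    // level 2: also ban "cgi"
  return true;                       // level 0: everything is accepted
}
```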
- MAXBANDWIDTH : This option is followed by an integer
which indicates the maximum bandwidth larbin should use. Because of
the way bandwidth is limited, larbin might use 10 to 20 percent more
bandwidth than expected. If this option is not set, there is no
bandwidth limit.
- DEPTHBYSITE : If this option is set, when a link points
to another site, the depth of the new url is reinitialized; else it is
inherited from the page that contained the link.
- THREAD_OUTPUT : This option must be set if the code in
"useroutput.cc" (the code you add) can use blocking instructions
(read/write on network file descriptor...). If it is not set, there is
only one thread in the program (except the webserver if any), so no
locking is needed.
- RELOAD : If this option is enabled, larbin restarts from
where it last stopped when you launch it again. This allows you to stop
and restart larbin as needed (or to restart after a crash). If you want
to restart from scratch, use the -scratch option.
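Since all the options above are plain #defines, a selection in options.h might look like the fragment below. The option names come from the list above; the particular combination chosen here is only an example.

```cpp
// Illustrative options.h selection (recompile larbin after any change;
// the exact file layout may differ, see the comments in options.h itself).
#define SIMPLE_SAVE         // save pages on disk, 2000 files per directory
// #define SPECIFICSEARCH   // enable the specific-document mode instead
#define FOLLOW_LINKS        // parse html pages and follow their links
#define DEPTHBYSITE         // reset the depth when a link changes site
#define EXIT_AT_END         // stop when there is no url left to fetch
#define CGILEVEL 1          // refuse urls containing '?' or '='
```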
If you want to tune larbin a little more, go and see src/types.h (it is
supposed to be commented enough). Of course, for those changes to take
effect, you have to recompile larbin.
- NOWEBSERVER : Do not launch the webserver. This can be
useful if you don't want to launch any thread.
- GRAPH : Include nice histograms in the real time stat page.
- NDEBUG : Disable debugging information in the webserver.
- NOSTATS : Disable stats information in the webserver.
- STATS : Display stats on stdout every 8 seconds.
- BIGSTATS : Display the name of every fetched page on
stdout. This might slow larbin down quite a lot.
- CRASH : Should only be used for reporting terrible
bugs (with make debug).
If you need something more, you'll have to do it (or ask me
to do it :-)).