
dcrawl – Web Crawler For Unique Domains

dcrawl
dcrawl is a simple, but smart, multithreaded web crawler for randomly gathering huge lists of unique domain names.


How does dcrawl work?


dcrawl takes one site URL as input and detects all a href= links in the site's body. Each found link is put into a queue, and each queued link is then crawled in the same way, branching out to more URLs found in the links on each page's body.
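
The sketch below illustrates this fetch/extract/enqueue loop in Go, using only the standard library. It is not dcrawl's actual source code: the naive regexp-based link extraction, the small page cap, and the seed URL are assumptions made to keep the example short and runnable.

// Minimal sketch of the crawl loop described above (not dcrawl's real code):
// fetch a page, pull href targets out of the body, and feed new URLs back
// into the queue. dcrawl parses HTML properly; this sketch uses a naive regexp.
package main

import (
    "fmt"
    "io"
    "net/http"
    "regexp"
)

// hrefRe crudely matches absolute http(s) links inside href="..." attributes.
var hrefRe = regexp.MustCompile(`href="(https?://[^"]+)"`)

func extractLinks(body []byte) []string {
    var links []string
    for _, m := range hrefRe.FindAllSubmatch(body, -1) {
        links = append(links, string(m[1]))
    }
    return links
}

func main() {
    queue := []string{"http://wired.com"} // seed URL, as in the usage example below
    seen := map[string]bool{}

    for len(queue) > 0 && len(seen) < 50 { // small page cap so the sketch terminates
        u := queue[0]
        queue = queue[1:]
        if seen[u] {
            continue
        }
        seen[u] = true

        resp, err := http.Get(u)
        if err != nil {
            continue // skip inaccessible sites
        }
        body, _ := io.ReadAll(io.LimitReader(resp.Body, 1<<20)) // read at most 1 MB
        resp.Body.Close()

        fmt.Println(u)
        for _, link := range extractLinks(body) {
            queue = append(queue, link)
        }
    }
}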

dcrawl Web Crawler Features

  • Branches out only to a predefined number of links found per hostname.
  • Limits the number of different hostnames allowed per domain (avoids the subdomain crawling hell of sites like blogspot.com).
  • Can be restarted with the same list of domains – previously saved domains are added back to the URL queue.
  • Crawls only sites that return a text/html Content-Type in the HEAD response (see the sketch after this list).
  • Retrieves site bodies up to a maximum size of 1 MB.
  • Does not save inaccessible domains.
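
As a rough illustration of the Content-Type check in the feature list, the sketch below issues a HEAD request and accepts only responses that report text/html. It is not taken from dcrawl's source; the helper name looksLikeHTML is hypothetical.

// Sketch of the HEAD-request filter from the feature list (hypothetical helper,
// not dcrawl's actual function): only URLs whose HEAD response reports a
// text/html Content-Type are worth fetching in full.
package main

import (
    "fmt"
    "net/http"
    "strings"
)

func looksLikeHTML(url string) bool {
    resp, err := http.Head(url)
    if err != nil {
        return false // unreachable hosts are simply skipped
    }
    defer resp.Body.Close()
    return strings.HasPrefix(resp.Header.Get("Content-Type"), "text/html")
}

func main() {
    fmt.Println(looksLikeHTML("https://example.com"))
}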

dcrawl Usage



Example:

go build dcrawl.go
./dcrawl -url http://wired.com -out ~/domain_lists/domains1.txt -t 8
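
Here, -url sets the seed URL and -out the file that collected domains are written to; -t presumably controls the number of concurrent crawler threads.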






Osman

Osman Gani is the Chief SEO Expert and the Founder of 'Tech Office'. He has a deep interest in all current affairs topics, lives in India, and loves being a self-dependent person. He is the driving force of our team, working to improve this platform day by day. His passion, dedication and quick decision-making ability make him stand apart from others.
