Scrapes multiple domains for email addresses and saves the results to a separate file per domain.
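Conceptually, the scraper fetches each page, extracts anything that looks like an email address, and appends it to a file named after the domain. The snippet below is only a minimal sketch of that idea using the standard library; it is not the project's actual code (see the repository for the real implementation).

```python
# Minimal sketch (not the project's actual code): fetch a page, pull out
# email-looking strings with a regex, and append them to <domain>.txt.
import re
from urllib.parse import urlparse
from urllib.request import urlopen

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def scrape_emails(url: str) -> None:
    # Download the page and decode it leniently.
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="ignore")
    # Collect unique email addresses found in the HTML.
    emails = set(EMAIL_RE.findall(html))
    # One output file per domain.
    domain = urlparse(url).netloc or url
    with open(f"{domain}.txt", "a", encoding="utf-8") as out:
        for email in sorted(emails):
            out.write(email + "\n")

if __name__ == "__main__":
    scrape_emails("https://example.com")  # placeholder URL
```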
```sh
pip install -r requirements.txt
```
Paste the links you want to scrape into websites.txt, one per line. Use just the domain or a precise part of the URL (more info in the wiki). Edit banned.txt if you want to exclude additional link parts; example files are shown below.
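For illustration only (these entries are placeholders, not required values), the two files could look like this:

`websites.txt`:
```
example.com
https://example.org/contact
```

`banned.txt`:
```
logout
unsubscribe
```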
For more examples and usage, please refer to the Wiki.
- 0.0.1
  - First Release
MrEdinLaw – @MrEdinLaw
https://github.com/mredinlaw/WebScraper
- Fork it (https://github.com/mredinlaw/WebScraper/fork)
- Create your feature branch (`git checkout -b feature/myFeature`)
- Commit your changes (`git commit -am 'Add some myFeature'`)
- Push to the branch (`git push origin feature/myFeature`)
- Create a new Pull Request