web-scraping, web-crawling, web-parsing
I'm a Python developer who specializes in web scraping.
I build web scrapers and bots that work with websites' and applications' APIs (e.g. Telegram, the Google Places API). I can also parse various document formats (e.g. PDF, XML, JSON).
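As a small illustration of document parsing, here is a sketch using only the standard library to pull values out of an XML document; the sample markup and the `<title>` tag are made up for the example:

```python
import xml.etree.ElementTree as ET

# Illustrative sample document; real input would come from a file or an HTTP response.
SAMPLE = "<catalog><book id='1'><title>Dune</title></book></catalog>"

def extract_titles(xml_text):
    """Parse an XML string and collect the text of every <title> element."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.iter("title")]
```

The same idea scales to JSON (via `json.loads`) or to PDFs with a third-party library.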
I can deliver:
- extracted data (as a spreadsheet, e.g. CSV, Excel, or Google Sheets; as JSON; or as a database file);
- an automated script/application you can re-use later (I can also package it as a Docker container so it's easier to install and run yourself);
- a deployed application (e.g. on your server or in the cloud).
The scrapers can send the extracted data anywhere, including straight to your website's API.
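To show what sending data to an API might look like, here is a minimal sketch of a Scrapy-style item pipeline that POSTs each scraped item as JSON; the endpoint URL is a placeholder, not a real service, and a production version would add retries and error handling:

```python
import json
import urllib.request

API_URL = "https://example.com/api/items"  # hypothetical endpoint

def to_payload(item):
    """Serialize a scraped item (a dict-like object) into a JSON request body."""
    return json.dumps(dict(item), ensure_ascii=False).encode("utf-8")

class ApiExportPipeline:
    """Scrapy-style item pipeline: POST every item to the API as it is scraped."""

    def process_item(self, item, spider):
        req = urllib.request.Request(
            API_URL,
            data=to_payload(item),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        # Fire the request; in real code, wrap this in retry/backoff logic.
        with urllib.request.urlopen(req, timeout=10):
            pass
        return item
```

In a real Scrapy project this class would be enabled through the `ITEM_PIPELINES` setting.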
I also fix broken scrapers and maintain my own scripts.
Most often I use Scrapy, and sometimes Selenium when JavaScript execution is required. I try to avoid Selenium where possible: for example, when a website loads its data via Ajax, I can usually call the underlying JSON endpoints directly instead of rendering the page.
That said, I don't limit myself to these tools; the right approach depends on the website and the particular task.
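The Ajax shortcut mentioned above can be sketched with just the standard library: instead of driving a browser, call the JSON endpoint the page's JavaScript would call. Any concrete URL passed to this helper would come from inspecting the site's network traffic:

```python
import json
import urllib.request

def decode_json_body(body):
    """Decode a raw HTTP response body (bytes) into Python objects."""
    return json.loads(body.decode("utf-8"))

def fetch_ajax_data(url):
    """Fetch the JSON endpoint directly, skipping browser rendering entirely."""
    req = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return decode_json_body(resp.read())
```

This is typically faster and far less brittle than automating a full browser.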
Data accuracy and attention to detail come first for me. I'm also flexible about your time zone and preferred working hours.
If you have a web scraping or data extraction project, go ahead and contact me!