Web Crawler To Extract Links

Authors

  • Chhaveesh Agnihotri, Yash Rupavatiya, Akshat Bansal, Rahmatullah Payam, Sheetal Laroiya

DOI:

https://doi.org/10.48047/CU/54/02/2039-2051

Keywords:

Web Crawler, URL Extraction, Domain Filtering, Rate Limiting, Performance Optimization

Abstract

Web crawlers are essential tools for automatically extracting URLs and other information from websites, but they often face challenges when dealing with large and complex sites. This project focuses on creating a robust web crawler that efficiently extracts URLs from a target domain and its subdomains, filters out external links, and allows users to filter links by HTTP status code. The tool is designed to handle large websites through performance optimizations such as rate limiting and caching, ensuring it does not overload the sites it crawls. Implemented in Python and Bash, the crawler provides flexible output options for ease of use, making it a versatile solution for web administrators, data analysts, and security professionals. This paper discusses the design and implementation of the crawler, along with its performance-monitoring features, which help ensure accuracy and scalability in diverse web environments.
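
The techniques named in the abstract can be sketched in a few lines of Python. The snippet below is a minimal illustration only, not the authors' implementation: it assumes the third-party requests and BeautifulSoup (bs4) libraries, uses a hypothetical target domain example.com, reduces "caching" to a visited-URL set, and the names crawl and in_scope are invented for this sketch.

import time
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

TARGET_DOMAIN = "example.com"   # hypothetical target domain
REQUEST_DELAY = 1.0             # seconds between requests (rate limiting)

def in_scope(url):
    # Domain filtering: keep only the target domain and its subdomains.
    host = urlparse(url).netloc
    return host == TARGET_DOMAIN or host.endswith("." + TARGET_DOMAIN)

def crawl(seed, status_filter=None):
    seen = set()                # simple cache: each URL is fetched at most once
    queue = [seed]
    results = {}
    while queue:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        # Status-code filtering: record only the codes the user asked for.
        if status_filter is None or resp.status_code in status_filter:
            results[url] = resp.status_code
        # Extract links, resolve relative hrefs, and enqueue in-scope URLs.
        for a in BeautifulSoup(resp.text, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"])
            if in_scope(link):
                queue.append(link)
        time.sleep(REQUEST_DELAY)   # rate limiting: avoid overloading the site
    return results

if __name__ == "__main__":
    # Example: list only the in-scope links that resolve with HTTP 200.
    for url, code in crawl("https://example.com/", status_filter={200}).items():
        print(code, url)

A production crawler would additionally honor robots.txt and parallelize fetches; this sketch keeps only the four behaviors the abstract highlights (scope filtering, status-code filtering, rate limiting, and caching).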


Published

2025-01-10

How to Cite

Agnihotri, C., Rupavatiya, Y., Bansal, A., Payam, R., & Laroiya, S. (2025). Web Crawler To Extract Links. Cuestiones De Fisioterapia, 54(2), 2039-2051. https://doi.org/10.48047/CU/54/02/2039-2051