The main purpose of this approach is to bypass anti-scraping defense systems. The downside is the extra time it adds to each request.

I have faced your exact same issues and solved them in two ways:

1) by disabling IP rotation: this is effective if your target website does not block or rate-limit repeated requests from the same IP. In my experience, only some pages are protected by these systems

2) by launching many processes at the same time: this is the approach I recommend. If you need to scrape multiple pages, run several scraping processes in parallel (I suggest no more than 5, to avoid overloading the target site)
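A minimal sketch of option 1), assuming a Python scraper built on the standard library's `urllib`; the proxy endpoints are hypothetical placeholders. With an empty `ProxyHandler`, requests go out directly from your own IP, skipping the per-request proxy overhead:

```python
import itertools
import urllib.request

USE_PROXY_ROTATION = False  # flip to True only if the target blocks your IP

# Hypothetical proxy pool; replace with your real rotating endpoints.
PROXY_POOL = itertools.cycle([
    "http://proxy1.example.com:8080",
    "http://proxy2.example.com:8080",
])

def make_opener():
    """Build an opener that either rotates proxies or connects directly."""
    if USE_PROXY_ROTATION:
        proxy = next(PROXY_POOL)
        handler = urllib.request.ProxyHandler({"http": proxy, "https": proxy})
    else:
        # An empty ProxyHandler disables all proxies: direct connection.
        handler = urllib.request.ProxyHandler({})
    return urllib.request.build_opener(handler)

# Usage (hypothetical URL):
# page = make_opener().open("https://example.com", timeout=10).read()
```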
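Option 2) can be sketched as follows, again assuming Python; the URLs and fetch logic are placeholders. The answer suggests separate processes, but for I/O-bound scraping a thread pool from `concurrent.futures` gives the same concurrency with less overhead (`ProcessPoolExecutor` is a drop-in swap if you prefer real processes):

```python
from concurrent.futures import ThreadPoolExecutor
import urllib.request

def fetch_page(url):
    """Download one page; swap in your real request + parsing logic."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read()

def scrape_all(urls, worker=fetch_page, max_workers=5):
    """Scrape pages concurrently, capped at 5 workers to stay polite.

    Results come back in the same order as the input URLs.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(worker, urls))

# Usage (hypothetical URLs):
# pages = scrape_all([f"https://example.com/page/{i}" for i in range(1, 21)])
```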

I hope this helps!

Technology Bishop, Software Engineer & Technical Writer | Hire me: https://antonellozanini.com/