| commit | 79f2dd9707666ca8fb9e63e331d71c0a376c029e | [log] [tgz] |
|---|---|---|
| author | Jordan Cook <JWCook@users.noreply.github.com> | Tue Jul 06 21:29:32 2021 |
| committer | GitHub <noreply@github.com> | Tue Jul 06 21:29:32 2021 |
| tree | d1535ee72763463d06430249ce1e578b8ab36747 | |
| parent | df4d83c606dbd3c993ae0d8705dd0d983c5b9f1d [diff] | |
| parent | a520d1c36338dae43673deb6b662237766115061 [diff] |
Merge pull request #311 from JWCook/type-hints

Finish type annotations and PEP-561 compliance
requests-cache is a transparent, persistent HTTP cache for the python requests library. It's a convenient tool to use with web scraping, consuming REST APIs, slow or rate-limited sites, or any other scenario in which you're making lots of requests that are expensive and/or likely to be sent more than once.
See full project documentation at: https://requests-cache.readthedocs.io
It can be used as a drop-in replacement for requests.Session, or installed globally to add caching to all requests functions.

First, install with pip:

```
pip install requests-cache
```
Next, use requests_cache.CachedSession to send and cache requests. To quickly demonstrate how to use it:

This takes ~1 minute:

```python
import requests

session = requests.Session()
for i in range(60):
    session.get('http://httpbin.org/delay/1')
```
This takes ~1 second:

```python
import requests_cache

session = requests_cache.CachedSession('demo_cache')
for i in range(60):
    session.get('http://httpbin.org/delay/1')
```
The URL in this example adds a delay of 1 second, simulating a slow or rate-limited website. With caching, the response is fetched once and saved to demo_cache.sqlite, and subsequent requests return the cached response near-instantly.
If you don't want to manage a session object, requests-cache can also be installed globally:

```python
import requests
import requests_cache

requests_cache.install_cache('demo_cache')
requests.get('http://httpbin.org/delay/1')
```
To find out more about what you can do with requests-cache, see the full project documentation: https://requests-cache.readthedocs.io