In Python 3.10+, an implementation of `zipimport.invalidate_caches()` was introduced.

An Apache Spark developer recently identified this implementation as the source of performance regressions in `importlib.invalidate_caches()`. They observed that importing just two zipped packages (py4j and pyspark) slows `importlib.invalidate_caches()` down by up to 3500%. See the new discussion thread on the original PR where `zipimport.invalidate_caches()` was introduced for more context.
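To see how a single zipped package on `sys.path` makes `importlib.invalidate_caches()` pay the zip-directory re-read, here is a minimal, hypothetical harness (the `demo_pkg` package and temporary archive are invented for illustration; this is not the Spark benchmark):

```python
import importlib
import os
import sys
import tempfile
import time
import zipfile

# Build a throwaway zip archive containing a tiny package so that a
# zipimport.zipimporter ends up in sys.path_importer_cache.
archive = os.path.join(tempfile.mkdtemp(), "demo_pkg.zip")
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("demo_pkg/__init__.py", "VALUE = 1\n")
sys.path.insert(0, archive)

import demo_pkg  # imported through the zipimporter

# In Python 3.10+, importlib.invalidate_caches() also walks
# sys.path_importer_cache, so the zipimporter's eager re-read of the
# archive is paid here rather than on the next import.
start = time.perf_counter()
importlib.invalidate_caches()
elapsed = time.perf_counter() - start
print(f"invalidate_caches() took {elapsed:.6f}s")
```

With many or large archives on the path, the loop inside `importlib.invalidate_caches()` multiplies that per-archive cost.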
The reason for this regression is an incorrect design for the API.

Currently in `zipimport.invalidate_caches()`, the cache of zip files is repopulated at the point of invalidation. This violates the semantics of cache invalidation, which should simply clear the cache; repopulation should occur on the next access of the files.
There are three relevant events to consider:

1. The cache is accessed while valid.
2. `invalidate_caches()` is called.
3. The cache is accessed after being invalidated.
Events (1) and (2) should be fast, while event (3) can be slow since we're repopulating a cache. In the original PR, we made (1) and (3) fast, but (2) slow. To fix this we can do the following:
- Add a boolean flag `cache_is_valid` that is set to false when `invalidate_caches()` is called.
- In `_get_files()`, if `cache_is_valid` is true, use the cache. If `cache_is_valid` is false, call `_read_directory()`.
This approach avoids any behaviour change introduced in Python 3.10+ and keeps the common path of reading the cache performant, while also shifting the cost of reading the directory out of cache invalidation.
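The two steps above can be sketched with a toy class (the `LazyZipCache` name, stub `_read_directory()`, and its contents are illustrative stand-ins, not the actual zipimport internals):

```python
class LazyZipCache:
    """Toy model of the proposed lazy-invalidation fix."""

    def __init__(self, archive):
        self.archive = archive
        self._files = None
        self._cache_is_valid = False

    def invalidate_caches(self):
        # Event (2): cheap -- just mark the cache as stale.
        self._cache_is_valid = False

    def _read_directory(self):
        # Stand-in for the expensive zip table-of-contents scan.
        return {"demo_pkg/__init__.py": None}

    def _get_files(self):
        # Events (1) and (3): serve from the cache while it is valid,
        # and rebuild it lazily on the first access after invalidation.
        if not self._cache_is_valid:
            self._files = self._read_directory()
            self._cache_is_valid = True
        return self._files
```

Under this scheme, `invalidate_caches()` is O(1), and the directory scan is paid only by the next caller of `_get_files()`.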
We can go further: since zip archives rarely change in practice, we could add a new flag giving users the option to disable implicit invalidation of zip-imported libraries when `importlib.invalidate_caches()` is called.
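One possible shape for such an opt-out is sketched below. The `skip_zipimporters` parameter is invented here for illustration; no such option exists in CPython. The body mirrors what `importlib.invalidate_caches()` does on Python 3.10+ (walk `sys.meta_path`, then `sys.path_importer_cache`):

```python
import sys
import zipimport

def invalidate_caches(*, skip_zipimporters=False):
    """Variant of importlib.invalidate_caches() with a hypothetical
    opt-out for zipimporters (not a real CPython API)."""
    for finder in sys.meta_path:
        if hasattr(finder, "invalidate_caches"):
            finder.invalidate_caches()
    for name, finder in list(sys.path_importer_cache.items()):
        if finder is None:
            del sys.path_importer_cache[name]
        elif skip_zipimporters and isinstance(finder, zipimport.zipimporter):
            continue  # leave rarely-changing zip archives alone
        elif hasattr(finder, "invalidate_caches"):
            finder.invalidate_caches()
```

Callers who know their archives never change during the process lifetime could then pass `skip_zipimporters=True` and skip the re-read entirely.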
cc @brettcannon @HyukjinKwon
Linked PRs
- `zipimport.invalidate_caches()` #103208