Most Java caching solutions rely on maps and other in-heap structures to make your data accessible as fast as possible. While this works well for reasonably sized caches, once the volume of cached objects grows beyond the usual few gigabytes and into the tens (or hundreds) of gigabytes, the heap reaches sizes where the garbage collector starts to become a hindrance.
This problem is usually mitigated by flushing cache storage to disk or by moving to external platforms such as memcached and its modern NoSQL cousins. Either of these solutions, however, adds complexity to the stack and the deployment, and neither can deliver data as fast as an in-memory cache.
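There is a third option: keep the data in memory inside the same process, but outside the heap the garbage collector scans. As a minimal sketch of that idea, consider serializing values into direct `ByteBuffer`s, which the JVM allocates in native memory. The `OffHeapCache` class and its methods below are hypothetical names for illustration, not any real library's API, and real implementations must also manage freeing and reusing the native memory:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: values live in direct (off-heap) buffers, so the GC
// only sees the small on-heap map of keys and buffer references, not the
// cached payloads themselves.
public class OffHeapCache {
    private final Map<String, ByteBuffer> index = new HashMap<>();

    public void put(String key, String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        // allocateDirect reserves native memory outside the Java heap
        ByteBuffer buf = ByteBuffer.allocateDirect(bytes.length);
        buf.put(bytes);
        index.put(key, buf);
    }

    public String get(String key) {
        ByteBuffer buf = index.get(key);
        if (buf == null) return null;
        // duplicate() gives an independent position/limit over the same memory
        ByteBuffer copy = buf.duplicate();
        copy.flip(); // rewind to read what was written
        byte[] bytes = new byte[copy.remaining()];
        copy.get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        OffHeapCache cache = new OffHeapCache();
        cache.put("greeting", "hello off-heap");
        System.out.println(cache.get("greeting")); // prints "hello off-heap"
    }
}
```

The trade-off is that every read pays a deserialization cost, which is why off-heap caches are slower per access than plain heap maps but remain far faster than disk or a network hop.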
BigCache addresses this problem by keeping the cached data in memory within the same JVM process, but outside the JVM heap. This keeps the garbage collector out of the cache's memory zone, so the JVM heap can be sized for processing needs alone. While