
Remote Caching (S3 and GCS)

By default, Grog caches your build output files and directories in the local GROG_ROOT directory (default: $HOME/.grog). For CI use cases or when developing on larger repositories, it can be beneficial to share caches between machines. Grog supports the following providers for remote caching:

- Google Cloud Storage (GCS)
- AWS S3

Behavior

When using a remote cache, Grog effectively treats the remote storage as a backup for your local file system: outputs are first cached locally and then uploaded to the cloud. Likewise, when checking the cache, Grog falls back to the remote cache if there is no local copy.

Note: Grog does not garbage collect your cache files in any way, so even though storage is relatively cheap, it is good practice to set up a mechanism for monitoring the cache's size.
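For example, you could periodically check the size of the cache prefix with standard tooling, such as gsutil du -sh gs://<bucket>/<prefix> for GCS or aws s3 ls s3://<bucket>/<prefix> --recursive --summarize for S3.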

Google Cloud Storage (GCS)

To enable remote caching via GCS, add the following to your config:

[cache]
backend = "gcs"
[cache.gcs]
bucket = "<bucket-name>"
prefix = "<prefix-for-cache-files>" # optional, default: '/'
credentials_file = "<path-to-google-credentials-json>" # optional
shared_cache = true # optional, default: true

The shared_cache option controls whether the cache is shared between different hosts. When enabled (default), the cache uses only the directory name of your workspace for cache isolation, allowing different machines with the same repository name to share cache entries. When disabled, the cache uses a hash of the full workspace path, which isolates caches between different machines even if they have the same repository name.
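For illustration, the difference plays out in how cache entries are namespaced. The following is a hypothetical sketch; the exact object layout is an internal detail of Grog:

# shared_cache = true: entries keyed by the workspace directory name,
# so any machine with a checkout named "my-repo" shares the same entries
gs://<bucket>/<prefix>/my-repo/...
# shared_cache = false: entries keyed by a hash of the full workspace path,
# so /home/alice/my-repo and /home/bob/my-repo do not share entries
gs://<bucket>/<prefix>/3fa9c1.../...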

credentials_file should be a path to a service account JSON key file. When it is not provided, Grog will attempt to use whatever authentication is associated with the current session (see Application Default Credentials).
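On a developer machine you can set up Application Default Credentials with gcloud auth application-default login; on GCP infrastructure (for example, a CI runner on GCE or GKE) the attached service account is typically picked up automatically.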

AWS S3

To enable remote caching via S3, add the following to your config:

[cache]
backend = "s3"
[cache.s3]
bucket = "<bucket-name>"
prefix = "<prefix-for-cache-files>" # optional, default: '/'
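No credentials appear in the config above; assuming Grog's S3 backend uses the standard AWS SDK credential chain, you can supply them through the usual mechanisms, such as the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION environment variables or a shared ~/.aws/credentials file.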