
Remote Caching (S3 and GCS)

By default, Grog caches your build output files and directories in the local GROG_ROOT directory (default: $HOME/.grog). For CI use cases, or when developing in larger repositories, it can be beneficial to share caches between machines. Grog supports the following providers for remote caching:

- Google Cloud Storage (GCS)
- AWS S3

Behavior

When using a remote cache, Grog effectively treats the remote store as a backup for your local file system: outputs are cached locally first and then uploaded to the cloud. Likewise, when checking the cache, Grog falls back to the remote cache if there is no local copy.
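The write-through and read-fallback behavior described above can be sketched as a two-layer cache. This is a minimal in-memory model, not Grog's actual implementation; the `tieredCache` type and its methods are hypothetical:

```go
package main

import "fmt"

// tieredCache models the remote-cache behavior with two layers:
// writes go to both layers, reads fall back from local to remote.
type tieredCache struct {
	local  map[string][]byte
	remote map[string][]byte
}

// Put caches the output locally first, then backs it up remotely.
func (c *tieredCache) Put(key string, val []byte) {
	c.local[key] = val
	c.remote[key] = val
}

// Get checks the local cache first; on a miss it falls back to the
// remote cache.
func (c *tieredCache) Get(key string) ([]byte, bool) {
	if v, ok := c.local[key]; ok {
		return v, true
	}
	if v, ok := c.remote[key]; ok {
		return v, true
	}
	return nil, false
}

func main() {
	c := &tieredCache{local: map[string][]byte{}, remote: map[string][]byte{}}
	c.Put("build/output", []byte("artifact"))
	delete(c.local, "build/output") // simulate a fresh machine: no local copy
	v, ok := c.Get("build/output")  // served from the remote layer
	fmt.Println(ok, string(v))
}
```

A cache miss on one machine can thus still be served by an upload from another, which is the point of sharing the cache in CI.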

Note: Grog does not garbage collect your cache files in any way, so even though storage is relatively cheap it is good practice to set up a mechanism for monitoring the cache's size.

Google Cloud Storage (GCS)

To enable remote caching via GCS add the following to your config:

[cache]
backend = "gcs"
[cache.gcs]
bucket = "<bucket-name>"
prefix = "<prefix-for-cache-files>" # optional default: '/'
credentials_file = "<path-to-google-credentials-json>" # optional

credentials_file should be a path to a service account JSON key file. When it is not provided, Grog will attempt to use whatever authentication is associated with the current session (see Application Default Credentials).

AWS S3

To enable remote caching via S3 add the following to your config:

[cache]
backend = "s3"
[cache.s3]
bucket = "<bucket-name>"
prefix = "<prefix-for-cache-files>" # optional default: '/'
credentials_file = "<path-to-aws-credentials-json>" # optional

credentials_file should be a path to an IAM account JSON key file.