Git-like operations for datasets and Jupyter notebooks
`quilt3` provides a simple command-line interface for versioning large datasets and storing them in Amazon S3. There are only two commands you need to know:
- `push` creates a new package revision in an S3 bucket that you designate
- `install` downloads data from a remote package to disk
In short, neither Git nor Git LFS has the capacity or performance to function as a repository for data. S3, on the other hand, is widely used, fast, supports object versioning, and currently stores trillions of objects.
Similar concerns apply when baking datasets into Docker containers: images bloat and slow container operations down.
You will need either an AWS account, credentials, and an S3 bucket, or a Quilt enterprise stack with at least one bucket. Before you can read from and write to S3 with `quilt3`, you must first do one of the following:
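As a sketch of the two setup paths (the catalog URL below is a placeholder, not a real endpoint): for plain AWS, store credentials with the AWS CLI; for a Quilt enterprise stack, point `quilt3` at your catalog and log in.

```shell
# Option 1: plain AWS -- store credentials that quilt3 (via boto3) will pick up
aws configure

# Option 2: Quilt enterprise stack -- register your catalog, then log in
# (https://yourquilt.yourcompany.com is a placeholder)
quilt3 config https://yourquilt.yourcompany.com
quilt3 login
```

`quilt3 login` opens a browser window to obtain a token from the catalog, so it is only needed for the enterprise-stack path.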
A Quilt package contains any collection of data (usually as files), metadata, and documentation that you specify.
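The same operations are available from Python, which is convenient inside a Jupyter notebook. A minimal sketch, assuming `quilt3` is installed and the listed files exist locally; `YOUR_NAME` and `YOUR_BUCKET` are placeholders:

```python
import quilt3

# Build a package in memory: map logical keys to local files
p = quilt3.Package()
p.set("urchins2006-2019.parquet", "urchins2006-2019.parquet")
p.set("README.md", "README.md")

# Attach package-level metadata (an illustrative dict, not a required schema)
p.set_meta({"survey": "reef-check", "years": "2006-2019"})

# Publishing requires S3 credentials; this creates a new package revision
p.push(
    "YOUR_NAME/reef-check",
    registry="s3://YOUR_BUCKET",
    message="Initial commit of reef data",
)
```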
Let's fetch a data package from S3 and write it to local disk:
```shell
quilt3 install akarve/reef-check \
    --registry s3://quilt-example
```
Now you've got the data in your current working directory:

```
CA-06-california-counties.json  quilt_summarize.json  urchins-interactive.json
README.md                       reef-check.ipynb      urchins2006-2019.parquet
```
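If you only need part of a package, recent versions of `quilt3` can install a subset with the `--path` flag (flag availability depends on your `quilt3` version; the package name `akarve/reef-check` is taken from the push output shown later in this page):

```shell
# Install only the notebook, not the whole package
quilt3 install akarve/reef-check \
    --registry s3://quilt-example \
    --path reef-check.ipynb \
    --dest ./notebook-only
```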
Now let's imagine that we've modified this data locally. We save our Jupyter notebook and push the results back to Quilt:
```shell
# Be sure to substitute YOUR_NAME and YOUR_BUCKET with the desired strings
quilt3 push YOUR_NAME/reef-check \
    --dir . \
    --registry s3://YOUR_BUCKET \
    --message "Initial commit of reef data"
```
Quilt will then print out something like the following:
```
Package YOUR_NAME/reef-check@ea334b7 pushed to s3://YOUR_BUCKET
Successfully pushed the new package to https://yourquilt.yourcompany.com/b/YOUR_BUCKET/packages/YOUR_NAME/reef-check
```
You can confirm that the package landed in the registry by listing its packages:

```shell
quilt3 list-packages s3://YOUR_BUCKET
```
In the Quilt catalog, you will now see a new package revision, complete with a README, datagrid preview, and an interactive visualization in Altair.
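Because every push is content-addressed, you can later reproduce this exact revision by its top hash. A sketch, using the `ea334b7` hash from the push output above (the `--top-hash` flag is assumed to be available in your `quilt3` version):

```shell
# Reinstall the exact revision printed by `quilt3 push`
quilt3 install YOUR_NAME/reef-check \
    --registry s3://YOUR_BUCKET \
    --top-hash ea334b7
```

Pinning a top hash in scripts and notebooks is what makes a downstream analysis reproducible even after the package receives new revisions.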