Archive old raw files to other cost-efficient storage?

We are planning to use Nexus Repository or JFrog Artifactory with AWS storage, and we have one requirement that I have tried to solve from the Nexus documentation. Artifactory has JFrog Cold Artifact Storage, which lets operations teams archive outdated data from artifact repositories to lower-cost storage, reducing accumulated artifact clutter and maintaining performance for active data. The archived data remains accessible through a secondary ("Cold") instance of Artifactory.

Is the same feature possible to achieve with Nexus Repository somehow?

Hello @kari.heinonen ! Yes, great question. Nexus Repository Pro has a few options to make this possible.

Staging to an Archival Repository
If you want to use cheaper storage but you don’t want the overhead of running a second instance, you can use the Staging feature to move the components to an archival hosted repository which lives in the same instance but is associated with a blob store on cheaper storage. This approach works well if you are either manually identifying what should be archived, or if you’re using an external system to drive that.

For example, you might have an inventory of what apps are deployed to production; that could initiate the staging operation to move a binary from your fast repository to your archival repository when a version hasn’t been deployed to production in a while.
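As a rough sketch of that idea: the script below selects versions whose last production deployment is older than a cutoff, then prints the staging move it would make. The inventory, repository names, and the exact move URL and parameters are illustrative assumptions; Nexus Repository Pro does expose a staging move REST endpoint, but check the REST API docs for your version before relying on the shape shown here.

```python
from datetime import datetime, timedelta

def select_archivable(last_deployed, cutoff_days, now=None):
    """Return versions whose last production deployment is older than cutoff_days.

    last_deployed: dict mapping version string -> datetime of last prod deploy
    (a hypothetical inventory; yours might come from a CMDB or deploy tool).
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=cutoff_days)
    return sorted(v for v, deployed in last_deployed.items() if deployed < cutoff)

if __name__ == "__main__":
    # Hypothetical inventory of versions and their last production deployments.
    inventory = {
        "1.0.0": datetime(2022, 1, 15),
        "1.1.0": datetime(2023, 6, 1),
        "2.0.0": datetime(2024, 5, 20),
    }
    stale = select_archivable(inventory, cutoff_days=365, now=datetime(2024, 6, 1))
    for version in stale:
        # Illustrative staging move call; repository names are assumptions.
        print(f"POST /service/rest/v1/staging/move/archive-raw-hosted"
              f"?repository=raw-hosted&version={version}")
```

The selection logic is deliberately separate from the HTTP call, so you can unit-test the "what should move" decision independently of your Nexus instance.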

Content Replication + Cleanup
If you do want a second instance for whatever reason, and/or your archival policy is rule-based, then you could use the Content Replication feature to stream all of your binaries to your archival server, and then set up an aggressive Cleanup Policy on the primary.

You can also use this between repositories on the same Nexus Repository instance if you want policy-driven archiving.

Export Unused + Cleanup
A third option is to use our upcoming modification to the Export task which lets you export unused assets. (You can either re-import these to a second Nexus Repository instance, or possibly just leave them on disk if you believe the chance of needing quick access to them is low.) That’s expected out around the end of the month.

To keep the primary clean you have a couple of options. One, you can use a Cleanup Policy (with a less aggressive window than your Export, so you don’t lose anything - e.g. export assets unused for 365 days but only clean up those unused for 400 days). Alternatively, if you want the removals to be a little more watertight and you don’t mind a little scripting, you can use the contents of the export to drive deletions.
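A minimal sketch of that export-driven deletion, assuming the export produces a plain-text manifest with one asset path per line (the actual manifest format may differ - adapt the parsing to whatever the Export task emits). The search and delete endpoints named in the comments exist in the Nexus REST API, but the parameters shown are illustrative.

```python
def paths_to_delete(manifest_text):
    """Parse a plain-text export manifest (one asset path per line; format
    assumed) into a deduplicated, sorted list of paths to remove from the
    primary instance."""
    paths = {line.strip() for line in manifest_text.splitlines() if line.strip()}
    return sorted(paths)

if __name__ == "__main__":
    # Hypothetical manifest contents; real paths depend on your repository layout.
    manifest = """
    /builds/app/1.0.0/app-1.0.0.tar.gz
    /builds/app/1.1.0/app-1.1.0.tar.gz
    """
    for path in paths_to_delete(manifest):
        # Resolve each path to an asset id with GET /service/rest/v1/search/assets,
        # then remove it with DELETE /service/rest/v1/assets/{id}.
        print(path)
```

Driving deletions from the export listing guarantees you only ever delete what has actually been exported, which is what makes this tighter than relying on two cleanup windows lining up.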

Hopefully that gives you a few options to consider.
