Hi everyone, I have Nexus Repository OSS installed in a Kubernetes cluster, set up to store artifacts in S3. I need to set up a new Nexus instance and, after a couple of days, tear down the current one. My question is: will having two Nexus instances pointing at the same S3 bucket mess anything up (corrupt indexes, etc.)?

Yes, it can. Each Nexus instance may delete blobs that are not referenced in its own database.

If you were starting fresh when creating the S3 blob stores, you could use the prefix option with a distinct value for each instance; however, the prefix cannot be changed once the blob store has been created.
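For anyone reading this who is starting fresh, the per-instance prefix can be set at blob store creation time, for example via the Nexus 3 REST API. A sketch only: the host, credentials, bucket name, and region below are placeholders.

```shell
# Sketch: create an S3 blob store with a per-instance prefix via the
# Nexus 3 REST API. Host, credentials, bucket, and region are placeholders.
curl -u admin:admin123 -X POST \
  "http://nexus.example.com/service/rest/v1/blobstores/s3" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "s3-instance-a",
        "bucketConfiguration": {
          "bucket": {
            "region": "us-east-1",
            "name": "my-nexus-bucket",
            "prefix": "instance-a",
            "expiration": 3
          }
        }
      }'
```

A second instance would use the same bucket with a different `prefix` (e.g. `instance-b`), keeping the two blob stores from stepping on each other.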

Thanks for the info.

I guess I have two options:

  • ensure the current Nexus is not running before standing up the new one
  • make the new Nexus use a new S3 bucket, copying the data over from the current bucket (perhaps via S3 replication)
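For the second option, a one-shot copy of the bucket contents can be done with the AWS CLI; the bucket names below are placeholders, and this assumes the CLI is configured with credentials that can read the source and write the destination.

```shell
# Sketch: one-time copy of all blobs from the current bucket to the new one.
# Bucket names are placeholders; `sync` only copies objects that are missing
# or changed, so it can be re-run before cutover to pick up stragglers.
aws s3 sync s3://nexus-blobs-old s3://nexus-blobs-new
```

S3 bucket replication can keep the two in sync on an ongoing basis afterwards, but note it only applies to objects written after replication is enabled.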

If replication will work in your case, I think that would be ideal, since the new instance would continue to receive any new hosted artifacts while the old one was still running. If you do end up going that route, would you be interested in giving us feedback on your experience?

Exactly, that’s what I did, and it looks like all the data copied over (it took 2.5 hours for 750 GB). I will be doing the switchover sometime next week and will post an update.

I just realized you must be talking about an AWS S3 replication feature; I was thinking of our new Pro replication feature. Those are very different things, and the AWS one probably won’t do what you want: the new instance won’t know what the blobs are for. I think you can run the restore-from-blobstore task, so long as your repositories are named the same, and you might get your content back.

I think there might be a terminology clash. @mmartz was referring to a Repository Pro feature called Repository Replication that’s meant for proactive mirroring of hosted repositories. That’s different from S3 bucket replication.

S3 bucket replication will copy all the binaries across, but unless you have a corresponding metadata database, your new instance of Nexus Repo won’t have indexed those binaries and they’ll be orphaned. As @mmartz just said in his reply, you can use a scheduled task to repair the inconsistency and index those orphaned binaries.