I would strongly recommend against exporting secret keyrings, especially in a script for transmission across servers. It goes against the whole principle of secret key storage.
The ultimate goal is to touch keys as little as possible, and instead alias them to subsets of credentials or sub-keys that are revocable, ephemeral, and time/session-sensitive.
I recommend these best practices when generating keys:
https://gist.github.com/forktheweb/75346d3259989e0c6ef5
For instance, Amazon KMS works that way: you can use aws s3 sync (part of the AWS CLI) to move data in and out of a cloud VPC over SSL, with hardware-backed crypto handling the keys in various ways.
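For example (the bucket and KMS key id below are placeholders, not from my setup):

```bash
# Sync over TLS and have S3 server-side encrypt every object with a specific KMS key.
aws s3 sync /data s3://my-bucket/backup \
    --sse aws:kms \
    --sse-kms-key-id arn:aws:kms:us-east-1:111122223333:key/example-key-id
```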
All auth providers use some form of this strategy, be it IAM, SAML, OAuth, or what have you.
They all issue some sort of aliased token that is typically rotated out after a certain period and has to be re-verified, but you are never passing around the actual master key, storing it, or decrypting it from the hypervisor or memory unless absolutely necessary, e.g. on root login.
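A rough sketch of that pattern with the AWS CLI (the profile name and session length here are made up):

```bash
# Mint short-lived credentials (AccessKeyId / SecretAccessKey / SessionToken) that
# expire on their own, so the long-term key never has to leave the box.
# "ops" is a hypothetical profile; 3600 seconds is an arbitrary session length.
aws sts get-session-token --duration-seconds 3600 --profile ops
```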
So a normal method for accessing credentials is a root-stored credentials file, like:
$ mkdir -p /root/.aws/ && printf "[profile info]" | tee /root/.aws/credentials
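You would typically lock that down to root before putting anything real in it; a minimal sketch (paths are the defaults used here; note that directories need the execute bit, so they get 700 rather than 600):

```bash
# Make the credential and keyring directories readable by root only.
sudo chown -R root:root /root/.aws /root/.gnupg
sudo chmod 700 /root/.aws /root/.gnupg
sudo chmod 600 /root/.aws/credentials
```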
In most cases you have your root keyring for all GnuPG and AWS / kube / node credentials stored in a root-read-only directory, rw- --- --- (600). On Ubuntu the root account is never logged into directly, so it's very difficult to see what's in that directory even as an admin user, the keyring is less likely to be messed with, and it's less likely to accidentally leak information when exporting. The best-case scenario for dealing with a cluster of servers is to copy the entire keyring set without ever exporting the secret key (except the first time you set it up), piping the key directly to a key storage facility such as etcd or Amazon KMS.
That's very simple to do:
Here's my guide on using KMS & etcd for a more secure DevOps setup for master keyrings with GnuPG:
https://gist.github.com/forktheweb/ee1d90d7a930bdf8e9732ef9101ae6a1
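Just for illustration (this is a sketch, not taken from the guide: the key id and etcd key path are placeholders, and it assumes etcdctl v3, which reads the value from stdin when no value argument is given):

```bash
# One-time setup: stream the armored secret key straight into etcd without ever
# writing it to local disk. "ABCD1234" and "/secrets/gpg/master" are hypothetical.
gpg --armor --export-secret-keys ABCD1234 | etcdctl put /secrets/gpg/master
```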
My recommendation is to use an ephemeral Docker container with something like etcd or aws s3 sync; alternatively you could use Bup or Duplicity, which are both good at "hot backups". You could also export the container as a tar archive when you need to store its state.
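For the tar export, a rough sketch (the container name and file name are made up):

```bash
# Snapshot the ephemeral container's filesystem to a tarball...
docker export aws-keys -o aws-keys-state.tar
# ...and re-import it later as an image if that state is ever needed again.
docker import aws-keys-state.tar aws-keys:restored
```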
OTHER EXAMPLES:
**~ push your entire root directory to s3**
$ sudo sh /home/ubuntu/.scripts/dockers/aws.sh "aws s3 sync /root/ s3://stackfork.com --sse aws:kms"
**~custom s3 duplicity backups using GPG/ SSL**
#### https://easyengine.io/tutorials/backups/duplicity-amazon-s3/
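Roughly along the lines of that tutorial (the bucket path and GPG key id are placeholders, and the exact S3 URL scheme depends on your Duplicity version and backend):

```bash
# Encrypted "hot backup" of /root to S3 using an existing GPG key.
# AWS credentials for the bucket come from the environment or /root/.aws/credentials.
duplicity --encrypt-key ABCD1234 /root/ s3+http://my-backup-bucket/root-keys
```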
**now you set the value for next time you need to call this up via ptero**
$ ptero set s3-gpg.up "sudo sh /home/ubuntu/.scripts/dockers/aws.sh 'aws s3 sync /root/ s3://stackfork.com --sse aws:kms' "
### NOW TEST:
$ ptero get s3-gpg.up
SIMPLE SYNC VIA DOCKER COMMAND
$ sudo sh /home/ubuntu/.scripts/dockers/aws.sh "aws s3 sync /root/ s3://stackfork.com --sse aws:kms"
### FINALIZE:
# print the command
$ ptero get s3-gpg.up
# pipe the command to a shell to execute it
$ ptero get s3-gpg.up | sh
```bash
# should echo:
Status: Image is up to date for xueshanf/awscli:latest
ubuntu@ip-172-31-22-197:~$
```