path: root/README.md
author     David Timber <dxdt@dev.snart.me>    2022-05-16 15:53:36 +0800
committer  David Timber <dxdt@dev.snart.me>    2022-05-16 15:53:36 +0800
commit     990a7a560c98dcbaa9c9e8deb0968819b646a664 (patch)
tree       b56f57e853b41ba19db7a6b7099ba6c8e6cfa829 /README.md
parent     e80babb6e02c647101766c802a0378d12149fda7 (diff)
Changes ...
- Deprecate palhm-dnssec-check.sh
- Merge check-dnssec and boot-report config into the sample config
- Add crontab sample
- Reduce Python requirement to 3.5
  - Remove use of capture_output
- boot-report: remove systemd-analyze as the command is not available during boot time
- Change config schema
  - "object-groups" and "objects" are now optional
  - Change "boot-report" include behaviour
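The capture_output item goes with the lowered Python requirement: `subprocess.run(capture_output=True)` only exists from Python 3.7 onward, so code targeting 3.5 has to pass the pipes explicitly. A minimal sketch of the equivalent call (the command shown is a placeholder, not taken from PALHM):

```python
import subprocess

# On Python >= 3.7: subprocess.run(cmd, capture_output=True)
# On Python 3.5 the same effect is achieved by passing the pipes explicitly.
p = subprocess.run(
    ["zstd", "--version"],  # placeholder command for illustration
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE)
print(p.returncode, p.stdout.decode())
```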
Diffstat (limited to 'README.md')
-rw-r--r--  README.md  10
1 files changed, 10 insertions, 0 deletions
diff --git a/README.md b/README.md
index 98e9361..1d87724 100644
--- a/README.md
+++ b/README.md
@@ -242,5 +242,15 @@ Also, you can always do a dry run of your backup task by setting the backend to
## TODO
* JSON schema validation
+### AWS S3 Replication Daemon
+To prepare for the very unlikely event of a
+[disaster](https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html)
+affecting an entire AWS region, you may wish to implement cross-region
+replication of S3 objects. The replication that S3 provides does not work on
+very large objects, so replication of large objects across AWS regions has to
+be done manually by a client - another implementation is required.
+
+Cross-region data transfer is costly, so this idea came to a halt.
+
## Footnotes
[^1]: Even with SSDs, disrupting sequential reads decreases overall performance
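
As a rough illustration of the manual cross-region replication mentioned in the added TODO section, a client could use a multipart copy so that objects above the 5 GB single CopyObject limit can still be transferred. A minimal sketch assuming boto3 (regions, bucket names, and the key are placeholders; none of this is part of PALHM):

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Placeholder regions and names; adjust to the actual deployment.
src = boto3.client("s3", region_name="us-east-1")
dst = boto3.client("s3", region_name="ap-northeast-1")

# Multipart copy lets objects larger than the 5 GB CopyObject limit through.
cfg = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024)

dst.copy(
    CopySource={"Bucket": "src-bucket", "Key": "backup/example.tar.zstd"},
    Bucket="dst-bucket",
    Key="backup/example.tar.zstd",
    SourceClient=src,
    Config=cfg)
```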