Comments (6)
Could you share the output of
aws s3 ls s3://s3-bucket-from-config/s3-path-from-config/
?
??? 20/03/2024 10:36:21 remote broken (can't stat metadata.json)
means you have a key with the same name as s3->path from the config.
You need to delete it manually, something like:
aws s3 rm s3://s3-bucket-from-config/s3-path-from-config
and check clickhouse-backup list remote after that.
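A minimal sketch of what this error likely means, assuming (as is common with S3 GUI tools) that the "folder" is stored as a zero-byte object whose key equals the configured path. The names here are illustrative, not clickhouse-backup internals: the placeholder key shows up as an extra, empty-named entry under the prefix, and its metadata.json can then never be found.

```python
# Simulate an S3 bucket as a plain key->value mapping.
# "clickhouse/" is the zero-byte placeholder a GUI creates for a "folder".
bucket = {
    "clickhouse/": b"",  # the problematic placeholder object
    "clickhouse/backup1/metadata.json": b'{"name": "backup1"}',
}

def backups_under(prefix):
    """Return the first path component of every key under `prefix`."""
    names = set()
    for key in bucket:
        if key.startswith(prefix):
            rest = key[len(prefix):]
            # For the placeholder, rest == "" -> an entry with an empty name.
            names.add(rest.split("/", 1)[0])
    return names

print(sorted(backups_under("clickhouse/")))
# The empty-named entry has no "<prefix>//metadata.json" object behind it,
# which would surface as: remote broken (can't stat metadata.json)
```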
from clickhouse-backup.
[root@dba01 tmp]# cat /etc/clickhouse-backup/config.yml
general:
remote_storage: s3
max_file_size: 1073741824
backups_to_keep_local: 2 # keep the 2 most recent backups locally.
backups_to_keep_remote: 2 # s3 is responsible for cleanup.
log_level: info
download_concurrency: 1
upload_concurrency: 1
clickhouse:
username: default
password: SecurePassHere
host: localhost
port: 9000
disk_mapping: {}
skip_tables:
- system.*
- INFORMATION_SCHEMA.*
- information_schema.*
s3:
access_key: SecureKeyHere
secret_key: SecureSecretHere
bucket: backups
region: eu-central-1
path: "/clickhouse/"
This is how the config looks.
[root@dba01 tmp]# /usr/local/bin/aws s3 ls s3://backups/clickhouse/
2024-03-20 13:36:21 0
The command just dropped the root folder for ClickHouse backups:
[root@dba01 tmp]# /usr/local/bin/aws s3 rm s3://backups/clickhouse/
delete: s3://backups/clickhouse/
The backup list afterwards looks OK:
[root@dba01 tmp]# clickhouse-backup list remote
2024/05/15 17:27:10.414012 info clickhouse connection prepared: tcp://localhost:9000 run ping logger=clickhouse
2024/05/15 17:27:10.417324 info clickhouse connection success: tcp://localhost:9000 logger=clickhouse
2024/05/15 17:27:10.417393 info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros' SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/05/15 17:27:10.424550 info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/05/15 17:27:10.426835 info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros' SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/05/15 17:27:10.433860 info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/05/15 17:27:10.601145 info clickhouse connection closed logger=clickhouse
But the "clickhouse" directory was created on the AWS S3 side again, and it is broken again:
[root@dba01 tmp]# clickhouse-backup list remote
2024/05/15 17:29:15.757570 info clickhouse connection prepared: tcp://localhost:9000 run ping logger=clickhouse
2024/05/15 17:29:15.760031 info clickhouse connection success: tcp://localhost:9000 logger=clickhouse
2024/05/15 17:29:15.760056 info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros' SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/05/15 17:29:15.765824 info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/05/15 17:29:15.767730 info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros' SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/05/15 17:29:15.782615 info SELECT macro, substitution FROM system.macros logger=clickhouse
??? 15/05/2024 14:29:13 remote broken (can't stat metadata.json)
2024/05/15 17:29:15.965758 info clickhouse connection closed logger=clickhouse
clickhouse-backup list remote
can't create anything on S3; it is a read-only operation.
Are you sure you just ran clickhouse-backup list remote twice and didn't do anything else?
I did create the "clickhouse" root directory on the AWS S3 side manually, myself.
After the directory is back, the broken state is back as well.
Why did you create it, and how?
S3 doesn't contain "directories";
it is a KEY->VALUE storage which contains only prefixes with separators ("/").
Try to remove it again: aws s3 rm s3://backups/clickhouse/
and change the backup config to
s3:
  path: clickhouse
instead of path: "/clickhouse/".
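One way to see why path: clickhouse behaves better than path: "/clickhouse/" (a sketch; the exact key-joining logic inside clickhouse-backup may differ): a naive join of path, backup name, and file turns the extra slashes into a leading slash and an empty key segment, and S3 treats the resulting string as a completely different key.

```python
def naive_key(path, *parts):
    """Join key components with '/' without normalizing slashes --
    this illustrates the problem, it is not clickhouse-backup's code."""
    return "/".join([path] + list(parts))

print(naive_key("/clickhouse/", "backup1", "metadata.json"))
# '/clickhouse//backup1/metadata.json' -- leading slash plus empty segment
print(naive_key("clickhouse", "backup1", "metadata.json"))
# 'clickhouse/backup1/metadata.json' -- clean key
```

Since S3 keys are literal strings rather than filesystem paths, "/clickhouse//backup1/metadata.json" and "clickhouse/backup1/metadata.json" are two unrelated keys.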
OK, I've got it. It seems I misunderstood the concept of the "path" parameter from the documentation.
I thought it was a path to the directory where the backups are located.
I created that "clickhouse" directory using the S3 Browser tool; I believe it could be done on the AWS side using the GUI as well.
I've made the changes in the config as you suggested and created a test backup.
Everything looks good right now. Thank you for the fast response and support.
[root@dba01 clickhouse-backup]# clickhouse-backup list remote
2024/05/15 18:18:31.355430 info clickhouse connection prepared: tcp://localhost:9000 run ping logger=clickhouse
2024/05/15 18:18:31.357843 info clickhouse connection success: tcp://localhost:9000 logger=clickhouse
2024/05/15 18:18:31.357882 info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros' SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/05/15 18:18:31.364131 info SELECT macro, substitution FROM system.macros logger=clickhouse
2024/05/15 18:18:31.365791 info SELECT count() AS is_macros_exists FROM system.tables WHERE database='system' AND name='macros' SETTINGS empty_result_for_aggregation_by_empty_set=0 logger=clickhouse
2024/05/15 18:18:31.369204 info SELECT macro, substitution FROM system.macros logger=clickhouse
test_backup 24.45KiB 15/05/2024 15:17:02 remote tar, regular
2024/05/15 18:18:31.496676 info clickhouse connection closed logger=clickhouse
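For reference, the working s3 section after the change (all values except path copied from the config shown earlier in the thread):

```yaml
s3:
  access_key: SecureKeyHere
  secret_key: SecureSecretHere
  bucket: backups
  region: eu-central-1
  path: clickhouse   # no leading or trailing slashes
```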