Comments (9)
Do you have an object disk like S3 or Azure? Could you share the output of:
SELECT * FROM system.disks
SELECT * FROM system.storage_policies
According to the non-empty
s3:
  object_disk_path: tiered-backup
in your config, it looks like yes.
So this is not a "pause": this is actually server-side CopyObject execution, which allows you to restore your data after DROP TABLE ... SYNC / DROP DATABASE ... SYNC.
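For reference, you can see which S3 objects back an object disk (and therefore what server-side CopyObject will copy during backup) via system.remote_data_paths; a sketch, assuming your ClickHouse version has that table (the column set varies by version):
-- map local metadata files on the object disk to the S3 objects they reference
SELECT disk_name, local_path, remote_path
FROM system.remote_data_paths
LIMIT 5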
Try to change /etc/clickhouse-backup/config.yml:
general:
  log_level: debug
and share the logs.
@Slach Thanks for your assistance. Here is the requested output:
SELECT * FROM system.disks
Row 1:
──────
name: default
path: /var/lib/clickhouse/
free_space: 33461338112
total_space: 52521566208
unreserved_space: 33461338112
keep_free_space: 0
type: local
is_encrypted: 0
is_read_only: 0
is_write_once: 0
is_remote: 0
is_broken: 0
cache_path:
Row 2:
──────
name: s3_tier_cold
path: /var/lib/clickhouse/disks/s3_tier_cold/
free_space: 18446744073709551615
total_space: 18446744073709551615
unreserved_space: 18446744073709551615
keep_free_space: 0
type: s3
is_encrypted: 0
is_read_only: 0
is_write_once: 0
is_remote: 1
is_broken: 0
cache_path:
2 rows in set. Elapsed: 0.002 sec.
SELECT * FROM system.storage_policies
Row 1:
──────
policy_name: default
volume_name: default
volume_priority: 1
disks: ['default']
volume_type: JBOD
max_data_part_size: 0
move_factor: 0
prefer_not_to_merge: 0
perform_ttl_move_on_insert: 1
load_balancing: ROUND_ROBIN
Row 2:
──────
policy_name: move_from_local_disks_to_s3
volume_name: cold
volume_priority: 1
disks: ['s3_tier_cold']
volume_type: JBOD
max_data_part_size: 0
move_factor: 0.1
prefer_not_to_merge: 0
perform_ttl_move_on_insert: 1
load_balancing: ROUND_ROBIN
Row 3:
──────
policy_name: move_from_local_disks_to_s3
volume_name: hot
volume_priority: 2
disks: ['default']
volume_type: JBOD
max_data_part_size: 0
move_factor: 0.1
prefer_not_to_merge: 0
perform_ttl_move_on_insert: 1
load_balancing: ROUND_ROBIN
3 rows in set. Elapsed: 0.003 sec.
- The tiered-backup path in AWS S3 is not empty and contains objects.
During backup creation, for all tables with SETTINGS storage_policy='move_from_local_disks_to_s3', clickhouse-backup will execute s3:CopyObject into the tiered-backup path in your backup bucket.
We will improve the speed of CopyObject execution for object disk data in incremental backups in v2.5.
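For example, a table pinned to this policy (hypothetical schema and TTL, just to illustrate which tables are affected) would look like:
-- hypothetical example: parts older than 7 days move to the S3-backed 'cold' volume,
-- so those parts are backed up via server-side CopyObject
CREATE TABLE otel.spans_example
(
    ts DateTime,
    payload String
)
ENGINE = MergeTree
ORDER BY ts
TTL ts + INTERVAL 7 DAY TO VOLUME 'cold'
SETTINGS storage_policy = 'move_from_local_disks_to_s3'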
Check:
SELECT
    count() AS parts,
    database,
    uniqExact(table) AS tables,
    active,
    disk_name,
    formatReadableSize(sum(bytes_on_disk))
FROM system.parts
GROUP BY database, active, disk_name
FORMAT Vertical
@Slach Thanks for the answer. Could you confirm the ETA for v2.5?
check
SELECT count() AS parts, database, uniqExact(table) AS tables, active, disk_name, formatReadableSize(sum(bytes_on_disk)) FROM system.parts GROUP BY database, active, disk_name FORMAT Vertical
@Slach For your information:
Row 1:
──────
parts: 38
database: otel
tables: 4
active: 1
disk_name: default
formatReadableSize(sum(bytes_on_disk)): 1.86 GiB
Row 2:
──────
parts: 10462
database: otel
tables: 3
active: 0
disk_name: default
formatReadableSize(sum(bytes_on_disk)): 2.98 GiB
Row 3:
──────
parts: 439
database: system
tables: 5
active: 0
disk_name: default
formatReadableSize(sum(bytes_on_disk)): 128.27 MiB
Row 4:
──────
parts: 218
database: otel
tables: 4
active: 1
disk_name: s3_tier_cold
formatReadableSize(sum(bytes_on_disk)): 37.44 GiB
Row 5:
──────
parts: 234
database: system
tables: 7
active: 1
disk_name: default
formatReadableSize(sum(bytes_on_disk)): 7.48 GiB
5 rows in set. Elapsed: 0.023 sec. Processed 11.39 thousand rows, 721.45 KB (488.22 thousand rows/s., 30.92 MB/s.)
Peak memory usage: 15.72 KiB.
--> But this is our dev environment data size. For reference, our production size:
Row 1:
──────
parts: 204
database: otel
tables: 5
active: 1
disk_name: default
formatReadableSize(sum(bytes_on_disk)): 334.00 GiB
Row 2:
──────
parts: 11862
database: otel
tables: 3
active: 0
disk_name: default
formatReadableSize(sum(bytes_on_disk)): 3.19 GiB
Row 3:
──────
parts: 571
database: system
tables: 7
active: 0
disk_name: default
formatReadableSize(sum(bytes_on_disk)): 275.28 MiB
Row 4:
──────
parts: 220
database: otel
tables: 3
active: 1
disk_name: s3_tier_cold
formatReadableSize(sum(bytes_on_disk)): 444.90 GiB
Row 5:
──────
parts: 343
database: system
tables: 7
active: 1
disk_name: default
formatReadableSize(sum(bytes_on_disk)): 11.09 GiB
5 rows in set. Elapsed: 0.023 sec. Processed 13.20 thousand rows, 822.72 KB (565.34 thousand rows/s., 35.24 MB/s.)
Peak memory usage: 20.98 KiB.
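A quick way to split this into what will go through server-side CopyObject (object disk) versus regular upload (local disks) is to join system.parts with system.disks; a sketch:
-- active bytes grouped by whether the underlying disk is remote (object storage)
SELECT
    d.is_remote,
    formatReadableSize(sum(p.bytes_on_disk)) AS size
FROM system.parts AS p
INNER JOIN system.disks AS d ON p.disk_name = d.name
WHERE p.active
GROUP BY d.is_remote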
@Slach Thanks for the answer. Could you confirm the ETA for v2.5?
Subscribe to #843 and watch progress.
@Slach Can v2.5 automatically re-execute the watch CLI after watch has stopped because of errors?
Can v2.5 automatically re-execute the watch CLI after watch has stopped because of errors?
It resolves the issue with reconnecting to clickhouse-server.
But if backups keep failing for longer than the full watch period (so that no full backup can be stored within it), then the watch command sequence will stop, because you need to figure out your configuration before continuing to watch.
Maybe we should change this behavior, but in that case please create a new issue.
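In the meantime, if you run the clickhouse-backup API server with api.create_integration_tables enabled, you can watch for failed commands from ClickHouse itself; a sketch, with table and column names per the clickhouse-backup README (verify against your version):
-- list failed clickhouse-backup commands
SELECT command, start, finish, error
FROM system.backup_actions
WHERE status = 'error'
-- commands can also be re-triggered from SQL, e.g.:
-- INSERT INTO system.backup_actions (command) VALUES ('create_remote <backup_name>')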
Related Issues (20)
- replicated RBAC backup doesn't work if /var/lib/clickhouse/access doesn't present
- add --delete-source to `watch` command
- unable to restore from backup
- add skip_disks option
- clickhouse is fail to start
- skip ValidateObjectDiskConfig for --diff-from-remote when object disk doesn't contains data
- EKS Irsa doesnt work
- Create_remote results in `error: data in objects disks` (Azure Blob)
- Can i restore backups from one cluster to other?
- restore stop works, if RBAC objects present in backup but user which used for connect to clickhouse don't have access_management
- wrong skip tables by engine when empty variables value "CLICKHOUSE_SKIP_TABLE_ENGINES=engine," instead of "CLICKHOUSE_SKIP_TABLE_ENGINES=engine"
- implements `--partitions=db.table:part_name1,part_name2` and `--partitions=db2.table2:*` to allow more flexible backup logic
- create system.backup_version, add version to log, add GET /backup/version endpoint
- Implements `X/Y tables` logging for `done` logging
- How to backup replica clickhouse cluster correctly ?
- `acccess` and `configs` download should use archieve extensions the same manner as main data, for example allow download zstd instead of .tar
- API server should restart if `watch` command fails
- Unable to clean remote broken (can't stat metadata.json)
- Multipart Upload to Minio Fails in Newest Minio Release
- Permission denied while doing restore_remote if backup is also present locally