milvus-backup's People

Contributors

axiangcoding, bennu-li, erigo, huanghaoyuanhhy, lentitude2tk, nanjangpan, punkerpunker, roca, shanghaikid, syang1997, thomas-huwei, wayblink, wuyifan0108, xushaoxiao, yelusion2, yinheli, zhuwenxing


milvus-backup's Issues

[Bug]: The number of entities in the restored collection differs from the original collection

Current Behavior

[2022-12-26 07:56:49 - INFO - ci_test]: create backup response: {'requestId': 'd9ab9d69-84f2-11ed-9533-6045bd854453', 'msg': 'success', 'data': {'id': 'd9abd1f5-84f2-11ed-9533-6045bd854453', 'state_code': 2, 'name': 'backup_knsGqu3g', 'backup_timestamp': 1672041408722, 'collection_backups': [{'collection_name': 'restore_backup_e07dlzqP', 'backup_timestamp': 438315622858752}, {'collection_name': 'restore_backup_6U2mJl6c', 'backup_timestamp': 438315622858752}, {'collection_name': 'restore_backup_ioWHjfUI', 'backup_timestamp': 438315623120896}]}} (test_restore_backup.py:50)
[2022-12-26 07:56:58 - INFO - ci_test]: restore_backup: {'requestId': 'da1a0a1a-84f2-11ed-9533-6045bd854453', 'msg': 'success', 'data': {'id': 'da1a0c85-84f2-11ed-9533-6045bd854453', 'state_code': 2, 'start_time': 1672041409, 'end_time': 1672041418, 'collection_restore_tasks': [{'state_code': 2, 'start_time': 1672041409, 'target_collection_name': 'restore_backup_e07dlzqP_bak', 'progress': 100}, {'state_code': 2, 'start_time': 1672041409, 'target_collection_name': 'restore_backup_6U2mJl6c_bak', 'progress': 100}, {'state_code': 2, 'start_time': 1672041409, 'target_collection_name': 'restore_backup_ioWHjfUI_bak', 'progress': 100}], 'progress': 100}} (test_restore_backup.py:65)
[2022-12-26 07:56:58 - INFO - ci_test]: restore ['restore_backup_e07dlzqP', 'restore_backup_6U2mJl6c', 'restore_backup_ioWHjfUI'] cost time: 9.230499029159546 (test_restore_backup.py:70)

FAILED testcases/test_restore_backup.py::TestRestoreBackup::test_milvus_restore_back[float-3-False-3000] - AssertionError: collection_src num_entities: 3000 != collection_dist num_entities: 1503

Restore progress reports 100%, but the restored collection contains only about half the entities of the original collection.

Expected Behavior

The restored collection should contain the same number of entities as the original collection.

Steps To Reproduce

No response

Environment

The binary is built based on https://github.com/zhuwenxing/milvus-backup/tree/debug

failed job: https://github.com/zhuwenxing/milvus-backup/actions/runs/3779925868/jobs/6425577880
log: https://github.com/zhuwenxing/milvus-backup/suites/10055427045/artifacts/489428976

Anything else?

No response

[DOCS]: Better README

Documentation Link

No response

Describe the problem

No response

Describe the improvement

No response

Anything else?

No response

[Bug]: Restored collection much larger than backup, restores data multiple times

Current Behavior

When I restore a large collection (400GB), which first requires increasing all relevant timeouts, I find that the restore keeps on going and the collection being restored grows larger than the original.

Looking at the S3 access logs, I find that it starts to retrieve the same objects again after some five hours running time.

The issue appears to be in listing keys by a groupId-based prefix: the prefix for groupId 1 also matches every groupId that starts with the digit 1, so 10, 11, etc. Once milvus-backup reaches groupId 10, it starts to 'restore' objects it has already restored from.
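The prefix pitfall described above can be sketched in a few lines of Go. The key layout below is hypothetical, and `listKeysByPrefix` only simulates what an S3/MinIO ListObjects call does with its prefix; the point is that terminating the prefix with the path separator scopes it to a single groupId:

```go
package main

import (
	"fmt"
	"strings"
)

// listKeysByPrefix simulates listing object keys under a prefix, the way
// an S3/MinIO ListObjects call does; the key layout here is hypothetical.
func listKeysByPrefix(keys []string, prefix string) []string {
	var out []string
	for _, k := range keys {
		if strings.HasPrefix(k, prefix) {
			out = append(out, k)
		}
	}
	return out
}

func main() {
	keys := []string{
		"backup/binlogs/1/segment.log",
		"backup/binlogs/10/segment.log",
		"backup/binlogs/11/segment.log",
	}
	// Buggy: the prefix for groupId 1 also matches groupIds 10 and 11.
	fmt.Println(listKeysByPrefix(keys, "backup/binlogs/1"))
	// Fixed: terminating the prefix with the path separator scopes the
	// listing to exactly groupId 1.
	fmt.Println(listKeysByPrefix(keys, "backup/binlogs/1/"))
}
```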

Expected Behavior

Don't restore data more than once.

Steps To Reproduce

Restore a collection that has more than 9 segment groups.

Environment

Milvus 2.2.4, milvus-backup 0.2.1.

Anything else?

No response

`get` API does not work as expected

API

decorest.errors.HTTPErrorWrapper: 404 Client Error: Not Found for url: http://localhost:8080/api/v1/get?backup_name=test_api

CLI

❯ ./milvus-backup get test_api
config:backup.yaml
[2022/11/23 19:11:44.929 +08:00] [INFO] [logutil/logutil.go:165] ["Log directory"] [configDir=]
[2022/11/23 19:11:44.929 +08:00] [INFO] [logutil/logutil.go:166] ["Set log file to "] [path=logs/backup.log]
[2022/11/23 19:11:44.930 +08:00] [DEBUG] [core/backup_context.go:60] ["Start Milvus client"] [endpoint=localhost:19530]
[2022/11/23 19:11:44.939 +08:00] [DEBUG] [core/backup_context.go:84] ["Start minio client"] [address=localhost:9000] [bucket=a-bucket] [backupBucket=a-bucket]
[2022/11/23 19:11:44.951 +08:00] [INFO] [storage/minio_chunk_manager.go:114] ["minio chunk manager init success."] [bucketname=a-bucket] [root=files]
<nil>

server log

[GIN] 2022/11/23 - 19:04:24 | 200 |    1.378942ms |             ::1 | POST     "/api/v1/create"
[2022/11/23 19:04:24.090 +08:00] [INFO] [core/backup_context.go:606] ["List Backups' path"] [backup_paths="[backup/test_api/]"]
[2022/11/23 19:04:24.090 +08:00] [DEBUG] [core/backup_context.go:607] ["List Backups' path"] [backup_paths="[backup/test_api/]"]
[GIN] 2022/11/23 - 19:04:24 | 200 |    9.216577ms |             ::1 | GET      "/api/v1/list"
[GIN] 2022/11/23 - 19:04:43 | 200 |      18.867µs |             ::1 | GET      "/api/v1/hello"
[2022/11/23 19:04:43.867 +08:00] [INFO] [core/backup_context.go:606] ["List Backups' path"] [backup_paths="[backup/test_api/]"]
[2022/11/23 19:04:43.867 +08:00] [DEBUG] [core/backup_context.go:607] ["List Backups' path"] [backup_paths="[backup/test_api/]"]
[GIN] 2022/11/23 - 19:04:43 | 200 |    6.562917ms |             ::1 | GET      "/api/v1/list"
[GIN] 2022/11/23 - 19:08:39 | 200 |      79.738µs |             ::1 | GET      "/api/v1/hello"
[2022/11/23 19:08:39.786 +08:00] [INFO] [core/backup_context.go:606] ["List Backups' path"] [backup_paths="[backup/test_api/]"]
[2022/11/23 19:08:39.787 +08:00] [DEBUG] [core/backup_context.go:607] ["List Backups' path"] [backup_paths="[backup/test_api/]"]
[GIN] 2022/11/23 - 19:08:39 | 200 |    7.999047ms |             ::1 | GET      "/api/v1/list"
[GIN] 2022/11/23 - 19:08:39 | 404 |         594ns |             ::1 | GET      "/api/v1/get?backup_name=test_api"
[GIN] 2022/11/23 - 19:09:42 | 404 |         501ns |             ::1 | GET      "/api/v1/get?backup_name=test_api"

[Bug]: Create index hangs for a restored collection

Current Behavior

❯ python example/verify_data.py

=== start connecting to Milvus     ===

Does collection hello_milvus_recover exist in Milvus: True
Number of entities in Milvus: hello_milvus_recover : 3000

=== Start Creating index IVF_FLAT  ===

The output is stuck at this step.

standalone.log

Expected Behavior

No response

Steps To Reproduce

No response

Environment

No response

Anything else?

No response

[Enhancement]: Cache the GCS OAuth token

What would you like to be added?

Before fetching a new GCS OAuth token, try the cache first.

Why is this needed?

Better performance: fetching a fresh token on every request adds needless latency.

Anything else?

No response

[Bug]: integer divide by zero

Current Behavior

./milvus-backup restore -n test -s retest

Expected Behavior

Steps To Reproduce

No response

Environment

No response

Anything else?

No response

[Bug]: Access denied error

Current Behavior

When I run `./milvus-backup create -n milvus-backup-(date +%d-%m-%y)` I get the following warning, which repeats itself:

[WARN] [storage/minio_chunk_manager.go:85] ["failed to check blob bucket exist"] [bucket=a-bucket] [error="Access Denied."]

Expected Behavior

When I run the mentioned command ./milvus-backup create -n milvus-backup-(date +%d-%m-%y), I should receive a backup.

Steps To Reproduce

  1. Run the following docker-compose.yml file
version: '3.5'

services:
  etcd:
    container_name: milvus-etcd
    image: quay.io/coreos/etcd:v3.5.0
    environment:
      - ETCD_AUTO_COMPACTION_MODE=revision
      - ETCD_AUTO_COMPACTION_RETENTION=1000
      - ETCD_QUOTA_BACKEND_BYTES=4294967296
      - ETCD_SNAPSHOT_COUNT=50000
    volumes:
      - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/etcd:/etcd
    command: etcd -advertise-client-urls=http://127.0.0.1:2379 -listen-client-urls http://0.0.0.0:2379 --data-dir /etcd

  minio:
    container_name: milvus-minio
    image: minio/minio:RELEASE.2022-03-17T06-34-49Z
    environment:
      MINIO_ACCESS_KEY: minioadmin
      MINIO_SECRET_KEY: minioadmin
    ports:
      - "9001:9001"
    volumes:
      - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/minio:/minio_data
    command: minio server /minio_data --console-address ":9001"
    healthcheck:
      test:
        [
          "CMD",
          "curl",
          "-f",
          "http://localhost:9000/minio/health/live"
        ]
      interval: 30s
      timeout: 20s
      retries: 3

  standalone:
    container_name: milvus-standalone
    image: milvusdb/milvus:v2.2.0
    command: [ "milvus", "run", "standalone" ]
    environment:
      ETCD_ENDPOINTS: etcd:2379
      MINIO_ADDRESS: minio:9000
    volumes:
      - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/milvus:/var/lib/milvus
    ports:
      - "19530:19530"
      - "9091:9091"
    depends_on:
      - "etcd"
      - "minio"
  2. Add some data to the Milvus database
  3. Run the mentioned command ./milvus-backup create -n milvus-backup-(date +%d-%m-%y)


Environment

- go version go1.19.5 linux/amd64
- Linux

Anything else?

No response

[Bug]: When the job returns an error, the corresponding goroutine will exit.

Current Behavior

func (p *WorkerPool) work() error {
	for job := range p.job {
		if p.lim != nil {
			if err := p.lim.Wait(p.subCtx); err != nil {
				return err
			}
		}
		if err := job(p.subCtx); err != nil {
			return err
		}
	}
	return nil
}

When the job returns an error, the corresponding goroutine will exit, resulting in a decrease in consumers and eventually a backlog of tasks.

Expected Behavior

The number of consumers should not decrease.

Steps To Reproduce

No response

Environment

No response

Anything else?

No response

[Enhancement]: Add test cases for customized configs

What would you like to be added?

Add test cases for customized (non-default) configs.

Why is this needed?

Many users run MinIO with a customized rather than default config; we need to verify correctness in that scenario.

Anything else?

No response

[Bug]: Backup fails on GCP

[Enhancement]: Add negative test cases

What would you like to be added?

Add negative test cases, for example:

  • Unsupported behavior
  • Illegal parameters
  • and so on

Why is this needed?

To make the application more robust, and to verify that it handles mistakes gracefully.

Anything else?

No response

[Feature]: When no binlog is backed up, the result of the backup operation should be set to fail

Is your feature request related to a problem? Please describe.

see #45

Currently, when a backup contains no binlog, only a warning is logged and the result is still reported as success. Such a backup is meaningless because the restored collection will be empty, and the success result misleads the user.
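The requested behavior amounts to a final check on the collected binlog count. The sketch below uses illustrative types and state codes (not the project's actual `backuppb` definitions) to show where such a check would sit:

```go
package main

import "fmt"

// CollectionBackup is a pared-down stand-in for the real backup metadata;
// the field names and state codes are illustrative, not the project's
// actual ones.
type CollectionBackup struct {
	Name       string
	BinlogCnt  int
	StateCode  int // 2 = success, 3 = fail (hypothetical codes)
	ErrMessage string
}

// finalize marks the backup failed when no binlog was collected, instead
// of logging a warning and still reporting success.
func finalize(b *CollectionBackup) {
	if b.BinlogCnt == 0 {
		b.StateCode = 3
		b.ErrMessage = fmt.Sprintf("no binlog backed up for collection %s", b.Name)
		return
	}
	b.StateCode = 2
}

func main() {
	b := &CollectionBackup{Name: "hello_milvus", BinlogCnt: 0}
	finalize(b)
	fmt.Println(b.StateCode, b.ErrMessage)
}
```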

Describe the solution you'd like.

No response

Describe an alternate solution.

No response

Anything else? (Additional Context)

No response

[Bug]: Vectors not included in backup to MinIO; no vectors present on restore

Current Behavior

My config

Screenshot from 2023-01-23 13-26-16

It is possible I'm just setting this up wrong, but I'll walk through what I'm doing.
I have bucket storage set up with S3 for my k8s cluster.
The helm chart has been configured to sync with bucketName: shf-prod-milvus.
Here I'm inserting data into Milvus. It's similar to hello_milvus in the example scripts: it puts 10,000 vectors into a collection called demo, and I have a script to list the entity count afterwards.

Screenshot from 2023-01-23 13-34-50

Here is my s3 bucket after data has been inserted:
Screenshot from 2023-01-23 13-32-44

I can click down into it and see that it's storing some index info and RAW_DATA.

I assume this s3 bucket mirrors the current data in my k8s cluster.
The backup should go to the minio hosted instance in k8s. So I try this...

Screenshot from 2023-01-23 13-39-54

Looking at the minio bucket I see only metadata.

Screenshot from 2023-01-23 13-42-01

Doesn't look like any vectors are being stored. Then I attempt a restore.

Screenshot from 2023-01-23 13-44-54

No vectors in the recovered collection.

Screenshot from 2023-01-23 13-45-23

Expected Behavior

I'm expecting the vector data to be stored in my MinIO bucket as well.
Is Milvus supposed to access MinIO for the current data in k8s, or should it be connected to S3 as I currently have it in the screenshot?
It seems that vector data isn't being read correctly from the Milvus server.

Steps To Reproduce

The example at the bottom of the README; I've tried those steps as well, with the same issue. I've never gotten data to restore successfully.

Environment

Kubernetes cluster with a modified helm chart for the bucket name.
Running milvus-backup from a container built with the provided Dockerfile.

Anything else?

No response

[API] API `create` does not work synchronously when `async` is False

body

{"async": false,
"backup_name": "backup_IGo9Oasd",
"collection_names": "e2e_Jss1Sydv"
}

response

{
    "requestId": "3ad6d832-6eea-11ed-beb0-acde48001122",
    "msg": "create backup is executing asynchronously",
    "data": {
        "id": "3adcb284-6eea-11ed-beb0-acde48001122",
        "start_time": 1669618780729,
        "name": "backup_IGo9Oasd"
    }
}

server log:

[2022/11/28 15:00:03.316 +08:00] [INFO] [core/backup_context.go:538] ["finish executeCreateBackup"] [requestId=3ad6d832-6eea-11ed-beb0-acde48001122] [backupName=backup_IGo9Oasd] [collections="[]"] [async=true] ["backup meta"="{\"id\":\"3adcb284-6eea-11ed-beb0-acde48001122\",\"state_code\":2,\"start_time\":1669618780729,\"end_time\":1669618803036,\"name\":\"backup_IGo9Oasd\",\"backup_timestamp\":1669618780730}"]
[2022/11/28 15:00:03.316 +08:00] [DEBUG] [core/backup_context.go:244] ["call refreshBackupMetaFunc"] [id=3adcb284-6eea-11ed-beb0-acde48001122]
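The log shows the request being treated as `async=true` even though the body set `"async": false`. One way to honor the flag, sketched as an illustrative handler skeleton (names hypothetical, not the project's actual API), is to always run the work in a goroutine but block on a done channel when async is false, so the response carries the final state instead of the "executing asynchronously" message:

```go
package main

import "fmt"

// createBackup is an illustrative handler skeleton: the work always runs
// in a goroutine, but when async is false the handler blocks on a done
// channel so the response carries the final state.
func createBackup(async bool, run func() string) string {
	done := make(chan string, 1) // buffered so the goroutine never blocks
	go func() { done <- run() }()
	if async {
		return "create backup is executing asynchronously"
	}
	return <-done // synchronous path: wait for completion
}

func main() {
	fmt.Println(createBackup(false, func() string { return "success" }))
}
```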

[Bug]: Docker build fails using the provided Dockerfile

Current Behavior

Building the image with the Dockerfile fails: `go mod tidy` cannot download modules (see the log below).

Expected Behavior

No response

Steps To Reproduce

1. Dockerfile:

FROM golang:1.18 AS builder

ENV CGO_ENABLED=0
WORKDIR /app
COPY . .
RUN go mod tidy
RUN go build -o /app/milvus-backup

FROM alpine:3.17
WORKDIR /app
COPY --from=builder /app/milvus-backup .
COPY --from=builder /app/configs ./configs
EXPOSE 8080
ENTRYPOINT ["milvus-backup", "server"]

Environment

No response

Anything else?

#DOME:git checkout -f 287d0185e3bd532406d5c55ef924356b4f93a015
HEAD is now at 287d018 hot fix
#DOME:git submodule init
#DOME:git submodule update
#DOME:cd /code//
#DOME:docker build --rm --pull -f /code//Dockerfile -t private-registry.sohucs.com/milvus-backup/milvus-backup:master_txy_287d018 /code//
Sending build context to Docker daemon 71.13MB
Step 1/12 : From golang:1.18 AS builder
1.18: Pulling from library/golang
bbeef03cda1f: Pulling fs layer
f049f75f014e: Pulling fs layer
56261d0e6b05: Pulling fs layer
9bd150679dbd: Pulling fs layer
bfcb68b5bd10: Pulling fs layer
06d0c5d18ef4: Pulling fs layer
cc7973a07a5b: Pulling fs layer
9bd150679dbd: Waiting
bfcb68b5bd10: Waiting
cc7973a07a5b: Waiting
f049f75f014e: Verifying Checksum
f049f75f014e: Download complete
56261d0e6b05: Verifying Checksum
56261d0e6b05: Download complete
bbeef03cda1f: Verifying Checksum
bbeef03cda1f: Download complete
bbeef03cda1f: Pull complete
f049f75f014e: Pull complete
56261d0e6b05: Pull complete
9bd150679dbd: Verifying Checksum
9bd150679dbd: Download complete
cc7973a07a5b: Verifying Checksum
cc7973a07a5b: Download complete
bfcb68b5bd10: Verifying Checksum
bfcb68b5bd10: Download complete
9bd150679dbd: Pull complete
bfcb68b5bd10: Pull complete
06d0c5d18ef4: Verifying Checksum
06d0c5d18ef4: Download complete
06d0c5d18ef4: Pull complete
cc7973a07a5b: Pull complete
Digest: sha256:50c889275d26f816b5314fc99f55425fa76b18fcaf16af255f5d57f09e1f48da
Status: Downloaded newer image for golang:1.18
---> c37a56a6d654
Step 2/12 : ENV CGO_ENABLED=0
---> Running in fa3d0db82704
Removing intermediate container fa3d0db82704
---> 4233f1e5a140
Step 3/12 : WORKDIR /app
---> Running in 012825a8994c
Removing intermediate container 012825a8994c
---> fbc0df0d07f8
Step 4/12 : COPY . .
---> 859d26d83120
Step 5/12 : RUN go mod tidy
---> Running in e97ec92c78de
go: downloading github.com/swaggo/swag v1.8.10
go: downloading github.com/spf13/cobra v1.5.0
go: downloading golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4
go: downloading google.golang.org/grpc v1.48.0
go: downloading go.uber.org/atomic v1.10.0
go: downloading github.com/stretchr/testify v1.8.1
go: downloading github.com/golang/protobuf v1.5.2
go: downloading go.etcd.io/etcd/client/v3 v3.5.0
go: downloading github.com/google/btree v1.0.1
go: downloading golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6
go: downloading github.com/spf13/cast v1.3.1
go: downloading go.uber.org/zap v1.17.0
go: downloading github.com/spf13/viper v1.8.1
go: downloading github.com/minio/minio-go/v7 v7.0.17
go: downloading gopkg.in/natefinch/lumberjack.v2 v2.0.0
go: downloading github.com/uber/jaeger-client-go v2.25.0+incompatible
go: downloading github.com/sony/sonyflake v1.1.0
go: downloading golang.org/x/time v0.0.0-20191024005414-555d28b269f0
go: downloading github.com/lingdor/stackerror v0.0.0-20191119040541-976d8885ed76
go: downloading github.com/google/uuid v1.1.2
go: downloading github.com/gin-gonic/gin v1.8.1
go: downloading github.com/blang/semver/v4 v4.0.0
go: downloading github.com/wayblink/milvus-sdk-go/v2 v2.2.16
go: downloading github.com/swaggo/gin-swagger v1.5.3
go: downloading github.com/swaggo/files v1.0.0
go: downloading golang.org/x/oauth2 v0.0.0-20210402161424-2e8d93401602
go: downloading github.com/pkg/errors v0.9.1
github.com/zilliztech/milvus-backup/cmd imports
github.com/spf13/cobra: github.com/spf13/[email protected]: Get "https://proxy.golang.org/github.com/spf13/cobra/@v/v1.5.0.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core imports
github.com/gin-gonic/gin: github.com/gin-gonic/[email protected]: Get "https://proxy.golang.org/github.com/gin-gonic/gin/@v/v1.8.1.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core imports
github.com/milvus-io/milvus-sdk-go/v2/client: github.com/wayblink/milvus-sdk-go/[email protected]: Get "https://proxy.golang.org/github.com/wayblink/milvus-sdk-go/v2/@v/v2.2.16.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core imports
github.com/milvus-io/milvus-sdk-go/v2/entity: github.com/wayblink/milvus-sdk-go/[email protected]: Get "https://proxy.golang.org/github.com/wayblink/milvus-sdk-go/v2/@v/v2.2.16.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core imports
github.com/swaggo/files: github.com/swaggo/[email protected]: Get "https://proxy.golang.org/github.com/swaggo/files/@v/v1.0.0.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core imports
github.com/swaggo/gin-swagger: github.com/swaggo/[email protected]: Get "https://proxy.golang.org/github.com/swaggo/gin-swagger/@v/v1.5.3.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core imports
go.uber.org/zap: go.uber.org/[email protected]: Get "https://proxy.golang.org/go.uber.org/zap/@v/v1.17.0.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core/paramtable imports
github.com/spf13/cast: github.com/spf13/[email protected]: Get "https://proxy.golang.org/github.com/spf13/cast/@v/v1.3.1.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core/paramtable imports
github.com/spf13/viper: github.com/spf13/[email protected]: Get "https://proxy.golang.org/github.com/spf13/viper/@v/v1.8.1.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core/proto/backuppb imports
github.com/golang/protobuf/proto: github.com/golang/[email protected]: Get "https://proxy.golang.org/github.com/golang/protobuf/@v/v1.5.2.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core/proto/backuppb imports
google.golang.org/grpc: google.golang.org/[email protected]: Get "https://proxy.golang.org/google.golang.org/grpc/@v/v1.48.0.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core/proto/backuppb imports
google.golang.org/grpc/codes: google.golang.org/[email protected]: Get "https://proxy.golang.org/google.golang.org/grpc/@v/v1.48.0.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core/proto/backuppb imports
google.golang.org/grpc/status: google.golang.org/[email protected]: Get "https://proxy.golang.org/google.golang.org/grpc/@v/v1.48.0.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core/storage imports
github.com/minio/minio-go/v7: github.com/minio/minio-go/[email protected]: Get "https://proxy.golang.org/github.com/minio/minio-go/v7/@v/v7.0.17.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core/storage imports
github.com/minio/minio-go/v7/pkg/credentials: github.com/minio/minio-go/[email protected]: Get "https://proxy.golang.org/github.com/minio/minio-go/v7/@v/v7.0.17.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core/storage imports
golang.org/x/exp/mmap: golang.org/x/[email protected]: Get "https://proxy.golang.org/golang.org/x/exp/@v/v0.0.0-20200224162631-6cc2880d07d6.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core/storage/gcp imports
github.com/pkg/errors: github.com/pkg/[email protected]: Get "https://proxy.golang.org/github.com/pkg/errors/@v/v0.9.1.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core/storage/gcp imports
go.uber.org/atomic: go.uber.org/[email protected]: Get "https://proxy.golang.org/go.uber.org/atomic/@v/v1.10.0.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core/storage/gcp imports
golang.org/x/oauth2: golang.org/x/[email protected]: Get "https://proxy.golang.org/golang.org/x/oauth2/@v/v0.0.0-20210402161424-2e8d93401602.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core/storage/gcp imports
golang.org/x/oauth2/google: golang.org/x/[email protected]: Get "https://proxy.golang.org/golang.org/x/oauth2/@v/v0.0.0-20210402161424-2e8d93401602.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core/utils imports
github.com/blang/semver/v4: github.com/blang/semver/[email protected]: Get "https://proxy.golang.org/github.com/blang/semver/v4/@v/v4.0.0.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core/utils imports
github.com/google/uuid: github.com/google/[email protected]: Get "https://proxy.golang.org/github.com/google/uuid/@v/v1.1.2.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core/utils imports
github.com/sony/sonyflake: github.com/sony/[email protected]: Get "https://proxy.golang.org/github.com/sony/sonyflake/@v/v1.1.0.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/docs imports
github.com/swaggo/swag: github.com/swaggo/[email protected]: Get "https://proxy.golang.org/github.com/swaggo/swag/@v/v1.8.10.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/internal/common imports
golang.org/x/sync/errgroup: golang.org/x/[email protected]: Get "https://proxy.golang.org/golang.org/x/sync/@v/v0.0.0-20220722155255-886fb9371eb4.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/internal/common imports
golang.org/x/time/rate: golang.org/x/[email protected]: Get "https://proxy.golang.org/golang.org/x/time/@v/v0.0.0-20191024005414-555d28b269f0.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/internal/kv imports
go.etcd.io/etcd/client/v3: go.etcd.io/etcd/client/[email protected]: Get "https://proxy.golang.org/go.etcd.io/etcd/client/v3/@v/v3.5.0.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/internal/kv/mem imports
github.com/google/btree: github.com/google/[email protected]: Get "https://proxy.golang.org/github.com/google/btree/@v/v1.0.1.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/internal/log imports
github.com/uber/jaeger-client-go/utils: github.com/uber/[email protected]+incompatible: Get "https://proxy.golang.org/github.com/uber/jaeger-client-go/@v/v2.25.0+incompatible.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/internal/log imports
go.uber.org/zap/buffer: go.uber.org/[email protected]: Get "https://proxy.golang.org/go.uber.org/zap/@v/v1.17.0.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/internal/log imports
go.uber.org/zap/zapcore: go.uber.org/[email protected]: Get "https://proxy.golang.org/go.uber.org/zap/@v/v1.17.0.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/internal/log imports
go.uber.org/zap/zaptest: go.uber.org/[email protected]: Get "https://proxy.golang.org/go.uber.org/zap/@v/v1.17.0.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/internal/log imports
gopkg.in/natefinch/lumberjack.v2: gopkg.in/natefinch/[email protected]: Get "https://proxy.golang.org/gopkg.in/natefinch/lumberjack.v2/@v/v2.0.0.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/internal/util/grpcclient imports
google.golang.org/grpc/backoff: google.golang.org/[email protected]: Get "https://proxy.golang.org/google.golang.org/grpc/@v/v1.48.0.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/internal/util/grpcclient imports
google.golang.org/grpc/keepalive: google.golang.org/[email protected]: Get "https://proxy.golang.org/google.golang.org/grpc/@v/v1.48.0.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/internal/util/logutil imports
google.golang.org/grpc/grpclog: google.golang.org/[email protected]: Get "https://proxy.golang.org/google.golang.org/grpc/@v/v1.48.0.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core tested by
github.com/zilliztech/milvus-backup/core.test imports
github.com/stretchr/testify/assert: github.com/stretchr/[email protected]: Get "https://proxy.golang.org/github.com/stretchr/testify/@v/v1.8.1.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/core tested by
github.com/zilliztech/milvus-backup/core.test imports
github.com/stretchr/testify/require: github.com/stretchr/[email protected]: Get "https://proxy.golang.org/github.com/stretchr/testify/@v/v1.8.1.zip": dial tcp 142.251.42.241:443: i/o timeout
github.com/zilliztech/milvus-backup/internal/util/retry tested by
github.com/zilliztech/milvus-backup/internal/util/retry.test imports
github.com/lingdor/stackerror: github.com/lingdor/[email protected]: Get "https://proxy.golang.org/github.com/lingdor/stackerror/@v/v0.0.0-20191119040541-976d8885ed76.zip": dial tcp 142.251.42.241:443: i/o timeout
The command '/bin/sh -c go mod tidy' returned a non-zero code: 1
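Every failure above is a timeout dialing proxy.golang.org, so the root cause is that the build network cannot reach the default Go module proxy. A common workaround, assuming the default proxy is blocked from your network, is to point GOPROXY at a reachable mirror in the builder stage (the mirror URL below is only an example):

```dockerfile
# In the builder stage, before `go mod tidy`; pick a mirror reachable
# from your build network (example URL only):
ENV GOPROXY=https://goproxy.cn,direct
```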

[Bug]: Backup server terminated due to error `fatal error: concurrent map writes` when running testcases

Current Behavior

[GIN] 2022/12/20 - 13:31:11 | 200 |     388.259µs |             ::1 | GET      "/api/v1/get_backup?backup_name=backup_L8W4U3J2"
[2022/12/20 13:31:11.212 +08:00] [INFO] [core/backup_context.go:297] ["collections to backup"] [collections="[{\"ID\":437994733847285187,\"Name\":\"create_backup_M01kVRSU\",\"Schema\":{\"CollectionName\":\"create_backup_M01kVRSU\",\"Description\":\"\",\"AutoID\":false,\"Fields\":[{\"ID\":100,\"Name\":\"int64\",\"PrimaryKey\":true,\"AutoID\":false,\"Description\":\"\",\"DataType\":5,\"TypeParams\":{},\"IndexParams\":{}},{\"ID\":101,\"Name\":\"float\",\"PrimaryKey\":false,\"AutoID\":false,\"Description\":\"\",\"DataType\":10,\"TypeParams\":{},\"IndexParams\":{}},{\"ID\":102,\"Name\":\"varchar\",\"PrimaryKey\":false,\"AutoID\":false,\"Description\":\"\",\"DataType\":21,\"TypeParams\":{\"max_length\":\"65535\"},\"IndexParams\":{}},{\"ID\":103,\"Name\":\"binary_vector\",\"PrimaryKey\":false,\"AutoID\":false,\"Description\":\"\",\"DataType\":100,\"TypeParams\":{\"dim\":\"128\"},\"IndexParams\":{}}]},\"PhysicalChannels\":[\"by-dev-rootcoord-dml_144\",\"by-dev-rootcoord-dml_145\"],\"VirtualChannels\":[\"by-dev-rootcoord-dml_144_437994733847285187v0\",\"by-dev-rootcoord-dml_145_437994733847285187v1\"],\"Loaded\":false,\"ConsistencyLevel\":0,\"ShardNum\":2}]"]
fatal error: concurrent map writes

goroutine 17921 [running]:
runtime.throw({0x1b68fbc?, 0x0?})
        /usr/local/go/src/runtime/panic.go:992 +0x71 fp=0xc001ce2148 sp=0xc001ce2118 pc=0x1036bf1
runtime.mapassign_faststr(0xc001ce2ba8?, 0x40?, {0xc0019104b0, 0x24})
        /usr/local/go/src/runtime/map_faststr.go:212 +0x39c fp=0xc001ce21b0 sp=0xc001ce2148 pc=0x1014edc
github.com/zilliztech/milvus-backup/core.BackupContext.executeCreateBackup.func1({0xc0019104b0, 0x24}, 0xc00003a0c0?)
        /Users/zilliz/workspace/milvus-backup/core/backup_context.go:250 +0x145 fp=0xc001ce2250 sp=0xc001ce21b0 pc=0x193d685
github.com/zilliztech/milvus-backup/core.BackupContext.executeCreateBackup({{0x1cfc7e0, 0xc00003a0c0}, {0x0, 0x0}, 0x1, {{{0x1, {...}}, 0xc00014c060, {0xc0001523f0, 0x2e}, ...}, ...}, ...}, ...)
        /Users/zilliz/workspace/milvus-backup/core/backup_context.go:343 +0xf4c fp=0xc001ce3d20 sp=0xc001ce2250 pc=0x193876c
github.com/zilliztech/milvus-backup/core.BackupContext.CreateBackup.func1()
        /Users/zilliz/workspace/milvus-backup/core/backup_context.go:215 +0x65 fp=0xc001ce3fe0 sp=0xc001ce3d20 pc=0x1937285
runtime.goexit()
        /usr/local/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc001ce3fe8 sp=0xc001ce3fe0 pc=0x10697c1
created by github.com/zilliztech/milvus-backup/core.BackupContext.CreateBackup
        /Users/zilliz/workspace/milvus-backup/core/backup_context.go:215 +0xf05

goroutine 1 [IO wait]:
internal/poll.runtime_pollWait(0x2a631e48, 0x72)
        /usr/local/go/src/runtime/netpoll.go:302 +0x89
internal/poll.(*pollDesc).wait(0xc0005c1200?, 0x4?, 0x0)
        /usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32
internal/poll.(*pollDesc).waitRead(...)
        /usr/local/go/src/internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Accept(0xc0005c1200)
        /usr/local/go/src/internal/poll/fd_unix.go:614 +0x22c
net.(*netFD).accept(0xc0005c1200)
        /usr/local/go/src/net/fd_unix.go:172 +0x35
net.(*TCPListener).accept(0xc0005a8270)
        /usr/local/go/src/net/tcpsock_posix.go:139 +0x28
net.(*TCPListener).Accept(0xc0005a8270)
        /usr/local/go/src/net/tcpsock.go:288 +0x3d
net/http.(*Server).Serve(0xc0005ee0e0, {0x1cfb6f0, 0xc0005a8270})
        /usr/local/go/src/net/http/server.go:3039 +0x385
net/http.(*Server).ListenAndServe(0xc0005ee0e0) /usr/local/go/src/net/http/server.go:2968 +0x7d
net/http.ListenAndServe(...)
        /usr/local/go/src/net/http/server.go:3222
github.com/gin-gonic/gin.(*Engine).Run(0xc000582820, {0xc0005a5ae8, 0x1, 0x1})
        /Users/zilliz/go/pkg/mod/github.com/gin-gonic/[email protected]/gin.go:382 +0x20e
github.com/zilliztech/milvus-backup/core.(*Server).Start(0xc0005a8180)
        /Users/zilliz/workspace/milvus-backup/core/backup_server.go:58 +0x54
github.com/zilliztech/milvus-backup/cmd.glob..func7(0x2bd9120?, {0x1b51ec9?, 0x0?, 0x0?})
        /Users/zilliz/workspace/milvus-backup/cmd/server.go:32 +0x206
github.com/spf13/cobra.(*Command).execute(0x2bd9120, {0x2c18178, 0x0, 0x0})
        /Users/zilliz/go/pkg/mod/github.com/spf13/[email protected]/command.go:876 +0x67b
github.com/spf13/cobra.(*Command).ExecuteC(0x2bd8ea0)
        /Users/zilliz/go/pkg/mod/github.com/spf13/[email protected]/command.go:990 +0x3b4
github.com/spf13/cobra.(*Command).Execute(...)
        /Users/zilliz/go/pkg/mod/github.com/spf13/[email protected]/command.go:918
github.com/zilliztech/milvus-backup()
        /Users/zilliz/workspace/milvus-backup/cmd/root.go:24 +0x71
main.main()
        /Users/zilliz/workspace/milvus-backup/main.go:17 +0x17

goroutine 21 [chan receive, 1 minutes]:
gopkg.in/natefinch/lumberjack%2ev2.(*Logger).millRun(0xc0001449c0)
        /Users/zilliz/go/pkg/mod/gopkg.in/natefinch/[email protected]/lumberjack.go:379 +0x45
created by gopkg.in/natefinch/lumberjack%2ev2.(*Logger).mill.func1
        /Users/zilliz/go/pkg/mod/gopkg.in/natefinch/[email protected]/lumberjack.go:390 +0x8e

goroutine 22 [select, 1 minutes]:
google.golang.org/grpc.(*ccBalancerWrapper).watcher(0xc000047000)
        /Users/zilliz/go/pkg/mod/google.golang.org/[email protected]/balancer_conn_wrappers.go:112 +0x73
created by google.golang.org/grpc.newCCBalancerWrapper
        /Users/zilliz/go/pkg/mod/google.golang.org/[email protected]/balancer_conn_wrappers.go:73 +0x22a

goroutine 39 [select]:
google.golang.org/grpc/internal/transport.(*http2Client).keepalive(0xc0001ae000)
        /Users/zilliz/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_client.go:1575 +0x165
created by google.golang.org/grpc/internal/transport.newHTTP2Client
        /Users/zilliz/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_client.go:360 +0x16fe

goroutine 40 [runnable]:
internal/poll.runtime_pollWait(0x2a632028, 0x72)
        /usr/local/go/src/runtime/netpoll.go:302 +0x89
internal/poll.(*pollDesc).wait(0xc0001a2100?, 0xc0001a4000?, 0x0)
        /usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32
internal/poll.(*pollDesc).waitRead(...)
        /usr/local/go/src/internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc0001a2100, {0xc0001a4000, 0x8000, 0x8000})
        /usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc0001a2100, {0xc0001a4000?, 0x1aea2a0?, 0x19d2e01?})
        /usr/local/go/src/net/fd_posix.go:55 +0x29
net.(*conn).Read(0xc0001de038, {0xc0001a4000?, 0xc00142ede0?, 0x0?})
        /usr/local/go/src/net/net.go:183 +0x45
bufio.(*Reader).Read(0xc0001e6360, {0xc0001ac040, 0x9, 0xc001bbff1c?})
        /usr/local/go/src/bufio/bufio.go:236 +0x1b4
io.ReadAtLeast({0x1cf0be0, 0xc0001e6360}, {0xc0001ac040, 0x9, 0x9}, 0x9)
        /usr/local/go/src/io/io.go:331 +0x9a
io.ReadFull(...)
        /usr/local/go/src/io/io.go:350
golang.org/x/net/http2.readFrameHeader({0xc0001ac040?, 0x9?, 0x106b0a0?}, {0x1cf0be0?, 0xc0001e6360?})
        /Users/zilliz/go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:237 +0x6e
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0001ac000)
        /Users/zilliz/go/pkg/mod/golang.org/x/[email protected]/http2/frame.go:498 +0x95
google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0001ae000)
        /Users/zilliz/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_client.go:1501 +0x414
created by google.golang.org/grpc/internal/transport.newHTTP2Client
        /Users/zilliz/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_client.go:365 +0x1745

goroutine 41 [select]:
google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc0001802d0, 0x1)
        /Users/zilliz/go/pkg/mod/google.golang.org/[email protected]/internal/t:408 +0x115
google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000092d80)
        /Users/zilliz/go/pkg/mod/google.golang.org/[email protected]/internal/transport/controlbuf.go:535 +0x85
google.golang.org/grpc/internal/transport.newHTTP2Client.func3()
        /Users/zilliz/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_client.go:415 +0x65
created by google.golang.org/grpc/internal/transport.newHTTP2Client
        /Users/zilliz/go/pkg/mod/google.golang.org/[email protected]/internal/transport/http2_client.go:413 +0x1d91

goroutine 25 [IO wait]:
internal/poll.runtime_pollWait(0x2a631f38, 0x72)
        /usr/local/go/src/runtime/netpoll.go:302 +0x89
internal/poll.(*pollDesc).wait(0xc0005c0180?, 0xc000160000?, 0x0)
        /usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32
internal/poll.(*pollDesc).waitRead(...)
        /usr/local/go/src/internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc0005c0180, {0xc000160000, 0x1000, 0x1000})
        /usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc0005c0180, {0xc000160000?, 0x1007489?, 0x4?})
        /usr/local/go/src/net/fd_posix.go:55 +0x29
net.(*conn).Read(0xc0001bcab8, {0xc000160000?, 0x0?, 0x0?})
        /usr/local/go/src/net/net.go:183 +0x45
net/http.(*persistConn).Read(0xc0005d2120, {0xc000160000?, 0xc000182840?, 0xc000132d30?})
        /usr/local/go/src/net/http/transport.go:1929 +0x4e
bufio.(*Reader).fill(0xc000144de0)
        /usr/local/go/src/bufio/bufio.go:106 +0x103
bufio.(*Reader).Peek(0xc000144de0, 0x1)
        /usr/local/go/src/bufio/bufio.go:144 +0x5d
net/http.(*persistConn).readLoop(0xc0005d2120)
        /usr/local/go/src/net/http/transport.go:2093 +0x1ac
created by net/http.(*Transport).dialConn
        /usr/local/go/src/net/http/transport.go:1750 +0x173e

goroutine 26 [select]:
net/http.(*persistConn).writeLoop(0xc0005d2120)
        /usr/local/go/src/net/http/transport.go:2392 +0xf5
created by net/http.(*Transport).dialConn
        /usr/local/go/src/net/http/transport.go:1751 +0x1791

goroutine 57 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0x2a631d58, 0x72)
        /usr/local/go/src/runtime/netpoll.go:302 +0x89
internal/poll.(*pollDesc).wait(0xc00019c980?, 0x0?, 0x0)
        /usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32
internal/poll.(*pollDesc).waitRead(...)
        /usr/local/go/src/internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Accept(0xc00019c980)
        /usr/local/go/src/internal/poll/fd_unix.go:614 +0x22c
net.(*netFD).accept(0xc00019c980)
        /usr/local/go/src/net/fd_unix.go:172 +0x35
net.(*TCPListener).accept(0xc00000ef60)
        /usr/local/go/src/net/tcpsock_posix.go:139 +0x28
net.(*TCPListener).Accept(0xc00000ef60)
        /usr/local/go/src/net/tcpsock.go:288 +0x3d
net/http.(*Server).Serve(0xc00020e2a0, {0x1cfb6f0, 0xc00000ef60})
        /usr/local/go/src/net/http/server.go:3039 +0x385
net/http.(*Server).ListenAndServe(0xc00020e2a0)
        /usr/local/go/src/net/http/server.go:2968 +0x7d
net/http.ListenAndServe(...)
        /usr/local/go/src/net/http/server.go:3222
github.com/zilliztech/milvus-backup/core.(*Server).registerProfilePort.func1()
        /Users/zilliz/workspace/milvus-backup/core/backup_server.go:78 +0x65
created by github.com/zilliztech/milvus-backup/core.(*Server).registerProfilePort
        /Users/zilliz/workspace/milvus-backup/core/backup_server.go:76 +0x25

goroutine 16163 [IO wait]:
internal/poll.runtime_pollWait(0x2a6317b8, 0x72)
        /usr/local/go/src/runtime/netpoll.go:302 +0x89
internal/poll.(*pollDesc).wait(0xc00029f100?, 0xc0016a6c11?, 0x0)
        /usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32
internal/poll.(*pollDesc).waitRead(...)
        /usr/local/go/src/internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc00029f100, {0xc0016a6c11, 0x1, 0x1})
        /usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc00029f100, {0xc0016a6c11?, 0x1?, 0x1000000000001?})
        /usr/local/go/src/net/fd_posix.go:55 +0x29
net.(*conn).Read(0xc0001bceb8, {0xc0016a6c11?, 0xc0006fb320?, 0x0?})
        /usr/local/go/src/net/net.go:183 +0x45
net/http.(*connReader).backgroundRead(0xc0016a6c00)
        /usr/local/go/src/net/http/server.go:672 +0x3f
created by net/
        /usr/local/go/src/net/http/server.go:668 +0xca

goroutine 16162 [chan receive]:
github.com/minio/minio-go/v7.(*Object).doGetRequest(...)
        /Users/zilliz/go/pkg/mod/github.com/minio/minio-go/[email protected]/api-get-object.go:312
github.com/minio/minio-go/v7.(*Object).Stat(_)
        /Users/zilliz/go/pkg/mod/github.com/minio/minio-go/[email protected]/api-get-object.go:424 +0x26c
github.com/zilliztech/milvus-backup/core/storage.(*MinioChunkManager).Read(0x1b51fcd?, {0x1cfc7e0?, 0xc00003a0c0?}, {0xc00014a608?, 0x3c?}, {0xc001910660, 0x30})
        /Users/zilliz/workspace/milvus-backup/core/storage/minio_chunk_manager.go:216 +0x385
github.com/zilliztech/milvus-backup/core.BackupContext.readBackup({{0x1cfc7e0, 0xc00003a0c0}, {0x0, 0x0}, 0x1, {{{0x1, {...}}, 0xc00014c060, {0xc0001523f0, 0x2e}, ...}, ...}, ...}, ...)
        /Users/zilliz/workspace/milvus-backup/core/backup_context.go:1129 +0x6db
github.com/zilliztech/milvus-backup/core.BackupContext.GetBackup({{0x1cfc7e0, 0xc00003a0c0}, {0x0, 0x0}, 0x1, {{{0x1, {...}}, 0xc00014c060, {0xc0001523f0, 0x2e}, ...}, ...}, ...}, ...)
        /Users/zilliz/workspace/milvus-backup/core/backup_context.go:604 +0x66e
github.com/zilliztech/milvus-backup/core.BackupContext.ListBackups({{0x1cfc7e0, 0xc00003a0c0}, {0x0, 0x0}, 0x1, {{{0x1, {...}}, 0xc00014c060, {0xc0001523f0, 0x2e}, ...}, ...}, ...}, ...)
        /Users/zilliz/workspace/milvus-backup/core/backup_context.go:660 +0xc30
github.com/zilliztech/milvus-backup/core.(*Handlers).handleListBackups(0xc0005b4040, 0xc0004d1800)
        /Users/zilliz/workspace/milvus-backup/core/backup_server.go:156 +0x145
github.com/zilliztech/milvus-backup/core.(*Handlers).RegisterRoutesTo.func3(0xc0015d36f8?)
        /Users/zilliz/workspace/milvus-backup/core/backup_server.go:111 +0x1d
github.com/gin-gonic/gin.(*Context).Next(...)
        /Users/zilliz/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:173
github.com/gin-gonic/gin.CustomRecoveryWithWriter.func1(0xc0004d1800)
        /Users/zilliz/go/pkg/mod/github.com/gin-gonic/[email protected]/recovery.go:101 +0x82
github.com/gin-gonic/gin.(*Context).Next(...)
        /Users/zilliz/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:173
github.com/gin-gonic/gin.LoggerWithConfig.func1(0xc0004d1800)
        /Users/zilliz/go/pkg/mod/github.com/gin-gonic/[email protected]/logger.go:240 +0xe7
github.com/gin-gonic/gin.(*Context).Next(...)
        /Users/zilliz/go/pkg/mod/github.com/gin-gonic/[email protected]/context.go:173
github.com/gin-gonic/gin.(*Engine).handleHTTPRequest(0xc000582820, 0xc0004d1800)
        /Users/zilliz/go/pkg/mod/github.com/gin-gonic/[email protected]/gin.go:616 +0x671
github.com/gin-gonic/gin.(*Engine).ServeHTTP(0xc000582820, {0x1cfb900?, 0xc0001ad180}, 0xc00179c100)
        /Users/zilliz/go/pkg/mod/github.com/gin-gonic/[email protected]/gin.go:572 +0x1dd
net/http.serverHandler.ServeHTTP({0xc0016a6c00?}, {0x1cfb900, 0xc0001ad180}, 0xc00179c100)
        /usr/local/go/src/net/http/server.go:2916 +0x43b
net/http.(*conn).serve(0xc000279d60, {0x1cfc850, 0xc0005f2450})
        /usr/local/go/src/net/http/server.go:1966 +0x5d7
created by net/http.(*Server).Serve
        /usr/local/go/src/net/http/server.go:3071 +0x4db

goroutine 776 [select]:
net/http.(*persistConn).writeLoop(0xc0013d4c60)
        /usr/local/go/src/net/http/transport.go:2392 +0xf5
created by net/http.(*Transport).dialConn
        /usr/local/go/src/net/http/transport.go:1751 +0x1791

goroutine 750 [runnable]:
internal/poll.runtime_pollWait(0x2a6316c8, 0x72)
        /usr/local/go/src/runtime/netpoll.go:302 +0x89
internal/poll.(*pollDesc).wait(0xc0011faf80?, 0xc001450000?, 0x0)
        /usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32
internal/poll.(*pollDesc).waitRead(...)
        /usr/local/go/src/internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc0011faf80, {0xc001450000, 0x1000, 0x1000})
        /usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc0011faf80, {0xc001450000?, 0x1007489?, 0x4?})
        /usr/local/go/src/net/fd_posix.go:55 +0x29
net.(*conn).Read(0xc0005b4960, {0xc001450000?, 0x0?, 0x0?})
        /usr/local/go/src/net/net.go:183 +0x45
net/http.(*persistConn).Read(0xc0013d4ea0, {0xc001450000?, 0xc001432ba0?, 0xc0015d2d30?})
        /usr/local/go/src/net/http/transport.go:1929 +0x4e
bufio.(*Reader).fill(0xc001434c00)
        /usr/local/go/src/bufio/bufio.go:106 +0x103
bufio.(*Reader).Peek(0xc001434c00, 0x1)
        /usr/local/go/src/bufio/bufio.go:144 +0x5d
net/http.(*persistConn).readLoop(0xc0013d4ea0)
        /usr/local/go/src/net/http/transport.go:2093 +0x1ac
created by net/http.(*Transport).dialConn
        /usr/local/go/src/net/http/transport.go:1750 +0x173e

goroutine 236 [IO wait]:
internal/poll.runtime_pollWait(0x2a6318a8, 0x72)
        /usr/local/go/src/runtime/netpoll.go:302 +0x89
internal/poll.(*pollDesc).wait(0xc0011fad00?, 0xc00134e000?, 0x0)
        /usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32
internal/poll.(*pollDesc).waitRead(...)
        /usr/local/go/src/internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc0011fad00, {0xc00134e000, 0x1000, 0x1000})
        /usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc0011fad00, {0xc00134e000?, 0x0?, 0x4?})
        /usr/local/go/src/net/fd_posix.go:55 +0x29
net.(*conn).Read(0xc00134a000, {0xc00134e000?, 0x0?, 0x0?})
        /usr/local/go/src/net/net.go:183 +0x45
net/http.(*persistConn).Read(0xc0013d4d80, {0xc00134e000?, 0x1049d60?, 0xc000136ec8?})
        /usr/local/go/src/net/http/transport.go:1929 +0x4e
bufio.(*Reader).fill(0xc00134c000)
        /usr/local/go/src/bufio/bufio.go:106 +0x103
bufio.(*Reader).Peek(0xc00134c000, 0x1)
        /usr/local/go/src/bufio/bufio.go:144 +0x5d
net/http.(*persistConn).readLoop(0xc0013d4d80)
        /usr/local/go/src/net/http/transport.go:2093 +0x1ac
created by net/http.(*Transport).dialConn
        /usr/local/go/src/net/http/transport.go:1750 +0x173e

goroutine 237 [select]:
net/http.(*persistConn).writeLoop(0xc0013d4d80)
        /usr/local/go/src/net/http/transport.go:2392 +0xf5
created by net/http.(*Transport).dialConn
        /usr/local/go/src/net/http/transport.go:1751 +0x1791

goroutine 775 [IO wait]:
internal/poll.runtime_pollWait(0x2a631a88, 0x72)
        /usr/local/go/src/runtime/netpoll.go:302 +0x89
internal/poll.(*pollDesc).wait(0xc0011faa00?, 0xc0014ac000?, 0x0)
        /usr/local/go/src/internal/poll/fd_poll_runtime.go:83 +0x32
internal/poll.(*pollDesc).waitRead(...)
        /usr/local/go/src/internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc0011faa00, {0xc0014ac000, 0x1000, 0x1000})
        /usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc0011faa00, {0xc0014ac000?, 0x0?, 0x4?})
        /usr/local/go/src/net/fd_posix.go:55 +0x29
net.(*conn).Read(0xc0001bd018, {0xc0014ac000?, 0x0?, 0x0?})
        /usr/local/go/src/net/net.go:183 +0x45
net/http.(*persistConn).Read(0xc0013d4c60, {0xc0014ac000?, 0x1049d60?, 0xc00007cec8?})
        /usr/local/go/src/net/http/transport.go:1929 +0x4e
bufio.(*Reader).fill(0xc00148fc20)
        /usr/local/go/src/bufio/bufio.go:106 +0x103
bufio.(*Reader).Peek(0xc00148fc20, 0x1)
        /usr/local/go/src/bufio/bufio.go:144 +0x5d
net/http.(*persistConn).readLoop(0xc0013d4c60)
        /usr/local/go/src/net/http/transport.go:2093 +0x1ac
created by net/http.(*Transport).dialConn
        /usr/local/go/src/net/http/transport.go:1750 +0x173e

goroutine 751 [select]:
net/http.(*persistConn).writeLoop(0xc0013d4ea0)
        /usr/local/go/src/net/http/transport.go:2392 +0xf5
created by net/http.(*Transport).dialConn
        /usr/local/go/src/net/http/transport.go:1751 +0x1791

goroutine 17987 [select]:
github.com/minio/minio-go/v7.(*Client).newRetryTimer.func2()
        /Users/zilliz/go/pkg/mod/github.com/minio/minio-go/[email protected]/retry.go:79 +0x1ca
created by github.com/minio/minio-go/v7.(*Client).newRetryTimer
        /Users/zilliz/go/pkg/mod/github.com/minio/minio-go/[email protected]/retry.go:70 +0x16a

goroutine 17934 [runnable]:
google.golang.org/grpc.(*csAttempt).newStream(0xc00163ad80)
        /Users/zilliz/go/pkg/mod/google.golang.org/[email protected]/stream.go:444 +0x125
google.golang.org/grpc.newClientStreamWithParams.func2(0xc00163ad80)
        /Users/zilliz/go/pkg/mod/google.golang.org/[email protected]/stream.go:318 +0x3a
google.golang.org/grpc.(*clientStream).withRetry(0xc00157fc20, 0xc001215140, 0xc0005a3b10)
        /Users/zilliz/go/pkg/mod/google.golang.org/[email protected]/stream.go:709 +0xd3
google.golang.org/grpc.newClientStreamWithParams({0x1cfc7e0, 0xc00003a0c0}, 0x2be6040, 0xc000165880, {0x1b889eb, 0x31}, {0x0, 0x0, 0x0, 0x0, ...}, ...)
        /Users/zilliz/go/pkg/mod/google.golang.org/[email protected]/stream.go:327 +0x9da
google.golang.org/grpc.newClientStream.func2({0x1cfc7e0?, 0xc00003a0c0?}, 0xc00003a0c0?)
        /Users/zilliz/go/pkg/mod/google.golang.org/[email protected]/stream.go:192 +0x9f
google.golang.org/grpc.newClientStream({0x1cfc7e0, 0xc00003a0c0}, 0x2be6040, 0xc000165880, {0x1b889eb, 0x31}, {0x0, 0x0, 0x0})
        /Users/zilliz/go/pkg/mod/google.golang.org/[email protected]/stream.go:220 +0x4c2
google.golang.org/grpc.invoke({0x1cfc7e0?, 0xc00003a0c0?}, {0x1b889eb?, 0xc0016d2960?}, {0x1b02aa0, 0xc00047c2a0}, {0x1b02be0, 0xc0016d2a00}, 0x98?, {0x0, ...})
        /Users/zilliz/go/pkg/mod/google.golang.org/[email protected]/call.go:66 +0x7d
github.com/milvus-io/milvus-sdk-go/v2/client.RetryOnRateLimitInterceptor.func1({0x1cfc7e0, 0xc00003a0c0}, {0x1b889eb, 0x31}, {0x1b02aa0, 0xc00047c2a0}, {0x1b02be0, 0xc0016d2a00}, 0x0?, 0x1bd5fa0, ...)
        /Users/zilliz/go/pkg/mod/github.com/wayblink/milvus-sdk-go/[email protected]/client/rate_limit_interceptor.go:54 +0x225
github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryClient.func1.1.1({0x1cfc7e0?, 0xc00003a0c0?}, {0x1b889eb?, 0x0?}, {0x1b02aa0?, 0xc00047c2a0?}, {0x1b02be0?, 0xc0016d2a00?}, 0x8b0?, {0x0, ...})
        /Users/zilliz/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:72 +0x86
github.com/grpc-ecosystem/go-grpc-middleware/retry.UnaryClientInterceptor.func1({0x1cfc7e0, 0xc00003a0c0}, {0x1b889eb, 0x31}, {0x1b02aa0, 0xc00047c2a0}, {0x1b02be0, 0xc0016d2a00}, 0x8?, 0xc001b0a030, ...)
        /Users/zilliz/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/retry/retry.go:44 +0x2c9
github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryClient.func1.1.1({0x1cfc7e0?, 0xc00003a0c0?}, {0x1b889eb?, 0x203000?}, {0x1b02aa0?, 0xc00047c2a0?}, {0x1b02be0?, 0xc0016d2a00?}, 0x4?, {0x0, ...})
        /Users/zilliz/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:72 +0x86
github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryClient.func1({0x1cfc7e0, 0xc00003a0c0}, {0x1b889eb, 0x31}, {0x1b02aa0, 0xc00047c2a0}, {0x1b02be0, 0xc0016d2a00}, 0x2a67d970?, 0x1bd5fa0, ...)
        /Users/zilliz/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:81 +0x157
google.golang.org/grpc.(*ClientConn).Invoke(0x2ebdf18?, {0x1cfc7e0?, 0xc00003a0c0?}, {0x1b889eb?, 0x0?}, {0x1b02aa0?, 0xc00047c2a0?}, {0x1b02be0?, 0xc0016d2a00?}, {0x0, ...})
        /Users/zilliz/go/pkg/mod/google.golang.org/[email protected]/call.go:35 +0x223
github.com/milvus-io/milvus-proto/go-api/milvuspb.(*milvusServiceClient).ShowPartitions(0xc0005b4008, {0x1cfc7e0, 0xc00003a0c0}, 0xc0002fc1a0?, {0x0, 0x0, 0x0})
        /Users/zilliz/go/pkg/mod/github.com/milvus-io/milvus-proto/[email protected]/milvuspb/milvus.pb.go:8247 +0xc9
github.com/milvus-io/milvus-sdk-go/v2/client.(*GrpcClient).ShowPartitions(0xc00000e9f0, {0x1cfc7e0, 0xc00003a0c0}, {0xc001929d88, 0x16})
        /Users/zilliz/go/pkg/mod/github.com/wayblink/milvus-sdk-go/[email protected]/client/client_grpc_partition.go:116 +0xf1
github.com/zilliztech/milvus-backup/core.BackupContext.executeCreateBackup({{0x1cfc7e0, 0xc00003a0c0}, {0x0, 0x0}, 0x1, {{{0x1, {...}}, 0xc00014c060, {0xc0001523f0, 0x2e}, ...}, ...}, ...}, ...)
        /Users/zilliz/workspace/milvus-backup/core/backup_context.go:349 +0x1098
created by github.com/zilliztech/milvus-backup/core.BackupContext.CreateBackup
        /Users/zilliz/workspace/milvus-backup/core/backup_context.go:215 +0xf05

goroutine 17986 [select]:
net/http.(*persistConn).roundTrip(0xc0013d4ea0, 0xc0012272c0)
        /usr/local/go/src/net/http/transport.go:2620 +0x974
net/http.(*Transport).roundTrip(0xc000510000, 0xc0005c9300)
        /usr/local/go/src/net/http/transport.go:594 +0x7c9
net/http.(*Transport).RoundTrip(0x100ebe5?, 0x1cf2fe0?)
        /usr/local/go/src/net/http/roundtrip.go:17 +0x19
net/http.send(0xc0005c9300, {0x1cf2fe0, 0xc000510000}, {0x1b28780?, 0x1?, 0x0?})
        /usr/local/go/src/net/http/client.go:252 +0x5d8
net/http.(*Client).send(0xc00023a1e0, 0xc0005c9300, {0x0?, 0x0?, 0x0?})
        /usr/local/go/src/net/http/client.go:176 +0x9b
net/http.(*Client).do(0xc00023a1e0, 0xc0005c9300)
        /usr/local/go/src/net/http/client.go:725 +0x8f5
net/http.(*Client).Do(...)
        /usr/local/go/src/net/http/client.go:593
github.com/minio/minio-go/v7.(*Client).do(0xc000342f20, 0xc00014a608?)
        /Users/zilliz/go/pkg/mod/github.com/minio/minio-go/[email protected]/api.go:549 +0xb3
github.com/minio/minio-go/v7.(*Client).executeMethod(0xc000342f20, {0x1cfc7e0, 0xc00003a0c0}, {0x1b51795, 0x4}, {0x0, {0xc00014a608, 0x8}, {0xc001910660, 0x30}, ...})
        /Users/zilliz/go/pkg/mod/github.com/minio/minio-go/[email protected]/api.go:653 +0x655
github.com/minio/minio-go/v7.(*Client).StatObject(_, {_, _}, {_, _}, {_, _}, {0x0, {0x0, 0x0}, ...})
        /Users/zilliz/go/pkg/mod/github.com/minio/minio-go/[email protected]/api-stat.go:79 +0x3bc
github.com/minio/minio-go/v7.(*Client).GetObject.func1()
        /Users/zilliz/go/pkg/mod/github.com/minio/minio-go/[email protected]/api-get-object.go:143 +0x785
created by github.com/minio/minio-go/v7.(*Client).GetObject
        /Users/zilliz/go/pkg/mod/github.com/minio/minio-go/[email protected]/api-get-object.go:66 +0x385

Expected Behavior

No response

Steps To Reproduce

run the test cases with the command `pytest -s -v -n 4 --tags L0 L1`

Environment

No response

Anything else?

No response

[Bug]: Restore backup failed with error `failed to seal segment`

Current Behavior

[2023-02-16 10:08:32 - INFO - ci_test]: create backup response: {'requestId': 'ddec5cf8-ade1-11ed-8ed5-6045bdee0712', 'msg': 'success', 'data': {'id': 'ddeca40e-ade1-11ed-8ed5-6045bdee0712', 'state_code': 2, 'start_time': 1676542112129, 'end_time': 1676542112938, 'name': 'backup_CLPSXNL9', 'backup_timestamp': 1676542112129, 'collection_backups': [{'collection_name': 'restore_backup_LDLiOtfv', 'schema': {'name': 'restore_backup_LDLiOtfv', 'fields': [{'fieldID': 100, 'name': 'int64', 'is_primary_key': True, 'data_type': 5}, {'fieldID': 101, 'name': 'float', 'data_type': 10}, {'fieldID': 102, 'name': 'varchar', 'data_type': 21, 'type_params': [{'key': 'max_length', 'value': '65535'}]}, {'fieldID': 103, 'name': 'float_vector', 'data_type': 101, 'type_params': [{'key': 'dim', 'value': '128'}]}]}, 'backup_timestamp': 439495455408128, 'size': 1430626, 'has_index': False, 'load_state': 'NotLoad'}, {'collection_name': 'restore_backup_tpKUlg9M', 'schema': {'name': 'restore_backup_tpKUlg9M', 'fields': [{'fieldID': 100, 'name': 'int64', 'is_primary_key': True, 'data_type': 5}, {'fieldID': 101, 'name': 'float', 'data_type': 10}, {'fieldID': 102, 'name': 'varchar', 'data_type': 21, 'type_params': [{'key': 'max_length', 'value': '65535'}]}, {'fieldID': 103, 'name': 'float_vector', 'data_type': 101, 'type_params': [{'key': 'dim', 'value': '128'}]}]}, 'backup_timestamp': 439495455408128, 'size': 1430198, 'has_index': False, 'load_state': 'NotLoad'}, {'collection_name': 'restore_backup_et3qWmn0', 'schema': {'name': 'restore_backup_et3qWmn0', 'fields': [{'fieldID': 100, 'name': 'int64', 'is_primary_key': True, 'data_type': 5}, {'fieldID': 101, 'name': 'float', 'data_type': 10}, {'fieldID': 102, 'name': 'varchar', 'data_type': 21, 'type_params': [{'key': 'max_length', 'value': '65535'}]}, {'fieldID': 103, 'name': 'float_vector', 'data_type': 101, 'type_params': [{'key': 'dim', 'value': '128'}]}]}, 'backup_timestamp': 439495455408128, 'size': 1430344, 'has_index': False, 'load_state': 
'NotLoad'}], 'size': 4291168, 'milvus_version': 'fa8a9224'}} (test_restore_backup.py:50)
[2023-02-16 10:08:43 - INFO - ci_test]: restore_backup: {'requestId': 'de78da7b-ade1-11ed-8ed5-6045bdee0712', 'code': 3, 'msg': "bulk insert fail, info: failed to seal segment, shard id 1, segment id 439495386580269205, channel 'by-dev-rootcoord-dml_189_439495386580269191v1', error: All attempts results:\nattempt #1:failed to save import segment, reason = failed to get segment 439495386580269205\n", 'data': {'id': 'de7d15d0-ade1-11ed-8ed5-6045bdee0712', 'state_code': 1, 'start_time': 1676542113, 'collection_restore_tasks': [{'state_code': 2, 'start_time': 1676542113, 'target_collection_name': 'restore_backup_LDLiOtfv_bak', 'restored_size': 0, 'to_restore_size': 0, 'progress': 100}, {'state_code': 1, 'start_time': 1676542113, 'target_collection_name': 'restore_backup_tpKUlg9M_bak', 'restored_size': 0, 'to_restore_size': 0}, {'start_time': 1676542113, 'target_collection_name': 'restore_backup_et3qWmn0_bak', 'restored_size': 0, 'to_restore_size': 0}], 'restored_size': 0, 'to_restore_size': 0, 'progress': 33}} (test_restore_backup.py:65)
[2023-02-16 10:08:43 - INFO - ci_test]: restore ['restore_backup_LDLiOtfv', 'restore_backup_tpKUlg9M', 'restore_backup_et3qWmn0'] cost time: 10.243428468704224 (test_restore_backup.py:70)
[2023-02-16 10:08:43 - DEBUG - ci_test]: (api_request)  : [list_collections] args: [20, 'default'], kwargs: {} (api_request.py:56)
[2023-02-16 10:08:43 - DEBUG - ci_test]: (api_response) : ['restore_backup_HhheqyXU', 'restore_backup_Asfq48qq', 'create_backup_hXC908CY', 'delete_backup_tC3qcZ85', 'restore_backup_woEA08ug', 'restore_backup_aIpObdLo', 'restore_backup_9jbgnP9m_bak', 'restore_backup_4eIhkQuZ_bak', 'restore_backup_tpKUlg9M_bak', 'create_backup_lK2M585O', 'create_backup_Mv1TP......  (api_request.py:31)
------------- generated html file: file:///tmp/ci_logs/report.html -------------
=========================== short test summary info ============================
FAILED testcases/test_restore_backup.py::TestRestoreBackup::test_milvus_restore_back[float-3-False-3000] - AssertionError: assert ('restore_backup_et3qWmn0' + '_bak') in ['restore_backup_HhheqyXU', 'restore_backup_Asfq48qq', 'create_backup_hXC908CY', 'delete_backup_tC3qcZ85', 'restore_backup_woEA08ug', 'restore_backup_aIpObdLo', ...]
=================== 1 failed, 90 passed in 278.52s (0:04:38) ===================

Expected Behavior

all test cases passed

Steps To Reproduce

see https://github.com/zilliztech/milvus-backup/actions/runs/4192783963/jobs/7268821432

Environment

helm
standalone

Anything else?

It is an unstable issue because it succeeded after rerun
see https://github.com/zilliztech/milvus-backup/actions/runs/4192783963/jobs/7273552316

log:
logs-helm-standalone (2).zip

[Feature]:Milvus-backup server should support dynamic config

Is your feature request related to a problem? Please describe.

Once built, the backup.yaml config cannot be changed when running the milvus-backup server command. The built binaries do not support dynamic configuration, which is not user-friendly.

Describe the solution you'd like.

Add a config flag to the server command:

milvus-backup server -config
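For illustration only: the requested interface is an ordinary CLI flag. milvus-backup itself is written in Go, but the parsing pattern can be sketched in Python with argparse (the flag name and the default path `configs/backup.yaml` are assumptions here, not the tool's actual behavior):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Sketch of the proposed interface: let the server subcommand
    # accept a config file path instead of a compiled-in default.
    parser = argparse.ArgumentParser(prog="milvus-backup")
    sub = parser.add_subparsers(dest="command")
    server = sub.add_parser("server", help="run the backup HTTP server")
    server.add_argument("--config", default="configs/backup.yaml",
                        help="path to the backup config file")
    return parser

args = build_parser().parse_args(["server", "--config", "my_backup.yaml"])
print(args.command, args.config)
```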

Describe an alternate solution.

No response

Anything else? (Additional Context)

No response

[Enhancement]:Add a nightly pipeline to run all test cases

What would you like to be added?

Add a nightly pipeline to run all test cases

Why is this needed?

For the CI GitHub Action, only L0 and L1 test cases are covered.
We need to add a nightly pipeline to run all test cases

Anything else?

No response

Restoring a collection from data exported by the tool fails on insert with `DataTypeNotMatchException: (code=0, message=The data fields number is not match with schema.)` and the ID field becomes mandatory

Current Behavior

When restoring a collection from data exported by the tool, inserting the data fails with `DataTypeNotMatchException: (code=0, message=The data fields number is not match with schema.)`, and the ID field becomes mandatory.
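The "ID becomes mandatory" symptom follows from a dropped auto_id flag: with auto_id set, the primary key must be omitted on insert; once the flag is lost, it becomes a required column, so the supplied column count no longer matches the schema. A small sketch over the JSON schema shape the backup responses in this repo expose (the `auto_id` key is an assumption; it is absent when unset):

```python
def expected_insert_columns(schema: dict) -> list[str]:
    # With auto_id set on the primary key, that field must NOT be
    # supplied on insert; without it, every field is mandatory --
    # hence "the data fields number is not match with schema".
    return [f["name"] for f in schema.get("fields", [])
            if not (f.get("is_primary_key") and f.get("auto_id"))]
```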

Expected Behavior

No response

Steps To Reproduce

No response

Environment

No response

Anything else?

No response

[Bug]: Restore will fail if `backupBucketName` is different with `bucketName`

Current Behavior

When backupBucketName differs from bucketName in backup.yaml, the restored collection will be an empty collection.

Since restore relies on bulk insert, the backup files must live in the same bucket as Milvus, as bulk insert requires.
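A working setup therefore keeps the two bucket names identical. A minimal `backup.yaml` fragment for illustration (key names follow the shipped example config; values are placeholders):

```yaml
minio:
  address: localhost
  port: 9000
  bucketName: a-bucket        # bucket used by Milvus itself
  backupBucketName: a-bucket  # must match bucketName, since restore bulk-inserts from the same bucket
```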

Expected Behavior

No response

Steps To Reproduce

No response

Environment

No response

Anything else?

No response

[API] API `restore` does not restore the collection

res = client.restore_backup({"async": False, "backup_name": back_up_name, "collection_names": [name_origin],
                             "collection_suffix": "bak"})
  1. The response is as follows, which shows the call still executes asynchronously:
 {"requestId":"67477580-6ee4-11ed-beb0-acde48001122","msg":"restore backup is executing asynchronously","data":{"id":"67477c92-6ee4-11ed-beb0-acde48001122","start_time":1669616278}}
  2. The collection is not restored.
    The output after restore is:
list collection ['e2e_wGUoLniy', 'e2e_qiHLjFzt', 'e2e_Jss1Sydv', 'e2e_QQSph39l', 'e2e_5MQZd4iW', 'e2e_BuWmk8Gn', 'e2e_BWmQoVZu', 'e2e_vfpp0ahc', 'e2e_TbRKKYNg']
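Until `"async": False` blocks as expected, callers can poll the restore task themselves. A hedged Python sketch: the decision logic only inspects the response shape shown in the logs above, and `/api/v1/get_restore` is the endpoint name assumed from the project README:

```python
import time
# `requests` would be used for the actual HTTP calls; the decision
# logic below is pure so it can be exercised without a running server.

def restore_finished(resp: dict) -> bool:
    # state_code 2 means success in the responses shown above;
    # progress reaches 100 when every collection task completed.
    data = resp.get("data", {})
    return data.get("state_code") == 2 and data.get("progress") == 100

def wait_for_restore(fetch, restore_id: str, timeout_s: float = 60.0) -> dict:
    """Poll `fetch(restore_id)` (e.g. a GET to /api/v1/get_restore) until done."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        resp = fetch(restore_id)
        if restore_finished(resp):
            return resp
        time.sleep(1)
    raise TimeoutError(f"restore {restore_id} did not finish in {timeout_s}s")
```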

[Enhancement]: milvus-backup fails to connect to the bucket

What would you like to be added?

I want to test Milvus's high-availability features. I am currently running a standalone deployment, without a cluster enabled.
1. The bucket I created:
image
2. It reports that the connection failed:
image
3. This is my backup config:
image
4. These are the ports Milvus is listening on:
image
How should I configure this so that ./milvus-backup server runs?

Why is this needed?

No response

Anything else?

No response

[Feature]: Add `version` command for CLI

Is your feature request related to a problem? Please describe.

In most CLI tools, we can show the version by running xxx version (or xxx -v), like kubectl, helm, and so on:

 helm version
version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"dirty", GoVersion:"go1.16.5"}

When the tool reports its version, users can report bugs against a specific version more conveniently.

Describe the solution you'd like.

The most common way is to use -ldflags to inject the info into a variable when building with the Makefile.

refer:
https://web3.coach/golang-include-last-git-commit-in-your-go-program-version

Here is a backup tool repo for a nebula graph database: https://github.com/vesoft-inc/nebula-br
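The -ldflags injection described in the links above looks roughly like this; the target variable names (`main.Version`, `main.Commit`) are hypothetical, not existing symbols in this repo:

```make
# Hypothetical Makefile fragment: stamp the binary with git metadata.
VERSION := $(shell git describe --tags --always --dirty)
COMMIT  := $(shell git rev-parse --short HEAD)

build:
	go build -ldflags "-X 'main.Version=$(VERSION)' -X 'main.Commit=$(COMMIT)'" -o milvus-backup
```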

Describe an alternate solution.

No response

Anything else? (Additional Context)

No response

[Bug]: The response of `list backup` is `{'requestId': '26340f9f-8291-11ed-88ec-6045bdae15be', 'code': 3, 'msg': 'not found'}` after creating backup

Current Behavior

[2022-12-23 07:12:19 - INFO - ci_test]: create backup response: {'requestId': '232fe47b-8291-11ed-88ec-6045bdae15be', 'msg': 'create backup is executing asynchronously', 'data': {'id': '23300c0c-8291-11ed-88ec-6045bdae15be', 'name': 'backup_x8FhJ0AW'}} (test_create_backup.py:52)
[2022-12-23 07:12:24 - INFO - ci_test]: backup backup_x8FhJ0AW is ready in 5.049671649932861 seconds (milvus_backup.py:66)
[2022-12-23 07:12:24 - INFO - ci_test]: list backup response: {'requestId': '26340f9f-8291-11ed-88ec-6045bdae15be', 'code': 3, 'msg': 'not found'} (test_create_backup.py:57)

Expected Behavior

The response of list backup should return the right backups

Steps To Reproduce

No response

Environment

No response

Anything else?

[2022-12-23 07:11:45 - INFO - ci_test]: create backup response: {'requestId': '0ed37b7f-8291-11ed-bd61-002248a6f8c2', 'msg': 'success', 'data': {'id': '0ed39f5a-8291-11ed-bd61-002248a6f8c2', 'state_code': 2, 'name': 'backup_KQLLsowq', 'backup_timestamp': 1671779504897, 'collection_backups': [{'collection_name': 'create_backup_dZjB4oVF', 'backup_timestamp': 438246966296576}]}} (test_create_backup.py:52)
[2022-12-23 07:11:45 - INFO - ci_test]: list backup response: {'requestId': '0ee594b5-8291-11ed-bd61-002248a6f8c2', 'code': 3, 'msg': 'not found'} (test_create_backup.py:57)
------------- generated html file: file:///tmp/ci_logs/report.html -------------
=========================== short test summary info ============================
FAILED testcases/test_create_backup.py::TestCreateBackup::test_milvus_create_backup[float-1-False] - AssertionError: assert 'backup_KQLLsowq' in []

failed job: https://github.com/zilliztech/milvus-backup/actions/runs/3763895849/jobs/6397810361
log:
https://github.com/zilliztech/milvus-backup/suites/10025422439/artifacts/487361838

[Enhancement]: Allow specifying the address for the server to listen on.

What would you like to be added?

No response

Why is this needed?

For security reasons, I want the server to listen only on 127.0.0.1. Currently I can only specify the port to listen on, and the program automatically listens on :port, i.e. on all interfaces.

Anything else?

No response

[Bug]: Field autoID parameter is not backed up

Current Behavior

The field autoID parameter is not backed up.

An autoID collection will be restored as a non-autoID collection.

Expected Behavior

No response

Steps To Reproduce

No response

Environment

No response

Anything else?

No response

[Bug]: how does this work?

Current Behavior

Goal: I am trying to back up a local collection so I can then move it to my server.

Is this what this is for?

On the readme it says:

Backup data is stored in Minio or another object storage solution used by your Milvus instance.

does this mean it's stored in MinIO and not saved to disk?

however, further below it says:

/create
Creates a backup for the cluster. Data of selected collections will be copied to a backup directory. You can specify a group of collection names to backup, or if left empty (by default), it will backup all collections.

Either way, I tried to run it:
./milvus-backup create --colls anything_ViT_H_14_v001 -n anything_ViT_H_14_v001

here is the output:

config:backup.yaml
[2023/03/30 08:51:59.193 +01:00] [INFO] [logutil/logutil.go:165] ["Log directory"] [configDir=]
[2023/03/30 08:51:59.193 +01:00] [INFO] [logutil/logutil.go:166] ["Set log file to "] [path=logs/backup.log]
[2023/03/30 08:51:59.193 +01:00] [INFO] [core/backup_context.go:154] ["receive CreateBackupRequest"] [requestId=bfedcbc9-cecf-11ed-83fd-1e7e76fef458] [backupName=anything_ViT_H_14_v001] [collections="[anything_ViT_H_14_v001]"] [async=false]
[2023/03/30 08:51:59.198 +01:00] [INFO] [storage/minio_chunk_manager.go:112] ["minio chunk manager init success."] [bucketname=a-bucket] [root=files]
[2023/03/30 08:51:59.207 +01:00] [INFO] [core/backup_context.go:321] ["collections to backup"] [collections="[{\"ID\":440400067464532391,\"Name\":\"anything_ViT_H_14_v001\",\"Schema\":{\"CollectionName\":\"anything_ViT_H_14_v001\",\"Description\":\"anything_data\",\"AutoID\":false,\"Fields\":[{\"ID\":100,\"Name\":\"thing_id\",\"PrimaryKey\":true,\"AutoID\":true,\"Description\":\"\",\"DataType\":5,\"TypeParams\":{},\"IndexParams\":{}},{\"ID\":101,\"Name\":\"img_embedding\",\"PrimaryKey\":false,\"AutoID\":false,\"Description\":\"\",\"DataType\":101,\"TypeParams\":{\"dim\":\"1024\"},\"IndexParams\":{}},{\"ID\":102,\"Name\":\"original_id\",\"PrimaryKey\":false,\"AutoID\":false,\"Description\":\"\",\"DataType\":21,\"TypeParams\":{\"max_length\":\"255\"},\"IndexParams\":{}}]},\"PhysicalChannels\":[\"by-dev-rootcoord-dml_0\",\"by-dev-rootcoord-dml_1\",\"by-dev-rootcoord-dml_2\",\"by-dev-rootcoord-dml_3\",\"by-dev-rootcoord-dml_4\",\"by-dev-rootcoord-dml_5\",\"by-dev-rootcoord-dml_6\",\"by-dev-rootcoord-dml_7\",\"by-dev-rootcoord-dml_8\",\"by-dev-rootcoord-dml_9\"],\"VirtualChannels\":[\"by-dev-rootcoord-dml_0_440400067464532391v0\",\"by-dev-rootcoord-dml_1_440400067464532391v1\",\"by-dev-rootcoord-dml_2_440400067464532391v2\",\"by-dev-rootcoord-dml_3_440400067464532391v3\",\"by-dev-rootcoord-dml_4_440400067464532391v4\",\"by-dev-rootcoord-dml_5_440400067464532391v5\",\"by-dev-rootcoord-dml_6_440400067464532391v6\",\"by-dev-rootcoord-dml_7_440400067464532391v7\",\"by-dev-rootcoord-dml_8_440400067464532391v8\",\"by-dev-rootcoord-dml_9_440400067464532391v9\"],\"Loaded\":false,\"ConsistencyLevel\":2,\"ShardNum\":10}]"]
[2023/03/30 08:51:59.209 +01:00] [INFO] [core/backup_context.go:355] ["try to get index"] [collection_name=anything_ViT_H_14_v001]
[2023/03/30 08:51:59.219 +01:00] [INFO] [core/backup_context.go:375] ["field index"] [collection_name=anything_ViT_H_14_v001] [field_name=thing_id] ["index info"="[{}]"]
[2023/03/30 08:51:59.227 +01:00] [INFO] [core/backup_context.go:375] ["field index"] [collection_name=anything_ViT_H_14_v001] [field_name=img_embedding] ["index info"="[{}]"]
[2023/03/30 08:51:59.236 +01:00] [INFO] [core/backup_context.go:375] ["field index"] [collection_name=anything_ViT_H_14_v001] [field_name=original_id] ["index info"="[{}]"]
[2023/03/30 08:51:59.247 +01:00] [INFO] [core/backup_context.go:468] ["GetPersistentSegmentInfo before flush from milvus"] [collectionName=anything_ViT_H_14_v001] [segmentNumBeforeFlush=20]
[2023/03/30 08:51:59.254 +01:00] [INFO] [core/backup_context.go:477] ["flush segments"] [collectionName=anything_ViT_H_14_v001] [newSealedSegmentIDs="[]"] [flushedSegmentIDs="[440400067465306965,440400067465530753,440400067465306966,440400067465530910,440400067465530911,440400067465306957,440400067465306963,440400067465306967,440400067465531058,440400067465531059,440400067465531220,440400067465306958,440400067465306959,440400067465306960,440400067465306962,440400067465530642,440400067465531219,440400067465306961,440400067465530644,440400067465530754]"] [timeOfSeal=1680162719]
[2023/03/30 08:51:59.262 +01:00] [INFO] [core/backup_context.go:490] ["GetPersistentSegmentInfo after flush from milvus"] [collectionName=anything_ViT_H_14_v001] [segmentNumBeforeFlush=20] [segmentNumAfterFlush=20]
[2023/03/30 08:51:59.262 +01:00] [WARN] [core/backup_context.go:513] ["this may be old segments before flush, skip it"] [id=440400067465306960]
[2023/03/30 08:51:59.262 +01:00] [WARN] [core/backup_context.go:513] ["this may be old segments before flush, skip it"] [id=440400067465306962]
[2023/03/30 08:51:59.262 +01:00] [WARN] [core/backup_context.go:513] ["this may be old segments before flush, skip it"] [id=440400067465530642]
[2023/03/30 08:51:59.262 +01:00] [WARN] [core/backup_context.go:513] ["this may be old segments before flush, skip it"] [id=440400067465306958]
[2023/03/30 08:51:59.262 +01:00] [WARN] [core/backup_context.go:513] ["this may be old segments before flush, skip it"] [id=440400067465306959]
[2023/03/30 08:51:59.262 +01:00] [WARN] [core/backup_context.go:513] ["this may be old segments before flush, skip it"] [id=440400067465531219]
[2023/03/30 08:51:59.262 +01:00] [WARN] [core/backup_context.go:513] ["this may be old segments before flush, skip it"] [id=440400067465306961]
[2023/03/30 08:51:59.262 +01:00] [WARN] [core/backup_context.go:513] ["this may be old segments before flush, skip it"] [id=440400067465530644]
[2023/03/30 08:51:59.262 +01:00] [WARN] [core/backup_context.go:513] ["this may be old segments before flush, skip it"] [id=440400067465530754]
[2023/03/30 08:51:59.262 +01:00] [WARN] [core/backup_context.go:513] ["this may be old segments before flush, skip it"] [id=440400067465306965]
[2023/03/30 08:51:59.262 +01:00] [WARN] [core/backup_context.go:513] ["this may be old segments before flush, skip it"] [id=440400067465530753]
[2023/03/30 08:51:59.262 +01:00] [WARN] [core/backup_context.go:513] ["this may be old segments before flush, skip it"] [id=440400067465530911]
[2023/03/30 08:51:59.262 +01:00] [WARN] [core/backup_context.go:513] ["this may be old segments before flush, skip it"] [id=440400067465306966]
[2023/03/30 08:51:59.262 +01:00] [WARN] [core/backup_context.go:513] ["this may be old segments before flush, skip it"] [id=440400067465530910]
[2023/03/30 08:51:59.262 +01:00] [WARN] [core/backup_context.go:513] ["this may be old segments before flush, skip it"] [id=440400067465306957]
[2023/03/30 08:51:59.262 +01:00] [WARN] [core/backup_context.go:513] ["this may be old segments before flush, skip it"] [id=440400067465306963]
[2023/03/30 08:51:59.263 +01:00] [WARN] [core/backup_context.go:513] ["this may be old segments before flush, skip it"] [id=440400067465306967]
[2023/03/30 08:51:59.263 +01:00] [WARN] [core/backup_context.go:513] ["this may be old segments before flush, skip it"] [id=440400067465531058]
[2023/03/30 08:51:59.263 +01:00] [WARN] [core/backup_context.go:513] ["this may be old segments before flush, skip it"] [id=440400067465531059]
[2023/03/30 08:51:59.263 +01:00] [WARN] [core/backup_context.go:513] ["this may be old segments before flush, skip it"] [id=440400067465531220]
[2023/03/30 08:51:59.263 +01:00] [INFO] [core/backup_context.go:527] ["Finished fill segment"] [collectionName=anything_ViT_H_14_v001]
[2023/03/30 08:51:59.539 +01:00] [INFO] [core/backup_context.go:544] ["readSegmentInfo from storage"] [collectionName=anything_ViT_H_14_v001] [segmentNum=20]
[2023/03/30 08:51:59.539 +01:00] [INFO] [core/backup_context.go:576] ["finish build partition info"] [collectionName=anything_ViT_H_14_v001] [partitionNum=1]
[2023/03/30 08:51:59.539 +01:00] [INFO] [core/backup_context.go:580] ["Begin copy data"] [collectionName=anything_ViT_H_14_v001] [segmentNum=20]
[2023/03/30 08:51:59.539 +01:00] [INFO] [core/backup_context.go:610] ["partition size is smaller than MaxSegmentGroupSize, won't separate segments into groups in backup files"] [collectionId=440400067464532391] [partitionId=440400067464532392] [partitionSize=1115884214] [MaxSegmentGroupSize=2147483648]
[2023/03/30 08:52:00.945 +01:00] [INFO] [core/backup_context.go:776] ["finish executeCreateBackup"] [requestId=bfedcbc9-cecf-11ed-83fd-1e7e76fef458] [backupName=anything_ViT_H_14_v001] [collections="[anything_ViT_H_14_v001]"] [async=false] ["backup meta"="{\"id\":\"bfeec3c9-cecf-11ed-83fd-1e7e76fef458\",\"state_code\":2,\"start_time\":1680162719201,\"end_time\":1680162720866,\"name\":\"anything_ViT_H_14_v001\",\"backup_timestamp\":1680162719201,\"size\":0,\"milvus_version\":\"v2.2.4\"}"]
Success
 success
 

but then I don't know where the backup is.

(base) thomas@tm-9:~/code/milvus-backup$ ls -l
total 37368
-rw-r--r--  1 thomas thomas      286 Mar 29 14:03 Dockerfile
-rw-r--r--  1 thomas thomas       79 Mar 29 14:03 OWNERS
-rw-r--r--  1 thomas thomas     5829 Mar 29 14:03 README.md
drwxr-xr-x  3 thomas thomas     4096 Mar 29 14:03 build
-rwxr-xr-x  1 thomas thomas       44 Mar 29 14:03 build.sh
drwxr-xr-x  2 thomas thomas     4096 Mar 29 14:03 cmd
drwxr-xr-x  2 thomas thomas     4096 Mar 29 14:03 configs
drwxr-xr-x  6 thomas thomas     4096 Mar 29 14:03 core
drwxr-xr-x  4 thomas thomas     4096 Mar 29 14:03 deployment
drwxr-xr-x  2 thomas thomas     4096 Mar 29 14:03 docs
drwxr-xr-x  2 thomas thomas     4096 Mar 29 14:03 example
-rw-r--r--  1 thomas thomas     4162 Mar 29 14:03 go.mod
-rw-r--r--  1 thomas thomas    84684 Mar 29 14:03 go.sum
drwxr-xr-x  6 thomas thomas     4096 Mar 29 14:03 internal
drwxr--r--  2 thomas thomas     4096 Mar 29 14:05 logs
-rw-r--r--  1 thomas thomas      457 Mar 29 14:03 main.go
-rwxr-xr-x  1 thomas thomas 38087920 Mar 29 14:03 milvus-backup
-rwxr-xr-x  1 thomas thomas     1132 Mar 29 14:03 proto_gen_go.sh
drwxr-xr-x  2 thomas thomas     4096 Mar 29 14:03 scripts
drwxr-xr-x 10 thomas thomas     4096 Mar 29 14:03 tests
-rw-r--r--  1 thomas thomas       13 Mar 29 14:03 ut_test.go

Expected Behavior

No response

Steps To Reproduce

No response

Environment

No response

Anything else?

No response

[Bug]: Backup failed when the Milvus is deployed by helm

Current Behavior

There are two problems:

  1. the description of the collection is wrong; every field is null: "hello_milvus\",\"Schema\":null,\"PhysicalChannels\":null
  2. it reports that the segment has no insert binlog, but the segment actually does have one
❯ ./milvus-backup create -n my_backup
config:backup.yaml
[2022/12/14 12:09:41.656 +08:00] [INFO] [logutil/logutil.go:165] ["Log directory"] [configDir=]
[2022/12/14 12:09:41.660 +08:00] [INFO] [logutil/logutil.go:166] ["Set log file to "] [path=logs/backup.log]
[2022/12/14 12:09:41.660 +08:00] [INFO] [core/backup_context.go:150] ["receive CreateBackupRequest"] [requestId=225a3cae-7b65-11ed-babd-acde48001122] [backupName=my_backup] [collections="[]"] [async=false]
[2022/12/14 12:09:41.765 +08:00] [INFO] [storage/minio_chunk_manager.go:112] ["minio chunk manager init success."] [bucketname=a-bucket] [root=files]
[2022/12/14 12:09:41.767 +08:00] [INFO] [core/backup_context.go:564] ["receive GetBackupRequest"] [requestId=226a9ae0-7b65-11ed-babd-acde48001122] [backupName=my_backup] [backupId=]
[2022/12/14 12:09:41.786 +08:00] [WARN] [core/backup_context.go:1120] ["read backup meta file not exist, you may need to create it first"] [path=backup/my_backup/meta/backup_meta.json] []
[2022/12/14 12:09:41.797 +08:00] [INFO] [core/backup_context.go:175] ["backup not exist"] [backup_name=my_backup]
[2022/12/14 12:09:41.822 +08:00] [INFO] [core/backup_context.go:297] ["collections to backup"] [collections="[{\"ID\":438040127353389066,\"Name\":\"hello_milvus\",\"Schema\":null,\"PhysicalChannels\":null,\"VirtualChannels\":null,\"Loaded\":false,\"ConsistencyLevel\":0,\"ShardNum\":0},{\"ID\":438040127353589085,\"Name\":\"hello_milvus2\",\"Schema\":null,\"PhysicalChannels\":null,\"VirtualChannels\":null,\"Loaded\":false,\"ConsistencyLevel\":0,\"ShardNum\":0}]"]
[2022/12/14 12:09:41.904 +08:00] [INFO] [core/backup_context.go:361] ["flush segments"] [newSealedSegmentIDs="[]"] [flushedSegmentIDs="[438040127353589075]"] [timeOfSeal=1670990982]
[2022/12/14 12:09:41.970 +08:00] [WARN] [core/backup_context.go:407] ["this segment has no insert binlog"] [id=438040127353589075]
[2022/12/14 12:09:42.019 +08:00] [INFO] [core/backup_context.go:361] ["flush segments"] [newSealedSegmentIDs="[]"] [flushedSegmentIDs="[438040127353589094,438040127353589102]"] [timeOfSeal=1670990982]
[2022/12/14 12:09:42.097 +08:00] [WARN] [core/backup_context.go:407] ["this segment has no insert binlog"] [id=438040127353589102]
[2022/12/14 12:09:42.128 +08:00] [WARN] [core/backup_context.go:407] ["this segment has no insert binlog"] [id=438040127353589094]
[2022/12/14 12:09:42.212 +08:00] [INFO] [core/backup_context.go:551] ["finish executeCreateBackup"] [requestId=225a3cae-7b65-11ed-babd-acde48001122] [backupName=my_backup] [collections="[]"] [async=false] ["backup meta"="{\"id\":\"226f30fa-7b65-11ed-babd-acde48001122\",\"state_code\":2,\"start_time\":1670990981797,\"end_time\":1670990982128,\"name\":\"my_backup\",\"backup_timestamp\":1670990981797}"]
Success
 success

Expected Behavior

(screenshot)

Steps To Reproduce

No response

Environment

No response

Anything else?

No response

[Bug]: cluster backup and restore collection is empty

Current Behavior

Hello, I am testing the backup and restore tasks, and found that the restored collection is empty. Please help confirm the cause.

Expected Behavior

No response

Steps To Reproduce

[root@db-test milvus-backup-master]# ./milvus-backup create -n my_backup --colls car
config:backup.yaml
[2022/12/28 17:56:10.922 +08:00] [INFO] [logutil/logutil.go:165] ["Log directory"] [configDir=]
[2022/12/28 17:56:10.922 +08:00] [INFO] [logutil/logutil.go:166] ["Set log file to "] [path=logs/backup.log]
[2022/12/28 17:56:10.922 +08:00] [INFO] [core/backup_context.go:151] ["receive CreateBackupRequest"] [requestId=db8054c8-8695-11ed-a1b2-005056aaf4da] [backupName=my_backup] [collections="[car]"] [async=false]
[2022/12/28 17:56:10.926 +08:00] [INFO] [storage/minio_chunk_manager.go:112] ["minio chunk manager init success."] [bucketname=a-bucket] [root=files]
[2022/12/28 17:56:10.926 +08:00] [INFO] [core/backup_context.go:565] ["receive GetBackupRequest"] [requestId=db80f8a0-8695-11ed-a1b2-005056aaf4da] [backupName=my_backup] [backupId=]
[2022/12/28 17:56:10.927 +08:00] [WARN] [core/backup_context.go:1128] ["read backup meta file not exist, you may need to create it first"] [path=backup/my_backup/meta/backup_meta.json] []
[2022/12/28 17:56:10.927 +08:00] [INFO] [core/backup_context.go:176] ["backup not exist"] [backup_name=my_backup]
[2022/12/28 17:56:10.930 +08:00] [INFO] [core/backup_context.go:298] ["collections to backup"] [collections="[{\"ID\":438362781912399873,\"Name\":\"car\",\"Schema\":{\"CollectionName\":\"car\",\"Description\":\"'car_collection'\",\"AutoID\":false,\"Fields\":[{\"ID\":100,\"Name\":\"id\",\"PrimaryKey\":true,\"AutoID\":true,\"Description\":\"primary_field\",\"DataType\":5,\"TypeParams\":{},\"IndexParams\":{}},{\"ID\":101,\"Name\":\"vector\",\"PrimaryKey\":false,\"AutoID\":false,\"Description\":\"\",\"DataType\":101,\"TypeParams\":{\"dim\":\"128\"},\"IndexParams\":{}},{\"ID\":102,\"Name\":\"color\",\"PrimaryKey\":false,\"AutoID\":false,\"Description\":\"color\",\"DataType\":5,\"TypeParams\":{},\"IndexParams\":{}},{\"ID\":103,\"Name\":\"brand\",\"PrimaryKey\":false,\"AutoID\":false,\"Description\":\"brand\",\"DataType\":5,\"TypeParams\":{},\"IndexParams\":{}}]},\"PhysicalChannels\":[\"by-dev-rootcoord-dml_28\",\"by-dev-rootcoord-dml_29\"],\"VirtualChannels\":[\"by-dev-rootcoord-dml_28_438362781912399873v0\",\"by-dev-rootcoord-dml_29_438362781912399873v1\"],\"Loaded\":false,\"ConsistencyLevel\":2,\"ShardNum\":2}]"]
[2022/12/28 17:56:10.935 +08:00] [INFO] [core/backup_context.go:362] ["flush segments"] [newSealedSegmentIDs="[]"] [flushedSegmentIDs="[]"] [timeOfSeal=0]
[2022/12/28 17:56:10.937 +08:00] [WARN] [core/backup_context.go:383] ["this may be new segments after flush, skip it"] [id=438362788596809729]
[2022/12/28 17:56:10.937 +08:00] [WARN] [core/backup_context.go:383] ["this may be new segments after flush, skip it"] [id=438362788596809730]
[2022/12/28 17:56:10.945 +08:00] [INFO] [core/backup_context.go:552] ["finish executeCreateBackup"] [requestId=db8054c8-8695-11ed-a1b2-005056aaf4da] [backupName=my_backup] [collections="[car]"] [async=false] ["backup meta"="{\"id\":\"db811569-8695-11ed-a1b2-005056aaf4da\",\"state_code\":2,\"start_time\":1672221370927,\"end_time\":1672221370937,\"name\":\"my_backup\",\"backup_timestamp\":1672221370927}"]
Success 
 success


[root@db-test milvus-backup-master]# ./milvus-backup restore -n my_backup -s _recover
config:backup.yaml
[2022/12/28 17:56:36.681 +08:00] [INFO] [logutil/logutil.go:165] ["Log directory"] [configDir=]
[2022/12/28 17:56:36.681 +08:00] [INFO] [logutil/logutil.go:166] ["Set log file to "] [path=logs/backup.log]
[2022/12/28 17:56:36.681 +08:00] [INFO] [core/backup_context.go:771] ["receive RestoreBackupRequest"] [requestId=eadae5f0-8695-11ed-8e5b-005056aaf4da] [backupName=my_backup] [collections="[]"] [CollectionSuffix=_recover] [CollectionRenames={}] [async=false]
[2022/12/28 17:56:36.684 +08:00] [INFO] [storage/minio_chunk_manager.go:112] ["minio chunk manager init success."] [bucketname=a-bucket] [root=files]
[2022/12/28 17:56:36.685 +08:00] [INFO] [core/backup_context.go:565] ["receive GetBackupRequest"] [requestId=eadb7453-8695-11ed-8e5b-005056aaf4da] [backupName=my_backup] [backupId=]
[2022/12/28 17:56:36.691 +08:00] [INFO] [core/backup_context.go:847] ["Collections to restore"] [collection_num=1]
[2022/12/28 17:56:36.694 +08:00] [INFO] [core/backup_context.go:970] ["start restore"] [collection_name=car_recover]
[2022/12/28 17:56:36.716 +08:00] [INFO] [core/backup_context.go:944] ["end restore"] [collection_name=car_recover]
Success 
 success


milvus_cli > describe collection -c car
+---------------+----------------------------------+
| Name          | car                              |
+---------------+----------------------------------+
| Description   | 'car_collection'                 |
+---------------+----------------------------------+
| Is Empty      | False                            |
+---------------+----------------------------------+
| Entities      | 50                               |
+---------------+----------------------------------+
| Primary Field | id                               |
+---------------+----------------------------------+
| Schema        | Description: 'car_collection'    |
|               |                                  |
|               | Auto ID: True                    |
|               |                                  |
|               | Fields(* is the primary field):  |
|               |  - *id INT64  primary_field      |
|               |  - vector FLOAT_VECTOR dim: 128  |
|               |  - color INT64  color            |
|               |  - brand INT64  brand            |
+---------------+----------------------------------+
| Partitions    | - _default                       |
+---------------+----------------------------------+
| Indexes       | -                                |
+---------------+----------------------------------+
milvus_cli > describe collection -c  car_recover
+---------------+----------------------------------+
| Name          | car_recover                      |
+---------------+----------------------------------+
| Description   | 'car_collection'                 |
+---------------+----------------------------------+
| Is Empty      | True                             |
+---------------+----------------------------------+
| Entities      | 0                                |
+---------------+----------------------------------+
| Primary Field | id                               |
+---------------+----------------------------------+
| Schema        | Description: 'car_collection'    |
|               |                                  |
|               | Auto ID: False                   |
|               |                                  |
|               | Fields(* is the primary field):  |
|               |  - *id INT64  primary_field      |
|               |  - vector FLOAT_VECTOR dim: 128  |
|               |  - color INT64  color            |
|               |  - brand INT64  brand            |
+---------------+----------------------------------+
| Partitions    | - _default                       |
+---------------+----------------------------------+
| Indexes       | -                                |
+---------------+----------------------------------+

Environment

[root@db-test milvus-backup-master]# cat configs/backup.yaml 
# Configures the system log output.
log:
  level: info # Only supports debug, info, warn, error, panic, or fatal. Default 'info'.
  console: true
  file:
    rootPath: "logs/backup.log"

http:
  simpleResponse: true

# milvus proxy address, compatible to milvus.yaml
milvus:
  address: localhost
  port: 19530
  authorizationEnabled: false
  # tls mode values [0, 1, 2]
  # 0 is close, 1 is one-way authentication, 2 is two-way authentication.
  tlsMode: 0
  user: "root"
  password: "Milvus"

# Related configuration of minio, which is responsible for data persistence for Milvus.
minio:
  address: localhost # Address of MinIO/S3
  port: 9000   # Port of MinIO/S3
  accessKeyID: minioadmin # accessKeyID of MinIO/S3
  secretAccessKey: minioadmin # MinIO/S3 encryption string
  useSSL: false # Access to MinIO/S3 with SSL
  bucketName: "a-bucket" # Bucket name in MinIO/S3
  rootPath: files # The root path where the message is stored in MinIO/S3
  useIAM: false
  cloudProvider: "aws"
  iamEndpoint: ""

  backupBucketName: "a-bucket"
  backupRootPath: "backup"

Anything else?

No response

[Feature]:Release binary using GoReleaser

Is your feature request related to a problem? Please describe.

As a user, I just want a binary that I can execute. I do not want to build it from source code from scratch.

Describe the solution you'd like.

https://goreleaser.com/
https://github.com/goreleaser/goreleaser

Release Go projects as fast and easily as possible!
With GoReleaser, you can:

Cross-compile your Go project
Release to GitHub, GitLab and Gitea
Create nightly builds
Create Docker images and manifests
Create Linux packages and Homebrew taps
Sign artifacts, checksums and container images
Announce new releases on Twitter, Slack, Discord and others
Generate SBOMs (Software Bill of Materials) for binaries and container images
... and much more!

Describe an alternate solution.

No response

Anything else? (Additional Context)

No response

[API] `restore` API does not report an error when the collection to restore already exists

The API does not report the error, but the CLI does.

❯ ./milvus-backup restore -n test_api
config:backup.yaml
[2022/11/23 19:23:22.775 +08:00] [INFO] [logutil/logutil.go:165] ["Log directory"] [configDir=]
[2022/11/23 19:23:22.775 +08:00] [INFO] [logutil/logutil.go:166] ["Set log file to "] [path=logs/backup.log]
[2022/11/23 19:23:22.775 +08:00] [DEBUG] [core/backup_context.go:60] ["Start Milvus client"] [endpoint=localhost:19530]
[2022/11/23 19:23:22.786 +08:00] [DEBUG] [core/backup_context.go:84] ["Start minio client"] [address=localhost:9000] [bucket=a-bucket] [backupBucket=a-bucket]
[2022/11/23 19:23:22.800 +08:00] [INFO] [storage/minio_chunk_manager.go:114] ["minio chunk manager init success."] [bucketname=a-bucket] [root=files]
[2022/11/23 19:23:22.882 +08:00] [INFO] [core/backup_context.go:796] ["Collections to restore"] [collection_num=1]
[2022/11/23 19:23:22.892 +08:00] [ERROR] [core/backup_context.go:819] ["The collection to restore already exists, backupCollectName: e2e__xPJK5zXX, targetCollectionName: e2e__xPJK5zXX"] [stack="github.com/zilliztech/milvus-backup/core.BackupContext.executeRestoreBackupTask\n\t/Users/zilliz/workspace/milvus-backup/core/backup_context.go:819\ngithub.com/zilliztech/milvus-backup/core.BackupContext.RestoreBackup\n\t/Users/zilliz/workspace/milvus-backup/core/backup_context.go:755\ngithub.com/zilliztech/milvus-backup/cmd.glob..func5\n\t/Users/zilliz/workspace/milvus-backup/cmd/restore.go:51\ngithub.com/spf13/cobra.(*Command).execute\n\t/Users/zilliz/go/pkg/mod/github.com/spf13/[email protected]/command.go:876\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/Users/zilliz/go/pkg/mod/github.com/spf13/[email protected]/command.go:990\ngithub.com/spf13/cobra.(*Command).Execute\n\t/Users/zilliz/go/pkg/mod/github.com/spf13/[email protected]/command.go:918\ngithub.com/zilliztech/milvus-backup/cmd.Execute\n\t/Users/zilliz/workspace/milvus-backup/cmd/root.go:24\nmain.main\n\t/Users/zilliz/workspace/milvus-backup/main.go:8\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250"]
status_code:Success reason:"The collection to restore already exists, backupCollectName: e2e__xPJK5zXX, targetCollectionName: e2e__xPJK5zXX"

API http://localhost:8080/api/v1/restore
response:

{
    "status": {
        "status_code": 1
    },
    "task": {
        "id": 435659124601001000,
        "start_time": 1669202673,
        "backup_info": {
            "id": 435656253046264872,
            "state_code": 2,
            "start_time": 1669200962,
            "name": "test_api",
            "backup_timestamp": 1669200962,
            "collection_backups": [
                {
                    "id": 435656253096596520,
                    "start_time": 1669200962,
                    "collection_id": 437569868359994919,
                    "collection_name": "e2e__xPJK5zXX",
                    "schema": {
                        "name": "e2e__xPJK5zXX",
                        "fields": [
                            {
                                "fieldID": 100,
                                "name": "int64",
                                "is_primary_key": true,
                                "data_type": 5
                            },
                            {
                                "fieldID": 101,
                                "name": "float",
                                "data_type": 10
                            },
                            {
                                "fieldID": 102,
                                "name": "varchar",
                                "data_type": 21,
                                "type_params": [
                                    {
                                        "key": "max_length",
                                        "value": "65535"
                                    }
                                ]
                            },
                            {
                                "fieldID": 103,
                                "name": "float_vector",
                                "data_type": 101,
                                "type_params": [
                                    {
                                        "key": "dim",
                                        "value": "128"
                                    }
                                ]
                            }
                        ]
                    },
                    "shards_num": 2,
                    "partition_backups": [
                        {
                            "partition_id": 437569868359994920,
                            "partition_name": "_default",
                            "collection_id": 437569868359994919,
                            "segment_backups": [
                                {
                                    "segment_id": 437569868359994968,
                                    "collection_id": 437569868359994919,
                                    "partition_id": 437569868359994920,
                                    "num_of_rows": 1497,
                                    "binlogs": [
                                        {
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994968/0/437569868359994982"
                                                }
                                            ]
                                        },
                                        {
                                            "fieldID": 1,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994968/1/437569868359994983"
                                                }
                                            ]
                                        },
                                        {
                                            "fieldID": 100,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994968/100/437569868359994978"
                                                }
                                            ]
                                        },
                                        {
                                            "fieldID": 101,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994968/101/437569868359994979"
                                                }
                                            ]
                                        },
                                        {
                                            "fieldID": 102,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994968/102/437569868359994980"
                                                }
                                            ]
                                        },
                                        {
                                            "fieldID": 103,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994968/103/437569868359994981"
                                                }
                                            ]
                                        }
                                    ],
                                    "statslogs": [
                                        {
                                            "fieldID": 100,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/stats_log/437569868359994919/437569868359994920/437569868359994968/100/437569868359994984"
                                                }
                                            ]
                                        }
                                    ],
                                    "deltalogs": [
                                        {}
                                    ]
                                },
                                {
                                    "segment_id": 437569868359994928,
                                    "collection_id": 437569868359994919,
                                    "partition_id": 437569868359994920,
                                    "num_of_rows": 1503,
                                    "binlogs": [
                                        {
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994928/0/437569868359994941"
                                                }
                                            ]
                                        },
                                        {
                                            "fieldID": 1,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994928/1/437569868359994942"
                                                }
                                            ]
                                        },
                                        {
                                            "fieldID": 100,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994928/100/437569868359994937"
                                                }
                                            ]
                                        },
                                        {
                                            "fieldID": 101,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994928/101/437569868359994938"
                                                }
                                            ]
                                        },
                                        {
                                            "fieldID": 102,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994928/102/437569868359994939"
                                                }
                                            ]
                                        },
                                        {
                                            "fieldID": 103,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994928/103/437569868359994940"
                                                }
                                            ]
                                        }
                                    ],
                                    "statslogs": [
                                        {
                                            "fieldID": 100,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/stats_log/437569868359994919/437569868359994920/437569868359994928/100/437569868359994943"
                                                }
                                            ]
                                        }
                                    ],
                                    "deltalogs": [
                                        {}
                                    ]
                                },
                                {
                                    "segment_id": 437569868359994929,
                                    "collection_id": 437569868359994919,
                                    "partition_id": 437569868359994920,
                                    "num_of_rows": 1497,
                                    "binlogs": [
                                        {
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994929/0/437569868359994934"
                                                }
                                            ]
                                        },
                                        {
                                            "fieldID": 1,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994929/1/437569868359994935"
                                                }
                                            ]
                                        },
                                        {
                                            "fieldID": 100,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994929/100/437569868359994930"
                                                }
                                            ]
                                        },
                                        {
                                            "fieldID": 101,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994929/101/437569868359994931"
                                                }
                                            ]
                                        },
                                        {
                                            "fieldID": 102,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994929/102/437569868359994932"
                                                }
                                            ]
                                        },
                                        {
                                            "fieldID": 103,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994929/103/437569868359994933"
                                                }
                                            ]
                                        }
                                    ],
                                    "statslogs": [
                                        {
                                            "fieldID": 100,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/stats_log/437569868359994919/437569868359994920/437569868359994929/100/437569868359994936"
                                                }
                                            ]
                                        }
                                    ],
                                    "deltalogs": [
                                        {}
                                    ]
                                },
                                {
                                    "segment_id": 437569868359994967,
                                    "collection_id": 437569868359994919,
                                    "partition_id": 437569868359994920,
                                    "num_of_rows": 1503,
                                    "binlogs": [
                                        {
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994967/0/437569868359994975"
                                                }
                                            ]
                                        },
                                        {
                                            "fieldID": 1,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994967/1/437569868359994976"
                                                }
                                            ]
                                        },
                                        {
                                            "fieldID": 100,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994967/100/437569868359994971"
                                                }
                                            ]
                                        },
                                        {
                                            "fieldID": 101,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994967/101/437569868359994972"
                                                }
                                            ]
                                        },
                                        {
                                            "fieldID": 102,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994967/102/437569868359994973"
                                                }
                                            ]
                                        },
                                        {
                                            "fieldID": 103,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/insert_log/437569868359994919/437569868359994920/437569868359994967/103/437569868359994974"
                                                }
                                            ]
                                        }
                                    ],
                                    "statslogs": [
                                        {
                                            "fieldID": 100,
                                            "binlogs": [
                                                {
                                                    "log_path": "files/stats_log/437569868359994919/437569868359994920/437569868359994967/100/437569868359994977"
                                                }
                                            ]
                                        }
                                    ],
                                    "deltalogs": [
                                        {}
                                    ]
                                }
                            ]
                        }
                    ],
                    "backup_timestamp": 437571016982528
                }
            ]
        }
    }
}

[Feature]: Add `clean` command for backup tools

Is your feature request related to a problem? Please describe.

For now, when a backup or restore fails, the leftover artifacts have to be cleaned up manually, so it would be better to offer a command or API to clean them up.

Describe the solution you'd like.

For backups, we can use the list_backup API to find the failed backups and then use the delete API to remove them.

But for restores there is no list_restore API, so it is hard to find the failed restore tasks.

Describe an alternate solution.

No response

Anything else? (Additional Context)

No response

[Bug]: milvus-backup fails to connect to the bucket

What would you like to be added?

I want to test Milvus's high-availability features. For now I am working with a standalone deployment; clustering is not enabled.
1. The bucket I created
image
2. It tells me the connection failed
image
3. This is my backup configuration
image
4. These are the ports my Milvus is listening on
image
How should I configure things so that I can run: ./milvus-backup server
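For reference, a minimal sketch of the relevant sections of backup.yaml for a local standalone deployment with default MinIO credentials. The field names follow a typical milvus-backup config and may differ between versions, so treat this as an assumption to be checked against the config file shipped with your release:

```yaml
# Sketch only -- adjust addresses, ports, and credentials to your environment.
milvus:
  address: localhost
  port: 19530
minio:
  address: localhost          # MinIO/S3 endpoint, without scheme
  port: 9000
  accessKeyID: minioadmin     # default MinIO credentials; replace in production
  secretAccessKey: minioadmin
  useSSL: false
  bucketName: "a-bucket"      # bucket that Milvus itself writes to
  rootPath: "files"           # root prefix used by Milvus inside the bucket
  backupBucketName: "a-bucket" # bucket where backups are stored
  backupRootPath: "backup"     # prefix for backup data
```

The bucket and rootPath values must match what the Milvus instance itself is configured with, otherwise the tool cannot find the segment files to copy.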

Why is this needed?

No response

Anything else?

No response

[Bug]: Milvus-backup only restored the collection meta instead of index data

Current Behavior

git clone https://github.com/zilliztech/milvus-backup.git

go get

go build

Change config/backup.yaml to use my own S3 information.

./milvus-backup create -n my_backup

./milvus-backup restore -n my_backup -s _recover

Expected Behavior

I should see a new collection named my_collection_recover with the same stats as my_collection, yet the new collection only has the metadata and is missing the index data.


Steps To Reproduce

See above

Environment

Ubuntu 20.04 LTS
4 core 32 G ram
Docker Compose 
Milvus-Standalone 2.2.2

Anything else?

Logs
backup.log

[Bug]: Backup fails

Current Behavior

The segments we need cannot be found through GetPersistentSegmentInfo.

Some segment IDs returned by Flush do not exist in the GetPersistentSegmentInfo response.

This may be caused by some segments being compacted during the flush.

[2023/02/21 09:05:45.339 +00:00] [DEBUG] [core/backup_context.go:244] ["call refreshBackupMetaFunc"] [id=e59f38bd-b1c6-11ed-8864-a242590ec3a2]
[2023/02/21 09:05:47.000 +00:00] [INFO] [core/backup_context.go:438] ["flush segments"] [collectionName=book1] [newSealedSegmentIDs="[439607656426704980,439607656426704981]"] [flushedSegmentIDs="[439607656426704693,439607656426704935,439607656426704754,439607656426704934,439607656426704685,439607656426704817,439607656426704512,439607656426704634,439607656426704897,439607656426704467,439607656426704818,439607656426704633,439607656426704694,439607656426704755,439607656426704449,439607656426704650,439607656426704896,439607656426704561]"] [timeOfSeal=1676970345]
[2023/02/21 09:05:47.005 +00:00] [INFO] [core/backup_context.go:450] ["GetPersistentSegmentInfo from milvus"] [collectionName=book1] [segmentNum=14]
[2023/02/21 09:05:47.005 +00:00] [WARN] [core/backup_context.go:463] ["this may be new segments after flush, skip it"] [id=439607656426704852]
[2023/02/21 09:05:47.005 +00:00] [WARN] [core/backup_context.go:463] ["this may be new segments after flush, skip it"] [id=439607656426704826]
[2023/02/21 09:05:47.005 +00:00] [WARN] [core/backup_context.go:468] ["Segment return in Flush not exist in GetPersistentSegmentInfo. segment ids: [439607656426704693 439607656426704754 439607656426704633 439607656426704467 439607656426704561 439607656426704512 439607656426704449 439607656426704685]"]
[2023/02/21 09:05:27.524 +00:00] [INFO] [datacoord/meta.go:258] ["meta update: adding segment"] ["segment ID"=439607656426704693]
[2023/02/21 09:05:27.525 +00:00] [INFO] [datacoord/meta.go:270] ["meta update: adding segment - complete"] ["segment ID"=439607656426704693]
[2023/02/21 09:05:27.525 +00:00] [INFO] [datacoord/segment_manager.go:377] ["datacoord: estimateTotalRows: "] [CollectionID=439607656426504209] [SegmentID=439607656426704693] [Rows=986895] [Channel=in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0]
[2023/02/21 09:05:27.525 +00:00] [INFO] [datacoord/meta.go:851] ["meta update: add allocation"] [segmentID=439607656426704693] [allocation="SegmentID: 439607656426704693, NumOfRows: 8, ExpireTime: 2023-02-21 09:05:29.499 +0000 UTC"]
[2023/02/21 09:05:27.526 +00:00] [INFO] [datacoord/meta.go:875] ["meta update: add allocation - complete"] [segmentID=439607656426704693]
[2023/02/21 09:05:27.526 +00:00] [INFO] [datacoord/services.go:188] ["success to assign segments"] [collectionID=439607656426504209] [assignments="[{\"SegmentID\":439607656426704693,\"NumOfRows\":8,\"ExpireTime\":439607710056185858}]"]
[2023/02/21 09:05:27.672 +00:00] [INFO] [querynode/flow_graph_insert_node.go:147] ["Add growing segment"] [collectionID=439607656426504209] [segmentID=439607656426704693] [startPosition=439607709531897857]
[2023/02/21 09:05:27.673 +00:00] [INFO] [querynode/segment.go:186] ["create segment"] [collectionID=439607656426504209] [partitionID=439607656426504210] [segmentID=439607656426704693] [segmentType=Growing]
[2023/02/21 09:05:27.673 +00:00] [INFO] [querynode/partition.go:74] ["add a segment to replica"] [collectionID=439607656426504209] [partitionID=439607656426504210] [segmentID=439607656426704693] [segmentType=Growing]
[2023/02/21 09:05:27.673 +00:00] [INFO] [querynode/meta_replica.go:614] ["new segment added to collection replica"] ["query node ID"=18] ["collection ID"=439607656426504209] ["partition ID"=439607656426504210] ["segment ID"=439607656426704693] ["segment type"=Growing] ["row count"=0] ["segment indexed fields"=0]
[2023/02/21 09:05:27.677 +00:00] [INFO] [datanode/channel_meta.go:188] ["adding segment"] [type=New] [segmentID=439607656426704693] [collectionID=439607656426504209] [partitionID=439607656426504210] [channel=in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0] [startPosition="channel_name:\"in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\" msgID:\"\\306\\000\\000\\000\\000\\000\\000\\000\" msgGroup:\"in01-e0a52d06b33e103-dataNode-24-in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\" timestamp:439607709492314113 "] [endPosition="channel_name:\"in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\" msgID:\"\\310\\000\\000\\000\\000\\000\\000\\000\" msgGroup:\"in01-e0a52d06b33e103-dataNode-24-in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\" timestamp:439607709545005057 "] [recoverTs=0] [importing=false]
[2023/02/21 09:05:27.677 +00:00] [INFO] [datanode/channel_meta.go:285] ["begin to init pk bloom filter"] [segmentID=439607656426704693] ["stats bin logs"=0]
[2023/02/21 09:05:27.677 +00:00] [WARN] [datanode/channel_meta.go:314] ["no stats files to load"] [segmentID=439607656426704693]
[2023/02/21 09:05:27.677 +00:00] [INFO] [datanode/channel_meta.go:461] ["updating segment"] ["Segment ID"=439607656426704693] [numRows=8]
[2023/02/21 09:05:27.678 +00:00] [INFO] [datanode/flow_graph_insert_buffer_node.go:287] ["segment buffer status"] [no.=0] [segmentID=439607656426704693] [channel=in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0] [size=8] [limit=30841]
[2023/02/21 09:05:27.682 +00:00] [INFO] [datacoord/server.go:625] ["Updating segment number of rows"] ["segment ID"=439607656426704693] ["old value"=0] ["new value"=8]
[2023/02/21 09:05:27.685 +00:00] [INFO] [datacoord/meta.go:330] ["meta update: setting segment state"] ["segment ID"=439607656426704693] ["target state"=Sealed]
[2023/02/21 09:05:27.686 +00:00] [INFO] [datacoord/meta.go:366] ["meta update: setting segment state - complete"] ["segment ID"=439607656426704693] ["target state"=Sealed]
[2023/02/21 09:05:27.687 +00:00] [INFO] [datacoord/services.go:121] ["flush response with segments"] [collectionID=439607656426504209] [sealSegments="[439607656426704693,439607656426704694]"] [flushSegments="[439607656426704633,439607656426704449,439607656426704560,439607656426704511,439607656426704561,439607656426704326,439607656426704450,439607656426704634,439607656426704512,439607656426704416,439607656426704378,439607656426704507,439607656426704467]"] [timeOfSeal=1676970327]
[2023/02/21 09:05:27.847 +00:00] [INFO] [proxy/impl.go:3995] ["received get flush state request"] [request="segmentIDs:439607656426704693 segmentIDs:439607656426704694 "]
[2023/02/21 09:05:27.847 +00:00] [INFO] [datacoord/services.go:1125] ["DataCoord receive GetFlushState request, Flushed is false"] [segmentIDs="[439607656426704693,439607656426704694]"] [len=2]
[2023/02/21 09:05:28.548 +00:00] [INFO] [proxy/impl.go:3995] ["received get flush state request"] [request="segmentIDs:439607656426704693 segmentIDs:439607656426704694 "]
[2023/02/21 09:05:28.548 +00:00] [INFO] [datacoord/services.go:1125] ["DataCoord receive GetFlushState request, Flushed is false"] [segmentIDs="[439607656426704693,439607656426704694]"] [len=2]
[2023/02/21 09:05:29.111 +00:00] [INFO] [querynode/search.go:100] ["search growing/sealed segments without indexes"] [traceID=2ffbbda788f0b538] [segmentIDs="[439607656426704693,439607656426704511,439607656426704560,439607656426704633]"]
[2023/02/21 09:05:29.206 +00:00] [INFO] [proxy/impl.go:3995] ["received get flush state request"] [request="segmentIDs:439607656426704693 segmentIDs:439607656426704694 "]
[2023/02/21 09:05:29.207 +00:00] [INFO] [datacoord/services.go:1125] ["DataCoord receive GetFlushState request, Flushed is false"] [segmentIDs="[439607656426704693,439607656426704694]"] [len=2]
[2023/02/21 09:05:29.881 +00:00] [INFO] [datacoord/server.go:601] ["start flushing segments"] ["segment IDs"="[439607656426704693]"]
[2023/02/21 09:05:29.882 +00:00] [INFO] [datanode/data_node.go:606] ["receiving FlushSegments request"] ["collection ID"=439607656426504209] [segments="[439607656426704693]"]
[2023/02/21 09:05:29.882 +00:00] [INFO] [datanode/data_node.go:649] ["flow graph flushSegment tasks triggered"] [flushed=true] ["collection ID"=439607656426504209] ["segments sending to flush channel"="[439607656426704693]"]
[2023/02/21 09:05:29.882 +00:00] [INFO] [datanode/data_node.go:659] ["sending segments to flush channel"] ["newly sealed segment IDs"="[439607656426704693]"]
[2023/02/21 09:05:29.885 +00:00] [INFO] [proxy/impl.go:3995] ["received get flush state request"] [request="segmentIDs:439607656426704693 segmentIDs:439607656426704694 "]
[2023/02/21 09:05:29.885 +00:00] [INFO] [datacoord/services.go:1125] ["DataCoord receive GetFlushState request, Flushed is false"] [segmentIDs="[439607656426704693,439607656426704694]"] [len=2]
[2023/02/21 09:05:30.076 +00:00] [INFO] [datanode/flow_graph_insert_buffer_node.go:261] ["(Manual Sync) batch processing flush messages"] [batchSize=1] [flushedSegments="[439607656426704693]"] [staleSegments="[]"] [channel=in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0]
[2023/02/21 09:05:30.076 +00:00] [INFO] [datanode/flow_graph_insert_buffer_node.go:451] ["insertBufferNode syncing BufferData"] [segmentID=439607656426704693] [flushed=true] [dropped=false] [auto=false] [position="channel_name:\"in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\" msgID:\"\\325\\000\\000\\000\\000\\000\\000\\000\" msgGroup:\"in01-e0a52d06b33e103-dataNode-24-in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\" timestamp:439607710174150657 "] [channel=in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0]
[2023/02/21 09:05:30.078 +00:00] [INFO] [datanode/flush_manager.go:287] ["handling insert task"] ["segment ID"=439607656426704693] [flushed=true] [dropped=false] [position="channel_name:\"in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\" msgID:\"\\325\\000\\000\\000\\000\\000\\000\\000\" msgGroup:\"in01-e0a52d06b33e103-dataNode-24-in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\" timestamp:439607710174150657 "]
[2023/02/21 09:05:30.078 +00:00] [INFO] [datanode/flush_manager.go:139] ["new flush task runner created and initialized"] ["segment ID"=439607656426704693] ["pos message ID"="\ufffd\u0000\u0000\u0000\u0000\u0000\u0000\u0000"]
[2023/02/21 09:05:30.078 +00:00] [INFO] [datanode/flush_task.go:134] ["running flush insert task"] ["segment ID"=439607656426704693] [flushed=true] [dropped=false] [position="channel_name:\"in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\" msgID:\"\\325\\000\\000\\000\\000\\000\\000\\000\" msgGroup:\"in01-e0a52d06b33e103-dataNode-24-in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\" timestamp:439607710174150657 "] [PosTime=2023/02/21 09:05:29.949 +00:00]
[2023/02/21 09:05:30.078 +00:00] [INFO] [datanode/channel_meta.go:353] ["roll pk stats"] ["segment id"=439607656426704693]
[2023/02/21 09:05:30.078 +00:00] [INFO] [datanode/flush_manager.go:314] ["handling delete task"] ["segment ID"=439607656426704693]
[2023/02/21 09:05:30.274 +00:00] [INFO] [datanode/flush_manager.go:789] [SaveBinlogPath] [SegmentID=439607656426704693] [CollectionID=439607656426504209] [startPos="[{\"start_position\":{\"channel_name\":\"in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\",\"msgID\":\"xgAAAAAAAAA=\",\"msgGroup\":\"in01-e0a52d06b33e103-dataNode-24-in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\",\"timestamp\":439607709492314113},\"segmentID\":439607656426704693}]"] [checkPoints="[{\"segmentID\":439607656426704693,\"position\":{\"channel_name\":\"in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\",\"msgID\":\"1QAAAAAAAAA=\",\"msgGroup\":\"in01-e0a52d06b33e103-dataNode-24-in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\",\"timestamp\":439607710174150657},\"num_of_rows\":8}]"] ["Length of Field2BinlogPaths"=5] ["Length of Field2Stats"=1] ["Length of Field2Deltalogs"=0] [vChannelName=in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0]
[2023/02/21 09:05:30.274 +00:00] [INFO] [datacoord/services.go:400] ["receive SaveBinlogPaths request"] [nodeID=24] [collectionID=439607656426504209] [segmentID=439607656426704693] [isFlush=true] [isDropped=false] [startPositions="[{\"start_position\":{\"channel_name\":\"in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\",\"msgID\":\"xgAAAAAAAAA=\",\"msgGroup\":\"in01-e0a52d06b33e103-dataNode-24-in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\",\"timestamp\":439607709492314113},\"segmentID\":439607656426704693}]"] [checkpoints="[{\"segmentID\":439607656426704693,\"position\":{\"channel_name\":\"in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\",\"msgID\":\"1QAAAAAAAAA=\",\"msgGroup\":\"in01-e0a52d06b33e103-dataNode-24-in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\",\"timestamp\":439607710174150657},\"num_of_rows\":8}]"]
[2023/02/21 09:05:30.274 +00:00] [INFO] [datacoord/meta.go:412] ["meta update: update flush segments info"] [segmentId=439607656426704693] [binlog=5] ["stats log"=1] ["delta logs"=1] [flushed=true] [dropped=false] ["check points"="[{\"segmentID\":439607656426704693,\"position\":{\"channel_name\":\"in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\",\"msgID\":\"1QAAAAAAAAA=\",\"msgGroup\":\"in01-e0a52d06b33e103-dataNode-24-in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\",\"timestamp\":439607710174150657},\"num_of_rows\":8}]"] ["start position"="[{\"start_position\":{\"channel_name\":\"in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\",\"msgID\":\"xgAAAAAAAAA=\",\"msgGroup\":\"in01-e0a52d06b33e103-dataNode-24-in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\",\"timestamp\":439607709492314113},\"segmentID\":439607656426704693}]"] [importing=false]
[2023/02/21 09:05:30.275 +00:00] [INFO] [datacoord/meta.go:548] ["meta update: update flush segments info - update flush segments info successfully"] ["segment ID"=439607656426704693]
[2023/02/21 09:05:30.275 +00:00] [INFO] [datacoord/services.go:454] ["flush segment with meta"] ["segment id"=439607656426704693] [meta="[{\"fieldID\":100,\"binlogs\":[{\"entries_num\":8,\"timestamp_from\":439607709531897857,\"timestamp_to\":439607709531897857,\"log_path\":\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704693/100/439607656426704710\",\"log_size\":64}]},{\"fieldID\":101,\"binlogs\":[{\"entries_num\":8,\"timestamp_from\":439607709531897857,\"timestamp_to\":439607709531897857,\"log_path\":\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704693/101/439607656426704711\",\"log_size\":64}]},{\"fieldID\":102,\"binlogs\":[{\"entries_num\":8,\"timestamp_from\":439607709531897857,\"timestamp_to\":439607709531897857,\"log_path\":\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704693/102/439607656426704712\",\"log_size\":4100}]},{\"binlogs\":[{\"entries_num\":8,\"timestamp_from\":439607709531897857,\"timestamp_to\":439607709531897857,\"log_path\":\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704693/0/439607656426704713\",\"log_size\":64}]},{\"fieldID\":1,\"binlogs\":[{\"entries_num\":8,\"timestamp_from\":439607709531897857,\"timestamp_to\":439607709531897857,\"log_path\":\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704693/1/439607656426704714\",\"log_size\":64}]}]"]
[2023/02/21 09:05:30.276 +00:00] [INFO] [datacoord/server.go:784] ["flush successfully"] [segmentID=439607656426704693]
[2023/02/21 09:05:30.276 +00:00] [INFO] [datacoord/meta.go:330] ["meta update: setting segment state"] ["segment ID"=439607656426704693] ["target state"=Flushed]
[2023/02/21 09:05:30.276 +00:00] [INFO] [datacoord/services.go:467] ["compaction triggered for segment"] ["segment ID"=439607656426704693]
[2023/02/21 09:05:30.276 +00:00] [INFO] [datanode/segment.go:163] ["evictHistoryInsertBuffer done"] [segmentID=439607656426704693] [ts=2023/02/21 09:05:29.949 +00:00] [channel=in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0]
[2023/02/21 09:05:30.276 +00:00] [INFO] [datanode/segment.go:186] ["evictHistoryDeleteBuffer done"] [segmentID=439607656426704693] [ts=2023/02/21 09:05:29.949 +00:00] [channel=in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0]
[2023/02/21 09:05:30.276 +00:00] [INFO] [indexcoord/index_coord.go:1289] ["watchFlushedSegmentLoop watch event"] [segID=439607656426704693] [isFake=false]
[2023/02/21 09:05:30.276 +00:00] [INFO] [indexcoord/flush_segment_watcher.go:143] ["flushed segment task enqueue successfully"] [segID=439607656426704693] [isFake=false]
[2023/02/21 09:05:30.277 +00:00] [INFO] [datacoord/meta.go:366] ["meta update: setting segment state - complete"] ["segment ID"=439607656426704693] ["target state"=Flushed]
[2023/02/21 09:05:30.277 +00:00] [INFO] [datacoord/server.go:827] ["flush segment complete"] [id=439607656426704693]
[2023/02/21 09:05:30.277 +00:00] [INFO] [indexcoord/flush_segment_watcher.go:355] ["flushedSegmentWatcher prepare task success"] [segID=439607656426704693]
[2023/02/21 09:05:30.278 +00:00] [INFO] [indexcoord/index_coord.go:1195] ["create index for flushed segment"] [collID=439607656426504209] [segID=439607656426704693] [numRows=8]
2023-02-21 09:05:30	
[2023/02/21 09:05:30.278 +00:00] [INFO] [indexcoord/index_coord.go:1228] ["IndexCoord createIndex Enqueue successfully"] [collID=439607656426504209] [segID=439607656426704693] [IndexBuildID=439607656426704736]
2023-02-21 09:05:30	
[2023/02/21 09:05:30.278 +00:00] [INFO] [indexcoord/task.go:302] ["IndexCoord IndexAddTask PreExecute"] [segID=439607656426704693] [IndexBuildID=439607656426704736]
2023-02-21 09:05:30	
[2023/02/21 09:05:30.278 +00:00] [INFO] [indexcoord/task.go:309] ["IndexCoord IndexAddTask Execute"] [segID=439607656426704693] [IndexBuildID=439607656426704736]
2023-02-21 09:05:30	
[2023/02/21 09:05:30.278 +00:00] [INFO] [indexcoord/meta_table.go:301] ["IndexCoord metaTable AddIndex"] [collID=439607656426504209] [segID=439607656426704693] [indexID=439607656426704452] [buildID=439607656426704736]
2023-02-21 09:05:30	
[2023/02/21 09:05:30.279 +00:00] [INFO] [indexcoord/meta_table.go:319] ["IndexCoord metaTable AddIndex success"] [collID=439607656426504209] [segID=439607656426704693] [indexID=439607656426704452] [buildID=439607656426704736]
2023-02-21 09:05:30	
[2023/02/21 09:05:30.279 +00:00] [INFO] [indexcoord/task.go:320] ["IndexCoord IndexAddTask PostExecute"] [segID=439607656426704693] [IndexBuildID=439607656426704736]
2023-02-21 09:05:30	
[2023/02/21 09:05:30.279 +00:00] [INFO] [indexcoord/flush_segment_watcher.go:318] ["flushedSegmentWatcher construct task success"] [segID=439607656426704693] [buildID=439607656426704736] ["already have index task"=false]
2023-02-21 09:05:30	
[2023/02/21 09:05:30.279 +00:00] [INFO] [indexcoord/handoff.go:98] ["handoff task enqueue successfully"] [segID=439607656426704693] [isFake=false]
2023-02-21 09:05:30	
[2023/02/21 09:05:30.456 +00:00] [INFO] [indexcoord/handoff.go:329] ["IndexCoord write handoff task success"] [collID=439607656426504209] [partID=439607656426504210] [segID=439607656426704693]
2023-02-21 09:05:30	
[2023/02/21 09:05:30.456 +00:00] [INFO] [indexcoord/handoff.go:264] ["write handoff success"] [segID=439607656426704693]
2023-02-21 09:05:30	
[2023/02/21 09:05:30.458 +00:00] [INFO] [indexcoord/handoff.go:271] ["mark segment as write handoff success, remove task"] [segID=439607656426704693]
2023-02-21 09:05:30	
[2023/02/21 09:05:30.541 +00:00] [INFO] [proxy/impl.go:3995] ["received get flush state request"] [request="segmentIDs:439607656426704693 segmentIDs:439607656426704694 "]
2023-02-21 09:05:30	
[2023/02/21 09:05:30.541 +00:00] [INFO] [datacoord/services.go:1128] ["DataCoord receive GetFlushState request, Flushed is true"] [segmentIDs="[439607656426704693,439607656426704694]"] [len=2]
2023-02-21 09:05:31	
[2023/02/21 09:05:31.027 +00:00] [INFO] [datacoord/services.go:121] ["flush response with segments"] [collectionID=439607656426504209] [sealSegments="[439607656426704754,439607656426704755]"] [flushSegments="[439607656426704512,439607656426704634,439607656426704416,439607656426704378,439607656426704507,439607656426704467,439607656426704633,439607656426704694,439607656426704560,439607656426704449,439607656426704326,439607656426704511,439607656426704561,439607656426704450,439607656426704693]"] [timeOfSeal=1676970330]
2023-02-21 09:05:31	
[2023/02/21 09:05:31.343 +00:00] [INFO] [datacoord/services.go:121] ["flush response with segments"] [collectionID=439607656426504209] [sealSegments="[439607656426704754,439607656426704755]"] [flushSegments="[439607656426704693,439607656426704450,439607656426704634,439607656426704512,439607656426704416,439607656426704378,439607656426704507,439607656426704467,439607656426704633,439607656426704694,439607656426704449,439607656426704560,439607656426704511,439607656426704561,439607656426704326]"] [timeOfSeal=1676970331]
2023-02-21 09:05:32	
[2023/02/21 09:05:32.668 +00:00] [INFO] [querynode/search.go:100] ["search growing/sealed segments without indexes"] [traceID=5df523e185b1d4bf] [segmentIDs="[439607656426704560,439607656426704754,439607656426704633,439607656426704693,439607656426704511]"]
2023-02-21 09:05:32	
[2023/02/21 09:05:32.796 +00:00] [INFO] [indexcoord/index_coord.go:1294] ["the segment info has been deleted"] [key=in01-e0a52d06b33e103/meta/flushed-segment/439607656426504209/439607656426504210/439607656426704693]
2023-02-21 09:05:32	
[2023/02/21 09:05:32.796 +00:00] [INFO] [indexcoord/flush_segment_watcher.go:333] ["IndexCoord remove flushed segment key success"] [collID=439607656426504209] [partID=439607656426504210] [segID=439607656426704693]
2023-02-21 09:05:34	
[2023/02/21 09:05:34.291 +00:00] [INFO] [datacoord/services.go:121] ["flush response with segments"] [collectionID=439607656426504209] [sealSegments="[439607656426704817,439607656426704818]"] [flushSegments="[439607656426704560,439607656426704450,439607656426704512,439607656426704634,439607656426704416,439607656426704467,439607656426704755,439607656426704633,439607656426704694,439607656426704449,439607656426704326,439607656426704511,439607656426704561,439607656426704693,439607656426704754,439607656426704378,439607656426704507]"] [timeOfSeal=1676970334]
2023-02-21 09:05:35	
[2023/02/21 09:05:35.944 +00:00] [INFO] [datacoord/services.go:636] ["datacoord append channelInfo in GetRecoveryInfo"] [collectionID=439607656426504209] [partitionID=439607656426504210] [channelInfo="collectionID:439607656426504209 channelName:\"in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\" seek_position:<channel_name:\"in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\" msgID:\"\\007\\000\\000\\000\\000\\000\\000\\000\" msgGroup:\"in01-e0a52d06b33e103-dataNode-24-in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\" timestamp:439607700527251457 > flushedSegmentIds:439607656426704693 flushedSegmentIds:439607656426704754 flushedSegmentIds:439607656426704685 flushedSegmentIds:439607656426704633 dropped_segmentIds:439607656426704327 dropped_segmentIds:439607656426704450 dropped_segmentIds:439607656426704379 dropped_segmentIds:439607656426704331 dropped_segmentIds:439607656426704415 dropped_segmentIds:439607656426704507 dropped_segmentIds:439607656426704286 dropped_segmentIds:439607656426704560 dropped_segmentIds:439607656426704226 dropped_segmentIds:439607656426704511 dropped_segmentIds:439607656426704255 "]
2023-02-21 09:05:35	
[2023/02/21 09:05:35.946 +00:00] [INFO] [meta/target_manager.go:134] ["finish to update next targets for collection"] [collectionID=439607656426504209] [segments="[439607656426704512,439607656426704633,439607656426704755,439607656426704650,439607656426704561,439607656426704694,439607656426704449,439607656426704634,439607656426704467,439607656426704693,439607656426704754,439607656426704685]"] [channels="[in01-e0a52d06b33e103-rootcoord-dml_1_439607656426504209v1,in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0]"]
2023-02-21 09:05:36	
[2023/02/21 09:05:36.404 +00:00] [INFO] [querynode/search.go:100] ["search growing/sealed segments without indexes"] [traceID=1be1b35a340f33da] [segmentIDs="[439607656426704817,439607656426704693,439607656426704511,439607656426704560,439607656426704633,439607656426704754]"]
2023-02-21 09:05:36	
[2023/02/21 09:05:36.657 +00:00] [INFO] [datacoord/compaction_trigger.go:570] ["generate a plan for small candidates"] [plan="segmentBinlogs:<segmentID:439607656426704685 fieldBinlogs:<fieldID:1 binlogs:<entries_num:39 log_path:\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704685/1/439607656426704699\" log_size:613 > > fieldBinlogs:<fieldID:100 binlogs:<entries_num:39 log_path:\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704685/100/439607656426704695\" log_size:626 > > fieldBinlogs:<fieldID:101 binlogs:<entries_num:39 log_path:\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704685/101/439607656426704696\" log_size:661 > > fieldBinlogs:<fieldID:102 binlogs:<entries_num:39 log_path:\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704685/102/439607656426704697\" log_size:21371 > > fieldBinlogs:<binlogs:<entries_num:39 log_path:\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704685/0/439607656426704698\" log_size:666 > > field2StatslogPaths:<fieldID:100 binlogs:<entries_num:39 log_path:\"e0a52d06b33e103/stats_log/439607656426504209/439607656426504210/439607656426704685/100/439607656426704700\" log_size:179 > > > segmentBinlogs:<segmentID:439607656426704633 fieldBinlogs:<binlogs:<entries_num:4 timestamp_from:439607708732358658 timestamp_to:439607708732358658 log_path:\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704633/0/439607656426704661\" log_size:32 > > fieldBinlogs:<fieldID:1 binlogs:<entries_num:4 timestamp_from:439607708732358658 timestamp_to:439607708732358658 log_path:\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704633/1/439607656426704662\" log_size:32 > > fieldBinlogs:<fieldID:100 binlogs:<entries_num:4 timestamp_from:439607708732358658 timestamp_to:439607708732358658 
log_path:\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704633/100/439607656426704658\" log_size:32 > > fieldBinlogs:<fieldID:101 binlogs:<entries_num:4 timestamp_from:439607708732358658 timestamp_to:439607708732358658 log_path:\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704633/101/439607656426704659\" log_size:32 > > fieldBinlogs:<fieldID:102 binlogs:<entries_num:4 timestamp_from:439607708732358658 timestamp_to:439607708732358658 log_path:\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704633/102/439607656426704660\" log_size:2052 > > field2StatslogPaths:<fieldID:100 binlogs:<log_path:\"e0a52d06b33e103/stats_log/439607656426504209/439607656426504210/439607656426704633/100/439607656426704663\" log_size:115 > > deltalogs:<> > segmentBinlogs:<segmentID:439607656426704754 fieldBinlogs:<fieldID:100 binlogs:<entries_num:5 timestamp_from:439607710409818113 timestamp_to:439607710409818113 log_path:\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704754/100/439607656426704782\" log_size:40 > > fieldBinlogs:<fieldID:101 binlogs:<entries_num:5 timestamp_from:439607710409818113 timestamp_to:439607710409818113 log_path:\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704754/101/439607656426704783\" log_size:40 > > fieldBinlogs:<fieldID:102 binlogs:<entries_num:5 timestamp_from:439607710409818113 timestamp_to:439607710409818113 log_path:\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704754/102/439607656426704784\" log_size:2564 > > fieldBinlogs:<binlogs:<entries_num:5 timestamp_from:439607710409818113 timestamp_to:439607710409818113 log_path:\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704754/0/439607656426704785\" log_size:40 > > fieldBinlogs:<fieldID:1 binlogs:<entries_num:5 timestamp_from:439607710409818113 timestamp_to:439607710409818113 
log_path:\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704754/1/439607656426704786\" log_size:40 > > field2StatslogPaths:<fieldID:100 binlogs:<log_path:\"e0a52d06b33e103/stats_log/439607656426504209/439607656426504210/439607656426704754/100/439607656426704787\" log_size:117 > > deltalogs:<> > segmentBinlogs:<segmentID:439607656426704693 fieldBinlogs:<fieldID:100 binlogs:<entries_num:8 timestamp_from:439607709531897857 timestamp_to:439607709531897857 log_path:\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704693/100/439607656426704710\" log_size:64 > > fieldBinlogs:<fieldID:101 binlogs:<entries_num:8 timestamp_from:439607709531897857 timestamp_to:439607709531897857 log_path:\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704693/101/439607656426704711\" log_size:64 > > fieldBinlogs:<fieldID:102 binlogs:<entries_num:8 timestamp_from:439607709531897857 timestamp_to:439607709531897857 log_path:\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704693/102/439607656426704712\" log_size:4100 > > fieldBinlogs:<binlogs:<entries_num:8 timestamp_from:439607709531897857 timestamp_to:439607709531897857 log_path:\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704693/0/439607656426704713\" log_size:64 > > fieldBinlogs:<fieldID:1 binlogs:<entries_num:8 timestamp_from:439607709531897857 timestamp_to:439607709531897857 log_path:\"e0a52d06b33e103/insert_log/439607656426504209/439607656426504210/439607656426704693/1/439607656426704714\" log_size:64 > > field2StatslogPaths:<fieldID:100 binlogs:<log_path:\"e0a52d06b33e103/stats_log/439607656426504209/439607656426504210/439607656426704693/100/439607656426704715\" log_size:125 > > deltalogs:<> > type:MixCompaction timetravel:439607711930253312 channel:\"in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0\" "] ["target segment row"=56] ["target segment size"=33733]
2023-02-21 09:05:36	
[2023/02/21 09:05:36.940 +00:00] [INFO] [task/scheduler.go:262] ["task added"] [task="[id=22] [type=1] [collectionID=439607656426504209] [replicaID=439607656689172481] [priority=1] [actionsCount=1] [actions={[type=1][node=18][streaming=false]}] [segmentID=439607656426704693]"]
2023-02-21 09:05:37	
[2023/02/21 09:05:37.434 +00:00] [INFO] [task/executor.go:287] ["load segment task committed"] [taskID=22] [collectionID=439607656426504209] [segmentID=439607656426704693] [node=18] [source=2] [shardLeader=18]
2023-02-21 09:05:37	
[2023/02/21 09:05:37.444 +00:00] [INFO] [datanode/compactor.go:676] ["compaction done"] [planID=439607656426704851] [targetSegmentID=439607656426704852] [compactedFrom="[439607656426704685,439607656426704633,439607656426704754,439607656426704693]"] ["num of binlog paths"=5] ["num of stats paths"=1] ["num of delta paths"=0]
2023-02-21 09:05:37	
[2023/02/21 09:05:37.927 +00:00] [INFO] [task/executor.go:181] ["load segments..."] [taskIDs="[19,26,24,22]"] [collectionID=439607656426504209] [shard=in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0] [segmentIDs="[439607656426704685,439607656426704633,439607656426704754,439607656426704693]"] [nodeID=18] [source=2]
2023-02-21 09:05:37	
[2023/02/21 09:05:37.928 +00:00] [INFO] [querynode/impl_utils.go:30] ["LoadSegment start to transfer load with shard cluster"] [traceID=19e47cd308626629] [shard=in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0] [segmentIDs="[439607656426704685,439607656426704633,439607656426704754,439607656426704693]"]
2023-02-21 09:05:37	
[2023/02/21 09:05:37.928 +00:00] [INFO] [querynode/impl.go:491] ["loadSegmentsTask init"] [collectionID=439607656426504209] [segmentIDs="[439607656426704633,439607656426704685,439607656426704693,439607656426704754]"] [nodeID=18]
2023-02-21 09:05:37	
[2023/02/21 09:05:37.928 +00:00] [INFO] [querynode/impl.go:496] ["loadSegmentsTask start "] [collectionID=439607656426504209] [segmentIDs="[439607656426704633,439607656426704685,439607656426704693,439607656426704754]"] [timeInQueue=27.912µs]
2023-02-21 09:05:37	
[2023/02/21 09:05:37.928 +00:00] [INFO] [querynode/impl.go:509] ["loadSegmentsTask Enqueue done"] [collectionID=439607656426504209] [segmentIDs="[439607656426704633,439607656426704685,439607656426704693,439607656426704754]"] [nodeID=18]
2023-02-21 09:05:37	
[2023/02/21 09:05:37.932 +00:00] [INFO] [querynode/segment.go:186] ["create segment"] [collectionID=439607656426504209] [partitionID=439607656426504210] [segmentID=439607656426704693] [segmentType=Sealed]
2023-02-21 09:05:38	
[2023/02/21 09:05:38.025 +00:00] [INFO] [datacoord/services.go:121] ["flush response with segments"] [collectionID=439607656426504209] [sealSegments="[439607656426704896,439607656426704897]"] [flushSegments="[439607656426704817,439607656426704634,439607656426704512,439607656426704467,439607656426704818,439607656426704633,439607656426704694,439607656426704755,439607656426704449,439607656426704650,439607656426704561,439607656426704693,439607656426704754,439607656426704685]"] [timeOfSeal=1676970337]
2023-02-21 09:05:38	
[2023/02/21 09:05:38.135 +00:00] [INFO] [querynode/segment_loader.go:231] ["start loading segment data into memory"] [collectionID=439607656426504209] [partitionID=439607656426504210] [segmentID=439607656426704693] [segmentType=Sealed]
2023-02-21 09:05:38	
[2023/02/21 09:05:38.156 +00:00] [INFO] [querynode/segment.go:815] ["load field done"] [fieldID=100] ["row count"=8] [segmentID=439607656426704693]
2023-02-21 09:05:38	
[2023/02/21 09:05:38.158 +00:00] [INFO] [querynode/segment.go:815] ["load field done"] [fieldID=0] ["row count"=8] [segmentID=439607656426704693]
2023-02-21 09:05:38	
[2023/02/21 09:05:38.158 +00:00] [INFO] [querynode/segment.go:815] ["load field done"] [fieldID=102] ["row count"=8] [segmentID=439607656426704693]
2023-02-21 09:05:38	
[2023/02/21 09:05:38.161 +00:00] [INFO] [querynode/segment.go:815] ["load field done"] [fieldID=101] ["row count"=8] [segmentID=439607656426704693]
2023-02-21 09:05:38	
[2023/02/21 09:05:38.163 +00:00] [INFO] [querynode/segment.go:815] ["load field done"] [fieldID=1] ["row count"=8] [segmentID=439607656426704693]
2023-02-21 09:05:38	
[2023/02/21 09:05:38.163 +00:00] [INFO] [querynode/segment_loader.go:387] ["load field binlogs done for sealed segment"] [collection=439607656426504209] [segment=439607656426704693] [len(field)=5] [segmentType=Sealed]
2023-02-21 09:05:38	
[2023/02/21 09:05:38.163 +00:00] [INFO] [querynode/segment_loader.go:287] ["loading bloom filter..."] [segmentID=439607656426704693]
2023-02-21 09:05:38	
[2023/02/21 09:05:38.189 +00:00] [INFO] [querynode/segment_loader.go:669] ["Successfully load pk stats"] [time=26.409347ms] [segment=439607656426704693] [size=89]
2023-02-21 09:05:38	
[2023/02/21 09:05:38.189 +00:00] [INFO] [querynode/segment_loader.go:295] ["loading delta..."] [segmentID=439607656426704693]
2023-02-21 09:05:38	
[2023/02/21 09:05:38.189 +00:00] [INFO] [querynode/segment_loader.go:690] ["there are no delta logs saved with segment, skip loading delete record"] [segmentID=439607656426704693]
2023-02-21 09:05:38	
[2023/02/21 09:05:38.212 +00:00] [INFO] [querynode/partition.go:74] ["add a segment to replica"] [collectionID=439607656426504209] [partitionID=439607656426504210] [segmentID=439607656426704693] [segmentType=Sealed]
2023-02-21 09:05:38	
[2023/02/21 09:05:38.212 +00:00] [INFO] [querynode/meta_replica.go:614] ["new segment added to collection replica"] ["query node ID"=18] ["collection ID"=439607656426504209] ["partition ID"=439607656426504210] ["segment ID"=439607656426704693] ["segment type"=Sealed] ["row count"=8] ["segment indexed fields"=0]
2023-02-21 09:05:38	
[2023/02/21 09:05:38.227 +00:00] [INFO] [querynode/impl.go:524] ["loadSegmentsTask WaitToFinish done"] [collectionID=439607656426504209] [segmentIDs="[439607656426704633,439607656426704685,439607656426704693,439607656426704754]"] [nodeID=18]
2023-02-21 09:05:38	
[2023/02/21 09:05:38.227 +00:00] [INFO] [querynode/shard_cluster.go:267] ["ShardCluster update segment"] [collectionID=439607656426504209] [channel=in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0] [replicaID=439607656689172481] [nodeID=18] [segmentID=439607656426704693] [state=3]
2023-02-21 09:05:38	
[2023/02/21 09:05:38.227 +00:00] [INFO] [querynode/impl_utils.go:57] ["LoadSegment transfer load done"] [traceID=19e47cd308626629] [shard=in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0] [segmentIDs="[439607656426704685,439607656426704633,439607656426704754,439607656426704693]"]
2023-02-21 09:05:38	
[2023/02/21 09:05:38.228 +00:00] [INFO] [task/executor.go:198] ["load segments done"] [taskIDs="[19,26,24,22]"] [collectionID=439607656426504209] [shard=in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0] [segmentIDs="[439607656426704685,439607656426704633,439607656426704754,439607656426704693]"] [nodeID=18] [source=2] [taskID=19] [timeTaken=300.64393ms]
2023-02-21 09:05:38	
[2023/02/21 09:05:38.428 +00:00] [INFO] [task/scheduler.go:690] ["task removed"] [taskID=22] [taskStatus=3] [segmentID=439607656426704693]
2023-02-21 09:05:40	
[2023/02/21 09:05:40.947 +00:00] [INFO] [datacoord/services.go:121] ["flush response with segments"] [collectionID=439607656426504209] [sealSegments="[439607656426704934,439607656426704935]"] [flushSegments="[439607656426704693,439607656426704754,439607656426704685,439607656426704817,439607656426704512,439607656426704634,439607656426704897,439607656426704467,439607656426704818,439607656426704633,439607656426704694,439607656426704755,439607656426704896,439607656426704449,439607656426704650,439607656426704561]"] [timeOfSeal=1676970340]
2023-02-21 09:05:41	
[2023/02/21 09:05:41.098 +00:00] [INFO] [datacoord/services.go:121] ["flush response with segments"] [collectionID=439607656426504209] [sealSegments="[439607656426704934,439607656426704935]"] [flushSegments="[439607656426704817,439607656426704512,439607656426704634,439607656426704897,439607656426704467,439607656426704818,439607656426704694,439607656426704755,439607656426704633,439607656426704650,439607656426704896,439607656426704449,439607656426704561,439607656426704693,439607656426704754,439607656426704685]"] [timeOfSeal=1676970341]
2023-02-21 09:05:44	
[2023/02/21 09:05:44.485 +00:00] [INFO] [datacoord/services.go:121] ["flush response with segments"] [collectionID=439607656426504209] [sealSegments="[439607656426704980,439607656426704981]"] [flushSegments="[439607656426704512,439607656426704634,439607656426704897,439607656426704467,439607656426704818,439607656426704694,439607656426704755,439607656426704633,439607656426704650,439607656426704896,439607656426704449,439607656426704561,439607656426704693,439607656426704935,439607656426704754,439607656426704934,439607656426704685,439607656426704817]"] [timeOfSeal=1676970344]
2023-02-21 09:05:45	
[2023/02/21 09:05:45.359 +00:00] [INFO] [datacoord/services.go:121] ["flush response with segments"] [collectionID=439607656426504209] [sealSegments="[439607656426704980,439607656426704981]"] [flushSegments="[439607656426704693,439607656426704935,439607656426704754,439607656426704934,439607656426704685,439607656426704817,439607656426704512,439607656426704634,439607656426704897,439607656426704467,439607656426704818,439607656426704633,439607656426704694,439607656426704755,439607656426704449,439607656426704650,439607656426704896,439607656426704561]"] [timeOfSeal=1676970345]
2023-02-21 09:05:45	
[2023/02/21 09:05:45.662 +00:00] [INFO] [datacoord/meta.go:996] ["meta update: prepare for complete compaction mutation - complete"] ["collection ID"=439607656426504209] ["partition ID"=439607656426504210] ["new segment ID"=439607656426704852] ["new segment num of rows"=56] ["compacted from"="[439607656426704685,439607656426704633,439607656426704754,439607656426704693]"]
2023-02-21 09:05:45	
[2023/02/21 09:05:45.662 +00:00] [INFO] [datacoord/meta.go:1033] ["meta update: alter meta store for compaction updates"] ["compact from segments (segments to be updated as dropped)"="[439607656426704685,439607656426704633,439607656426704754,439607656426704693]"] ["new segmentId"=439607656426704852] [binlog=5] ["stats log"=1] ["delta logs"=0] ["compact to segment"=439607656426704852]
[2023/02/21 09:05:45.671 +00:00] [INFO] [datanode/data_node.go:890] ["DataNode receives SyncSegments"] [traceID=7dfd9951eb8dcbfc] [planID=439607656426704851] ["target segmentID"=439607656426704852] ["compacted from"="[439607656426704685,439607656426704633,439607656426704754,439607656426704693]"] [numOfRows=56]
2023-02-21 09:05:45	
[2023/02/21 09:05:45.719 +00:00] [INFO] [datanode/channel_meta.go:548] ["merge flushed segments"] ["segment ID"=439607656426704852] ["collection ID"=439607656426504209] ["partition ID"=439607656426504210] ["compacted from"="[439607656426704685,439607656426704633,439607656426704754,439607656426704693]"] [planID=439607656426704851] ["channel name"=in01-e0a52d06b33e103-rootcoord-dml_0_439607656426504209v0]
2023-02-21 09:05:45	
[2023/02/21 09:05:45.720 +00:00] [INFO] [datacoord/meta.go:1058] ["meta update: alter in memory meta after compaction"] ["compact to segment ID"=439607656426704852] ["compact from segment IDs"="[439607656426704685,439607656426704633,439607656426704754,439607656426704693]"]
2023-02-21 09:05:45	
[2023/02/21 09:05:45.720 +00:00] [INFO] [datacoord/meta.go:1072] ["meta update: alter in memory meta after compaction - complete"] ["compact to segment ID"=439607656426704852] ["compact from segment IDs"="[439607656426704685,439607656426704633,439607656426704754,439607656426704693]"]
2023-02-21 09:05:45	
[2023/02/21 09:05:45.878 +00:00] [INFO] [datanode/flow_graph_delete_node.go:171] ["update delBuf for compacted segments"] ["compactedTo segmentID"=439607656426704852] ["compactedFrom segmentIDs"="[439607656426704693,439607656426704754,439607656426704685,439607656426704633]"]
2023-02-21 09:05:45	
[2023/02/21 09:05:45.878 +00:00] [INFO] [datanode/channel_meta.go:420] ["remove segments if exist"] [segmentIDs="[439607656426704693,439607656426704754,439607656426704685,439607656426704633]"]

Expected Behavior

No response

Steps To Reproduce

No response

Environment

No response

Anything else?

No response

[Bug]: config file does not load as expected

Current Behavior

When an absolute config file path is given, loading does not work as expected.

./milvus-backup --config $(pwd)/backup.yaml create 


config:/root/tmp/downloads/backup.yaml
panic: cannot access config file: /home/runner/work/milvus-backup/milvus-backup/core/paramtable/../../configs//root/tmp/downloads/backup.yaml

goroutine 1 [running]:
github.com/zilliztech/milvus-backup/core/paramtable.(*BaseTable).LoadYaml(0xc0001cda00, {0x7ffdb112e391, 0x1f})
        /home/runner/work/milvus-backup/milvus-backup/core/paramtable/base_table.go:179 +0x41b
github.com/zilliztech/milvus-backup/core/paramtable.(*BaseTable).loadFromYaml(...)
        /home/runner/work/milvus-backup/milvus-backup/core/paramtable/base_table.go:127
github.com/zilliztech/milvus-backup/core/paramtable.(*BaseTable).Init(0xc0001cda00)
        /home/runner/work/milvus-backup/milvus-backup/core/paramtable/base_table.go:85 +0x174
github.com/zilliztech/milvus-backup/core/paramtable.(*BaseTable).GlobalInitWithYaml.func1()
        /home/runner/work/milvus-backup/milvus-backup/core/paramtable/base_table.go:77 +0x4a
sync.(*Once).doSlow(0x27?, 0xc0005c3518?)
        /opt/hostedtoolcache/go/1.18.10/x64/src/sync/once.go:68 +0xc2
sync.(*Once).Do(...)
        /opt/hostedtoolcache/go/1.18.10/x64/src/sync/once.go:59
github.com/zilliztech/milvus-backup/core/paramtable.(*BaseTable).GlobalInitWithYaml(0x10c8ca0?, {0x7ffdb112e391?, 0xc0005c3858?})
        /home/runner/work/milvus-backup/milvus-backup/core/paramtable/base_table.go:75 +0x5e
github.com/zilliztech/milvus-backup/cmd.glob..func1(0x1f5d480?, {0xf25595?, 0x2?, 0x2?})
        /home/runner/work/milvus-backup/milvus-backup/cmd/create.go:26 +0xc5
github.com/spf13/cobra.(*Command).execute(0x1f5d480, {0xc000522b60, 0x2, 0x2})
        /home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:876 +0x67b
github.com/spf13/cobra.(*Command).ExecuteC(0x1f5e100)
        /home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:990 +0x3b4
github.com/spf13/cobra.(*Command).Execute(...)
        /home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:918
github.com/zilliztech/milvus-backup/cmd.Execute()
        /home/runner/work/milvus-backup/milvus-backup/cmd/root.go:24 +0x71
main.main()
        /home/runner/work/milvus-backup/milvus-backup/main.go:17 +0x17

It seems the config file path is always forced under the configs directory? https://github.com/zilliztech/milvus-backup/blob/v0.2.2/core/paramtable/base_table.go#L110

Expected Behavior

No response

Steps To Reproduce

# Use an absolute config path that is not inside the configs directory.


./milvus-backup --config $(pwd)/backup.yaml create -n backup-2023-04-26

Environment

No response

Anything else?

No response
