hyperledger-twgc / tape
A Simple Traffic Generator for Hyperledger Fabric
License: Apache License 2.0
Ideally this should be 0, so that the system randomly picks an available port, and we dynamically render this port into the config file. It doesn't necessarily need fixing in this PR; please create a new issue.
Originally posted by @guoger in https://github.com/guoger/stupid/pull/59/files
Also, I think we could have a follow-up PR to add peer.Pause() and peer.Resume(), so we could assert that the threshold takes effect when 2/5 peers are paused.
Originally posted by @guoger in #135 (comment)
After I successfully ran Tape in my fabric sample, I got results with three columns: time, block, and tnx.
However, I didn't fully understand what these indicators mean, as I have not found related documents in https://github.com/Hyperledger-TWGC/tape .
In addition, where/how can I get results such as latency and TPS?
I hope the Tape output could be more intuitive, or more explanation could be added to the README. Thanks
`tape version` should return the current version of the binary in use.
Today, `tape` only supports static, fixed arguments set at start, which is way too limited. We need to pick and support a templating language. For example: https://ghz.sh/docs/examples#metadata-using-template-variables
[root@blockchain1 stupid]# ./tape config.yaml 100
INFO[0000] Start sending transactions.
DEBU[0000] Start sending broadcast
DEBU[0000] Start sending broadcast
DEBU[0000] Start sending broadcast
DEBU[0000] Start sending broadcast
DEBU[0000] Start sending broadcast
DEBU[0000] Start sending broadcast
DEBU[0000] Start sending broadcast
DEBU[0000] Start sending broadcast
DEBU[0000] Start sending broadcast
DEBU[0000] Start sending broadcast
DEBU[0000] start observer
ERRO[0000] Err processing proposal: %!s(), status: 500, addr: peer0.org1.example.com:7051
ERRO[0000] Err processing proposal: %!s(), status: 500, addr: peer0.org1.example.com:7051
ERRO[0000] Err processing proposal: %!s(), status: 500, addr: peer0.org1.example.com:7051
ERRO[0000] Err processing proposal: %!s(), status: 500, addr: peer0.org1.example.com:7051
ERRO[0000] Err processing proposal: %!s(), status: 500, addr: peer0.org1.example.com:7051
ERRO[0000] Err processing proposal: %!s(), status: 500, addr: peer0.org1.example.com:7051
like this
[root@blockchain1 stupid]# cat config.yaml
peer1: &peer1
addr: peer0.org1.example.com:7051
tls_ca_cert: /root/go/src/github.com/hyperledger/fabric-samples/first-network/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp/tlscacerts/tlsca.org1.example.com-cert.pem
peer2: &peer2
addr: peer0.org2.example.com:9051
tls_ca_cert: /root/go/src/github.com/hyperledger/fabric-samples/first-network/crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/msp/tlscacerts/tlsca.org2.example.com-cert.pem
orderer1: &orderer1
addr: orderer.example.com:7050
tls_ca_cert: /root/go/src/github.com/hyperledger/fabric-samples/first-network/crypto-config/ordererOrganizations/example.com/msp/tlscacerts/tlsca.example.com-cert.pem
endorsers:
committer: *peer2
orderer: *orderer1
channel: mychannel
chaincode: mycc
args:
My config.yaml is written like this:
private_key: /home/niqinyin/go/src/github.com/kongyixueyuan.com/education/fixtures/crypto-config/peerOrganizations/org1.kevin.kongyixueyuan.com/users/[email protected]/msp/keystore/c24c159c6de601f2703c4e4273682b08cda54ebdaf4ca1c4aa4b3f80b8a91964_sk
I've tried the user's private key too, but it always reports this error:
ERRO[0000] error connecting to localhost:7051: failed to load client certificate: tls: private key does not match public key
Please help me.
Often we need to spot network bottlenecks by testing the 3 phases Endorsement - Ordering - Commitment separately:

- send Proposals to peers and observe endorsement ProposalResponses
- send Envelopes to orderers and observe ordered Blocks
- send Blocks to peers and observe committed Blocks (this is the most tricky one)

This could help us benchmark Fabric components in a finer-grained manner.
I ran `go build` and set the Go proxy. My Go version is 1.12.7. How do I solve this?
We should be able to observe multiple peers, to support semantics like "a block is considered committed if it's done on >50% of peers".
When I write config.yaml, I don't know how to write args.
My function name is "addfile", the type of its parameter is []string, and its value is [{"FileName":"1.txt","ID":"121","IPAdress":"1.1.1.2","OrgName":"org2","ModifyTime":"8:27 2019/9/12"},"eventAddFile"]. How should I write the args?
By the way, when calling the chaincode from the Ubuntu command line, the common command is written like this:
peer chaincode invoke --tls --cafile $ORDERER_CA -C mychannel -n mycc -c '{"Args":["get","a"]}'
So how do I write the command for my function?
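In Fabric, the args list plays the role of the Args array in the CLI example above: the first element is the function name and each following element is one string parameter. A hedged sketch for the question above (untested; whether the JSON parameter should be passed as one quoted string or split into two elements depends on how addfile parses its input):

```yaml
channel: mychannel
chaincode: mycc
args:
  - addfile
  - '{"FileName":"1.txt","ID":"121","IPAdress":"1.1.1.2","OrgName":"org2","ModifyTime":"8:27 2019/9/12"}'
  - eventAddFile
```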
Configs in README.md could read much better if they were enclosed in a table, i.e.

| Option | Description |
| --- | --- |
| option A | this is config option A that does **absolutely** nothing |
| option B | this is config option B that does `something` |
We should introduce a mocked Fabric so that `stupid` can be benchmarked against it. This is to prevent performance regressions of the `stupid` tool itself.
We should be able to send txs at a constant rate in order to test Fabric durability.
As a hot fix for CI, we switched to version 2.3.
So we need to add Fabric 2.2 back into the CI scope.
127.0.0.1 peer0.org1.example.com
127.0.0.1 peer0.org2.example.com
127.0.0.1 orderer.example.com
127.0.0.1 peer1
127.0.0.1 peer2
127.0.0.1 orderer1
N/A
Time 6.50s Block 22 Tx 542
Time 6.63s Block 23 Tx 542
Time 10.15s Block 24 Tx 244
The last block takes too long and drags down the total TPS.
Provide an avg TPS.
See here: the per-block TPS may have been 1000+ previously, but for the last block, perhaps because the batch timeout is too long, the TPS is as low as 8. In this case the total TPS drops from 1000+ to 500+?
I suggest we have an avg TPS to replace this total TPS.
The output would be:
Time 6.50s Block 22 Tx 542
Time 6.63s Block 23 Tx 542 Current TPS: 1000+
Time 10.15s Block 24 Tx 244 Current TPS: 8
time="2020-12-05T15:12:28Z" level=info msg="Completed processing transactions."
tx: 10000, duration: 10.154353182s, tps: 1000+ (this is avg tps), total tps: 984.799309
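The gap between the two metrics can be sketched in Go (illustrative numbers only, echoing the fast blocks versus the slow final block above; the `blockStat` struct is hypothetical, not tape's actual types). "Total TPS" divides all txs by the whole duration, so one slow block dominates it, while averaging per-block TPS reflects steady-state throughput.

```go
package main

import (
	"fmt"
	"time"
)

// blockStat mirrors one line of the observer output: time since the
// previous block and the number of txs in the block. (Hypothetical type.)
type blockStat struct {
	elapsed time.Duration
	txCount int
}

func main() {
	blocks := []blockStat{
		{130 * time.Millisecond, 542},  // fast block during steady state
		{3520 * time.Millisecond, 244}, // slow final block (batch timeout)
	}

	totalTx, totalDur := 0, time.Duration(0)
	var perBlockTPS []float64
	for _, b := range blocks {
		perBlockTPS = append(perBlockTPS, float64(b.txCount)/b.elapsed.Seconds())
		totalTx += b.txCount
		totalDur += b.elapsed
	}

	// Total TPS: dominated by the slow last block.
	fmt.Printf("total: %.1f tx/s\n", float64(totalTx)/totalDur.Seconds())

	// Average of per-block TPS: closer to steady-state throughput.
	avg := 0.0
	for _, t := range perBlockTPS {
		avg += t
	}
	fmt.Printf("avg per-block: %.1f tx/s\n", avg/float64(len(perBlockTPS)))
}
```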
Considering the external chaincode service: do we need chaincode-level performance testing?
[root@localhost stupid-master]# ./stupid config.yaml 40000
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x9aab4f]
goroutine 347 [running]:
github.com/guoger/stupid/infra.(*Observer).Start(0xc000471c60, 0x9c40, 0xbfc015d1df916d85, 0x5cda83d, 0x109e1e0)
/root/stupid-master/infra/observer.go:50 +0x2df
created by main.main
/root/stupid-master/main.go:60 +0x660
Following the "key metrics" section in https://www.hyperledger.org/wp-content/uploads/2018/10/HL_Whitepaper_Metrics_PDFVersion.pdf, we need to be able to report latency with Tape.
N/A
Adjust the structure, filenames, and documents to match https://github.com/Hyperledger-TWGC/fabric-performance-wiki/blob/master/performance-whitepaper.md
Make it easy for people to understand the tape code and what tape is.
N/A
Describe the bug
In the scenario of monitoring multiple peers, a tx should only be considered successfully written once a threshold number of peers have received all transactions. However, the current implementation has a bug: as soon as any one peer receives all transactions, monitoring is deemed to be over and the program exits, which is a big deviation from the original design.
In addition, closing the channel finishCh (https://github.com/Hyperledger-TWGC/tape/blob/master/pkg/infra/observer.go#L64) may panic if it has already been closed in another goroutine.
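A minimal Go sketch of the intended threshold semantics, with `sync.Once` guarding the close so concurrent goroutines cannot close the finish channel twice (the `Collector`/`Commit` names are hypothetical, not tape's actual types):

```go
package main

import (
	"fmt"
	"sync"
)

// Collector signals completion only once `threshold` peers have each
// delivered all `totalTx` transactions, and closes finishCh exactly once.
type Collector struct {
	threshold  int            // peers that must see all txs
	totalTx    int            // txs each peer must deliver
	mu         sync.Mutex
	perPeer    map[string]int // delivered tx count per peer
	finishOnce sync.Once      // guards close(finishCh) against double close
	finishCh   chan struct{}
}

func NewCollector(threshold, totalTx int) *Collector {
	return &Collector{
		threshold: threshold,
		totalTx:   totalTx,
		perPeer:   make(map[string]int),
		finishCh:  make(chan struct{}),
	}
}

// Commit records one delivered tx from a peer. Only when `threshold`
// peers have each seen all txs do we signal completion.
func (c *Collector) Commit(peer string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.perPeer[peer]++
	done := 0
	for _, n := range c.perPeer {
		if n >= c.totalTx {
			done++
		}
	}
	if done >= c.threshold {
		c.finishOnce.Do(func() { close(c.finishCh) })
	}
}

func main() {
	c := NewCollector(2, 3) // require 2 peers to each deliver 3 txs
	for i := 0; i < 3; i++ {
		c.Commit("peer0")
	}
	select {
	case <-c.finishCh:
		fmt.Println("finished too early") // would be the reported bug
	default:
		fmt.Println("waiting: threshold not met")
	}
	for i := 0; i < 3; i++ {
		c.Commit("peer1")
	}
	<-c.finishCh
	fmt.Println("threshold met")
}
```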
To Reproduce
Expected behavior
The threshold in BlockCollector takes effect.
Logs
Attach logs from Tape/Peer/Orderer here. And please format them with markdown!
Additional context
Add any other context about the problem here.
Today, `stupid` does not care whether a block contains invalid transactions (in most cases, it doesn't). However, it might be useful to display the success rate by inspecting blocks, especially when multiple peers are supported #10
When the project was created, the code panicked everywhere for convenience. We should now propagate errors up and panic only in the main func.
Describe the bug
Due to a migration, some Azure managed secrets are not available in the TWGC vault now.
The Azure pipeline file is considered improperly configured.
To Reproduce
Trigger the Azure pipeline in TWGC.
Logs
There was a resource authorization issue: "The pipeline is not valid. Job BuildAndReleaseBinaries: Step GitHubRelease input gitHubConnection references service connection github.com_stone-ch which could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz."
Additional context
https://dev.azure.com/Hyperledger/TWGC/_build/results?buildId=29187&view=results
Hint: Let's decouple Tape bugs from general issues such as TLS cert configuration errors, general performance-testing issues such as resources being eaten up, or Fabric bugs.
Hint: For any platform-specific issues, please try to reproduce on Mac/Linux. For example, it's hard for Tape maintainers to investigate issues on any kind of IoT device, as we don't have your device.
If tape supports this feature, then tape users will be able to benefit.
...
We have `client_per_conn` in config.yaml, which is passed as `client int` in `Proposer`:
https://github.com/guoger/stupid/blob/6b3b46f98c82c860f4bec8af76606478977b3ed2/infra/proposer.go#L16
However, this is not used. Effectively, we have one client per connection. We should make use of it (or delete it?)
I want to test my fabric, and this error showed up. How can I deal with that?
Hi sir,
How can I communicate with a peer through a proxy using your Tape? Thank you!
Like this:
peers:
peer0.adminorg.544330u:
url: grpcs://181.71.124.24:30062
grpcOptions:
ssl-target-name-override: nginx.bc9seqlghk0u.baas
grpc.primary_user_agent: /peer0-adminorg.bceqlghk0u
grpc.keepalive_time_ms: 600000
tlsCACerts:
path: networks/cert/nginx.bc9qlghk0u.baas.pem
to avoid
time="2020-08-01T05:39:43Z" level=error msg="Err processing proposal: %!s(<nil>), status: 500, addr: localhost:9051 \n"
Describe the bug
It works when calling the chaincode from a JS script, but fails in tape. The error is ProposalResponsePayloads from Peers do not match.
Part of my config file
It's strange that when I invoke some simple chaincode functions with short parameters, tape works normally. However, it fails with the following parameters in more complicated functions.
args:
- base_tx
- 04379014e00469b50713374741ff06a39e24c869d6e1855a684c159268ce67331a4af3ca09c4e74e7713e50aedc1d0533d1e3cdded042229d4d093b11bdd46b881
- 1234
Tape Logs
INFO[0000] Start sending transactions.
DEBU[0000] Start sending broadcast
DEBU[0000] start observer
ERRO[0000] ProposalResponsePayloads from Peers do not match
Peer Logs
dev-peer0.org2.example.com-fabcar_1-fb5bcc8bcbed61054a4b653d3984acf4ea372f2536012e48b6b793ad5a856732|2021-01-02T14:22:26.666Z info [c-api:lib/handler.js] [mychannel-50608e54] Calling chaincode Invoke() succeeded. Sending COMPLETED message back to peer
peer0.org1.example.com|2021-01-02 14:22:26.666 UTC [endorser] callChaincode -> INFO 0dc finished chaincode: fabcar duration: 14ms channel=mychannel txID=50608e54
dev-peer0.org1.example.com-fabcar_1-fb5bcc8bcbed61054a4b653d3984acf4ea372f2536012e48b6b793ad5a856732|2021-01-02T14:22:26.666Z info [c-api:lib/handler.js] [mychannel-50608e54] Calling chaincode Invoke() succeeded. Sending COMPLETED message back to peer
peer0.org1.example.com|2021-01-02 14:22:26.666 UTC [comm.grpc.server] 1 -> INFO 0dd unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=192.168.0.1:47232 grpc.code=OK grpc.call_duration=16.306043ms
peer0.org2.example.com|2021-01-02 14:22:26.667 UTC [endorser] callChaincode -> INFO 0ed finished chaincode: fabcar duration: 15ms channel=mychannel txID=50608e54
peer0.org2.example.com|2021-01-02 14:22:26.667 UTC [comm.grpc.server] 1 -> INFO 0ee unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=192.168.0.1:35290 grpc.code=OK grpc.call_duration=17.141611ms
peer0.org2.example.com|2021-01-02 14:22:26.669 UTC [comm.grpc.server] 1 -> INFO 0ef streaming call completed grpc.service=protos.Deliver grpc.method=DeliverFiltered grpc.peer_address=192.168.0.1:35298 error="context finished before block retrieved: context canceled" grpc.code=Unknown grpc.call_duration=21.640987ms
orderer.example.com|2021-01-02 14:22:26.669 UTC [orderer.common.broadcast] Handle -> WARN 0b2 Error reading from 192.168.0.1:48902: rpc error: code = Canceled desc = context canceled
orderer.example.com|2021-01-02 14:22:26.669 UTC [comm.grpc.server] 1 -> INFO 0b3 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.peer_address=192.168.0.1:48902 error="rpc error: code = Canceled desc = context canceled" grpc.code=Canceled grpc.call_duration=25.158893ms
Additional context
Add any other context about the problem here.
It seems that the performance squad is working on 1.4.x, and we should make sure it is continuously supported.
There's zero integration testing today in this project. Users/devs must manually spin up a Fabric network and try it out. We need a proper integration test that spins up a real Fabric network and runs stupid with light traffic against it.
GM (Chinese national cryptography) support
Only consider MSP and GM support for endorsement signatures.
TLS is not considered.
Sometimes creating a connection between tape and the peer fails.
Add a general 3-attempt retry for connection creation before returning an error.
N/A
N/A
[root@blockchain1 stupid]# ./tape config.yaml 1
ERRO[0000] Err processing proposal: %!s(), status: 500, addr: peer0.org2.example.com:9051
ERRO[0000] Err processing proposal: %!s(), status: 500, addr: peer0.org1.example.com:7051
We are using `fmt.Println` all over the place; it should be replaced with proper loggers.
if not, we should support it
automatically create release and upload assets (binary for win/osx/linux) when a new tag is created
Similar to go-sdk, we should avoid using fabric pkg as dependencies
We should create a `docs` dir to host details, in order to keep the current README.md simple.
Enhancement: make the mock test bind a random port rather than 10086.
When I use tape to test my fabric network, tape puts 10 transactions in each block by default. I was just wondering where I can change this setting? It seems I cannot change it in the config.yaml.
After testing, I queried the blockchain height, and I found all the transactions used for the test were really stored in the ledger and cannot be removed. So does Tape have a "clear" mechanism like Caliper? Thanks
(Write your answer here.)
So many thanks to this awesome tape project, which frees me from Caliper.
(Write your answer here.)
I am fine with the current content.
I am not sure whether we'd better refactor this part with a link to https://prometheus.io/docs/introduction/overview/ (if we suggest users use Prometheus...), so that the tips can be refactored as:
tips:
And I suppose later we will be able to use some results from the probe to replace "Increase number of messages per block in your channel configuration may help", e.g. "adjusting parameter xxx may help" for the sample, and so on.
But I suppose the content update can be in a further version/PR.
Originally posted by @SamYuan1990 in #114 (comment)