
Chia VDF


Building a wheel

Compiling chiavdf requires cmake, boost and GMP/MPIR.

python3 -m venv venv
source venv/bin/activate

pip install wheel setuptools_scm pybind11
pip wheel .

The primary build process for this repository uses GitHub Actions to build binary wheels for macOS, Linux (x64 and aarch64), and Windows, and to publish them with a source distribution on PyPI. See .github/workflows/build.yml. CMake uses FetchContent to download pybind11. Building is then managed by cibuildwheel. The published package can then be installed with pip install chiavdf.

Building Timelord and related binaries

In addition to building the required binary and source wheels for Windows, macOS and Linux, chiavdf can be used to compile vdf_client and vdf_bench. vdf_client is the core VDF process that completes the Proof of Time submitted to it by the Timelord. The repo also includes vdf_bench, a benchmarking tool for estimating the iterations per second (IPS) of a given CPU. Try ./vdf_bench square_asm 250000 for an IPS estimate.

To build vdf_client set the environment variable BUILD_VDF_CLIENT to "Y". export BUILD_VDF_CLIENT=Y.

Similarly, to build vdf_bench set the environment variable BUILD_VDF_BENCH to "Y". export BUILD_VDF_BENCH=Y.

This is currently automated via pip in the install-timelord.sh script in the chia-blockchain repository which depends on this repository.

If you're running a timelord, the following tests are available, depending on which type of timelord you are running:

./1weso_test, in case you're running in sanitizer_mode.

./2weso_test, in case you're running a timelord that extends the chain and you're running the slow algorithm.

./prover_test, in case you're running a timelord that extends the chain and you're running the fast algorithm.

Those tests will simulate the vdf_client and verify the produced proofs for correctness.

Contributing and workflow

Contributions are welcome and more details are available in chia-blockchain's CONTRIBUTING.md.

The master branch is the latest released version on PyPI. Note that at times chiavdf will be ahead of the release version that chia-blockchain requires in its master/release version, in preparation for a new chia-blockchain release. Please branch or fork master and then create a pull request to the master branch. Linear merging is enforced on master and merging requires a completed review. PRs will kick off a CI build and analysis of chiavdf at lgtm.com. Please make sure your build is passing and that it does not increase alerts at lgtm.

Background from prior VDF competitions

Copyright 2018 Ilya Gorodetskov [email protected]

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Our VDF construction is described in classgroup.pdf. The implementation details of the squaring and proving phases are described below.

Main VDF Loop

The main VDF loop produces repeated squarings of the generator form (i.e. calculates y(n) = g^(2^n)) as fast as possible, until the program is interrupted. Sundersoft's entry from Chia's 2nd VDF contest is used, together with the fast reducer used in Pulmark's entry. This approach is described below:
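
As a toy sketch of the loop's structure (modular squaring stands in for class group form squaring; the modulus parameter is an illustrative stand-in and is not part of chiavdf), the sequential computation looks like:

```python
def vdf_eval(g, n_iters, modulus):
    """Compute y = g^(2^n_iters) by n_iters sequential squarings.

    chiavdf squares binary quadratic forms in a class group; modular
    squaring is used here only to show the inherently sequential shape
    of the loop: each squaring depends on the previous result, so the
    work cannot be parallelized across iterations.
    """
    y = g
    for _ in range(n_iters):
        y = (y * y) % modulus
    return y
```

For example, vdf_eval(3, 10, 1000003) agrees with pow(3, 2**10, 1000003), but is computed one squaring at a time.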

The NUDUPL algorithm is used. The equations are based on cryptoslava's equations from the 1st contest. They were modified slightly to increase the level of parallelism.

The GCD is a custom implementation with scalar integers. There are two base cases: one uses a lookup table with continued fractions and the other uses the Euclidean algorithm with a division table. The division table algorithm is slightly faster even though it has about 2x as many iterations.

After the base case, there is a 128 bit GCD that generates 64 bit cofactor matrices with Lehmer's algorithm. This is required to make the long integer multiplications efficient (Flint's implementation doesn't do this).

The GCD also implements Flint's partial xgcd function, but the output is slightly different. This implementation will always return an A value which is > the threshold and a B value which is <= the threshold. For a normal GCD, the threshold is 0, B is 0, and A is the GCD. Also the interfaces are slightly different.

Scalar integers are used for the GCD. I don't expect any speedup for the SIMD integers that were used in the last implementation since the GCD only uses 64x1024 multiplications, which are too small and have too high of a carry overhead for the SIMD version to be faster. In either case, most of the time seems to be spent in the base case so it shouldn't matter too much.

If SIMD integers are used with AVX-512, doubles have to be used because the multiplier sizes for doubles are significantly larger than for integers. There is an AVX-512 extension to support larger integer multiplications but no processor implements it yet. It should be possible to do a 50 bit multiply-add into a 100 bit accumulator with 4 fused multiply-adds if the accumulators have a special nonzero initial value and the inputs are scaled before the multiplication. This would make AVX-512 about 2.5x faster than scalar code for 1024x1024 integer multiplications (assuming the scalar code is unrolled and uses ADOX/ADCX/MULX properly, and the CPU can execute this at 1 cycle per iteration which it probably can't).

The GCD is parallelized by calculating the cofactors in a separate slave thread. The master thread will calculate the cofactor matrices and send them to the slave thread. Other calculations are also parallelized.

The VDF implementation from the first contest is still used as a fallback and is called about once every 5000 iterations. The GCD will encounter large quotients about this often and these are not implemented. This has a negligible effect on performance. Also, the NUDUPL case where A<=L is not implemented; it will fall back to the old implementation in this case (this never happens outside of the first 20 or so iterations).

There is also corruption detection by calculating C with a non-exact division and making sure the remainder is 0. This detected all injected random corruptions that I tested. No corruptions caused by bugs were observed during testing. This cannot correct for the sign of B being wrong.
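
The corruption check can be sketched as follows: for a form (a, b, c) with discriminant D = b^2 - 4ac, c is recomputed as (b^2 - D) / (4a) with an explicit remainder check. This is a sketch only; the real code operates on chiavdf's internal form representation.

```python
def compute_c_checked(a, b, D):
    """Recompute c from a, b and the discriminant D = b^2 - 4ac.

    The division must be exact; a nonzero remainder signals that the
    form was corrupted somewhere in the pipeline. As noted in the
    text, this cannot catch a corrupted sign of b, since b only
    appears squared.
    """
    q, r = divmod(b * b - D, 4 * a)
    if r != 0:
        raise ValueError("corruption detected: (b^2 - D) not divisible by 4a")
    return q
```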

GCD continued fraction lookup table

This is implemented in gcd_base_continued_fractions.h and asm_gcd_base_continued_fractions.h. The division table implementation is the same as the previous entry and was discussed there. Currently the division table is only used if AVX2 is enabled but it could be ported to SSE or scalar code easily. Both implementations have about the same performance.

The initial quotient sequence of gcd(a,b) is the same as the initial quotient sequence of gcd(a*2^n/b, 2^n) for any n. This is because the GCD quotients are the same as the continued fraction quotients of a/b, and the initial continued fraction quotients only depend on the initial bits of a/b. This makes it feasible to have a lookup table since it now only has one input.
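
This identity is easy to check numerically. A minimal sketch (helper name is made up):

```python
def gcd_quotients(a, b):
    """Quotient sequence produced by the Euclidean algorithm on (a, b).

    These are exactly the continued fraction quotients of a/b, which
    is why the initial quotients depend only on the leading bits of
    the ratio.
    """
    qs = []
    while b:
        qs.append(a // b)
        a, b = b, a % b
    return qs

# Initial quotients of gcd(a, b) match those of gcd(a*2^n // b, 2^n),
# so a lookup table keyed on the single value a*2^n // b suffices:
a, b, n = 97, 35, 16
assert gcd_quotients(a, b)[:5] == gcd_quotients(a * 2**n // b, 2**n)[:5]
```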

a*2^n/b is calculated by doing a double precision division of a/b, and then truncating the lower bits. Some of the exponent bits are used in the table in addition to the fraction bits; this makes each slot of the table vary in size depending on what the exponent is. If the result is outside the table bounds, then the division result is floored to fall back to the euclidean algorithm (this is very rare).

The table is calculated by iterating all of the possible continued fractions that have a certain initial quotient sequence. Iteration ends when all of these fractions are either outside the table or they don't fully contain at least one slot of the table. Each slot that is fully contained by such a fraction is updated so that its quotient sequence equals the fraction's initial quotient sequence. Once this is complete, the cofactor matrices are calculated from the quotient sequences. Each cofactor matrix is 4 doubles.

The resulting code seems to have too many instructions so it doesn't perform very well. There might be some way to optimize it. It was written for SSE so that it would run on both processors.

This might work better on an FPGA possibly with low latency DRAM or SRAM (compared to the euclidean algorithm with a division table). There is no limit to the size of the table but doubling the latency would require the number of bits in the table to also be doubled to have the same performance.

Other GCD code

The gcd_128 function calculates a 128 bit GCD using Lehmer's algorithm. It is pretty straightforward and uses only unsigned arithmetic. Each cofactor matrix can only have two possible signs: [+ -; - +] or [- +; + -]. The gcd_unsigned function uses unsigned arithmetic and a jump table to apply the 64-bit cofactor matrices to the A and B values. It uses ADOX/ADCX/MULX if they are available and falls back to ADC/MUL otherwise. It will track the last known size of A to speed up the bit shifts required to get the top 128 bits of A.
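
The sign claim can be seen with a plain extended GCD that tracks the 2x2 cofactor matrix (a sketch, not the chiavdf implementation):

```python
def xgcd_with_matrix(a, b):
    """Return (gcd, M) where M applied to the inputs gives (gcd, 0).

    The rows of M swap and alternate in sign each Euclidean step, so
    the final matrix always has one of the two checkerboard sign
    patterns [+ -; - +] or [- +; + -]. This is what lets gcd_unsigned
    work with unsigned magnitudes plus a single sign flag.
    """
    m = [[1, 0], [0, 1]]  # invariant: (a, b) == m applied to (a0, b0)
    while b:
        q = a // b
        a, b = b, a - q * b
        m[0], m[1] = m[1], [m[0][0] - q * m[1][0], m[0][1] - q * m[1][1]]
    return a, m
```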

No attempt was made to try to do the A and B long integer multiplications on a separate thread; I wouldn't expect any performance improvement from this.

Threads

There is a master thread and a slave thread. The slave thread only exists for each batch of 5000 or so squarings and is then destroyed and recreated for the next batch (this has no measurable overhead). If the original VDF is used as a fallback, the batch ends and the slave thread is destroyed.

Each thread has a 64-bit counter that only it can write to. Also, during a squaring iteration, it will not overwrite any value that it has previously written and transmitted to the other thread. Each squaring is split up into phases. Each thread will update its counter at the start of the phase (the counter can only be increased, not decreased). It can then wait on the other thread's counter to reach a certain value as part of a spin loop. If the spin loop takes too long, an error condition is raised and the batch ends; this should prevent any deadlocks from happening.
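
A minimal Python sketch of this counter protocol (Python's threading stands in for the C++ threads here; the phase count and timeout are made up, and Python's interpreter serialises these reads and writes, so the x86 memory ordering discussion does not carry over):

```python
import threading
import time

class SpinCounter:
    """A monotonic counter written by exactly one thread; others only read."""
    def __init__(self):
        self.value = 0

    def advance(self, v):
        assert v >= self.value  # the counter can only be increased
        self.value = v

    def wait_for(self, v, timeout=5.0):
        # Spin until the owner thread's counter reaches v; a timeout
        # raises an error instead of deadlocking, ending the batch.
        deadline = time.monotonic() + timeout
        while self.value < v:
            if time.monotonic() > deadline:
                raise TimeoutError("spin loop exceeded timeout; ending batch")

def demo():
    master, slave = SpinCounter(), SpinCounter()
    result = []

    def slave_fn():
        for phase in (1, 2, 3):
            slave.advance(phase)    # announce this phase was reached
            master.wait_for(phase)  # spin until the master catches up
            result.append(phase)

    t = threading.Thread(target=slave_fn)
    t.start()
    for phase in (1, 2, 3):
        slave.wait_for(phase)
        master.advance(phase)
    t.join()
    return result
```

Each counter has a single writer, mirroring the rule that a thread never overwrites a value it has already transmitted to the other thread.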

No CPU fences or atomics are required since each value can only be written to by one thread and since x86 enforces acquire/release ordering on all memory operations. Compiler memory fences are still required to prevent the compiler from caching or reordering memory operations.

The GCD master thread will increment the counter when a new cofactor matrix has been outputted. The slave thread will spin on this counter and then apply the cofactor matrix to the U or V vector to get a new U or V vector.

An attempt was made to use modular arithmetic to calculate k directly, but this slowed down the program because GMP's modulo and integer multiply operations are not fast enough. It also makes the integer multiplications bigger.

The speedup isn't very high since most of the time is spent in the GCD base case and these can't be parallelized.

Generating proofs

The nested wesolowski proofs (n-wesolowski) are used to check the correctness of a VDF result. (Simple) Wesolowski proofs are described in A Survey of Two Verifiable Delay Functions. In order to prove h = g^(2^T), an n-wesolowski proof uses n intermediate simple wesolowski proofs. Given h, g, T, t1, t2, ..., tn, h1, h2, ..., hn, a correct n-wesolowski proof will verify the following:

h1 = g^(2^t1)
h2 = h1^(2^t2)
h3 = h2^(2^t3)
...
hn = h(n-1)^(2^tn)

Additionally, we must have:

t1 + t2 + ... + tn = T
hn = h
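
A verifier for the chaining constraints alone might look like the sketch below (modular exponentiation stands in for class group exponentiation, and the per-segment simple wesolowski proof checks are omitted):

```python
def check_chain(g, h, T, segments, modulus):
    """Check the n-wesolowski chaining constraints.

    segments is a list of (t_i, h_i) pairs as in the text. Verifies
    h1 = g^(2^t1), h_i = h_(i-1)^(2^t_i), sum(t_i) = T and h_n = h.
    A real verifier would also check a simple wesolowski proof for
    each link instead of recomputing the exponentiation.
    """
    cur, total = g, 0
    for t, hi in segments:
        if pow(cur, 2**t, modulus) != hi:
            return False
        cur, total = hi, total + t
    return total == T and cur == h
```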

The algorithm will generate at most 64-wesolowski proofs. Some intermediate wesolowski proofs are stored in parallel with the main VDF loop. The goal is to have an n-wesolowski proof almost ready as soon as the main VDF loop finishes computing h = g^(2^T), for a T that we're interested in. We'll call a segment a tuple (y, x, T) for which we're interested in a simple wesolowski proof that y = x^(2^T). We'll call a segment finished when we've finished computing its proof.

Segments stored

We'll store finished segments of length 2^x for x being multiples of 2 greater than or equal to 16. The current implementation limits the maximum segment size to 2^30, but this can be increased if needed. Let P = 16+2*l. After each 2^P steps calculated by the main VDF loop, we'll store a segment proving that we've correctly done the 2^P steps. Formally, let x be the form after k*2^P steps, y be the form after (k+1)*2^P steps, for each k >= 0, for each P = 16+2*l. Then, we'll store a segment (y, x, 2^P), together with a simple wesolowski proof.

Segment threads

In order to finish a segment of length T=2^P, the number of iterations to run for is T/k + l*2^(k+1) and the intermediate storage required is T/(k*l), for some parameters k and l, as described in the paper. The squarings used to finish a segment are about 2 times as slow as the ones used by the main VDF loop. Even so, finishing a segment is much faster than producing its y value by the main VDF loop. This makes it possible to work on finishing multiple segments in the time the main VDF loop takes to advance 2^16 more steps.

The parameters used in finishing segments, for T=2^16, are k=10 and l=1. Above that, parameters are k=12 and l=2^(P-18). Note that, for P >= 18, the intermediate storage needed for a segment is constant (i.e. 2^18/12 forms stored in memory).
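
Plugging the parameters into the cost expressions quoted above (a sketch; the function name is made up):

```python
def prover_cost(T, k, l):
    """Cost of finishing a segment of length T with parameters k, l:
    roughly T/k + l*2^(k+1) iterations and T/(k*l) stored forms."""
    iterations = T // k + l * 2**(k + 1)
    storage = T // (k * l)
    return iterations, storage

# T = 2^16 uses k=10, l=1; T = 2^P for P >= 18 uses k=12, l=2^(P-18),
# which pins the storage at a constant 2^18/12 forms:
for P in (18, 20, 22):
    _, storage = prover_cost(2**P, 12, 2**(P - 18))
    assert storage == 2**18 // 12
```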

The Prover class is responsible for finishing a segment. It implements pause/resume functionality, so its work can be paused and later resumed from the point it stopped. For each unfinished segment generated by the main VDF loop, a Prover instance is created, which will eventually finish the segment.

Segment threads are responsible for deciding which Prover instances are currently running. In the current implementation there are 3 segment threads (the number is configurable), so at most 3 Prover instances will run at once, on different threads (other Provers will be paused). The segment threads always pick the segments with the shortest length to run; in case of a tie, the segments received earliest have priority. Every time a new segment arrives, or a segment gets finished, some pausing/resuming of Provers is done if needed. Pausing keeps at most 3 Provers running at any time, whilst resuming happens when fewer than 3 Provers are working but some Provers are paused.

All the segments of lengths 2^16, 2^18 and 2^20 will be finished relatively soon after the main VDF worker produced them, while the segments of length 2^22 and upwards will lag behind the main VDF worker a little. Eventually, all the higher size segments will be finished, the work on them being done repeatedly via pausing (when a smaller size segment arrives) and resuming (when all smaller size segments are finished).

Currently, 4 more segment threads are added after the main VDF loop finishes 500 million iterations (after about 1 hour of running). This ensures that even the largest segments get finished. This optimisation is only enabled on machines supporting at least 16 concurrent threads.

Generating n-wesolowski proof

Let T be an iteration we are interested in. Firstly, the main VDF loop will need to calculate at least T iterations. Then, in order to quickly get an n-wesolowski proof, we'll concatenate finished segments. We want the proof to be as short as possible, so we'll always pick finished segments of the maximum length possible. If such segments aren't finished, we'll choose lower length segments. A segment of length 2^(16 + 2*p) can always be replaced with 4 segments of length 2^(16 + 2*p - 2). The proof will be created shortly after the main VDF loop produces the result, as the 2^16 length segments will always be up to date with the main VDF loop (and, in the worst case, we can always concatenate 2^16 length segments if bigger sizes are not finished yet). It's possible that after the concatenation we'll still need to prove up to 2^16 iterations (no segment is able to cover anything less than 2^16). This last work is done in parallel with the main VDF loop, as an optimisation.
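
The greedy choice of the largest possible segments can be sketched as follows (a simplification that ignores which segments are actually finished; the function name is made up):

```python
def segment_plan(T, max_power=30):
    """Greedily cover T iterations with segment lengths 2^p,
    p in {16, 18, ..., max_power}, largest first.

    Returns the chosen lengths and the leftover (< 2^16), which must
    be proven directly as described in the text. Replacing one 2^p
    segment with four 2^(p-2) segments preserves the total, which is
    what allows falling back when large segments aren't finished.
    """
    plan, remaining = [], T
    for p in range(max_power, 15, -2):
        size = 1 << p
        while remaining >= size:
            plan.append(size)
            remaining -= size
    return plan, remaining
```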

The program limits the proof size to 64-wesolowski. If the number of iterations is very large, it's possible the concatenation won't fit into this. In this case, the program will retry proving every minute, until there are enough large segments to fit the 64-wesolowski limit. However, in almost all cases, the concatenation will fit the 64-wesolowski limit on the first try.

Since the maximum segment size is 2^30 and we can use at most 64 segments in a concatenation, the program will prove at most 2^36 iterations. This can be increased if needed.

Intermediates storage

In order to finish segments, some intermediate values need to be stored for each segment. For each different possible segment length, we use a sliding window of length 20 to store those. Hence, for each segment length, we'll store only the intermediate values needed for the last 20 segments produced by the main VDF loop. Since finishing segments is faster than producing them by the main VDF loop, we assume the segment threads won't be behind by more than 20 segments from the main VDF loop, for each segment length. Thanks to the sliding window technique, the memory used will always be constant.

Generally, the main VDF loop performs all the storing, after computing a form we're interested in. However, since storing is very frequent and expensive (GMP operations), this will slow down the main VDF loop.

For machines with at least 16 concurrent threads, an optimization is provided: the main VDF loop does only repeated squaring, without storing any form. After each 2^15 steps are performed, a new thread starts redoing the work for 2^15 more steps, this time storing the intermediate values as well. All the intermediate threads and the main VDF loop work in parallel. The only purpose of the main VDF loop then becomes to produce the starting values for the intermediate threads, as fast as possible. The squarings used in the intermediate threads are 2 times slower than the ones used in the main VDF loop. The intermediates are expected to lag behind the main VDF loop by only 2^15 iterations at any point: after 2^16 iterations are done by the main VDF loop, the first thread doing the first 2^15 intermediate values has already finished, and at that point half of the work of the second thread doing the last 2^15 intermediate values should already be done.

Contributors

altendky, aminekhaldi, arvidn, chiaautomation, cmmarslender, dannywillems, dependabot[bot], emlowe, fchirica, hoffmang9, k3a, mariano54, mattxlee, mengland17, richardkiss, rostislav, sundersoft2, timkuijsten, wallentx, wjblanke, xdustinface, xearl4, yostra

chiavdf's Issues

Build issues with FreeBSD

I've been working through the FreeBSD issues with 1.0.2 most of the day.

  • First, the use of GNU make invoked by make isn't portable:
--- setup.py.orig	2021-05-27 18:45:36 UTC
+++ setup.py
@@ -73,5 +73,5 @@ def copy_vdf_bench(build_dir, install_dir):


     def invoke_make(**kwargs):
    -    subprocess.check_output("make -C src -f Makefile.vdf-client", shell=True)
    +    subprocess.check_output("gmake -C src -f Makefile.vdf-client", shell=True)

Make on everything but Linux is BSD make; make on Linux is GNU make. This fix is a hack for FreeBSD. There should be a
check to see if you are on *BSD or macOS and use gmake. The makefile is pretty small and could be ported to work on
both gmake and make; here is a starter: https://nullprogram.com/blog/2017/08/20/

  • Second, you need to include the system 3rd party options in compiles. I've written GNU make lines to go with the GNU makefile:
    --- src/Makefile.vdf-client.orig	2021-05-27 18:45:36 UTC
    +++ src/Makefile.vdf-client
    @@ -12,6 +12,11 @@ CXXFLAGS += -flto -std=c++1z -D VDF_MODE=0 -D FAST_MAC
     ifeq ($(UNAME),Darwin)
     CXXFLAGS += -D CHIAOSX=1
     endif
    +ifeq ($(UNAME),FreeBSD)
    +AS = clang
    +CXXFLAGS += -I /usr/local/include
    +LDFLAGS += -L /usr/local/lib
    +endif

     OPT_CFLAGS = -O3
  • Finally, the assembler code doesn't compile in clang. I suspect MacOS will have trouble as well, as it also uses clang. It's beyond my scope, but you can look at portable packages like xmrig that compile assembler on MacOS, Linux, FreeBSD, etc. That should help. I seem to notice different assembler is needed for each arch/processor (Linux/BSD or MacOS and Intel/Arm).
    asm_compiled.s:
asm_compiled.s:1699:14: error: invalid instruction mnemonic 'cmoveq'
cel_Xx_1699: CMOVEQ RAX, R8              # gcd_128:525                        CMOVEQ `tmp_0, `ab_0_0
             ^~~~~~
asm_compiled.s:1700:14: error: invalid instruction mnemonic 'cmoveq'
cel_Xx_1700: CMOVEQ RDX, RCX             # gcd_128:526                        CMOVEQ `tmp_1, `tmp_3
             ^~~~~~
asm_compiled.s:1721:34: error: unknown token in expression
cel_Xx_1721: MOV RCX, OFFSET FLAT:_cel_label_26 # gcd_128:575                        MOV `tmp_3, OFFSET FLAT:_cel_label_26
                                 ^
asm_compiled.s:1728:34: error: unknown token in expression
cel_Xx_1728: MOV RAX, OFFSET FLAT:_cel_label_27 # gcd_128:576                        MOV `tmp_0, OFFSET FLAT:_cel_label_27
                                 ^
asm_compiled.s:2218:14: error: invalid instruction mnemonic 'cmoveq'
cel_Xx_2218: CMOVEQ RAX, R8              # gcd_128:525                        CMOVEQ `tmp_0, `ab_0_0
             ^~~~~~
asm_compiled.s:2219:14: error: invalid instruction mnemonic 'cmoveq'
cel_Xx_2219: CMOVEQ RDX, RCX             # gcd_128:526                        CMOVEQ `tmp_1, `tmp_3
             ^~~~~~
asm_compiled.s:2234:34: error: unknown token in expression
cel_Xx_2234: MOV RCX, OFFSET FLAT:_cel_label_26 # gcd_128:575                        MOV `tmp_3, OFFSET FLAT:_cel_label_26
                                 ^
asm_compiled.s:2235:34: error: unknown token in expression
cel_Xx_2235: MOV RAX, OFFSET FLAT:_cel_label_27 # gcd_128:576                        MOV `tmp_0, OFFSET FLAT:_cel_label_27
                                 ^

avx2_asm_compiled.s:

avx2_asm_compiled.s:3622:15: error: invalid instruction mnemonic 'cmoveq'
avx2_Xx_3622: CMOVEQ RAX, R8             # gcd_128:525                        CMOVEQ `tmp_0, `ab_0_0
              ^~~~~~
avx2_asm_compiled.s:3623:15: error: invalid instruction mnemonic 'cmoveq'
avx2_Xx_3623: CMOVEQ RDX, RCX            # gcd_128:526                        CMOVEQ `tmp_1, `tmp_3
              ^~~~~~
avx2_asm_compiled.s:3644:35: error: unknown token in expression
avx2_Xx_3644: MOV RCX, OFFSET FLAT:_avx2_label_31 # gcd_128:583                        MOV `tmp_3, OFFSET FLAT:_avx2_label_31
                                  ^
avx2_asm_compiled.s:3645:35: error: unknown token in expression
avx2_Xx_3645: MOV RAX, OFFSET FLAT:_avx2_label_12 # gcd_128:584                        MOV `tmp_0, OFFSET FLAT:_avx2_label_12
                                  ^
avx2_asm_compiled.s:4007:15: error: invalid instruction mnemonic 'cmoveq'
avx2_Xx_4007: CMOVEQ RAX, R8             # gcd_128:525                        CMOVEQ `tmp_0, `ab_0_0
              ^~~~~~
avx2_asm_compiled.s:4008:15: error: invalid instruction mnemonic 'cmoveq'
avx2_Xx_4008: CMOVEQ RDX, RCX            # gcd_128:526                        CMOVEQ `tmp_1, `tmp_3
              ^~~~~~
avx2_asm_compiled.s:4023:35: error: unknown token in expression
avx2_Xx_4023: MOV RCX, OFFSET FLAT:_avx2_label_31 # gcd_128:583                        MOV `tmp_3, OFFSET FLAT:_avx2_label_31
                                  ^
avx2_asm_compiled.s:4024:35: error: unknown token in expression
avx2_Xx_4024: MOV RAX, OFFSET FLAT:_avx2_label_12 # gcd_128:584                        MOV `tmp_0, OFFSET FLAT:_avx2_label_12
                                  ^

avx512_asm_compiled.s:

ld: error: undefined symbol: main
>>> referenced by crt1_c.c:75 (/usr/src/lib/csu/amd64/crt1_c.c:75)
>>>               /usr/lib/crt1.o:(_start)
clang: error: linker command failed with exit code 1 (use -v to see invocation)

Execute pip wheel.

err log:
Processing /Users/liuhao/Documents/pythonProject/chiavdf
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Building wheels for collected packages: chiavdf
Building wheel for chiavdf (PEP 517) ... error
ERROR: Command errored out with exit status 1:
command: /Users/liuhao/Documents/pythonProject/chiavdf/venv/bin/python3.9 /Users/liuhao/Documents/pythonProject/chiavdf/venv/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /var/folders/tn/qd5mzld10wv5xz4qp1wz9t040000gn/T/tmplx6mqoi7
cwd: /private/var/folders/tn/qd5mzld10wv5xz4qp1wz9t040000gn/T/pip-req-build-dw9sbrki
Complete output (70 lines):
running bdist_wheel
running build
running build_ext
-- The C compiler identification is AppleClang 12.0.0.12000032
-- The CXX compiler identification is AppleClang 12.0.0.12000032
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Library/Developer/CommandLineTools/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Library/Developer/CommandLineTools/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
GMP_INCLUDES=GMP_INCLUDES-NOTFOUND
CMake Error at /usr/local/Cellar/cmake/3.19.7/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:218 (message):
Could NOT find GMP (missing: GMP_INCLUDES GMP_LIBRARIES GMP_VERSION_OK)
(Required is at least version "5.1.0")
Call Stack (most recent call first):
/usr/local/Cellar/cmake/3.19.7/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:582 (_FPHSA_FAILURE_MESSAGE)
cmake/FindGMP.cmake:74 (find_package_handle_standard_args)
CMakeLists.txt:17 (find_package)

-- Configuring incomplete, errors occurred!
See also "/private/var/folders/tn/qd5mzld10wv5xz4qp1wz9t040000gn/T/pip-req-build-dw9sbrki/CMakeFiles/CMakeOutput.log".
See also "/private/var/folders/tn/qd5mzld10wv5xz4qp1wz9t040000gn/T/pip-req-build-dw9sbrki/CMakeFiles/CMakeError.log".
Traceback (most recent call last):
File "/Users/liuhao/Documents/pythonProject/chiavdf/venv/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 280, in
main()
File "/Users/liuhao/Documents/pythonProject/chiavdf/venv/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 263, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/Users/liuhao/Documents/pythonProject/chiavdf/venv/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 204, in build_wheel
return _build_backend().build_wheel(wheel_directory, config_settings,
File "/private/var/folders/tn/qd5mzld10wv5xz4qp1wz9t040000gn/T/pip-build-env-i48kintb/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 221, in build_wheel
return self._build_with_temp_dir(['bdist_wheel'], '.whl',
File "/private/var/folders/tn/qd5mzld10wv5xz4qp1wz9t040000gn/T/pip-build-env-i48kintb/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 207, in _build_with_temp_dir
self.run_setup()
File "/private/var/folders/tn/qd5mzld10wv5xz4qp1wz9t040000gn/T/pip-build-env-i48kintb/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 150, in run_setup
exec(compile(code, file, 'exec'), locals())
File "setup.py", line 266, in
setup(
File "/private/var/folders/tn/qd5mzld10wv5xz4qp1wz9t040000gn/T/pip-build-env-i48kintb/overlay/lib/python3.9/site-packages/setuptools/init.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/usr/local/Cellar/[email protected]/3.9.2_2/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/local/Cellar/[email protected]/3.9.2_2/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/usr/local/Cellar/[email protected]/3.9.2_2/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/private/var/folders/tn/qd5mzld10wv5xz4qp1wz9t040000gn/T/pip-build-env-i48kintb/overlay/lib/python3.9/site-packages/wheel/bdist_wheel.py", line 299, in run
self.run_command('build')
File "/usr/local/Cellar/[email protected]/3.9.2_2/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/local/Cellar/[email protected]/3.9.2_2/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/Cellar/[email protected]/3.9.2_2/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/usr/local/Cellar/[email protected]/3.9.2_2/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/local/Cellar/[email protected]/3.9.2_2/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "setup.py", line 112, in run
self.build_extension(ext)
File "setup.py", line 139, in build_extension
subprocess.check_call(["cmake", ext.sourcedir] + cmake_args, env=env)
File "/usr/local/Cellar/[email protected]/3.9.2_2/Frameworks/Python.framework/Versions/3.9/lib/python3.9/subprocess.py", line 373, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '/private/var/folders/tn/qd5mzld10wv5xz4qp1wz9t040000gn/T/pip-req-build-dw9sbrki/src', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=/private/var/folders/tn/qd5mzld10wv5xz4qp1wz9t040000gn/T/pip-req-build-dw9sbrki/build/lib.macosx-11-x86_64-3.9', '-DPYTHON_EXECUTABLE=/Users/liuhao/Documents/pythonProject/chiavdf/venv/bin/python3.9', '-DCMAKE_BUILD_TYPE=Release']' returned non-zero exit status 1.

ERROR: Failed building wheel for chiavdf
Failed to build chiavdf
ERROR: Failed to build one or more wheels

vdf witness_type distribution changed since ASIC timelords ramp up

We are just evaluating if another blueboxing run would make sense and therefore we are doing some blockchain analysis.

Since the ASIC timelords got activated (assuming it was around block height 4387596), the variance of the witness types for the different vdf proofs surprisingly increased by a lot, and not only towards more compact proofs.

For instance, the avg proof size for rc_ip (reward_chain_ip_proof) increased from about 382 B before to 712 B after.

Questions

  • Is this expected behaviour with the new ASIC timelords?
  • Could this be caused by a bug in the new VDF hardware's software?

Data

Here is the data we gathered for all the various proofs.

Overview

  • number of witness_types per VDF proof type
  • overall size per witness_type
  • currently compactifiable size
| witness_type | size [B] | rc_eos | rc_ip | rc_sp | cc_eos | cc_ip | cc_sp | icc_eos | icc_ip | size current | size optimal | compactifiable current | compactifiable optimal |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 100 | 550 | 18631 | 18157 | 115785 | 2901115 | 2924074 | 115896 | 17199 | 611140700 | 611140700 | 605687000 | 605687000 |
| 1 | 241 | 1391 | 38277 | 44772 | 1205 | 37920 | 44248 | 1194 | 43234 | 51150081 | 21224100 | 20380647 | 8456700 |
| 2 | 382 | 137864 | 4426425 | 4370981 | 23307 | 1546096 | 1467114 | 26906 | 4218019 | 6194783984 | 1621671200 | 1170227586 | 306342300 |
| 3 | 523 | 819 | 32572 | 24211 | 706 | 32250 | 23941 | 0 | 100 | 59935277 | 11459900 | 29757131 | 5689700 |
| 4 | 664 | 938 | 29674 | 30207 | 830 | 29337 | 29880 | 0 | 167 | 80365912 | 12103300 | 39871208 | 6004700 |
| 5 | 805 | 432 | 19420 | 13947 | 372 | 19133 | 13722 | 0 | 182 | 54102440 | 6720800 | 26747735 | 3322700 |
| 6 | 946 | 533 | 18020 | 17505 | 480 | 17717 | 17292 | 0 | 189 | 67862256 | 7173600 | 33572594 | 3548900 |
| 7 | 1087 | 254 | 11223 | 8058 | 225 | 11025 | 7944 | 0 | 116 | 42224515 | 3884500 | 20863878 | 1919400 |
| 8 | 1228 | 320 | 10518 | 10170 | 291 | 10370 | 10046 | 0 | 79 | 51323032 | 4179400 | 25428196 | 2070700 |
| 9 | 1369 | 150 | 6392 | 4573 | 135 | 6326 | 4507 | 0 | 34 | 30278173 | 2211700 | 15015192 | 1096800 |
| 10 | 1510 | 203 | 6226 | 5921 | 180 | 6191 | 5868 | 0 | 11 | 37146000 | 2460000 | 18480890 | 1223900 |
| 11 | 1651 | 79 | 3758 | 2714 | 70 | 3728 | 2678 | 0 | 2 | 21510879 | 1302900 | 10691876 | 647600 |
| 12 | 1792 | 128 | 3502 | 3596 | 112 | 3476 | 3564 | 0 | 1 | 25767168 | 1437900 | 12816384 | 715200 |
| 13 | 1933 | 60 | 2226 | 1570 | 50 | 2206 | 1554 | 0 | 0 | 14818378 | 766600 | 7364730 | 381000 |
| 14 | 2074 | 76 | 2143 | 2078 | 68 | 2129 | 2055 | 0 | 0 | 17730626 | 854900 | 8818648 | 425200 |
| 15 | 2215 | 43 | 1357 | 884 | 35 | 1344 | 875 | 0 | 0 | 10051670 | 453800 | 4992610 | 225400 |
| 16 | 2356 | 44 | 1256 | 1266 | 41 | 1255 | 1252 | 0 | 0 | 12048584 | 511400 | 6003088 | 254800 |
| 17 | 2497 | 25 | 780 | 556 | 22 | 768 | 551 | 0 | 0 | 6746894 | 270200 | 3348477 | 134100 |
| 18 | 2638 | 24 | 742 | 762 | 21 | 736 | 750 | 0 | 0 | 8006330 | 303500 | 3975466 | 150700 |
| 19 | 2779 | 10 | 440 | 347 | 8 | 436 | 344 | 0 | 0 | 4404715 | 158500 | 2189852 | 78800 |
| 20 | 2920 | 14 | 424 | 467 | 14 | 423 | 466 | 0 | 0 | 5279360 | 180800 | 2636760 | 90300 |
| 21 | 3061 | 4 | 272 | 196 | 3 | 271 | 194 | 0 | 0 | 2877340 | 94000 | 1432548 | 46800 |
| 22 | 3202 | 6 | 236 | 241 | 5 | 235 | 239 | 0 | 0 | 3080324 | 96200 | 1533758 | 47900 |
| 23 | 3343 | 8 | 166 | 121 | 8 | 165 | 119 | 0 | 0 | 1962341 | 58700 | 976156 | 29200 |
| 24 | 3484 | 5 | 158 | 147 | 4 | 155 | 145 | 0 | 0 | 2139176 | 61400 | 1059136 | 30400 |
| 25 | 3625 | 0 | 97 | 99 | 0 | 97 | 98 | 0 | 0 | 1417375 | 39100 | 706875 | 19500 |
| 26 | 3766 | 2 | 112 | 90 | 1 | 111 | 89 | 0 | 0 | 1525230 | 40500 | 756966 | 20100 |
| 27 | 3907 | 4 | 70 | 47 | 4 | 70 | 46 | 0 | 0 | 941587 | 24100 | 468840 | 12000 |
| 28 | 4048 | 2 | 61 | 46 | 2 | 60 | 45 | 0 | 0 | 874368 | 21600 | 433136 | 10700 |
| 29 | 4189 | 3 | 29 | 28 | 2 | 29 | 28 | 0 | 0 | 498491 | 11900 | 247151 | 5900 |
| 30 | 4330 | 1 | 26 | 32 | 1 | 26 | 32 | 0 | 0 | 510940 | 11800 | 255470 | 5900 |
| 31 | 4471 | 0 | 22 | 9 | 0 | 22 | 9 | 0 | 0 | 277202 | 6200 | 138601 | 3100 |
| 32 | 4612 | 2 | 23 | 29 | 2 | 23 | 28 | 0 | 0 | 493484 | 10700 | 244436 | 5300 |
| 33 | 4753 | 2 | 13 | 11 | 1 | 13 | 11 | 0 | 0 | 242403 | 5100 | 118825 | 2500 |
| 34 | 4894 | 0 | 4 | 13 | 0 | 4 | 13 | 0 | 0 | 166396 | 3400 | 83198 | 1700 |
| 35 | 5035 | 0 | 10 | 9 | 0 | 10 | 9 | 0 | 0 | 191330 | 3800 | 95665 | 1900 |
| 36 | 5176 | 0 | 10 | 8 | 0 | 10 | 8 | 0 | 0 | 186336 | 3600 | 93168 | 1800 |
| 37 | 5317 | 0 | 8 | 7 | 0 | 8 | 7 | 0 | 0 | 159510 | 3000 | 79755 | 1500 |
| 38 | 5458 | 0 | 5 | 3 | 0 | 5 | 3 | 0 | 0 | 87328 | 1600 | 43664 | 800 |
| 39 | 5599 | 0 | 4 | 3 | 0 | 4 | 3 | 0 | 0 | 78386 | 1400 | 39193 | 700 |
| 40 | 5740 | 1 | 1 | 3 | 1 | 1 | 3 | 0 | 0 | 57400 | 1000 | 28700 | 500 |
| 41 | 5881 | 0 | 2 | 3 | 0 | 2 | 3 | 0 | 0 | 58810 | 1000 | 29405 | 500 |
| 42 | 6022 | 0 | 2 | 1 | 0 | 2 | 1 | 0 | 0 | 36132 | 600 | 18066 | 300 |
| 43 | 6163 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 24652 | 400 | 12326 | 200 |
| 44 | 6304 | 0 | 2 | 2 | 0 | 2 | 2 | 0 | 0 | 50432 | 800 | 25216 | 400 |
| 45 | 6445 | 0 | 1 | | 0 | 1 | 0 | 0 | 0 | 12890 | 200 | 6445 | 100 |
| total | | | | | | | | | | 6813486137 | 1699831100 | 1472109647 | 343028600 |

Difference (current minus optimal): 4.76 GiB total, 1.05 GiB compactifiable.

rc_ip proof Details

Here is a table showing the data gathered for the rc_ip proof; the ASIC cutoff was block 4387596.

| witness_type | size [B] | proofs before ASIC | total size before | share % before | proofs after ASIC | total size after | share % after |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 100 | 17 | 1700 | 0.000387 | 18615 | 1861500 | 7.512743 |
| 1 | 241 | 24 | 5784 | 0.000547 | 38255 | 9219455 | 15.439162 |
| 2 | 382 | 4386592 | 1675678144 | 99.977117 | 39836 | 15217352 | 16.077230 |
| 3 | 523 | 108 | 56484 | 0.002461 | 32470 | 16981810 | 13.104420 |
| 4 | 664 | 171 | 113544 | 0.003897 | 29507 | 19592648 | 11.908596 |
| 5 | 805 | 201 | 161805 | 0.004581 | 19221 | 15472905 | 7.757316 |
| 6 | 946 | 204 | 192984 | 0.004649 | 17822 | 16859612 | 7.192700 |
| 7 | 1087 | 137 | 148919 | 0.003122 | 11088 | 12052656 | 4.474956 |
| 8 | 1228 | 89 | 109292 | 0.002028 | 10432 | 12810496 | 4.210203 |
| 9 | 1369 | 36 | 49284 | 0.000820 | 6358 | 8704102 | 2.565996 |
| 10 | 1510 | 11 | 16610 | 0.000251 | 6216 | 9386160 | 2.508687 |
| 11 | 1651 | 5 | 8255 | 0.000114 | 3753 | 6196203 | 1.514656 |
| 12 | 1792 | 1 | 1792 | 0.000023 | 3502 | 6275584 | 1.413356 |
| 13 | 1933 | | | 0.000000 | 2226 | 4302858 | 0.898381 |
| 14 | 2074 | | | 0.000000 | 2143 | 4444582 | 0.864884 |
| 15 | 2215 | | | 0.000000 | 1358 | 3007970 | 0.548069 |
| 16 | 2356 | | | 0.000000 | 1256 | 2959136 | 0.506903 |
| 17 | 2497 | | | 0.000000 | 780 | 1947660 | 0.314797 |
| 18 | 2638 | | | 0.000000 | 742 | 1957396 | 0.299460 |
| 19 | 2779 | | | 0.000000 | 440 | 1222760 | 0.177578 |
| 20 | 2920 | | | 0.000000 | 424 | 1238080 | 0.171120 |
| 21 | 3061 | | | 0.000000 | 272 | 832592 | 0.109775 |
| 22 | 3202 | | | 0.000000 | 236 | 755672 | 0.095246 |
| 23 | 3343 | | | 0.000000 | 166 | 554938 | 0.066995 |
| 24 | 3484 | | | 0.000000 | 158 | 550472 | 0.063767 |
| 25 | 3625 | | | 0.000000 | 97 | 351625 | 0.039148 |
| 26 | 3766 | | | 0.000000 | 112 | 421792 | 0.045202 |
| 27 | 3907 | | | 0.000000 | 70 | 273490 | 0.028251 |
| 28 | 4048 | | | 0.000000 | 61 | 246928 | 0.024619 |
| 29 | 4189 | | | 0.000000 | 29 | 121481 | 0.011704 |
| 30 | 4330 | | | 0.000000 | 26 | 112580 | 0.010493 |
| 31 | 4471 | | | 0.000000 | 22 | 98362 | 0.008879 |
| 32 | 4612 | | | 0.000000 | 23 | 106076 | 0.009282 |
| 33 | 4753 | | | 0.000000 | 13 | 61789 | 0.005247 |
| 34 | 4894 | | | 0.000000 | 4 | 19576 | 0.001614 |
| 35 | 5035 | | | 0.000000 | 10 | 50350 | 0.004036 |
| 36 | 5176 | | | 0.000000 | 10 | 51760 | 0.004036 |
| 37 | 5317 | | | 0.000000 | 8 | 42536 | 0.003229 |
| 38 | 5458 | | | 0.000000 | 5 | 27290 | 0.002018 |
| 39 | 5599 | | | 0.000000 | 4 | 22396 | 0.001614 |
| 40 | 5740 | | | 0.000000 | 1 | 5740 | 0.000404 |
| 41 | 5881 | | | 0.000000 | 2 | 11762 | 0.000807 |
| 42 | 6022 | | | 0.000000 | 2 | 12044 | 0.000807 |
| 43 | 6163 | | | 0.000000 | 1 | 6163 | 0.000404 |
| 44 | 6304 | | | 0.000000 | 2 | 12608 | 0.000807 |
| 45 | 6445 | | | 0.000000 | 1 | 6445 | 0.000404 |
| amount proofs | | 4387596 | | | 247779 | | |
| size avg per proof | | | 382.1 | | | 712.2 | |

Many crashes

The code crashes often with one of four issues:

  • munmap_chunk(): invalid pointer
  • Segmentation fault (core dumped)
  • pure virtual method called
  • Warning: Could not create identity: Invalid form. Can't find c.

Freebsd pip install ImportError: cannot import name 'setuptools' from 'setuptools'

I have had an install of Chia on FreeBSD since the initial release. Usually things work just fine, but after some recent update, ./install.sh gets hung up on chiavdf, which fails to build for some reason. I tried a plain pip install inside my venv running Python 3.9 and got the same error message; logs attached below.

chiavdf-install.log

FreeBSD 13.1, and I also tried FreeBSD 13.2; both have the same issue.

FTBFS on Gentoo multilib due to CMake not finding dev-libs/gmp-6.2.1

So I wanted to benchmark my machine under Gentoo.
sh install-timelord.sh from @Chia-Network/chia-blockchain fails with a CMake error.
pip wheel from the venv fails with the same error.

Timelord requires CMake 3.14+ to compile vdf_client.
Python version: python3.9             
install-timelord.sh: line 43: type: apt-get: not found
install-timelord.sh: line 46: type: dnf: not found
install-timelord.sh: line 46: yum: command not found
Installing chiavdf from source.
venv/bin/python -m pip install --force --no-binary chiavdf chiavdf==1.0.3
Collecting chiavdf==1.0.3                    
  Downloading chiavdf-1.0.3.tar.gz (635 kB)                                                                           
     |████████████████████████████████| 635 kB 1.4 MB/s 
  Installing build dependencies ... done                                                                                                                                                                                                    
  Getting requirements to build wheel ... done
    Preparing wheel metadata ... done                                                                                                                                                                                                       
Building wheels for collected packages: chiavdf                                                                       
  Building wheel for chiavdf (PEP 517) ... error                                                                                                                                                                                            
  ERROR: Command errored out with exit status 1:                                                                      
   command: /home/altracer/git/Chia-Network/chia-blockchain/venv/bin/python /home/altracer/git/Chia-Network/chia-blockchain/venv/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmpqxd0m6m2
       cwd: /tmp/pip-install-rtd36oy4/chiavdf_6788d3a183cf4b83b4369ad39743af44
  Complete output (70 lines):                                                                                                                                                                                                               
  running bdist_wheel 
  running build                                                                                                                                                                                                                             
  running build_ext                                  
  -- The C compiler identification is GNU 10.3.0
  -- The CXX compiler identification is GNU 10.3.0
  -- Detecting C compiler ABI info                                                                                    
  -- Detecting C compiler ABI info - done 
  -- Check for working C compiler: /usr/bin/cc - skipped                                                              
  -- Detecting C compile features
  -- Detecting C compile features - done                                                                              
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done                                                                           
  -- Check for working CXX compiler: /usr/bin/c++ - skipped 
  -- Detecting CXX compile features                                                                                   
  -- Detecting CXX compile features - done
  GMP_INCLUDES=/usr/include                                                                                           
  -- GMP version was not detected           
  CMake Error at /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:230 (message):
    Could NOT find GMP (missing: GMP_VERSION_OK) (Required is at least version
    "5.1.0")
  Call Stack (most recent call first):
    /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:594 (_FPHSA_FAILURE_MESSAGE)
    cmake/FindGMP.cmake:74 (find_package_handle_standard_args)
    CMakeLists.txt:17 (find_package)
  
  -- Configuring incomplete, errors occurred!
  See also "/tmp/pip-install-rtd36oy4/chiavdf_6788d3a183cf4b83b4369ad39743af44/CMakeFiles/CMakeOutput.log".

However, I do have libgmp fully installed, with all the headers and the C++ bindings:

altracer@glacier ~> equery files dev-libs/gmp                                                                         
 * Searching for gmp in dev-libs ...                                                                                  
 * Contents of dev-libs/gmp-6.2.1-r1:                                                                                                                                                                                                       
/usr                                                                                                                  
/usr/include                                                                                                          
/usr/include/gmp.h                                                                                                                                                                                                                          
/usr/include/gmpxx.h                                                                                                                                                                                                                        
/usr/include/i686-pc-linux-gnu                                                                                        
/usr/include/i686-pc-linux-gnu/gmp.h                                                                                                                                                                                                        
/usr/include/x86_64-pc-linux-gnu                                                                                                                                                                                                            
/usr/include/x86_64-pc-linux-gnu/gmp.h                                                                                                                                                                                                      
/usr/lib                                                                                                              
/usr/lib/libgmp.so -> libgmp.so.10.4.1                                                                                
/usr/lib/libgmp.so.10 -> libgmp.so.10.4.1                                                                                                                                                                                                   
/usr/lib/libgmp.so.10.4.1                                                                                             
/usr/lib/libgmpxx.so -> libgmpxx.so.4.6.1                                                                                                                                                                                                   
/usr/lib/libgmpxx.so.4 -> libgmpxx.so.4.6.1                                                                                                                                                                                                 
/usr/lib/libgmpxx.so.4.6.1                                                                                                                                                                                                                  
/usr/lib/pkgconfig                                                                                                                                                                                                                          
/usr/lib/pkgconfig/gmp.pc                                                                                                                                                                                                                   
/usr/lib/pkgconfig/gmpxx.pc                                                                                                                                                                                                                 
/usr/lib64                                                                                                                                                                                                                                  
/usr/lib64/libgmp.so -> libgmp.so.10.4.1                                                                                                                                                                                                    
/usr/lib64/libgmp.so.10 -> libgmp.so.10.4.1                                                                                                                                                                                                 
/usr/lib64/libgmp.so.10.4.1                                                                                                                                                                                                                 
/usr/lib64/libgmpxx.so -> libgmpxx.so.4.6.1                                                                           
/usr/lib64/libgmpxx.so.4 -> libgmpxx.so.4.6.1                                                                         
/usr/lib64/libgmpxx.so.4.6.1                                                                                                                                                                                                                
/usr/lib64/pkgconfig                                                                                                                                                                                                                        
/usr/lib64/pkgconfig/gmp.pc                                                                                                                                                                                                                 
/usr/lib64/pkgconfig/gmpxx.pc                                                                                                                                                                                                               
/usr/share                                                                                                                                                                                                                                  
/usr/share/doc                                                                                                                                                                                                                              
/usr/share/doc/gmp-6.2.1-r1
...
(venv) altracer@glacier ~/g/C/chiavdf ((1.0.3))> env LANG=C eix -I gmp
[I] dev-libs/gmp
     Available versions:  6.2.1-r1(0/10.4){tbz2} {+asm +cxx doc pic static-libs ABI_MIPS="n32 n64 o32" ABI_S390="32 64" ABI_X86="32 64 x32"}
     Installed versions:  6.2.1-r1(0/10.4){tbz2}(12:06:49 04/11/21)(asm cxx -doc -pic -static-libs ABI_MIPS="-n32 -n64 -o32" ABI_S390="-32 -64" ABI_X86="32 64 -x32")
     Homepage:            https://gmplib.org/
     Description:         Library for arbitrary-precision arithmetic on different type of numbers

I went into src/ manually, created a build/ dir, and ran cmake .., which stopped at the same error.

Using ccmake . and reading https://github.com/Chia-Network/chiavdf/blob/main/src/cmake/FindGMP.cmake, I concluded that on Gentoo the expected gmp.h is installed at /usr/include/x86_64-pc-linux-gnu/gmp.h. Running cmake .. -DGMP_INCLUDES=/usr/include/x86_64-pc-linux-gnu then generates a proper build system.

The multilib eclass creates a wrapper /usr/include/gmp.h that works with real preprocessors, but simple string-searching CMake modules never expected it and cannot follow the #include redirection.

This minimal patch allows building chiavdf on Gentoo amd64 multilib (while breaking the build everywhere else):

diff --git a/setup.py b/setup.py
index 9c73a4a..d95e54f 100644
--- a/setup.py
+++ b/setup.py
@@ -116,6 +116,7 @@ class CMakeBuild(build_ext):
         cmake_args = [
             "-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=" + str(extdir),
             "-DPYTHON_EXECUTABLE=" + sys.executable,
+            "-DGMP_INCLUDES=" + "/usr/include/x86_64-pc-linux-gnu",
         ]
 
         cfg = "Debug" if self.debug else "Release"

Alternatively, we could use an environment variable, GMPDIR=/usr/include/x86_64-pc-linux-gnu, if only pip kept the environment for the cmake subprocesses it spawns. Our /etc/os-release contains NAME=Gentoo and ID=gentoo, if that helps.

I cannot open a bug against Portage because there is no such package (like dev-python/chiavdf).

Could you help upgrade the vulnerable shared library introduced by package chiavdf?

Hi, @hoffmang9 , @wjblanke , I'd like to report a vulnerability issue in chiavdf_1.0.5.

Dependency Graph between Python project and shared libraries


Issue Description

As shown in the dependency graph above, chiavdf_1.0.5 directly or transitively depends on 3 C libraries (.so). However, I noticed that two of these C libraries are vulnerable; both are built from the same C project and contain the following CVE:
libgmpxx-faeb1ede.so.4.6.1 and libgmp-0b97008e.so.10.4.1, from the C project gmp (version 6.2.1), are exposed to CVE-2021-43618.

Suggested Vulnerability Patch Versions

No official patched version has been released, but gmp has fixed the vulnerability in a patch.

Python build tools cannot report vulnerable C libraries, which may introduce potential security issues into many downstream Python projects.
As a popular Python package (chiavdf has 137,684 downloads per month), could you please upgrade the above shared libraries to their patched versions?

Thanks for your help~

Best regards,

Andy

Deserialize form deadlock under win32

Dear chiavdf team, I was recently trying to test chiavdf, but I ran into a problem when deserializing a form from bytes on the win32 platform. The form is generated by my test program, which is almost entirely copied from 1weso_test.cpp.

I compiled a 64-bit Windows program and tried to verify a VDF proof, but it deadlocks during deserialization. The same test program succeeds under Linux (Ubuntu 20.04).

I traced the program and found the function that causes the deadlock: mpz_xgcd_partial, at xgcd_partial.c:28; the while loop at line 40 never terminates.

The test program is:

    static const char *SZ_CHALLENGE = "9159bc7838880dcf826ba5fd7f5b693f203c01e29070ffa4eb1a73b727e09d84";
    auto challenge = HexToBytes(SZ_CHALLENGE);
    static const char *SZ_VDF_Y =
        "0000e16a31edf6070934cacf78d3c3139e6986b7cebd0a45b996720fc916a163803a9c73d43b05c0835e8f2e4c52b2e10ae5623f7c1d5b98db"
        "2fc13b140b4c6035080f1b12cfe429abcee3912844319f2f81858d7ed4f7a6a108cf14f4a71090a1130201";
    auto D = CreateDiscriminant(challenge, 1024);
    auto form_y_data = HexToBytes(SZ_VDF_Y);

    std::cout << "deserializing form: " << SZ_VDF_Y << std::endl;
    auto form_y = DeserializeForm(D, form_y_data.data(), form_y_data.size());

I simply added this to verifier_test.cpp, just before the return 0; statement.
I compiled it with Visual Studio 2017 and CMake.

rounding mode

There is a function set_rounding_mode() that appears to attempt to set the global rounding mode for the process. However, the call sits inside an assert() macro, and all our builds are release builds, which omit all assert()s.

Here: https://github.com/Chia-Network/chiavdf/blob/master/src/double_utility.h#L60

I'm not familiar with why we would want to set the rounding mode to round down. The tests pass without it, so it's not clear whether the call should be fixed to actually set the rounding mode, or whether we should just remove it.

Error building chiavdf

Hello,

I am very new to coding, and I am trying to learn from the Chialisp tutorial, since it interests me a lot. But I ran into issues installing chiavdf; here is the error message. Can someone please help? Many thanks.
My OS is Windows 11, and I have Visual Studio 2017 Community.

I get this error when running pip install chiavdf:

ERROR: Command errored out with exit status 1:
command: 'D:\Chialisp\TestProject\venv\Scripts\python.exe' 'D:\Chialisp\TestProject\venv\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' get_requires_for_build_wheel 'C:\Users\zchop\AppData\Local\Temp\tmp9oypl9wd'
cwd: C:\Users\zchop\AppData\Local\Temp\pip-install-uvtp8jf5\chiavdf_f0e98aa2d7e9417e847f4e0f8ea29389
Complete output (44 lines):
C:\Users\zchop\AppData\Local\Temp\pip-build-env-khdhp6fl\overlay\Lib\site-packages\setuptools\_distutils\dist.py:275: UserWarning: Unknown distribution option: 'build_requires'
warnings.warn(msg)
running egg_info
listing git files failed - pretending there aren't any
Traceback (most recent call last):
File "D:\Chialisp\TestProject\venv\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 363, in <module>
main()
File "D:\Chialisp\TestProject\venv\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 345, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "D:\Chialisp\TestProject\venv\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 130, in get_requires_for_build_wheel
return hook(config_settings)
File "C:\Users\zchop\AppData\Local\Temp\pip-build-env-khdhp6fl\overlay\Lib\site-packages\setuptools\build_meta.py", line 162, in get_requires_for_build_wheel
return self._get_build_requires(
File "C:\Users\zchop\AppData\Local\Temp\pip-build-env-khdhp6fl\overlay\Lib\site-packages\setuptools\build_meta.py", line 143, in _get_build_requires
self.run_setup()
File "C:\Users\zchop\AppData\Local\Temp\pip-build-env-khdhp6fl\overlay\Lib\site-packages\setuptools\build_meta.py", line 158, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 247, in <module>
setup(
File "C:\Users\zchop\AppData\Local\Temp\pip-build-env-khdhp6fl\overlay\Lib\site-packages\setuptools\__init__.py", line 155, in setup
return distutils.core.setup(**attrs)
File "C:\Users\zchop\AppData\Local\Temp\pip-build-env-khdhp6fl\overlay\Lib\site-packages\setuptools\_distutils\core.py", line 148, in setup
return run_commands(dist)
File "C:\Users\zchop\AppData\Local\Temp\pip-build-env-khdhp6fl\overlay\Lib\site-packages\setuptools\_distutils\core.py", line 163, in run_commands
dist.run_commands()
File "C:\Users\zchop\AppData\Local\Temp\pip-build-env-khdhp6fl\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 967, in run_commands
self.run_command(cmd)
File "C:\Users\zchop\AppData\Local\Temp\pip-build-env-khdhp6fl\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 986, in run_command
cmd_obj.run()
File "C:\Users\zchop\AppData\Local\Temp\pip-build-env-khdhp6fl\overlay\Lib\site-packages\setuptools\command\egg_info.py", line 299, in run
self.find_sources()
File "C:\Users\zchop\AppData\Local\Temp\pip-build-env-khdhp6fl\overlay\Lib\site-packages\setuptools\command\egg_info.py", line 306, in find_sources
mm.run()
File "C:\Users\zchop\AppData\Local\Temp\pip-build-env-khdhp6fl\overlay\Lib\site-packages\setuptools\command\egg_info.py", line 541, in run
self.add_defaults()
File "C:\Users\zchop\AppData\Local\Temp\pip-build-env-khdhp6fl\overlay\Lib\site-packages\setuptools\command\egg_info.py", line 585, in add_defaults
self.read_manifest()
File "C:\Users\zchop\AppData\Local\Temp\pip-build-env-khdhp6fl\overlay\Lib\site-packages\setuptools\command\sdist.py", line 195, in read_manifest
self.filelist.append(line)
File "C:\Users\zchop\AppData\Local\Temp\pip-build-env-khdhp6fl\overlay\Lib\site-packages\setuptools\command\egg_info.py", line 483, in append
path = convert_path(item)
File "C:\Users\zchop\AppData\Local\Temp\pip-build-env-khdhp6fl\overlay\Lib\site-packages\setuptools\_distutils\util.py", line 182, in convert_path
raise ValueError("path '%s' cannot end with '/'" % pathname)
ValueError: path './' cannot end with '/'

Build Fails - Single module only - No module named 'pybind11_tests'

If someone could assist, that would be great; this is the only module I still need in order to build the RPM for my distro.

_____________________________________________________________________________________________________ ERROR collecting test session ________________________________________________________________________________________________________
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1014: in _gcd_import
???
<frozen importlib._bootstrap>:991: in _find_and_load
???
<frozen importlib._bootstrap>:975: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:671: in _load_unlocked
???
/usr/lib/python3.8/site-packages/_pytest/assertion/rewrite.py:170: in exec_module
exec(co, module.__dict__)
_deps/pybind11-src-src/tests/conftest.py:19: in <module>
import pybind11_tests # noqa: F401
E ModuleNotFoundError: No module named 'pybind11_tests'

$ sudo dnf repoquery --whatprovides 'python3dist(pybind11_tests)'

$ sudo dnf repoquery --whatprovides 'python3dist(pybind11)'
python-pybind11-0:2.6.2-2.fc35.x86_64
python3-pybind11-0:2.6.2-1.fc34.x86_64

$ pip show pybind11
Name: pybind11
Version: 2.6.2
Summary: Seamless operability between C++11 and Python
Home-page: https://github.com/pybind/pybind11
Author: Wenzel Jakob
Author-email: [email protected]
License: BSD
Location: /usr/lib64/python3.8/site-packages
Requires:
Required-by:

and rpm - python-pybind11-2.6.2-2.fc35.x86_64 is already installed.

Intel Q6600 doesn't work

Hi,

I am trying to set up a timelord server on an Intel Q6600 CPU. Which compile options do I need to make it work?

VDF client 1: sh: 0: getcwd() failed: No such file or directory

Hello,
I'm pool operator. In Chia Debug Logs:

VDF client 1: sh: 0: getcwd() failed: No such file or directory

1. We followed the steps below to install Chia for our pool. Are we missing anything here?
#chia-blockchain installation
git clone https://github.com/Chia-Network/chia-blockchain.git -b main --recurse-submodules
cd chia-blockchain
sh install.sh
. ./activate
export CHIA_ROOT="/home/chia/.chia/mainnet"
chia init

# Should we set it up (timelord) like this? Or is it not necessary?
#chia-blockchain#timelord installation for server
export BUILD_VDF_CLIENT=Y
export BUILD_VDF_BENCH=Y
sh install-timelord.sh
chia start all

Crashes with Timelord

I am getting the following errors when running Timelord.

2021-05-11T13:08:22.689 timelord chia.timelord.timelord : ERROR Not active for 60 seconds, restarting all chains
2021-05-11T13:09:22.775 timelord chia.timelord.timelord : ERROR Not active for 60 seconds, restarting all chains
2021-05-11T13:10:07.031 full_node chia.full_node.full_node: ERROR Error with syncing: <class 'RuntimeError'>
Traceback (most recent call last):
  File "/chia-blockchain/chia/full_node/full_node.py", line 628, in _sync
    raise RuntimeError(f"Weight proof did not arrive in time from peer: {weight_proof_peer.peer_host}")
RuntimeError: Weight proof did not arrive in time from peer: 60.248.5.168

2021-05-11T13:10:22.881 timelord chia.timelord.timelord : ERROR Not active for 60 seconds, restarting all chains
2021-05-11T13:11:22.917 timelord chia.timelord.timelord : ERROR Not active for 60 seconds, restarting all chains
2021-05-11T13:12:23.093 timelord chia.timelord.timelord : ERROR Not active for 60 seconds, restarting all chains
2021-05-11T13:13:07.419 full_node chia.full_node.full_node: WARNING Received unsolicited/late block from peer {'host': '183.159.126.12', 'port': 8444}
2021-05-11T13:13:23.374 timelord chia.timelord.timelord : ERROR Not active for 60 seconds, restarting all chains

build fails on certain versions of osx due to avx-512 instructions

The build chain assumes that AVX-512 instructions can be assembled even if they cannot be executed.

However, on certain versions of OS X the build tools do not even have the AVX-512 instructions available.

There should be an option/flag to exclude the AVX-512 source files from the build entirely, to prevent errors on some versions of Xcode.

[100%] Linking CXX shared module build/lib.macosx-10.9-x86_64-3.9/chiavdf.cpython-39-darwin.so
[100%] Built target chiavdf
running build_hook
avx512_asm_compiled.s:4:14: error: instruction requires: AVX-512 ISA
avx512_Xx_4: VPBROADCASTQ ZMM4, RDI # to_avx512_integer:254 VPBROADCASTQ in_num_limbs_vector, in_num_limbs
             ^
avx512_asm_compiled.s:5:14: error: instruction requires: AVX-512 ISA
avx512_Xx_5: VPABSQ ZMM4, ZMM4 # to_avx512_integer:255 VPABSQ in_num_limbs_vector, in_num_limbs_vector
             ^
avx512_asm_compiled.s:20:15: error: instruction requires: AVX-512 ISA
avx512_Xx_20: VPCMPUQ k1, ZMM4, ZMMWORD PTR [RIP+_avx512_label_0], 6 # to_avx512_integer:268 VPCMPUQ k1, in_num_limbs_vector, ZMMWORD PTR [RIP+_avx512_label_0], 6
              ^
avx512_asm_compiled.s:21:30: error: unexpected token in argument list
avx512_Xx_21: VMOVDQU64 ZMM0 {k1}{z}, [RSI+0] # to_avx512_integer:271 VMOVDQU64 input_registers_0 {k1}{z}, [in_data+0]
              ^
avx512_asm_compiled.s:34:15: error: instruction requires: AVX-512 ISA
avx512_Xx_34: VPCMPUQ k1, ZMM4, ZMMWORD PTR [RIP+_avx512_label_1], 6 # to_avx512_integer:268 VPCMPUQ k1, in_num_limbs_vector, ZMMWORD PTR [RIP+_avx512_label_1], 6
avx512_asm_compiled.s:35:30: error: unexpected token in argument list
avx512_Xx_35: VMOVDQU64 ZMM1 {k1}{z}, [RSI+64] # to_avx512_integer:271 VMOVDQU64 input_registers_1 {k1}{z}, [in_data+64]

Bluebox/sanitizer crash

Was running 10 vdf_clients in sanitizer mode on benstl.chia.net; the full node threw this error and the timelord stopped doing anything:

17:37:33.910 full_node root                    : ERROR    unhandled mapping function <function partial_async.<locals>.inner at 0x7f46c3446af0> worker exception on (<StreamReader eof transport=<asyncio.sslproto._SSLProtocolTransport object at 0x7f46a7dbe880>>, <StreamWriter transport=<asyncio.sslproto._SSLProtocolTransport object at 0x7f46a7dbe880> reader=<StreamReader eof transport=<asyncio.sslproto._SSLProtocolTransport object at 0x7f46a7dbe880>>>, None, True, False)
Traceback (most recent call last):
  File "/home/hoffmang/chia-blockchain/venv/lib/python3.8/site-packages/aiter/simple_map_aiter.py", line 27, in simple_map_aiter
    yield await _map_f(_)
  File "/home/hoffmang/chia-blockchain/src/util/partial_func.py", line 21, in inner
    return await f(first_param, *args)
  File "/home/hoffmang/chia-blockchain/src/server/pipeline.py", line 161, in stream_reader_writer_to_connection
    con = ChiaConnection(
  File "/home/hoffmang/chia-blockchain/src/server/connection.py", line 46, in __init__
    self.local_host = socket.getsockname()[0]
AttributeError: 'NoneType' object has no attribute 'getsockname'

Proving error when difficulty is too high

No VDF can be produced when the difficulty, num_iterations, is set above ~2 billion because of an integer overflow (the maximum int value is 2_147_483_647).
This is because the for loop in ProveSlow counts with an int rather than a uint64_t; see below:

std::vector<uint8_t> ProveSlow(integer& D, form& x, uint64_t num_iterations) {
    ...
    for (int i = 0; i < num_iterations; i++) {
        if (i % (k * l) == 0) {
            intermediates.push_back(y);
        }
        nudupl_form(y, y, D, L);
        reducer.reduce(y);
    }

The fix is to change int i = 0 to uint64_t i = 0.

I would also recommend constructing intermediates with its final number of elements up front, to prevent repeated resizing of the vector.

Hence, I propose to change the code to the following:

  int kl = k * l;
  uint64_t size_vec = ceil(double(num_iterations) / kl);
  std::vector<form> intermediates(size_vec);
  for (uint64_t i = 0; i < num_iterations; i++) {
    if (i % kl == 0) {
      uint64_t index = i / kl;
      intermediates.at(index) = y;
    }
    nudupl_form(y, y, D, L);
    reducer.reduce(y);
  }

Error building on Alpine Linux aarch64 - missing x86intrin.h: No such file or directory

I'm trying to build chia-blockchain on an arm64 Alpine Linux VM and it is failing at the chiavdf wheel step. I get the same error if I try to build chiavdf independently of the chia-blockchain repo too.

I understand that I can disable the VDF client build in chia-blockchain via a compile-time environment variable (and this is confirmed working for me), as explained in the Chia GitHub wiki, which notes that the VDF client/Timelord components are not very cross-platform friendly. However, the chiavdf README mentions "The primary build process for this repository is to use GitHub Actions to build binary wheels for MacOS, Linux (x64 and aarch64), and Windows and publish them with a source wheel on PyPi...."

I'm not sure whether this is a bug, or the build process failing to detect my architecture and environment correctly (and therefore not bypassing or substituting the include of the x86intrin.h header), or expected behaviour because aarch64 support is a work in progress. Or perhaps I'm missing a system dependency?
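For reference, the workaround mentioned above can be sketched as follows; BUILD_VDF_CLIENT is the environment variable chiavdf's build reads, and the actual install command is left commented out:

```shell
# Workaround, not a fix: skip building the x86-only vdf_client binary so the
# rest of the wheel (verifier bindings) can still build on aarch64.
export BUILD_VDF_CLIENT=N
echo "BUILD_VDF_CLIENT=$BUILD_VDF_CLIENT"
# pip install chiavdf   # would now build without the timelord binaries
```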

Platform details:

(venv) alpine:~$ gcc --version
gcc (Alpine 10.3.1_git20211027) 10.3.1 20211027
Copyright (C) 2020 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

(venv) alpine:~$ g++ --version
g++ (Alpine 10.3.1_git20211027) 10.3.1 20211027
Copyright (C) 2020 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

(venv) alpine:~$ cmake --version
cmake version 3.21.3
CMake suite maintained and supported by Kitware (kitware.com/cmake).

(venv) alpine:~$ cargo --version
cargo 1.63.0 (fd9c4297c 2022-07-01)

(venv) alpine:~$ rustc --version
rustc 1.63.0 (4b91a6ea7 2022-08-08)

(venv) alpine:~$ pip --version
pip 22.2.2 from /home/user/sources/chia-blockchain/venv/lib/python3.9/site-packages/pip (python 3.9)

(venv) alpine:~$ apk list --installed | grep gmp
libgmpxx-6.2.1-r1 aarch64 {gmp} (LGPL-3.0-or-later OR GPL-2.0-or-later) [installed]
gmp-6.2.1-r1 aarch64 {gmp} (LGPL-3.0-or-later OR GPL-2.0-or-later) [installed]
gmp-dev-6.2.1-r1 aarch64 {gmp} (LGPL-3.0-or-later OR GPL-2.0-or-later) [installed]

(venv) alpine:~$ apk list --installed | grep boost
boost1.77-atomic-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-program_options-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-iostreams-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost-dev-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-date_time-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-graph-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-math-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-random-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-json-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-wave-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-chrono-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-libs-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-prg_exec_monitor-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-nowide-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-unit_test_framework-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-context-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-stacktrace_basic-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-container-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-thread-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-fiber-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-wserialization-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-dev-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-serialization-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-filesystem-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-type_erasure-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-contract-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-log_setup-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-timer-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-stacktrace_noop-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-locale-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-log-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-regex-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-coroutine-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-system-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]
boost1.77-python3-1.77.0-r1 aarch64 {boost1.77} (BSL-1.0) [installed]

Build details:

git clone https://github.com/Chia-Network/chia-blockchain.git --branch latest
cd chia-blockchain
python3 -m venv venv
. ./venv/bin/activate
pip install --upgrade pip
bash install.sh

Debug output snippet (this is for chia-blockchain main repo but exactly the same error occurs when trying latest/1.5.1):

Building wheels for collected packages: chia-blockchain, chiavdf
  Building editable for chia-blockchain (pyproject.toml) ... done
  Created wheel for chia-blockchain: filename=chia_blockchain-1.5.2.dev2453-0.editable-py3-none-any.whl size=9471 sha256=662dcba09a50e6735c174ff64c1040ae4415a69fd05457d12d51ed433def88c8
  Stored in directory: /tmp/pip-ephem-wheel-cache-h534dyig/wheels/92/aa/d1/da406d5de1291dd5b00b242bbd4735d00778842e91d628c3b6
  Building wheel for chiavdf (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building wheel for chiavdf (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [97 lines of output]
      running bdist_wheel
      running build
      /tmp/pip-build-env-u8jv1kik/overlay/lib/python3.9/site-packages/setuptools/command/build.py:31: SetuptoolsDeprecationWarning:
                  It seems that you are using `distutils.command.build` to add
                  new subcommands. Using `distutils` directly is considered deprecated,
                  please use `setuptools.command.build`.

        warnings.warn(msg, SetuptoolsDeprecationWarning)
      running build_ext
      -- The C compiler identification is GNU 10.3.1
      -- The CXX compiler identification is GNU 10.3.1
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - done
      -- Check for working C compiler: /usr/bin/cc - skipped
      -- Detecting C compile features
      -- Detecting C compile features - done
      -- Detecting CXX compiler ABI info
      -- Detecting CXX compiler ABI info - done
      -- Check for working CXX compiler: /usr/bin/c++ - skipped
      -- Detecting CXX compile features
      -- Detecting CXX compile features - done
      GMP_INCLUDES=/usr/include
      -- Found GMP: /usr/include (Required is at least version "5.1.0")
      GMP_INCLUDES=/usr/include
      -- Found GMPXX: /usr/lib/libgmpxx.so
      -- pybind11 v2.6.2
      -- Found PythonInterp: /home/user/sources/chia-blockchain/venv/bin/python (found version "3.9.13")
      -- Found PythonLibs: /usr/lib/libpython3.9.so
      -- Performing Test HAS_FLTO
      -- Performing Test HAS_FLTO - Success
      -- Configuring done
      -- Generating done
      -- Build files have been written to: /tmp/pip-install-nk_d3eym/chiavdf_cd2fd4e435de4e7db3bd697de5a50417
      [ 16%] Building CXX object CMakeFiles/verifier_test.dir/verifier_test.cpp.o
      [ 33%] Building CXX object CMakeFiles/chiavdf.dir/python_bindings/fastvdf.cpp.o
      [ 50%] Building C object CMakeFiles/verifier_test.dir/refcode/lzcnt.c.o
      [ 66%] Building C object CMakeFiles/chiavdf.dir/refcode/lzcnt.c.o
      [ 83%] Linking CXX executable verifier_test
      [ 83%] Built target verifier_test
      [100%] Linking CXX shared module build/lib.linux-aarch64-cpython-39/chiavdf.cpython-39-aarch64-linux-musl.so
      [100%] Built target chiavdf
      running build_hook
      In file included from vdf_client.cpp:2:
      vdf.h:6:10: fatal error: x86intrin.h: No such file or directory
          6 | #include <x86intrin.h>
            |          ^~~~~~~~~~~~~
      compilation terminated.
      make: *** [<builtin>: vdf_client.o] Error 1
      Traceback (most recent call last):
        File "/home/user/sources/chia-blockchain/venv/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 363, in <module>
          main()
        File "/home/user/sources/chia-blockchain/venv/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 345, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
        File "/home/user/sources/chia-blockchain/venv/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 261, in build_wheel
          return _build_backend().build_wheel(wheel_directory, config_settings,
        File "/tmp/pip-build-env-u8jv1kik/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 412, in build_wheel
          return self._build_with_temp_dir(['bdist_wheel'], '.whl',
        File "/tmp/pip-build-env-u8jv1kik/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 397, in _build_with_temp_dir
          self.run_setup()
        File "/tmp/pip-build-env-u8jv1kik/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 335, in run_setup
          exec(code, locals())
        File "<string>", line 266, in <module>
        File "/tmp/pip-build-env-u8jv1kik/overlay/lib/python3.9/site-packages/setuptools/__init__.py", line 87, in setup
          return distutils.core.setup(**attrs)
        File "/tmp/pip-build-env-u8jv1kik/overlay/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 185, in setup
          return run_commands(dist)
        File "/tmp/pip-build-env-u8jv1kik/overlay/lib/python3.9/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
          dist.run_commands()
        File "/tmp/pip-build-env-u8jv1kik/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 973, in run_commands
          self.run_command(cmd)
        File "/tmp/pip-build-env-u8jv1kik/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 1217, in run_command
          super().run_command(command)
        File "/tmp/pip-build-env-u8jv1kik/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 992, in run_command
          cmd_obj.run()
        File "/tmp/pip-build-env-u8jv1kik/overlay/lib/python3.9/site-packages/wheel/bdist_wheel.py", line 299, in run
          self.run_command('build')
        File "/tmp/pip-build-env-u8jv1kik/overlay/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 319, in run_command
          self.distribution.run_command(command)
        File "/tmp/pip-build-env-u8jv1kik/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 1217, in run_command
          super().run_command(command)
        File "/tmp/pip-build-env-u8jv1kik/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 992, in run_command
          cmd_obj.run()
        File "/tmp/pip-build-env-u8jv1kik/overlay/lib/python3.9/site-packages/setuptools/_distutils/command/build.py", line 132, in run
          self.run_command(cmd_name)
        File "/tmp/pip-build-env-u8jv1kik/overlay/lib/python3.9/site-packages/setuptools/_distutils/cmd.py", line 319, in run_command
          self.distribution.run_command(command)
        File "/tmp/pip-build-env-u8jv1kik/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 1217, in run_command
          super().run_command(command)
        File "/tmp/pip-build-env-u8jv1kik/overlay/lib/python3.9/site-packages/setuptools/_distutils/dist.py", line 992, in run_command
          cmd_obj.run()
        File "<string>", line 44, in run
        File "<string>", line 76, in invoke_make
        File "/usr/lib/python3.9/subprocess.py", line 424, in check_output
          return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
        File "/usr/lib/python3.9/subprocess.py", line 528, in run
          raise CalledProcessError(retcode, process.args,
      subprocess.CalledProcessError: Command 'make -C src -f Makefile.vdf-client' returned non-zero exit status 2.
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for chiavdf
Successfully built chia-blockchain
Failed to build chiavdf
ERROR: Could not build wheels for chiavdf, which is required to install pyproject.toml-based projects

BUG: Build fails - CMAKE error Could NOT find GMPXX (missing: GMPXX_LIBRARIES)

chaivdf-cmake-log.txt
I have attempted to build an RPM for a Fedora-based system. I am certain I have all the stated deps installed, but perhaps I am missing something?

I took a look and test.yaml gave some hints.
$ grep -r lib

.github/workflows/build-aarch64.yml: && curl -L https://gmplib.org/download/gmp/gmp-6.2.1.tar.lz | lzip -dc | tar x
.github/workflows/build.yml: && curl -L https://gmplib.org/download/gmp/gmp-6.2.1.tar.lz | tar x --lzip
.github/workflows/build.yml: && cp src/lib/gmp-patch-6.2.1/longlong.h gmp-6.2.1/
.github/workflows/build.yml: && cp src/lib/gmp-patch-6.2.1/compat.c gmp-6.2.1/
.github/workflows/build.yml: CIBW_ENVIRONMENT_WINDOWS: "BUILD_VDF_CLIENT=N SETUPTOOLS_USE_DISTUTILS=stdlib"
.github/workflows/test.yaml: sudo apt-get install libgmp-dev libboost-python-dev libpython3.8-dev libboost-system-dev build-essential -y
.gitmodules:[submodule "src/lib/pybind11"]
.gitmodules: path = src/lib/pybind11

I have everything stated below installed (also, is MPIR needed or just GMP?):

boost-devel
boost-python
boost-system
cmake
gcc
gmp
gmp-devel
lzip
python3dist(pybind11)
python3dist(pytest)
python3dist(setuptools)

build log -

GMP_INCLUDES=/usr/include
-- Found GMP: /usr/include (Required is at least version "5.1.0")
GMP_INCLUDES=/usr/include
CMake Error at /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:218 (message):
  Could NOT find GMPXX (missing: GMPXX_LIBRARIES)
Call Stack (most recent call first):
  /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:582 (_FPHSA_FAILURE_MESSAGE)
  cmake/FindGMPXX.cmake:33 (find_package_handle_standard_args)
  CMakeLists.txt:18 (find_package)

-- Configuring incomplete, errors occurred!
See also "/home/ss/rpmbuild/BUILD/chiavdf-1.0.1/CMakeFiles/CMakeOutput.log".
Traceback (most recent call last):
  File "setup.py", line 266, in <module>
    setup(
  File "/usr/lib/python3.8/site-packages/setuptools/__init__.py", line 144, in setup
    return distutils.core.setup(**attrs)
  File "/usr/lib64/python3.8/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/usr/lib64/python3.8/distutils/dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "/usr/lib64/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/usr/lib64/python3.8/distutils/command/build.py", line 135, in run
    self.run_command(cmd_name)
  File "/usr/lib64/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/lib64/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "setup.py", line 112, in run
    self.build_extension(ext)
  File "setup.py", line 139, in build_extension
    subprocess.check_call(["cmake", ext.sourcedir] + cmake_args, env=env)
  File "/usr/lib64/python3.8/subprocess.py", line 364, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '/home/ss/rpmbuild/BUILD/chiavdf-1.0.1/src', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=/home/ss/rpmbuild/BUILD/chiavdf-1.0.1/build/lib.linux-x86_64-3.8', '-DPYTHON_EXECUTABLE=/usr/bin/python3', '-DCMAKE_BUILD_TYPE=Release']' returned non-zero exit status 1.
error: Bad exit status from /var/tmp/rpm-tmp.AEsgzl (%build)
error: Bad exit status from /var/tmp/rpm-tmp.AEsgzl (%build)

This would suggest a standard package is simply missing: the CMake check is failing specifically on the GMP C++ bindings (libgmpxx), not on GMP itself.
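A sketch of what FindGMPXX.cmake is effectively checking for, with a scratch directory standing in for /usr/lib64; the gmp-c++ package name below is an assumption about Fedora's packaging (the C++ bindings usually ship separately from gmp-devel) and is worth verifying with `dnf provides '*/libgmpxx.so*'`:

```shell
# The CMake failure means no libgmpxx shared library was found on the search
# path. Simulate the presence check against a temporary directory:
libdir=$(mktemp -d)
touch "$libdir/libgmpxx.so.4"          # what installing the package would provide
if ls "$libdir"/libgmpxx.so* >/dev/null 2>&1; then
  echo "GMPXX found"
else
  echo "GMPXX missing"
fi
# sudo dnf install gmp-c++   # hypothetical Fedora package supplying libgmpxx
```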

Chia VDF programming reference

Hi all,

I'm interested in using Chia's VDF for a non-blockchain project and looking for the API to call the "evaluate" and "verify" functions of the VDF from C++. It's fine if initially the API calls a simpler/slower VDF implementation for now, but I would be happy if later it can be swapped out for a more performant one.

However, I'm struggling with the codebase due to seemingly duplicated files and implementations.
Is a concise API + documentation available? I checked the Chia docs, but I found no programming reference for the VDF there.

Please let me know if I should post this question elsewhere; it didn't seem to belong on the Chia developer forum (since I'm not working on Chia itself).

Cheers
