
MLPerf™ Inference Benchmark Suite

MLPerf Inference is a benchmark suite for measuring how fast systems can run models in a variety of deployment scenarios.

Please see the MLPerf Inference benchmark paper for a detailed description of the benchmarks along with the motivation and guiding principles behind the benchmark suite. If you use any part of this benchmark (e.g., reference implementations, submissions, etc.), please cite the following:

@misc{reddi2019mlperf,
    title={MLPerf Inference Benchmark},
    author={Vijay Janapa Reddi and Christine Cheng and David Kanter and Peter Mattson and Guenther Schmuelling and Carole-Jean Wu and Brian Anderson and Maximilien Breughe and Mark Charlebois and William Chou and Ramesh Chukka and Cody Coleman and Sam Davis and Pan Deng and Greg Diamos and Jared Duke and Dave Fick and J. Scott Gardner and Itay Hubara and Sachin Idgunji and Thomas B. Jablin and Jeff Jiao and Tom St. John and Pankaj Kanwar and David Lee and Jeffery Liao and Anton Lokhmotov and Francisco Massa and Peng Meng and Paulius Micikevicius and Colin Osborne and Gennady Pekhimenko and Arun Tejusve Raghunath Rajan and Dilip Sequeira and Ashish Sirasao and Fei Sun and Hanlin Tang and Michael Thomson and Frank Wei and Ephrem Wu and Lingjie Xu and Koichi Yamada and Bing Yu and George Yuan and Aaron Zhong and Peizhao Zhang and Yuchen Zhou},
    year={2019},
    eprint={1911.02549},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

Please see here for the MLPerf Inference documentation website, which includes automated commands to run the MLPerf Inference benchmarks using different implementations.

MLPerf Inference v4.1 (submission deadline July 26, 2024)

For submissions, please use the master branch and any commit since the v4.1 seed release, although it is best to use the latest commit. The v4.1 tag will be created from the master branch after the results are published.

For power submissions, please use SPEC PTD 1.10 (requires special access) and any commit of the power-dev repository after the code freeze.

| model | reference app | framework | dataset | category |
| ----- | ------------- | --------- | ------- | -------- |
| resnet50-v1.5 | vision/classification_and_detection | tensorflow, onnx, tvm, ncnn | imagenet2012 | edge, datacenter |
| retinanet 800x800 | vision/classification_and_detection | pytorch, onnx | openimages resized to 800x800 | edge, datacenter |
| bert | language/bert | tensorflow, pytorch, onnx | squad-1.1 | edge, datacenter |
| dlrm-v2 | recommendation/dlrm_v2 | pytorch | Multihot Criteo Terabyte | datacenter |
| 3d-unet | vision/medical_imaging/3d-unet-kits19 | pytorch, tensorflow, onnx | KiTS19 | edge, datacenter |
| gpt-j | language/gpt-j | pytorch | CNN-Daily Mail | edge, datacenter |
| stable-diffusion-xl | text_to_image | pytorch | COCO 2014 | edge, datacenter |
| llama2-70b | language/llama2-70b | pytorch | OpenOrca | datacenter |
| mixtral-8x7b | language/mixtral-8x7b | pytorch | OpenOrca, MBXP, GSM8K | datacenter |
  • The framework listed here is for the reference implementation; submitters are free to use their own frameworks to run the benchmark.

MLPerf Inference v4.0 (submission February 23, 2024)

There is an extra one-week extension allowed only for llama2-70b submissions. For submissions, please use the master branch and any commit since the v4.0 seed release, although it is best to use the latest commit. The v4.0 tag will be created from the master branch after the results are published.

For power submissions, please use SPEC PTD 1.10 (requires special access) and any commit of the power-dev repository after the code freeze.

| model | reference app | framework | dataset | category |
| ----- | ------------- | --------- | ------- | -------- |
| resnet50-v1.5 | vision/classification_and_detection | tensorflow, onnx, tvm, ncnn | imagenet2012 | edge, datacenter |
| retinanet 800x800 | vision/classification_and_detection | pytorch, onnx | openimages resized to 800x800 | edge, datacenter |
| bert | language/bert | tensorflow, pytorch, onnx | squad-1.1 | edge, datacenter |
| dlrm-v2 | recommendation/dlrm_v2 | pytorch | Multihot Criteo Terabyte | datacenter |
| 3d-unet | vision/medical_imaging/3d-unet-kits19 | pytorch, tensorflow, onnx | KiTS19 | edge, datacenter |
| rnnt | speech_recognition/rnnt | pytorch | OpenSLR LibriSpeech Corpus | edge, datacenter |
| gpt-j | language/gpt-j | pytorch | CNN-Daily Mail | edge, datacenter |
| stable-diffusion-xl | text_to_image | pytorch | COCO 2014 | edge, datacenter |
| llama2-70b | language/llama2-70b | pytorch | OpenOrca | datacenter |
  • The framework listed here is for the reference implementation; submitters are free to use their own frameworks to run the benchmark.

MLPerf Inference v3.1 (submission August 18, 2023)

Please use the v3.1 tag (git checkout v3.1) if you would like to reproduce the v3.1 results.

To reproduce power submissions, please use the master branch of the MLCommons power-dev repository and check out commit e9e16b1299ef61a2a5d8b9abf5d759309293c440.

You can see the individual README files in the benchmark task folders for more details about the benchmarks. To reproduce the submitted results, please see the README files under the respective submitter folders in the inference v3.1 results repository.

| model | reference app | framework | dataset | category |
| ----- | ------------- | --------- | ------- | -------- |
| resnet50-v1.5 | vision/classification_and_detection | tensorflow, onnx, tvm, ncnn | imagenet2012 | edge, datacenter |
| retinanet 800x800 | vision/classification_and_detection | pytorch, onnx | openimages resized to 800x800 | edge, datacenter |
| bert | language/bert | tensorflow, pytorch, onnx | squad-1.1 | edge, datacenter |
| dlrm-v2 | recommendation/dlrm_v2 | pytorch | Multihot Criteo Terabyte | datacenter |
| 3d-unet | vision/medical_imaging/3d-unet-kits19 | pytorch, tensorflow, onnx | KiTS19 | edge, datacenter |
| rnnt | speech_recognition/rnnt | pytorch | OpenSLR LibriSpeech Corpus | edge, datacenter |
| gpt-j | language/gpt-j | pytorch | CNN-Daily Mail | edge, datacenter |

MLPerf Inference v3.0 (submission 03/03/2023)

Please use the v3.0 tag (git checkout v3.0) if you would like to reproduce v3.0 results.

You can see the individual README files in the reference app for more details.

| model | reference app | framework | dataset | category |
| ----- | ------------- | --------- | ------- | -------- |
| resnet50-v1.5 | vision/classification_and_detection | tensorflow, onnx, tvm | imagenet2012 | edge, datacenter |
| retinanet 800x800 | vision/classification_and_detection | pytorch, onnx | openimages resized to 800x800 | edge, datacenter |
| bert | language/bert | tensorflow, pytorch, onnx | squad-1.1 | edge, datacenter |
| dlrm | recommendation/dlrm | pytorch, tensorflow | Criteo Terabyte | datacenter |
| 3d-unet | vision/medical_imaging/3d-unet-kits19 | pytorch, tensorflow, onnx | KiTS19 | edge, datacenter |
| rnnt | speech_recognition/rnnt | pytorch | OpenSLR LibriSpeech Corpus | edge, datacenter |

MLPerf Inference v2.1 (submission 08/05/2022)

Use the r2.1 branch (git checkout r2.1) if you want to submit or reproduce v2.1 results.

See the individual README files in the reference app for details.

| model | reference app | framework | dataset | category |
| ----- | ------------- | --------- | ------- | -------- |
| resnet50-v1.5 | vision/classification_and_detection | tensorflow, onnx | imagenet2012 | edge, datacenter |
| retinanet 800x800 | vision/classification_and_detection | pytorch, onnx | openimages resized to 800x800 | edge, datacenter |
| bert | language/bert | tensorflow, pytorch, onnx | squad-1.1 | edge, datacenter |
| dlrm | recommendation/dlrm | pytorch, tensorflow | Criteo Terabyte | datacenter |
| 3d-unet | vision/medical_imaging/3d-unet-kits19 | pytorch, tensorflow, onnx | KiTS19 | edge, datacenter |
| rnnt | speech_recognition/rnnt | pytorch | OpenSLR LibriSpeech Corpus | edge, datacenter |

MLPerf Inference v2.0 (submission 02/25/2022)

Use the r2.0 branch (git checkout r2.0) if you want to submit or reproduce v2.0 results.

See the individual README files in the reference app for details.

| model | reference app | framework | dataset | category |
| ----- | ------------- | --------- | ------- | -------- |
| resnet50-v1.5 | vision/classification_and_detection | tensorflow, onnx | imagenet2012 | edge, datacenter |
| ssd-mobilenet 300x300 | vision/classification_and_detection | tensorflow, pytorch, onnx | coco resized to 300x300 | edge |
| ssd-resnet34 1200x1200 | vision/classification_and_detection | tensorflow, pytorch, onnx | coco resized to 1200x1200 | edge, datacenter |
| bert | language/bert | tensorflow, pytorch, onnx | squad-1.1 | edge, datacenter |
| dlrm | recommendation/dlrm | pytorch, tensorflow | Criteo Terabyte | datacenter |
| 3d-unet | vision/medical_imaging/3d-unet-kits19 | pytorch, tensorflow, onnx | KiTS19 | edge, datacenter |
| rnnt | speech_recognition/rnnt | pytorch | OpenSLR LibriSpeech Corpus | edge, datacenter |

MLPerf Inference v1.1 (submission 08/13/2021)

Use the r1.1 branch (git checkout r1.1) if you want to submit or reproduce v1.1 results.

See the individual README files in the reference app for details.

| model | reference app | framework | dataset | category |
| ----- | ------------- | --------- | ------- | -------- |
| resnet50-v1.5 | vision/classification_and_detection | tensorflow, onnx | imagenet2012 | edge, datacenter |
| ssd-mobilenet 300x300 | vision/classification_and_detection | tensorflow, pytorch, onnx | coco resized to 300x300 | edge |
| ssd-resnet34 1200x1200 | vision/classification_and_detection | tensorflow, pytorch, onnx | coco resized to 1200x1200 | edge, datacenter |
| bert | language/bert | tensorflow, pytorch, onnx | squad-1.1 | edge, datacenter |
| dlrm | recommendation/dlrm | pytorch, tensorflow | Criteo Terabyte | datacenter |
| 3d-unet | vision/medical_imaging/3d-unet | pytorch, tensorflow(?), onnx(?) | BraTS 2019 | edge, datacenter |
| rnnt | speech_recognition/rnnt | pytorch | OpenSLR LibriSpeech Corpus | edge, datacenter |

MLPerf Inference v1.0 (submission 03/19/2021)

Use the r1.0 branch (git checkout r1.0) if you want to submit or reproduce v1.0 results.

See the individual README files in the reference app for details.

| model | reference app | framework | dataset | category |
| ----- | ------------- | --------- | ------- | -------- |
| resnet50-v1.5 | vision/classification_and_detection | tensorflow, onnx | imagenet2012 | edge, datacenter |
| ssd-mobilenet 300x300 | vision/classification_and_detection | tensorflow, pytorch, onnx | coco resized to 300x300 | edge |
| ssd-resnet34 1200x1200 | vision/classification_and_detection | tensorflow, pytorch, onnx | coco resized to 1200x1200 | edge, datacenter |
| bert | language/bert | tensorflow, pytorch, onnx | squad-1.1 | edge, datacenter |
| dlrm | recommendation/dlrm | pytorch, tensorflow(?) | Criteo Terabyte | datacenter |
| 3d-unet | vision/medical_imaging/3d-unet | pytorch, tensorflow(?), onnx(?) | BraTS 2019 | edge, datacenter |
| rnnt | speech_recognition/rnnt | pytorch | OpenSLR LibriSpeech Corpus | edge, datacenter |

MLPerf Inference v0.7 (submission 9/18/2020)

Use the r0.7 branch (git checkout r0.7) if you want to submit or reproduce v0.7 results.

See the individual README files in the reference app for details.

| model | reference app | framework | dataset |
| ----- | ------------- | --------- | ------- |
| resnet50-v1.5 | vision/classification_and_detection | tensorflow, pytorch, onnx | imagenet2012 |
| ssd-mobilenet 300x300 | vision/classification_and_detection | tensorflow, pytorch, onnx | coco resized to 300x300 |
| ssd-resnet34 1200x1200 | vision/classification_and_detection | tensorflow, pytorch, onnx | coco resized to 1200x1200 |
| bert | language/bert | tensorflow, pytorch, onnx | squad-1.1 |
| dlrm | recommendation/dlrm | pytorch, tensorflow(?), onnx(?) | Criteo Terabyte |
| 3d-unet | vision/medical_imaging/3d-unet | pytorch, tensorflow(?), onnx(?) | BraTS 2019 |
| rnnt | speech_recognition/rnnt | pytorch | OpenSLR LibriSpeech Corpus |

MLPerf Inference v0.5

Use the r0.5 branch (git checkout r0.5) if you want to reproduce v0.5 results.

See the individual README files in the reference app for details.

| model | reference app | framework | dataset |
| ----- | ------------- | --------- | ------- |
| resnet50-v1.5 | v0.5/classification_and_detection | tensorflow, pytorch, onnx | imagenet2012 |
| mobilenet-v1 | v0.5/classification_and_detection | tensorflow, pytorch, onnx | imagenet2012 |
| ssd-mobilenet 300x300 | v0.5/classification_and_detection | tensorflow, pytorch, onnx | coco resized to 300x300 |
| ssd-resnet34 1200x1200 | v0.5/classification_and_detection | tensorflow, pytorch, onnx | coco resized to 1200x1200 |
| gnmt | v0.5/translation/gnmt/ | tensorflow, pytorch | See README |

Issues

RuntimeError: cuda runtime error (35) : CUDA driver version is insufficient for CUDA runtime version

Hi all, it looks like the sample inference/cloud/single_stage_detector was changed. I ran it before without issues, but now it throws the error below:

THCudaCheck FAIL file=/pytorch/torch/csrc/cuda/Module.cpp line=34 error=35 : CUDA driver version is insufficient for CUDA runtime version
Traceback (most recent call last):
  File "infer.py", line 156, in <module>
Using seed = 1
    main()
  File "infer.py", line 151, in main
    torch.cuda.set_device(args.device)
  File "/opt/conda/lib/python3.6/site-packages/torch/cuda/__init__.py", line 264, in set_device
    torch._C._cuda_setDevice(device)
RuntimeError: cuda runtime error (35) : CUDA driver version is insufficient for CUDA runtime version at /pytorch/torch/csrc/cuda/Module.cpp:34

real    0m0.765s
user    0m1.120s
sys     0m2.000s
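Error 35 is cudaErrorInsufficientDriver: the installed NVIDIA driver is older than what this PyTorch build's CUDA runtime requires, so either the driver needs upgrading or a PyTorch wheel built against an older CUDA toolkit is needed. A minimal diagnostic sketch (nothing here is specific to this repo):

import torch

# The CUDA runtime version this PyTorch build was compiled against.
print("torch built with CUDA:", torch.version.cuda)
# False when the installed driver cannot serve that runtime (as in the error above).
print("driver supports this runtime:", torch.cuda.is_available())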

Error with Cloud Inference - Sentiment Analysis Mxnet Implementation

Hi all, I am getting the error below when running the Cloud Inference - Sentiment Analysis MXNet Implementation benchmark. Any help?

command: python eval.py --model cnn --eval pretrained/seq2cnn_model --batch-size 1 --calc_accuracy

test: 39.84 % 0.8604417670682731
test: 39.88 % 0.8603811434302908
test: 39.92 % 0.8603206412825651
test: 39.96 % 0.8603603603603603
Finshed running 10000 batches
test: 40.0 % 0.8604
Traceback (most recent call last):
  File "eval.py", line 104, in <module>
    main()
  File "eval.py", line 74, in main
    batch.data[0]=batch.data[0].as_in_context(xpu)
AttributeError: type object 'StopIteration' has no attribute 'data'
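The traceback suggests the evaluation loop keeps using the StopIteration sentinel as a batch after the MXNet data iterator is exhausted (all 10000 batches have already run). A hypothetical guard, with illustrative names rather than the repo's exact code:

# Illustrative fix: end the loop cleanly when the iterator is exhausted
# instead of letting StopIteration leak into the batch-handling code.
def batches(data_iter):
    while True:
        try:
            yield next(data_iter)
        except StopIteration:
            return

# usage sketch
for batch in batches(iter(range(3))):
    print(batch)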

AssertionError: Label file imagenet//labels.txt doesn't exist

INFO 17:27:41 subprocess_with_logger.py: 24: Running: awk (NR>050000/1000)&&(NR<=050000/1000+50000/1000) {print > "/var/tmp/tmpL8e5eT/caffe2/host/inputs/labels_0.txt"} /var/tmp/tmpL8e5eT/caffe2/host/labels.txt
INFO 17:27:41 subprocess_with_logger.py: 73: awk: cannot open /var/tmp/tmpL8e5eT/caffe2/host/labels.txt (No such file or directory)
INFO 17:27:41 subprocess_with_logger.py: 24: Running: {convert_image_to_tensor} --input_image_file /var/tmp/tmpL8e5eT/caffe2/host/inputs/labels_0.txt --output_tensor /var/tmp/tmpL8e5eT/caffe2/host/images_tensor.pb --batch_size 1 --scale 256,-1 --crop 224,224 --preprocess normalize,mean,std --report_time json|Caffe2Observer
ERROR 17:27:41 subprocess_with_logger.py: 85: Unknown exception <type 'exceptions.OSError'>: {convert_image_to_tensor} --input_image_file /var/tmp/tmpL8e5eT/caffe2/host/inputs/labels_0.txt --output_tensor /var/tmp/tmpL8e5eT/caffe2/host/images_tensor.pb --batch_size 1 --scale 256,-1 --crop 224,224 --preprocess normalize,mean,std --report_time json|Caffe2Observer
INFO 17:27:41 hdb.py: 27: push /var/tmp/tmpL8e5eT/caffe2/host/images_tensor.pb to /var/tmp/tmprFiKA9/ubuntu/images_tensor.pb
INFO 17:27:41 benchmark_driver.py: 64: Exception caught when running benchmark
INFO 17:27:41 benchmark_driver.py: 65: [Errno 2] No such file or directory: u'/var/tmp/tmpL8e5eT/caffe2/host/images_tensor.pb'
ERROR 17:27:41 benchmark_driver.py: 69: Traceback (most recent call last):
File "/home/caffer/Downloads/shuff_tmp/tmp/FAI-PEP/benchmarking/driver/benchmark_driver.py", line 38, in runOneBenchmark
data = _runOnePass(minfo, mbenchmark, framework, platform)
File "/home/caffer/Downloads/shuff_tmp/tmp/FAI-PEP/benchmarking/driver/benchmark_driver.py", line 106, in _runOnePass
framework.runBenchmark(info, benchmark, platform)
File "/home/caffer/Downloads/shuff_tmp/tmp/FAI-PEP/benchmarking/frameworks/caffe2/caffe2.py", line 38, in runBenchmark
platform)
File "/home/caffer/Downloads/shuff_tmp/tmp/FAI-PEP/benchmarking/frameworks/framework_base.py", line 132, in runBenchmark
if input_files else None
File "/home/caffer/Downloads/shuff_tmp/tmp/FAI-PEP/benchmarking/platforms/platform_base.py", line 102, in copyFilesToPlatform
copy_files)
File "/home/caffer/Downloads/shuff_tmp/tmp/FAI-PEP/benchmarking/platforms/platform_base.py", line 90, in copyFilesToPlatform
self.util.push(files, target_file)
File "/home/caffer/Downloads/shuff_tmp/tmp/FAI-PEP/benchmarking/platforms/host/hdb.py", line 34, in push
shutil.copyfile(src, tgt)
File "/usr/lib/python2.7/shutil.py", line 96, in copyfile
with open(src, 'rb') as fsrc:
IOError: [Errno 2] No such file or directory: u'/var/tmp/tmpL8e5eT/caffe2/host/images_tensor.pb'

INFO 17:27:41 benchmark_driver.py: 83: No data collected for on Linux-4.18.0-17-generic-x86_64-with-Ubuntu-18.04-bionic--IntelR-CoreTM-i5-8250U-CPU--1.60GHz (ubuntu). The run may be failed for c50ded8
INFO 17:27:41 harness.py: 226: ======= user and harness error =======

Speech recognition docker image does not build

Hi there,

I hit the same error as when compiling on the training side, i.e.:

---> 5cb90201dd79
Step 13/14 : RUN cd ctcdecode; pip install .
---> Running in d406e114fed2
Processing /tmp/ctcdecode
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "", line 1, in
File "/tmp/pip-req-build-Sq4p2m/setup.py", line 55, in
os.path.join(this_file, "build.py:ffi")
File "/usr/lib/python2.7/distutils/core.py", line 111, in setup
_setup_distribution = dist = klass(attrs)
File "/usr/lib/python2.7/dist-packages/setuptools/dist.py", line 272, in init
_Distribution.init(self,attrs)
File "/usr/lib/python2.7/distutils/dist.py", line 287, in init
self.finalize_options()
File "/usr/lib/python2.7/dist-packages/setuptools/dist.py", line 327, in finalize_options
ep.load()(self, ep.name, value)
File "/usr/local/lib/python2.7/dist-packages/cffi/setuptools_ext.py", line 204, in cffi_modules
add_cffi_module(dist, cffi_module)
File "/usr/local/lib/python2.7/dist-packages/cffi/setuptools_ext.py", line 49, in add_cffi_module
execfile(build_file_name, mod_vars)
File "/usr/local/lib/python2.7/dist-packages/cffi/setuptools_ext.py", line 25, in execfile
exec(code, glob, glob)
File "/tmp/pip-req-build-Sq4p2m/build.py", line 28, in
'third_party/boost_1_67_0.tar.gz')
File "/tmp/pip-req-build-Sq4p2m/build.py", line 19, in download_extract
tar = tarfile.open(dl_path)
File "/usr/lib/python2.7/tarfile.py", line 1678, in open
raise ReadError("file could not be opened successfully")
tarfile.ReadError: file could not be opened successfully

----------------------------------------

Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-req-build-Sq4p2m/
The command '/bin/sh -c cd ctcdecode; pip install .' returned a non-zero code: 1
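The tarfile.ReadError is raised when build.py opens third_party/boost_1_67_0.tar.gz, which suggests the Boost archive bundled into the ctcdecode tree is truncated or corrupt. A small sketch to verify the archive and remove it if broken, so the next build can fetch a fresh copy (the path is taken from the traceback; adjust it to your checkout):

import os
import tarfile

path = "ctcdecode/third_party/boost_1_67_0.tar.gz"  # path from the traceback
try:
    with tarfile.open(path) as tar:  # raises ReadError if truncated/corrupt
        tar.getmembers()
except (IOError, OSError, tarfile.ReadError):
    if os.path.exists(path):
        os.remove(path)  # force build.py to re-download the archive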

Question about inference/cloud/translation/gnmt/tensorflow/

I'm using the tensorflow-gpu build on a Turing T4 for inference benchmarking.

Is the benchmark really using GPU for inferencing?

I got the code from mlperf/inference/cloud/translation/gnmt/tensorflow/.

I increased the number of iterations from 1 to 500000 in run_task.py.
It seems that the GPU is not being used for inference. For the first few seconds GPU utilization is around 30%; then nvidia-smi reports zero GPU utilization for most of the 500000 iterations.

It takes about 525 seconds to complete the run. For the first 125 or so seconds, nvidia-smi shows some GPU utilization; after that it shows 0%. Is the benchmark really using the GPU for inference?

Here is an excerpt from the log of the run:

dynamic_seq2seq/decoder/output_projection/kernel:0, (1024, 32316),
2019-03-08 14:08:50.113076: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1484] Adding visible gpu devices: 0
2019-03-08 14:08:50.113130: I tensorflow/core/common_runtime/gpu/gpu_device.cc:965] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-03-08 14:08:50.113139: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0
2019-03-08 14:08:50.113145: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] 0: N
2019-03-08 14:08:50.113335: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3318 MB memory) -> physical GPU (device: 0, name: GRID T4-4Q, pci bus id: 0000:02:01.0, compute capability: 7.5)
loaded infer model parameters from /home/ml/inference-master/cloud/translation/gnmt/tensorflow/ende_gnmt_model_4_layer/translate.ckpt, time 1.01s

Start decoding

decoding to output /home/ml/inference-master/cloud/translation/gnmt/tensorflow/nmt/data/output/g_nmt-out
Iterations 500000
done, num sentences 3003, num translations per input 1, time 35s, Fri Mar 8 14:09:27 2019.
done, num sentences 3003, num translations per input 1, time 35s, Fri Mar 8 14:09:27 2019.
done, num sentences 3003, num translations per input 1, time 35s, Fri Mar 8 14:09:27 2019.
done, num sentences 3003, num translations per input 1, time 35s, Fri Mar 8 14:09:27 2019.
done, num sentences 3003, num translations per input 1, time 35s, Fri Mar 8 14:09:27 2019.
done, num sentences 3003, num translations per input 1, time 35s, Fri Mar 8 14:09:27 2019.

Excerpt from the nvidia-smi log: GPU utilization drops from 26% to 0% and then stays at 0% for the rest of the run.
Fri Mar 8 14:09:27 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.92 Driver Version: 410.92 CUDA Version: 10.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GRID T4-4Q On | 00000000:02:01.0 Off | N/A |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 26% Default |
+-------------------------------+----------------------+----------------------+

| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 26% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 26% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 25% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 24% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 24% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 23% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 22% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 21% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 21% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 20% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 19% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 18% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 18% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 17% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 16% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 15% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 14% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 14% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 13% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 12% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 11% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 11% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 10% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 9% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 8% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 8% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 7% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 6% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 5% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 5% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 4% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 3% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 2% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 2% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 1% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 0% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 0% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 0% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 0% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 0% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 0% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 0% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 0% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 0% Default |
| N/A N/A P0 N/A / N/A | 3851MiB / 4096MiB | 0% Default |

Thanks,

-Uday

load_gen build failed.

root@ubuntu1:/home/data/mlperf_inference/mlperf_inference# make mlperf_loadgen
cd third_party/ninja && python configure.py --bootstrap
bootstrapping ninja...
wrote build.ninja.
bootstrap complete. rebuilding...
[25/25] LINK ninja
# Generate gn's ninja build file
python third_party/gn/build/gen.py
# Build gn
third_party/ninja/ninja -C third_party/gn/out
ninja: Entering directory `third_party/gn/out'
ninja: no work to do.
# Copy gn to third_party/gn, where depot_tools expects it.
cp third_party/gn/out/gn* third_party/gn/.
third_party/gn/gn gen out/MakefileGnProj
ERROR at //build/config/linux/pkg_config.gni:103:17: Script returned non-zero exit code.
pkgresult = exec_script(pkg_config_script, args, "value")
^----------
Current dir: /home/data/mlperf_inference/mlperf_inference/out/MakefileGnProj/
Command: python /home/data/mlperf_inference/mlperf_inference/build/config/linux/pkg-config.py glib-2.0 gmodule-2.0 gobject-2.0 gthread-2.0
Returned 1.
stderr:

File "/home/data/mlperf_inference/mlperf_inference/build/config/linux/pkg-config.py", line 57
print "You must specify an architecture via -a if using a sysroot."
^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print("You must specify an architecture via -a if using a sysroot.")?

See //build/config/linux/BUILD.gn:89:3: whence it was called.
pkg_config("glib") {
^-------------------
See //build/config/compiler/BUILD.gn:218:18: which caused the file to be included.
configs += [ "//build/config/linux:compiler" ]
^------------------------------
Makefile:22: recipe for target 'mlperf_loadgen' failed
make: *** [mlperf_loadgen] Error 1
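The SyntaxError at the bottom is the real problem: build/config/linux/pkg-config.py is Python 2 code, but python on this machine resolves to Python 3. Running the build with a Python 2 interpreter on the PATH should get past it; otherwise the offending statement would need porting. A sketch of the line from the log and its version-agnostic form:

# Python 2 only, as in build/config/linux/pkg-config.py line 57:
#     print "You must specify an architecture via -a if using a sysroot."
# Version-agnostic equivalent:
print("You must specify an architecture via -a if using a sysroot.")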

Weight provenance

Weight provenance is not recorded for all of the benchmarks; this is something the README files should include. Need to have the benchmark owners do this.

ssd resnet34 model

Hi,
I am trying to run the ssd-resnet34 model in PyTorch, but I have several questions about it.

  1. The out_size here is confusing: https://github.com/mlperf/inference/blob/master/cloud/single_stage_detector/pytorch/ssd_r34.py#L22
    If the input size is 1200x1200, the out_size is 150; does the hard-coded 38 also work as 150 in the function _build_additional_features?
    https://github.com/mlperf/inference/blob/master/cloud/single_stage_detector/pytorch/ssd_r34.py#L44

  2. Is the stride in loc and conf always 3?
    https://github.com/mlperf/inference/blob/master/cloud/single_stage_detector/pytorch/ssd_r34.py#L36

  3. The ploc and plabel shapes do not match the comment "For SSD 300, shall return nbatch x 8732 x {nlabels, nlocs} results":
    https://github.com/mlperf/inference/blob/master/cloud/single_stage_detector/pytorch/ssd_r34.py#L143
    The values I got are:
    (Pdb) print(ploc.shape)
    torch.Size([1, 4, 1098])
    (Pdb) print(plabel.shape)
    torch.Size([1, 81, 1098])

Inference v0.5 software verification notes

Latest verified:
date 5/15
commit: fd037e6

tested commands:
under cloud/image_classification
./run_local.sh tf cpu --time 100
./run_local.sh tf gpu --time 100

--time 100 allows you to run a shorter version of the code

what to expect:

  • Single stream mode working end to end on ResNet-50. Latency results are reported in mlperf_log_summary.txt and seem to be correct.
  • Seeing QPS printouts for the SingleStream, MultiStream, and Server scenarios in the CLI, with latency at 0.01 to 0.4 for each scenario.
  • No MultiStream or Server result reporting from LoadGen yet.
  • The program exits properly.
  • No SSD-mobilenet code has been tested.

How can I obtain the labels.txt?

In run.sh of the shufflenet network, it is described as:

# The downloaded images should be in the following directory structure
# ${IMAGENET_DIR}/labels.txt
# ${IMAGENET_DIR}/val/n*

I cannot find labels.txt. I did find a similar-looking text file named ILSVRC2012_validation_ground_truth.txt, but its format is not as requested in imagenet_test_map.py.
Can you provide a download path for labels.txt?
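For what it's worth, a hypothetical converter, assuming a flat val/ directory and that labels.txt is simply one "filename label" pair per line with images in sorted order (check imagenet_test_map.py for the exact format before relying on this):

import os

imagenet_dir = "imagenet"  # ${IMAGENET_DIR} from run.sh
val_images = sorted(os.listdir(os.path.join(imagenet_dir, "val")))
with open(os.path.join(imagenet_dir, "ILSVRC2012_validation_ground_truth.txt")) as f:
    labels = f.read().split()
with open(os.path.join(imagenet_dir, "labels.txt"), "w") as out:
    for name, label in zip(val_images, labels):
        out.write("%s %s\n" % (name, label))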

Questions on trace generator

Hi, I have a few questions about using the trace generator:

It would be good to get some sense of the run-to-run variation. How large is it usually?

LoadGen ERROR: mlperf_loadgen-0.5a0-cp27-cp27mu-linux_x86_64.whl is not a supported wheel on this platform.

Hi all,

Is the load generator ready to use for running the MLPerf inference benchmarks automatically? If so, which models are already integrated with it?

Also, I have tried to build LoadGen as a Python module and as a C++ library, but I got errors with both methods:

To build as a python module:
$ /mlperf_inference/loadgen# sudo -H pip install dist/mlperf_loadgen-0.5a0-cp27-cp27mu-linux_x86_64.whl
ERROR: mlperf_loadgen-0.5a0-cp27-cp27mu-linux_x86_64.whl is not a supported wheel on this platform.

To build as a C++ library:
$ /mlperf_inference# make mlperf_loadgen

cd third_party/ninja && python configure.py --bootstrap
bootstrapping ninja...
warning: A compatible version of re2c (>= 0.11.3) was not found; changes to src/*.in.cc will not affect your build.
wrote build.ninja.
bootstrap complete.  rebuilding...
[25/25] LINK ninja
# Generate gn's ninja build file
python third_party/gn/build/gen.py
# Build gn
third_party/ninja/ninja -C third_party/gn/out
ninja: Entering directory `third_party/gn/out'
[1/246] CXX tools/gn/inherited_libraries.o
FAILED: tools/gn/inherited_libraries.o
clang++ -MMD -MF tools/gn/inherited_libraries.o.d  -I/home/mlperf_inference/third_party/gn -I/home/mlperf_inference/third_party/gn/out -DNDEBUG -O3 -fdata-sections -ffunction-sections -D_FILE_OFFSET_BITS=64 -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -pthread -pipe -fno-exceptions -fno-rtti -fdiagnostics-color -std=c++14 -Wno-c++11-narrowing -c /home/mlperf_inference/third_party/gn/tools/gn/inherited_libraries.cc -o tools/gn/inherited_libraries.o
/bin/sh: 1: clang++: not found
[2/246] CXX base/callback_internal.o
..... a bunch of FAILED messages

Any help on how to fix this issue?

SSD-MobileNet only supports batch_size=1 right now.

When we enable the PyTorch SSD-MobileNet defined in "inference/edge/object_detection/ssd_mobilenet/pytorch/ssd_mobilenet_v1.py", we find that it only supports batch_size=1 at present.

In "ssd_mobilenet_v1.py" line 153 (box_regression = box_regression.squeeze(0)), the batch_size information is removed by squeeze(0), and the subsequent "decode_boxes" function is performed on box_regression without batch_size information. Finally, line 167 (boxes = boxes[None]) expands the dims with batch_size=1.

If we want to measure throughput, we need to revise the code so that this model works for batch_size > 1, right?
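For illustration, a minimal batched alternative: keep the leading batch dimension and apply the decode per image instead of squeezing it away. The shapes and the decode step below are placeholders, not the repo's exact code.

import torch

def decode_one(box_regression):
    # placeholder for the repo's per-image "decode_boxes" step
    return box_regression

batched = torch.randn(8, 3000, 4)  # (batch, num_anchors, 4)
# squeeze(0) only works when batch == 1; decode each image in the batch instead:
boxes = torch.stack([decode_one(br) for br in batched])  # (8, 3000, 4)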

LoadGen Offline mode not implemented

Hi, I see there are four scenarios in LoadGen: SingleStream, MultiStream, Server, and Offline. However, the LoadGen code currently supports only the first three modes; for the Offline mode it just provides an API without an implementation. Do you plan to implement the Offline mode in LoadGen, or should it be implemented by the submitter?

vision model accuracy updates in readme

I see higher accuracy than reported in the README.
export DATA_DIR=/../CK-TOOLS/dataset-imagenet-ilsvrc2012-val/

$ ./run_local.sh tf resnet50 gpu --accuracy
Accuracy qps=129.15, mean=0.006666, time=387.14, acc=73.87, queries=50000, tiles=50.0:0.0060,80.0:0.0065,90.0:0.0089,95.0:0.0118,99.0:0.0120,99.9:0.0122

The group accuracy for different groups in shufflenet turns out the same; what causes this problem?

I run the shufflenet network by executing ./run.sh imagenet group1 or ./run.sh imagenet group2. The accuracy for different groups is the same; why is that?
For group1 (./run.sh group1):
NET: shufflenet METRIC: accuracy ID: 0
PLATFORM: Linux-4.18.0-17-generic-x86_64-with-Ubuntu-18.04-bionic--IntelR-CoreTM-i5-8250U-CPU--1.60GHz HASH: ubuntu
FRAMEWORK: caffe2 COMMIT: 0ebe252c9c81f64a3604b7b340550353c0537566 TIME: 2019-05-08 15:58:19
NET latency: value median 33488.19351 MAD: 1049.27826
image_preprocess convert: value median 2865.79500 MAD: 463.04500
preprocess data_pack: value median 74817.60000 MAD: 1008.20000
shufflenet total_number_of_top1_corrects: value median 33234.00000 MAD: 0.00000
shufflenet total_percent_of_top1_corrects: value median 66.46800 MAD: 0.00000
INFO 20:09:58 harness.py: 235: ======= success =======
INFO 20:09:58 repo_driver.py: 360: One benchmark run successful for 0ebe252c9c81f64a3604b7b340550353c0537566

For group8 (./run.sh group8):
NET: shufflenet METRIC: accuracy ID: 0
PLATFORM: Linux-4.18.0-17-generic-x86_64-with-Ubuntu-18.04-bionic--IntelR-CoreTM-i5-8250U-CPU--1.60GHz HASH: ubuntu
FRAMEWORK: caffe2 COMMIT: 0ebe252c9c81f64a3604b7b340550353c0537566 TIME: 2019-05-08 15:58:19
NET latency: value median 14934.82733 MAD: 212.27360
image_preprocess convert: value median 1397.55500 MAD: 229.61500
preprocess data_pack: value median 31351.20000 MAD: 324.00000
shufflenet total_number_of_top1_corrects: value median 33234.00000 MAD: 0.00000
shufflenet total_percent_of_top1_corrects: value median 66.46800 MAD: 0.00000
INFO 11:20:34 harness.py: 235: ======= success =======
INFO 11:20:34 repo_driver.py: 360: One benchmark run successful for 0ebe252c9c81f64a3604b7b340550353c0537566

Error with Single Stage Detector - GPU Support: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same

Hi all, I am running the inference single stage detector benchmark with GPU support and got the error below:

$ ./run_and_time.sh

stdbuf: missing operand
Try 'stdbuf --help' for more information.

real    0m0.001s
user    0m0.000s
sys     0m0.000s
/opt/conda/lib/python3.6/site-packages/torch/nn/_reduction.py:49: UserWarning: size_average and reduce args will be deprecated, please use reduction='none' instead.
  warnings.warn(warning.format(ret))
Traceback (most recent call last):
  File "infer.py", line 144, in <module>
Using seed = 1
loading annotations into memory...
Done (t=0.70s)
creating index...
index created!
loading model checkpoint /mlperf/pretrained/resnet34-ssd300.pth
    main()age: 1/4952
  File "infer.py", line 141, in main
    eval_ssd300_mlperf_coco(args)
  File "infer.py", line 127, in eval_ssd300_mlperf_coco
    coco_eval(ssd300, val_coco, cocoGt, encoder, inv_map, args.threshold,args.device)
  File "infer.py", line 67, in coco_eval
    ploc, plabel = model(inp)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/mlperf/pytorch/ssd300.py", line 125, in forward
    layers = self.model(data)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/mlperf/pytorch/base_model.py", line 71, in forward
    layer1_activation = self.layer1(data)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 320, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same

It looks like the input tensor and the model weights are on different devices. Any help?
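The model's weights live on the GPU while the input batch is still a CPU tensor. The usual fix is to move the input to the model's device before the forward pass; a minimal standalone sketch (infer.py's actual variable names may differ):

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(4, 2).to(device)  # stand-in for the ssd300 model
inp = torch.randn(1, 4)                   # CPU tensor, as in the error
out = model(inp.to(device))               # input now matches the weights' device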

offline reporting with LoadGen

In Offline mode, the metric should be reported as samples per second rather than QPS, since there is only one query in Offline mode.

cat mlperf_log_summary.txt

SUT name : PySUT
Scenario : Offline
Mode : Performance
QPS: xxxxxxx
Result is : INVALID
Min duration satisfied : NO
Min queries satisfied : Yes

@briandersn

Python support/ Invoking from python

I think additional Python support should be added, as the current code requires the run to be invoked from C. For instance, if part of the processing involves Python (e.g., tokenization), why should that be precluded by the need to invoke the run from C?
Also, Python support would ease integration with the future reference benchmark code.
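For reference, LoadGen does ship Python bindings (the mlperf_loadgen module produced by make mlperf_loadgen_pymodule), so a run can be driven from Python. A minimal sketch; the callback set and exact signatures are an assumption and vary between LoadGen versions (some versions also pass a process_latencies callback to ConstructSUT):

import mlperf_loadgen as lg

def issue_queries(samples):
    # Run inference for each sample here, then report completions.
    lg.QuerySamplesComplete(
        [lg.QuerySampleResponse(s.id, 0, 0) for s in samples])

def flush_queries():
    pass

def load_samples(indices):    # load these samples into memory
    pass

def unload_samples(indices):  # release them
    pass

settings = lg.TestSettings()
settings.scenario = lg.TestScenario.SingleStream
sut = lg.ConstructSUT(issue_queries, flush_queries)
qsl = lg.ConstructQSL(1024, 1024, load_samples, unload_samples)
lg.StartTest(sut, qsl, settings)
lg.DestroySUT(sut)
lg.DestroyQSL(qsl)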

Deep Speech Inference with GPU

Hi all, I want to run the Deep Speech inference using T4 and V100 GPUs, but I got the warning and error below:

Warning

  warnings.warn(incorrect_binary_warn % (d, name, 9000, CUDA_VERSION))
/usr/local/lib/python2.7/dist-packages/torch/cuda/__init__.py:114: UserWarning:
    Found GPU3 Tesla V100-SXM2-16GB which requires CUDA_VERSION >= 9000 for
     optimal performance and fast startup time, but your PyTorch was compiled
     with CUDA_VERSION 8000. Please install the correct PyTorch binary
     using instructions from http://pytorch.org

Error:
RuntimeError: CUDNN_STATUS_MAPPING_ERROR

So I tried building the Docker image using PyTorch with CUDA 9.0 and with CUDA 10.0; neither worked. Any suggestions?

--count argument not working

Although I manually added --count 500 in the start shell script, I still see GeneratedQueries: "queries" : 60000 in the log file (mlperf_log_detail.txt):
Trace:

"pid": 2101, "tid": 140124207314688, "ts": 253977ns : ERROR : SingleStream only supports a samples_per_query of 1.
"pid": 2101, "tid": 140124207314688, "ts": 307040ns : Starting verification mode:
"pid": 2101, "tid": 140124207314688, "ts": 310222ns : ERROR : Unsupported scenario. Only MultiStream supported.
"pid": 2101, "tid": 140124207314688, "ts": 310741ns : Starting performance mode:
"pid": 2101, "tid": 140124207314688, "ts": 381222999ns : GeneratedQueries: "queries" : 60000, "samples per query" : 1, "duration" : 60000000000

Don't know why it is behaving like this. :(

Inference with model in NCHW Format

When I try to run inference with an NCHW model, I get the following error.

2019-05-04 13:45:18.523575: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x562c16a7e780 executing computations on platform Host. Devices:
2019-05-04 13:45:18.523609: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
ERROR:main:execute_parallel thread: Cannot feed value of shape (1, 224, 224, 3) for Tensor 'data:0', which has shape '(1, 3, 224, 224)'
ERROR:main:execute_parallel thread: Cannot feed value of shape (1, 224, 224, 3) for Tensor 'data:0', which has shape '(1, 3, 224, 224)'

I understand it is complaining because the pre-processed input is NHWC while the model expects NCHW. Is changing python/main.py enough?

SUPPORTED_DATASETS = {
    "imagenet":
        (imagenet.Imagenet, dataset.pre_process_vgg, dataset.post_process_offset1,
         {"image_size": [3, 224, 224]}),
    "imagenet_mobilenet":
        (imagenet.Imagenet, dataset.pre_process_mobilenet, dataset.post_process_argmax_offset,
         {"image_size": [224, 224, 3]}),
}

Changing the model could be one option, but I want to make MLPerf work with NCHW.
Any help is much appreciated!
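Changing SUPPORTED_DATASETS only declares the expected dims; the pre-processing also has to emit CHW data for the feed to match 'data:0' = (1, 3, 224, 224). A hypothetical numpy-only sketch of the missing transpose (hooking it into dataset.py's pre-process step is left to the reader; some versions of the reference app expose a need_transpose-style option for exactly this):

import numpy as np

def to_nchw(img_nhwc):
    # H x W x C  ->  C x H x W, so a batch becomes (1, 3, 224, 224)
    return img_nhwc.transpose(2, 0, 1)

img = np.zeros((224, 224, 3), dtype=np.float32)  # NHWC output of pre-processing
print(to_nchw(img).shape)  # (3, 224, 224)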

error installing loadgen python module

My command:

sudo make mlperf_loadgen_pymodule

Stack Trace:

cd third_party/ninja && python configure.py --bootstrap
bootstrapping ninja...
wrote build.ninja.
bootstrap complete.  rebuilding...
[25/25] LINK ninja
# Generate gn's ninja build file
python third_party/gn/build/gen.py
# Build gn
third_party/ninja/ninja -C third_party/gn/out
ninja: Entering directory `third_party/gn/out'
ninja: no work to do.
# Copy gn to third_party/gn, where depot_tools expects it.
cp third_party/gn/out/gn* third_party/gn/.
third_party/gn/gn gen out/MakefileGnProj
Done. Made 24 targets from 41 files in 81ms
third_party/ninja/ninja -C out/MakefileGnProj loadgen_pymodule_wheel_src
ninja: Entering directory `out/MakefileGnProj'
[41/42] ACTION //:loadgen_pymodule_wheel_src(//build/toolchain/linux:x64)
FAILED: obj/dist/mlperf_loadgen-0.5a0.tar.gz 
python gen/loadgen_pymodule_setup_src.py bdist_wheel
running bdist_wheel
running build
running build_ext
building 'mlperf_loadgen' extension
creating build
creating build/temp.linux-x86_64-2.7
creating build/temp.linux-x86_64-2.7/gen
creating build/temp.linux-x86_64-2.7/gen/loadgen
creating build/temp.linux-x86_64-2.7/gen/loadgen/bindings
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -DMAJOR_VERSION=0 -DMINOR_VERSION=5 -Igen -Igen/third_party/pybind/include -I/usr/include/python2.7 -c gen/loadgen/bindings/python_api.cc -o build/temp.linux-x86_64-2.7/gen/loadgen/bindings/python_api.o
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /usr/include/c++/5/forward_list:35:0,
                 from gen/third_party/pybind/include/pybind11/detail/common.h:140,
                 from gen/third_party/pybind/include/pybind11/pytypes.h:12,
                 from gen/third_party/pybind/include/pybind11/cast.h:13,
                 from gen/third_party/pybind/include/pybind11/attr.h:13,
                 from gen/third_party/pybind/include/pybind11/pybind11.h:44,
                 from gen/third_party/pybind/include/pybind11/functional.h:12,
                 from gen/loadgen/bindings/python_api.cc:6:
/usr/include/c++/5/bits/c++0x_warning.h:32:2: error: #error This file requires compiler and library support for the ISO C++ 2011 standard. This support must be enabled with the -std=c++11 or -std=gnu++11 compiler options.
 #error This file requires compiler and library support \
  ^
In file included from gen/third_party/pybind/include/pybind11/pytypes.h:12:0,
                 from gen/third_party/pybind/include/pybind11/cast.h:13,
                 from gen/third_party/pybind/include/pybind11/attr.h:13,
                 from gen/third_party/pybind/include/pybind11/pybind11.h:44,
                 from gen/third_party/pybind/include/pybind11/functional.h:12,
                 from gen/loadgen/bindings/python_api.cc:6:
gen/third_party/pybind/include/pybind11/detail/common.h:355:1: warning: identifier ‘constexpr’ is a keyword in C++11 [-Wc++0x-compat]
 inline static constexpr int log2(size_t n, int k = 0) { return (n <= 1) ? k : log2(n >> 1, k + 1); }
 ^
gen/third_party/pybind/include/pybind11/detail/common.h:367:5: warning: identifier ‘static_assert’ is a keyword in C++11 [-Wc++0x-compat]
     static_assert(sizeof(std::shared_ptr<int>) >= sizeof(std::unique_ptr<int>),
     ^
gen/third_party/pybind/include/pybind11/detail/common.h:433:5: warning: identifier ‘nullptr’ is a keyword in C++11 [-Wc++0x-compat]
     value_and_holder get_value_and_holder(const type_info *find_type = nullptr, bool throw_if_missing = true);
     ^
In file included from gen/third_party/pybind/include/pybind11/pytypes.h:12:0,
                 from gen/third_party/pybind/include/pybind11/cast.h:13,
                 from gen/third_party/pybind/include/pybind11/attr.h:13,
                 from gen/third_party/pybind/include/pybind11/pybind11.h:44,
                 from gen/third_party/pybind/include/pybind11/functional.h:12,
                 from gen/loadgen/bindings/python_api.cc:6:
gen/third_party/pybind/include/pybind11/detail/common.h:597:1: warning: identifier ‘decltype’ is a keyword in C++11 [-Wc++0x-compat]
 using is_template_base_of = decltype(is_template_base_of_impl<Base>::check((intrinsic_t<T>*)nullptr));
 ^
In file included from gen/third_party/pybind/include/pybind11/cast.h:13:0,
                 from gen/third_party/pybind/include/pybind11/attr.h:13,
                 from gen/third_party/pybind/include/pybind11/pybind11.h:44,
                 from gen/third_party/pybind/include/pybind11/functional.h:12,
                 from gen/loadgen/bindings/python_api.cc:6:
gen/third_party/pybind/include/pybind11/pytypes.h:238:5: warning: identifier ‘noexcept’ is a keyword in C++11 [-Wc++0x-compat]
     object(object &&other) noexcept { m_ptr = other.m_ptr; other.m_ptr = nullptr; }
     ^
In file included from gen/third_party/pybind/include/pybind11/pybind11.h:46:0,
                 from gen/third_party/pybind/include/pybind11/functional.h:12,
                 from gen/loadgen/bindings/python_api.cc:6:
gen/third_party/pybind/include/pybind11/detail/class.h:87:38: warning: missing terminating " character
     PyObject *result = PyRun_String(R"(\
                                      ^
gen/third_party/pybind/include/pybind11/detail/class.h:95:10: warning: missing terminating " character
         )", Py_file_input, d.ptr(), d.ptr()
          ^
In file included from gen/third_party/pybind/include/pybind11/pytypes.h:12:0,
                 from gen/third_party/pybind/include/pybind11/cast.h:13,
                 from gen/third_party/pybind/include/pybind11/attr.h:13,
                 from gen/third_party/pybind/include/pybind11/pybind11.h:44,
                 from gen/third_party/pybind/include/pybind11/functional.h:12,
                 from gen/loadgen/bindings/python_api.cc:6:
gen/third_party/pybind/include/pybind11/detail/common.h:298:7: error: expected nested-name-specifier before ‘ssize_t’
 using ssize_t = Py_ssize_t;
       ^
gen/third_party/pybind/include/pybind11/detail/common.h:299:7: error: expected nested-name-specifier before ‘size_t’
 using size_t  = std::size_t;
       ^
gen/third_party/pybind/include/pybind11/detail/common.h:302:1: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class return_value_policy : uint8_t {
 ^
gen/third_party/pybind/include/pybind11/detail/common.h:302:34: warning: scoped enums only available with -std=c++11 or -std=gnu++11
 enum class return_value_policy : uint8_t {
                                  ^
gen/third_party/pybind/include/pybind11/detail/common.h:355:15: error: ‘constexpr’ does not name a type
 inline static constexpr int log2(size_t n, int k = 0) { return (n <= 1) ? k : log2(n >> 1, k + 1); }
               ^
gen/third_party/pybind/include/pybind11/detail/common.h:355:15: note: C++11 ‘constexpr’ only available with -std=c++11 or -std=gnu++11
gen/third_party/pybind/include/pybind11/detail/common.h:358:15: error: ‘constexpr’ does not name a type
 inline static constexpr size_t size_in_ptrs(size_t s) { return 1 + ((s - 1) >> log2(sizeof(void *))); }
               ^
gen/third_party/pybind/include/pybind11/detail/common.h:358:15: note: C++11 ‘constexpr’ only available with -std=c++11 or -std=gnu++11
gen/third_party/pybind/include/pybind11/detail/common.h:366:1: error: ‘constexpr’ does not name a type
 constexpr size_t instance_simple_holder_in_ptrs() {
 ^
gen/third_party/pybind/include/pybind11/detail/common.h:366:1: note: C++11 ‘constexpr’ only available with -std=c++11 or -std=gnu++11
gen/third_party/pybind/include/pybind11/detail/common.h:386:70: error: ‘instance_simple_holder_in_ptrs’ was not declared in this scope
         void *simple_value_holder[1 + instance_simple_holder_in_ptrs()];
                                                                      ^
gen/third_party/pybind/include/pybind11/detail/common.h:436:12: error: ‘constexpr’ does not name a type
     static constexpr uint8_t status_holder_constructed  = 1;
            ^
gen/third_party/pybind/include/pybind11/detail/common.h:436:12: note: C++11 ‘constexpr’ only available with -std=c++11 or -std=gnu++11
gen/third_party/pybind/include/pybind11/detail/common.h:437:12: error: ‘constexpr’ does not name a type
     static constexpr uint8_t status_instance_registered = 2;
            ^
gen/third_party/pybind/include/pybind11/detail/common.h:437:12: note: C++11 ‘constexpr’ only available with -std=c++11 or -std=gnu++11
gen/third_party/pybind/include/pybind11/detail/common.h:433:72: error: ‘nullptr’ was not declared in this scope
     value_and_holder get_value_and_holder(const type_info *find_type = nullptr, bool throw_if_missing = true);
                                                                        ^
gen/third_party/pybind/include/pybind11/detail/common.h:440:14: error: expected constructor, destructor, or type conversion before ‘(’ token
 static_assert(std::is_standard_layout<instance>::value, "Internal error: `pybind11::detail::instance` is not standard layout!");
              ^
In file included from gen/third_party/pybind/include/pybind11/pytypes.h:12:0,
                 from gen/third_party/pybind/include/pybind11/cast.h:13,
                 from gen/third_party/pybind/include/pybind11/attr.h:13,
                 from gen/third_party/pybind/include/pybind11/pybind11.h:44,
                 from gen/third_party/pybind/include/pybind11/functional.h:12,
                 from gen/loadgen/bindings/python_api.cc:6:
gen/third_party/pybind/include/pybind11/detail/common.h:449:38: error: expected unqualified-id before ‘using’
 template <bool B, typename T = void> using enable_if_t = typename std::enable_if<B, T>::type;
                                      ^

and a very long stack trace follows. Thanks in advance!
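The decisive line is the #error about ISO C++ 2011 support: distutils is compiling the pybind11-based extension without -std=c++11 (under Python 2.7's default flags). One workaround, an assumption rather than a verified fix for this build, relies on distutils appending the CFLAGS environment variable to its compile command, so forcing a modern standard before rebuilding may get past this:

import os
import subprocess

# Re-run the build with a C++ standard new enough for the pybind11 headers.
env = dict(os.environ, CFLAGS="-std=c++14")
subprocess.check_call(["make", "mlperf_loadgen_pymodule"], env=env)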
