
retinanet's Introduction

Retina-Net

Focal Loss for Dense Object Detection

This code is an unofficial implementation of RetinaNet from Focal Loss for Dense Object Detection (https://arxiv.org/abs/1708.02002).

The focal loss operator used here is available at https://github.com/unsky/focal-loss.
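For reference, the loss being implemented is the focal loss FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t) from the paper. Below is a minimal NumPy sketch of the per-anchor binary form (alpha=0.25 and gamma=2 are the paper's defaults); it mirrors the expression used in the repo's training metric quoted further down this page and is an illustration, not the repo's CUDA operator.

import numpy as np

def focal_loss(p, labels, alpha=0.25, gamma=2.0, eps=1e-14):
    # p: predicted probability of the positive class, same shape as labels.
    # labels: 1 for positive anchors, 0 for negative anchors.
    # The positive term is down-weighted by (1 - p)^gamma and the negative term
    # by p^gamma, which suppresses the contribution of easy, well-classified anchors.
    pos = -alpha * labels * (1.0 - p + eps) ** gamma * np.log(p + eps)
    neg = -(1.0 - alpha) * (1.0 - labels) * (p + eps) ** gamma * np.log(1.0 - p + eps)
    return np.sum(pos + neg)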

usage

Install MXNet v0.9.5.

  1. Download the dataset into data/.

  2. Download the pretrained params from https://onedrive.live.com/?authkey=%21AI3oSHAoAIbxAB8&cid=F371D9563727B96F&id=F371D9563727B96F%21102802&parId=F371D9563727B96F%21102787&action=locate

./init.sh

train & test

python train.py --cfg kitti.yaml

python test.py --cfg kitti.yaml 


retinanet's Issues

The error when training

@unsky, nice work!

When training, the following error occurs; the details are below:
#########################################train#######################################
./data/VOCdevkit2007/VOC2007/JPEGImages/2009_002123.jpg
./data/VOCdevkit2007/VOC2007/JPEGImages/000783.jpg
[08:24:35] /home/chengshuai/mx-maskrcnn-master1/incubator-mxnet/dmlc-core/include/dmlc/logging.h:308: [08:24:35] /home/chengshuai/mx-maskrcnn-master1/incubator-mxnet/mshadow/mshadow/././././cuda/tensor_gpu-inl.cuh:58: too large launch parameter: Softmax[89847,1], [256,1,1]

Stack trace returned 10 entries:
[bt] (0) /usr/local/lib/python2.7/dist-packages/mxnet-0.12.1-py2.7.egg/mxnet/libmxnet.so(_ZN4dmlc15LogMessageFatalD1Ev+0x3c) [0x7f0b7ad7f70c]
[bt] (1) /usr/local/lib/python2.7/dist-packages/mxnet-0.12.1-py2.7.egg/mxnet/libmxnet.so(_ZN7mshadow4cuda16CheckLaunchParamE4dim3S1_PKc+0x165) [0x7f0b7d3e83f5]
[bt] (2) /usr/local/lib/python2.7/dist-packages/mxnet-0.12.1-py2.7.egg/mxnet/libmxnet.so(ZN7mshadow4cuda7SoftmaxIfEEvRKNS_6TensorINS_3gpuELi2ET_EES7+0xfa) [0x7f0b7e3ec24a]
[bt] (3) /usr/local/lib/python2.7/dist-packages/mxnet-0.12.1-py2.7.egg/mxnet/libmxnet.so(ZN5mxnet2op19SoftmaxActivationOpIN7mshadow3gpuEE7ForwardERKNS_9OpContextERKSt6vectorINS_5TBlobESaIS9_EERKS8_INS_9OpReqTypeESaISE_EESD_SD+0x20b) [0x7f0b7e4fe57b]
[bt] (4) /usr/local/lib/python2.7/dist-packages/mxnet-0.12.1-py2.7.egg/mxnet/libmxnet.so(ZN5mxnet2op13OperatorState7ForwardERKNS_9OpContextERKSt6vectorINS_5TBlobESaIS6_EERKS5_INS_9OpReqTypeESaISB_EESA+0x354) [0x7f0b7d04a524]
[bt] (5) /usr/local/lib/python2.7/dist-packages/mxnet-0.12.1-py2.7.egg/mxnet/libmxnet.so(ZZN5mxnet10imperative12PushOperatorERKNS_10OpStatePtrEPKN4nnvm2OpERKNS4_9NodeAttrsERKNS_7ContextERKSt6vectorIPNS_6engine3VarESaISH_EESL_RKSE_INS_8ResourceESaISM_EERKSE_IPNS_7NDArrayESaISS_EESW_RKSE_IjSaIjEERKSE_INS_9OpReqTypeESaIS11_EENS_12DispatchModeEENKUlNS_10RunContextENSF_18CallbackOnCompleteEE0_clES17_S18+0x2a0) [0x7f0b7cec2950]
[bt] (6) /usr/local/lib/python2.7/dist-packages/mxnet-0.12.1-py2.7.egg/mxnet/libmxnet.so(_ZN5mxnet6engine14ThreadedEngine15ExecuteOprBlockENS_10RunContextEPNS0_8OprBlockE+0x9d) [0x7f0b7ce3fc6d]
[bt] (7) /usr/local/lib/python2.7/dist-packages/mxnet-0.12.1-py2.7.egg/mxnet/libmxnet.so(_ZN5mxnet6engine23ThreadedEnginePerDevice9GPUWorkerILN4dmlc19ConcurrentQueueTypeE0EEEvNS_7ContextEbPNS1_17ThreadWorkerBlockIXT_EEESt10shared_ptrINS0_10ThreadPool11SimpleEventEE+0xf3) [0x7f0b7ce43cb3]
[bt] (8) /usr/local/lib/python2.7/dist-packages/mxnet-0.12.1-py2.7.egg/mxnet/libmxnet.so(ZNSt17_Function_handlerIFvSt10shared_ptrIN5mxnet6engine10ThreadPool11SimpleEventEEEZZNS2_23ThreadedEnginePerDevice13PushToExecuteEPNS2_8OprBlockEbENKUlvE1_clEvEUlS5_E_E9_M_invokeERKSt9_Any_dataS5+0x56) [0x7f0b7ce43e96]
[bt] (9) /usr/local/lib/python2.7/dist-packages/mxnet-0.12.1-py2.7.egg/mxnet/libmxnet.so(_ZNSt6thread5_ImplISt12_Bind_simpleIFSt8functionIFvSt10shared_ptrIN5mxnet6engine10ThreadPool11SimpleEventEEEES8_EEE6_M_runEv+0x3b) [0x7f0b7ce410cb]

(The threaded engine then re-reports the same "too large launch parameter: Softmax[89847,1], [256,1,1]" error and stack trace, prints MXNet's standard hint about setting MXNET_ENGINE_TYPE to NaiveEngine and rerunning under a debugger, and the process terminates with an uncaught dmlc::Error.)

What is the problem? Is it caused by the MXNet version?

Thanks!
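Not an official fix, but for context: the failing kernel is mshadow's old 2-D Softmax, which launches one CUDA block per row and caps the grid at 65535 blocks in older mshadow versions, so an (89847, num_classes) score blob fed through SoftmaxActivation overflows it. Note also that the trace shows mxnet 0.12.1 while the README above pins v0.9.5. A minimal sketch of the symptom and one possible workaround follows; the (89847, 2) shape is taken from the log, and switching to the newer generic softmax operator is an assumption that depends on the MXNet version installed.

import mxnet as mx

# Hypothetical (num_anchors, num_classes) score blob; 89847 rows is the count
# reported in the log above and exceeds the old kernel's grid limit. Requires a GPU.
scores = mx.nd.ones((89847, 2), ctx=mx.gpu(0))

# out = mx.nd.SoftmaxActivation(scores)   # old kernel: "too large launch parameter"
out = mx.nd.softmax(scores, axis=-1)      # newer softmax op does not hit that grid limit
out.wait_to_read()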

AttributeError: 'dict' object has no attribute 'iteritems' when running ./init.sh

ENV: Python 3.5.5, CUDA 8.0, MXNet 0.9.5, 2× GTX 1080 Ti

I ran ./init.sh after installing the dependencies (Cython, mxnet_cu80==0.9.5) and got this error:

Traceback (most recent call last):
File "setup_linux.py", line 56, in
CUDA = locate_cuda()
File "setup_linux.py", line 51, in locate_cuda
for k, v in cudaconfig.iteritems():
AttributeError: 'dict' object has no attribute 'iteritems'

What should I do?
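This is a Python 2 vs. Python 3 difference rather than something specific to this repo: dict.iteritems() was removed in Python 3, and the environment above is Python 3.5. Changing the call in setup_linux.py's locate_cuda() to dict.items() should get past this error. A self-contained sketch of the change (the cudaconfig contents below are made up for illustration):

# Illustrative values; in setup_linux.py this dict is built by locate_cuda().
cudaconfig = {'home': '/usr/local/cuda', 'nvcc': '/usr/local/cuda/bin/nvcc'}

# for k, v in cudaconfig.iteritems():   # Python 2 only -> AttributeError on Python 3
for k, v in cudaconfig.items():         # works on both Python 2 and Python 3
    print(k, v)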

test problem

I can run train.py successfully, but I can't run test.py.
Traceback (most recent call last):
File "/home/zzz/pycharm-community-2017.2.3/helpers/pydev/pydevd.py", line 1599, in
globals = debugger.run(setup['file'], None, None, is_module)
File "/home/zzz/pycharm-community-2017.2.3/helpers/pydev/pydevd.py", line 1026, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/zzz/RetinaNet/retinanet/test.py", line 56, in
main()
File "/home/zzz/RetinaNet/retinanet/test.py", line 53, in main
args.vis, args.ignore_cache, args.shuffle, config.TEST.HAS_RPN, config.dataset.proposal, args.thresh, logger=logger, output_path=final_output_path)
File "/home/zzz/RetinaNet/retinanet/function/test_rcnn.py", line 37, in test_rcnn
imdb = eval(dataset)(image_set, root_path, dataset_path, result_path=output_path)
File "/home/zzz/RetinaNet/retinanet/../lib/dataset/kitti.py", line 38, in init
self._image_index = self._load_image_set_index()
File "/home/zzz/RetinaNet/retinanet/../lib/dataset/kitti.py", line 69, in _load_image_set_index
'Path does not exist: {}'.format(image_set_file)
AssertionError: Path does not exist: ../data/kitti/ImageSets/imageset.txt

I downloaded the dataset you provide, but it doesn't contain the ImageSets folder. What should I do?

Thanks. @unsky
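If the archive really ships without ImageSets, one workaround is to generate the index file from the image directory yourself. A minimal sketch: the data/kitti/ImageSets/imageset.txt path comes from the assertion above, but the image folder name (image_2, KITTI's usual layout) and the relative paths are assumptions you may need to adjust.

import os

img_dir = './data/kitti/image_2'                   # assumed location of the KITTI images
out_file = './data/kitti/ImageSets/imageset.txt'   # path expected by lib/dataset/kitti.py

# Create the ImageSets folder if it is missing, then write one image id per line.
if not os.path.isdir(os.path.dirname(out_file)):
    os.makedirs(os.path.dirname(out_file))
names = sorted(os.path.splitext(f)[0] for f in os.listdir(img_dir)
               if f.lower().endswith(('.png', '.jpg')))
with open(out_file, 'w') as fh:
    fh.write('\n'.join(names) + '\n')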

Input image size

The paper reports an input size of 600x600. Am I right in thinking that this parameter is set in generator.py and that the default is 1333x800?

Problem encountered while training

Hi, when I use your code with my MXNet build, I get "ImportError: cpu_nms.so: undefined symbol: PyFPE_jbuf". Can you tell me how to solve it? I am looking forward to hearing from you. Thank you! @unsky

Training on VOC07 with your code and parameters gives unsatisfactory results

Your work is very nice, but when I train on VOC07 with your code and parameters for 50 epochs, I get Train-RetinaTotalAcc=0.872032, RetinaFocalLoss=0.210117, RetinaL1Loss=0.117969, yet the final result on the VOC test set is only Mean AP@0.5 = 0.2392 and Mean AP@0.7 = 0.0769. This is very strange; what could be the reason? My detailed train and test logs are below.

1
VOC07 metric? Y
AP for aeroplane = 0.2743
AP for bicycle = 0.2317
AP for bird = 0.1421
AP for boat = 0.0626
AP for bottle = 0.0513
AP for bus = 0.3651
AP for car = 0.5351
AP for cat = 0.3187
AP for chair = 0.0742
AP for cow = 0.2298
AP for diningtable = 0.1020
AP for dog = 0.2814
AP for horse = 0.3285
AP for motorbike = 0.3521
AP for person = 0.3603
AP for pottedplant = 0.1216
AP for sheep = 0.2260
AP for sofa = 0.1875
AP for train = 0.2995
AP for tvmonitor = 0.2410
Mean AP@0.5 = 0.2392
AP for aeroplane = 0.0355
AP for bicycle = 0.0239
AP for bird = 0.0420
AP for boat = 0.0303
AP for bottle = 0.0070
AP for bus = 0.1550
AP for car = 0.1918
AP for cat = 0.1068
AP for chair = 0.0140
AP for cow = 0.1064
AP for diningtable = 0.0465
AP for dog = 0.0552
AP for horse = 0.0889
AP for motorbike = 0.0846
AP for person = 0.1145
AP for pottedplant = 0.0364
AP for sheep = 0.0675
AP for sofa = 0.0982
AP for train = 0.0973
AP for tvmonitor = 0.1370
Mean AP@0.7 = 0.0769
What could be the problem? Are you still updating this project?

Usage problem

I have not used python train.py --cfg kitti.yaml to train a KITTI model; I want to run "python train.py --cfg pascal_voc.yaml" directly, but I cannot find the model "rcnn_kitti-0015.params".

mxnet.base.MXNetError: [17:26:44] src/io/local_filesys.cc:154: Check failed: allow_null LocalFileSystem: fail to open "./output/retinanet/pascal_voc/2007_trainval/rcnn_kitti-0015.params"

I am new to MXNet. Which pretrained model did you use, and where can I find it? Could you give me a link?

cls_loss broadcast error

This happens when using a custom dataset with nonuniform image sizes.

File "/home/xiangyong/Workbench/RetinaNet-unsky/retinanet/core/metric.py", line 73, in update
cls_loss = np.sum(-1 * alpha * labels * np.power(1 - cls_+eps, gamma) * np.log(cls_+eps) - (1-labels)*(1-alpha) * np.power(1 - ( 1-cls_)+eps, gamma) * np.log( 1-cls_+eps))
ValueError: operands could not be broadcast together with shapes (1,20,123) (1,5,123)
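A minimal sketch of what those shapes imply: labels has one channel per dataset class (20, i.e. Pascal VOC) while cls_ has 5 (the KITTI setting), so the elementwise focal-loss expression in metric.py cannot broadcast. The shapes are taken from the traceback; the diagnosis that the class count in the config does not match the custom dataset (rather than the nonuniform image sizes being the cause) is an assumption.

import numpy as np

labels = np.zeros((1, 20, 123))  # (batch, classes in the label blob, anchors)
cls_   = np.zeros((1, 5, 123))   # (batch, classes predicted by the network, anchors)

try:
    labels * cls_                # the same elementwise product metric.py computes
except ValueError as err:
    print(err)                   # operands could not be broadcast together ...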

Focal loss for Faster R-CNN

Hi, if I apply focal loss to Faster R-CNN, should it replace the cross-entropy loss of the RPN or of the R-CNN head? So far the mAP I get barely changes. Thanks.
