Hi, I saw the benchmark in the README and have a few questions.

1. What platform was used for the inference-time measurements?
2. Is there any NEON acceleration for the depthwise convolutions in MobileNet? (See the sketch below.)
3. There is a large gap between theoretical and actual acceleration in general. MobileNet requires roughly twice the computation of CondenseNet-74 (C=G=8), but I would still like to know the speed difference after each network is specifically optimized; a quick ratio check follows the table.
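To make question 2 concrete, here is a minimal sketch of a NEON-vectorized 3x3, stride-1 depthwise convolution over a single channel. The function name, memory layout, and lack of padding are assumptions for illustration only; this is not the code behind the benchmark.

```c
/* Hypothetical sketch: 3x3, stride-1, no-padding depthwise convolution
 * for one channel, vectorized along the width with NEON intrinsics.
 * Layout (row-major), names, and bounds handling are all assumptions. */
#include <arm_neon.h>

void depthwise_conv3x3_neon(const float *in, int in_h, int in_w,
                            const float *kernel, /* 9 floats, row-major */
                            float *out)          /* (in_h-2) x (in_w-2) */
{
    int out_h = in_h - 2, out_w = in_w - 2;
    for (int y = 0; y < out_h; ++y) {
        int x = 0;
        /* Vector body: four output pixels per iteration. */
        for (; x + 4 <= out_w; x += 4) {
            float32x4_t acc = vdupq_n_f32(0.0f);
            for (int ky = 0; ky < 3; ++ky)
                for (int kx = 0; kx < 3; ++kx) {
                    /* Load 4 neighboring inputs, fused multiply-add with
                     * the scalar kernel tap. */
                    float32x4_t v = vld1q_f32(&in[(y + ky) * in_w + x + kx]);
                    acc = vmlaq_n_f32(acc, v, kernel[ky * 3 + kx]);
                }
            vst1q_f32(&out[y * out_w + x], acc);
        }
        /* Scalar tail for widths not divisible by 4. */
        for (; x < out_w; ++x) {
            float s = 0.0f;
            for (int ky = 0; ky < 3; ++ky)
                for (int kx = 0; kx < 3; ++kx)
                    s += in[(y + ky) * in_w + x + kx] * kernel[ky * 3 + kx];
            out[y * out_w + x] = s;
        }
    }
}
```

Each `vmlaq_n_f32` fuses a multiply and an add across four output pixels; whether the benchmarked MobileNet used something along these lines is exactly what I am asking.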
**Inference time on ARM platform**

| Model | FLOPs | Top-1 error (%) | Time (s) |
| --- | --- | --- | --- |
| VGG-16 | 15,300M | 28.5 | 354 |
| ResNet-18 | 1,818M | 30.2 | 8.14 |
| 1.0 MobileNet-224 | 569M | 29.4 | 1.96 |
| CondenseNet-74 (C=G=4) | 529M | 26.2 | 1.89 |
| CondenseNet-74 (C=G=8) | 274M | 29.0 | 0.99 |
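As a quick sanity check on question 3, the FLOPs ratio and the measured time ratio for the MobileNet vs. CondenseNet-74 (C=G=8) pair can be compared directly; the numbers below are copied verbatim from the table, nothing else is assumed.

```c
#include <stdio.h>

int main(void)
{
    /* 1.0 MobileNet-224 vs CondenseNet-74 (C=G=8), values from the table. */
    double mobilenet_flops  = 569e6, condensenet_flops  = 274e6;
    double mobilenet_time_s = 1.96,  condensenet_time_s = 0.99;

    /* FLOPs-based (theoretical) speedup vs measured wall-clock speedup. */
    printf("theoretical: %.2fx\n", mobilenet_flops / condensenet_flops);   /* ~2.08x */
    printf("measured:    %.2fx\n", mobilenet_time_s / condensenet_time_s); /* ~1.98x */
    return 0;
}
```

For this particular pair the two ratios are close; the open question is whether that still holds once the depthwise convolutions are specifically optimized.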