Sorry, this is my first time posting an issue.
I started using FINN fairly recently, and I want to implement an accelerator on an Alveo U200.
My network is a LeNet-5 CNN, implemented and trained with Brevitas. It takes an MNIST image of shape (1, 1, 28, 28) in FP32, weights and activations are quantized to 1 bit (W1A1), and the output has shape (1, 10) in FP32.
I used the cybersecurity end2end_example as a reference.
I successfully exported the trained model to ONNX, and I can run the build in estimates-only mode.
Now I'm trying to build the stitched IP, run out-of-context synthesis, and measure RTLsim performance with the following config:
```python
import finn.builder.build_dataflow as build
import finn.builder.build_dataflow_config as build_cfg

rtlsim_output_dir = "output_ipstitch_ooc_rtlsim"

cfg_stitched_ip = build.DataflowBuildConfig(
    output_dir=rtlsim_output_dir,
    mvau_wwidth_max=10000,
    target_fps=10000,
    synth_clk_period_ns=5.0,  # 5 ns -> 200 MHz; 10 ns -> 100 MHz
    board="U200",
    fpga_part="xcu200-fsgd2104-2-e",
    generate_outputs=[
        build_cfg.DataflowOutputType.STITCHED_IP,
        build_cfg.DataflowOutputType.RTLSIM_PERFORMANCE,
        build_cfg.DataflowOutputType.OOC_SYNTH,
    ],
)
```
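(As a side note on the `synth_clk_period_ns` comment: the target frequency is just the reciprocal of the clock period. A tiny sanity check of that arithmetic; the helper name is mine, not a FINN API.)

```python
def clk_freq_mhz(period_ns: float) -> float:
    """Clock frequency in MHz for a given clock period in nanoseconds."""
    # f [MHz] = 1000 / T [ns], e.g. 5 ns -> 200 MHz
    return 1e3 / period_ns

print(clk_freq_mhz(5.0))   # 200 MHz target
print(clk_freq_mhz(10.0))  # 100 MHz target
```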
I get the following output:
```
Building dataflow accelerator from MNIST.LeNet-5.Brevitas.W1.A1.onnx
Intermediate outputs will be generated in /tmp/finn_dev_user1
Final outputs will be generated in output_ipstitch_ooc_rtlsim
Build log is at output_ipstitch_ooc_rtlsim/build_dataflow.log
Running step: step_qonnx_to_finn [1/17]
Running step: step_tidy_up [2/17]
Running step: step_streamline [3/17]
Running step: step_convert_to_hls [4/17]
Running step: step_create_dataflow_partition [5/17]
Running step: step_target_fps_parallelization [6/17]
Running step: step_apply_folding_config [7/17]
Running step: step_generate_estimate_reports [8/17]
Running step: step_hls_codegen [9/17]
Traceback (most recent call last):
  File "/workspace/finn/src/finn/builder/build_dataflow.py", line 166, in build_dataflow_cfg
    model = transform_step(model, cfg)
  File "/workspace/finn/src/finn/builder/build_dataflow_steps.py", line 416, in step_hls_codegen
    model = model.transform(
  File "/workspace/finn-base/src/finn/core/modelwrapper.py", line 141, in transform
    (transformed_model, model_was_changed) = transformation.apply(
  File "/workspace/finn/src/finn/transformation/fpgadataflow/prepare_ip.py", line 88, in apply
    _codegen_single_node(node, model, self.fpgapart, self.clk)
  File "/workspace/finn/src/finn/transformation/fpgadataflow/prepare_ip.py", line 55, in _codegen_single_node
    inst.code_generation_ipgen(model, fpgapart, clk)
  File "/workspace/finn/src/finn/custom_op/fpgadataflow/hlscustomop.py", line 275, in code_generation_ipgen
    self.docompute()
  File "/workspace/finn/src/finn/custom_op/fpgadataflow/streamingfclayer_batch.py", line 1103, in docompute
    tmpl_args = self.get_template_param_values()
  File "/workspace/finn/src/finn/custom_op/fpgadataflow/streamingfclayer_batch.py", line 504, in get_template_param_values
    raise Exception("True binary (non-bipolar) inputs not yet supported")
Exception: True binary (non-bipolar) inputs not yet supported
```

The code around the point of failure (from the post-mortem debugger):

```
/workspace/finn/src/finn/custom_op/fpgadataflow/streamingfclayer_batch.py(504)get_template_param_values()
    502     bin_xnor_mode = self.get_nodeattr("binaryXnorMode") == 1
    503     if (inp_is_binary or wt_is_binary) and (not bin_xnor_mode):
--> 504         raise Exception("True binary (non-bipolar) inputs not yet supported")
    505     inp_is_bipolar = self.get_input_datatype() == DataType["BIPOLAR"]
    506     # out_is_bipolar = self.get_output_datatype() == DataType["BIPOLAR"]
```
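If I understand the error correctly, the StreamingFCLayer expects 1-bit values to be BIPOLAR ({-1, +1}) rather than true BINARY ({0, 1}), since W1A1 products can then be computed as XNOR-popcount (unless `binaryXnorMode` is set). A self-contained sketch of that identity, as my own illustration rather than FINN code:

```python
def bipolar_dot(a, b):
    """Dot product of two bipolar (+1/-1) vectors."""
    return sum(x * y for x, y in zip(a, b))

def xnor_popcount_dot(a_bits, b_bits):
    """Same result computed from the 0/1 bit encodings via XNOR-popcount.

    With bits = (v + 1) / 2, each agreeing bit pair contributes +1 to the
    bipolar product and each disagreeing pair -1, so
    dot = 2 * popcount(xnor(a_bits, b_bits)) - N.
    """
    n = len(a_bits)
    popcount = sum(1 for x, y in zip(a_bits, b_bits) if x == y)  # XNOR then count ones
    return 2 * popcount - n

a = [+1, -1, -1, +1]
b = [+1, +1, -1, -1]
a_bits = [(v + 1) // 2 for v in a]  # bipolar -> binary encoding
b_bits = [(v + 1) // 2 for v in b]
print(bipolar_dot(a, b), xnor_popcount_dot(a_bits, b_bits))  # both 0
```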
What can I do to debug this problem?
Please let me know if you need more information about anything I didn't mention.