# bf16
Here are 3 public repositories matching this topic...
A flexible utility for converting tensor precision in PyTorch models and safetensors files, enabling efficient deployment across various platforms.
Topics: machine-learning, deep-learning, utilities, deployment, toolkit, optimization, pytorch, fp16, model-compression, model-conversion, model-checkpoint, bf16, tensor-precision, efficient-deployment

Updated Aug 24, 2023 - Python
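The core of such a precision-conversion utility is the float32-to-bfloat16 cast itself. A minimal sketch in pure Python follows (standard library only; the function names are illustrative, not this repository's API):

```python
import struct

def fp32_to_bf16_bits(value):
    """Convert a Python float to its bfloat16 bit pattern.

    bfloat16 keeps float32's 8-bit exponent but only 7 mantissa bits,
    so conversion is a truncation of the upper 16 bits, here with
    round-to-nearest-even applied before truncating.
    """
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    # Round to nearest even: bias by 0x7FFF plus the lowest kept bit.
    rounding_bias = 0x7FFF + ((bits >> 16) & 1)
    return ((bits + rounding_bias) >> 16) & 0xFFFF

def bf16_bits_to_fp32(bits16):
    """Widen bfloat16 bits back to float32 by zero-padding the mantissa."""
    (value,) = struct.unpack("<f", struct.pack("<I", bits16 << 16)) 
    return value
```

Because bfloat16 shares float32's exponent range, this cast never overflows; only mantissa precision is lost, which is why it is a popular checkpoint-compression format.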
Converts a floating-point number, or the hexadecimal representation of a floating-point number, into various formats and displays them in binary/hexadecimal.
Updated Dec 23, 2021 - C