
Posts

Featured

Brain Numbers

David M. Berry

Due to limitations on space I have had to cut this from my forthcoming Artificial Intelligence and Critical Theory (MUP) book. But I didn't want to lose the information, and I think others will also find the overview useful, so I have pasted it into this blog post.

In 2017, Google's Brain team designed a 16-bit floating-point format for training neural networks on their AI chips, called Tensor Processing Units (TPUs). They called it bfloat16, "brain float", and the name has stuck. BF16 is now the default numerical type for most large-scale AI training. Every large language model we interact with was almost certainly trained in brain numbers. The standard 32-bit floating-point format, FP32, uses 8 bits for the exponent and 23 for the mantissa, giving roughly 7.2 significant decimal digits, which works out at 4.3 billion representable values. BF16 keeps the same 8 exponent bits but reduces the mantissa to 7, yielding about 2.4 decimal digits and 65,000 representabl...
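The bit-layout relationship described above can be seen directly in code. Because BF16 is simply the top 16 bits of an FP32 value (same sign bit, same 8 exponent bits, the first 7 of the 23 mantissa bits), a minimal sketch of the conversion is just a bit shift. Note this sketch truncates the discarded mantissa bits for simplicity; real hardware typically rounds to nearest. The function names here are illustrative, not from any library.

```python
import struct

def fp32_to_bf16_bits(x: float) -> int:
    """Convert FP32 to bfloat16 by keeping only the top 16 bits (truncation)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bf16_to_fp32(bits: int) -> float:
    """Re-expand 16 bfloat16 bits to FP32 by zero-padding the lost mantissa bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", bits << 16))
    return x

# BF16 keeps the full FP32 exponent range but only 7 mantissa bits,
# so a value survives the round trip with roughly 2-3 decimal digits:
print(bf16_to_fp32(fp32_to_bf16_bits(3.14159265)))  # → 3.140625
```

The round trip above shows why BF16 suits neural-network training: the dynamic range (set by the exponent) is unchanged from FP32, and only precision (the mantissa) is sacrificed.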

Latest Posts

L'Intelligence Artificielle, c'est la Guerre

Generation Vector