Floating Point Support For Smart Contracts

One of the common limitations of today’s smart contract platforms is that they don’t provide native support for floating point. There are a few solid reasons for this decision. For one, floating point math is complex and introduces all sorts of numerical messiness: NaNs pop out with distressing regularity, and basic mathematical operations behave in subtly unexpected ways (floating point addition isn’t even associative). Even worse, floating point can interact with multi-threaded code in strange ways, producing non-deterministic results. There’s also a good argument that financial code should avoid floating point, since numerical errors on real money could lead to dangerous outcomes. At the same time, many important numerical and machine learning applications rely on the range and precision that floating point provides, so it’s important to explore the topic of floats further.
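
The pitfalls above are easy to demonstrate. Here's a short sketch in Python (whose floats are IEEE 754 doubles, the same format a native EVM float would most plausibly use):

```python
import math

# NaNs pop out of innocuous-looking operations...
nan = float("inf") - float("inf")
print(math.isnan(nan))   # True

# ...and a NaN compares unequal even to itself.
print(nan == nan)        # False

# Addition is not associative: grouping changes the answer.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)       # 1.0
print(a + (b + c))       # 0.0  (the 1.0 is absorbed before a is added)

# Decimal fractions have no exact binary representation.
print(0.1 + 0.2 == 0.3)  # False
```

On a single machine these are merely annoying; across a network of validators that must all agree on a result bit-for-bit, they are exactly the kind of behavior consensus protocols cannot tolerate.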

There has been some exploration of adding fixed point arithmetic support to smart contracts. Unfortunately, fixed point arithmetic is often insufficient for numerical applications: with only a limited number of decimal places, operations such as exponentiation and logarithms can’t be computed accurately. This makes it hard to create smart contracts that use learning algorithms to make decisions more intelligently over time. There has been some early discussion of deep learning on the EVM, but the lack of floating point makes deploying such algorithms challenging. It might be possible to use quantization to perform inference with trained models, but quantization remains challenging and unwieldy in practice. That said, there has been some intriguing early work towards such ML smart contract deployments.
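
To see how fixed point falls short, consider a toy compound-growth calculation — the kind of repeated multiplication that exponentials reduce to. The 4-decimal scale and the helper names below are invented for the illustration:

```python
SCALE = 10_000  # hypothetical fixed point format with 4 decimal places

def fmul(a: int, b: int) -> int:
    """Fixed point multiply with truncation, as integer-only EVM code would do it."""
    return a * b // SCALE

def compound_fixed(rate: int, periods: int) -> int:
    """Apply a per-period growth rate repeatedly in fixed point."""
    acc = SCALE  # start at 1.0000
    for _ in range(periods):
        acc = fmul(acc, rate)
    return acc

# 0.01% growth per period over 10,000 periods; the true value is
# (1.0001)^10000, which is roughly e ~= 2.7181.
fixed = compound_fixed(10_001, 10_000) / SCALE  # 2.0 -- truncation eats the tail
floating = 1.0001 ** 10_000                     # ~2.7181

print(fixed, floating)
```

Each fixed point multiply silently discards the fractional part below the fourth decimal place, and over ten thousand steps that truncation compounds into a 25% error — the fixed point answer is 2.0 where the true value is about 2.72. Exponentials and logarithms amplify exactly this kind of error, which is why fixed point struggles with them.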

You might reasonably ask whether this discussion of deploying ML smart contracts is entirely academic. The storage limitations of the current Ethereum system are quite restrictive, and storage bloat is a major issue developers are already wrestling with; wouldn’t deploying learned weights into smart contracts be expensive and counterproductive? At present, it is true that these limitations hold. But there is a whole host of exciting work aimed at loosening them. Work on creating sharded blockchain systems is proceeding apace: a system like Ethereum 2.0 with 1000 shards would have cheaper storage, which might allow small trained models to be deployed fruitfully. Further out, as storage networks such as Filecoin mature, it might become possible to deploy trained models onto those networks and have smart contracts invoke computation using the stored weights.

Native support for floating point on the EVM would make it feasible to start experimenting with intelligent smart contracts. As storage matures, these learned models could become increasingly sophisticated and borrow from the wealth of deep learning research happening today. In addition, I suspect that floating point support would make existing smart contracts much more ergonomic. In our Computable contracts, the support curve math has proven very tricky to implement without floating point: the support price function has to employ complicated transformations between units to keep integer rounding errors under control. A small amount of floating point rounding error would likely be an acceptable tradeoff if it reduces the complexity of the financial math, and the code would likely become simpler and easier to audit as well.
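
The unit-juggling problem is easy to illustrate with a toy price quotient. The numbers and the two-decimal price scale below are invented for the demo — this is not the actual Computable support price function:

```python
PRICE_SCALE = 100  # hypothetical: prices quoted to 2 decimal places

reserve, supply = 5, 3  # toy values: 5 units of reserve backing 3 tokens

# Dividing first throws the fractional part away before scaling can save it...
naive = reserve // supply * PRICE_SCALE    # 100, i.e. a price of 1.00

# ...so integer code must reorder operations and carry scaled units through
# every intermediate expression (risking overflow along the way).
careful = reserve * PRICE_SCALE // supply  # 166, i.e. a price of 1.66

# With floats, the straightforward expression just works, up to a tiny error.
floating = reserve / supply                # 1.666...

print(naive, careful, floating)
```

Every formula in an integer-only contract has to be arranged this way by hand — multiply before dividing, scale up, scale back down — and each rearrangement is a place for an auditor to find (or miss) a bug. That is the complexity a native float type would remove.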