Floating Point Precision

Any technical questions about the Epiphany chip and Parallella HW Platform.

Moderator: aolofsson

Floating Point Precision

Postby mxfreak » Sat Apr 12, 2014 9:33 am

Hello,

I have a question with regard to floating point precision:

As far as I know, double precision is what most applications use for floating point calculations these days, isn't it?

The Epiphany chip is described as a leading chip for floating point operations, but according to the manual, FPU calculations are done in single precision. Isn't that a contradiction?

Why does Epiphany use the old standard (32-bit FPU)?

Best regards,
mx
mxfreak
 
Posts: 8
Joined: Wed Mar 12, 2014 3:05 pm

Re: Floating Point Precision

Postby ysapir » Sat Apr 12, 2014 12:52 pm

Good question, but I think there are a couple of misconceptions here. First, double-precision floating point is not "newer" than single precision; they were standardized at about the same time. Single-precision floating point is, as its name implies, less accurate than double precision. However, not all applications require that higher precision. It is definitely possible that your specific application will gain from higher precision, and thus you should choose your data type accordingly. But it can be otherwise: graphics applications come to mind, where the extra precision of doubles adds a negligible increase in image quality.
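
To make the accuracy gap concrete, here is a minimal sketch in plain C (host-side; nothing Epiphany-specific assumed), printing the same constant stored both ways:

    #include <stdio.h>

    int main(void)
    {
        /* A float keeps roughly 7 significant decimal digits,
           a double roughly 15-16. */
        float  f = 3.14159265358979323846f;
        double d = 3.14159265358979323846;

        printf("float : %.20f\n", f);  /* drifts after ~7 digits  */
        printf("double: %.20f\n", d);  /* drifts after ~16 digits */
        return 0;
    }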

Similarly, even double precision is not always precise enough, or it is just as imprecise as single precision. When doing arbitrary-precision math, or exact calculations dealing with fractions, the inherent limitation of the binary representation makes doubles just as (un)useful as singles.
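
A classic illustration of that binary limitation (again plain C, not Epiphany-specific): 0.1 has no finite binary expansion, so it is stored inexactly at either precision, and the error shows up in both:

    #include <stdio.h>

    int main(void)
    {
        /* 0.1 cannot be represented exactly in binary, so both
           types store only an approximation of it. */
        printf("float  0.1 -> %.20f\n", 0.1f);
        printf("double 0.1 -> %.20f\n", 0.1);

        /* The rounding error accumulates: ten additions of 0.1
           do not sum to exactly 1.0, even with doubles. */
        double sum = 0.0;
        for (int i = 0; i < 10; i++)
            sum += 0.1;
        printf("sum == 1.0 ? %s\n", sum == 1.0 ? "yes" : "no");
        return 0;
    }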

Using double-precision math has its own cost. Your program will require more memory, more registers, more power and, sometimes, more time to run. In the embedded computing world, as opposed to the PC world, these factors can be very costly and can even inhibit the whole implementation. This is why so many floating point processors support only singles, and even more do not support hardware floating point math at all. In the latter case, using floats means a software library implementation; in the former, using doubles means the same.
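
The memory part of that cost is easy to demonstrate; a minimal sketch (standard C; with the 32 KB of local memory each Epiphany core has, a working set that fits in floats may no longer fit in doubles):

    #include <stdio.h>

    int main(void)
    {
        /* Doubles take twice the storage of floats, which matters
           when each core has only 32 KB of local memory. */
        printf("sizeof(float)  = %zu\n", sizeof(float));   /* 4 */
        printf("sizeof(double) = %zu\n", sizeof(double));  /* 8 */

        float  fbuf[2048];  /*  8 KB */
        double dbuf[2048];  /* 16 KB: half the local store */
        printf("2048 floats : %zu bytes\n", sizeof fbuf);
        printf("2048 doubles: %zu bytes\n", sizeof dbuf);
        return 0;
    }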

The bottom line, like every other aspect of computing, is that it all boils down to your budget, which is tightly coupled to your application. Weighing the parameters I mentioned above, Adapteva made the design decision to go with single-precision math, knowing that floating point is a necessary feature but not wanting to pay the cost of double precision, and realizing that it can be implemented in software if required.
ysapir
 
Posts: 393
Joined: Tue Dec 11, 2012 7:05 pm

Re: Floating Point Precision

Postby mxfreak » Sun Apr 13, 2014 10:09 am

It has become clearer to me now.
Thank you very much for your detailed answer!
mxfreak
 
Posts: 8
Joined: Wed Mar 12, 2014 3:05 pm

