paper on extended "double-single" precision


paper on extended "double-single" precision

Postby notzed » Sat Jun 14, 2014 11:05 am

The paper "Extended-Precision Floating-Point Numbers for GPU Computation" by Andrew Thall may be of use for the Epiphany.

It outlines the basic floating-point operations for a double-single format, which I presume should be faster (and smaller) than a software IEEE double library. Every operation decomposes into 32-bit flops, so it can run on the hardware FPU. The format roughly doubles the mantissa precision but doesn't extend the exponent range.

I downloaded it from here: http://andrewthall.net/papers/df64_qf128.pdf
notzed
 
Posts: 331
Joined: Mon Dec 17, 2012 12:28 am
Location: Australia

Re: paper on extended "double-single" precision

Postby upcFrost » Sat May 27, 2017 11:28 pm

Actually, it might be worth trying to implement. I'll probably give it a go, at least the basic ops.
Current LLVM backend for Epiphany: . Commits and code reviews are welcome.
upcFrost
 
Posts: 37
Joined: Wed May 28, 2014 6:37 am
Location: Moscow, Russia

Re: paper on extended "double-single" precision

Postby jar » Sun May 28, 2017 12:42 am

jar
 
Posts: 295
Joined: Mon Dec 17, 2012 3:27 am

