
**Abstract:** The new Java 7 Fork/Join Framework allows us to define our algorithms using recursion and then to easily parallelize them. In this newsletter we describe how that works using a fast Fibonacci algorithm that uses the sum of the squares rather than brute force. We also present a faster algorithm for multiplying two large BigInteger numbers, using the Fork/Join Framework and the Karatsuba algorithm.

Welcome to the 201st issue of **The Java(tm) Specialists' Newsletter** sent to you from the
Island of Crete. Kirk Pepperdine is running his Java Performance Tuning
Course at our conference room this
week. Helene had the brilliant idea that they should shift
their day by a few hours today. So instead of starting sharp
at 9am, we went to a secret beach
for a swim and then began with the lessons at
11:30am. Kirk will catch up with the material tonight. I
think we will do this on all our Cretan courses in future.
A quick jump in our fresh-water pool was a perfect way to get
me inspired to write this newsletter (You might enjoy this
short
video my son and his buddy made of our pool - a friend
gave us some old playground slides that we have hooked up to
our pool pump to make a lovely water slide - the video might
be blocked in your region due to the sound track.)

So, now that you're completely annoyed with my introduction and are sitting there behind your desk fuming with envy, there are two opportunities that you could take advantage of to visit our beautiful island :-) First off, I am running my Java Specialist Master Course here in 11 days time, from the 4-7 June 2012. It is the perfect course for a Java programmer who has used Java for at least 2 years and who would like to grow in their knowledge of Java. Money-back guarantee if you are not fully satisfied! Flights to Greece are cheap this year and Crete is perfectly safe. Secondly, we are running our second Java Specialist Symposium from the 10-13 September 2012. We doubled the attendance fees from last year ($0) and will keep on doing that every year for as long as we do the symposium. Space is limited though, first-come-first-served.

One last interesting bit of news from Crete, before we start with the newsletter. One of the main tourist areas, Platanias, is also a breeding ground for the loggerhead turtle, called Caretta-Caretta. Why they allowed hotels and beach umbrellas in that area boggles the mind. Last month, I took some friends out for a drive in my boat and almost ran over a turtle that was swimming in the middle of the sea. It was the first time I saw one of these critters in the wild and it was a completely amazing experience watching and filming it. My son was kind enough to turn our recordings into a short documentary about the Caretta-Caretta. Some viewers in Canada had errors when they tried to play it. If that happens, please just try again. I think you will enjoy seeing the turtle as much as I did.

**NEW: Refactoring to Java 8 Lambdas and Streams Workshop**
Are you currently using Java 6 or 7 and would like to see how
Java 8 can improve your code base? Are you tired of courses
that teach you a whole bunch of techniques that you cannot
apply in your world? Check out our one day intensive
Refactoring to Java 8 Lambdas and Streams Workshop.

Figuring out new language features can be daunting. The JavaDocs are not always that helpful. Here is an example shown in the RecursiveTask JavaDocs to demonstrate how the fork/join framework should be used:

```java
class Fibonacci extends RecursiveTask<Integer> {
    final int n;

    Fibonacci(int n) {
        this.n = n;
    }

    Integer compute() {
        if (n <= 1) return n;
        Fibonacci f1 = new Fibonacci(n - 1);
        f1.fork();
        Fibonacci f2 = new Fibonacci(n - 2);
        return f2.compute() + f1.join();
    }
}
```

What is wrong with this example? A minor issue is that it contains a basic Java syntax error. The compute() method implements a protected abstract method defined in the RecursiveTask class, but in the example the method has package access. It should be either protected or public: we may widen the access of an overridden method, but we may never narrow it. Stylistically, the field "n" should also be private.
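To illustrate the access rule (a hypothetical sketch, not code from the JavaDocs): widening the access of an overridden method compiles, while narrowing it is rejected by the compiler.

```java
// Sketch with made-up class names: overriding may widen access,
// but never narrow it.
abstract class Computation {
    protected abstract int compute();
}

class WidenedComputation extends Computation {
    // OK: protected -> public is a widening, so this compiles.
    public int compute() {
        return 42;
    }
}

// class NarrowedComputation extends Computation {
//     // Compile error: attempting to assign weaker access privileges.
//     int compute() { return 42; }
// }

public class AccessDemo {
    public static void main(String[] args) {
        Computation c = new WidenedComputation();
        System.out.println(c.compute()); // prints 42
    }
}
```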

However, the syntax error will be semi-automatically repaired by any decent IDE. The real problem is one of computational complexity. The Fibonacci algorithm presented here is exponential: solving f(n+1) requires roughly 1.6 times as many steps as f(n) (the ratio tends to the golden ratio). Thus if we manage to solve f(n) with a single core within some time, then even with a 1024-processor machine we would only be able to solve about f(n+14) in the same amount of time.
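The blow-up is easy to measure by counting the calls the naive recursion makes (a small sketch; the class and counter are mine, not from the newsletter). The call count obeys calls(n) = calls(n-1) + calls(n-2) + 1, so each increment of n multiplies the work by roughly the golden ratio.

```java
// Sketch: count the calls made by the naive Fibonacci recursion.
// The ratio calls(n+1)/calls(n) approaches the golden ratio ~1.618.
public class NaiveFibCallCounter {
    static long calls;

    static long fib(int n) {
        calls++;
        if (n <= 1) return n;
        return fib(n - 1) + fib(n - 2);
    }

    static long countCalls(int n) {
        calls = 0;
        fib(n);
        return calls;
    }

    public static void main(String[] args) {
        long c25 = countCalls(25);
        long c26 = countCalls(26);
        // ratio comes out at about 1.618
        System.out.printf("calls(25)=%d, calls(26)=%d, ratio=%.3f%n",
                c25, c26, (double) c26 / c25);
    }
}
```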

The idea of using Fibonacci as an example is not bad, but the choice of an exponential algorithm was unfortunate. I tried to produce a fast Java implementation of the famous function. Exactly three months ago I posted a challenge on java.net to try to find Fibonacci of one billion. Alexey Solodovnikov managed to solve the problem in 36 seconds using gcc 4.5.2 + gmp with a 3 GHz single core machine. His Java version took about 2.5 hours. Rexy Young solved it using Dijkstra's famous sum of squares algorithm (more later) in 55 minutes. The code I will present in this newsletter solved the problem without any 3rd party libraries in 34 minutes in Java on my 8-core server.

Something else I have done is start uploading the code from my newsletters onto GitHub. You can grab it from git://github.com/kabutz/javaspecialists.git if you like and then build it with Maven. It will still be changing a lot until I have decided how to arrange the various packages. I also still have to pull the source code from over 200 Java newsletters into my project tree. This will take a while, but I think it's good that my friend Alvaro pushed me in this direction. Thanks Alvaro! The license will probably be Apache 2.0, unless there are any compelling reasons why that won't do.

The first class we have is Fibonacci. It has caching behaviour enabled by default, since a lot of Fibonacci algorithms do much better with it turned on. You might notice that the calculate() and doActualCalculate() methods both declare that they throw InterruptedException. We can use interrupts to cancel long calculations.

```java
import java.math.*;

public abstract class Fibonacci {
    private final FibonacciCache cache;

    protected Fibonacci(FibonacciCache cache) {
        this.cache = cache;
    }

    public Fibonacci() {
        this(null);
    }

    public BigInteger calculate(int n) throws InterruptedException {
        if (cache == null) return doActualCalculate(n);
        BigInteger result = cache.get(n);
        if (result == null) {
            cache.put(n, result = doActualCalculate(n));
        }
        return result;
    }

    protected abstract BigInteger doActualCalculate(int n)
            throws InterruptedException;
}
```

Here is an implementation that uses the caching feature of the Fibonacci class. The algorithm is recursive, but thanks to the cache, we can do all the calculations in linear time. However, the BigInteger.add() method is also linear, which means that overall this function ends up being quadratic: if we double the size of n, it will take four times as long to solve. In addition, because it is still recursive in nature, the stack depth is a limiting factor on which numbers we can calculate.

```java
import java.math.*;

public class FibonacciRecursive extends Fibonacci {
    public BigInteger doActualCalculate(int n)
            throws InterruptedException {
        if (Thread.interrupted()) throw new InterruptedException();
        if (n < 0) throw new IllegalArgumentException();
        if (n == 0) return BigInteger.ZERO;
        if (n == 1) return BigInteger.ONE;
        return calculate(n - 1).add(calculate(n - 2));
    }
}
```

At this point, we should probably show the special FibonacciCache class. It contains the optimization that if we request "n" and the cache does not contain that, we see whether "n-1" and "n-2" are contained. If they are, then we can use those to quickly calculate the Fibonacci value for "n". Similarly, we can look for "n+1" and "n+2" or "n+1" and "n-1" to find "n" quickly. It also tries to avoid calculating the same value twice through an elaborate reserved caching scheme (see cacheReservation and solutionArrived fields). Unfortunately this solution does not increase the number of worker threads in the ForkJoinPool if one of the threads is blocked waiting for another thread to finish its work on a number.

One of the things that kept on happening was that several threads would request the same number at the same time. Since neither of them found it, they would both go and work out the value. This wasted a lot of computing power. Thus, before we return "null" from this cache, we reserve the right to calculate the number by adding the value of "n" into the cacheReservation set. If another thread now tries to calculate the same Fibonacci number, it is suspended until the first thread finds the answer. The speedup with this optimization was quite dramatic, but required that the number of threads in the ForkJoinPool exceeded the number of cores.

```java
import java.math.*;
import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.locks.*;

class FibonacciCache {
    private final ConcurrentMap<Integer, BigInteger> cache =
            new ConcurrentHashMap<>();
    private final Lock lock = new ReentrantLock();
    private final Condition solutionArrived = lock.newCondition();
    private final Set<Integer> cacheReservation = new HashSet<>();

    public BigInteger get(int n) throws InterruptedException {
        lock.lock();
        try {
            while (cacheReservation.contains(n)) {
                // we now want to wait until the answer is in the cache
                solutionArrived.await();
            }
            BigInteger result = cache.get(n);
            if (result != null) {
                return result;
            }
            BigInteger nMinusOne = cache.get(n - 1);
            BigInteger nMinusTwo = cache.get(n - 2);
            if (nMinusOne != null && nMinusTwo != null) {
                result = nMinusOne.add(nMinusTwo);
                put(n, result);
                return result;
            }
            BigInteger nPlusOne = cache.get(n + 1);
            BigInteger nPlusTwo = cache.get(n + 2);
            if (nPlusOne != null && nPlusTwo != null) {
                result = nPlusTwo.subtract(nPlusOne);
                put(n, result);
                return result;
            }
            if (nPlusOne != null && nMinusOne != null) {
                result = nPlusOne.subtract(nMinusOne);
                put(n, result);
                return result;
            }
            cacheReservation.add(n);
            return null;
        } finally {
            lock.unlock();
        }
    }

    public void put(int n, BigInteger value) {
        lock.lock();
        try {
            solutionArrived.signalAll();
            cacheReservation.remove(n);
            cache.putIfAbsent(n, value);
        } finally {
            lock.unlock();
        }
    }
}
```

Some of the algorithms do not need caching. For example, if we solve the problem with iteration, then we will never try to find the same Fibonacci number more than once. Our NonCachingFibonacci abstract base class redefines the calculate() method as abstract and stops us from overriding doActualCalculate():

```java
import java.math.*;

public abstract class NonCachingFibonacci extends Fibonacci {
    protected NonCachingFibonacci() {
        super(null);
    }

    public final BigInteger doActualCalculate(int n)
            throws InterruptedException {
        throw new UnsupportedOperationException();
    }

    public abstract BigInteger calculate(int n)
            throws InterruptedException;
}
```

The first example that uses the NonCachingFibonacci is the iterative solution. If we ran this with "long" values, it would be very fast. However, we are slowed down by the linear BigInteger.add() method, which again makes this function quadratic, similar to the FibonacciRecursive class shown earlier. Since it is not recursive, this function does not run out of stack space as FibonacciRecursive would.

```java
import java.math.*;

public class FibonacciIterative extends NonCachingFibonacci {
    public BigInteger calculate(int n) throws InterruptedException {
        if (n < 0) throw new IllegalArgumentException();
        BigInteger n0 = BigInteger.ZERO;
        BigInteger n1 = BigInteger.ONE;
        for (int i = 0; i < n; i++) {
            if (Thread.interrupted())
                throw new InterruptedException();
            BigInteger temp = n1;
            n1 = n1.add(n0);
            n0 = temp;
        }
        return n0;
    }
}
```

The classical Fibonacci algorithm as described in the RecursiveTask JavaDocs is exponential. The only way to make it reasonably performant is to cache the previous values. In this version we do *not* cache, making it incredibly slow. We can thus only calculate very small values of n.

```java
import java.math.*;

public class FibonacciRecursiveNonCaching extends NonCachingFibonacci {
    public BigInteger calculate(int n) throws InterruptedException {
        if (Thread.interrupted()) throw new InterruptedException();
        if (n < 0) throw new IllegalArgumentException();
        if (n == 0) return BigInteger.ZERO;
        if (n == 1) return BigInteger.ONE;
        return calculate(n - 1).add(calculate(n - 2));
    }
}
```

We have shown simple iterative and recursive solutions. However, there is also a formula that we can use to calculate a specific Fibonacci number, without having to go back and calculate the previous numbers. This formula depends on the square root of 5. With "long" and "double", we can work out an accurate value of Fibonacci up to n=71.

Let `phi = (1 + root5) / 2` and `psi = (1 - root5) / 2`. Then `Fibonacci(n) = (phi^n - psi^n) / (phi - psi)`, where "^" means "to the power of". Since `phi - psi = root5`, we can simply say `Fibonacci(n) = (phi^n - psi^n) / root5`.

```java
import java.math.*;

public class FibonacciFormulaLong extends NonCachingFibonacci {
    private static final double root5 = Math.sqrt(5);
    private static final double PHI = (1 + root5) / 2;
    private static final double PSI = (1 - root5) / 2;
    private static final int MAXIMUM_PRECISE_NUMBER = 71;

    public BigInteger calculate(int n) throws InterruptedException {
        if (Thread.interrupted()) throw new InterruptedException();
        if (n < 0) throw new IllegalArgumentException();
        if (n > MAXIMUM_PRECISE_NUMBER)
            throw new IllegalArgumentException(
                    "Precision loss after " + MAXIMUM_PRECISE_NUMBER);
        return new BigInteger(Long.toString(fibWithFormula(n)));
    }

    private static long fibWithFormula(int n) {
        return (long) ((Math.pow(PHI, n) - Math.pow(PSI, n)) / root5);
    }
}
```

If we want to be able to work out Fibonacci(1000) with this formula, we are forced to use BigDecimal. Unfortunately it is a rather slow class. I used a value of root5 that gives an accurate Fibonacci number up to n=1000. The more accurate we make root5, the slower the algorithm becomes. The biggest time sink is the call to divide(root5) at the end of the calculation.

```java
import java.math.*;

public class FibonacciFormulaBigInteger extends NonCachingFibonacci {
    private static final BigDecimal root5 = new BigDecimal(
            "2.23606797749978969640917366873127623544061835961152572" +
            "4270897245410520925637804899414414408378782274969508176" +
            "1507737835042532677244470738635863601215334527088667781" +
            "7319187916581127664532263985658053576135041753378");
    private static final BigDecimal PHI =
            root5.add(new BigDecimal(1)).divide(new BigDecimal(2));
    private static final BigDecimal PSI =
            root5.subtract(new BigDecimal(1)).divide(new BigDecimal(2));
    private static final int MAXIMUM_PRECISE_NUMBER = 1000;

    public BigInteger calculate(int n) throws InterruptedException {
        if (Thread.interrupted()) throw new InterruptedException();
        if (n < 0) throw new IllegalArgumentException();
        if (n > MAXIMUM_PRECISE_NUMBER)
            throw new IllegalArgumentException(
                    "Precision loss after " + MAXIMUM_PRECISE_NUMBER);
        BigDecimal phiToTheN = PHI.pow(n);
        if (Thread.interrupted()) throw new InterruptedException();
        BigDecimal psiToTheN = PSI.pow(n);
        if (Thread.interrupted()) throw new InterruptedException();
        BigDecimal phiMinusPsi = phiToTheN.subtract(psiToTheN);
        BigDecimal result = phiMinusPsi.divide(
                root5, 0, RoundingMode.UP);
        return result.toBigInteger();
    }
}
```

So far, the best we have achieved is a linear algorithm, but with the addition of linear BigInteger.add(), we ended up with quadratic performance. Can we do better?

One popular algorithm is Dijkstra's sum of the squares, which is also mentioned on R. Knott's website.

My implementation goes like this: if n is odd, then `f(2n-1) = f(n-1)^2 + f(n)^2`; otherwise `f(2n) = (2 * f(n-1) + f(n)) * f(n)`.

```java
import java.math.*;

public class FibonacciRecursiveDijkstra extends Fibonacci {
    public BigInteger doActualCalculate(int n)
            throws InterruptedException {
        if (Thread.interrupted()) throw new InterruptedException();
        if (n == 0) return BigInteger.ZERO;
        if (n == 1) return BigInteger.ONE;
        if (n % 2 == 1) {
            // f(2n-1) = f(n-1)^2 + f(n)^2
            int left = (n + 1) / 2;
            int right = (n + 1) / 2 - 1;
            return square(calculate(left)).add(square(calculate(right)));
        } else {
            // f(2n) = (2 * f(n-1) + f(n)) * f(n)
            int n_ = n / 2;
            BigInteger fn = calculate(n_);
            BigInteger fn_1 = calculate(n_ - 1);
            return (fn_1.add(fn_1).add(fn)).multiply(fn);
        }
    }

    protected BigInteger multiply(BigInteger bi0, BigInteger bi1) {
        return bi0.multiply(bi1);
    }

    protected BigInteger square(BigInteger num) {
        return multiply(num, num);
    }
}
```

The algorithm now is logarithmic, meaning that we can solve Fibonacci(1000) with about 10 calculations. I expected this to be really fast and to outperform all the other algorithms. After all, the iterative and simple recursive solutions both are quadratic. However, it turns out that the bottleneck in the above code is the BigInteger.multiply() method, which is quadratic. So we now have O(n^2 * log n) performance. Good enough to find Fibonacci for a thousand but not for a billion.
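To see the logarithmic behaviour for yourself, here is a small sketch (not part of the newsletter's code) that collects the distinct indices Dijkstra's recursion touches. Even for n = 1,000,000, only a few dozen Fibonacci numbers are ever needed.

```java
import java.util.*;

// Sketch: count the distinct indices Dijkstra's recursion visits.
// Each halving step adds at most two new indices, so the set stays
// proportional to log n rather than n.
public class DijkstraIndexCount {
    static void visit(int n, Set<Integer> seen) {
        if (n <= 1 || !seen.add(n)) return;
        if (n % 2 == 1) {
            visit((n + 1) / 2, seen);
            visit((n + 1) / 2 - 1, seen);
        } else {
            visit(n / 2, seen);
            visit(n / 2 - 1, seen);
        }
    }

    public static void main(String[] args) {
        Set<Integer> seen = new HashSet<>();
        visit(1_000_000, seen);
        System.out.println("distinct indices: " + seen.size());
    }
}
```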

After some research, I stumbled across Karatsuba's algorithm. Instead of quadratic performance, it achieves O(n^1.585) performance. There are other algorithms one can use as the numbers get larger, but I tried to build this solver without using third-party libraries. An example of such a library is JScience's LargeInteger, which internally uses Karatsuba and also offers a parallel version. However, I really wanted a solution that could cooperate with an existing ForkJoinPool rather than start new threads. Another option would be to simply use a Java wrapper around GMP, which should give us the same speed as a C wrapper around GMP, since all we would be doing is delegating the Fibonacci function to GMP.

For large numbers, Karatsuba's multiplication algorithm is a lot faster than the ordinary BigInteger multiply. We provide two implementations: BasicKaratsuba for single-threaded calculations and ParallelKaratsuba, which utilizes the Fork/Join Framework. Karatsuba is also mentioned in Sedgewick and Wayne's book Introduction to Programming in Java: An Interdisciplinary Approach.

```java
import java.math.*;

public interface Karatsuba {
    BigInteger multiply(BigInteger x, BigInteger y);
}
```

Our Karatsuba calculations use some utility functions for adding and splitting BigIntegers.

```java
import java.math.*;

class BigIntegerUtils {
    public static BigInteger add(BigInteger... ints) {
        BigInteger sum = ints[0];
        for (int i = 1; i < ints.length; i++) {
            sum = sum.add(ints[i]);
        }
        return sum;
    }

    public static BigInteger[] split(BigInteger x, int m) {
        BigInteger left = x.shiftRight(m);
        BigInteger right = x.subtract(left.shiftLeft(m));
        return new BigInteger[]{left, right};
    }
}
```

The BasicKaratsuba contains a threshold value. If half of the smaller number's bit length does not exceed the threshold, we fall back to the standard BigInteger.multiply().

```java
import java.math.*;
import static eu.javaspecialists.tjsn.math.numbers.BigIntegerUtils.*;

public class BasicKaratsuba implements Karatsuba {
    public static final String THRESHOLD_PROPERTY_NAME =
            "eu.javaspecialists.tjsn.math.numbers.BasicKaratsubaThreshold";
    private static final int THRESHOLD = Integer.getInteger(
            THRESHOLD_PROPERTY_NAME, 1000);

    public BigInteger multiply(BigInteger x, BigInteger y) {
        int m = java.lang.Math.min(x.bitLength(), y.bitLength()) / 2;
        if (m <= THRESHOLD) return x.multiply(y);
        // x = x1 * 2^m + x0
        // y = y1 * 2^m + y0
        BigInteger[] xs = BigIntegerUtils.split(x, m);
        BigInteger[] ys = BigIntegerUtils.split(y, m);
        // xy = (x1*2^m + x0)(y1*2^m + y0) = z2*2^2m + z1*2^m + z0
        // where:
        // z2 = x1 * y1
        // z0 = x0 * y0
        // z1 = x1 * y0 + x0 * y1 = (x1 + x0)(y1 + y0) - z2 - z0
        BigInteger z2 = multiply(xs[0], ys[0]);
        BigInteger z0 = multiply(xs[1], ys[1]);
        BigInteger z1 = multiply(add(xs), add(ys))
                .subtract(z2).subtract(z0);
        // result = z2 * 2^2m + z1 * 2^m + z0
        return z2.shiftLeft(2 * m).add(z1.shiftLeft(m)).add(z0);
    }
}
```
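A single Karatsuba split can be sanity-checked with plain long arithmetic (a hypothetical sketch, independent of the classes above), confirming that three sub-multiplications reproduce the full product:

```java
// Sketch: verify Karatsuba's identity for one split, using plain longs.
// x = x1*2^m + x0, y = y1*2^m + y0
// x*y = z2*2^(2m) + z1*2^m + z0, where
//   z2 = x1*y1, z0 = x0*y0, z1 = (x1+x0)*(y1+y0) - z2 - z0
// Only three multiplications are needed instead of four.
public class KaratsubaStep {
    static long karatsubaOnce(long x, long y, int m) {
        long x1 = x >> m, x0 = x - (x1 << m);
        long y1 = y >> m, y0 = y - (y1 << m);
        long z2 = x1 * y1;
        long z0 = x0 * y0;
        long z1 = (x1 + x0) * (y1 + y0) - z2 - z0;
        return (z2 << (2 * m)) + (z1 << m) + z0;
    }

    public static void main(String[] args) {
        long x = 123_456L, y = 654_321L;
        System.out.println(karatsubaOnce(x, y, 8) == x * y); // prints true
    }
}
```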

The ParallelKaratsuba is almost identical to the BasicKaratsuba, except that it distributes the individual tasks to the ForkJoinPool. We can thus utilize the available hardware to multiply the numbers faster. Note how similar it looks to the sequential version in BasicKaratsuba. As long as your algorithm is recursive, it is very easy to employ the Fork/Join framework to speed up your work.

```java
import java.math.*;
import java.util.concurrent.*;
import static eu.javaspecialists.tjsn.math.numbers.BigIntegerUtils.*;

public class ParallelKaratsuba implements Karatsuba {
    public static final String THRESHOLD_PROPERTY_NAME =
            "eu.javaspecialists.tjsn.math.numbers.ParallelKaratsubaThreshold";
    private static final int THRESHOLD = Integer.getInteger(
            THRESHOLD_PROPERTY_NAME, 1000);
    private final ForkJoinPool pool;

    public ParallelKaratsuba(ForkJoinPool pool) {
        this.pool = pool;
    }

    public BigInteger multiply(BigInteger x, BigInteger y) {
        return pool.invoke(new KaratsubaTask(x, y));
    }

    private static class KaratsubaTask extends RecursiveTask<BigInteger> {
        private final BigInteger x, y;

        public KaratsubaTask(BigInteger x, BigInteger y) {
            this.x = x;
            this.y = y;
        }

        protected BigInteger compute() {
            int m = (Math.min(x.bitLength(), y.bitLength()) / 2);
            if (m <= THRESHOLD) {
                return x.multiply(y);
            }
            BigInteger[] xs = split(x, m);
            BigInteger[] ys = split(y, m);
            KaratsubaTask z2task = new KaratsubaTask(xs[0], ys[0]);
            KaratsubaTask z0task = new KaratsubaTask(xs[1], ys[1]);
            KaratsubaTask z1task = new KaratsubaTask(add(xs), add(ys));
            z0task.fork();
            z2task.fork();
            BigInteger z0, z2;
            BigInteger z1 = z1task.invoke().subtract(
                    z2 = z2task.join()).subtract(z0 = z0task.join());
            return z2.shiftLeft(2 * m).add(z1.shiftLeft(m)).add(z0);
        }
    }
}
```

Karatsuba makes a big difference in the performance, especially if we are able to utilize the available cores. We can do the same thing with Dijkstra's Fibonacci algorithm, where we distribute the various Fibonacci calculations to the various cores. However, the biggest speedup is probably from the Karatsuba multiply, since that is where most of the time is spent. Our parallel solution uses its own internal FibonacciCache.

```java
import java.math.*;
import java.util.concurrent.*;

public class FibonacciRecursiveParallelDijkstraKaratsuba
        extends NonCachingFibonacci {
    public static final String SEQUENTIAL_THRESHOLD_PROPERTY_NAME =
            "eu.javaspecialists.tjsn.math.fibonacci.SequentialThreshold";
    private final static int SEQUENTIAL_THRESHOLD = Integer.getInteger(
            SEQUENTIAL_THRESHOLD_PROPERTY_NAME, 10000);
    private final FibonacciCache cache = new FibonacciCache();
    private final Fibonacci sequential =
            new FibonacciRecursiveDijkstraKaratsuba();
    private final ForkJoinPool pool;
    private final Karatsuba karatsuba;

    public FibonacciRecursiveParallelDijkstraKaratsuba(
            ForkJoinPool pool) {
        this.pool = pool;
        karatsuba = new ParallelKaratsuba(pool);
    }

    public BigInteger calculate(int n) throws InterruptedException {
        if (Thread.interrupted()) throw new InterruptedException();
        BigInteger result = pool.invoke(new FibonacciTask(n));
        if (result == null) throw new InterruptedException();
        return result;
    }

    private class FibonacciTask extends RecursiveTask<BigInteger> {
        private final int n;

        private FibonacciTask(int n) {
            this.n = n;
        }

        protected BigInteger compute() {
            try {
                BigInteger result = cache.get(n);
                if (result != null) {
                    return result;
                }
                if (n < SEQUENTIAL_THRESHOLD) {
                    result = sequential.calculate(n);
                } else {
                    if (n % 2 == 1) {
                        // f(2n-1) = f(n-1)^2 + f(n)^2
                        int left = (n + 1) / 2;
                        int right = (n + 1) / 2 - 1;
                        FibonacciTask f0 = new FibonacciTask(left);
                        FibonacciTask f1 = new FibonacciTask(right);
                        f1.fork();
                        BigInteger bi0 = f0.invoke();
                        BigInteger bi1 = f1.join();
                        if (isCancelled()) return null;
                        result = square(bi1).add(square(bi0));
                    } else {
                        // f(2n) = (2 * f(n-1) + f(n)) * f(n)
                        int n_ = n / 2;
                        FibonacciTask f0 = new FibonacciTask(n_);
                        FibonacciTask f1 = new FibonacciTask(n_ - 1);
                        f1.fork();
                        BigInteger bi0 = f0.invoke();
                        BigInteger bi1 = f1.join();
                        if (isCancelled()) return null;
                        result = karatsuba.multiply(
                                bi1.add(bi1).add(bi0), bi0);
                    }
                }
                cache.put(n, result);
                return result;
            } catch (InterruptedException e) {
                cancel(true);
                return null;
            }
        }

        private BigInteger square(BigInteger num) {
            return karatsuba.multiply(num, num);
        }
    }
}
```

The FibonacciGenerator uses the given Fibonacci function to work out the result as a BigInteger. However, the conversion of a BigInteger to a String is again, you guessed it, quadratic. Instead, we print out the first and last 10 bytes and an Adler32 checksum.

```java
import java.math.*;
import java.util.zip.*;

public class FibonacciGenerator {
    private final Fibonacci fib;

    public FibonacciGenerator(Fibonacci fib) {
        this.fib = fib;
    }

    public void findFib(int n) throws InterruptedException {
        System.out.printf("Searching for Fibonacci(%,d)%n", n);
        long time = System.currentTimeMillis();
        BigInteger num = fib.calculate(n);
        time = System.currentTimeMillis() - time;
        printProof(num);
        System.out.printf("  Time to calculate %d ms%n%n", time);
    }

    private void printProof(BigInteger num) {
        System.out.printf("  Number of bits: %d%n", num.bitLength());
        byte[] numHex = num.toByteArray();
        System.out.print("  First 10 bytes: ");
        for (int i = 0; i < 10; i++) {
            System.out.printf(" %02x", numHex[i]);
        }
        System.out.println();
        System.out.print("  Last 10 bytes:  ");
        for (int i = numHex.length - 10; i < numHex.length; i++) {
            System.out.printf(" %02x", numHex[i]);
        }
        System.out.println();
        Checksum ck = new Adler32();
        ck.update(numHex, 0, numHex.length);
        System.out.printf("  Adler32 Checksum: 0x%016x%n", ck.getValue());
    }
}
```

The FibonacciGeneratorExample finds Fibonacci of one billion in about 34 minutes on my 8-core server.

```java
import java.util.concurrent.*;

public class FibonacciGeneratorExample {
    private static ForkJoinPool pool = new ForkJoinPool(
            Runtime.getRuntime().availableProcessors() * 4);

    public static void main(String[] args) throws InterruptedException {
        int[] ns;
        if (args.length != 0) {
            ns = new int[args.length];
            for (int i = 0; i < ns.length; i++) {
                ns[i] = Integer.parseInt(args[i]);
            }
        } else {
            ns = new int[]{
                    1_000_000,
                    10_000_000,
                    100_000_000, // takes a bit long
                    1_000_000_000, // takes a bit long
            };
        }
        test(new FibonacciRecursiveParallelDijkstraKaratsuba(pool), ns);
    }

    private static void test(Fibonacci fib, int... ns)
            throws InterruptedException {
        for (int n : ns) {
            FibonacciGenerator fibgen = new FibonacciGenerator(fib);
            fibgen.findFib(n);
            System.out.println(pool);
        }
    }
}
```

I would love to hear if you can come up with an algorithm in Java that solves Fibonacci of one billion faster on my hardware than my 34.58 minutes. The fastest solution gets a whopping 34.58% discount on the Java Symposium entrance fees! The rules: you need to write the code yourself, but you may use existing algorithms.

Kind regards

Heinz

P.S. Did you know that the Fibonacci series occurs in nature? For example, the shells of a turtle are arranged according to the series. That's something to think about as you watch this big fellow swimming around the Gulf of Chania.
