Showing posts from November, 2012

When using direct memory can be faster

Overview Using direct memory is no guarantee of improving performance.  Given that it adds complexity, it should be avoided unless you have a compelling reason to use it. This excellent article by Sergio Oliveira Jr shows it's not simply a matter of using direct memory to improve performance: Which one is faster: Java heap or native memory? Where direct memory and memory mapped files can help is when you have large amounts of data and/or you have to perform some IO with that data. Time series data tends to have both a large number of entries and to involve IO to load and store the data.  This makes it a good candidate for memory mapped files and direct memory. I have provided an example here (main and tests) where the same operations are performed on regular objects and on memory mapped files.  Note: I am not suggesting that access to the objects is slow; it is the overhead of using objects which is the issue, e.g. loading, creating, and the size of the object…
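The idea above can be sketched as follows. This is an illustrative example, not the post's actual benchmark: a series of timestamped entries stored in a direct ByteBuffer at fixed offsets, so no per-entry object is ever created. The entry layout (8-byte timestamp plus 8-byte value) and the sample values are assumptions for the sketch.

```java
import java.nio.ByteBuffer;

// Sketch: time series entries packed into direct memory instead of one
// object per entry. Layout: 8-byte timestamp followed by an 8-byte value.
public class DirectSeries {
    static final int ENTRY_SIZE = 16;

    public static void main(String[] args) {
        int entries = 1_000;
        ByteBuffer buf = ByteBuffer.allocateDirect(entries * ENTRY_SIZE);
        // Write every entry by absolute offset; no objects are allocated here.
        for (int i = 0; i < entries; i++) {
            buf.putLong(i * ENTRY_SIZE, 1_000_000L + i); // timestamp
            buf.putLong(i * ENTRY_SIZE + 8, i * 2L);     // value
        }
        // Random access to the 10th entry is simple pointer arithmetic.
        long ts = buf.getLong(9 * ENTRY_SIZE);
        long value = buf.getLong(9 * ENTRY_SIZE + 8);
        System.out.println(ts + " " + value);
    }
}
```

The same layout works with `FileChannel.map()`, which is what makes it attractive when the data also has to be loaded from or persisted to disk.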

Wasting time by saving memory

Overview Memory and disk space are getting cheaper all the time, but the cost of an hour's development is increasing.  Often I see people trying to save memory or disk space which literally isn't worth worrying about. I have a tendency to do this myself, because I can, not because it is a good use of my time. ;) Costs for comparison Cheap memory - You can buy 16 GB of memory for £28. Expensive memory - You can buy 1 GB in a phone for £320 (the entire cost of the phone). Cheap disk space - You can buy 3 TB of disk for £120. Expensive disk space - You can buy 1 TB of the fastest RAID-1 PCI SSD for £2000. The living wage in London is £8.55 per hour. You might say that at your company the hardware is 10x more expensive, but it is also likely your time is costing the company about the same multiple more.  In any case, this article attempts to demonstrate that there is a tipping point where it no longer makes sense to spend time saving memory, or even thinking about it. time spent cheap memory…
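The tipping point can be estimated from the figures above. A minimal sketch, using the article's own example prices (£28 for 16 GB, living wage £8.55/hour), not current market prices:

```java
// How many seconds of developer time is saving one megabyte actually worth,
// at cheap-memory prices and the London living wage quoted above?
public class TippingPoint {
    public static void main(String[] args) {
        double poundsPerMB = 28.0 / (16 * 1024); // ~£0.0017 per MB
        double hoursPerPound = 1 / 8.55;         // developer hours £1 buys
        double secondsPerMB = poundsPerMB * hoursPerPound * 3600;
        System.out.printf("Saving 1 MB is worth ~%.2f seconds of developer time%n",
                secondsPerMB);
    }
}
```

By this estimate, a memory saving of one megabyte justifies well under a second of thought at cheap-memory prices, which is the tipping point the article is getting at.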

Why is it not allowed in Java to overload Foo(Object…) with Foo(Object[])?

This is from Why is it not allowed in Java to overload Foo(Object…) with Foo(Object[])? Question I was wondering why it is not allowed in Java to overload Foo(Object[] args) with Foo(Object... args), though they are used in a different way? Foo(Object[] args) {} is used like:   Foo(new Object[]{ new Object(), new Object() }); while the other form:   Foo(Object... args) {} is used like:   Foo(new Object(), new Object()); Is there any reason behind this? Answer The JLS section Choosing the Most Specific Method talks about this, but it's quite complex, e.g. choosing between Foo(Number... ints) and Foo(Integer... ints). In the interests of backward compatibility, these are effectively the same thing: public Foo(Object... args) {} // syntactic sugar for Foo(Object[] args) {} // calls the varargs method. Foo(new Object[]{ new Object(), new Object() }); e.g. you can define main() as   public static void main(String…
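A small sketch of the point above: because a varargs method compiles to the same signature as the array version, both call styles reach the same method, so declaring both would be a duplicate. The method name `count` is made up for the example.

```java
public class VarargsDemo {
    // Declaring both count(Object[]) and count(Object...) would not compile:
    // they erase to the same signature. One varargs method serves both styles.
    static int count(Object... args) {
        return args.length;
    }

    public static void main(String[] args) {
        // Varargs style: the compiler wraps the arguments in an array.
        System.out.println(count(new Object(), new Object()));
        // Array style: the array is passed through as-is.
        System.out.println(count(new Object[]{ new Object(), new Object(), new Object() }));
    }
}
```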

Practical uses for WeakReferences

This is based on answers to Is there a practical use for weak references? Question Since weak references can be claimed by the garbage collector at any time, is there any practical reason for using them? Answers If you want to keep a reference to something only as long as it is used elsewhere, e.g. a Listener, you can use a weak reference. WeakHashMap can be used as a short-lived cache of keys to derived data. It can also be used to keep information about objects used elsewhere when you don't know when those objects are discarded. BTW Soft References are like Weak References, but they will not always be cleaned up immediately. The GC will always discard Weak References when it can, and retain Soft References when it can. There is another kind of reference called a Phantom Reference. This is used in the GC clean-up process and refers to an object which isn't accessible to "normal" code because it's in the process of being cleaned up.
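The WeakHashMap-as-cache pattern mentioned above can be sketched as follows. The key and value here are made up for illustration; note that nothing after the `System.gc()` hint can be relied on, since clearing is at the collector's discretion.

```java
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.WeakHashMap;

public class WeakRefDemo {
    public static void main(String[] args) {
        // new String(...) so the key is a collectable heap object,
        // not just the interned literal.
        String key = new String("session-42");
        Map<Object, String> cache = new WeakHashMap<>();
        cache.put(key, "derived data");

        // While a strong reference to the key exists, the entry stays.
        System.out.println(cache.get(key));

        WeakReference<Object> ref = new WeakReference<>(key);
        System.out.println(ref.get() != null); // true: still strongly reachable

        key = null;   // drop the strong reference
        System.gc();  // only a hint; after this the entry and ref.get()
                      // *may* be cleared, but that is not guaranteed
    }
}
```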

Why Double.NaN==Double.NaN is false

This is taken from the top two answers to Why does Double.NaN==Double.NaN return false? Question I was just studying OCPJP questions and I found this strange code: public static void main(String a[]) { System.out.println(Double.NaN == Double.NaN); System.out.println(Double.NaN != Double.NaN); } When I ran the code, I got: false true How is the output false when we're comparing two things that look the same as each other? What does NaN mean? Answer NaN is by definition not equal to any number, including NaN. This is part of the IEEE 754 standard and implemented by the CPU/FPU; it is not something the JVM has to add any logic to support. A comparison with a NaN always returns an unordered result, even when comparing with itself. ... The equality and inequality predicates are non-signaling, so x = x returning false can be used to test if x is a quiet NaN. Java treats all NaN as quiet NaN. Java…
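The behaviour above, plus the reliable way to test for NaN, can be shown in a few lines:

```java
public class NaNDemo {
    public static void main(String[] args) {
        double nan = Double.NaN;
        System.out.println(nan == nan);        // false: NaN is never equal to anything
        System.out.println(nan != nan);        // true
        System.out.println(Double.isNaN(nan)); // true: the reliable test
        // Boxed Doubles deliberately differ: equals() treats NaN as equal
        // to itself so NaN can be used as a key in collections.
        System.out.println(Double.valueOf(nan).equals(Double.valueOf(nan))); // true
    }
}
```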

Java Intrinsics and Performance

The original question was How to count the number of 1's a number will have in binary?  I included a performance comparison of using Integer.bitCount(), which can be turned into an intrinsic, i.e. a single machine code instruction POPCNT, and the Java code which does the same thing. Question How do I count the number of 1's a number will have in binary? So let's say I have the number 45, which is equal to 101101 in binary and has four 1's in it. What's the most efficient way to write an algorithm to do this? Answer Instead of writing an algorithm to do this, it's best to use the built-in function Integer.bitCount(). What makes this especially efficient is that the JVM can treat it as an intrinsic, i.e. recognise and replace the whole thing with a single machine code instruction on a platform which supports it, e.g. Intel/AMD. To demonstrate how effective this optimisation is: public static void main(String... args) { perfTestIntrinsic();…
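The comparison above can be sketched with the question's own example, 45 = 101101. The intrinsic call and a plain-Java loop of the kind it replaces:

```java
public class BitCountDemo {
    public static void main(String[] args) {
        // The built-in: the JIT can replace this with a single POPCNT instruction.
        System.out.println(Integer.bitCount(45)); // 4

        // A plain-Java equivalent for comparison: test and shift each bit.
        int n = 45, count = 0;
        while (n != 0) {
            count += n & 1; // add the lowest bit
            n >>>= 1;       // unsigned shift so it also works for negatives
        }
        System.out.println(count); // 4
    }
}
```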

Java += and implicit casting

This is from two popular answers to the question Java += operator Question Until today I thought that, for example: i += j; is just a shortcut for: i = i + j; But what if we try this: int i = 5; long j = 8; Then i = i + j; will not compile, but i += j; will compile fine. Does it mean that in fact i += j; is a shortcut for something like this: i = (type of i) (i + j)? I've tried googling for it but couldn't find anything relevant. Answers As always with these questions, the JLS holds the answer. In this case §15.26.2 Compound Assignment Operators. An extract: A compound assignment expression of the form E1 op= E2 is equivalent to E1 = (T)((E1) op (E2)), where T is the type of E1, except that E1 is evaluated only once. And an example: For example, the following code is correct: short x = 3; x += 4.6; and results in x having the value 7 because it is equivalent to: short x = 3; x = (short)(x + 4.6); In other words…
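Both cases from the question and the JLS extract can be run side by side:

```java
public class CompoundAssign {
    public static void main(String[] args) {
        int i = 5;
        long j = 8;
        // i = i + j;  // does NOT compile: possible lossy conversion long -> int
        i += j;        // compiles: equivalent to i = (int) (i + j)
        System.out.println(i); // 13

        short x = 3;
        x += 4.6;      // equivalent to x = (short) (x + 4.6), i.e. (short) 7.6
        System.out.println(x); // 7
    }
}
```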

Moving the decimal place in a double

This is taken from a popular answer to the question Moving decimal places over in a double Question So I have a double set to equal 1234; I want to move a decimal place over to make it 12.34. So to do this I multiply 1234 by .1 two times, kinda like this: double x = 1234; for (int i = 1; i <= 2; i++) { x = x * .1; } System.out.println(x); This will print the result, "12.340000000000002" Is there a way, without simply formatting it to two decimal places, to have the double store 12.34 correctly? Answer If you use double or float, you should use rounding or expect to see some rounding errors. If you can't do this, use BigDecimal. The problem you have is that 0.1 is not an exact representation, and by performing the calculation twice, you are compounding that error. However, 100 can be represented accurately, so try: double x = 1234; x /= 100; System.out.println(x); which prints:   12.34 This works because Double…
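The two approaches from the question and answer, run back to back:

```java
public class DecimalShift {
    public static void main(String[] args) {
        // Multiplying by 0.1 twice: 0.1 has no exact binary representation,
        // so each multiplication compounds the error.
        double x = 1234;
        for (int i = 0; i < 2; i++)
            x *= 0.1;
        System.out.println(x); // 12.340000000000002

        // Dividing by 100: 100 is exact in binary, so there is only a
        // single rounding to the nearest representable double.
        double y = 1234;
        y /= 100;
        System.out.println(y); // 12.34
    }
}
```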

Can try/finally prevent a StackOverflowError?

This post is taken from a popular answer to the question try-finally block prevents StackOverflowError Question Take a look at the following two methods: public static void foo() { try { foo(); } finally { foo(); } } public static void bar() { bar(); } Running bar() clearly results in a StackOverflowError, but running foo() does not (the program just seems to run indefinitely). Why is that? Answer It doesn't run forever. Each stack overflow causes the code to move to the finally block. The problem is that it will take a really, really long time. The order of time is O(2^N) where N is the maximum stack depth. Imagine the maximum depth is 5: foo() calls foo() calls foo() calls foo() calls foo() which fails to call foo() finally calls foo() which fails to call foo() finally foo() calls foo() which…
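The O(2^N) claim can be checked with a sketch that simulates the overflow: an explicit depth limit stands in for the StackOverflowError, and a counter records every call. For depth 5 this gives 2^6 - 1 = 63 calls, and a real stack depth of ~10,000 would give on the order of 2^10000 calls.

```java
public class FinallyRecursion {
    static long calls = 0;

    // Simulated foo(): depth == 0 plays the role of the StackOverflowError,
    // so the try body "fails" and control moves to the finally block,
    // which recurses again.
    static void foo(int depth) {
        calls++;
        if (depth == 0) return;
        try {
            foo(depth - 1);
        } finally {
            foo(depth - 1);
        }
    }

    public static void main(String[] args) {
        foo(5);
        System.out.println(calls); // 63 calls for maximum depth 5
    }
}
```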