What are the bad features of Java?

Overview

When you first learn to develop, you come across overly broad statements that various features are bad: for design, for performance, for clarity, for maintainability, because they feel like a hack, or just because the author doesn't like them.

This might be backed by real-world experience where removing the use of the feature improved the code.  Sometimes this is because the developers didn't know how to use the feature correctly, and sometimes the feature is inherently error prone (depending on whether you like it or not).

It is disconcerting when either fashion or your team changes, and the feature becomes fine, or even a preferred methodology.

In this post, I look at some of the features people love to hate, and why I think that, used correctly, they should be a force for good.  Features are not as yes/no, good/bad as many like to believe.

Checked Exceptions

I am often surprised at the degree to which developers don't like to think about error handling.  New developers don't even like to read error messages. It's hard work, and they complain that the application crashed: "it's not working". They have no idea why the exception was thrown, when often the error message and stack trace tell them exactly what went wrong, if they could only see the clues. When I write out stack traces for tracing purposes, many just see something shaped like a crash in the log when there was no error.   Reading error messages is a skill, and at first it can be overwhelming.

Similarly, handling exceptions in a useful manner is too often avoided: I have no idea what to do with this exception, so I would rather either log it and pretend it didn't happen, or just blow up and leave it to the operations people or the GUI user, who have the least ability to deal with the error.

Many experienced developers hate checked exceptions as a result.  However, the more I hear this, the more I am glad Java has checked exceptions, as I am convinced such developers really would find it too easy to ignore the exceptions and just let the application die if they were not annoyed by them.

Checked exceptions can be overused, of course.  The question to ask when throwing a checked exception should be: do I want to annoy the developer calling this code by forcing them to think a little about error handling? If the answer is yes, throw a checked exception.

IMHO, it is a failing of the lambda design that it doesn't handle checked exceptions transparently, i.e. as a natural block of code would, by throwing out any unhandled exception just as it does for unchecked exceptions and errors. However, given the history of lambdas and functional programming, where side effects are frowned upon, let alone short-cut error handling, it is not surprising.
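
As a concrete illustration of this friction, here is a hedged sketch of calling a method which declares a checked exception from inside a stream lambda; the load method and the class are made up for the example:

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.Collectors;

    public class LambdaCheckedDemo {
        // a hypothetical method which declares a checked exception
        static String load(String name) throws IOException {
            if (name.isEmpty())
                throw new IOException("no name given");
            return "contents of " + name;
        }

        public static void main(String[] args) {
            List<String> names = Arrays.asList("a.txt", "b.txt");
            List<String> contents = names.stream()
                    // .map(LambdaCheckedDemo::load)  // does not compile: unhandled IOException
                    .map(name -> {
                        try {
                            return load(name);
                        } catch (IOException e) {
                            throw new UncheckedIOException(e); // forced to wrap, hide or swallow it
                        }
                    })
                    .collect(Collectors.toList());
            System.out.println(contents);
        }
    }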

You can get around the limitation of lambdas by re-throwing a checked exception as if it were an unchecked one.  This works because the JVM has no notion of checked exceptions; they are a compile-time check, like generics.  My preferred method is to use Unsafe.rethrowException, but there are three other ways of doing this. Thread.currentThread().stop(e) no longer works in Java 8, despite the fact it was always safe to do.
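
One well-known way to do this (the post doesn't spell out which alternatives it has in mind, so this is only one example) is the generics-based re-throw trick, sketched here with class and method names of my own choosing:

    public final class Rethrow {
        @SuppressWarnings("unchecked")
        private static <T extends Throwable> void doThrow(Throwable t) throws T {
            throw (T) t;   // the cast is erased at runtime, so the original exception escapes unchanged
        }

        // callers can invoke this without declaring or catching the checked exception
        public static void rethrow(Throwable t) {
            Rethrow.<RuntimeException>doThrow(t);
        }
    }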

Was Thread.currentThread().stop(e) unsafe?

The method Thread.stop(Throwable) was unsafe because it could cause another thread to have an exception triggered in a random section of code.  This could be a checked exception in a portion of code which didn't expect it, or an exception which is caught in some portions of the thread but not others, leaving you with no idea what it would do.

However, the main reason it was unsafe is that it could leave atomic operations in a synchronized or locked section of code in an inconsistent state, corrupting memory in subtle and untestable ways.  To add to the confusion, the stack trace of the Throwable didn't match the stack trace of the thread where the exception was actually thrown.

But what about Thread.currentThread().stop(e)?  This triggers the current thread to throw an exception on the current line.  This is no worse than using throw; you are performing an operation the compiler can't fully check.  The problem is that the compiler doesn't always know what you are doing, or whether it is really safe.  For generics, this is classed as an "unchecked cast", a warning which you can disable with an annotation.  Java doesn't support the same sort of operation as well for checked exceptions, so you end up using hacks, or worse, hiding the true checked exception inside a runtime exception, meaning there is little hope the caller will handle it correctly.

Is using static bad?

This is a new "rule" for me.  I understand where it is coming from, but there are more exceptions to this rule than cases where it should apply.  Let us first consider all the contexts where the overloaded meaning of static can be used.
  1. static mutable fields
  2. static immutable fields (final primitives, or final fields pointing to objects which are not changed)
  3. static methods
  4. static classes (which have no implicit reference to an outer instance)
  5. static initialiser blocks
I would agree that using static mutable fields is likely to be either a newbie bug or something to be avoided if at all possible. If you see static fields being altered in a constructor, it is almost certainly a bug, as sketched below (and even if it isn't, I would avoid it).  I believe this is the source of the advice to avoid static entirely.
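
A minimal, contrived illustration of that constructor anti-pattern; the Counter class is hypothetical:

    // a static mutable field altered in a constructor: shared, racy state
    class Counter {
        static int created;   // shared by every instance and every thread

        Counter() {
            created++;        // not atomic, and leaks state between unit tests
        }
    }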

However, in all the other cases, using static is not only more performant, it is clearer.  It shows that the field isn't different for each instance, or that the method or class doesn't implicitly depend on an instance.

In short, static is good, and mutable static fields are the exception, not the rule.

Are Singletons bad?

The problems with singletons come from two directions.  They are effectively global mutable state, making them difficult to maintain or encapsulate, e.g. in a unit test, and they support auto-wiring, i.e. any component can access them, making your dependencies unclear and difficult to manage.  For these reasons, some developers hate them.

However, good dependency injection is a methodology which should be applied to all your components, singletons or not, and global mutable state should be avoided whether it lives in a singleton or not.

If you exclude global state and self-wiring components, you are left with singletons which are immutable and passed via dependency injection, and in this case they can work really elegantly.  A common pattern I use to implement strategies is an enum with one instance which implements an interface:

    enum MyComparator implements Comparator<MyObject> {
        INSTANCE;

        @Override
        public int compare(MyObject o1, MyObject o2) {
            // something a bit too complicated to put in a lambda
            return 0; // placeholder for the real comparison logic
        }
    }

This instance can be passed as an implementation of Comparator via dependency injection and, having no mutable state, can be used safely across threads and unit tests.
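
For example, a hypothetical caller might use it like any other Comparator:

    List<MyObject> objects = new ArrayList<>();
    objects.sort(MyComparator.INSTANCE);   // the enum singleton is just another Comparator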

Can I get a library or framework to do that very simple thing for me?

Libraries and frameworks can save you a lot of time and wasted effort getting your own code to do something which already works elsewhere.

Even if you want to write your own code, I strongly suggest you have an understanding of what existing libraries and frameworks do so you can learn from them.  Writing it yourself is not a shortcut to avoid having to understand any existing solutions.  A journalist once wrote with despair about an aspiring journalist who didn't like to read, only to write.  The same applies in software development.

However, I have seen (on Stack Overflow) developers go to great lengths to avoid using their own code for even trivial examples.  They feel that if they use a library it must be better than anything they could have written.  The problem with this is that it assumes adding libraries doesn't come at a cost in complexity, that you have a really good understanding of the library, and that you will never need to learn to write code you can trust.

Some developers use frameworks to help them learn what is really a methodology.  Often developers use a framework for dependency injection when they could just do this in plain Java, but they either don't trust themselves or their team to do it.
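
To show what "plain Java" dependency injection can look like, here is a hedged sketch; the PriceSource and OrderService types are made up for illustration:

    // constructor injection by hand: no framework, the dependency is simply passed in
    interface PriceSource {
        double price(String symbol);
    }

    class OrderService {
        private final PriceSource priceSource;

        OrderService(PriceSource priceSource) {   // the dependency is explicit
            this.priceSource = priceSource;
        }

        double quote(String symbol) {
            return priceSource.price(symbol);
        }
    }

    class Wiring {
        public static void main(String[] args) {
            PriceSource stub = symbol -> 100.0;             // a stub for a test or demo
            OrderService service = new OrderService(stub);  // the "wiring" is ordinary code
            System.out.println(service.quote("ACME"));
        }
    }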

In the high performance space, the simpler the code and the less work your application does, the easier it is to maintain, with fewer moving parts, and the faster it will go.  You need to use the minimum of libraries and frameworks, ones which are reasonably easy to understand, so you can get your system to perform at its best.

Is using double for money bad?

Using fractional numbers without any regard for rounding will give you unexpected results.  On the plus side, errors with double are usually obviously wrong, like 10.99999999999998 instead of 11.

Some have the view that BigDecimal is the solution.  However, the problem is that BigDecimal has its own gotchas, is much harder to validate/read/write, but worst of all can look correct when it is not.  Take this example:

    double d = 1.0 / 3 * 3 + 0.01;
    BigDecimal bd1 = BigDecimal.valueOf(1.0)
            .divide(BigDecimal.valueOf(3), 2, RoundingMode.HALF_UP)
            .multiply(BigDecimal.valueOf(3))
            .add(BigDecimal.valueOf(0.01))
            .setScale(2, BigDecimal.ROUND_HALF_UP);
    BigDecimal bd2 = BigDecimal.valueOf(1.0)
            .divide(BigDecimal.valueOf(3), 2, RoundingMode.HALF_UP)
            .multiply(BigDecimal.valueOf(3)
            .add(BigDecimal.valueOf(0.01)))
            .setScale(2, BigDecimal.ROUND_HALF_UP);
    System.out.println("d: " + d);
    System.out.println("bd1: " + bd1);
    System.out.println("bd2: " + bd2);

This produces three different results.  By sight, which one produces the right result?  Can you tell the difference between bd1 and bd2?

This prints

    d: 1.01
    bd1: 1.00
    bd2: 0.99

Can you see from the output which is wrong? Actually the answer should be 1.01.

Another gotcha of BigDecimal is that equals() and compareTo() do not behave consistently: equals() can be false when compareTo() returns 0, i.e. in BigDecimal, 1.0 equals 1.00 is false as the scales are different.
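
For example:

    BigDecimal a = new BigDecimal("1.0");
    BigDecimal b = new BigDecimal("1.00");
    System.out.println(a.equals(b));    // false: same value, different scale
    System.out.println(a.compareTo(b)); // 0: numerically equal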

The problem I have with BigDecimal is that you get code which is often harder to understand and which produces incorrect results that look like they could be right.  BigDecimal is also significantly slower and produces lots of garbage.  (This has been improving in each version.)  There are situations where BigDecimal is the best solution, but it is not a given, as some would protest.

If BigDecimal is not a great alternative, is there another?  Often int and long are used with fixed precision, e.g. a whole number of cents instead of fractions of a dollar.  This has some challenges, in that you have to remember where the decimal place is.  If Java supports value types, it might make sense to use these as wrappers for money, giving you more safety but keeping the control, clarity and performance of dealing with whole-number primitives.
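
A minimal sketch of the fixed-precision approach, assuming cents as the unit (the variable names are illustrative):

    long priceInCents = 10_99;                   // $10.99 held as 1099 cents
    int quantity = 3;
    long totalInCents = priceInCents * quantity;
    // you have to remember the implied decimal place when formatting
    System.out.printf("total: $%d.%02d%n", totalInCents / 100, totalInCents % 100);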

Using null values

For developers new to Java, getting repeated NullPointerExceptions is a draining experience.  Do I really have to create a new instance of every object, and of every element in an array, in Java?  Other languages don't require this, as it is often done via embedded data structures (something which is being considered for Java).

Even experienced Java developers have difficulty dealing with null values and see it as a big mistake to have null in the language. IMHO, the problem is that the replacements are often far worse, such as NULL objects which don't throw an NPE but perhaps should have been initialised to something else.  In Java 8, Optional is a good addition which makes the handling of a non-result clearer.  I think it is useful for those who struggle with NullPointerException, as it forces you to consider that there might not be a result at all.  It doesn't solve the problem of uninitialised fields, though.
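
As a small sketch of the kind of non-result handling Optional makes explicit (the findUser lookup and its backing map are made up for illustration):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;

    public class OptionalDemo {
        static final Map<Integer, String> USERS = new HashMap<>();
        static { USERS.put(1, "alice"); }

        // the return type tells the caller there may be no result
        static Optional<String> findUser(int id) {
            return Optional.ofNullable(USERS.get(id));
        }

        public static void main(String[] args) {
            System.out.println(findUser(1).orElse("<no user>")); // alice
            System.out.println(findUser(2).orElse("<no user>")); // <no user>
        }
    }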

I don't like it personally as it solves a problem which can be solved more generally by handling null correctly, but I recognise that for many it is an improvement.

A common question is: how was I supposed to know a variable was null?  To my mind, this is the wrong way around.  It should be: why assume it couldn't be null?  If you can't answer that, you have to assume it could be null, and an NPE shouldn't be any surprise if you don't check for it.

You could argue that Java could do with more syntactic sugar to make code which handles null cleaner, such as the Elvis operator, but I think the real problem is that developers are not thinking about null values enough.  For example, do you check that an enum variable is not null before you switch on it?  (I think switch should allow a case null:, or at least fall through to default:, but it doesn't.)
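
A hedged example of that particular gotcha; the Colour enum and describe method are hypothetical:

    public class EnumSwitchDemo {
        enum Colour { RED, GREEN }

        static String describe(Colour colour) {
            if (colour == null)       // without this check, switch (colour) throws a NullPointerException
                return "unknown";
            switch (colour) {
                case RED:   return "warm";
                case GREEN: return "cool";
                default:    return "other";
            }
        }

        public static void main(String[] args) {
            System.out.println(describe(Colour.RED)); // warm
            System.out.println(describe(null));       // unknown, rather than an NPE
        }
    }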

How important is it to write code fast?

Java is not a terse language, and without an IDE to write half the code for you, it would be really painful to write, especially if you spent all day writing code.

But this is what developers do all day, don't they?  Actually, they don't.  Developers don't spend much of their time writing code; they spend 90% (for new code) to 99% (for legacy code) of their time understanding the problem.

You might say: I write 1000 lines of code in a day, and later I re-write the code (often making it shorter), and some time later I fix it.  However, if you were to count just the code you needed in the end, while it is still fresh in your mind (or do this from a printout), and divide it by the total time you spent on the project, end to end, you are likely to find it was actually less than 100 lines of code per day, possibly less than 10 lines per day.

So what were you really doing all that time, if it wasn't writing the finished product?  You were understanding what the end users required, and what was required to implement the solution.

Someone once told me: it doesn't matter how fast, how big, how deep or how many holes you dig, if you are digging them in the wrong place.

Conclusion

I hear views from beginners to distinguished developers claiming you shouldn't / I can't imagine why you would / you should be sacked if you use X; you should only use Y.  I find such statements are rarely 100% accurate.  Often there are edge cases, and sometimes very common cases, where such statements are misleading or simply incorrect.

I would treat any such broad comments with scepticism.  Often their authors find they have to qualify what they said once they see that others don't share the same view.


Comments

  1. Those are a very sensible set of engineering observations.

    As for "I am often surprised the degree that developers don't like to think about error handling", well the answer is regrettablly easy. They are "happy daze" artistes (note the extra "e"), not engineers.

    Unfortunately HR-droids can't tell the difference, and probably wouldn't want to hire someone with a "negative outlook" since they wouldn't be a "team player". Managers are usually happy to accept estimates from happy daze artistes, even if they have to redo the work later.

  2. Great post!
    For an investigation of the problems of dealing with exceptions in lambdas see this post:
    http://www.rationaljava.com/2015/04/cheating-with-exceptions-java-8-lambdas.html

  3. Excellent observations! When it comes to null values, I have found that annotating parameters and return values (using e.g. JSR 305 annotations) greatly helps. Not only will these annotations allow static analysis to find violations of null handling, they also force the developer to actively think about null values.

