Showing posts from February, 2013

Chronicle and the Micro-Cloud

Overview A common question I face is: how do you scale a Chronicle based system if it has a single writer and multiple readers? While there are solutions to this problem, it is far more likely it won't be a problem at all. The Micro-Cloud This is the term I have been using to describe a single thread doing the work currently done by multiple servers (or the opposite of the trend of deploying a single application to multiple machines). There is an assumption that the only way to scale a system is to partition it or to have multiple copies. This doesn't always scale that well unless you have multiple systems as well, and all of this adds complexity to the development, deployment and maintenance of the system. A common problem I see is that developers are no longer able to test the end-to-end system on their workstations or in unit/functional tests, which lengthens the development life cycle, dramatically reducing productivity and the time to innovate. Chronicle based processing eng
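The single-writer, multiple-reader shape can be sketched in plain Java. This is a minimal illustration of the pattern, not the Chronicle API: one thread appends to a journal, and each reader keeps its own cursor so every reader independently sees the whole stream.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only (not Chronicle's API): a single writer appends to a journal;
// each reader owns an independent cursor, so all readers see every entry.
public class SingleWriterJournal {
    private final List<String> journal = new ArrayList<>();

    // Only one thread ever calls write(), so the writer needs no lock.
    void write(String message) {
        journal.add(message);
    }

    // Each reader replays the journal at its own pace.
    class Reader {
        private int cursor = 0;

        String poll() {
            return cursor < journal.size() ? journal.get(cursor++) : null;
        }
    }

    public static void main(String[] args) {
        SingleWriterJournal j = new SingleWriterJournal();
        SingleWriterJournal.Reader a = j.new Reader();
        SingleWriterJournal.Reader b = j.new Reader();
        j.write("tick-1");
        j.write("tick-2");
        // Both readers independently replay the same stream.
        System.out.println(a.poll() + " " + a.poll()); // tick-1 tick-2
        System.out.println(b.poll());                  // tick-1
    }
}
```

Because the journal is append-only and has exactly one writer, readers never contend with each other; adding a reader costs nothing at the writer.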

Performance Java Training Details

I have created a group for announcements and discussions regarding Higher Frequency Trading's Java Training. HFT Java Training discussion group This training emphasises that simple and deterministic designs and code are the best path to easily develop, innovate and maintain applications. This is also the path to low latency and high throughput systems. The latencies of interest range from 20 to 200 micro-seconds, with complex event throughputs of 50K/s to 500K/s. For more information on course outlines and prices see Higher Frequency Trading Java Training. This group will also announce introductory offers such as condensed Saturday courses. These are previews of the courses and are cheaper per course. The courses have a maximum of between 4 and 6 students per instructor (depending on the level of the course). The initial courses will be in London. In time, I plan to offer these courses in other major cities and as webinars. There is an option for a week'

Performance Java Training

I am looking to provide "Master Class" Java training for developers of high throughput and low latency systems, based on my experience in designing and implementing trading systems for hedge funds. As this will be my first course, I am looking for feedback as to what to include and what to drop. I am concerned this is an overwhelming amount of information to cover in a week, which will make it difficult to cover each topic in much depth. You can contact me on peter.lawrey (a) if you are interested in the course. This is in-person training in English. My first session will be in London, but I would consider other cities if there is enough interest. Overview The training assumes you are familiar with all the standard features of Java and know most of the topics covered by advanced Java programming courses, i.e. everything covered in most advanced books. The scope of the training is designing, developing, testing and tuning performance Jav

Java is dead (again)

Here are a couple of responses to this annual question that I thought worth sharing. The Day Java lost the Battle There is a common myth amongst technologists that better technology will always be the most successful, or that you must keep improving or die. A counter-example I use is the QWERTY keyboard. No one who uses it does so because it is a) natural or easy to learn, b) faster to use, or c) newer or cooler than the alternatives. Yet many developers who couldn't imagine using anything other than a QWERTY keyboard insist that Java must be dead for these reasons. I have looked at predictions that Java is dead going back to 1996 and found these predictions track Java's popularity: when there was a drop in interest due to the long lifetimes of Java 1.4 and Java 6, there was also a drop in predictions that Java is dead. (When, IMHO, that would have been a good time to question such things.) I have come to the conclusion that passionate calls that Java is dead are a good sign that Java is alive

A down side of durable messaging

Overview Durable messaging can be very fast, as fast as non-durable messaging up to a point. Limitations of durable messaging Durable messaging is dependent on the size of your main memory and the speed of your hard drive. If you have an HDD, this can be as low as 20 MB/s and as high as 60 MB/s. A RAID set of HDDs can support between 100 and 300 MB/s. A SATA SSD can support between 100 and 500 MB/s, and a PCI SSD can support up to 1.5 GB/s. Case study Say you have 8 GB of memory, you are writing two million 100-byte messages per second, and you have an HDD which supports 25 MB/s. This works fine in bursts, but you reach a point where your disk cache is full. Depending on your OS, this can be between 20% and 80% of your main memory size. In my experience, Windows tends to be closer to 20% even if you have plenty of free memory, whereas Linux tends to allow in the region of 30% of your memory in uncommitted writes. Say you are writing two million 100-byte messages per second, or 200
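The case study above can be worked through as back-of-envelope arithmetic: the cache fills at the write rate minus the drain rate of the disk. The figures below are the article's own assumptions (8 GB of memory, a Linux-like 30% dirty-cache allowance, 2M x 100-byte messages per second, a 25 MB/s HDD).

```java
// Back-of-envelope calculation for the case study: how long can a burst
// run before the disk cache is full?
public class DiskCacheBurst {
    public static void main(String[] args) {
        double mainMemoryGB = 8.0;
        double cacheFraction = 0.30;                    // Linux-like uncommitted-write limit
        double writeRateMBps = 2_000_000 * 100 / 1e6;   // 2M msgs/s x 100 B = 200 MB/s
        double diskRateMBps = 25.0;                     // a single HDD

        double cacheMB = mainMemoryGB * 1024 * cacheFraction;  // ~2458 MB of cache
        double netFillMBps = writeRateMBps - diskRateMBps;     // cache fills at 175 MB/s
        double secondsUntilFull = cacheMB / netFillMBps;

        System.out.printf("cache %.0f MB, fills in %.1f s%n", cacheMB, secondsUntilFull);
    }
}
```

So with these assumptions the burst can only be sustained for around 14 seconds before writes are throttled to the speed of the disk.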

High Performance Durable Messaging

Overview While there are a good number of high performance messaging systems available for Java, most avoid quoting benchmarks which include durable messaging and serialization/deserialization of messages. This is done for a number of reasons: 1) you don't always need or want durable messages, and 2) you may want the option of using your own serialization. One important reason they are avoided is that both of these can slow down messaging by as much as 10x, which doesn't look so good. Most messaging benchmarks highlight the performance of passing raw bytes around without durability, as this gives the highest numbers. Some also quote durable messaging numbers, but these are typically much slower. What if you need to serialize and deserialize real data efficiently, and you would like to record and replay messages, even if you have learnt to do without these? Higher performance serialization and durability I have written a library which attempts to solve more of the problem, as I s
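To show the kind of serialization step a realistic benchmark has to include, here is a hedged sketch: a small message's fields are written straight into a ByteBuffer and read back in the same order, with no intermediate objects. The "price" message and its field layout are illustrative assumptions, not any library's wire format.

```java
import java.nio.ByteBuffer;

// Illustrative sketch: serialize a small fixed-layout message into a
// ByteBuffer and deserialize it in the same field order. The message
// shape (id, bid, ask) is a made-up example, not a library format.
public class RawSerialization {
    static void writePrice(ByteBuffer buf, long instrumentId, double bid, double ask) {
        buf.putLong(instrumentId).putDouble(bid).putDouble(ask);
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocateDirect(64);
        writePrice(buf, 42L, 99.5, 100.5);
        buf.flip();
        // Deserialize in exactly the order the fields were written.
        long id = buf.getLong();
        double bid = buf.getDouble();
        double ask = buf.getDouble();
        System.out.println(id + " " + bid + "/" + ask); // 42 99.5/100.5
    }
}
```

Writing primitives into a pre-allocated direct buffer like this avoids object allocation on the hot path, which is why benchmarks that skip this step look so much faster than real systems.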