Posts

Showing posts from 2024

Dates aren't what they used to be

I find time fascinating and surprisingly complex. Time zones and calendars change from place to place over time. There are a number of interesting websites on the subject. Time is one of those concepts that appears deceptively simple on the surface yet becomes increasingly intricate the more we examine it. As software developers, we often face scenarios where we must handle dates and times and their myriad associated rules—time zones, calendar systems, cultural conventions, and historical irregularities. Working with time can lead us into subtle pitfalls that affect everything from straightforward user interfaces to global financial systems.

Time is an illusion. Lunchtime doubly so. — Douglas Adams, The Hitchhiker’s Guide to the Galaxy

The Surprising Complexity Behind Time

Time is not uniform. Humans have invented calendar systems and measurement techniques, each influenced by politics, religion, and culture. As a result, how we record and interpret dates ha...
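The preview above mentions time-zone rules as a source of subtle pitfalls; as a hedged illustration (not taken from the post itself), the short Java sketch below shows one such quirk with the standard java.time API, using the 2024 US daylight-saving transition. The class name is made up for this example.

import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

// 02:30 on 10 March 2024 does not exist in New York: clocks jump from 02:00 to 03:00.
// java.time resolves the gap by moving the local time forward by the length of the gap.
public class DstGapExample {
    public static void main(String[] args) {
        LocalDateTime local = LocalDateTime.of(2024, 3, 10, 2, 30);
        ZonedDateTime resolved = ZonedDateTime.of(local, ZoneId.of("America/New_York"));
        System.out.println(resolved); // prints 2024-03-10T03:30-04:00[America/New_York]
    }
}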

The AI Trough

Artificial Intelligence (AI) has long promised to transform software development. Yet, as many experienced engineers discover, initial enthusiasm often settles into a more subdued reality. This is the "Trough of Disillusionment" within the Gartner Hype Cycle—where inflated expectations give way to measured assessments. In this phase, teams confront the practical limitations of AI-driven tools, refine their strategies, and seek a balance between what AI can deliver and what human expertise must still provide. This article continues from AI on the Hype Cycle.

We do these things not because they are easy, but because we thought they were going to be easy. — Programmer’s Credo

Challenges of AI Adoption

When integrating AI into software engineering workflows—be it code completion, architectural documentation, or performance tuning hints—teams quickly encounter stumbling blocks: Accuracy and Reliability: AI-generated content may contain inaccuracies, ou...

AI on a Hype Cycle

This is the first in a series of posts supporting a talk I will be giving online at JChampionConf on 27th January 2025: Lessons learnt from founding my own company, and over 30 years of hands-on coding. In these posts, I am looking to provide some theory as well as practical examples. One way to try to predict what is possible in the future is to look at the past. One of my favourite ways to look at the past is through aphorisms. Aphorisms are short, pithy statements that express a general truth or opinion.

I love quotations because it is a joy to find thoughts one might have, beautifully expressed with much authority by someone recognised wiser than oneself. — Marlene Dietrich, 1901-1992

Quotes about Learning from History and Adaptability

The only constant is change. — Heraclitus of Ephesus, c. 500 BCE

In the ever-evolving realm of AI, this ancient wisdom remains pertinent. Today’s cutting-edge AI models may become tomorrow’s stan...

What might an AI System Prompt look like?

Not surprisingly, the system prompts for “o1” are restricted, but it can provide a hypothetical answer.

Understanding the Role of System Prompts

System prompts serve as the invisible backbone of an AI’s reasoning process. They define core objectives, ethical boundaries, and operational tactics well before the user asks a question. In older models, these prompts were often implicit or underspecified, leaving the AI uncertain about handling ambiguous instructions or potentially unsafe requests.

Hypothetical System Prompts for o2

Since the real system prompts are restricted, I asked about a hypothetical next generation: Imagine you are the next generation of chat AI called o2; what system prompts might it have?

System Prompts for “o2” (Hypothetical Example)

Below is a hypothetical set of system-level instructions that a next-generation chat AI—let’s call it “o2”—might be given before it begins interacting with users. These prompts are entirely speculative and desig...

Demystifying Java Object Sizes: Compact Headers, Compressed Oops, and Beyond

Introduction

Measuring an object’s size in Java is not straightforward. The platform encourages you to consider references and abstractions rather than raw memory usage. Still, understanding how objects fit into memory can yield significant benefits, especially for high-performance, low-latency systems. Over time, the JVM has introduced optimisations like Compressed Ordinary Object Pointers (Compressed Oops) and, more recently, Compact Object Headers. Each of these can influence how large or small your objects appear. Understanding these factors helps you reason about memory usage more concretely.

Measuring Object Sizes

In principle, you can estimate an object’s size by creating instances and observing changes in the JVM’s free memory. However, you must neutralise certain factors to get consistent results. For example, turning off TLAB allocation (-XX:-UseTLAB) makes memory usage more directly observable. Repeated measurements and median calculations can reduce the im...
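As a minimal sketch of the free-memory approach the preview describes, assuming the JVM is started with -XX:-UseTLAB so individual allocations show up in Runtime.freeMemory(); the class and method names here are illustrative, not from the post.

import java.util.Arrays;
import java.util.function.Supplier;

// Estimates an object's heap footprint from changes in free memory.
// Run with -XX:-UseTLAB so each allocation is reflected in Runtime.freeMemory().
public final class ObjectSizeEstimator {

    // Takes the median of several samples to reduce noise from GC and JIT activity.
    static long estimateSize(Supplier<Object> factory, int samples) {
        Runtime runtime = Runtime.getRuntime();
        long[] deltas = new long[samples];
        Object[] keepAlive = new Object[samples]; // keep instances reachable while sampling
        for (int i = 0; i < samples; i++) {
            long before = runtime.freeMemory();
            keepAlive[i] = factory.get();
            long after = runtime.freeMemory();
            deltas[i] = before - after;
        }
        Arrays.sort(deltas);
        return deltas[samples / 2]; // median sample
    }

    public static void main(String[] args) {
        System.out.println("~" + estimateSize(Object::new, 11) + " bytes per java.lang.Object");
        System.out.println("~" + estimateSize(() -> new long[16], 11) + " bytes per long[16]");
    }
}

Reported sizes will vary with JVM options such as Compressed Oops and Compact Object Headers, the factors discussed above.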