Essentialism and Technology
Overview

Essentialism isn't just flawed; it is flawed at many levels, some of which should be familiar to technologists. I have read a number of arguments against essentialism, but for me they miss several points which relate to technology, an area I claim to know more about. In technology you can measurably demonstrate the performance difference between two solutions, and yet it is still difficult to convince people to let go of their biases.
What is Essentialism?

(from Google) "a belief that things have a set of characteristics that make them what they are, and that the task of science and philosophy is their discovery and expression; the doctrine that essence is prior to existence.
- the view that all children should be taught on traditional lines the ideas and methods regarded as essential to the prevalent culture.
- the view that categories of people, such as women and men, or heterosexuals and homosexuals, or members of ethnic groups, have intrinsically different and characteristic natures or dispositions."
What is true for a group vs what is true for an individual

Imagine you have a lottery where balls numbered between 1 and 48 are selected at random. You can say with mathematical certainty that the expected value of the first number selected is 24.5. Similarly you can attempt to describe the average of metrics which relate to gender or any other grouping. In the lottery example it is important to note that:
- no ball has the number 24.5; the average might not represent anyone.
- you cannot say anything about a particular ball, e.g. which is more likely to be drawn.
- you cannot say which ball is better, or what its value or role is, e.g. relative to the average.
You might say the ball which has your number on it is better, but that is completely subjective and momentary.
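The lottery point can be sketched in a few lines of Python; the numbers come from the example above, while the variable names are my own:

```python
import random

# Illustrative sketch of the lottery above: balls numbered 1 to 48.
balls = range(1, 49)

# The expected value of the first ball drawn is (1 + 48) / 2 = 24.5.
expected = sum(balls) / len(balls)
print(expected)  # 24.5

# Yet no ball actually carries the number 24.5: the average might not
# represent anyone.
print(24.5 in balls)  # False

# And the average says nothing about which particular ball will be
# drawn next; every ball is equally likely.
draw = random.choice(list(balls))
print(1 <= draw <= 48)  # True
```

The average is a true statement about the group and, at the same time, tells you nothing about any individual member of it.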
Perception is not reality
When trying to determine how to optimise an application, perception is a very tricky factor. It will deceive you in ways you don't even realise.
- People tend to see performance as relative. Inconsistent performance is more likely to be seen as a problem than consistent performance, even in cases where the inconsistent performance is consistently better. People tend to accept things which don't change.
- People are very bad at guessing the causes of performance problems. With more than ten years' experience optimising applications, the one thing I have learnt is that whatever you think the problem is, there is a very high chance it is not the biggest problem, possibly not even the fifth biggest. Even so, I can almost always guess a couple of the top ten.
- Even when everything is telling you something is a big problem, it might turn out that fixing it won't make much difference, and something less obvious could turn out to be more significant. Sometimes multiple small changes only show a big difference when applied in combination.
When determining where to optimise an application, significant investigation is required; however, when you measure a system you invariably alter it and get an imperfect view. Measuring usually gives a better guess, but it is still a guess.
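One way to make the observer effect concrete is to time the clock itself. This is a minimal sketch assuming Python's `time.perf_counter_ns`; the function name and sample count are my own illustrations:

```python
import time

# A minimal sketch of how measuring alters what you measure: every
# timing of a short operation also includes the cost of reading the
# clock. Here we estimate that cost directly.

def measurement_overhead(samples=100_000):
    """Estimate the cost, in nanoseconds, of one clock read."""
    start = time.perf_counter_ns()
    for _ in range(samples):
        time.perf_counter_ns()
    elapsed = time.perf_counter_ns() - start
    return elapsed / samples

overhead_ns = measurement_overhead()
# Any timing of a very short operation includes at least this much
# perturbation, so the measured figure is an estimate, not the truth.
print(f"~{overhead_ns:.0f} ns per clock read")
```

If the operation you are timing is not much longer than this overhead, the measurement is dominated by the act of measuring.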
You have to know your limitations
Even things you "know" at a given point in time are limited in ways including:
- How we see the world depends on the means we have of describing it.
- What we "know" today might not have always been the case. We tend to have a biased view of the past.
- What we "know" today might not always be the case. Always be ready to re-evaluate your assumptions.
- What we "know" is often very localised e.g. specific to a location, or a technology.
- We are always learning. The way we do things is unlikely to be the "best" way of doing something. Our idea of "best" is likely to change in the future, as it has in the past.
I believe the way essentialism is typically used is to reflect views expressed without the need or ability to justify them. It is; because it is. I can't justify it, therefore I conclude or imply there is no need to justify it. Nevertheless, people are willing to live their lives on such a flimsy basis, and to insist others do too.
I see this in the fast data world where I work. There is a very strong bias towards working the way things have always been done, and even if you can show there are extreme performance differences, like a 10x or 1000x difference, this often doesn't overcome the hurdle of "we have always done it this way here". Some go as far as to insist that everyone else should be doing it the way they do it, even in cases where that clearly and measurably wouldn't be possible for them.
In the technology space, one of the worst excuses for something is "we did it this way for performance reasons", i.e. it's really bad or complicated, but it has to be fast. The questions I ask are:
- Did you actually test that it was faster? Usually simpler is faster.
- What was the real requirement for this extra speed, in terms of the impact on end users rather than a requirement which merely sounded good to you? i.e. would it have been fast enough with a simpler solution?
- Was there a better way to achieve the performance needed, with less impact?
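The first question is cheap to answer. As a hedged sketch, here is one way to check a "clever" hand-written loop against the simple built-in; the function names and data sizes are illustrative, not from any real project:

```python
import timeit

# "Did you actually test that it was faster?" - a tiny benchmark
# comparing a hand-rolled "optimised" loop with the obvious solution.

data = list(range(10_000))

def clever_sum(xs):
    # Manual accumulation loop, a classic micro-"optimisation".
    total = 0
    for x in xs:
        total += x
    return total

def simple_sum(xs):
    # The obvious, simple solution.
    return sum(xs)

clever = timeit.timeit(lambda: clever_sum(data), number=200)
simple = timeit.timeit(lambda: simple_sum(data), number=200)
print(f"clever: {clever:.4f}s, simple: {simple:.4f}s")
# On CPython the built-in usually wins: measure before assuming the
# complicated version earns its complexity.
```

A few minutes of measurement like this is often enough to retire a "performance reasons" excuse.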
People actively promote complicated and expensive solutions they have decided to use, when they have a poor basis for their assumptions; i.e. not only is it bad for others, but it was bad for them as well.
Life is a beach
I believe that there are differences between men and women, but in the same sense I believe there are beaches.
- The idea of beaches is a human construct; there are no beach particles in physics.
- Beaches are not everywhere, nor should they be.
- I don't know where most beaches are and never will.
- Beaches are often artificially constructed or maintained (for the benefit of tourists).
- Naturally occurring beaches are often destroyed, i.e. the lack of a visible beach doesn't mean there wouldn't be one.
- The existence of beaches implies nothing about other aspects of our lives.
In computing and in life we are always learning and adapting to changes. While there are themes in how we work and live our lives, we should start with the assumption that our way is unlikely to be the best, and that we will constantly be re-evaluating what best means.
What we should be chasing is a moving target. We shouldn't be trying to implement or live an "idealised" view of the past, which was never ideal.
Thank you for a fascinating, on-topic post. Like you, I've spent most of the last decade tuning systems, which is a wonderfully humbling thing to do, bringing endless challenges. I can't count the number of consulting projects that follow the pattern:
"we need you to tune our GC settings" [your app's context switches are causing your pauses]
"we need you to tune our SQL queries" [your PAAS provider is overcommitting CPU, and %steal is killing your latencies]
Four truths that seem almost universal:
1. Classes with names that look like FastPath, Cache, Lightweight, Rocket, or Speed will contain performance hot-spots.
2. Everyone blames the network.
3. Human nature leads us to think that performance is linear. "Knowing" that isn't true often isn't enough to avoid missteps. I flirt with the idea of tattooing reminders on my arm ("measure it", "average is a dirty word", "look at scatter plots", "look at latency histograms", "what's the best case?", "never forget the lessons of J2EE", "if it were my money, how would I spend it?").
4. Technology attracts people who are black-and-white thinkers. Some amount of ego, arrogance, false pride, or unwitting good luck can sometimes help us to see the tremendous amount of stuff we don't know, and not give up.
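The reminders "average is a dirty word" and "look at latency histograms" can be made concrete with a small, entirely fabricated latency sample:

```python
# A made-up latency sample illustrating "average is a dirty word":
# the mean looks healthy while the tail, which users actually feel,
# is awful.

latencies_ms = [1] * 99 + [500]  # 99 fast requests, one very slow one

mean = sum(latencies_ms) / len(latencies_ms)
p99 = sorted(latencies_ms)[int(0.99 * len(latencies_ms))]

print(f"mean={mean:.2f} ms")  # looks healthy
print(f"p99={p99} ms")        # the user-visible pain
```

A mean of about 6 ms hides a 500 ms outlier completely, which is why percentiles and histograms, not averages, belong on the dashboard.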
A more dangerous defense mechanism is to retreat into relativism and be unwilling to take the risk of saying "we should be able to support 200,000 concurrent connections on a static web-server running on a low end Sandy Bridge in 2014", and instead say "it all depends." The risk is that as soon as geeks say "it depends" about performance/probability questions, they seem to forget about the absolutes that they do know (it takes 45 micros/8 micros/<1micro to traverse the IP stack on my host).
This can lead to "Emperor has no clothes" scenarios, which abound when people spend other people's money.
I'm delighted that today we have benchmark sites like stacresearch.com or http://www.techempower.com/blog/2014/05/01/framework-benchmarks-round-9/ or passmark.com that share objective data. Blogs like yours, mechanicalsympathy, Gil Tene's, psychosomatic-lobotomy-saw, Brendan Gregg's, Cliff Click's, igvita.com, Neil Gunther's, and Baron Schwartz's are part of a trend where technologists are doing more to share hard-won expertise with all. It was harder to find technical wisdom like this fifteen years ago.
Thanks for your many contributions,
This appears to summarise the essence of your frustrations: "People actively promote complicated and expensive solutions they have decided to use".
It feels to me that if, instead of looking at it from the point of view of [nicely lab-testable] biases, one looked at it in terms of [social & economic] motives, then many of these decisions can eventually make sense.
Also, a bias presents something for you to "fix", while a motive is something that you have to discover/understand. Looking at motives can make things start to look a bit more straightforward (however unacceptable or disgusting).
Simplistic example: a bank dismisses well-known/proven software that is also cheaper in favour of other similarly well-known/proven software which is 10+ times more expensive. Why would anyone do this?
Now add a few more negatives to the 10x product (like complicated and slower).
The bank may likely still choose the 10x product. Even more perplexing!
However, what if we are missing an obvious element?
What if the 10x product's price meets the foremost [unstated but implied!] requirement?
Would this be related to biases?