Star Ford

Essays on lots of things since 1989.

Agile America

If a software project is big enough to be “nontrivial” (as programmers say), a company needs not only a system for management of the quality of the engineering product, but also a system for management of the quality of the process – that is, management of management. This meta-management has become a big business itself with companies that just sell processes to software companies, and it involves proving that certain practices and ways of making decisions are likely to produce better outcomes than others. The work process is itself a product, complete with advertising and brand loyalty.


Here are some of the trends since the 90s that set the context for what is happening now in the industry:

  • Systems are getting scaled larger (more simultaneous users), are more open to hacking, and have higher up-time requirements, so they require more people to keep them running.
  • New systems don’t appear to be any more complex, but they often contain many more records because companies are consolidating into monopolies; databases are much larger while the complexity stays the same.
  • New systems are more business-critical. Earlier, a business often had a paper backup or a way to work without the software system, but now the company relies on it with no alternative.
  • The total number of people involved in automation has exploded. The nerdy prodigy type that is historically associated with programming is now in a small minority; there are not enough of those kinds of people. Most people in IT departments might not even be naturals at the job; many are extroverted, not terribly exact about logic, and much like society at large.
  • More people’s demands are perceived to be important – more stakeholders. In particular, non-technical people are now deciding on the look and feel and flow of systems, and there is a noticeable degradation in internal consistency across business applications.
  • In terms of hardware, tools and platforms, the main change is that speed and memory are much, much cheaper. Despite popular belief, there has not been much innovation in operating systems, databases, and languages compared to the period in the 80s and 90s.

Meta-management works with these trends and also aims to prevent the worst problems of the past. Two of the most serious and common problems have been:

  • A developer suddenly disappears and no one else knows how to change the system, so it becomes stuck.
  • A large project goes way over estimated time and cost, and by the time it is done (if ever), it no longer matches what the company actually needs.

In order to solve those problems, we now have “agile” processes. While there are competing variants, they all aim to solve those two major problems through teamwork (no one person should be irreplaceable) and short planning horizons (projects cannot be months long by design). The idea is to work on very small increments of improvement to a working system and leave the system in a working condition all along, instead of going into the back room and coming out months later with a big finished system. It ensures that a company is getting a continuous stream of value in return for a continuous stream of investment. It is intended to eliminate the risk of losing all the investment due to the two problems above.


In my observations of company productivity, I see about 20% productive people, 60% deadweight, and 20% malicious or destructive people. It does not appear to be possible to change these ratios, because useless and destructive people often have greater powers of persuasion than productive people, and thus they appear essential. Usually they honestly believe they are contributing.

One of the most important meta-management concerns is whether any one person can sabotage a project. Each of the destructive 20% will sabotage it if they can. What we want is a system that restricts the power of the misinformed or malicious to only slightly slow down progress or make a feature slightly worse, but not to completely sabotage it.

Different company organizations offer different ways to sabotage projects:

  • In pyramid-shaped organizations that work primarily by delegation, every project gets subdivided into tasks and delegated, then further subdivided and delegated, and so on, making every person accountable to their superior in the pyramid. If any one person is a destructor, their part fails and the whole system fails. The destructors can be identified, but perhaps too late. For the same reason, productive people can do a lot and be recognized. This structure is widely considered obsolete.
  • Team-based organizations are an answer to the pyramid problem: everyone is replaceable and the productive 20% accomplishes everything by going around everyone else, leveraging a great many more connections between people – not just up and down the chain of command. Sabotage is thwarted by ensuring that no one person has any substantial power, and only teams have collective power. Destructors are difficult to identify, as are producers. An especially productive person is held back to the slower pace of the team.
  • Team-pyramids are a way to combine the worst aspects of teams and pyramids. Authority is delegated down through levels in a pyramid, but the engineering product is not passed back up through that pyramid. The product building is done as with teams, but the decisions are in pyramids. This system allows anyone to sabotage the project with no accountability, since the producers are not allowed to work around the deadweight and destructors. My last job recently switched to this system (unknowingly I guess) and the ability of anyone to accomplish anything seized up like an engine without oil.

Other meta-management problems

A key problem of meta-management as a supposedly legitimate field of study is that it requires longer term knowledge than almost anyone has. Most people in the industry are still in their first 10-year project, and hardly anyone lives long enough to have done enough 10-year projects to be able to compare them and have personal experience of which process works better. Therefore most of the claims made about meta-management cannot really be substantiated by experience.

Related to that legitimacy problem is the bizarre worship of process that goes on. Like anything based on faith alone, people become frenetic adherents to their chosen “agile” process, and evangelize it with a giant set of internal vocabulary. There are normal-sounding terms like “runway”, “backlog” and “standup” – and esoteric words (sometimes bordering on religious doctrine) like “kanban”, “scrum”, “manifesto” and “epic”. The words can be used as an appeal to a higher authority to prove a point, when that point cannot be supported by common sense or the normal use of language.

In that last company, a lot of people were as sure as the sky is blue that the process they were trained in was going to work, even when they had never personally done that process successfully. Many of them had never accomplished even one thing independently, since their whole careers were in teams that did not expose whose contributions were relevant. Some of the deadweight people had a very faith-based allegiance to certain aspects of process, and the less experience they had with it working, the more evangelical they seemed to be about it.

Deep in the house of mirrors, there are blatantly countersensical and even reality-defying beliefs. My “favorite” one is the belief that software bugs fall into one of two categories: Either it is a critical bug and the whole team stops all other work and fixes it immediately, or it is non-critical and should never be fixed or even written down anywhere. Anyone who has used software knows there is really a whole range of severity: some bugs make a product unusable, while others only make it annoying, slow, confusing or risky in some way. Anyone who hasn’t drunk the Agile kool-aid can see that obviously some bugs need to be assigned a medium non-emergency priority, but that common sense position had not been canonized and you are not supposed to believe it.
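The common-sense severity spectrum can be sketched in a few lines. This is a hypothetical triage scale of my own invention, not part of any real bug tracker or agile canon:

```python
from enum import IntEnum

class Severity(IntEnum):
    # A spectrum, not a binary: most real bugs land in the middle.
    CRITICAL = 4  # product unusable: stop and fix now
    HIGH = 3      # major feature broken or risky
    MEDIUM = 2    # annoying, slow, or confusing: schedule it
    LOW = 1       # cosmetic: fix opportunistically

def should_stop_the_team(sev: Severity) -> bool:
    """Only the top of the scale justifies halting all other work."""
    return sev == Severity.CRITICAL

def should_be_tracked(sev: Severity) -> bool:
    """Everything is worth writing down, contrary to the two-bucket doctrine."""
    return True
```

The point of the sketch is only that severity is ordinal: a medium bug neither stops the team nor vanishes from the record.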

Actual agility

The term “waterfall” refers to software development processes that cannot be reversed or changed mid-course; an extremely waterfall-ish approach would be one that does all planning up front, then does all development in a back room without communication with the users, then the product is considered “done” when the contract terms are met, even if it does not work or does not meet expectations.

An agile approach is one that by contrast continuously checks in with users and is capable of adapting quickly. Ironically though, a waterfall approach can often be quicker and more agile than one labeled “Agile”, and the reason has to do with what I call “chunk size”.

A large chunk size means attempting a plan-build-test cycle that includes the whole project in a single cycle. For a large project, people are not able to plan and communicate everything accurately and it can fail just because the chunk is too large. On the other extreme, if the chunk is too small, then the plan-build-test cycle might only include one micro-feature per cycle and then it ends up taking years to develop a useful product. Very small chunks also result in inconsistency in the product usage, as different people try to push the user-interface paradigm in all different directions at once.

As an aside, the process of building something, no matter what size the chunks are, must have three general stages: One is understanding what we are wanting to accomplish (requirements); two is building it in the back room; and three is showing what was built and then evaluating, testing, fixing, and integrating it. If you have a large chunk, the back-room part might be months long; with a small chunk it might be only hours long. In any case there must be a back-room period when no communication is occurring, because it is a technical creative process that requires focus, and because it is essential to commit to something and complete it instead of being continuously up in the air.

As I saw in my last company, the plan-build-test cycle itself can be disintegrated or defined out of existence. In those conditions, management thinks that the cycle can be so short that there is no back-room period at all, and that all work can be accomplished with continuous communication going on. But that makes the work stop completely.

The ideal chunk size is one that is about the same size as (and not more than 10% larger than) the most recent successful chunk done by the developer. So if that person is comfortable with a 2-week long chunk and has demonstrated that ability, then it will work. If the person is only ready for a 1-day chunk, that is the appropriate size. Anything larger is too waterfall-ish, and anything smaller is too slow. Picking the right size yields the most actual agility.
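That rule of thumb is simple enough to write down. The function name, the unit of days, and the 10% cap are just my description above turned into code; nothing here is a standard formula:

```python
def next_chunk_days(last_successful_days: float, growth_cap: float = 0.10) -> float:
    """Size the next chunk from demonstrated ability: about the size of the
    last chunk this developer completed successfully, at most ~10% larger.
    Anything larger is too waterfall-ish; anything smaller is too slow."""
    return last_successful_days * (1.0 + growth_cap)
```

So a developer who just finished a 2-week chunk is ready for a chunk of about the same length, while someone only proven at 1-day chunks gets roughly a day.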

Can it work?

Part of my reason for writing this is to debrief myself on what happened in that last company (which was my first and only big-company job) and ponder whether large teams can really build software at all. Most of that company’s software had been written by a few people, before they started calling themselves “agile”. After that shift, it looks like they started spending more money and slowing down work. The tech giants all seem to be growing and slowing in the same way. The software I love to hate the most is a $500M fiasco of waste and slowness; had it been written before Agile was doctrine, I think the contract would have been terminated without pay, and a different supplier chosen, which would have been faster in the end. All these things make me doubt all the new meta-management thinking.

My prediction is that someone with clout will come along one of these years and declare Agile to be dead, and the industry will shift to something similar with new vocabulary and the same problems. But what would REALLY work?

The things we cannot change are:

  • the 60% deadweight problem (the sector overpays because the demand for workers is so high, so people who are not naturals are swept in and cannot contribute much)
  • the 20% destructor problem
  • the fact that most people (even the contributors) value their careers over the product and will lie about their abilities and backstab anyone without allies
  • the newness problem (most people have never completed a large project cycle)

The newness and non-naturals problem means most people are in over their head; that anxiety combined with self-protective human nature is not a great combination for making good decisions. I don’t know how anyone could change this starting point, so we need meta-management that assumes these problems will always exist.

In my last company, as productivity and quality were perceived to be declining, decision making tended to become more centralized as a stress-induced reaction, and that further slowed productivity in a vicious cycle. As someone who has actually gone around more than one 10-year cycle of large application building, I would now consider myself qualified for technical leadership, but I had to complete many small projects independently and some large ones before I felt I was ready for that. Those who did get decision making power at that company appeared to feel they were qualified without having had that length of experience. There was a general sense of operating on theory alone – things were declared to be the right way because Agile (or because of some other appeal to outside authority), not because the decision-maker had any practical experience with it.

The universal rule is that if sabotage is possible, someone will do it, generally by preventing those who are the most qualified to make decisions from making them. Thus it is essential that any process is built around having many channels of communication and no single person who can restrict the communication or decisions. Statements like the following must be impossible: “Everything has to go through me”. “So-and-so needs to be at that meeting”. “We need buy in/approval from so-and-so for this request.”

So, to conclude, here are three ideas for structures that might work, at least better than the “agile” ways I’ve seen:

  • The pyramid of tiny teams: This is essentially an old-fashioned pyramid of delegation, but each node in the structure is a tiny team of 3-5 people that are collectively accountable for delivering their part. The most successful tiny-teams are given larger chunks and get newly formed tiny-teams under them, which they can delegate to. It might prevent a node from sabotaging the whole because there is likely to be one producer on each team.
  • Decisions by duplication: Instead of holding up engineering work to resolve conflicting points of view, the company proceeds to develop the product in multiple independent teams and then uses only the most successful result.
  • Chaos plus portfolio: This is a system where no one has fixed roles and the only management is in periodic evaluations of what a worker accomplished. It relies on the notions that people like to do engineering, and that they will naturally learn to become more effective given a lot of freedom.

On rationality and buffoonery

The structure of un-reason

Listening to the extreme claims made during the Kavanaugh hearings has made me consider the limits of rationality. There appears to be a structure of un-reason that we humans are stuck in, and it looks something like this:

unreason diagram

The two paths depicted are different ways of expressing why we do things or why we adopt positions, when the topic is contested. I am not attempting to explain why we do everything we do; indeed a lot of what we do in a day is habitual or autonomic, or arises just because we feel like it. This paper concerns only those things that we make conscious verbal claims about, when we are thinking and expressing positions.

The top pathway seems to be the more common pattern and the subject of this paper. The lower pathway is the scientific version that I believe is only used in limited settings when we are able to be unusually objective.

Terms I’m using are:

  • The impetus is the actual basis for doing something, which can often be unconscious, and is self-interested. So it’s either related to a basic need (like hunger or loneliness) or resolves a state of unrest (like fear).
  • The justification is the expressed basis of the action, which we essentially make up after doing the thing, or as preparation for explaining ourselves after we have decided to do the thing. It’s retrospective of the action.
  • Buffoonery is the action-justification sequence, when viewed in the light of that retrospective order.
  • Rationality is the opposite of buffoonery, in which the reason is prospective of the action. In other words, the action is legitimately done for a known, expressed reason.
  • Buffoonery dissonance is what happens when justifications contradict each other.
  • The “cement” is all the layers of beliefs that we use to ease the buffoonery dissonance.


I will walk this through three examples – one from children in a classroom, one from a teen/adult perspective that I hope is relatable to the reader, and finally one from politicians showing an extreme case.

In the childhood example, imagine a student pleading to a teacher to relax the rules in some way – maybe to allow eating during class. “Teacher, we should be able to eat in class because we’ll be able to pay attention more!” The “reason” given is really a justification (or rationalization) – an invented basis that the student hopes will appeal to the teacher’s supposedly disinterested sense of reason. But whatever basis is given, it is not the real impetus, which is actually more simple and direct: she’s hungry. So we have the real and self-serving impetus that happens first (hunger), and the false expressed justification that comes later (in order to pay attention more), and so far there is no rationality. The teacher might think “what principle can I apply to make an impartial decision here” and that part could be actual rationality up to a point, but the teacher also has her own justifications for things, so the final ruling on the matter might not end up being rational.

The buffoonery of young children making up justifications for what they want is often transparent and teachers might even laugh at it. In particular they laugh when they notice the same child suddenly switches positions when their self-interest changes. Children’s lower level of sophistication allows us to see the structure. They are presumed to be less capable of reason so we do not hold them accountable. More on that below.

The second example is fictional but similar to things I have done. I love cookies and my impetus is to have them all for myself. Maybe I’m greedy or I fear a future cookie shortage, but I’m not consciously thinking of these causes. When I’m not actually hungry, I might tell others not to eat the cookies because “we should save them for a special occasion”. But when I get the munchies, I might eat them all and then say they were getting stale. The buffoonery here is switching justifications in a way that would make the other people in the house raise an eyebrow at my inconsistency and doubt my “reasoning”. If they pointed it out, I would feel the “buffoonery dissonance”, or the shame that goes with being caught.

The third example is what prompted this whole line of thought – the Supreme Court confirmation hearings on Kavanaugh. A senator supports something “on principle” one year and opposes the same thing a year later, appealing to the opposite principle. In this case, senators who made arguments for prudent and lengthy consideration of facts when Obama nominated a justice (and effectively delayed hearings until Obama was out of office) are now making arguments for quick action and not looking too hard at the nominee’s history. A staple of late night comedy shows is to search databases of footage for cases of inconsistency like this, and air the two clips juxtaposed. We all laugh at the very obvious lies, but beyond laughing is there any result of exposing them?

Responses to buffoonery dissonance

When confronted with buffoonery, people respond different ways:

  • In the case of children, they might just take stabs at whatever gets them off the hook or whatever seems to appeal to adults, and not feel the dissonance at all.
  • In the case of some senators, they appear to have developed an immunity to the shame, so they also take stabs at whatever gets them off the hook and rely on the press having a short memory. The increase of sophistication over children is only slight.
  • A conflict-avoiding person might retreat from their positions and stay safe within cultural norms, while not really facing or resolving the dilemma. (“Okay you’re probably right”)
  • A person valuing relationships over positions might soften or release principles and claims, and see multiple sides. (“It’s not so simple.”)
  • An introspective and ego-balanced person might feel the shame and admit “you got me there”. That could lead to bringing the impetus into consciousness and adjusting the position towards being rational, or having a growth moment. (However, people don’t mature in leaps like this every day, so the response would be less laudable most of the time.)
  • A person with a fighting spirit could double down and invent a more abstract justification that logically bridges the opposing ones. This is the “cement” that locks the positions in place.


Cement is all the other beliefs that we adopt to hold a justification in place after we have invented it.

In the cookie hoarding example, if I was called to account for a discrepancy in expressed principles, and I was not ready to admit that the actual impetus was to have all the cookies for myself, then I would need to come up with a new all-encompassing “reason” why my two prior “reasons” were compatible. The layers of justification can get ever more intellectual until I win. A lot of politics is essentially the art of getting all the cookies for oneself, where “cookies” can be substituted by anything, such as agri-business subsidies or stockpiling for war.

A global example is slavery. The impetus for holding slaves includes greed and aggression, but that is not admitted to directly. Instead slave-holders make up a justification that makes themselves sound innocent. Those same people might also say that “all men are created equal”, and then they might be confronted with the inconsistency. They would then need to appeal to some more abstract justification that unites the inconsistency, which could be, among other things, that “slaves are not people.” In all of this, I am making the argument that the actual sequence in time is the action, then the initial justification, then the cement. When people want to sound rational, they reverse the sequence and claim that the abstract principles were first in mind, then it led to reasons which led to the action to hold slaves.

The brain is a re-sequencing machine

While I cannot prove that buffoonery (reverse rationality) is the norm, there are cousin processes in the brain that point to backwards sequencing being something that occurs constantly as a central part of consciousness.

The first observation supporting this is that hearing is faster than seeing. If you create an experimental setting where a subject has a brain monitor and a startling noise happens at the same time as a picture on a screen appears, then there are two sequences that occur in the experiment. One is what science observes: First both stimuli are detected (the light arrives before sound but that difference is insignificant at short range). Second, the sound is processed and it signals a twitch reaction in the muscles. Third, the slower visual processing part of the brain sees the image, after the muscle reaction happened. The alternate sequence is what we as subjects believe we experienced: we are really sure that we saw and heard the stimuli at the same time, and then twitched afterwards. So consciousness is subjectively sure of something that is not true. The brain is constantly re-sequencing stimuli in short term memory to align the timeline of hearing and sight and providing a false sense of being present in this exact moment, when we really are never aware of a moment until later.

The other observation is about dream recall. Freud and possibly others asserted that dreams occur as disconnected images without the usual adherence to the laws of time and space, but that we force those images into a story during the process of recall. So the sequence is imposed later upon the original dream. Even though we feel sure we “saw” the things in a linear story format, that certainty only came after the dream was already done and we were waking up.


What’s been bugging me about the un-reasons of the Kavanaugh supporters is the thickness of that cement, the beliefs built up to support the justifications for supporting him. There may have been some rationality exercised by the people who originally put him on a short list and then selected his name – those are the sorts of activities where, at least some of the time, we can be rational. But the millions of supporting Americans can’t be doing that most of the time – their real impetus for support could be fear of losing control by white men, or some related fear-stoked groupthink, or simply being drawn to the norms of the people around them for safety.

The easiest justifications for supporting him despite his potentially disqualifying traits are (1) There is no proof that anything happened; and (2) whatever happened was so long ago. Bart Simpson has a line that covers the bases – something like “I wasn’t there. You can’t prove it happened. I don’t know anything about it.” None of Bart’s justifications are consistent with each other, and likewise the two easy justifications for supporting Kavanaugh are like that too: It is inconsistent to argue both that nothing happened, and that it happened a long time ago.

Then to resolve the inconsistency, the cemented beliefs are constructed, and they feel dangerous. These could be abstractions like “violent assaults are inconsequential” (a way of saying that if anything happened, it doesn’t matter). The only way a person can say that shamelessly is by adopting beliefs such as: rape is not a real crime, and women are not fully human in a way that makes crimes against women “real”. Possibly rape is imagined by men as not that bad, if they can only visualize themselves as the rapist. Or people have claimed the victims are fake, or that caving in to the real concerns of the opposition fuels some kind of liberal conspiracy.

So the danger is that in the rush to pretend to be rational and escape buffoonery dissonance, millions of people invent and adhere to dangerous beliefs.

My guess is that the left (including me) is doing the same un-reason, and if the nominee was a Democrat with the same history, both sides might have adopted exactly opposite “principles”. One way to test the theory is to assume a hypothetical nominee 20 years in the future, and all we know is that he probably committed some misdemeanor or felony decades earlier that was not reported, but we do not know his political leanings. What would our principles be then? Without knowing our self-interest in a question like this, we usually resist answering, and when I’ve asked people questions like this, I get a lot of “it depends”. They will not say what it depends on, and I tend to think that what it depends on is self-interest at the time, which has to remain unsaid because we rarely can admit to our real impetus.

When I try to bring my own un-reason sequence into consciousness, I get some data from the introspection, but I doubt it is really possible to know oneself enough to fully escape buffoonery. I can admit that a crime in the distant past should not be a disqualifier for most any job. Relatedly, most people working to help ex-offenders re-integrate are left-leaning, but they do not want to apply that same principle to this hearing. However I also then feel drawn to the justification that the particular job of Supreme Court justice should have higher qualifications than others. Also I am drawn to the notion that he should be disqualified for lying under oath, but at the same time President Clinton did that and at the time I felt it was not consequential because the subject matter was not of national significance. To be fair I would have to admit that Kavanaugh’s probable crimes are also not of national significance. That is about as far as I can get with looking at my own un-reason.

Teaching and change

I wonder if the balance of buffoonery versus rationality could be partly influenced by culture, or if it changes in different time periods. On one hand it feels like a solid part of how the brain is wired. On the other hand there is a case to be made for buffoonish cultural patterns. For example, when we want children to act rationally and we ask them why they did something we do not approve of, they say something to answer the question, like “I was tired” or some other justification. When we engage with them on that level, it is as if we have agreed that there has to be a lie and we can only talk about the lie, whether refuting or bolstering it, but always staying at that level. So parents can be the gatekeepers of pseudo-rationality by colluding to stay within the un-reason pattern and limiting the conversation to which lie, out of the many possible justifications, is an acceptable one to settle on. Thus we are teaching and modeling dishonesty. But maybe that way of teaching is culturally prescribed and change is still possible. If we had a cultural pattern of not engaging in pseudo-rationality with children, we might not have a culture that appears to celebrate lying so enthusiastically.

I also wonder about the cycle of shame and revision with people who are introspective: I do not know if the cycle just adds so much sophistication to the lies that they finally become convincing, or if we inch towards actual rationality. I worry that the more I cultivate the idea that I’m being rational myself, the more I’m building up “principles” that let me have more cookies without risking the shame of inconsistency. And that goes for all of us who feel that we are fair and benevolent.

I also wonder if the cycle of shame and revision is less prevalent in the internet age, and that could be why the Kavanaugh hearing seems to be so much more buffoonish than things in politics in the past. I cannot recall seeing that “you got me” shame in public in recent years, but I think I remember it from earlier. My memory might be limited to local settings and not national politics though. If that is a real culture shift, it could be related to the new internet-age phenomenon of people avoiding anyone they disagree with, so they can never be caught in their inconsistencies. Without ever being caught, the inconsistencies could grow larger without anyone noticing. Possibly related, the president now is someone who cannot be caught because he has no principles, therefore can rarely feel inconsistency or shame.


A glop taxonomy

In my lifelong quest to un-confuse myself, I have gradually awakened to the fact that I was constantly thrown off by words that are defined associatively, because classical definitions are so much more accessible. As a toddler I would have been so much more aware of the world around me if people answered my demands for meaning in the form “X is a thing in set Y, but distinguished from other members of Y by variable Z”. Because no one would give me a clear definition like this, I spent all those decades not completely sure what an ottoman was, or a hatchback, or a ranch house, or khakis or bows or blouses, or salads or tarts or barbecues. I was more clear about things that can be defined classically, such as that fermions are particles distinguished from bosons by their spin, or that a county is a kind of jurisdiction that allocates 100% of the land area of states into non-overlapping regions. That kind of definition is so much more accessible than trying to figure out why some cars with doors in the back are hatchbacks and others are not, when no one could tell me the distinguishing feature of the class. When people would debate whether a tomato was a fruit or vegetable without defining fruit and vegetable, I thought there was some mysterious classical structure behind their claims, and I came to find out as an adult that there is not, and the question itself (which stole precious minutes from my life) was wrong. A fruit is that part of deciduous plants containing seeds, and a vegetable is any edible unprocessed plant part. At the time those minutes were stolen, I was too young to catch on that the dichotomy was unreal – the two options are not members of the same parent set.

The particular problem category I’m working on in this paper is words defined by fluid consistency. For example if we assume that peanut butter is made of the two ingredients peanuts and butter, as I naturally did, then we miss the critical understanding that “butter” does not indicate animal fats at all, but is instead used as a reference to the resulting consistency of ground peanuts. Coconut milk, by the same illogic, contains no milk. All my childhood I would wonder “what is oil” and “what is wax” and no one could say because every definition would fail to account for all the other things that they would also call oil or wax but which were clearly different from the thing in question.

So it turns out, as everyone else already knew, that the meaning of words like butter, wax, and syrup is not about what is actually in the thing, but only has to do with how the thing interacts. Since the non-solid parts of us, and biomass in general, are mostly salt water, sugars and lipids, many word definitions center on how materials interact with water, sugar and lipids. A “syrup” for example need not contain any sugar – it is just anything that is “syrupy” – but then how do we know in the classical sense if something is syrupy? Answer: it has a certain range of solidity, coherence, and adherence; also it absorbs water and thus is easily washed by water. Liquid metals such as mercury might have the same solidity but they lack adherence to water, so they are not syrupy. Olive oil has the same solidity and the same adherence as something syrupy, but it repels water, so it can’t be called syrup.

The variables

These variables appear to be the most relevant to how we define words for non-solid things:

  • solidity – at one extreme, a measure of how pourable the thing is, or at the other extreme, how much it tends to return to shape (rubberiness)
  • elastic energy – the effort required to re-shape the thing
  • friability – a measure of how pressure tends to either break the thing (like tofu) or stretch the thing (like sugars), or both (like corn starch or silly putty)
  • cohesion and adhesion – the binding qualities of the thing to itself and to other things, affecting surface tension and mixability

A note on elasticity vs solidity: While they might appear to be the same thing, consider jello (and generally things called “gels”) – it is easy to reshape compared to molasses, even though they are both sugar water. Jello has low elastic energy (moves with a light touch) but high solidity (does not pour), while molasses has higher elastic energy (requires greater time or force to spoon up) and lower solidity (it pours).

There are many other variables that chemistry knows about, but they don’t appear to be major players in word definitions. For example:

  • The volatility of solvents is a phenomenon you can feel (such as how hand sanitizer seems to disappear with use) but I could not think of a common word describing liquids with that quality.
  • Density does not appear to affect word choice, possibly because everyday things have very similar densities.
  • The irregularity of a thing because of contained materials affects word choice; for example grout and other cement, sand and gravel aggregates would never be called milky or pasty even if they match the other properties of such materials. However there doesn’t appear to be an everyday noun describing aggregation.
  • The quality of surfactants, or soapiness, does not appear to have an everyday word for that class of substance behavior.

The taxonomy

Now to turn to the actual taxonomy of glop-words, I’ve placed those annoyingly un-definable words into a context of some of the variables above.

  • words for water-absorbing things with low elasticity, low friability, ordered from low to high solidity
    • water > milk > batter > glop > paste > dough > putty / clay
  • words for water-repelling things with low friability, ordered from low to high solidity (and also low to high elastic energy)
    • cream > oil > slime > grease > butter > wax
  • words for water-adhering things, ordered from low to high solidity (and also low to high elastic energy)
    • syrup > gel / jelly > marshmallow > gum > rubber

This is a limited attempt to give a classical definition to these 20 words by defining the class and the variable differentiating the thing from other things in the class.
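These classical definitions can be checked mechanically: pick the water-interaction class, then rank by solidity. A minimal sketch in Python (the function, the 0-to-1 solidity scale, and keeping only the first word of pairs like putty/clay are my own illustration, not part of the taxonomy itself):

```python
# Each water-interaction class maps to its word list from the taxonomy,
# ordered low to high solidity. Where the text gives pairs (putty/clay,
# gel/jelly), only the first word is kept for simplicity.
GLOP_TAXONOMY = {
    "absorbing": ["water", "milk", "batter", "glop", "paste", "dough", "putty"],
    "repelling": ["cream", "oil", "slime", "grease", "butter", "wax"],
    "adhering":  ["syrup", "gel", "marshmallow", "gum", "rubber"],
}

def glop_word(interaction, solidity):
    """Pick the word for a substance, given its water-interaction class
    and a solidity value between 0.0 (pourable) and 1.0 (holds shape)."""
    words = GLOP_TAXONOMY[interaction]
    index = min(int(solidity * len(words)), len(words) - 1)
    return words[index]
```

For example, a water-repelling substance at the solid end of the scale comes out as “wax”, and a water-absorbing one at the pourable end comes out as “water”.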

There are a variety of other words that describe what a thing does for you, or how it affects a process, like a lotion, balm, or detergent. However we don’t use these words as a noun class defined by the exemplar of the class, the way we do with milk, syrup, and the other words in the taxonomy.


The custodial economy

We are in a gradual economic shift in which more people’s economic lives are enveloped in custodial institutions such as day care, schools, prisons, and disability and elder services. This “custodial economy” is made up of parts of the government and service sectors, and is not normally considered a sector in itself. However it is distinct because it involves two expanding groups of people, who are neither buyers nor sellers in a market. Instead they consist of beneficiaries – the students, prisoners and other people whose lives are being occupied by and supported by the systems – and the custodians who are deriving income from running it – the teachers and wardens and so on.

This paper is meant just to shine light on this phenomenon and look at the history, economic forces, and some commonly believed myths.

The main points defining the custodial economy are:

  • The beneficiaries in general do not have jobs with earnings sufficient to support themselves. Without economic power, their world is partly or mostly run by other people.
  • The custodians are not being paid directly by the beneficiaries as they would be in a market system. Instead the beneficiaries are a third party to the transaction, with less economic power. It’s similar to the way users of social media act (often unwittingly) as a third party in the sale of data about them to advertisers. Likewise, students and prisoners neither buy nor sell the services being performed on them, but they must be there to make possible the salaries earned by the people working for those systems.
  • There are fuzzy edges to these ideas – there is not an exact way to categorize every person or every job.

Does the custodial economy encompass all poverty programs?



Do not resuscitate

(This is what came to me the night after my mother-in-law died.)


At the base of jagged cliffs in the river, the weapon and a used body remain but the pain is gone. Some things remain and some are free. Pines and soft grasses accept the coming and going of life. The Gallinas accepts the washing downpour. Everything proceeds in its cycle as if the suffering had never been.

In memory of Patricia Knoebel and all the stone-shaping waters that connected her to Earth.



On spark plugs and being present where you are

Yesterday, which spanned 48 hours, my car broke down while moving a load of stuff to Las Vegas. Cruise control decided that jumping to 6,000 RPM would be appropriate on the Cochiti hill; pistons went sproing-a-tattatat; oil all gone. Until They came, the predicament had no solution because tow trucks must – according to some towing logic – leave your trailer there on the highway, amongst rain and pillagers. Towing was the only plan I could think of, but it would amount to giving away the trailer with three chain saws, chicken wire, a table, back packs, and basic necessities such as a croquet set and other things I can’t remember.

They came flying by and some kind of flash happened in her wild mind. Stop, she said, and that was final. (How can we leave the lady there? Can’t.) They came back, found some oil, checked everything. They were both mechanics. She explained she learned how to fix everything because she could not afford to get it fixed. The questionable tail lights caught his attention and he went to work on that too. She alternately hugged me and diagnosed the oil pump.

They helped tenaciously from that sprinkly warm afternoon through nightfall in Santa Fe, through two trips to Vegas, and sunrise outside Tecolote. What do they want, I kept thinking. If they were con artists, they were not very good at it. They were somewhat intolerable to be around in the pushy way of people selling, but they were not wanting money or anything. All sleepless night I thought, surely they put water in the oil reservoir, and they overheard me telling my address, and a looting was surely in progress. They would disable my car, then conveniently “run out of gas” outside Pecos after waiting til the bats go home when no one would ever know what happened. I facebooked my coordinates because I think ahead.

I noticed two interesting things about their language. One was the omission of names, hellos, goodbyes, or anything of protocol. You just start interacting in the middle of the conversation and end in the middle. Or just don’t talk if that suits you. The people involved in this thing were normally referred to as The Man or Dude (interchangeably as there were two of them, the talking one and the other one), Home Girl (the dominant one who said Stop), The Lady (that was me), Her (her who owned the Suburban), and the nephew. In this context, the formality of “nephew” stood out as oddly specific. In the 14 hours, no one asked my name or said theirs. Home Girl and I were both the age of grandmothers.

The other oddity – a syntax rule – was a way of phrasing possessives. It’s become a meme to say “my baby daddy”, or to laugh at people saying that. But their dialect took this further. The main man held up a device and said “This: home girl baby daddy phone” with no verb or prepositions, like you might (not really) say in German Das Heimmaedchenkindvatihandy, leaving the word for the actual thing as the last element in the compound word. Romance languages would start at the other end of the chain of nouns and say this is the phone of the dad of the baby of my girl, putting the word for what it is – a phone – first. English is historically undecided between those two syntaxes, apparently excepting this German-leaning dialect of Spanglish.

I also noticed two patterns about behavior. One was living in the moment. Really in the moment. Not filling up the gas tank before driving in remote country in the middle of the night. Not considering what he would do without a tow bar once he got there. Forgetting food.

The Man talked about his own Suburban (currently missing since being “borrowed” without notice) being a gas hog at one time, and then he put in new spark plugs and that raised it from nine miles per gallon to something more affordable. He said he would rather spend the ten dollars for new plugs “up front” rather than spending $60 in gas each time he drives to Albuquerque. But he announced that as if it was radical to do anything preventive or with foresight, while to me, 10<60 is pretty simple math so of course you would do that. On the other hand, the suburban of Her, which he had “borrowed” for today’s adventure, did not have features like updated spark plugs, so it still got nine miles to the gallon, a fact that one had a lot of time to contemplate in the silence that happens without gasoline.

The other behavior that was impressed on me was the dedication to being responsible that is so full that they could not seem to even consider dropping a commitment. So he stayed up all night through sunrise because I had no other choice. A middle class American would have set a limit: “It’s 2 AM, so I can’t help you any more!” That thought didn’t seem compatible with their whole way of thinking. When Home Girl saw me, stopping to help me became true forever, not just for a reasonable amount of time. It wasn’t up to me to refuse or for them to reconsider. There was no undoing of that flash.

These are poverty behaviors, they say. If people could learn to think ahead and set limits, they would get further; they would get out of the cycle of poverty. Rich people buy in bulk and do things in a durable, planned manner, so they actually spend less and save more, while the poor are forced to live crisis to crisis eating at convenience store prices and paying for gas because of not paying for tune ups. The story is that poverty thinking is the problem that keeps them always on their last ten dollars.

I agree that people who tread on others, accumulate, and hoard do get ahead. I’m just not sure I know which of the two sides is the one with the problem.

He knew how to fix tail light bulbs with scrap bubble gum, spending nothing as habit because habitually there is nothing to spend. While scientists may not have discovered it, he had become an expert in getting ambient air pressure at certain temperatures to raise gas fumes into an engine when there is no gas. He clearly had years of experience coaxing life out of broken things.

I wondered if she was a calm heart of gold on the inside with tarnish and rough edges on the outside, or maybe a con artist to the core with a thick layer of deceptive frosting. Both seemed to be true; the layers were so thin and so sandwiched together, that it was impossible to tell which was on top and which was beneath. She was in some larger battle in life and like me, losing it most of the time but never giving up.

I had a sense from them that their role in their fleeting social network is keeping things zipped together and resisting entropy. The Other Dude did not have that role; he ran reactionarily into the twilight of Tecolote, which is not walking distance from anywhere, and did not reappear in this drama.

Demonstrating the power of the Suburban while still in Santa Fe, the main man flew over curbs, and not all our stuff remained on the trailer. One of the things about moving is, if you cannot remember what else you had before, after a portion of your belongings launch from your trailer into the night, you might not have needed those things.


Code forests

This paper is about a layering paradigm for enterprise scale software called a “code forest”. The paradigm is a tree-shaped database of elements that compose the code base, with their full content and revision history. Developers edit the database rather than the file system. I will get into details on what that means, but first I want to start with a list of problems that the approach improves upon.

Why we need code forests

In my examples I’m using C#-like code and terminology, but it is the same concept with Java or any language designed for enterprise-scale software. In this context “enterprise” means possibly millions of lines of code and multiple tiers with overlapping legacy and new products.

Some problems with today’s large code bases:

  • Depending on the language there are now three or more competing naming systems permeating the code base. These include the class names and optionally hierarchical namespaces of classes; names in the file system; and names of folders, projects and “solutions”. The only reason for the complication is the history of adding tools on top of other tools; it is not needed. A code forest only has one naming scheme.
  • Developers are usually forced to deal with source files. The base unit in programming and compiling is traditionally the file, but it does not have to be that way. A source file is not a meaningful concept in the compiled product. A code forest allows you to work with named elements in the forest, not files.
  • Developers are currently forced to deal with deployment considerations when writing classes. A code forest allows deployment decisions to be made completely separately.
  • The skills and other team characteristics needed to manage a code base are different than skills for writing classes and methods. A code forest helps teams do forest management as distinct from code quality management.
  • Documentation and understanding often decline as code bases get bigger. A code forest organizes code with documentation about code structure in the same place as the code itself.
  • Unwanted dependencies and friend dependencies creep into code bases as teams get bigger. A code forest makes creating dependencies an action that you have to take explicitly, so there can be no dependencies creeping in accidentally.
  • Source control is standard practice now, but technically it is an optional layer on top of a non-source-controlled system, and it can be broken, avoided, worked around, misused, or not be fully integrated with the development tool, leading to complications. A code forest is source control to begin with, so it is impossible to not use source control with it.
  • Developers can spend a great deal of time recompiling “the universe” of code when only one line changed. The compiler is usually too unaware of the code layering to optimize away unnecessary work. A code forest allows compiling to be based on changes only.
  • Visibility of class members is not flexible enough, with only public, private and protected as the options. Sometimes you need visibility in more complex ways, but we end up making too much public. Code forests control visibility exactly.

Basic definition of the code forest

A code forest is a forest in the graph-theory sense – a set of trees with any number of roots – of nodes (which I’ll call “code elements”), along with the revision history of each element and a map of all dependencies between all elements. It can also include the concepts of code branches, commits, and other source control features.

Each element is composed of an expression in the form “visibility-spec name = element-definition” and a separate area for typing definitional or contract comments. Some example elements are shown here:

  • public A = 3
  • B = int (string s) { return s.Length; }
  • visible C = class { … }

The examples show a variable element, a function element, and a class element, respectively. The class element will have child elements inside it. The only type of element that allows fairly long definitions is the function body. Since most functions are ideally fewer than 20 lines, that means source control operates on much smaller units than we are used to.

You may be questioning the function and class syntax. I am not concerned with exact syntax in this paper. There are many function syntaxes, and the one used here is chosen simply because it puts the name on the left of the equals sign so it is consistent with all other definitions. We are assuming that the type of any element is unambiguous from the definition, so in the example, A is known to be an integer. Classes also use the name = class syntax.
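Concretely, an element can be modeled as a small record holding its visibility, name, definition, and children. A sketch in Python under the assumptions above (the field names and class are my own illustration, not a prescribed format; the definitions are kept as C#-like strings):

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """One node of a code forest: a named definition with optional children."""
    name: str
    definition: str                 # the element-definition text
    visibility: str = "private"     # e.g. "public", "visible", "private"
    children: list = field(default_factory=list)

# The three example elements from the text:
a = Element("A", "3", visibility="public")
b = Element("B", "int (string s) { return s.Length; }")
c = Element("C", "class { ... }", visibility="visible")

# A class element holds its members as child elements:
c.children.append(Element("Name", '""'))
```

The point of the record shape is that everything about the element – including, in a fuller model, its revision history – lives with the element rather than in a file.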

To organize millions of lines of code, one can think of them all as one giant file with a lot of nesting. Of course you would not display it that way because of its size, but it is one logical way to display it. Replacing brackets with indented bullets to indicate the tree shape, that would look like this example:

  • PersistentData =
    • public Person = class
      • Name = ""
      • IsSally = bool () { return Name == "Sally"; }
    • Team = class
      • Members = new List<Person>;
  • UI =
    • Person = PersistentData.Person
    • ThisUser = (Person)null;

A team of developers can be branching, editing and merging elements all at the same time. The editable unit is the element; there is no need to “check out” or edit whole classes as a unit.
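Since the editable unit is the element, revision history attaches to each element rather than to a file. A toy sketch of per-element history (the class and method names are invented for illustration):

```python
class VersionedElement:
    """Toy code element whose revisions are stored with it, not in a file."""
    def __init__(self, name, definition):
        self.name = name
        self.history = [definition]   # revision 0 is the initial definition

    @property
    def definition(self):
        """The current definition is simply the latest revision."""
        return self.history[-1]

    def edit(self, new_definition):
        """Editing appends a revision; nothing outside this element changes."""
        self.history.append(new_definition)

# Editing one function touches only that element's history:
is_sally = VersionedElement("IsSally", 'bool () { return Name == "Sally"; }')
is_sally.edit('bool () { return Name.StartsWith("Sal"); }')
```

Two developers editing two different elements of the same class would therefore never conflict, because nothing larger than the element is ever checked out.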

Layer views

You can also look at a code forest visually, showing boxes for the organizational classes and arrows denoting dependencies. Here is an example:

The example comes from an earlier paper “Megaworkarounds” –

The advantage to this kind of view is that it shows how the code is layered. Tools can also allow you to draw layers and drag elements to change the structure of the code base. For example, you could draw a box around a number of functions dealing with the same thing in an overly complex class and create an encapsulating layer.
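The dependency arrows in such a view are first-class data: as noted in the problem list, an edge exists only because someone explicitly created it, so nothing can creep in accidentally. A toy sketch of that bookkeeping, with invented names (not any real tool’s API):

```python
class CodeForest:
    """Toy forest tracking elements plus an explicit dependency map."""
    def __init__(self):
        self.elements = {}      # element name -> definition text
        self.depends_on = {}    # element name -> set of names it may reference

    def add_element(self, name, definition):
        self.elements[name] = definition
        self.depends_on.setdefault(name, set())

    def add_dependency(self, src, dst):
        # The only way to create an edge: an explicit, recorded action.
        if dst not in self.elements:
            raise KeyError("unknown element: " + dst)
        self.depends_on[src].add(dst)

    def may_reference(self, src, dst):
        """A reference is legal only if its edge was explicitly declared."""
        return dst in self.depends_on.get(src, set())

forest = CodeForest()
forest.add_element("PersistentData.Person", "class { ... }")
forest.add_element("UI.Person", "PersistentData.Person")
forest.add_dependency("UI.Person", "PersistentData.Person")
```

Because `may_reference` consults only the declared map, the layer diagram drawn from `depends_on` is always complete and current, rather than reverse-engineered from the code.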


Complete and incomplete covers in engineering

I confess I have been irritated my whole life about car dashboard controls for heating and cooling because they are an incomplete cover for the complexity that is going on inside. It has been a rough few decades for user interface enthusiasts!

What is a complete cover? It is a layer or shell over some machine complexity that completely hides it and does not let any of the complexity out. A cover is incomplete if it forces you to understand what is going on underneath, or if it is confusing when you do not understand, or if the cover is insufficient to operate all aspects of the machine. A cover can be thick or thin – the thicker the cover, the more it changes the paradigm of the machine interaction. A cover is optimal when it is complete, regardless of whether it is thick, thin, or absent. Sometimes it is optimal to have no cover.

I will explain this with some examples, starting with a mechanical mercury thermostat. There are three kinds of people in relation to these devices: (1) Those with a gut fear reaction when they look at dials and numbers; (2) those who understand the two exposed dials – measured temperature and set point – but do not know or care how it works inside; and (3) those who understand that the rotation of the temperature-sensitive coil which is superimposed on the rotation of the set point tips a mercury switch, that the bi-stable 2-lobed shape of the mercury chamber affects the temperature swing, and why mercury is used in the first place. It is a lovely thing, but not really the scope of this paper. I am mainly concerned with the middle category of people who are functional operators of the cover and what kind of cover it is.


The thermostat is a complete cover because you can operate every aspect of the heater with it, without needing to know how it works. It is also a fairly thick cover in the sense that it translates one paradigm to another. The actual heater requires an on/off switch to work, thus the only language it understands is on/off. But the thermostat exposes a set point to the user. It translates the language of on/off to the language of set points. Someone could replace the whole heater and wiring with a different inside paradigm but leave the exposed paradigm there, and the user would not need to know that anything changed, because the operation stays the same. In many systems – especially software systems – the replaceability of layers is an important design point, and complete coverage is one of the factors that makes it possible.
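In software terms, a complete cover is a layer that fully translates one paradigm (set points) into the machine’s paradigm (on/off), so the layer beneath can be swapped out. A minimal sketch, with invented class names:

```python
class Heater:
    """The machine underneath: the only language it understands is on/off."""
    def __init__(self):
        self.on = False

class Thermostat:
    """A complete cover: exposes a set point, hides all on/off switching.

    Any object with a boolean `on` attribute could replace the heater
    without the user of the thermostat noticing the change."""
    def __init__(self, heater):
        self.heater = heater
        self.set_point = 20.0

    def observe(self, measured_temp):
        # Translate the set-point paradigm into the on/off paradigm.
        self.heater.on = measured_temp < self.set_point

stat = Thermostat(Heater())
stat.set_point = 22.0
stat.observe(18.0)   # below the set point: heater switches on
stat.observe(25.0)   # above the set point: heater switches off
```

The user only ever touches `set_point`; the cover is complete because no operation of the heater requires reaching underneath it.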


On weeds and keystones

Las Vegas is an optical illusion. At first it looks poor; the city eye is drawn to cracks in the pavement and boarded up businesses, and one expects to feel poverty. Once an ornate and grand city, larger than Albuquerque, the town now shows age and depleting resources with fewer people. But then nothing bears out the expected feeling, and over time the eye learns to see different things – the beauty that is still there.

From my one window I see weeds, graffiti, and a muddy puddle in an empty lot. And I also see hand-set bricks in arches with stone sills, keystones, quatrefoils, with elms and aspens. Out the other window there’s a quintessential abandoned factory with sawtooth shaped roof, a highway bridge, and a stone hotel with a belfry and artistic parapet. With so much variation there is choice – what do I choose to see?

It reminds me of Pisa, Italy. I still have a picture I took of a goat eating weeds in a neglected brick-strewn lot, next to a crumbling plaster wall, in bleating distance from the throngs of leaning-tower photographers.

On a dumpster diving errand today I found nothing, and everything was surprisingly clean. Investment in the big city is equated with wealth, safety and the standard of living. But in reality, the distribution of money does not entirely control the use of time. New cities in the west exist because of greed, not because of natural necessity in the way port cities exist. Subdividing land, the innumerable rules, and smooth new concrete all make someone rich and define the city. Homelessness is illegal, and those who can’t meet the wealth standard congregate only where enforcement of all the rules is lacking, where there is less safety. So the city is an engine of separating haves from have-nots to its very core. And it fogs one’s brain with the urgency of the struggle to have.

Politics in the west is the art of profiting from subdivision and controlling public utilities. The desert is almost free, but the value of a residential zoned quarter-acre with water and electricity is enormous. We don’t all share in that value. The winners are the ones who approved the subdivision plat on their own land.

On my errand the thing I realized is that if I myself owned things like sidewalks and too many buildings, and didn’t have enough money to make it all nice, I’d choose to spend it the way Las Vegas does. It would not be a priority to fix all the pavement. We have choices about equity and we can choose between concrete and education.

The growing city as an engine of segregation and uniformity gives a person that city eye that believes it sees education when it sees nice concrete. Nice, safe, pretty and educated are supposed to go together, and dirty, crumbling, dangerous and desperate are supposed to go together. But those are false pairings; if money is tight, we can choose education over concrete rather than assuming the two must come together.


On healthcare, disentangled


The current debate in Congress on healthcare is so hyperbolic and disingenuous that I felt it was time to actually pull out the threads and uncrumple the ball and lay it all out.

I will talk about the moral dimension, then the financial dimension, then the health dimension.

The moral dimension is simply this question: do we help each other through sickness and in health, or do we take the opposite extreme of every man for himself (women be damned)? Or do we take some middle road? Throughout most of my life, the moral choice made in the US was that middle and upper class people help each other out as a group, while we helped the working poor to a lower standard, and we essentially let the underclass die. Our sense of shame prodded us to ease the brutality of that death sentence somewhat by setting up a safety net. While that safety net was significant (medicaid, medicare, emergency services for the uninsured, for example), its moral foundation was that “we” treated “them” as inherently lower and less deserving. For generations health was never considered a right, and the debates focused on to what extent the recipients of “our” generosity were worth the expense.

The ACA fundamentally challenged that moral stance by declaring that we should take care of all of us to a more reasonable minimum standard, and it set in motion a trend towards more universal insurance coverage.

The current debate is completely hostile to that moral advance of the ACA, and I think racism and class superiority are driving forces. Some of the people who hate the ACA really want an underclass to exist, they enjoy winning, and nothing stimulates their competitive brain receptors like seeing the underclass waste away while they sip drinks by the poolside. They realize the ACA threatens to bring in more equality, so in retaliation, they ramp up their attack with an agitated fury and logical vacuum that can only be explained by the fear of the loss of their position in society. That is the moral failure in today’s debate, and it is driven by psychological forces that people experience when they lead narrow unexamined lives.

The financial dimension is a bit complex but important. Many people misinterpret the incentives of the four parties to each transaction, so I will lay them out:

  • Patients want to avoid doctors when we are well, but we also want the power to buy health-related services at only the level we need when we need it, no more and no less. It is important to see that there is not an infinite demand for services; thus patients are not the driving force behind cost escalation. But we want to be able to spend millions of dollars if needed, thus some kind of risk pooling is in our interest. It is not very relevant to us as patients whether costs are pooled by a public instrument or a private one. Like all consumer choices, we will minimize our personal costs and if insurance is not required or favorable, we will not buy it.
  • Providers (doctors, hospitals, etc) are private entities with the incentive to maximize income. Like anyone selling anything, they will do whatever it takes to make more sales – upselling, advertising, monopolistic practices, and lobbying. While the individuals involved in that system usually want to care for people at a personal level (they chose to go into that line of work), their corporate structures have the incentive to care less and charge more.

The first two of the four parties – buyers and sellers – operate just like in any other kind of financial transaction, but with healthcare, there are third and fourth parties.

  • Insurers and underwriters are private or governmental or non-profit organizations that provide the pooling of risk, taking a cut of the sales. Their incentive is to pay less out and charge more, but they operate in a market and under regulation, so they must stay within acceptable limits to stay in business. The important function of insurers is to determine what is an acceptable expense – more on that below. Many people on both sides incorrectly blame insurers for cost escalation and other problems, but insurers actually have the incentive to lower costs, so they are just a distraction from the central problems.
  • Courts are the final party involved in the money side of things, because they ultimately rule on insurance claims when patients appeal, and thus ensure the insurers are following their own rules.

Costs can only be kept “correct” (not artificially low or high) if there are market forces at play, and the root reason why health costs have gone up in the last decades is that the market forces are not strong enough. Markets must have buyer choice and seller choice to be true markets, but in the US consumers cannot effectively shop around for health-related prices, so there is too little choice. Insurers commonly make deals with providers that cap prices on each procedure (more proof that insurers are on our side), but they are not allowed to cap the number and kind of procedures done.

Countries with more efficient health delivery systems (that is, all other countries) achieve that because they do fewer procedures, do them more efficiently, do not spend as much on marketing, and do not pay the doctors and CEOs outlandish salaries. Their cost savings are not achieved by pooling (insuring) differently.

The ACA changed the way the parties collude to set costs in some important ways, but it did not change the basic set of incentives. The main changes were that insurers were required to spend 85% of revenues on health costs, and they no longer could deny coverage (thus their whole business model became simpler, and they downsized). So under ACA, insurers are a less important variable in the cost equation than before.

There is an important link between the moral and financial dimensions, which is the question: what do we do if someone is not insured and they get sick and need help? They were not paying their share into the risk pool to help others, so when they need help, do we help anyway? (The same question is asked in the Little Red Hen story.) Or do we let them get insurance when they need it? Ultraconservatives say no, if they failed to think ahead, we should let them die. They are right on purely economic grounds for the same reason that if you sustain a loss of property that was not insured, you are out of luck; no one will pay you back the value of your loss. But it is clearly barbaric for us to live like that. If we really pause to consider this, we can only come to the conclusion that if we do not want to be barbaric, we need to require people to be in the risk pool. That is, either we have a public pool that automatically covers everyone, or else we require them to be in a private insurance pool. It does not make moral sense to have the “choice” to be uninsured.

The ACA was a compromise plan that favored the private pools to appease conservatives, and one of its failures was that the penalty for not being in a pool was too small; thus its adoption rate has been gradual.

The second financial dimension concerns the role of health spending as a tool of wealth distribution and equality. (I was talking about spending on health itself above, and now switching to issues of taxes and credits.)

Pre-ACA, health spending wasn’t tied in any rational way to income equality, so those expenses, being relatively equal across economic tiers, were a “regressive” type of expense; that is, one whose percentagewise impact on the poor is greater than on the rich.
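To make the “regressive” point concrete, here is a sketch with purely hypothetical figures: a roughly fixed annual health expense consumes a much larger share of a small income than of a large one.

```python
# Hypothetical numbers: a fixed $6,000 annual health expense
# hits lower incomes much harder as a share of income.
expense = 6_000
for income in (30_000, 60_000, 200_000):
    share = expense / income * 100
    print(f"income ${income:,}: {share:.1f}% of income")
# 20% of a $30,000 income, but only 3% of a $200,000 income
```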

One of the most important effects of the ACA was in how it changed the distribution of wealth generally. It shifted the tax burden and entitlements on a gigantic sector of the economy such that wealthier people were paying a much larger chunk of the cost of health services for all of us, and many more people were getting those services at little or no cost. If you’re into active public management of poverty (as I am), this form of progressive taxation was a good start. Two or three more programs of that magnitude (such as in housing, food, or transport) would have made the US more like the compassionate socialist European states.

The condensed version of the way the ACA was made affordable is that Medicaid was theoretically expanded to include more people, and additionally, if you graduated out of Medicaid (by earning too much to qualify), you still qualified for credits towards premiums. Thus the coverage gap that had existed previously was mostly closed. (One of the persistent failures of US social programs is that they often have a hard cutoff, so people have an incentive to stay poor to remain eligible.) As income rises from around $30,000 to around $90,000 per year, the credits phase out.
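As a sketch only: the phase-out can be pictured as a credit that shrinks steadily across that income range. The numbers and the linear shape below are hypothetical, not the actual ACA formula (real credits were pegged to the federal poverty level and benchmark premiums).

```python
# Hypothetical linear phase-out, NOT the real ACA calculation:
# full credit at or below $30,000, shrinking to zero at $90,000.
def premium_credit(income, full_credit=5_000,
                   floor=30_000, ceiling=90_000):
    if income <= floor:
        return full_credit
    if income >= ceiling:
        return 0.0
    # fraction of the phase-out range still remaining
    remaining = (ceiling - income) / (ceiling - floor)
    return full_credit * remaining

print(premium_credit(30_000))   # full credit: 5000
print(premium_credit(60_000))   # halfway through phase-out: 2500.0
print(premium_credit(100_000))  # above the ceiling: 0.0
```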

The current agenda is being set by the uberrich, who seem to always want more money, so the brunt of their health reform proposals is to reverse the ACA taxes and entitlements. They say the ACA is broken and use terms like “choice”, but all of that is lies and distraction; their actual motive is that they do not want to pay the taxes, along with the racist/classist motives noted above. They propose reducing Medicaid and eliminating income-based help with premiums. Not only is the entire dimension of the health system as a tool of equality being chopped, it is even proposed to be reversed by creating credits that the wealthy qualify for.

The health dimension includes the questions of what gets done for patients, who decides what gets done, and whether it is effective. Amidst all the noise about choice and rising premiums, these questions are not making the news. The current debate is completely missing the much larger factors of what the money gets spent on. We should be debating the finer points of who gets to decide whether to do each procedure and how much they can charge for it. Or we should be implementing more market forces to keep those prices under control. One of the ways to shift incentives is to pay for outcomes rather than procedures, so insurers only pay after the patient is treated successfully, instead of paying simply because something was done to them.

One of the big principles missed by conservatives is that people will not make good decisions about insurance in an unregulated market. Generally speaking we will under-insure ourselves if given too much choice. We might choose a plan with a $1 million lifetime cap, because that seems like a lot, but then need $2 million to survive cancer; having made that choice when we were not thinking we might get cancer, we end up dying because of it. Or we might choose a plan that does not cover some drug that we never heard of, and then end up needing that particular drug.

We also do not know what procedures we need if we do not happen to have medical training. But on the other hand we cannot let doctors decide everything, or they would simply order every known test for every patient and drive prices up forever.

So the question of what gets done ultimately has to be a community decision – made either publicly or by insurers backed by courts. It does not make sense to make those decisions as individuals or as providers. The conservative notion that “doctors and patients” will decide everything on a case-by-case basis is naive and does not contain costs. The ACA took a rational approach to that question by making those choices nationally, and putting into law specifically what had to be covered for everyone.

What do we do next? There are a lot of ways to rationally pay for healthcare costs. Here is the super-consolidated list of points that would need to be decided:

  • Who sets prices – There has to be a market force limiting the ability of providers to set runaway prices. (This point is rarely mentioned in debates, but is assumed to be the role of insurers.)
  • Who decides what procedure is done – There has to be a market force limiting the ability of providers to do unnecessary procedures. (This point is rarely mentioned in debates, but should be central.)
  • What choice of doctors will you have – If insurers are allowed to control prices, they have to limit choice of providers as a way to do it. If you want to be able to go to any doctor and they can charge whatever they want, then there is no way to control runaway prices. (Republicans pretend to favor choice but have no plan that makes sense; Democrats pretend to favor choice but actually favor insurer price controls.)
  • Self-pay versus risk pooling – Only the 1%s could afford to pay full medical costs without pooling their risk, so all the rest of us need to pool risk. However, some chunk of the middle/upper income people could pay for a fairly large portion of typical medical costs if they accumulated money in health savings accounts (HSA), thus partially being their own insurer. (Everyone is assuming the combination of insurance and HSAs as far as I can tell, but Republicans want to expand the use of HSAs.)
  • Who gets included – Let’s assume that our goal is to care for everyone equally, and leave no one out. So the baseline assumption is that everyone is in a cost sharing pool of some kind. (Democrats generally favor this; Republicans generally oppose it.)
  • Mandated coverage – This is really another word for who gets included; it may sound draconian to say “mandated”, but it is how every other country does it. (Democrats generally favor it; Republicans generally oppose it but offer no rational alternative.)
  • Penalty for not being covered – Pre-ACA, the penalty for not being covered was that once you developed a condition, you could not get covered for it at all, or only after a long wait. Thus in some cases the penalty was your life. Starting with the ACA, the penalty shifted to a simple tax payment. Another alternative is paying higher premiums after a coverage lapse. Another alternative is to automatically include everyone, avoiding the question of enforcing a penalty. If there is a choice in coverage, there logically has to be a penalty for opting out; otherwise the insurance market would collapse. A lot of people do not get this, but it is the main thing we need to get if we insist on using the insurance model for health costs. (Democrats generally favor the tax penalty or universal automatic coverage; Republicans favor a penalty through higher premiums.)
  • Change in coverage – Risk pools inherently require people to pay into them as a group when they are not sick, and by the same token, you would need to pay for the level of insurance that you might eventually need, before you need it. The strategy of buying minimal insurance while healthy and then switching to better insurance when you get sick undermines the whole concept of risk pooling. The ACA dealt with this problem by limiting the period of enrollment in a plan to the calendar year, which was not sufficient, since it would be economically favorable to wait out the year and then switch, for those who develop a chronic, expensive condition. Other solutions are automatic universal coverage, higher premiums (as above), and longer enrollment periods such as 3-5 years. (Democrats favor the ineffective one-year period and do not seem to have a solution; Republicans favor higher premiums.)
  • How the risk pools are grouped – There has to be a way to decide who is in a pool together, if there are going to be multiple separate pools. One way is to have the whole country in one pool. Another way is by employer. Another way is by insurer. Pre-ACA, continuously insured people were in pools by employer, or by insurer if covered individually, while those who could not get insurance in the market were either in the Medicaid pool or in public high-risk pools. With the ACA, this mostly did not change, but the ACA “exchange” established separate pools with a more transparent market. (Democrats favor a universal pool or the ACA compromise; Republicans appear to favor the tiered pre-ACA pool system.)
  • Who underwrites the shared risk pools – This is the question of insurance backing, and currently includes government, quasi-government public insurance companies, non profit and for-profit backers. (Everyone appears to be sidestepping this and is OK with the current slate of complex options.)
  • Who pays premiums – This is the question of whether consumers pay directly, through an employer, or via taxes. Pre-ACA, taxes paid for most of medicaid, most people with private insurance paid through an employer, and some paid directly. ACA did not overhaul that system, but it added a major tax credit component, such that insurers could collect part of the premiums monthly from the government and part from the consumers. (Democrats favor the current complex system or universal “single payer” via taxes; Republicans appear to favor the pre-ACA system.)
  • How is poverty handled – This is the question of whether and how the health system trends towards more or less income equality. As I said above, the ACA had a system of credits and overall expansion of low-income support. (Democrats favor the current ACA system; Republicans favor a regressive taxation/credit system instead, which makes it impossible to fully include the poor in the whole healthcare system, leading to countless deaths.)
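The penalty-for-opting-out point above can be sketched numerically. Below is a toy adverse-selection model with made-up expected costs: if healthy people can leave a pool penalty-free whenever the premium exceeds what they expect to spend, the break-even premium keeps rising until only the sickest remain and the market collapses.

```python
# Toy adverse-selection sketch with made-up numbers: each person
# opts out when the break-even premium exceeds their own expected
# annual cost, and the premium resets to the average cost of
# whoever remains in the pool.
expected_costs = [500, 1_000, 2_000, 4_000, 8_000, 16_000]

pool = list(expected_costs)
premium = sum(pool) / len(pool)      # break-even premium
while True:
    stayers = [c for c in pool if c >= premium]
    if stayers == pool:              # no one else wants to leave
        break
    pool = stayers
    premium = sum(pool) / len(pool)  # premium rises as the healthy exit

print(pool, premium)  # only the costliest member remains
```

With these numbers the pool unravels in a few rounds until only the $16,000-a-year member is left, paying a premium equal to their full cost: pooling has vanished.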


Given the huge range of ways to do things, here is what I would do. My first choice would be single payer via taxes, automatic universal inclusion, and essentially removing the insurance component. The market forces would be created by publicly setting rates for outcomes, allowing providers to compete by minimizing the procedures necessary to achieve the outcomes.

Given that my first choice is a political non-starter, my second choice would be to keep the ACA with these few changes: (1) improve the ability of insurers to negotiate prices and procedures, (2) shift to fees based on outcomes, (3) increase the tax penalty for non-coverage, (4) lengthen the enrollment period to 2 years, and (5) phase out employer-sponsored plans.

Conclusion: There are complex choices to make, there is no one obvious best answer, and Republicans (mostly) are clouding the issues with a steady stream of lies to the point where meaningful debate is impossible.


(edited 3/11 to add more on medicaid and poverty)
