
Foreword

Just over a year ago I started this journal as an outlet to brain dump about the thought stuff that emerges while career-ing as a professional software engineer. Herein you’ll find teases, fancies, adventures, inquisitions, pretenses, punditry, delves. The blog as sympathetic ear, I suppose, for an epistemophile winding his way through layered systems. There’s just so much to consider in the code, as we say – perhaps presumptuously believing there is already a there there. Of course, we must also consider what’s around the code. There’s a lot of material to deal with: how to read it, write it, execute it, change it, share it, delete it, etc…

Table of Contents


Reading my streaming fancies with Le Guin’s Carrier Bag Theory of Fiction

Tags: Ursula K. Le Guin, tv, teens, xenofeminism
10/4/2020

Why do I have such a big attraction to teen fictions (like Never Have I Ever)? For a while I’ve been thinking it was the force of a stream, a soak, a wash of saudade made lighter by revisiting that time from a better place; a chance to relive with empathy for myself, to reclaim that time from the incapacitating fear of being vulnerable, navigating coolness in the 80s/90s (a now more mainstreamed cultural topic since all storytellers are approaching their strength ~40). The journeys of these post-modern (mostly young women) protagonists through insecurity, heartbreak, the web of complex material and digital norms and hegemonies; the care they find for themselves and other people, with other people, emerging whole-enough with their collective–family/friend/mentor nexus; learning to survive the patriarchal-colonial system, becoming disloyal, anti-heroes. It’s all a differing sort of story from the “prick tale”, where the Man-hero takes up arms against foe, real or imagined, creates the world in his vision; fealty to a cause which is only ever his own desire; perhaps the desire of the original Man he has inherited through gruff, nearly painful handshakes through time.

But the feel-good feel goes beyond. I think something bold happens in these stories.

I’m a bit new to Le Guin’s retelling and revisions of our myths and Man’s realization of history through story, but it seems these new sanguine cultural flows might map well to the technologies of The Carrier Bag Theory of Fiction.

If I am allowed a primitive literary-criticism gloss: what is picked up along the way is the misinformation and guidance from elders (both useful to (re)combining a new narrative of self), the wrought learnings from misfired expressions of need (for “boys”, peer acceptance, spiritual stability), the new kinds of love from parents and friends that evolve from changing circumstances. It seems especially apparent in the serials. Each episode there’s a satisfying build, a paced emergence that’s lovely because the wins are not blood-soaked triumphs. Transitions, evolutions, forks, back-tracks, shifting winds, climbs, descents; it’s a kind of Sunday hike through the woods with the Fellowship of the Ring in peacetime. Not the “linear, progressive, Time’s-(killing)-arrow mode of the Techno-Heroic” (even the fantastical dragon time appropriations of Percy Jackson still fit the prick), but story that describes “what is in fact going on, what people actually do and feel, how people relate to everything else in this vast sack, this belly of the universe, this womb of things to be and tomb of things that were, this unending story.” Unending…but not Neverending. (Though perhaps a re-reading of Bastian’s adventures is due now that I’m starting to finally put things into this sack (of a blog).)

Motherland: Fort Salem and Cobra Kai kinda fuck with all this too. Prick tales recast and modulated are problematic. They can be awesome and interesting because they don’t totally abandon tete-a-tete action (which, when crafted through martial arts, is personally gripping (especially for an amateur artist of the martial myself)); and yet, these imaginative alternate worlds and thoughtfully respun contemporaries can still carry out seditious ploys. Maybe we can think about Motherland and Cobra Kai straddling prick tale and carrier bags as tactics and technologies of xenofeminist adaptations, calibrations, speculations that transform through “seeping, directed subsumption”; since there is no pure refuge from patriarchy and the eyeballs of grief-stricken viewers, anyway. Liminal bacteria waved through by situated materialist praxis. I couldn’t reject Cobra Kai whole cloth even though much of the character development of the teens is wrought through pugilisms (everyone literally joins a dojo); its softenings and care for men feel valuable when toxic masculinity runs rampant in the culture. Also, I wonder about the dojo. The split lips and bruises that result from Johnny’s tough, mangled love create openings; the kids can take more; take in more; become spacious (punching) bags. Face hits could be a technology of transformation when there are limited options for situated emergence in the hyperreal suburban desert of early 21st century SoCal. How big of a purse do we have here??

The immune system, rather than being imagined as a force of protective soldiers made by the host, can be thought of as a group of passport control agents and bouncers. They know who to let in and who to keep out–and they have learned this through millions of years of evolution.

Holobiont by Birth: Multilineage Individuals as the Concretion of Cooperative Processes, Scott F. Gilbert

Delighting in teen fiction is not a trifle. “It matters what worlds world worlds,” reminds Haraway. And of course, a most important reminder from Le Guin, lest we believe the prick tales (which, left unchecked, could deliver exponential inequality and/or the loss of ecological balance): “People who deny the existence of dragons are often eaten by dragons. From within.”

Mixtape 9/30

Tags: mixtape
9/30/20

Not a day goes by when I don’t witness the over/mis-signifying of “meeting,” which damages possibilities for elevated communication.

Calling the coordinated exchange of messages or moderated access to shared material (like a blackboard) a conference or meeting seems to neglect a factor that is rather important in meetings. Often the explicit exchange of messages is relatively unimportant compared with the development of social relations in the group of participants that happens simply by the fact that they are close to each other as humans for a while.

Perspectives and Metaphors for Human-Computer Interaction, Susanne Maaß and Horst Oberquelle

Thus, Zoom doomed, perhaps.

👩🏻‍💻

Penny per day in Clojure:

(defn penny-per-day [num-days]
  (reduce + (range 1 (inc num-days))))

(penny-per-day 30)

Penny per day in JS:

function pennyPerDay(numDays) {
  return Array(numDays)
    .fill(0)
    .reduce((sum, _, i) => sum + i + 1, 0);
}

pennyPerDay(30);
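
Both versions just compute a triangular number, by the way; if we trust Gauss, the loop disappears entirely (a sketch, not that the loop hurts at this scale):

// Closed form for 1 + 2 + ... + n
const pennyPerDayClosedForm = (numDays) => (numDays * (numDays + 1)) / 2;

pennyPerDayClosedForm(30); // 465, same as above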

Thread me a river. Most frequent terms in a list with Clojure:

; terms is a seq of terms, eg [term1 term1 term2 ...]
; (frequencies terms) yields eg {term1 5, term2 27, term3 9, ...}

(defn most-freq-terms [terms]
  (->> terms
       frequencies
       (sort-by val)
       reverse))

(most-freq-terms terms)

👩🏻‍💻

How to truly love our age, and also how to somehow live and die well here, with each other?

Manifestly Haraway, Donna Haraway

The term [Anthropocene] appeals to a false universal of homogeneous “Man,” which was created with a white, Christian, heterosexual male person as the basis for the universal.

Reflections on the Plantationocene: A Conversation with Donna Haraway and Anna Tsing

The excess of modesty in feminist agendas of recent decades is not proportionate to the monstrous complexity of our reality, a reality crosshatched with fibre-optic cables, radio and microwaves, oil and gas pipelines, aerial and shipping routes, and the unrelenting, simultaneous execution of millions of communication protocols with every passing millisecond.

Ours is a transformation of seeping, directed subsumption rather than rapid overthrow; it is a transformation of deliberate construction, seeking to submerge the white-supremacist capitalist patriarchy in a sea of procedures that soften its shell and dismantle its defenses, so as to build a new world from the scraps.

Xenofeminist Manifesto

Silicon Valley’s most powerful monopoly may be how we perceive technology.

Particular Universals, Francis Tseng

Tsing positions capitalist exchange as first and foremost a way of making relations. The relations made by exchange in post-industrial settings are evident as, among other things, space- and time-making supply chains. For Tsing, the supply chain is not an inevitable solution to a predetermined need to facilitate the flow of capital and raw materials necessary for industrial production. It is a thoroughly social achievement of bureaucratic actors that circulates prestige and kinship along with wealth and exploitation. It is the disjunctures between sites, the differing aims of actors, the mismatched logics and practices of investors that makes it possible to exchange one thing for another and gain a remainder of value. Supply chains bring people and motivations and objects together, aligning patches of activity in ways that produce meaningfully valuable frictions. The translational function of supply chains – how they make things move even though the interests of the parties involved are imperfectly aligned – is what makes it possible to make profits through accumulation, salvage, and trade. This is the very spirit of capitalism for Tsing in this book: ‘translations across sites of difference are capitalism: they make it possible for investors to accumulate wealth’.

In a manner reminiscent of J.K. Gibson-Graham, they are concerned with identifying spaces in which life can be made bearable through fine-grained analysis of specific, situated practices. This may be deeply unsatisfying to readers who want a bold proposal for fixing the problems produced by modern capitalism, including the acceleration of climate change. There are no calls for revolution in these books.

Feminists write the Anthropocene: three tales of possibility in Late Capitalism, Danya Glabau

This claim to intersectionality as the dominant feminist method can be produced with such insistence that an interest in exploring other frames, for example assemblage, gets rendered as problematic and even produces WOC feminists invested in other genealogies as “race-traitors.”

“I would rather be a cyborg than a goddess”: Becoming-Intersectional in Assemblage Theory, Jasbir Puar

Not so byzantine algorithm studies: Using math to deliver your medication

Tags: math, programming, probability, combinatorics
9/9/20

Like many commercial software developers, I find math plays a sporadic role in my day-to-day work. That said, I certainly wouldn’t blithely demote knowing math below other techs like programming languages, web frameworks, and principles of software design/architecture; which already sets up a false dichotomy, anyway.

Math presents a beautiful core proposition to software developers. Through its powers of abstracting and reducing real-world dynamics into expressible, repeatable, sequential logic – which can be lifted easily into our favorite coding grammars – math can be the companion dynamo we need for achieving levels of faithful verisimilitude for real-world complexities (Keep It Savvy Stupid).

Italicizing knowing math is totally an intentional problematization to mess with the idea that knowing math is narrowly scoped to demonstrating technical prowess at calculation or proof-writing. I think software engineers can, and should, consider a relationship with math as knowing enough math; like, an ontological relationship with math’s formalism and general principles. Math The Good Parts, so to speak. Otherwise the bravado of equating math and problem solving, or math as “hard” (dick) science of some kind, is doomed to engender an axiomatic blur that will gatekeep devs deemed unfit for lack of educational background or interest in fiddling with Greek letters. Keep these gates wide open, por favor!

(I also think the focus on calculation in my early schooling, alienated from material reality, made me quite Meursault for it – perhaps why I don’t feel the need to champion being mathy. It took entering commercial software life much later to see first-hand how the dynamo helps us, and therefore to consider embracing math (and being embraced by math) again.)

Our praxis: we can coopt knowing math to ramify similarly to the way our desire for Senior Engineers™ to be T-shaped with their knowledge of the stack (in order that they can press more heavily on architecture and design) ramifies. For example, we can desire for React developers to understand just enough graph theory to see component architectures as graphs; the implications of data pipelining and message passing in a graph (bonus points for being a student of Christopher Alexander’s semilattices); or just enough category theory to understand the heritage and benefits of composability of first-class functions – is JS even fun without a bit of FP? What else?
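
To wit, a quick riff on that composability heritage – compose here is hand-rolled for illustration, not pulled from any particular library:

// compose right-to-left: compose(f, g)(x) === f(g(x))
const compose = (...fns) => (x) => fns.reduceRight((acc, fn) => fn(acc), x);

const trim = (s) => s.trim();
const exclaim = (s) => `${s}!`;
const shout = compose(exclaim, trim);

shout("  is JS even fun without a bit of FP  "); // "is JS even fun without a bit of FP!"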

This is all just to say that some business problems map really well to higher-order maths and devs should be prepared for it, at least through the design phase. Again, once we sit down to write code, we can easily offload the fastidious implementation details; the few lines we’ll copy/paste from existing math libraries. (Leave the hard stuff for the math deep state.)

Take, for example, a common e-commerce retail problem like Distributed Order Management (DOM). Modern omnichannel selling combined with cloud computing open a broad field of possibilities to achieve low-cost order fulfillment. Affine cost structures – resulting from variable/variadic shipping dynamics – will increase complexity as an ecommerce company scales. We’re talking about quite a bit of real-world complexity to model and adapt to.

What, then, when we’re tasked to create an algorithm to satisfice a version of this problem where you have these rough requirements:

  1. Orders must be completely fulfilled
  2. There will be variant shipping costs per supplier
  3. Orders can be fully splittable by a supplier

This kind of thing will break your brain once you start to grasp for a rational boundary to contain the argument ramifications: supplier availability, supplier inventory availability, etc…

I was presented with a challenge like this recently and it took me a couple hours just to understand what this problem domain was; like, find my way to DOM through off-by-one google searches; a true StumbleUpon revival. Because I didn’t know the math well enough yet. Google is pretty fast though, and within minutes I practically broke out in a sweat after discovering multiple long-winded computer science papers written on the topic, filled with intimidating mathy notations. Then more OMGs and mon dieus as I careened sharply into Cantor’s theorem and Set Theory. Wait, am I doing math? Oh Meursault!

More and more it seemed a fantasy that I’d be capable of solving Amazon within a reasonable amount of time because this gestalt was growing aggressively. Nonetheless, after some deep breaths and recouped patience – and having worked with enough devs without math experience – I began to acclimate to this world and realize any solve was going to be an approximation of some sort and probably wouldn’t require me to crack open a textbook; there was no silver bullet or excavation of secret proofs. Rather, this whole class of optimization problems applied to fulfillment is a rigorous academic field, but when it meets the metal it softens and warms for Good Enough™️ programming. Each potential brute-force linear assignment or dynamic programming algorithm was discarded. My inputs couldn’t be structured into a decision table or cost matrix amenable to path finding, traversal, or Cartesian products. Which meant I could rule out potentials like Kuhn’s Hungarian algorithm. In fact, the scope was something more akin to a set cover or networking problem – still a brave new world, but less and less unbounded the more I scoured the web. Ultimately, my task was gonna be something toward imagining all probabilities between order items and suppliers, and then reducing these matches against cost constraints. “All probabilities” was a strong clue.

So, a bit surprisingly for someone not used to needing math everyday – and certainly not trying to fuck with this:

Eq. (5a) ensures every order is fulfilled, and (5b) limits service provided by a store.

I soon found myself drifting at a comfortable altitude through Probability and Combinatorics with the help of other cow path pavers – we are community-taught developers, after all! – crafting a studied, yet heuristic, approach from where I could thread my inputs through techs like combinations and permutations to make some educated guesses.

The general step-rules of the algorithm gradually accreted into something resembling the following:

  1. Generate all possible combinations of order items
    • Order items are unique, therefore we are working with a Set. We can therefore use a mathematical definition of a Powerset and create a function which outputs a set of all subsets of any set S:
      powerset([A, B, C]) === [[A], [B], [A, B], [C], [A, C], [B, C], [A, B, C]];
      
  2. Generate all possible combinations of combinations of order items that are less than or equal to the number of suppliers
    • Effectively take the result of Step 1 as the input Set for another powerset that only returns combinations of order item splits that can be fulfilled by available suppliers. For two suppliers:
      powersetBySize(powerset([A, B, C]), 2) ===
      [
        [["A"]],
        [["B"]],
        [["A"], ["B"]],
        [["A", "B"]],
        [["A"], ["A", "B"]],
        [["B"], ["A", "B"]],
        [["C"]],
        [["A"], ["C"]],
        [["B"], ["C"]],
        [["A", "B"], ["C"]],
        [["A", "C"]],
        [["A"], ["A", "C"]],
        [["B"], ["A", "C"]],
        [
          ["A", "B"],
          ["A", "C"],
        ],
        [["C"], ["A", "C"]],
        [["B", "C"]],
        [["A"], ["B", "C"]],
        [["B"], ["B", "C"]],
        [
          ["A", "B"],
          ["B", "C"],
        ],
        [["C"], ["B", "C"]],
        [
          ["A", "C"],
          ["B", "C"],
        ],
        [["A", "B", "C"]],
        [["A"], ["A", "B", "C"]],
        [["B"], ["A", "B", "C"]],
        [
          ["A", "B"],
          ["A", "B", "C"],
        ],
        [["C"], ["A", "B", "C"]],
        [
          ["A", "C"],
          ["A", "B", "C"],
        ],
        [
          ["B", "C"],
          ["A", "B", "C"],
        ],
      ];
      
  3. Generate all permutations of suppliers (a sketch of these combinatoric helpers follows this list)
  4. Generate all viable routes by matching between the sized combinations of order items (result of Step 2) to supplier permutations (result of Step 3)

    • Basically a fancy zipping computation
  5. Filter viable routes against both superficial and business constraints like duplicated suppliers and supplier availability and/or inventory
  6. Compute the lowest cost route!
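
Since JS ships none of this natively, here’s a minimal, unoptimized sketch of the helpers assumed above – powerset, powersetBySize, and permutations are my own names and implementations, not a library’s:

// Step 1: all non-empty subsets of the order items
// (the empty subset is skipped since it can't fulfill anything)
function powerset(items) {
  return items
    .reduce(
      (subsets, item) => subsets.concat(subsets.map((set) => [...set, item])),
      [[]]
    )
    .filter((set) => set.length > 0);
}

// Step 2: all ways to split the order into at most maxSize groups,
// ie subsets (of subsets) no bigger than the number of suppliers
function powersetBySize(sets, maxSize) {
  return powerset(sets).filter((split) => split.length <= maxSize);
}

// Step 3: all orderings of the suppliers
function permutations(suppliers) {
  if (suppliers.length <= 1) return [suppliers];
  return suppliers.flatMap((supplier, i) =>
    permutations([...suppliers.slice(0, i), ...suppliers.slice(i + 1)]).map(
      (rest) => [supplier, ...rest]
    )
  );
}

powerset(["A", "B", "C"]);
// [["A"], ["B"], ["A","B"], ["C"], ["A","C"], ["B","C"], ["A","B","C"]]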

Now there’s a well-suited mathy modeling!

Some other thoughts

Combinatoric applications for this algorithm are quite expensive: you can see how the cardinality flourishes pretty fast above, and my example is for a modest 3 order items and 2 suppliers. If those numbers increase by any measure, CPU will be tremendously exercised. (The runtimes of these functions are exponential and factorial, respectively – well beyond polynomial.) I can see why optimization becomes an attractively ripe apple for academicians. Quickly glossing, I can imagine optimizing the looping functions to break before completion when satisficing within a range of acceptance criteria; or structuring the data as Iterators and/or piping through transducers to minimize space complexity with lazy or eager techniques.
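
A sketch of the lazy idea (a generator of my own devising, not a library function): each subset is yielded on demand, so a consumer can bail early without ever materializing the full powerset in memory:

// each bitmask from 1 to 2^n - 1 selects one non-empty subset
function* powersetLazy(items) {
  for (let mask = 1; mask < 2 ** items.length; mask++) {
    yield items.filter((_, i) => mask & (1 << i));
  }
}

for (const subset of powersetLazy(["A", "B", "C"])) {
  // inspect the subset, break as soon as acceptance criteria are satisficed
}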

By the way, JS has pretty underwhelming options for combinatorics. I found one library that was a bit awkward to use and ended up ditching it in favor of a few standalone implementations of powerset and permutations, so I could ensure the code would comply with how I was trying to express the above heuristic. Unsurprisingly, Python’s itertools has combinatoric functions built in and even provides recipes for common tooling you can build on primitives like permutations() and combinations(). For example, powerset():

from itertools import chain, combinations

def powerset(iterable):
    "powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)"
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s)+1))

Or simply import the blessed more-itertools library. Of course, this is expected for Python which is heavily developed for the data science world.

The loss of logical purity primacy

Tags: decision-making, Gary Klein, >Code, recognition primed decision
9/7/20

Femi Ogunbanjo & Hanne Klintoe, 1999, The Loss of Sexual Innocence

Back in February, my entire notion of expertise and how experts make decisions was cracked after listening to episode 169 of the Greater than Code podcast. While the podcast is rolling I’m discovering my socialized construction of the expert as some kind of uber-human with muscled query powers against a psycho-somatic access store, combined with a refined analytical rigor only gained through extensive tutelage, study, and practice.

After writing the above sentence I’m struck by what a neo-classical, cyberpunk-capitalist fantasy that sounds like: a techno-positivist realization of brain as database; the privileging of expertise as accumulation. The time-value of memory, perhaps.

It’s kind of wild: wondering about what sort of instinct decision-making really is, in concrete terms. (Especially for yours truly, who has no real erudition of the inner workings of the brain or human beings on a psychological or physiological level.)

Thankfully we have the >Code crew to help us navigate Decision Theory from the safety of our own homes. The podcast panelists offered a different notion of expertise, attributed to Gary Klein’s work Sources of Power: How People Make Decisions. Rein’s paraphrasing of Gary Klein:

Expertise is an act of rapid mental simulation without formal analysis based on previous experience. It’s more akin to “instinct” than a cartesian scientific methodology.

Informed, not overly formal; certainly not grooved to Western patriarchal maths; but not totally informaled! Gary Klein calls this model for making quick decisions under time pressure in our business environs “recognition primed decision.” RPD.

I find RPD comforting as a way to rethink my role as an expert despite fewer than a decade practicing my trade: a trifling “mid-level” software engineer. While RPD reveals the distinctions between less experienced and more experienced people – notably the latter’s alacrity to arrive at a first workable course of action – RPD is more like pattern matching, identifying the least risky choice, or mentally modeling/imagining outcomes (see Variations).

Effectivement, RPD ramifies a permission for folks with fewer years of experience to face spontaneously emerging decision-making moments in the workplace without the accompanying existential angst. That is, if we have an idea, we should trust our gut. RPD also beseeches us to expel myths like (quickly riffing): Principal and Staff technologists have reached their eminent station by putting in the time to refine their powers of inspection, analysis, interpretation, evaluation; they’ve amassed all the good frameworks, methodologies, and systems into their memory bank; and the decisions they make are conclusive (foreclosing, narrowly, the scope of possibility) because this special toolkit is wielded with reason.

Reason and logic might factor into RPD mental simulation, but there’s a primacy of imagination and knowing deep down. Strength of will even?

I am enough. My powers to decide can be totally excellent.

Of course, that doesn’t mean my first guess won’t be totally wrong. But hopefully I’m not asked to fix something urgent that far exceeds my bounded rationality of this domain; which might ramify harm on real people if the mark is missed.

I did some additional reflection with the >Code community in their Slack on why I was feeling relieved by Klein’s ideas (Jessica Kerr hearted this so I’m feeling proud to share!):

Thanks. This really struck me because I think I have internalized some idea that experts have these academically rigorous approaches to problems where they draw on frameworks and data and tools blah blah to come to conclusions.

Then Rein chimes in:

Yeah, the analytic decision-making mode is what we are taught, but it’s not what people actually use, especially in high-pressure situations.

The me again:

Oh right that was the other piece. [Making decisions under] high pressure. Which in corporate culture feels like all the time a lot.

Then some other folks tried to pivot the conversation to talk about how web tech is not really that high “pressure” because bad decisions don’t kill people or necessarily tank the business immediately. So I continue:

I understand where y’all went with that idea of “high pressure.” But even without the kinds of excess you describe — which I’m grateful not to feel at my (fairly spread out) 1000 person company — I still feel like I’m caught up in a heavy stream; not in a surprising or terribly uncomfortable way; but like a tireless machine of ambient conversation between me and my work and my coworkers that is always humming. So maybe “high pressure” in the sense of constant. Day to day I feel coerced into what feels like quick/on-the-spot decision-making, and always on guard for such; and that’s definitely challenging for me.

Like, I’m a developer, and I have to make a ton of small decisions all day long when coding. Even medium-sized decisions. (Like the other day deciding whether to coordinate on a revision of a standard component with a team in Spain with similar needs, vs plowing ahead myself because I knew that process may take weeks and I needed output in days). And often these decisions have to happen within a matter of minutes because of the above normal buzz. I think I remember there was another episode where folks talked about how developers make more decisions than CEOs on any given day?

I was finding comfort in this idea from Gary Klein because I can consider whatever my instinct draws on in that moment to be a practice of expertise. I can feel like an expert; and without any added insecurity for a lack of analytical rigor, if I had to explain my decision. I usually have confidence in my decisions, but I didn’t have a framing of it as expertness. A framing I was likely lacking, as Rein said, because I’ve been taught a dichotomy between analytical thinking and instinct, where there actually isn’t one; at least as I understand the idea without having read the book yet. But as someone relatively new in the industry — just around 6 years building web — being able to come into expertise or see it forming in my behavior is really nice.

Earlier today, delving through Software Development and Reality Construction, I was delighted to discover another rebuke to my previous ideas of expertise as analytical reason meets a database. It smacked of Klein, and I wonder if Reinhard Keil-Slawik is influenced by Klein – or conversely – or perhaps they swarm with each other in that world of systems thinking, cybernetics, constructivism, etc…

From Keil-Slawik’s essay Artifacts in Software Design:

Keil-Slawik seems to intone that expertise – ie “insight” – is an enriching of “understanding.” Understanding being a cognitive structure derived from the encounter with our environment, which mashes up phenomena (physical, etc…) and circumstance. It’s not random access from a brain-as-database that has been populated with arrangements of stuff from “out there” – outside ourself/mind – (previous cognitive structures); as if insight were just a result of recall or trial; that smacks of an easy slide into decision paralysis.

Rather it’s:

This notion seems to gel well with RPD. I wonder if Klein would welcome this cognitive science to explain how we do decisions.

I’m tempted to accept this explanation pro tem, since the thought of unbounded accumulation of stuff in the brain, with lookup at exponential runtime, doesn’t satisfactorily explain the reliable guesses of experts under time pressure.

Keil-Slawik and kin are proposing the idea of human expertise as an individual construction of meaning, as well as a co-constitutive emergence of our own body/mind wisdom and the “external memory” of symbols and tools we work with. The latter is an important point because it reinserts the importance of mnemonics – Turing machines, algebra, languages, frameworks, etc… This affords us the benefit of a tradeoff: speed of resolution over perfectness. The example Keil-Slawik gives is of the ape who, if taught once to put sticks together to reach a banana, will subsequently devise similar means of combining objects to get the banana “without any hesitation…and any similar situation.”

The awkwardness of downloading concurrent, asynchronous academic threads around 2003/4

Tags: feminism, racism, academia, memory
8/30/2020

Halfway through Companions in Conversation and I’m having this memory of being in college – clacking away code points on the cheap silicons of a Micron computer – practicing new knowledges in my class essays just like Haraway admits while writing the Cyborg Manifesto: “trying out some of the knowledges that hadn’t been mine that I was getting from my colleagues [professors (in my case)] and the graduate students in the program.” I remember deploying the token “implosion.” I probably got it from Haraway.

And yet – perhaps a small singe of regret in my tummy neuronics, and deja vu – this word play was happening while I was coming to grips with the nightmare material of Black and Brown people in America through black feminist and black feminist incarceration studies. Our class journalled, and so I produced heartbroken journaling about the new Jim Crow (before Michelle Alexander would be able to corrupt the nation), taking some liberties with text. I wonder, now, if the linguistic turns in those reflections foreclosed, too soon, real collaborative feeling; like community (or, because it’s hard to resist, kinship and worlding). All these students toiling away in separate rooms and library cubicles, synthesizing separately. Autopoietic onanisms. Needed more sympoiesis. I still regret a final project that was a bit too stylistic and “discursive” – performing the film student, trying to dazzle a little with Final Cut Pro; but some good calories lost. The art folks I admired made it hard for me to distinguish between medium and message, or situate with them with grace. (Not that Haraway was successful for everyone. But she kept playing whilst under the microscope.) I’m remembering a kind of safe feeling in poststructuralism. But perhaps that was just the safety of academia, which easily postpones praxis and doing something while chatting in the corridors of monuments to robber barons. Because, like, I’m also remembering that the materiality was a welcome call to action. Perhaps I wielded it – some subconscious desire to participate in and join movement(s) for climate, economic, gender, racial justice – as an antidote to the first two years of cultural studies and dorm-chair politics. It’s hard to remember exactly if I was that wise; maybe the blunt horrors of the Bush regime’s colonialism/imperialism were enough of a catalytic flow through the student body’s progressive consciousness.

Still, interesting to wonder, now, about the concurrent poststructural deluge of neologisms and deconstructuring of gender, race, and class oppressions, cyborg multiplicities back then; while faced with newly understood, centuries-old urgencies of white supremacist capitalist patriarchy and Black holocaust. I’m at the same place now, returned, recurred. Which is somewhat shocking and all too expected in America.

That said, the second time around, considering how Haraway’s work traveled and continues to travel in the last 15-20 years, I’m excited to be hailed again, thinking with her. While moving duckets to organizing, talking to white brothers about racism. Worlding assemblages seems really relevant still as the complexes of racial capitalism and white psychosis have become wider, deeper, and more micronic.

Byzantine algorithm studies: Using math to reverse an integer

Tags: javascript, algorithms, integers
8/6/2020

Making code do something requires exactness but that doesn’t necessarily mean programmers will express grammars with any particular concision or ease. Code is an outcome of social construction, background, perhaps aesthetic desire. Toy code problems reveal the apparatus, for sure. If you ask me to reverse an integer I’ll build a world of arrays – because I like them. Directionality makes me think of traversal, therefore arrays. I also recall the reverse() convenience immediately. However, as a result, I’m forced to deploy morphological contortions; which might feel icky for someone derived within an algorithmic hermitage, auto-recursively deriving pleasure therein.

function reverseInt(int) {
  // STEP 1 - morph (strip the sign so it doesn't pollute the array)
  let intAsArrayOfStrings = Math.abs(int).toString().split("");
}

Reversal is now nearby:

function reverseInt(int) {
  // let intAsArrayOfStrings = Math.abs(int).toString().split('')

  // STEP 2 - traverse, recombine
  let reversedIntString = intAsArrayOfStrings.reverse().join("");
}

All that remains is a second transmogrification back to the initial type and signing:

function reverseInt(int) {
  // let intAsArrayOfStrings = Math.abs(int).toString().split('')
  // let reversedIntString = intAsArrayOfStrings.reverse().join('')

  // STEP 3 - sign and return
  return Number(reversedIntString) * Math.sign(int);
}

But it has come to my attention that a mathy person prefers preservation, whereby the input is arithmetically recombined with the help of division and multiplication.

function reverseInt(int) {
  let remainder = 0;
  let result = 0;

  while (int) {
    remainder = int % 10; // peel off the last digit
    result = result * 10 + remainder; // shift left one base-10 place, append
    int = parseInt(int / 10); // integer-divide to drop the last digit
  }
  return result;
}

But there’s no translation from integer to array. No shift. Grammar school level maths instead. I guess this solution exalts type consistency. Perhaps belies an obsession with 10? The base binary pair. I can’t imagine the benefit of doing that in a commercial software project so it could only be an academic fancy, benchmarked under extreme conditions. It’s really clever, though.

Also, been thinking about slice a lot today. Never stop dreaming:

let intAsStr = int.toString();
new Array(intAsStr.length)
  .fill(null)
  .map((_, idx) => intAsStr.slice(-(idx + 1))[0])
  .join("");

What’s a bug?

Tags: software development, bugs
7/15/2020

…bugs are interesting and important in themselves: they define the boundary between what is understood and what is not. Hence, they show us where our weaknesses are, and provide opportunities for us to learn and grow.

Software Development and Reality Construction
Ch 5.1: The Denial of Error, Joseph A Goguen

The rush to fix bugs to recoup customer satisfaction is absolutely necessary when writing for capital. I just worry that without the proper retrospection we won’t actually increase our understanding of the software, and therefore the recidivism rate of error will remain constant; or likely increase in the tail wind of growing complexity and entropy.

Recursion is too easy for writing nested menus in UI

Tags: javascript, recursion, react, UI, interviews
7/11/2020

Hannah Höch, Für ein Fest gemacht (Made for a Party), 1936 Collage

Recursion is almost too easy when paired with component-based framework tools and HTML’s unordered lists: since the rendering is done by the framework, we simply need to progressively stack render calls as we traverse the data. Such brutalist Dada.

function List({ items }) {
  return items.map((item) => (
    <ul key={item.id}>
      <li>{item.name}</li>
      {item.children && <List items={item.children} />}
    </ul>
  ));
}
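
Feeding it something like this hypothetical tree (the shape – id, name, optional children of the same shape – is assumed for illustration) is all the setup the recursion needs:

const items = [
  {
    id: 1,
    name: "Beverages",
    children: [
      { id: 2, name: "Coffee" },
      { id: 3, name: "Tea", children: [{ id: 4, name: "Oolong" }] },
    ],
  },
];

// <List items={items} />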

A Recursive Validation Function with User-Defined Exceptions

Tags: javascript, recursion
7/5/20
Syndicated: dev.to

Every time I use a recursive function for something practical in commercial software, my notional machine of its behavior is refined. This amounts to a small list of heuristics amassing in my mental pocket:

  1. “It’s a function that calls itself.”
  2. “You make your list of tasks first then start from the last and work your way back up” or “It’s like unpacking a box with other, smaller boxes, with other smaller boxes, with other…and then only start looking at the boxes from the smallest to the largest, one at a time” (🎩👌 Aditya Bhargava’s grokking algorithms)
  3. “It’s good for building up a list or value, kind of like reduce”
  4. “It can be less code but less performant.”

After working on another problem recently that involved deep-diving a nested JavaScript object and executing validations against each level I’m adding:

“Recursion is awkward if you need to break early.”

In my particular case I needed to validate a recursive data structure representing an org chart of Employees and return invalid immediately if the code traversed an Employee with bad data – extra fields, missing required fields, fields of the wrong type, etc….

Breaking out from a recursive function is not quite as straightforward as you’d think. Also, historically I was used to seeing recursive code employed for tasks that wanted the call stack to build up all the way through the data.

Like, (deep) flattening an array:

function deepFlatten(nestedArray, result = []) {
  for (let element of nestedArray) {
    if (Array.isArray(element)) {
      deepFlatten(element, result);
    } else {
      result.push(element);
    }
  }

  return result;
}

Or, fetching a complete set of data from a remote source in chunks:

async function fetchAll(params, all = []) {
  let chunk = await fetch(params);
  let nextPage = chunk.nextPage;
  all = all.concat(chunk.data);

  if (nextPage) {
    let nextParams = { ...params, page: nextPage };
    return await fetchAll(nextParams, all);
  }

  return all;
}

What I quickly discovered is that just trying to capture and emit an error from a recursive call stack is already a bit funky. Simply returning false in your function doesn’t work because calls lower on the stack may return true; and since we’re (kind of) “building a value” it only matters what the final call returns. This approach won’t work:

// Will only return false if the last call in the stack returns false
function validate(data, schema) {
  for (let item of data) {
    for (let rule of schema) {
      let field = item[rule.name];
      let required = rule.required;

      if (required && !field) return false;

      // Recurse
      if (Array.isArray(field)) {
        validate(field, schema);
      }
    }
  }

  return true;
}

Using recursion is more like a leap of faith - you are handing over control to the JS engine over an unbounded data set; it’s quite reminiscent of the manner in which higher-order functions operate with Array and Object collections. For example, forEach is a powerful and declarative alternative to for and for..of/in loops until you find yourself needing to skip over an iteration or break out of the loop. Keywords like continue and break are unavailable in Array and Object collection methods – these are closed iterators.
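
A contrived sketch of that closed-ness:

for (const n of [1, 2, 3]) {
  if (n === 2) break; // loops expose break/continue
}

[1, 2, 3].forEach((n) => {
  // no early exit here; the callback will run for every element
});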

Your only recourse in a recursive function is to set that flag in an accumulator and pass it through each stack layer – since the call stack is LIFO. So capturing and emitting an error from your recursive function might look like this:

function validate(data, schema, errors = []) {
  for (let item of data) {
    for (let rule of schema) {
      let field = item[rule.name];
      let required = rule.required;

      if (required && !field) {
        errors.push(new MissingFieldError(item, rule));
      }

      // Recurse
      if (Array.isArray(field)) {
        validate(field, schema, errors);
      }
    }
  }

  return errors;
}

If our program requirements suggest we want to parse the entire org chart for bad data, this function will give us a result array we can further process to report errors. But for my purpose, there’s too big a potential cost of unnecessary runs while a large call stack is cleared for a large org chart.

In order to stop processing the org chart and return an invalid result early, we need a solution that stops execution entirely when the invalid check is entered in the outermost call. As it happens, the solution ends up being rather elegant and simple, though counter-intuitive. Rather than returning (false, an error list, etc…), you can throw and thereby forcibly halt the engine’s execution of the code. Here’s an example with throw:

function validate(data, schema) {
  for (let item of data) {
    for (let rule of schema) {
      let field = item[rule.name];
      let required = rule.required;

      // It's even one less character to write! 🤣
      // Also now we have total control over the exception content
      if (required && !field) throw new MissingFieldError(item, rule);

      // Recurse
      if (Array.isArray(field)) {
        validate(field, schema);
      }
    }
  }

  return true;
}

Day in, day out we work constantly with client applications that only throw as a result of unintended bugs in the program. But we can take advantage of this standard JavaScript behavior and erect an appropriate error boundary. Remember:

Execution of the current function will stop (the statements after throw won’t be executed), and control will be passed to the first catch block in the call stack. 🔗

Therefore we can rename and wrap our recursive function that throws, and put it inside an error boundary to achieve that early break we want. This approach even comes with the added advantage of declaring the content of our user-defined exception at throw site; eg, utilizing meaningful error constructors or factories like missingFieldError().

function validate(data, schema) {
  try {
    validateInner(data, schema);
  } catch (error) {
    // returns new MissingFieldError()!
    return error;
  }
  return true;
}
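
For completeness, the MissingFieldError leaned on above never gets defined in these snippets; a minimal sketch of such a user-defined exception (the item.id in the message is an assumed field, purely for readability) could be:

class MissingFieldError extends Error {
  constructor(item, rule) {
    // item.id is assumed here just to make the message meaningful
    super(`Missing required field "${rule.name}" on item ${item.id}`);
    this.name = "MissingFieldError";
    this.item = item;
    this.rule = rule;
  }
}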

Even more, the elegance of this design with an outer try/catch allows for separate testing of our validation business logic – the rules against which bad data throw – and error handling – what errors we emit for certain cases.

The explanation of Question 12 of Lydia Hallie’s awesome list of JS interview questions, and others

Tags: javascript, interviews
6/16/20

Lydia Hallie’s list of JS interview questions is extensive and probably one of the most comprehensive resources out there. The laundry list of questions runs the gamut from basic to esoteric language features, each of which progresses you through various themes in what seem like meaningful sequences. It appears some modicum of pedagogy is involved, which therefore avoids that oft vacancy of substance typical of a listicle.

Yesterday I was running through a block of questions with an ex-colleague and we noticed that some of the answers were a bit inchoate. It’s so easy for meaning to drift when talking about language features, especially when it comes to JS. Consider navigating the following taxonomy – related to creating objects in JS – for readers of all skill levels without specious reasoning (the kind of short-hand you get the point or close-enough superstitious explanation we sometimes use; which, by the way, I think greatly hindered my early career development): object, constructor, function, class, prototype, instance, static, method, new, this. It’s quite hard, but really worth being as correct as interpreting ECMAScript affords us.

I decided to open up some discussion on Question 12, which tests one’s knowledge of the differentiated effects of using the new operator before a function call:

https://github.com/lydiahallie/javascript-questions/issues/82#issuecomment-645022140
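
To riff on the theme (my own re-creation, not Lydia’s exact snippet):

function Person(name) {
  this.name = name;
}

const a = new Person("Lydia");
// `new` constructs a fresh object, binds it to `this`, and returns it
a.name; // "Lydia"

const b = Person("Sarah");
// without `new`, `this` points at the global object (or is undefined in
// strict mode) and the function returns undefined
b; // undefined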

Definitely called for a little revisit of YDKJS, no doubt.

I also feel like I need some clarification on Question 14. What do we call the very root object in JS that all objects delegate to?

https://github.com/lydiahallie/javascript-questions/issues/26#issuecomment-645058195

Coming up with examples is hard. I think Question 11 could use a bit of rework too because there is a this scope issue that somewhat overloads the focus of the problem. Here I go again:

https://github.com/lydiahallie/javascript-questions/issues/355#issuecomment-645066354

Debugging TS in VSCode and Russell Ackoff’s Problem Treatments

Tags: vscode, typescript, Russell Ackoff
6/16/2020

When I’m drifting between jobs – say, during a global pandemic and massive civil rights upheaval – I tease away some time from protest and anti-racist organizing to study programming basics and plunge below the membranes of web frameworks. Of course, the first thing you realize after a few key clacks is the brutal lack of a full-featured dev environment. I think my industry folks who don’t spend free time on side projects can relate.

I had a serious moment of angst this morning about whether I could achieve debugging for TypeScript files in VSCode within a reasonable amount of time – that is, an amount of time that wouldn’t deplete all my creative energy before writing any code. To my very pleasant surprise this was much easier than expected. You need two things.

  1. Add a task to run the compiler in watch mode (this is totally boilerplate):
{
  "version": "2.0.0",
  "tasks": [
    {
      "type": "typescript",
      "tsconfig": "tsconfig.json",
      "option": "watch",
      "problemMatcher": ["$tsc-watch"],
      "group": "build",
      "label": "tsc: watch - tsconfig.json"
    }
  ]
}
  2. Add your outfiles (dist) target glob to your launch.json file:
{
  // Use IntelliSense to learn about possible attributes.
  // Hover to view descriptions of existing attributes.
  // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Launch server.js via nodemon",
      "type": "node",
      "request": "launch",
      "runtimeExecutable": "nodemon",
      "program": "${file}",
      "restart": true,
      "console": "integratedTerminal",
      "internalConsoleOptions": "neverOpen",
      "runtimeArgs": ["--experimental-modules"],
      "outFiles": ["${workspaceFolder}/out/**/*.js"] // <-- ADD YOUR OUTFILES GLOB HERE
    }
  ]
}

And voila, pausing on a breakpoint in a basic router I wrote while running a node server.

Ok new topic…

During episode 138 of the >Code podcast with guest Vaidehi Joshi (of BaseCS podcast fame, amongst many other things), the discussion veers into the topic of problem solving (~20:30). Jamey and Vaidehi riff about that condition we all know and love – and never appreciate enough – whereby solutions come to you while away from the computer; while your brain is “percolating” (Jamey) or in a “mental marinade” (Vaidehi). Jacob Stoebel expands on that by describing another common mental turnabout we programmers do: we often catch ourselves down a rabbit hole trying to – big “S” – Solve the problem with our first instinct; but that increasingly deeper and muddier slog causes us to pause, shake off the adrenaline, climb back out of the hole, and ask ourselves: how do I get rid of this problem? Rein, per usual, picks up the thread and introduces the panel/listeners to a theorist named Russell Ackoff, who minted a classification for approaches to problems called “Problem Treatments.”

To list and paraphrase them:

  1. Absolution - Ignore the problem and hope it goes away

    It’s easy to joke about this one, but Rein actually provided a compelling case for this. For example, security engineers often make the tradeoff to ignore a security loophole if the threat isn’t great enough.

  2. Resolution - Doing something “good enough”

    This feels like a pretty standard principle for building product considering some of the typical aspects of commercial software work:
    a) Your team will never have a complete understanding of how your software will express itself once “running” or “used” by customers. Therefore perfection, completeness, wholeness are not desirable or ever, actually, achievable.

    b) Deadlines and other external stakeholder pressures will force you to sacrifice however your team defines “great” or “better” software

  3. Solution - Finding an optimal solution with experimentation or research

    There are touchstone moments in the development of software where this happens. When choosing a framework. When designing a styleguide. When choosing a caching strategy. When designing a game plan to migrate code.

  4. Dissolution - Redesigning the system so that the problem no longer exists

    This is what Jacob was hinting at. That pure land you inhabit when refactoring code to circumvent an immediate impasse in the software by reframing the problem. I fondly remember dissolving a particularly tricky conditional statement in a web client by working with a backend engineer to implement an enum for a resource type. This removed a bunch of object introspection (Law of Demeter violations, etc…) in favor of a simple switch statement; roughly the shape sketched below.
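
With illustrative names (none of this is the actual client code):

// before: introspect the resource to infer what it is
if (resource.video && resource.video.duration) {
  renderVideo(resource);
} else if (resource.pages) {
  renderDocument(resource);
}

// after: the backend supplies an explicit type enum; the conditional dissolves
switch (resource.type) {
  case "VIDEO":
    renderVideo(resource);
    break;
  case "DOCUMENT":
    renderDocument(resource);
    break;
}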

Book quotes and commentary: Software Theory by Federica Frabetti

Tags: software theory, derrida
6/8/2020

In which Federica Frabetti locates the “points of opacity” – malfunction – in/of software through study of the Garmisch (Germany) report – a foundational text of software from the first Conference on Software Engineering in 1968, organized by the NATO Science Committee. (Also careful you don’t mince the word “Garmisch” into Gramsci.)

“…these two aspects…the necessity of calculating incalculable risks, and of attributing responsibility for them…is a point where software engineering ‘undoes itself’ precisely at the moment of its constitution.” -p75

As in discovering responsibility outside of the software: in the likeness of user demands, society’s problems, ever-increasing demands on existing software from users.

“…there are problems out there that software helps to solve.” -p79

“…the system and its development are never clearly separated” -p78

“…they must acknowledge that such a process is not in fact linear, but at the same time they have to avoid backward steps at all cost.” -p78

And how professionalization, as in the planning and management of software development over time – or formalism, or “linearization”; where we literally write many things to organize time (external and internal design specs, JIRA stories, prototypes, then finally the code (I guess) etc…) – relegates software to the realm of instrumentality and further destabilizes what software is; albeit by attempting to create an interiorized structure that skirts isolation. Thus the evergreen frustrations of the business side.

“…this appeal to the linearization of time is part of the general attempt to ‘control’ software – that is, to think of software in instrumental terms.”

“…software is both a problem and a solution – indeed, it emerges at the point where the distinction between the two becomes undone and it exists only as the precarious stabilization of this distinction.” -p81

Software development is a fleeting discipline. I guess a line of flight in the rhizome. So maybe we’re herding rats, not cats. This partly explains, for me, why I can iterate again and again over a simple “implementation.” Because we’re never done, we’re always undone.

Hence our continued, common problematic to characterize when commercial software development happens. Is it when you change all var to let? The moment your brain begins to mull over a requirement presented by your PM in sprint planning? The final inscription of the company’s mission statement? During a whiteboarding design exercise? The moments of realization you have in the shower, during a lunch conversation? The inscription of an idea you copy/steal from your last job without attribution? The deletion of unused code? The moment you decide to delete the unused code?

Or perhaps the moment we recognize it is bound to break, malfunction, fail:

“…software engineering performs an impossible expulsion of constitutive failure from technology, with this move establishing itself as a discipline.” -p76

And further, what else software development is. Because it couldn’t just be about formalizing the process, right? Because how do we account for the incalculability of costs/time between problem and solution?

“Thus, Dijkstra resorts to individual creativity – or, as Derrida would have it, ‘genius’ – as an explanation for what is in excess of a procedural method and constitutes a leap beyond the programmable.” -p82

But others from the conference warn about the cult of genius. Interestingly, earlier this year I was mulling over a similar intervention, four decades later:

“The ‘excessive’ creativity of the ‘system type’ is in turn portrayed as ‘bad’ unexpected – something that exceeds the management of the project and threatens it.” -p84

Replace “system type” with our modern-day 10x programmer or rockstar and it’s comforting to think that 50 years ago computer scientists were already wary of the danger this person posed to software development. It certainly seems like Frabetti is jabbing at that here.

Software resists instrumentality, to be merely called “code”; it follows, then, that engineers resist becoming fungible resources in a complicated domain space. Software’s realization is a co-constitutive process of increasing understanding of the problem and the solving of the problem itself. I like the way this passage sums it up:

“How can something based on the lack of knowledge be realized successfully? This paradox clarifies the particular understanding of time that programmers develop in their interaction with software. Not only does one find out what the system does only by constructing it; the original ignorance of what the system does is constitutive of the system…Infelicity is constitutive of the possibility of felicity…One always starts albeit one does not know what will be.” -p90

Of course, the solution is never an end. Or have you ever built software that should not be extensible?

Let’s talk about Orchestration vs Separation of Concerns: React/Redux Edition: Part 2

Tags: react, redux, xState, architecture, user-interface-control-model
4/26/2020

Check out the second – and I believe last – in this short series: https://dev.to/internetross/let-s-talk-about-orchestration-vs-separation-of-concerns-react-redux-edition-part-2-imo

Let’s talk about Orchestration vs Separation of Concerns: React/Redux Edition: Part 1

Tags: react, redux, xState, architecture, user-interface-control-model
04/16/2020

I’m emerging from the shadows, a bit, and trying out dev.to. Check out this first in a two part series: https://dev.to/internetross/let-s-privilege-orchestration-over-separation-of-concerns-4410

Preferring repetitive Action notifications over reuse

Tags: React, data, actions, code duplication, DRY
4/1/2020

Have you ever noticed yourself going to unnecessary lengths to avoid repetition in your code despite the fact that requirements actually represent a complicated world? I swear the longer I write code in the industry, the more DRY feels like a leaky ideological imposition that drifts from “knowledge” duplication into code/documentation – ie text artifact – duplication. Sigh. This is a nuance we lose too easily by the ad nauseam peddling of the shorthand DRY; which I suppose also reveals the danger of the supremacist tendency of the axiomatic; axioms all-too-easily claim ontological root. Especially in the minds of early-career folks or folks emerging from industry founding-father cargo cults.

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

The Pragmatic Programmer, [emphasis mine]

As we’re tapping along in an editor, the code that bears some resemblance to other code is, in fact, the precise nuance our applications need to create the richness of these membranes we build between people and machines. Here’s an example slightly altered.

Let’s say we are working with an events management system where a new feature is being built to allow event organizers to administer marketing campaigns on their events. In the current world, an event organizer can do two things to a campaign once it’s added to the event:

  1. Activate the campaign.
  2. Select the campaign to be “featured” – this places the campaign into a larger sitewide collection of campaigns that are featured in certain areas of the product.

A sketch of the stateless component in our graph might look like so (just relevant JSX and event handler):
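(A rough reconstruction from memory – component, prop, and handler names are all hypothetical:)

import React from 'react';

// Consolidated event handler: every user interaction funnels through
// handleChange, which coordinates the store notifications in one place.
const CampaignListItem = ({ campaign, onUpdateCampaign, onUpdateFeaturedCampaigns }) => {
  const handleChange = (field) => {
    if (field === 'activated') {
      onUpdateCampaign({ id: campaign.id, activated: !campaign.activated });
    } else {
      onUpdateFeaturedCampaigns({ id: campaign.id, featured: !campaign.featured });
    }
  };

  return (
    <li>
      <label>
        Active
        <input
          type="checkbox"
          checked={campaign.activated}
          onChange={() => handleChange('activated')}
        />
      </label>
      <label>
        Featured
        <input
          type="checkbox"
          checked={campaign.featured}
          onChange={() => handleChange('featured')}
        />
      </label>
    </li>
  );
};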

Then Product decides to add a new nicety: when an event organizer activates a marketing campaign, we automatically mark that campaign with the special “featured” flag if it happens to be the only active marketing campaign on the event. This is effectively a third flow; of course to the user it’s just a list item with a couple checkboxes (simple stuff, right?).

As the developer I might think: this doesn’t seem all that hard! I already had the foresight to implement a flexible indirection – a consolidated event handler to coordinate the store notifications for user interactions in one place. This means all I have to do is flavor the conditional with another branch; and I can reuse the current dispatchers. I’m a fucking oracle.

Whereby I try:
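(Hypothetical still – the same handler, now flavored. hasNoActivated is assumed to be a prop/selector telling us no campaign on the event is active yet:)

const handleChange = (field) => {
  if (field === 'activated') {
    if (hasNoActivated) {
      // Activating the only campaign: two sequenced notifications,
      // the second assuming the first lands successfully.
      onUpdateCampaign({ id: campaign.id, activated: true });
      onUpdateFeaturedCampaigns({ id: campaign.id, featured: true });
    } else {
      onUpdateCampaign({ id: campaign.id, activated: !campaign.activated });
    }
  } else {
    onUpdateFeaturedCampaigns({ id: campaign.id, featured: !campaign.featured });
  }
};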

I mean, that’s the right logic. All my test mocks are reusable. We are sailing along.

Sort of.

Then something nags at me looking at this code. It’s subtle, but there’s a risky assumption embedded here. Inside the interior if-then branch under the conditional expression hasNoActivated, we are sending a sequence of two distinct mutative notifications to our data store; moreover, the second depends on the first to complete successfully. Now, we might test this code in the browser and it works every time. Our tests pass.

But.

For how long?

This code lives downstream of our next indirection – the container – that shepherds these messages to the store. So this code is trying to coordinate a subroutine of effectful functions (Actions) three (perhaps more) layers away from the store itself.

Which begs some questions. For example, although the action notifiers I’m calling are synchronous, how confident am I in the inner workings of my component framework’s render engine and algorithm – particularly its collaboration with my data layer (ie Redux)? I might be forcing additional renders here. More importantly, how much can I trust other developers to maintain the correct sequencing in the container? Will a test help? Will a comment help? Will either of these prevent mistakes when additional behavior needs to be added?

100% maybe.

Software is weird. Code like this could become a productive, yet forgotten, corner of entropy. But silent risks, once recognized, must be avoided. We can do better, although part of reaching this better is yielding our mental model to become more reactive – to adapt, to embrace nuance, to become loose as the requirements become more complicated. My observation is that the code does not lack defensive guards and documentation; it lacks an expanded set – single, unambiguous, authoritative – of blessed possibilities with distinct articulations. In other words, an expanded model – expressed in code – that captures more precisely the desired possibilities of what a user can do.

Let’s try rewriting our component-level event handler to incorporate the new happy path into our domain’s lexicon: the Action notifiers, the names of things “that depend on when they are called or how many times they are called” (see Grokking Simplicity).
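(Hypothetical as before, and matching the container handler shown further below:)

const handleChange = (field) => {
  if (field === 'activated' && hasNoActivated) {
    // The new case gets its own distinct notifier: one message, one intent.
    onUpdateCampaignAndUpdateFeaturedCampaigns({ id: campaign.id, activated: true });
  } else if (field === 'activated') {
    onUpdateCampaign({ id: campaign.id, activated: !campaign.activated });
  } else {
    onUpdateFeaturedCampaigns({ id: campaign.id, featured: !campaign.featured });
  }
};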

The modifications in this third example add a new Action notifier (handler) that represents the distinct type of mutation we want to effect against the store. In other words, a function that satisfies the new Product case and narrows attention – of the developer and program – by leveraging a semantic, structural change to the number of container handlers. (Side note: I fantasize that Zachary Tellman might describe the new handler name as a too “natural” yet still “consistent” name (see Elements of Clojure).)

Furthermore, let’s observe that the payload semantics remain the same. Which means our data layer, comprised of the container and action dispatching, only requires similarly modest adjustments to shepherd the same data. Lastly, though quite importantly, our container is now capable of dispatching messages to the store and effecting a mutation in just a single pass if we wish it. For example:

updateCampaignAndUpdateFeaturedCampaigns(data) {
    dispatch(updateCampaigns(data));
}

// Whereby our reducer might return the state mutations composed as:

{
    campaigns: {
        ...state.campaigns,
        [payload.id]: {
            ...state.campaigns[payload.id],
            id: payload.id,
            activated: true
            // other fields...
        }
    },
    featuredCampaigns: [
        ...state.featuredCampaigns,
        payload.id
    ],
}

Overall this design provides increased confidence that this display component, our presentational leaf node, is more precisely responsible for render and event notification. Perhaps the subtweet here is keeping events and messages 1:1, even if you wind up with similar or duplicate lines of code.

The will to better software companies

Tags: Semilattice, trees, coherence costs, critical theory, Christopher Alexander, Kojin Karatani
3/12/2020

Lately I’ve been thinking about productivity and its evolutionary rhythm with scale. Like, how easily the cadence ebbs. My lived experience is often one of multiplied slowdown. For example, we have split teams to tackle different product areas in parallel. Usually this means working separately in our “owned” codebases. But software is never neat. So when the team in Mendoza needs review on a changeset that touches our code area, we’re talking timezone crossing to achieve a merge.

It’s late afternoon in Argentina wine country when I’m just digging my heels into a morning Philz roast.

The compounded burden of a cross-geo async request requires I hunker down to understand what their code is doing and what it’s supposed to do – without direct assistance from the author. While async communication over Slack is ok, closing the mental gap in understanding the code changes is tough. We are facing a kind of Derridean différance. As if navigating the palimpsest wasn’t enough. Multiple personages reading and explaining the code text, scribbling comments, changing, re-reading the next day; discovering the truth in the spaces/tabs. Reading software is really tough stuff. This impact of distance and time spent to “get up to speed” (pun intended) is a delta that doesn’t always feel satisfyingly addressed by organizational planning.

Does this sound like what you do for work?:

Every software project is like a palimpsest where every developer scratches old text, and writes on top of it their perceived new solution, creating a manuscript where a mix of styles and languages reflect different understandings of the world, different world visions.

And

Différance is the systematic play of differences, of the traces of differences, of the spacing by means of which elements are related to each other. This spacing is the simultaneously active and passive (the a of différance indicates this indecision as concerns activity and passivity, that which cannot be governed by or distributed between the terms of this opposition) production of the intervals without which the “full” terms would not signify, would not function.

Positions

Holy complicatedness, richness, depth! No wonder you can’t just throw more devs at the problem. Though apparently that’s what men in the industry used to think.

Not too long into my first software gig my team of 5 engineers organized a book club and tackled The Mythical Man Month. That’s when I really learned how dev teams became reduced to fungible “resources.” (It wasn’t long before I was at a happy hour and our head of sales was asking me, “Why can’t we just ship the work you do to India?” So apparently it’s a historically recurrent theme.) It was also my first introduction to the idea that you can’t just throw more people at software problems to achieve speed gains.

Adding manpower to a late software project makes it later.

The Mythical Man-Month

If we believe Jessica Kerr and the other cyberneticians of our ilk – and on this point I might – software is a symmathesy: a learning system of learning parts. Brooks must have believed something similar; I just don’t remember his exact words.

Nonetheless, I interpolate. Brooks’ insights feel situated as early symmathecist thinking. Software is so highly sophisticated, so highly complicated!, that command and control thinking can’t intervene to establish order:

As time passes, the system becomes less and less well-ordered. Sooner or later the fixing ceases to gain any ground. Each forward step is matched by a backward one.

Ibid.

So why does this happen? Like, despite Brooks’ spacious, ethereal characterization of our work as only slightly removed from pure thought-stuff where we build castles in the air (words I do remember wonderingly)…we definitely live through a tangible stickiness, queues, waiting rooms; especially mid-to-late in the build cycle. Things get really hard. I swear there’s something even metallic about a build failure.

In order to change our system, we need a mental model of it. Each developer has a mental model of the software we work on. Each developer’s mental model is necessarily incomplete and out of date. And, they’re each different from everyone else’s. (This is called Woods’ Law, if I recall correctly.) We spend time reconciling our mental models enough to communicate with each other; this is a coherence penalty.

Well, we all know the drill and can recall with pretty sharp clarity what tends to happen. Making code changes later in a project becomes increasingly difficult because previous design decisions have constrained possibilities. More consistency is demanded; maybe not total, but if the component handlers should call to the reducer, why are you calling setState() directly in the handler? Because…the rush to the finish line produces anxiety that strains communication (within the already polynomial channeling). You shift from pensive philosopher to tactician; code breaker. Code review quality suffers; if only for the obvious reason that you need to double down on crafting and glueing code for more hours in the day. This, in turn, decreases mental reserves for context switching to someone else’s code. All that great senior leadership out the door. There’s simply less consensus building. Public Slack channels are quieter. Regardless of how high-performing the team, I’m convinced there will be some communication degradation in the twilight hours.

When I read Coda Hale’s recent piece Work Is Work all of this ugh was loaded into view again; and I really appreciate his analysis of how we end up in slog mode; and the antidote. Maybe the industry hadn’t received a really concise reflection in a while. Maybe it’s the mathematical proofing and formalism that adds a seductive element for the modern SWE struggling in their unwieldy symmathesies; what Karatani would call our “will to architecture”. This piece was going around the Greater than Code sphere for sure. For my philosopher/poet brain for sure. Ok, so similar to Brooks, but for orgs, Hale says:

At some point in time, every organization realizes that it’s slowing down.

Like Brooks, Hale is dealing with that slog. But with the scope zoomed out to help us understand how to manage and overcome the obstacle of big organizations growing at economies of scale. He enriches the idea of the mythical man month with Amdahl’s law:

The work capacity of an organization scales, at most, linearly as new members are added.

Ibid.

Amdahl’s law: I’m still wrapping my head around it! But I believe the gist is: when we try to add more people to parallelize parts of the dev work, we won’t get any dramatic (exponentialized) speed gains over the sequential work of the smaller group. Amdahl observed this behavior while throwing additional processors at a computational task. The speedup from adding N new members applies only to the “parallel fraction” of the work; the serial remainder caps the total gain, so the curve eventually tops out and flattens. Rather than the speed boost our command-control leadership wills, we get its flip side, slog:

Coherence costs grow quadratically as new members are added.

Ibid.

Slog = coherence costs! That’s a sweet little buttoned up phrase for it.
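(For my own head-wrapping, a back-of-the-napkin sketch of both curves – my gloss, not Hale’s math:)

// Amdahl's law: speedup from n workers when only a fraction p of the
// work parallelizes; the serial remainder (1 - p) caps the gain.
const speedup = (p, n) => 1 / ((1 - p) + p / n);

speedup(0.9, 10);   // ≈ 5.3
speedup(0.9, 1000); // ≈ 9.9 – flattening toward 1 / (1 - p) = 10

// And the slog: pairwise communication channels grow quadratically.
const channels = (n) => (n * (n - 1)) / 2;

channels(5);  // 10
channels(50); // 1225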

Around the same time I read Hale’s piece I was picking up the aforementioned Kojin Karatani book, Architecture as Metaphor. So while I’m sitting with these ahas! about how the frustrating things at work are these somewhat well-understood “coherence costs” that occur on large teams like mine which can be observed with rigor from the outside (etic)…I’m suddenly finding myself in chapter 4 of Architecture as Metaphor, called the Natural City, where Karatani continues to build his argument for the ways that theory is entrapped again and again by the architectonic. But the thread I want to pull on is simply Karatani’s synopsis of Christopher Alexander’s The City Is Not A Tree. Lol, of course I should meet Alexander here. He is like one of the fathers of Software Patterns – see his 1996 keynote at OOPSLA – that would inspire so much of how our apps are built at scale today.

In the seminal essay The City Is Not A Tree, which would inspire people across fields (and which resonates with Herbert Simon’s concept of bounded rationality), Alexander proposed mathematically based, formal, structural ideas (just like Hale is doing) to describe the “natural city” as a semilattice; a concept from set theory.

A collection of sets forms a semilattice if and only if, when two overlapping sets belong to the collection, the set of elements common to both also belongs to the collection.

A City is Not a Tree

He is thereby able to reveal the “natural city” as something we can think about rationally. Particularly, and importantly, rationally distinct from the nemesis “artificial city” – a command-and-control style approach to designing cities in a treelike fashion by those who don’t appreciate the richness, subtlety, and complexity in the overlaps that otherwise staves off “coming destruction” and “anarchy.”

When we think in terms of trees we are trading the humanity and richness of the living city for a conceptual simplicity which benefits only designers, planners, administrators and developers. Every time a piece of a city is torn out, and a tree made to replace the semilattice that was there before, the city takes a further step toward dissociation.

Ibid.

And

The semilattice…is the structure of a complex fabric; it is the structure of living things, of great paintings and symphonies.

Ibid.

My brain keyed on semilattices because they seemed to describe the structure of a software system I’ve been coming to know through Kerr and Bateson (opera, camerata, living) – not to mention the cutely convenient lexical overlap of mathematical terms in the definition; eg, a “Set” is a common data structure implemented by many high-level programming languages. Mendoza devs putting their text in our text. Some overlap of time zone. Knowledge overlaps as more tenured developers work in more areas of the codebase. Overlap at the coffee station, at the lunch table. Overlapping documentation, nearly duplicate. Also the incomplete lines of connection. The grassroots partnerships and friendships that emerge across teams. The polymorphic ensemble of patterns and style in our web applications: classes called as functions, state reducers and DOM events, REST with GraphQL, compositional components within hard-bound graphs. Microservices databases implementing Order and User over and over again. The semilattice seems very familiar.
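Indulging that lexical overlap, here’s a toy check of Alexander’s closure condition with JS Sets (my own gloss, nothing from the essay):

// A collection of sets is a semilattice iff, whenever two member sets
// overlap, their intersection is also a member of the collection.
const intersection = (a, b) => new Set([...a].filter((x) => b.has(x)));
const sameSet = (a, b) => a.size === b.size && [...a].every((x) => b.has(x));

const isSemilattice = (collection) =>
  collection.every((a) =>
    collection.every((b) => {
      const common = intersection(a, b);
      if (a === b || common.size === 0) return true;
      return collection.some((c) => sameSet(c, common));
    })
  );

// {1,2,3} and {2,3,4} overlap; including their overlap {2,3} keeps the
// structure a semilattice, omitting it forces a tree.
isSemilattice([new Set([1, 2, 3]), new Set([2, 3, 4]), new Set([2, 3])]); // true
isSemilattice([new Set([1, 2, 3]), new Set([2, 3, 4])]); // false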

Therefore, in the context of a symmathesy, where Alexander might say our bounded rationality (not to mention hubris, or inexperience, or ego, or other ingressive tendencies) limits our ability to observe and study the subtlety and richness of the organization, throwing more men at the problem is short-sighted as fuck. Managers can’t govern from apart, at the top of the edifice. There are superlinear interactions to behold.

Then the advice that Hale starts giving to tackle coherence costs makes a lot of sense:

The only scalable strategy for containing coherence costs is to limit the number of people an individual needs to talk to in order to do their job to a constant factor.

Limit the overlap!

As with heavily layered applications, the more distance between those designing the organization and the work being done, the greater the risk of unmanaged points of contention. Top-down organizational methods can lead to subdivisions which seem like parallel efforts when listed on a slide but which are, in actuality, highly interdependent and interlocking. Staffing highly sequential efforts as if they were entirely parallel leads to catastrophe.

Ibid.

And

As an organization hires more employees, work on productivity improvements must be a constant priority. Internal tooling, training, and services must be developed and fielded to ensure that all members are able to work on problems of continuously increasing impact. The ceaseless pursuit of force multipliers is the only possible route to superlinear productivity improvements as an organization grows.

Ibid.

In an essay I assume Hale drew from, and which Jessica Kerr’s above quote references, Michael Nygard makes a similar suggestion:

…take a look at your architecture, language, tools, and team. See where you spend time re-establishing coherence when people make changes to the system’s model of the world.

I love all this. Of course, it’s hard. Alexander tested folks, experimented to discover that the tendency was to “reorganize” the mental model into a treelike structure (though perhaps he only interviewed a limited group of white men, or folks embedded in white supremacist society?). But I suspect some of the hardness has to do with:

  1. Our inability to see the costs (ie the complexity, simply constrained by our brain’s inherent limitation to expansively see/hold it all); for leadership looking at the org we can also probably say that their “mental model is necessarily incomplete and out of date”.

It is similarly difficult to visualize the semi-lattice structure – where multiple sets overlap – of the natural city, and thus we tend to reduce it to a tree, the only structure we can visualize.

A City is Not a Tree
  2. An ignorance that we have a semilattice at all. Who out there amongst leadership are systems thinkers? Embracers and students of variety, incompleteness, ambiguity, multiplicity? Who can embrace these, split things, and double down on rewiring the learning (neuronic) threads? Ensembles over A/agile. More of that interspecies contamination, contaminated diversity that Anna Tsing talks about. Mushrooms and the dying earth can help.

Where a matrix indicates a high-touch relationship between two groups (e.g., a group of engineers working on a feature and the lawyers trying to ensure the legal compliance of that feature), efforts should be made to reduce the cost of that interaction by colocating their members (e.g., embed a lawyer with the engineers).

Management or whatever version of leadership you have must stay on top of it. It’s some serious work. Maybe we need better integration tech; it’s like we are missing some connective tissue. Maybe it’s not tech at all.

I remember being met with a lot of skepticism when I suggested to some engineers at my company that we create a meeting support team. It was a partially-formed thought, but the basic idea would be to make a self-selected group available to take notes for other people’s meetings. I don’t think many devs grasp the importance of their meeting communication as architectural design records; and similarly how hard it is to accurately and correctly document after the fact; or document in progress for someone who is also trying to participate. Living in entropy is easier than culture change. I also understand that no computer science education ever taught anyone this stuff was important, which is why Brooks’ observations went unheeded for 50 years.

Postscript:

Why did I keep trudging through Karatani while writing this essay?? I should have known better: a few chapters later I’m cry-laughing because I bump into that hopeful schizo theory which blew my mind like 20 years ago in undergrad. And now I’m like damn because Karatani is saying that the semilattice is probably not the ideal (post)structure I want to imagine upon my software company. Of course, this makes sense in Karatani’s argument because he wants to problematize architectonic thinking.

Despite its appearance, [the semilattice] is orderly and centered.

The Will to Architecture

And

What happens when the semilattice is broken?

Ibid.

By Deleuze and Guattari of course. Whatever, I still think the semilattice is a good formal representation of why coherence costs emerge in the indeterminate, rich encounters of the software symmathesy; the gentle swish of overlapping domains; like at Hearst and Euclid. But the rhizomatic network requires consideration soon. I’m urged that way anyway – see my hints above about interspecies contamination and the anthropocene. I think the ensemble moves toward assemblage soon.

Some patriarchal intervention at Google I/O a while back

Tags: feminism, patriarchy, ursula k. le guin, eugenia cheng, anna tsing, hannah thomas
1/24/2020

About 11 years ago at Google I/O two dudes, Brian Fitzpatrick and Ben Collins-Sussman, presented The Myth of the Genius Programmer to an audience of mostly dudes. Well, we can surmise at the demographics because women were still being invented around this time.

Women are a very recent invention.

Ursula K. Le Guin, The Wave in the Mind

Or, rather, reinvented since they had been denied participation in Western STEM since at least the middle 1700s:

Women lost their security clearances very early in the stories of leading-edge science.

Donna Haraway, Modest Witness

Despite the missed opportunity to cite the origins of congressive world-making in non-human-men beings’ experience, indigenous practice, cooperative economics, anarchism, feminism, liberation theologies, etc…the talk is a pretty good orientation for ingressive types to think congressively about working with other people.

That’s my subtext as I watch. I just wish men were erudite enough to understand their re-invention. Non-mens have to keep rewinding it back. For example:

The men who wrote the [Agile] manifesto are unclear on whether they invited any women. Even so, the thing I notice from both the manifesto, the accompanying principles, and the fact that these 17 men call themselves ‘organizational anarchists’ is that what they came up with is inherently subversive, anti-authoritarian, and feminist. There is an emphasis on self-organising, collaboration, experimentation, welcoming change, and building high-trust and supportive relationships.

Tech, like other industries, continues to produce the worst of/from man because its manifest teleology of Progress (what Donna Haraway calls “Euclidean tales…of tragic detumescence”, the “prick tale”) – sieved through history from the industrial revolution – is wrong for the precarity of the Anthropocene and a damaged earth. For the richness of our symmathesies. We all suffer for it.

Racism and sexism are the key ingredients in the alchemy that has transformed “Don’t be evil” to solving the hard problems of “Controlling the explosion of child porn on the platform” (Youtube, to name a recent example. The darkest of whoops).

What these guys are hinting at is that a congressive mindset can help us move at the speed of trust, and therefore build better software together. So, yep, there was some movement going on a decade ago. Radical muds seeping into the granite of hard dick culture; white boys catching each other and putting together slides for the other boys.

The talk is an introduction and invitation to trust-based collaboration and vulnerability. Touchstones include:

From the progenitor of the terms “ingressive” and “congressive” that I’ve been flinging about wildly:

Ingressive is about going into things and not being waylaid by what people think or by emotions and congressive is about bringing people with you and bringing people together and unifying and making connections between things…. I’ve come to think that congressive behavior is basically better for society but that ingressive behavior is rewarded more by society because society is based on competition. It’s based on how you present yourself so for things like competing for a job, you have to be ingressive to put yourself forward for promotion and to talk about how great you are. Whereas, when you’re actually doing some work with people, then it’s really helpful to be congressive because then you bring people together, you understand people.

It’s devastating to think about the enormity of emotional intelligence training men needed/need/will need; how expensive it is to refactor patriarchy’s structural grammar; that ambition which pits men against one another in an end game for supreme master and commander. In this ever-war even order and peace mean the seizing of power; power must be assumed by the individual. It is not shared.

Sure, there is a drip of undoing of Patriarchy when disenfranchising the myth of genius; but the non-human-men bear the brunt of the burden for maintaining the moral center and possible salvation.

I’m reminded of another talk.

Hillel Wayne turns a similar gaze navel-y and spotlights the peril of “charisma-driven development” we suffer in engineering culture. We might say another symptom of cock-sure genius arising. I love this talk. Recognizing the artifice of loudness and leaning, he looks at the data and calls bullshit on men. The data is good. It’s a tool. Nonetheless, I’m not sure Hillel’s Empiricism will be as powerfully feminist as interventions like Thomas’ archeology. Of course, Cheng’s metonymic replacements are interesting in that they make it easier to implicate women’s complicity in patriarchy. Hot take: overall let’s be skeptical of the neo-classical – logic can win – and neo-liberal – we can reach across the aisle – tactics.

Thinking about heuristics for avoiding code duplication across the stack

Tags: large applications, large teams, duplication
1/20/2020

At the office I’ve sprinkled some in-between-project-work labor on a piece of documentation that attempts a set of heuristics for avoiding code duplication across client and api layers. I kept seeing this type of repetition occur and it was creating too much maintenance risk in the code.

Here’s an example. Imagine a piece of code that takes inventory data from an order and derives a cost based on the order status. This calculation could be executed in the api layer, where it would be serialized into the response body adjacent to the original cost data. Something like:

{
    inventory_items: [
        {
            order_id: 1234,
            cost: {
                major: ...,
                minor: ...,
                // etc...
            },
            cost_display: 'MXN186.67' // <== augmented data
        }
    ]
}

This is a totally reasonable API design. However, a client developer could decide to do the calculation in the view code with a more raw serialization of the inventory item and order objects, and then merge that data:

const composeInventoryItemForDisplay = (inventoryItem) => ({
    ...inventoryItem,
    costDisplay: calculateCostForDisplay(inventoryItem)
});

function calculateCostForDisplay(inventoryItem) {
    let newCost;
    const order = lookupOrder(inventoryItem.get('order_id'));
    const currency = order.get('currency');
    // subsequent processing steps...
    return newCost;
}

In a software system there are many nodes (teams, software that’s running, software that’s being worked on, etc…) working toward similar ends. It won’t be uncommon for a front and back end team to write some code to solve a similar problem – what might even be a simple calculation – while totally blind to the other side tapping away. Frequent team project reassignments, geospatial and organizational distance, depth of expertise, weak cultural value around curious coding, etc… these types of contention costs create the regrettable (though natural) complicatedness that results in risky duplications. Especially risky for inconsistencies that might develop inside essential software modules like pricing computations.

Supplemental thoughts from Coda Hale’s new blog post going around:

Contention costs grow superlinearly as new members are added.

Coherence costs grow quadratically as new members are added.

Limit the number of people an individual needs to talk to in order to do their job to a constant factor.

I’m not sure I would have believed this type of thing would happen before I joined a larger company with an engineering group spread across geos; teams flung across the ownership matrix by the high winds of market shift. This investor and wall street casino shit drives tough team decisions. Lol team pet names: stunted protologisms. Like, what happened to the Squirrel team? Oh, they are the Crib Gto Pomp team now. Well, at least we are consistently circumlocutory at naming teams.

But lo and behold, an example like the above cost calculation exists. I saw something like it in our JavaScript code. Then I asked the folks who work on backend if the frontend really needed to be doing this work. We shouldn’t. We literally shouldn’t, because the computation for this existed in our backend service already. This is one reason we can’t have nice things.

So I’ve been thinking a lot about an approach to code, a kind of mental shim, to mitigate the coherence costs across the stack. Even with a fluctuating number of devs with knowledge about the code. Maybe we should just call this predicament patchy; patchy coding. Here’s what I’ve got so far with a cute story and real life examples:

Avoid duplicating code across the stack

Consider the following scenario which is based on true events.

One day Team RocketShip is working on a feature for Big Initiative X. They implement some computational logic in Ticket Availability Service which adds a new attribute on TicketClass. The code is great, it ships, customers are elated. But soon after, Team SpaceShip is tasked with another piece of Big Initiative X and needs to display the result of similar computational logic in a React app. However, Team SpaceShip doesn't know about Team RocketShip's previous implementation in TAS, and they decide they can actually write this computational logic into the client code using existing TicketClass data. The code is great, it ships, customers are elated.

Uh oh. Now we have duplicate code on the backend and frontend. Which means code that is very likely to become out-of-sync and result in disruptive maintenance work or bugs in the future. But how do we avoid this situation? Unfortunately, this is one of those social (read: people) problems of software engineering. There's no way to reliably automate prevention with static analysis of the code.

Heuristics:

  1. There are > 1 attributes returned from the API that are similarly named.
        Here's a real life example (see the sketch after this list):
        TicketClass used to have attributes for auto_hide, hide, auto_hide_before, and auto_hide_after that client code in checkout utilized to compute the ticket's "hidden" status. In other words, client code asked of these associated attributes: should this ticket be unavailable for purchase? However, creating statically derived properties from a mixture of data may be an indication the backend can do this work for you, or already has.
            Ticket Availability Service already had code that computed this logic. Ultimately the frontend computation was replaced with a new attribute added to the TicketClass API.
  2. There are > 1 attributes returned from the API that are related "communicationally," in that they reference similar types of data that are computed differently.
        Similar to the example above, the existence of auto_hide and hide attributes seems conceptually related – through "hide" – and can be a clue that there may be logic on the backend that could be reused for the hidden status calculation.
  3. I need a static value that won't be impacted by user behavior after first load.
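To make the first heuristic concrete, here's a sketch of the before/after (the window semantics and the is_hidden attribute name are my guesses):

// Before: client code statically derives "hidden" from a mixture of
// TicketClass attributes.
const isOutsideAutoHideWindow = ({ auto_hide_before, auto_hide_after }) => {
  const now = Date.now();
  return now < Date.parse(auto_hide_before) || now > Date.parse(auto_hide_after);
};

const isHiddenClientSide = (ticketClass) =>
  ticketClass.hide ||
  (ticketClass.auto_hide && isOutsideAutoHideWindow(ticketClass));

// After: Ticket Availability Service computes it, and the API exposes a
// single derived attribute.
const isHidden = (ticketClass) => ticketClass.is_hidden;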

Make the backend do the work

Heuristics:

    Will the iOS and Android teams have to implement this too?
        Always remember that the desktop website is just one place where we write code for consumers of our applications. If the feature you are building will likely end up on a device, it's likely the backend code can expose what you need in the API.

Maybe Eithers with Promises

Tags: functional programming, maybe, either, promises
12/10/2019

I want to take some bits to process a recent issue of JS Weekly that reposted a new piece by Eric Elliot about optional values in JavaScript. I read it. It lit up my memory.

One of the first Lambda Cast episodes I listened to was #6: Null and Friends. At this point in time I’m near the start of my FP cultivations, sometime mid- last year I think. It’s probably around the time I was finishing Kyle Simpson’s Functional-Light JavaScript. I was fairly shocked to hear the bros discuss functional languages that are designed to keep null out of your programs. I could feel myself starting to consider the FP language hype, especially the hype around eliminating uncertainty.

Because of JavaScript’s weak type system, optional values represent data that will likely cause problems somewhere in your software system. Basically the language lets us get away with answering I don’t know without any compose-time feedback. Spinning up the browser and pushing play is sometimes your only recourse.

FUNCTION A: Alright here’s some data, can I have that back nice and neat please before I send it to the user.
FUNCTION B: Yeah sure…wait…ermm…I don’t think so actually…yeah I don’t know.
FUNCTION A: Ok great I’ll just show nothing to the user forever or maybe crash the system.
FUNCTION B:

Elliot is also quick to mention the case of uninitialized data. He lists the common progenitors of null:

  1. User input
  2. Database/network records
  3. Uninitialized state
  4. Functions which could return nothing

I’ve got an example from the office, bear with me.

In our application there is a form field. This field represents a maximum limit on the number of holds for your event ticket. There are two types of input this field can receive from a user: 1) an integer or 2) nothing (ie, be left empty). The latter signifies that the user desires to unrestrict the number of holds.

Nullable fields! These are quite commonplace and a user would never think twice about the dark alchemy we’re performing behind the scenes. What the user doesn’t know is that this field maps to a database column that expects an integer (INT). Thus, when the user leaves the field empty and then submits the form, we need to make sure that a “no INT” is mutated into a stringified representation of zero to slot in the POST body: {limit: 0}. What a twisting mind eff: the zero means unlimited. Leaving the field blank is not a lack, but bountiful! 1 x 0 = FUN. Nothing is not nothing. Therefore, a sensible default.

function serializeFormData(formData) {
  const limit = formData.limit || 0; // the empty input string is nullable!
  const nextField = //...
  const body = {
    limit,
    nextField,
    // ...
  }

  return body;
}

Of course, a reverse alchemy must occur from the other direction when our React code initializes the form to begin with. We first hand our form builder an empty object – key/value pairs with field names as keys and undefineds as values. The builder then converts these undefineds to appropriate defaults.

function getInitialFormData(initialData) {
  const limit = initialData.limit || '';
  const nextField = //...
  const initialFormData = {
    limit,
    nextField,
    // ...
  }

  return initialFormData;
}

Looks familiar.

Thusly, the JavaScripter putzes around with a notion of nothingness by phase-shifting duck typed values.

null <-> '' <-> 0

(There’s probably cooler mathematical notation for this.)

Unforch we have to use null and friends for lack of a better optional option. Like, for lack of formal invariants in-language. It’s impossible to truly prevent these meaningless nothings from entering our JavaScript programs. (Meaningless like may never receive meaning, ambiguous, undecided. Totally void 0: what a good euphemism from the grammar.) Like, you can’t serialize nothing for a value in an API response formatted as JSON.

>>> json.dumps({name: })
  File "<stdin>", line 1
    json.dumps({name: })
                     ^
SyntaxError: invalid syntax

Or try and stringify an Object back up again with the same:

>> JSON.stringify({name: })
SyntaxError: expected expression, got '}'

Null and undefined are optional in JS but they are not illegal – unlike in Haskell, which wraps up the ambiguity in a fat Nothing.

Elliot does an interesting rhetorical jiu jitsu by giving us new options for optional values. In lieu of eviscerating null from JS, we can work to push null to the edge of our programs with a handful of innovative approaches. In a sense we can ignore nullables and declutter areas of code which can just focus on data pipelining and other UI biz logic. Techniques include: constructing state machines – highly determined object interfaces – that error without values set to a wanted data type; ie something. We can also take advantage of that new new: Optional Chaining. And then there’s borrowing from FP. The last I love.
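(A quick taste of that middle option while we’re here – optional chaining pairs with nullish coalescing to shove the whole null dance into one expression. formData is the same hypothetical object from the form example above:)

// One expression walks the maybe-missing path and lands on a default:
const limit = formData?.limit ?? 0;

// versus the guard-rail dance:
const limitOld = formData && formData.limit ? formData.limit : 0;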

I’ve already been thinking about Maybes a lot recently. My last post was about using “maybe” in function names to negotiate the unrealistic binary of if/else with better signaling to other developers; like, a function may or may not complete its designated purpose if an async call fails or an argument is unacceptable. The real world is far too fuzzy. In contrast to the imperatives of JS, FP languages substitute nullable data with algebraic structures that encapsulate possibilities of existence or nothingness. For example, the actual Maybe data type represents either a Just (truth, success, completion) or Nothing (false, error, incompatibility). Data that’s wrapped in a Maybe and operated on won’t leak a nullable into our program, like the commonly observed undefined. Obviously implementations vary across libraries. Here’s a simple example from the Pratica library which demonstrates the way that using Maybe can simplify code:

import { Maybe } from 'pratica';

const data = await fetchAllPeople(...);

Maybe(data)
  .map(people => people.filter(person => person.cool))
  .map(people => people.map(person => person.name))
  .map(names => names.map(name => name.toUpperCase()))
  .cata({
    Just: transformedData => render(transformedData),
    Nothing: () => console.log('Womp, no data returned from API.')
  })

(Btw, “cata” stands for catamorphism and means to decompose the Maybe container into simple values. Honestly, I’m not good enough in the category theory yet to confidently distill it for you completely – pun intended – but that’s the gist.)
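For intuition, here’s a hand-rolled sketch of the shape (emphatically not Pratica’s real implementation):

// Just carries a value through maps; Nothing short-circuits them.
// cata "tears down" the container by picking the matching branch.
const Just = (value) => ({
  map: (fn) => Just(fn(value)),
  cata: (branches) => branches.Just(value),
});

const Nothing = () => ({
  map: () => Nothing(),
  cata: (branches) => branches.Nothing(),
});

const Maybe = (value) => (value == null ? Nothing() : Just(value));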

A more basic JS solution might look like:

const data = await fetchAllPeople(...);

if (data) {
    const coolPeople = data.filter(person => person.cool);

    if (coolPeople.length) {
        const formattedCoolPeople = coolPeople.map(person => person.name.toUpperCase());
        render(formattedCoolPeople);
    } else {
        console.log('Womp, only uncool people.')
    }
} else {
    console.log('Womp, no data returned from API.')
}

With the combination of FP-style data pipelining – allowed by Maybe’s monadic interface, I think? – and control flow encapsulated in the data type itself, we get a semantically rich and easy-to-read solution without nullables and exhausting boilerplate; ie, param existence checks.

Where Elliot really surprised me was drawing a line between FP’s similar-to-Maybe data type Either and JS’s Promise. Tucking null away with Promises is super neat. Let’s see how that plays out in a sec.

While maybes represent one or no value, Just or Nothing, Either implementations are slightly different in that they represent one or the other, but not both. If you’re familiar with bitwise XOR, it’s the same algorithm, except that in place of a Nothing or performing a noop, Eithers provide a secondary branch for an error case. Let’s see it in action.

Take Elliot’s example of a small abstraction that hides null checking away in a kind of promisified ternary (which I’ve slightly modified):

const exists = (x) => x !== null;
const ifExists = (value) =>
  exists(value)
    ? Promise.resolve(value)
    : Promise.reject(`Invalid prop: ${value}`);

ifExists(prop.name).then(renderName).catch(log);

Here basic null checking and “primitive” if/else binaries are replaced with a more expressive, semantically rich statement for the logical disjunction: proceed this way if success, or that way.

Now, logging an error doesn’t get us very far from param checking and early returns. A slightly more interesting example might be something like:

const inputExists = x => x !== '';
const ifInputExists = value => inputExists(value) ?
  Promise.resolve(value) :
  Promise.reject(`Input is blank`);

onInput((prevValue, nextValue) =>
    ifInputExists(nextValue)
        .then(validate)
        .catch(() => trackClearInput(prevValue))
);

It’s hard to see the real power of this for a simple resolve/reject example. It just feels like a fancy if/else, right? But if we extrapolate from this base, interesting things start to happen. Here’s a slightly modified version of an example from Pratica’s docs with an imaginary Either that uses Promises under the hood and implements a chain behavior:

const isPerson = p => p.name && p.age
  ? Promise.resolve(p)
  : Promise.reject('Not a person')

const isOlderThan2 = p => p.age > 2
  ? Promise.resolve(p)
  : Promise.reject('Not older than 2')

const isJason = p => p.name === 'jason'
  ? Promise.resolve(p)
  : Promise.reject('Not jason')

const person = await fetchPerson(...);

Either(person)
  .chain(isPerson)
  .chain(isOlderThan2)
  .chain(isJason)
  .then(p => console.log('this person satisfies all the checks'))
  .catch(msg => console.log(msg)); // if any checks reject, then this function will be called. If isPerson rejects, then isOlderThan2 and isJason functions won't even execute, and the err msg would be 'Not a person'
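In case the imaginary part nags at you like it did me, here’s one way such an Either could fake it with a Promise under the hood (my sketch, not Pratica’s API):

// chain just schedules another .then; a rejection anywhere skips the
// remaining chains and falls through to .catch.
const Either = (value) => {
  let promise = Promise.resolve(value);
  const api = {
    chain: (fn) => { promise = promise.then(fn); return api; },
    then: (fn) => promise.then(fn),
    catch: (fn) => promise.catch(fn),
  };
  return api;
};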

Suffice to say I’m quite tickled by the re-purposing of Promises as Eithers. You can start to imagine how one might construct custom-fit control flow chains and layer cakes using thenable types to play nicely with other function composition and pipelining. I’m not always in love with (what feels like) sacrificed readability with chaining over stacked lines of assigned returns or async/await. But seeing it in action using Pratica I’m starting to believe more and more in its viability, even in a codebase touched by less experienced or new developers.

Gripes with FP readability aside, it’s eye-opening to look at available JS language features and see them in a different light. Also, aside from the clever use of Promises, just getting the null check into an abstraction exists(...) already has us using an FP mindset to build strong declarative (function-first) foundations.

Maybe maybes

Tags: naming, javascript
11/18/2019

Migrating my content from Medium was due, but this past week became a particularly opportune time to repost some old content on the subject of naming. Because I’m in one of those feels again, y’all. And it’s about one of the two hard things enshrined by Phil Karlton. Really we just float in a crisis of language trying to speak correct enough to make some money. Between thought and action. Between computer and person. Between persons.

How do I communicate to the next developer that this function might do one thing, or it might do another?

The other day I’m refactoring some code and picking apart a function to understand how it works. Let’s call this function saveThing. It’s about 60 lines. Kinda medium-sized but a lot of lines are neat procedural steps of computation. The part that’s quizzical to me is a condition that will exit the function if the user is missing a permission. So…sometimes this function won’t save a thing.

This is FUNKY. I want to rename the function to maybeSaveThing because the steadfast verbiage seemed incorrect. Like, literally it was signaling something wrong to me about its gist. I think I may have been particularly sensitive because I had already spent two days refactoring a bunch of code in this area and was becoming a bit exasperated with various kinds of misdirection. I really wanted the code to start being straight with me. But it’s when we’re weakest that we need the code to be brutally honest. Fixing critical bugs. Nearing deadlines. When stuff’s happening at home. In the world. Ironically, these modes reveal the code’s honest readability.

But this could be a matter of style. I’m not sure. I’m mulling it over. I have colleagues who think maybeSaveThing isn’t really necessary. The internal conditional is obvious enough once you peer inside. I think I’m going to fight with them more about it.

I felt validated when a homie from a JS community I hang with on Slack gave me an example of production code from his place of work:

componentDidUpdate() {
    this.maybeRedirect();
    this.updateTitle();
    this.maybeDoActivation();
    this.maybeInjectSampleTimelineItems();
},

I LOVE this. Don’t you think it reads nicely? Like, you can really understand the sequence. The maybes communicate a true story of many possibilities. I’m imagining refactoring this component transition block. These function names are helping me decide what to care about now or later; to maybe deal with them or not.

“do” helper for emphasis in variable naming

Tags: variables, naming
11/10/2019

Reposted from medium
Originally published: 9/19/2015

However you mentally wrap the process of realizing virtual things (writing, accretion, rose gardening), there’s no question literary-ness does imprint at some point on IDE surfaces. In my experience working in the higher stories of the tower (cloudgineering, we so high): up there with web pages, casually cast instructions in Ruby and JavaScript, cascades in hierarchical CSS computer notation, and declarative document formatters (XML, HTML); up here you’re definitely notating control flows and arguments that require semblances to English lang. Arguments. Computer languages.

The other day I surprised myself by forming an excited opinion during code review about what a variable should be called. I guess the time had done arrived; I had spent just enough time in Ember and client state management to become OCD about the signifiers being passed around. And so I found myself feeling incomplete when I read over a line of code announcing a state change event: facetChanged. I can’t totally explain it but this wording felt insufficient. I instinctively wanted more. Needed surge troops. Something like…facetDidChange.

The difference is slight, but important.

(It’s hard, y’all. One of two things.)

I researched the grammar to figure out why the variable was letting me down.

Turns out the “do” helping verb isn’t a special thing. But when added to another verb in the preterite, it increases emphatic assertion, heaping deeper credibility on the thing that did happen. You can really feel it on your brain tongue, right? The additional slam of d in the phrase, like doors shutting — the hardness more certain.

Or maybe the reassurance is there because of the influence of the black/southern dialect on modern American English; the Shakespearian-like emphasis where past finality is hammered home when something done happened, or 2x emphasized when someone done did.

The code doth protest too much??

Eh, I didn’t push for a rewrite in the code review. Naming is a bar brawl and sometimes you fight and sometimes you don’t. The author is balancing empathy for their colleagues with their own mental frame of the application at the moment they create the code; and then there are many an English word and conventional abbreviations, degrees of formality and tone. There’s a reason Steve McConnell dedicates 30 pages to The Power of Variable Names. And this section doesn’t even explicitly tackle naming receivers, subscribers, etc…

Just tryna make sense of what these users be doing.

What’s in a name?

Tags: naming, variables, Shakespeare
11/10/2019

Reposted from medium
Originally published: 9/19/2015

There’s always heated babbling (err…babeling) on the cyberspace to assign metaphor to our embryonic field of building digital things: writing or engineering or accretion. One thing for sure, humans have a religious proclivity to conquistador in the bikeshed when faced with terra incognita. Perhaps, ironically, because it’s something of a science and therefore seems to ask for a pinning down. But these guys, some of us, just seem unable to leave it alone to variance; to let it lie under a broad, mercurial atmospheric plane of something like creating.

I remember when I attended my first Railsconf in 2014. In the opening keynote DHH made a todo about software writing (and more famously that TDD was dead).

Hello, World!

Fucking comma in there. You got an author-face bro.

At that time, 2014 in Chi-city (my place of birth), I was too green building product to really care what you called what I was doing. (Maybe I’ll never actually care that much.) It wouldn’t have helped me get a quicker handle on the revealing module pattern.

And then, a couple days later in the closing keynote, Aaron Patterson rebukes the writing metaphor and talks about the advanced engineering of improving Rails 4 query performance. They totally did that on purpose. Those bros.

What’s in a name? Sigh, we don’t have the luxury of poetic soliloquy when getting business done and keeping this company afloat. Shitty code smells like shit no matter what you call it. I’m not gonna care too much for now what the pontificators put on the RHS.

- <-{@

More about anointed princes and gatekeeping culture

Tags: gatekeeping, software industry, gee paw hill
11/7/2019

Ok, gonna brain dump a little because I got excited about some synergy! Maybe I’m just really starting to find some peeps in the trillicon valley zeitgeist.

Gee Paw Hill published a blog a few days ago that retrospects on geek culture gatekeeping (I think we can basically proxy software industry for geek culture here). He presents this notion of our geek culture as “thin culture”; boundaried, exclusive. Lo and behold I was pleasantly surprised to see him use the phrase “anointed princes” to describe the folks – “they’re usually men” – who perpetrate the thinness. Yay because this is nearly identical language I used in my last post for analyzing the behavior I saw in some other internet writing that made me feel poopy. I feel validated.

Feels!

I like that he backtracks the finger-pointing though; like, a fire with fire counter is a non-helpful meanness here. But also, I’d argue some anger first is ok. “Anger is clarifying” – as Fleet Maull once told me many years ago. Or maybe a tool. Today during the Human Dimension of Business training at work we practiced the art of the rant – privately journal your story of blame – as a bridge step toward truth and clarity; and as a way to measure and check our extra assumptions against the NVC-like framework (“The facts are…”, “I feel…”, etc…).

There’s some interesting stuff in Gee Paw’s reflection. I feel armed with richer clues about where gatekeeping culture is inherited from, and inspired again to work with it in my own being.

Loneliness, isolation, computer as friend, for some of these guys. I mean these are kind of obvious origin story props that will perpetuate harm and abuse across generations. It also may be a narrative setup for geek thinness that Gee Paw warns us to see for fiction.

But I think, bringing his frame of three gatekeeping behaviors that keep our culture narrowly defined – “badging, self-blindedness, atemporality” – to any reading or conversation with the anointed princes in our industry will likely save us from feelings of smallness. (Those of us who came in through the side door; ie not academia). And to my fellow white dudes, it will help us break the cycle of abuse.

Let’s freestyle this a little bit:

Harmful ways to write about software

Tags: gatekeeping, java, oop
10/10/2019

Dispatch from Soekarno-Hatta International Airport. Arrived way too early because of traffic fantasies.

While leafing through my daily software briefs I came across an article with that disappointing refrain in our industry: this should be easy to do. There’s probably no better way to put a reader on the defensive and create an adversarial teaching situation. Suddenly I’m challenged to match the author’s intellectual prowess or else remain befuddled and continue writing shitty code. Why you fuckin with my head???

The article is another offering to the OO world which introduces the dependency indirection “injectable factory” design pattern as a cure for “rigid direct dependencies” in unit tests. Super simple stuff.

Also just LOL the indexical disambiguation science of the OO world. I’m just in love with Steve Yegge’s satire about this. A true kingdom of nouns. Frankly it’s a deliciously bookish world compared to JS-land where it’s function function function function.

If I grokked it right, I think the article is just a way of instructing the use of mocks without dependency injection, helping you avoid writing a service implementation for your tests that looks like this:

public class OrderServiceImpl implements OrderService {
  private final ShoppingCartService cartService;
  private final PricingService pricingService;
  private final InventoryService inventoryService;
  private final UserSession userSession;
  private final TimeProvider timeProvider;
  // ... and many more, this list will keep increasing when more requirements added

  public OrderServiceImpl(
      ShoppingCartService cartService,
      PricingService pricingService,
      InventoryService inventoryService,
      UserSession userSession,
      TimeProvider timeProvider) {
    this.cartService = cartService;
    this.pricingService = pricingService;
    // ....
  }

  public CreateOrderResponse createOrder(CreateOrderRequest req) {
    // ...
  }
}

Ah yeah that looks hairy. I’m flashing to Jest unit tests where small function implementation details are mocked all the way down.

I remember a moment, a phase if you will, in my first couple of years working professionally as a software engineer when I thought that understanding OO design patterns was part of our rite of passage into…seniority, I guess? At the time I was beginning to write more in the Rails parts of our codebase and as a consequence, began inheriting a pressure, a notion – from where I’m not sure – to achieve familiarity with common OO design patterns.

From where, indeed. It’s really subtle how this worldview is propagated amongst our ilk but I think I’m starting to see, with clear eyes, the quiet, viral expansion.

For example, this diegesis about “injectable factory” that’s got me irked. When the author completes the situation and complication setup, he begins his solution section with:

The pattern is pretty simple.

Umm great, so what you’re proposing is just a fancy name for something I probably already know about? But then:

It’s implemented as a mutable value holder. This pattern is related to the factory, service locator, singleton pattern and ambient context pattern. All these patterns are considered anti-patterns according to most DI articles. But in practice, those patterns, especially singleton, are very handy and useful. They are prevalent in almost every non-trivial project. But they come with caveats. InjectableFactory tries to prevent those caveats at the same time keep the convenience and simplicity of those patterns.

It’s so simple I just need to have a solid understanding of these (what must be other simple) concepts as well:

  1. dependency injection
  2. a value holder (probably Java slang for some kind of variable?)
  3. mutable value holder
  4. factory pattern
  5. service locator pattern
  6. singleton pattern
  7. ambient context pattern
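For what it’s worth, here’s my best guess at the gist, sketched in JS since that’s my land (every name below is mine, not the article’s):

// A module-level "mutable value holder": production code asks the holder
// for its collaborator; tests swap the factory instead of injecting
// through a constructor.
const injectableFactory = (defaultFactory) => {
  let factory = defaultFactory; // the mutable value
  return {
    get: (...args) => factory(...args),
    override: (testFactory) => { factory = testFactory; },
    reset: () => { factory = defaultFactory; },
  };
};

const pricingServiceFactory = injectableFactory(() => ({
  price: (order) => order.subtotal * 1.1,
}));

// In a unit test:
pricingServiceFactory.override(() => ({ price: () => 0 }));
pricingServiceFactory.get().price({ subtotal: 100 }); // 0
pricingServiceFactory.reset();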

I’m reminded of Rich Hickey’s 2012 Rails Conf talk where he disambiguates simple from easy – a linguistic casualty that creates confusion in our industry about what “simple” or “easy” software really is or achieves. In the talk Hickey helps recoup a specificity for each word through an etymological tracing. Distilled, you take away that simple is objective and easy is relativistic.

In consideration of the foregoing, our Javian knight likely means easy, which, by Hickey’s redefinition (rather, reinstatement of acuity), means a spatial relativism of nearness. As in easy to go to, or get, or be near to our understanding. In other words, familiar. Like a better developer experience. Like:

…can I get this instantly and start running it in five seconds?

But he probably also means “being near to our capabilities”, the promise of easy.

Even if the usage intent is closer to Hickey’s easy, what about this list of like seven things I need to know that may not be near my capabilities? Like, easy for whom? What type of reader is summoned here? Since the author never names his imagined reader/audience we’ll never know for sure, but we can infer it’s likely folks who are experts in OO and the Java lang. Perhaps that’s the transgression that bothers me: this invocation of simple to self-anoint as meister amongst a court of meisters, which without warning patronizingly forecloses access for an uninitiated reader. I think this is the gatekeeping emerging, infectiously. Almost too quickly to observe. I’m really glad Hickey also finds this circumscribing of code as easy dangerous for our industry’s wellbeing:

And we don’t like to talk about this because it makes us uncomfortable because what kind of capabilities are we talking about? If we’re talking about easy in the case of violin playing or piano playing or mountain climbing or something like that, well, you know, I don’t personally feel bad if I don’t play the violin well because I don’t play the violin at all.

But the work that we’re in is conceptual work, so when we start talking about something being outside of our capability, well, you know, it really starts trampling on our egos in a big way. And so, you know, due to a combination of hubris and insecurity, we never really talk about whether or not something is outside of our capabilities. It ends up that it’s not so embarrassing after all because we don’t have tremendously divergent abilities in that area.

Sigh. The deleterious effect of gatekeeping. Like is there some insecurity at play here for the author that requires he front his meisterness?

Though what if…I wonder if the author actually intended to describe injectable factory as simple by Hickey’s definition? As in, requiring less effort to “repurpose, substitute, move, combine, extend”, and less coupled (“interleaved”). The root of the word is one fold, braid, or twist. That does seem to be accomplished in the design. Nevertheless, I can’t get over the heavy connotation of reader culpability and capability implied in the (weak) qualification of “pretty simple.” Language is weird.

I’m thinking it’s probably best to avoid “simple” and “easy” in technical writing. Let’s not fuzzy match.

This reminds me of when the words good or bad are used when discussing code. Without more objective substance, simple evidence, they just do a big hand-wave over the facts and truths of the software’s outcomes and the shared understanding of the folks in the conversation. Is the software reliable, easy to change? What level of expertise is required, what audience are you interpolating into this discourse of nouns?

Is simplicity achieved?:

There are tons of libraries that look, oh, look, there’s different classes; there’s separate classes. They call each other in sort of these nice ways, right? Then you get out in the field and you’re like, oh, my God! This thing presumes that that thing never returns the number 17. What is that?

Is easiness achieved?:

…can I get this instantly and start running it in five seconds?

A better term for unintentional technical debt

Tags: technical debt
10/6/2019

The other day I got into a small argument with coworkers during the Sailboat exercise about the meaning of “technical debt.” We were hunched over stickies and milling about thoughtfully. Suddenly I leaped from my chair and smacked a note on the whiteboard: application too big. The rent is too damn high!

This damn slice of monorepo in which we had been toiling over the course of the last sprint had blind-sided us as a super bloom of contributions from different teams. The code was getting hard to reason about; we were hostage to the out-of-bounds-ness. It was getting too complicated:

It has many interconnected parts, each full of details. Conditionals abound, forming intricate rules that may surprise us when we read them.

Now, I was reticent to describe this anchor I threw on the boards as any kind of “debt” since I’d been thinking more about debt as the future IOU on quality to increase speed in the present. By that narrow definition, if this nuanced latticework were debt, we’d be able to rustle up the ledger and observe the amortization plan, hopefully incremental and steady. To think of complicated code emerging from a combination of variegated knowledge across teams, and the teleologic weight of entropy (bit rot, cruft, and so forth) didn’t situate in Cunningham’s metaphor for me because it showed up as a stunning surprise. Are we ever really surprised by our credit card bill?

There are other sources of debt people talk about that relate to negligence. Procrastination, not revisiting architecture, postponing test-writing. I truly believe the teams working in this code were doing it well, so that flavor wouldn’t help situate this present monorepo madness for me.

Well, there’s the whole category of “unintentional” debt. But that felt wrong, too. There is an aspect of stuff out of our control, but the intent/unintent dichotomy feels unsatisfying if software is a rich system of people, code, automations, feedbacks…I don’t know. Language is hard. I guess I’m gliding at the edges of a linguistic constraint with the lexicon that was tossed around the table by my colleagues. Some sort of weak Sapir–Whorfian confusion that I can’t peek around.

Nonetheless, I conceded to my fellow engineers of wares that are soft: ok, this debt that so cunningly crept up with bold surprise was, in fact, an (indefinite) form of technical debt. In fact again, the frame I particularly appreciated was this: while it may not have been a result of a conscious decision, now that we observed this mucky muck we could choose to deal with it as debt. Ok, debt is debt whether it arrives by forethought or entropy.

Luckily, as I was catching up on my daily doses of software newsletters this morning, jessitron (who I can’t stop quoting) hyperlinked her latest blog to another post from the Atomist site that gives us what I think is more accurate language for this very type of technical debt. Frankly, any optimistic (re)casting of language in our field is welcome; we’re so prone to feeling unfinished, incomplete, or overwhelmed while insisting everything is fine (eh, nope).

Technology drift is a form of technical debt that comes from progress. As our target architecture advances, it leaves a trail of projects in all its momentary incarnations. As repositories proliferate, the accumulation holds us back. We have many technologies and many versions of each. We have the risks of all of them.

TECHNOLOGY DRIFT

Fuck yeah, let’s use that!

Two tales of Binary Search

Tags: javascript, interviews
10/1/2019

I still have lingering rage from two years ago when an interviewer said to me: “I could probably implement this in about 20 minutes.” Seriously crushing words to utter offhand during a facetime code screen for someone who has been programming and building web apps professionally for 3+ years.

The problem was something like find the nearest value to x in the array. I’m so bad at toy algorithm questions since I basically spent those first 3-ish years smashing a ton of Rails and Ember into my brain and worked to be productive building more typical business app/e-commerce style UIs. I wasn’t a whiz at data traversal. More like a deep component interface expert. What else would you call it? Suffice to say a simple filter function at O(n) time would satisfy the requirements. That is lived experience.

But my interviewer pushed me to consider a big fat array. I was stumped, and failing hard in the spotlight. Only 20 minutes. Bro even let me spend some time after the interview coming up with a solution to email later, but that was futile. I didn’t even know how to properly phrase the problem for Google. I got rejected.

What I learned later is that bro was looking for a solution implementing binary search – something any CS grad would know. I didn’t have a CS background. I learned to program on the job and never had to care about that level of performance in the apps I worked on.

To this day, around 6 years into my career, I haven’t had to implement a binary search algorithm to scan over a big data set.
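
For closure, here’s roughly what bro was after, as far as I can tell. A sketch, assuming a sorted input array with at least one element:

const findNearest = (sorted, x) => {
  let lo = 0;
  let hi = sorted.length - 1;

  // Classic binary search: halve the window until two candidates remain.
  while (hi - lo > 1) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === x) return sorted[mid];
    if (sorted[mid] < x) {
      lo = mid;
    } else {
      hi = mid;
    }
  }

  // Pick whichever boundary value is closer to x.
  return Math.abs(sorted[lo] - x) <= Math.abs(sorted[hi] - x)
    ? sorted[lo]
    : sorted[hi];
};

// findNearest([1, 3, 8, 10, 15], 9); // => 8 (ties go to the lower value)

O(log n) instead of my O(n) filter. Would that have gotten me the offer? Who knows.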


Julia Evans put out a short series of zine pages recently that describe best practices for debugging. And guess who showed up? Binary search! Leaning back at my desk chair I realized that all along I’ve been using binary search in my every day work.

When fixing a bug without a useful stack trace, triangulating the problematic code requires what I used to think of as a kind of kludgy technique: step by step, commenting out large parts of the code at the top and/or bottom of suspect files. Like, literally just comment out half the file where you think the buggy code lives. Does the error still throw? If yes, the problem is not in that half. Try commenting out the other half. If no, then keep halving that part of the file. Recursively repeat. This is binary search. Also, did you notice we’re now also talking about recursion, a topic that even senior programmers have trouble with?

“Advanced” CS concepts can show up in our work all the time. Whether it’s UI, backend, databases, ops, throughout all layers of this mushy cake stack. (“Mushy” as in blended, bleeding, fluid, transitional. Not as in gross, unfit, unstable.) We need to begin reconciling with the danger of narrowly defining their employ and weaponizing them in toy code examples for gatekeeping (cough cough interviewing).

When we learn the meaning behind these CS concepts, we may actually discover more creative ways to use them. Or how they are already part of our toolkit. For example, git-bisect!

Deep(er) software concepts showing up in UI problems

Tags: javascript, bitwise operators
9/16/2019

I’ve got three posts in my brain backlog now about more complex software concepts showing up in UI work. Here’s the first!

I’ve been waiting a long time to use a bitwise operator in “real world” JavaScript – like 5-6 years – and the opportunity finally presented itself the other day.

In UI, near 100% of the time the basic comparison and logical operators of the JavaScript language give us the power and control we need to express product requirements in code. Equal, not equal, AND, OR, NOT, etc. But then there’s that arcane set of bitwise comparison operators. These special operators give us expressiveness for more complex comparison scenarios. They’re also just kinda weird because they do comparisons at the binary level – that is, they coerce their operands to 32-bit integers first and compare bit by bit.

Take XOR, which in layman’s terms is “exclusive OR”. In JavaScript, XOR returns 1 when the two sides of the operator evaluate differently and 0 otherwise. Similar to a boolean return, we can easily pass around the result of a bitwise operator as a predicate.

As it turns out, this happens to be the very logic we need for testing existence between two dependent form fields.

At Eventbrite our UI library has graphical pickers for both date and time form fields. Our designs typically place these individual components next to each other and make them required. The user is free to change their values, though we do provide sensible defaults. One not so surprising possibility is that a user can leave one field blank by accident. Of course, not having an exact date and time for ticket sales dates doesn’t really make sense. Therefore, since we want to give the user some immediate feedback if they put the form in this state, we run a validation on blur using XOR logic!

Note that for checking existence we don’t want to bitwise compare the two sides of the expression directly, since they could be many kinds of strings. To make the comparison reliable, we cast each side to a boolean with a bang. Then we wrap up the expression in a composable function. The result is a very concise one-liner:

const isOneTruthyAndTheOtherNot = (a, b) => !a ^ !b;

Which might be passed around in my hypothetical React event handler like:

dateTimeValidator: (dateValue, timeValue) => {
    const hasEmptyField = isOneTruthyAndTheOtherNot(dateValue, timeValue);
    const error = hasEmptyField ? FORM_ERRORS('dateTimeField') : null;

    this.setState({
        dateTimeFieldError: error
    });

    this.props.onError(error);
}

This could be written in a couple ways without the more arcane XOR:

... = ( foo && !bar ) || ( !foo && bar );
... = foo ? !bar : bar;

I’m generally against using overly clever code in codebases that are worked on by less experienced engineers, but I think the bitwise operators are a great tool for anyone to know. And the MDN docs are very clear about how XOR works:

“Performs the XOR operation on each pair of bits. a XOR b yields 1 if a and b are different”

The docs will also introduce you to the algorithmic decision table for the XOR logic, which is another useful tool to expose new developers to.

a   b   a XOR b
0   0      0
0   1      1
1   0      1
1   1      0
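
Mapped back onto our one-liner, with made-up field values:

isOneTruthyAndTheOtherNot('2020-10-04', '');       // 1 -- date set, time missing
isOneTruthyAndTheOtherNot('', '19:00');            // 1 -- time set, date missing
isOneTruthyAndTheOtherNot('2020-10-04', '19:00');  // 0 -- both present
isOneTruthyAndTheOtherNot('', '');                 // 0 -- both empty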

What always makes this sort of exposé interesting is that the early-web understanding of UI still colors our perception of UI work; like, UI is just a sprinkle of scripting and layout and browser wrangling that gently rests on top of the real software where the computer science happens. Or maybe it’s changing. But I feel like there’s still too much emotional labor educating the web dev community about complexity throughout all layers of this mushy cake stack. “Mushy” as in blended, bleeding, fluid, transitional. Not as in gross, unfit, unstable.

White theft, white entrepreneurship

Tags: racism
9/9/2019

I’ve been reading two texts this week. Side-by-side they offer another reveal of the tragic double standard of black and white life in our America. I really felt that truth descend upon reaching 90 pages into Shoe Dog, Phil Knight’s memoir about starting Nike. On the one hand: a story about a white Christian rich kid manifesting a new destiny for himself away from mediocrity using military connections to build a business with a recently conquered nation. 1964: the year Knight starts Blue Ribbon Sports. On the other hand: I’m half-way through the drudge of Vann R. Newkirk II’s long-form in the Atlantic, The Great Land Robbery, about the vast land theft and wealth transfer during the civil rights era from blacks to whites in Mississippi. 1964: by this year almost 800,000 acres of land have transferred from blacks to whites as a result of legal discriminatory (racist) federal farm loan programs and private lenders.

Knight’s pop fantasy of himself and the pursuit of a vision to make life about “play” through footwear and lifestyle branding falls so fucking flat against the underlying drivers of the globalization that was making cheap shoes possible – the return to exploitative capitalism (slavery), the maintenance of a permanent underclass. Etc. This theft shifted majority voting power before blacks could vote. This theft continues to enrich white investors, hedge fund managers, and agri-business who now own these farm lands once held and suffered over by black families.

Overall I enjoy Knight’s story. But the American dream is still available for a white ruling class only – and those they selectively permit. Holding that heavy.

React inline function gotcha, but in a non-obvious way

Tags: react, DOM, performance
7/31/2019

I think a lot about Hillel Wayne’s blog post INSTRUCTIVE AND PERSUASIVE EXAMPLES where he does this interpolative critique of a best practice article on unit testing.

His argument is that “instructive” examples don’t make a reader care. “Persuasive” examples, on the other hand, are harder to invent, but better. They’re hard because:

  1. If your example is too simple, people brush it off.
  2. If your example is too complicated, people won’t follow. They’re also less likely to trust it, because your complicated example might be concealing a simpler way.
  3. If your example is too abstract, people won’t see how it’s useful in practice.
  4. If your example is too concrete, people won’t see how it generalizes to their own problems.

We see examples of #1 and #4 all of the time in technical writing – maybe because a lot of it is written for marketing purposes. Y’all, let’s be careful about how we present anti-patterns and best practices when we’re trying to thought leader. Often this writing ends up being spec or framework documentation plagiarized and adorned with gifs. But without contextual complexity we may be doing a disservice to our fellow humans who dev – especially less experienced devs.

Persuasive examples are harder but the payoff is bigger. By demonstrating the why and why not there’s a better chance of putting knowledge into your reader’s brain and making it stick. This is certainly my experience. If you:

  1. Share the reasoning behind all of the small decisions getting from A to B, or determining why A or B is important
  2. Give me code that looks more like code “in the wild”
  3. Explain whether these ideas are new or old

I’m more likely to recognize anti-patterns or work my way out from under one in a living, breathing codebase.


To wit. (An attempt at a persuasive example.)

Just this week I toiled on a bug with a colleague that ultimately turned out to be a case of a classic React anti-pattern where an inline function invocation caused a re-render of a Pure Component.

Now, I’ve definitely read at least 3-4 instructive articles on the pitfalls of declaring functions inside of render(). Perf implications and unnecessary function creation, etc…. But that insight didn’t help me or my similarly schooled colleague solve this bug inside a more complex and complicated system.

We failed spectacularly to bring our academic rigor to bear because we weren’t working with the familiar sandboxed toy code from instructive examples. The root cause became obvious only after we did hours of thought-work to pinpoint the problem code.

Finding problematic inline functions in a triviality like the ones from Hillel’s lament would be much easier:

render() {
    return (
        <button onClick={() => this.setState()}>
            Click Me
        </button>
    )
}

But in our case, the problem function invocation was buried in a nested component wrapped in a separate constructor function and abstracted into a separate helper in a separate file and blah dee blah…you get the point. Our journey was starting miles from render().

Our code better resembled:

// Template.js
import React from 'react';
import Layout from './Layout';
import { connectSidebar, connectMain } from './connectors';
// SidebarView/MainView stand in for the wrapped presentational components
import { SidebarView, MainView } from './views';

export default (props) => {
    // ...
    // connect called on every render, creating new component types each time
    const Sidebar = connectSidebar(SidebarView);
    const Main = connectMain(MainView);

    return <Layout sidebar={Sidebar} main={Main} />;
};

// Page.js
import React, { Component } from 'react';
import Template from './Template';

export class Page extends Component {
    // ...
    render() {
        return (
            <Template {...this.props} />
        );
    }
}

The bug was observed when a user clicked the “Buy on Map” button in the Sidebar. The click was getting swallowed and not transitioning the view.

So what was going on?

When we looked closer we noticed that at the moment of click an unwanted blur event would be triggered on the form field in the <Main /> component, and its state would be updated. This redux-form state update would then ripple through the component tree and cause the sidebar to be re-rendered, even though none of the props passed to <Sidebar /> had changed. The result was the sidebar button getting re-rendered in the middle of the click event, which meant the newly rendered button’s click handler never captured the click and couldn’t transition to the next view!

One hack we considered was changing the onClick handler of the sidebar button to onMouseUp – the newly rendered button could receive that event (browsers are weird). But my colleague couldn’t let it slide so we decided to troubleshoot the real issue: the sidebar getting re-rendered on every field blur when its props weren’t changing.

After binary searching the code up and down <Page />, deleting chunk by chunk until the re-renders stopped, we discovered the fix to be fairly straightforward. Just move the invocation of connectSidebar and connectMain outside of the Template function context into the module context.

// Template.js
// Connect once, at module scope -- the same component references are
// reused across renders (same hypothetical SidebarView/MainView as above).
const Sidebar = connectSidebar(SidebarView);
const Main = connectMain(MainView);

export default (props) => {
  // ...
  return <Layout sidebar={Sidebar} main={Main} />;
};

Now, when <Layout /> was rendered, the component passed as the sidebar prop wouldn’t be re-created – the connect call has already been invoked and its return value assigned once at module load. In other words, we are not creating a new function in memory and executing it fresh every time.
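
The crux, distilled to a couple of lines (same hypothetical names as above):

// Two calls yield two distinct component types, compared by reference:
connectSidebar(SidebarView) === connectSidebar(SidebarView); // false

// So a connect call inside the render path hands React a brand-new element
// type on every pass, and React unmounts and remounts the whole subtree
// instead of diffing it.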

Sigh. We likely ended up in this place by not being careful with nested connectors and their subscription boundaries. Sometimes you forget that it’s just functions all the way down. Like, declaring and assigning a component constructor inside another component constructor would not be how you’d typically compose your functions. I will humbly accept this not uncommon (read: forgivable) symptom of “bottom-up” programming. (See David Khourshid’s talk about finite state machines and the sad stories of “bottom-up”).

Unlike the pristine nirvanic fields of instructive examples, we make our bed in large projects born from large organizations – cue Conway’s law. The requirements for the application accrete in fantastic ways over time. Cue McConnell’s oyster farms. The primitives you start with to satisfy embryonic requirements, like a root-level <Page /> component, may just become one large prop-drilled well. Graph hell.

This means you suddenly find yourself debugging why a click event on a sidebar button is being swallowed. You notice the divs are flashing in the Elements tab of Chrome dev tools, which means the browser is repainting the sidebar on click – re-renders!

Ok, so instructive examples won’t necessarily help you. But I’ll also asterisk this post and append that they aren’t worthless either. Post facto I can properly identify and classify this bug as a commonly known React problem – that is, a problem with React composition, not a problem inherent to React – because the instructive examples are out there contouring a problem space and a shared vocabulary. I can then use this shared language when communicating the what or what caused this during a retrospective or incident report. But I suspect I could have arrived at a similar conclusion sooner if I’d read articles with more persuasive examples.

499 closed connections

Tags: nginx, debugging, 499, promises, async/await
7/13/2019

Bugs reveal. We look, observe. I learn things. I just experienced another one.

The customer can’t publish. Ensue existential how come???

After some investigation we realized our JavaScript was erroneously deleting a parent entity too eagerly in some other flow and leaving us with child data stranded in the db. “Publish” didn’t know what to do with bad data.

The code we found responsible for persisting these entities was designed as a kind of sequential “transaction” (heavy scare quotes because this is definitely a loaded term).

When the user clicked “Save”, separate POST requests for parent and child/ren would be sent one at a time – our restful API routes didn’t allow us to send merged data. Now, in the scenario that the POST request for the child failed, our JavaScript code would send an immediate destroy request for the parent – like a roll back. Take a look at the code (simplified, reconstructed from memory with hypothetical names):
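
// Hypothetical reconstruction -- invented names, same shape as the original.
const saveStuffThunk = () => async (dispatch) => {
  let parent;
  try {
    parent = await saveParentEntity();
  } catch (err) {
    dispatch(showSaveError(err));
    return;
  }

  try {
    await saveChildEntity(parent.id);
    // ...other synchronous things (parsing, mapping state, etc.)...
  } catch (err) {
    // "Roll back": destroy the parent we just created.
    await destroyParentEntity(parent.id);
    dispatch(showSaveError(err));
  }
};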

Can you see how this code was written a bit too simplistically? From what I can tell there are at least two latent problems that make this code prone to fail in a way we don’t want.

  1. First, a parse error may be thrown during “other synchronous things” after the saveChildEntity promise is fulfilled. See a contrived example of that: async/await with synchronous error (also sketched after this list)

  2. Second, it’s possible that the POST request initiated by saveChildEntity may succeed on the backend and persist the child data, but the connection between browser and server may be severed before the browser receives the 200 and the promise becomes fulfilled! When that happens, the promise is actually rejected and the runtime goes into the catch block.
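
A contrived illustration of that first problem, inside the same hypothetical thunk:

try {
  await saveChildEntity(parent.id); // POST succeeded, promise fulfilled
  JSON.parse('not json');           // ...then a synchronous SyntaxError
} catch (err) {
  // The rollback path fires for a mere parse error, deleting a healthy parent!
  await destroyParentEntity(parent.id);
}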

It was the latter.

It seems obvious in retrospect. We allow the user to hit publish anytime which kicks off a heavy network sequence that finishes with a full page reload. But, while the publish sequence is in flight the user can still interact with the page. Meaning they could click another visible button – “Save” – that triggers saveStuffThunk. Based on the server logs, it seems that fairly often the publish would complete and then start to reload the page right in the middle of the second try/catch block of saveStuffThunk. When that happens nginx sends down a special 499 status code meaning the client closed the connection before the server could respond. The JavaScript then interprets this as an error and sends the runtime into the catch-delete block.

The logs literally read (simplified):

It still blows my mind this happened consistently enough to affect hundreds of records. The browser deterministically queues/coordinates? It was a very strange UX-driven race condition.

In addition to realizing that our thunk code was written too…optimistically, another really interesting aspect of this bug was learning that we had missed the really really important requirement of locking the page for the user when they click the “Publish” button. This was actually implemented for other similar interfaces, but when my team implemented a new screen with similar access to the “Publish” button, we didn’t fully understand the potential of this race condition. Or how to prevent it.

Bugs are sometimes a result of a big complex system with fast-shifting pubertal code and fugitive ownership creating blind spots.

“Every existing feature, and even past bugs, makes every new feature harder. Every user with expectations is a drag on change.” - jessitron

It sucks but we observed some new things and thereby learned some new things.

Yet another cool, tangential observation/learning came from understanding the possible sources of 499s. I did some digging into this and discovered that connections could be cancelled eagerly by a load balancer, or proxy “client,” upon a timeout. See: Nginx 499 error codes. Our server ops folks were able to confirm we did not have a load balancer managing the requests. Demystifying is an important part of this process. Don’t follow clues you don’t have to. A timeout might point to a slow service.

I’m just hard reflecting on how signals of “broken” – like bad data – can reveal many interesting things about the system. Just think about how much our JavaScript promise handling hid wondrous possibilities.

Debugging a test that does nothing

Tags: debugging
6/24/2019

This weekend I spotted Julia Evans posting some tips about debugging – of course a zine followed – and it resonated deeeply because it touches on one aspect of debugging that I often struggle with. It’s comforting to know this is a common kind of struggle!

Every time we sit down to debug a program we bring a narrow worldview to bear about how the program should work. Oftentimes this assumption is what makes the debugging process laborious. Each fit and start becomes a heated, lasting tango between your worldview and your proofs against the current reality. While this dialectic ensues you begin to receive feedback, often surprising, which may expand the dance to different hypotheses. Or it may contract when you foreclose a set of possibilities. You unlearn, go back, change course. The tango lasts.


It’s impossible to throw out all we know at the beginning and re-interrogate every line of code.

…you just have to pick one and start checking.

Patience is what we need, as we methodically unpack our worldview from the inside out.


This past week I spent a good part of two days wrestling with a broken acceptance test in a somewhat unfamiliar part of code where my initial assumptions misled me from the start.

The test was written to observe a state change by simulating a click on the first menu item in a dropdown, which would flip the disabled state of a Purchase Button elsewhere in the container component. That assertion was no longer passing. Because my new code had changed the source of the initial values for the dropdown, I had set my sights on determining whether that source data in the test was wrong.

Sigh, that was an incorrect assumption. I spent a bunch of time interrogating these source values in the test setup for the dropdown menu but in the end they were correct – my initial presumption and assumption (worldview) steering me into a void. Sucked.

After repeatedly playing with the test instructions and comparing to the outcome in the browser, I discovered the click action was doing nothing: a test setup step had already set the initial state for the Purchase Button to the first value of the dropdown menu, so the click didn’t actually change the value in the test (I was finally able to repro this in the browser). There was no state change to observe. In fact, and this is the best part, the test wasn’t needed at all. My code change inadvertently exposed a test that was applying false assumptions to the code.

Lol, how much of our software is layer cakes of fallacious worldviews???

Pre-crude development

Tags: legacy code
6/19/2019

I caught this tweet by Ruth Malan yesterday which references the “tension between continuous evolution and product instability”. This is a tension we hold as software engineers daily. It’s the source of these “trade-offs” that we often talk about.

At the start of my career I thought that longer-lived codebases would be more manicured and predictable than the embryonic Rails app I was working on. I presently work in a legacy codebase (like 10+ years running), and I’m not really sure it’s much more stable than that 3 year-old startup experiment. Sure, there are fossilized pieces of code in corner tar pits that have run largely unattended for years which we can ignore, and beautiful code abounds. But I find myself often in labored negotiation with code in-between those ends, code that has a mix of pieces written a year or two ago, and then some new additions in the past few months. We might call this code pre-crude: pressurized compounds with some new raw material shoveled in – forming. These are the points where I struggle with the tension the most. There are just enough layers of abstraction, code reuse, shared responsibility, etc… – tensile strength – that changing the code isn’t easy. Any change could destabilize things, like expose bugs, i.e. break existing functionality.

So the day-to-day discipline in legacy code is to wiggle within the tension: add/change behavior, but also support all existing behavior. jessitron quotes in a recent blog:

Every new feature comes with the invisible requirement: “and everything else still works.”

This is a wild task in pre-crude places!

Too many imports

Tags: javascript, react, refactoring
5/11/2019

There are too many imports in this file. I’m staring down like 50 lines of imports. From external libraries (React, Lodash, etc…), to our own internal libraries, to components, to helpers, to constants. The last bothers me the most because constants feel like implementation details I want hidden away in those other components or helpers. And as I’m looking at this statement inside a render() block:

const isSoldOut = this.props.statusType == SOLD_OUT;

I’m suddenly reminded of what Kyle Simpson told me on twitter a couple weeks ago:

“in ‘functional programming’, we love and embrace functions! functions everywhere!” – @getify

That’s it, that’s my out. We can refactor this down to a data utility.

const isSoldOut = ({ status }) => status === SOLD_OUT;

Or we might even abstract this into a general utility if it makes sense:

const getTicketStatusType = (ticketProps) =>
  STATUS_TYPES_MAP[get(ticketProps, "status")];

While these functions alone are not decreasing the number of imports, I’m perhaps doing a few other useful things:

  1. Cutting down the number of imports from different files
  2. Creating a reusable abstraction – which I know can be employed elsewhere
  3. De-cluttering render() which makes integration tests simpler, possibly unnecessary for some cases since logic can be independently unit tested

That last point is important. At least to me. Untidy render() blocks are hard to scan, debug, and test. The machinations for constructing and composing the things to render can happen in other places: above the component class definition, in class methods, or in a separate file. The first option is one I quite like because abstractions don’t have to be all that general. It’s great if they are localized to the component at hand.
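
A quick sketch of that first option, with invented names:

import React, { Component } from 'react';
import { SOLD_OUT } from './constants';

// Localized abstraction: lives just above the component it serves and can
// be unit tested without rendering anything.
const isSoldOut = ({ statusType }) => statusType === SOLD_OUT;

export class TicketBanner extends Component {
    render() {
        // render() stays scannable -- the logic has a name, not an expression.
        return <span>{isSoldOut(this.props) ? 'Sold out' : 'On sale'}</span>;
    }
}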

Starting a new blog and jumping right into an article I read about dependency injection using function parameters

Tags: python, design patterns
4/10/2019

Read Ensuring Clean Code: A Look at Python, Parameterized this morning. Parameters as DI! So yeah, now that we have default parameters in JavaScript – as of ES6 – we can think of parameterized DI for JS too, right there in our function declarations. This is super familiar if you’re coming from JS where funcs are first class.
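
A minimal sketch of the idea, with invented names:

const fetchJson = (url) => fetch(url).then((res) => res.json());

// The dependency rides along as a default parameter...
const getUser = (id, request = fetchJson) => request(`/api/users/${id}`);

// ...so a test can inject a fake with zero framework ceremony:
// await getUser(42, async () => ({ id: 42, name: 'Test User' }));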