• poetry

    With each new day it doesn’t rain I lose hope about the future of our species. I have a good performance review this week. Adding some lemon juice and gochujang to this breakfast soup was CORRECT.

    The slap heard round the world is LAPD riot cops descending on climate scientists in battle armor. Fuck the police. We must stop them.

    Coding adapters for payment gateways is also not so easy! I’m in love with programming books that have a smooth narrative arc. Describe the problem in all its complexity, with many a dash of subjectivity. There’s no right way! Only what’s CORRECT.

    Wednesday April 20, 2022
  • habitations

    Redis Poem

    set things
    set them to expire
    set them to be exclusive
    set them to expire, be exclusive

    (inspired by a chat with mike b)

    Saturday January 8, 2022
  • habitations

    Well-known ways that JavaScript coerces objects to strings

    It’s a proper cliche of commercial computer programs to bind audit reporters alongside code at important relay nexuses. There is a panoply of reasons to extract information this way: producing audit trails for legal compliance, gathering product insights, collecting debug traces for quality monitoring, etc… The trick is wresting useful emissions from these reporters when they are working within a dynamically typed language that might unabashedly take liberties on ambiguous data types – like JavaScript. Typically you design your loggers to capture a Good-Enough Facsimile™️ of the current state of the world for later analysis[1]. This likely means grabbing the subject of the current operation as well as felicitous supporting actors like application and session context, and user info, in Object form. (Already a funky transmigration of Subject/Object). Oftentimes the subject is the user – we love to know you, and know to love you. But, if we may clasp hands and walk/skip through a hallucination together into a fantasy e-commerce example, we can explore logging and value coercion of a familiar Order class. What would a blog about building websites be without a bit of retail market magic?

    Let’s start with the following modest invention which terraforms[1] a theoretical backend piece of an online shop. And let’s imagine that from time to time, orders that come into this shop become stuck during the fulfillment process – perhaps for a variety of reasons, because we are dealing with, say, medication items: insurance claims may be rejected, or the doctor or pharmacist discovers a conflict between the patient’s health attributes and the meds. In such a scenario, this code attempts a fully-automated retry of the stuck order, and in due course, at important nexuses, passes relevant contextual data to log handlers. From there other bots or real-life persons can take over.

    const oldestStuckOrder = await query(orders, { filter: 'oldest_stuck' });
    const logData = {
    	order: oldestStuckOrder, 
    	userInfo: { ... },
    	sessionInfo: { ... },
    };
    logger.info(`Start resolving stuck order: ${logData}`);
    const reason = determineStuckReason(oldestStuckOrder);
    
    if (reason) {
    	const result = resolveStuckOrder(oldestStuckOrder, reason);
    	const logDataWithResult = {
    		...logData,
    		reason,
    		result,
    	};
    	logger.info(`Finished resolving stuck order: ${logDataWithResult}`);
    } else {
    	const logDataWithReason = {
    		...logData,
    		reason,
    	};
    	logger.error(`Unable to resolve stuck order: ${logDataWithReason}`);
    }
    

    There are three points in this code where I’m sending a rich object into a logger service which only accepts a single string argument. In this fantasy, this particular team of software developers just wants the logger service API as simple and straightforward as can be: unary and stringy. I hope this code reads naturally, that it’s similar to something you’ve seen before. And…hopefully your gears are turning already and you are starting to see with me 𓁿; and you are beginning to feel awkward 🙇🏻‍♂️ about what representation for oldestStuckOrder or logDataWithReason this gussied-up console function will display. Won’t interpolating complex objects inside a template string force the engine to implicitly coerce the object into the obstinately churlish [object Object]?

    Scene opens, your PM marches up to your desk with a bemused frown:

    PM: What happened with Order #555-5555?
    Me: The problem is that this order got into an illegal state!
    PM: Which order?
    Me: Oh, the object object order.
    PM: 😒

    JavaScript is funky-beautiful because the dynamic typing nature of the lang means you can smush values of mismatched types into the same context and your program won’t catastrophically fail when you run it. Loose assumptions loosely held, I guess. When we write JavaScript, we often take this for granted, and we exploit it for good. You probably know and understand intuitively or consciously the bunch of common “contexts” where the language affords this failsafe. Here’s a general breakdown:

    • expressions using arithmetic operators
    • comparison operators
    • test expressions (if statements, the second clause of a for loop, the condition of a ternary operator)
    • interpolated strings.

    That last one is a case in point; we can send an object into a string context – Finished resolving stuck order: ${logDataWithResult} – and get something workable out the other end:

    const logDataWithResult = {
      prop: 'prop',
      anotherProp: 'anotherProp',
    };
    console.log(`Finished resolving stuck order: ${logDataWithResult}`);
    // Finished resolving stuck order: [object Object]

    And there it is. Workable (quite generously). The famed proterozoic, bracketed notation of familiar churlish conceit and “bad parts” motifs. Obviously this is not the best guess we hope the engine will make when executing our logging function – we have lost all that rich order and user data! Our compliance trail is meaningless. But we shouldn’t despair quite yet. I’m happy to share that JavaScript exposes many an API for developers to control the return value of type conversions. If JavaScript is anything, it’s a fairly open language. Not necessarily open for expansion to the extent of, say, Clojure’s macros. But all things being mutable objects (with a few untouchable properties) and a handful of scalar values, you have a good deal of freedom. For coercing objects to string, the most famous method is probably toString(). In fact, JavaScript dogfoods its own toString() for object -> string conversion automatically in all those common contexts listed above. Whenever

    the object is to be represented as a text value or when an object is referred to in a manner in which a string is expected – MDN contributors

    Like between backticks.

    Now, if a fellow developer in our dreamy medication retail shop codebase has not already come along and monkey-patched the Order object’s toString() method, the default conversion algorithm rules defined for Object.prototype.toString() from the ECMAScript® 2021 spec section 19.1.3.6 will kick in. Yep, we are going there. The algorithm is actually pretty easy to understand (though here’s to hoping your browser of choice plays along!). I invite you for a glance:

    Can you see where [object Object] comes from? If the unfamiliar dialect is a bit intimidating for this casual Sunday morning read, here’s what the above algorithm would look like if we implemented it in JavaScript[4]:

    import { SecretInternals } from 'secret-internals';
    
    const internalSlotsToTagMap = {
      ParameterMap: "Arguments",
      Call: "Function",
      ErrorData: "Error",
      BooleanData: "Boolean",
      NumberData: "Number",
      StringData: "String",
      DateValue: "Date",
      RegExpMatcher: "RegExp",
    }
    
    function toString(value) {
      if (value === undefined) return '[object Undefined]';
      if (value === null) return '[object Null]';
    
      let builtinTag;
      const innerValue = SecretInternals.Object.box(value);
    
      if (SecretInternals.isArray(innerValue)) {
        builtinTag = 'Array';
      } else {
        for (const [slotName, tagName] of Object.entries(internalSlotsToTagMap)) {
          if (SecretInternals.Object.hasInternalSlot(innerValue, slotName)) {
            builtinTag = tagName;
          }
        }
      }
    
      if (!builtinTag) {
        builtinTag = 'Object';
      }
    
      // Step 15: the spec consults the well-known @@toStringTag symbol property
      let tag = SecretInternals.Object.get(innerValue, '@@toStringTag');
    
      if (typeof tag !== 'string') {
        tag = builtinTag;
      }
    
      return `[object ${tag}]`;
    }
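
    To see the algorithm’s output shape directly, you can call the built-in against a few values (results per the spec):

    Object.prototype.toString.call(undefined); // '[object Undefined]'
    Object.prototype.toString.call(null); // '[object Null]'
    Object.prototype.toString.call([]); // '[object Array]'
    Object.prototype.toString.call(new Date()); // '[object Date]'
    Object.prototype.toString.call({}); // '[object Object]'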
    

    For actual objects, not object-like things (Arrays), the algorithm falls through to step 14 where a temporary referent called builtinTag receives the value Object. This built-in tag is later used as the second part of the converted argument value.

    Despite the sarcastic jabs from the peanut gallery, what else would we expect the language to do? JavaScript’s unintentional emissions were designed for a platform that projects them through UIs for consumption by masses of retinas of human people – it’s Good Enough. The language keeps your program running with a type guess and leaves the contents of your value alone. It doesn’t, like, radically unpack and serialize your contents to JSON (what compute or privacy costs might lurk!); or try to set the built-in tag to the left-hand side of the last variable assignment statement: what a drastic move, the language doesn’t work this way: variable names are not synonymous with types! Variable assignment is void of any tautological binding! JavaScript just lets you take matters into your own hands.

    Until very recently I wasn’t aware of any techniques beyond an Object’s built-in toString() property to futz around with type conversions. But apparently there are a few modern well-known Symbols[5] that have entered the language to help. I’ve compiled an example with four different APIs I could track down. Any of these would play nicely for logging libraries, though the last only works in node, and there are nuances to each that you must consider for your use case.

    // 1. Override toString() (js, node)
    class Chicken {
      toString() {
        return '🐓';
      }
    }
    
    var chicken = new Chicken();
    console.log(`${chicken}`); // 🐓
    
    // 2. Symbol.toStringTag (js, node)
    class Chicken {
      get [Symbol.toStringTag]() {
        return '🐓';
      }
    }
    
    let chicken = new Chicken();
    console.log(`${chicken}`); // [object 🐓]
    
    // 3. Symbol.toPrimitive (js, node)
    class Chicken {
      [Symbol.toPrimitive](hint) {
        switch (hint) {
          case 'number':
          case 'string':
          case 'default':
            return '🐓';
          default:
            return null;
        }
      }
    }
    
    let chicken = new Chicken();
    console.log(`${chicken}`); // 🐓
    
    // 4. util.inspect.custom (node)
    const inspect = Symbol.for('nodejs.util.inspect.custom');
    const util = require('util');
    
    class Chicken {
      [inspect]() {
        return '🐓';
      }
    }
    
    const chicken = new Chicken();
    console.log(chicken); // 🐓
    console.log(`${chicken}`); // [object Object]
    util.inspect(chicken); // 🐓

    I haven’t formally surveyed this, but I can report anecdotally that of my current team of ~25 engineers who work on a universal JavaScript application, overriding toString() is the most commonly preferred strategy. My sneaking suspicion is that many JavaScript developers are not aware of the more contemporary Symbol methods/properties, even though these methods have been available in major browsers/node for ~4-5 years. Or maybe it’s simply a matter of many backend devs coming to backend JS from other languages and server environs. From what I understand, node has just started to finally emerge in the past few years as fit-enough for prod. JavaScript is vast territory, quickly expanding, in multiple runtimes – it takes years.

    As for nodejs.util.inspect.custom, I haven’t been around node land long enough to know if its usage is idiomatic.

    Still, preference for toString() may not simply be an issue of keeping up with the JS Joneses. As shown above, the outcomes of these different strategies are not identical. What’s more, to layer on the complexity, these options aren’t wholly independent. In fact, JavaScript combines these strategies together under the hood. Did you notice what was going on in step 15 of the conversion algorithm above? The spec requires that Object.prototype.toString looks up the @@toStringTag symbol property on the object – these symbol members are in the DNA sequence now. When we take control back, understanding the spec is quite key: we can avoid mistakes like combining these two options, since overriding toString() always takes precedence. For example:

    class Chicken {
      get [Symbol.toStringTag]() {
        return 'Base 🐓';
      }
    
      toString() {
        return 'This is a 🐓';
      }
    }
    
    class JungleChicken extends Chicken {
      get [Symbol.toStringTag]() {
        return 'Jungle 🐓';
      }
    }
    
    const chicky = new Chicken();
    const jungleChicky = new JungleChicken();
    
    console.log(`${chicky}`);
    console.log(`${jungleChicky}`);
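
    Both statements print This is a 🐓: the toString() override wins out over Symbol.toStringTag, even for the subclass that declares its own tag.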

    However, say I were interested in simply tagging the string representation of my object to avoid exposing its value contents, but still present a semantically rich identifier to consumers. This would let us keep the default bracket output – [object …] – with the prepended “object” type, maintaining some consistency with how objects are stringified in our code. In that case, leveraging the well-known Symbol.toStringTag property would be the way to go. For example, the following logger from our e-commerce imaginary might obscure private user data like this:

    // .../jsonapi/resources/user.js
    class User {
      get [Symbol.toStringTag]() {
        return `User:${this.id}`;
      }
    }
    
    // Somewhere else…
    logger.error(`Unable to resolve stuck order: ${logDataWithReason}`);
    // Unable to resolve stuck order:
    // {
    //   order: {…},
    //   userInfo: [object User:123456],
    //   sessionInfo: {…},
    // };

    Your next option, empowering even more fine-grained control, is adding a Symbol.toPrimitive method to your object. Symbol.toPrimitive is a main line into the runtime’s coercion processing. After playing around a bit in a browser and a node repl, I’ve noticed that this Symbol will take precedence over a provided toString() override.

    class Chicken {
      toString() {
        return 'This is a 🐓';
      }
    
      get [Symbol.toStringTag]() {
        return '🐓';
      }
    
      [Symbol.toPrimitive](hint) {
        switch (hint) {
          case 'number':
          case 'string':
          case 'default':
            return 'This is a 🐓 primitive';
          default:
            return null;
        }
      }
    }
    
    const chicky = new Chicken();
    console.log(`${chicky}`);
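
    This prints This is a 🐓 primitive – the Symbol.toPrimitive method intercepts the coercion before either toString() or Symbol.toStringTag gets a look.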

    By using Symbol.toPrimitive, you’re basically instructing the runtime to LOOK HERE for all its object coercion preferences. What’s more, beyond Symbol.toStringTag’s mere label augmentation, you get a powerful indirection[2] to handle all primitive type coercion scenarios[1]. You’re also overriding internal language behavior which – I was surprised to learn – effectively resolves how to order the calls for Object.prototype.toString() and Object.prototype.valueOf(). Flip to section 7.1.1 of the spec to see how the ToPrimitive abstraction is designed to call a further nested OrdinaryToPrimitive abstraction for decision-making:

    Translated for comfort:

    import { SecretInternals } from 'secret-internals';
    
    function ordinaryToPrimitive(object, hint) {
      if (!SecretInternals.isObject(object)) {
        throw new TypeError();
      }
    
      if (hint !== 'string' && hint !== 'number') {
        throw new TypeError();
      }
    
      let methodNames = [];
    
      if (hint === 'string') {
        methodNames = ['toString', 'valueOf'];
      }
    
      if (hint === 'number') {
        methodNames = ['valueOf', 'toString'];
      }
    
      for (const methodName of methodNames) {
        if (SecretInternals.isCallable(SecretInternals.Object.get(object, methodName))) {
          const result = SecretInternals.call(object, methodName);
    
          if (SecretInternals.isNotObject(result)) {
            return result;
          }
        }
      }
    
      throw new TypeError();
    }
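
    To feel that hint ordering in action, here’s a small hypothetical object (not from the shop code above) coerced in two contexts – the string hint tries toString() first, the default hint tries valueOf() first:

    const order = {
      toString() { return 'Order #555'; },
      valueOf() { return 555; },
    };
    
    console.log(`${order}`); // Order #555 – string hint, toString() first
    console.log(order + 1); // 556 – default hint, valueOf() first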
    

    I think I like the idea of using these well-known symbols for custom object -> string representations, if for the collision protection alone.[3] What would it be like to reach for the powerhouse of Symbol.toPrimitive to hijack the runtime from eventually calling through to Object.prototype.toString()? Furtive, conspiratorial whispers to the interpreter 🤫. Even a partially implemented reducer will do, as I demonstrate in my chicken example above: the switch statement can gracefully sidestep other type hint cases and only target the string hint case. But is grabbing for Symbol.toPrimitive overkill? toString() is tried and true and a pristine greenfield function block without arrogant “hints” and a naive switch statement without pattern matching 🙄 (are we there yet?). Could there be a non-trivial DX cost of confusing other developers if the other primitive case statements are fall-throughs?


    [1]: Whenever I think about how software tries to capture its own understanding of the world it creates, I’m brought back to systems thinkers like jessitron:

    "We don’t expect the world to have perfect consistency. Yet we wish it did, so we create facsimiles of certainty in our software.
    It’s impossible to model the entire world. Completeness and consistency are in conflict, sadly. Still, if we limit “complete” to a business domain, and to the boundaries of our company, this is possible. Theoretically.

    [2]: By “indirection” I really mean to invoke Zachary Tellman’s exegesis of semantic drifts in software industry lexicons from his book Elements of Clojure. His work is a really nice refinement:

    Indirection provides separation between what and how. It exists wherever "how does this work?" is best answered, "it depends." This separation is useful when the underlying implementation is complicated or subject to change. It gives us the freedom to change incidental details in our software while maintaining its essential qualities. It also defines the layers of our software; indirection invites the reader to stop and explore no further. It tells us when we're allowed to be incurious.
    He goes on to discuss that conditionals are one of two primary devices to achieve successful indirections (the other being references). Conditionals are effective because they contain ordered, "closed" decision-making mechanisms that avoid conflicts; in contrast to tables with individuated keys.
    Conditionals solve conflicts by making an explicit, fixed decision. Where conflicts are possible, we use conditionals because they are closed.

    [3]: That’s the primary purpose of these Symbols. For a deeper dive, see Support util.inspect.custom as a public symbol #20821 on GitHub.

    [4]: Try not to think too hard about the implications of a JS interpreter written in JS. But, ya know, Atwood’s law; you will, not surprisingly, find JS interpreters for JS out there, many built on Acorn, which is, itself, a parser written in JS. Man alive!

    [5]: I’m bemused and betwixed by this use of “well-known.” Does anyone know the origin of this qualifier?

    Monday January 25, 2021
  • habitations

    Language ergonomic studies: Summing ranges and most frequent terms

    Pennies per day (by day count) in Clojure:

    (defn pennyPerDay [numDays]
      (reduce + (range 1 (+ numDays 1))))

    (pennyPerDay 30)

    Pennies per day (by day count) in JS:

    function pennyPerDay(numDays) {
      return Array(++numDays)
        .fill(0)
        .reduce((sum, _, i) => sum + i, 0);
    }

    pennyPerDay(30);

    Most frequent terms in JS:

    const termList = ['term1', 'term2', 'term3', 'term3'];
    
    function mostFrequentTerms(arr) {
      return arr.reduce((result, curr) => {
        result[curr] = ++result[curr] || 1;
        return result;
      }, {});
    }

    mostFrequentTerms(termList);

    Most frequent terms in a list with Clojure:

    (def term-list [:term1 :term2 :term3 :term3])
    
    (defn most-freq-terms [terms]
      (->> terms
           frequencies
           (sort-by val)
           reverse))
    
    (most-freq-terms term-list)
    Monday November 9, 2020
  • habitations

    Language ergonomic studies: 52 card deck

    Javascript:

    const cards = () =>
      ['♥', '♠', '♣', '♦'].map((suite) =>
        ['2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K', 'A'].map(
          (card) => suite + card
        )
      );

    Clojure:

    (defn new-deck []
      (for [r [\♥ \♠ \♣ \♦]
            s [:2 :3 :4 :5 :6 :7 :8 :9 :10 :J :Q :K :A]]
        [r s]))

    💻

    Monday October 19, 2020
  • poetry

    Reading my streaming fancies with Le Guin's Carrier Bag of Fiction

    Why do I have such a big attraction to teen fictions (like _Never Have I Ever_)? For a while I've been thinking it was the force of a stream, a soak, a wash of saudade made lighter by revisiting that time from a better place; a chance to relive with empathy for myself, to reclaim that time from the incapacitating fear of being vulnerable, navigating _coolness_ in the 80s/90s (a now more mainstreamed cultural topic since all storytellers are approaching their strength ~40). The journeys of these post-modern (mostly young women) protagonists through insecurity, heartbreak, the web of complex material and digital norms and hegemonies; the care they find for themselves and other people, with other people, emerging whole-enough with their collective--family/friend/mentor nexus; learning to survive the patriarchal-colonial system, becoming disloyal, anti-heroes. It's all a differing sort of story from the "prick tale", where the Man-hero takes up arms against foe, real or imagined, creates the world in his vision; fealty to a cause which is his own desire-machine; the desire of the First Man he has inherited through callused, painfully tight handshakes through time.

    But the feel-good feel goes beyond. I think something bold happens in these stories.

    I’m a bit new to Le Guin’s project of composting our myths but you can’t help but feel new sanguine cultural flow from the advanced logo-ero she develops in The Carrier Bag Theory of Fiction.

    If I am allowed a primitive literary-criticism gloss: what is picked up along the way is the misinformation and guidance from elders (both useful to (re)combining a new narrative of self), the wrought learnings from misfired expressions of need (for “boys”, peer acceptance, spiritual stability), the new kinds of love from parents and friends that evolve from changing circumstances. It seems especially apparent in the serials. Each episode there’s a satisfying build, a paced emergence that’s lovely because the wins are not blood-soaked triumphs. Transitions, evolutions, forks, back-tracks, shifting winds, climbs, descents; it’s a kind of Sunday hike through the woods with the Fellowship of the Ring in peacetime. Not the “linear, progressive, Time’s-(killing)-arrow mode of the Techno-Heroic” (Even the fantastical dragon time appropriations of Percy Jackson still fit the prick), but story that describes “what is in fact going on, what people actually do and feel, how people relate to everything else in this vast sack, this belly of the universe, this womb of things to be and tomb of things that were, this unending story.” Unending…but not Neverending. (Though perhaps a re-reading of Bastian’s adventures is due now that I’m starting to finally put things into this sack (of a blog).)

    Motherland: Fort Salem and Cobra Kai kinda fuck with all this too. Prick tales recast and modulated are problematic. They can be awesome and interesting because they don’t totally abandon tete-a-tete action (which, when crafted through martial arts is personally gripping (especially for an amateur artist of the martial myself)); and yet, these imaginative alternate worlds and thoughtful respun contemporaries can still carry out seditious ploys. Maybe we can think about Motherland and Cobra Kai straddling prick tale and carrier bags as tactics and technologies of xenofeminist adaptations, calibrations, speculations that transform through “seeping, directed subsumption”; since there is no pure refuge from patriarchy and eyeballs of grief-stricken viewers, anyway. Liminal bacteria waved through by situated materialist praxis. I couldn’t reject Cobra Kai whole cloth even though much of the character development of the teens is wrought through pugilisms (everyone literally joins a dojo); its softenings and care for men feel valuable when toxic masculinity runs rampant in the culture. Also, I wonder about the dojo. The split lips and bruises that result from Johnny’s tough, mangled love create openings; the kids can take more; take in more; become spacious (punching) bags. Face hits could be a technology of transformation when there are limited options for situated emergence in the hyperreal suburban desert of early 21st century SoCal. How big of a purse do we have here??

    The immune system, rather than being imagined as a force of protective soldiers made by the host, can be thought of as a group of passport control agents and bouncers. They know who to let in and who to keep out–and they have learned this through millions of years of evolution.

    Holobiont by Birth: Multilineage Individuals as the Concretion of Cooperative Processes, Scott F. Gilbert

    Delighting in teen fiction is not a trifle. “It matters what worlds world worlds,” reminds Haraway. And of course, a most important reminder from Le Guin, lest we believe the prick tales (which, if they are not careful, could cause major issues like exponential inequality, and/or loss of ecological balance): “People who deny the existence of dragons are often eaten by dragons. From within.”

    Sunday October 4, 2020
  • habitations

    Zoom Doomed

    Not a day goes by when I don’t witness the over-signification of “meeting”. “Meeting” invokes a collective sigh for laborers, since a better way seems a fairy tale. As a consequence, possibilities for elevated communication are damaged. But a meeting is so much more:

    Calling the coordinated exchange of messages or moderated access to shared material (like a blackboard) a conference or meeting seems to neglect a factor that is rather important in meetings. Often the explicit exchange of messages is relatively unimportant compared with the development of social relations in the group of participants that happens simply by the fact that they are close to each other as humans for a while.

    Perspectives and Metaphors for Human-Computer Interaction, Susanne Maaß and Horst Oberquelle

    Thus, we may be Zoom doomed. Perhaps. We need to hire facilitators and train our people in facilitation. It’s a technical skill that most technologists lack, profoundly.

    Wednesday September 30, 2020
  • habitations

    Not so byzantine algorithm studies: Using math to deliver your medication

    For many commercial software developers like me, math plays a sporadic role in day-to-day work. That said, I certainly wouldn’t blithely demote knowing math below other techs like programming languages, web frameworks, and principles of software design/architecture; which already sets up a false dichotomy, anyway.

    Math presents a beautiful core proposition to software developers. Through its powers of abstracting and reducing real-world dynamics into expressible, repeatable, sequential logic – which can be lifted easily into our favorite coding grammars – math can be the companion dynamo we need for achieving levels of faithful verisimilitude for real world complexities (Keep It Savvy Stupid).

    Italicizing knowing math is totally an intentional problematization to mess with the idea that knowing math is narrowly scoped to demonstrating technical prowess of calculation or proofing. I think software engineers can, and should, consider a relationship with math as knowing enough math; like, an ontological relationship with math’s formalism and general principles. Math The Good Parts, so to speak. Otherwise the bravado of equating math and problem solving or math as “hard” (dick) science of some kind is doomed to engender an axiomatic blur that will gate keep devs who are not deemed fit for lack of educational background or interest in fiddling with greek letters. Keep these gates wide open, por favor!

    (I also think the focus on calculation in my early schooling, alienated from everyday material uses, made me quite Meursault for it – perhaps why I don’t feel the need to champion for being mathy. It took entering commercial software life much later in life to see how the dynamo helps us make stuff and therefore able to consider embracing math (and being embraced by math) again.)

    Our praxis: we can coopt knowing math to ramify similarly to our desire for Senior Engineers™ to be T-shaped with their knowledge of the stack in order that they can press more heavily on architecture and design. For example, we can desire for React developers to:

    • Understand just enough graph theory to see component architectures as graphs;
    • Understand the implications of data pipelining and message passing in a graph (bonus points for being a student of Christopher Alexander's semilattices)
    • Or understand just enough category theory to understand the heritage and benefits of composability of first-class functions – is JS even fun without a bit of FP? What else?

    This is all just to say that some business problems map really well to higher-order maths and devs should be prepared for it, at least through the design phase. Again, once we sit down to write code, we can easily offload the fastidious implementation details; those few lines of copy|paste from the math deep state.

    For example, take a common e-commerce retail problem like Distributed Order Management (DOM). Modern omnichannel selling combined with cloud computing open a broad field of possibilities to achieve low-cost order fulfillment. Affine cost structures – resulting from variable/variadic shipping dynamics – will increase complexity as an ecommerce company scales. We’re talking about quite a bit of real-world complexity to model and adapt to.

    What, then, when we’re tasked to create an algorithm to satisfice a version of this problem where you have these rough requirements:

    1. Orders must be completely fulfilled
    2. There will be variant shipping costs per supplier
    3. Orders can be fully splittable by a supplier

    This kind of thing will break your brain once you start to grasp for a rational boundary to contain the argument ramifications: supplier availability, supplier inventory availability, etc…

    I was presented with a challenge like this recently and it took me a couple hours just to understand what this problem domain was; like, find my way to DOM through off-by-one google searches; a true StumbleUpon revival. Because I didn’t know the math well-enough yet. Google is pretty fast though, and within minutes I practically broke out in a sweat after discovering multiple long-winded computer science papers written on the topic filled with intimidating mathy notations. Then more OMGs and mon dieus as I careened sharply into Cantor’s theorem and Set Theory. Wait, am I doing math? Oh Meursault!

    More and more it seemed a fantasy that I’d be capable of solving Amazon within a reasonable amount of time because this gestalt was increasing aggressively. Nonetheless, after some deep breaths and patience recoup and having worked with enough devs without math experience, I began to acclimate to this world and realize any solve was going to be an approximation of some sort and probably wouldn’t require me to crack open a text book; there was no silver bullet or excavation of secret proofs. Rather, this whole class of optimization problems applied to fulfillment is a rigorous academic field, but when it meets the metal it softens and warms for Good Enough™️ programming. Each potential brute force linear assignment or dynamic programming algorithm was discarded. My inputs couldn’t be structured into a decision table or cost matrix amenable to path finding, traversal, or Cartesian products. Which meant I could rule out potentials like Kuhn’s Hungarian algorithm. In fact, the scope was something more akin to a set cover or networking problem – still a brave new world, but less and less unbounded the more I scoured the web. Ultimately, my task was gonna be something toward imagining all probabilities between order items and suppliers, and then reducing these matches against cost constraints. “All probabilities” was a strong clue.

    So, a bit surprisingly for someone not used to needing math every day – and certainly not trying to fuck with this:

    [Figure: constraint equations for the “fully splittable” case from one such paper – Eq. (5a) ensures every order is fulfilled, and (5b) limits service provided by a store.]

    I soon found myself drifting at a comfortable altitude through Probability and Combinatorics with the help of other cow path pavers – we are community-taught developers, after all! – crafting a studied, yet heuristic, approach from where I could thread my inputs through techs like combinations and permutations to make some educated guesses.

    The general step-rules of the algorithm gradually accreted into something resembling the following:

    1. Generate all possible combinations of order items
      • Order items are unique, therefore we are working with a Set. We can therefore use a mathematical definition of a Powerset and create a function which outputs a set of all subsets of any set S (a sketch of this helper and the next one follows this list):
      powerset([A, B, C]) === [[A], [B], [A, B], [C], [A, C], [B, C], [A, B, C]];
    2. Generate all possible combinations of combinations of order items that are less than or equal to the number of suppliers
      • Effectively take the result of Step 1 as the input Set for another powerset that only returns combinations of order item splits that can be fulfilled by available suppliers. For two suppliers:
      powersetBySize(powerset([A, B, C]), 2) ===
        [
          [["A"]],
          [["B"]],
          [["A"], ["B"]],
          [["A", "B"]],
          [["A"], ["A", "B"]],
          [["B"], ["A", "B"]],
          [["C"]],
          [["A"], ["C"]],
          [["B"], ["C"]],
          [["A", "B"], ["C"]],
          [["A", "C"]],
          [["A"], ["A", "C"]],
          [["B"], ["A", "C"]],
          [
            ["A", "B"],
            ["A", "C"],
          ],
          [["C"], ["A", "C"]],
          [["B", "C"]],
          [["A"], ["B", "C"]],
          [["B"], ["B", "C"]],
          [
            ["A", "B"],
            ["B", "C"],
          ],
          [["C"], ["B", "C"]],
          [
            ["A", "C"],
            ["B", "C"],
          ],
          [["A", "B", "C"]],
          [["A"], ["A", "B", "C"]],
          [["B"], ["A", "B", "C"]],
          [
            ["A", "B"],
            ["A", "B", "C"],
          ],
          [["C"], ["A", "B", "C"]],
          [
            ["A", "C"],
            ["A", "B", "C"],
          ],
          [
            ["B", "C"],
            ["A", "B", "C"],
          ],
        ];
      
    3. Generate all permutations of suppliers
    4. Generate all viable routes by matching the sized combinations of order items (result of Step 2) to supplier permutations (result of Step 3)
      • Basically a fancy zipping computation
    5. Filter viable routes against both superficial and business constraints like duplicated suppliers and supplier availability and/or inventory
    6. Compute the lowest cost route!
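
    As promised above, here’s a minimal sketch of the powerset and powersetBySize helpers assumed in Steps 1 and 2 – one possible implementation, not necessarily the one I shipped. The empty set is filtered out to match the example outputs:

    // Every subset of a set, minus the empty set
    function powerset(set) {
      const subsets = set.reduce(
        (acc, item) => acc.concat(acc.map((subset) => [...subset, item])),
        [[]]
      );
      return subsets.filter((subset) => subset.length > 0);
    }
    
    // Combinations of combinations, capped at the number of available suppliers
    function powersetBySize(set, maxSize) {
      return powerset(set).filter((subset) => subset.length <= maxSize);
    }
    
    powerset(['A', 'B', 'C']);
    // [['A'], ['B'], ['A','B'], ['C'], ['A','C'], ['B','C'], ['A','B','C']]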

    Now there’s a well-suited mathy modeling!

    Some other thoughts

    Combinatoric applications for this algorithm are quite expensive: you can see how the cardinality flourishes pretty fast above, and my example is for a modest 3 order items and 2 suppliers. If those numbers increase by any measure CPU will be tremendously exercised. (I believe the runtimes of these functions are exponential.) I can see why optimization becomes an attractively ripe apple for academicians. Quickly glossing, I can imagine optimizing the looping functions to break before completion when satisficing within a range of acceptance criteria; or structuring the data as Iterators and/or piping through transducers to minimize space complexity with lazy or eager techniques.
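
    For instance, one lazy direction (a sketch of the idea, not code from the project) is to generate subsets on demand with a generator, so a consumer can break as soon as a route satisfices and untouched combinations are never materialized:

    // Yields each non-empty subset of `set`, one at a time
    function* lazyPowerset(set) {
      for (let mask = 1; mask < 2 ** set.length; mask++) {
        yield set.filter((_, i) => mask & (1 << i));
      }
    }
    
    for (const subset of lazyPowerset(['A', 'B', 'C'])) {
      if (subset.length === 2) break; // stop early; later subsets are never computed
    }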

    By the way, JS has pretty underwhelming options for combinatorics. I found one library that was a bit awkward to use and ended up ditching it in favor of a few standalone implementations of powerset and permutations so I could ensure the code would comply with how I was trying to express the above heuristic. Unsurprisingly, Python’s itertools has combinatoric functions built in and even provides recipes for common tooling you can build on primitives like permutations() and combinations(). For example, powerset():

    from itertools import chain, combinations
    
    def powerset(iterable):
        "powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)"
        s = list(iterable)
        return chain.from_iterable(combinations(s, r) for r in range(len(s)+1))
    

    Or simply import the blessed more-itertools library. Of course, this is expected for Python which is heavily developed for the data science world.

    Wednesday September 9, 2020
  • habitations

    The loss of logical purity primacy

    Femi Ogunbanjo & Hanne Klintoe, 1999, The Loss of Sexual Innocence

    Back in February, my entire notion of expertise and how experts make decisions became cracked after listening to episode 169 of the Greater than Code podcast. While the podcast is rolling I’m discovering my socialized construction of expert – like we pump and grow some kind of muscled query power of an expanding brain-database over time. And that combined with a trained analytical rigor gained through extensive tutelage, study, and practice.

    I’m struck by how neo-classical cyberpunk-capitalist fantasy that notion sounds: a techno-positivist outcome of brain as database; the privileging of expertise as accumulation. The time-value of memory perhaps.

    It’s kind of wild: wondering about what sort of instinct decision-making really is in concrete terms. Thankfully we have the >Code crew to help us navigate Decision Theory from the safety of our own home. The podcast panelists offer a different notion attributed to Gary Klein’s work Sources of Power: How People Make Decisions. Rein’s paraphrasing of this more probable notion for quick decisions in time-pressured situations in our business environs:

    Expertise is an act of rapid mental simulation without formal analysis based on previous experience. It’s more akin to “instinct” than a cartesian scientific methodology.

    Gary Klein calls this model “recognition primed decision” or RPD. The key is that over time and through experience we continue to enhance our instincts around familiar problems, and replace outdated versions that don’t make sense in our newer model of the world. No need to increase disk/head space! Then, when facing a quick-decision moment, we can make strong guesses. Informed guesses, not overly formal; certainly not grooved to dreams of electric sheep or Moore’s law or, by necessity, scholastic architectures of cognition. But not totally informal, either; we are making smart connections.

    I find RPD comforting as a way to rethink my role as an expert despite fewer than a decade practicing my trade: a sophomoric “mid-level” software engineer. While RPD reveals the distinctions between less experienced and more experienced people – notably the latter’s exceeding alacrity to arrive at a first course of action – RPD is about evolving your pattern matching, identifying the least risky choice, mentally modeling/imagining outcomes (see Variations).

    Embracing an RPD notion of expertise can ramify more permissions for less skilled SWEs to face spontaneously emerging decision-making moments in the workplace with less imposter syndrome and tougher felicity. Fellow career middlers, if you have an idea, basically just trust your powers. RPD also beseeches us to expel myths like (quickly riffing): Principal and Staff technologists have reached their eminent station by putting in the time to refine their powers of inspection, analysis, interpretation, evaluation; they’ve amassed all the good frameworks, methodologies, and systems into their memory bank; and decisions they make are conclusive (foreclosing, narrowly, the scope of possibility) because of this special toolkit which is wielded through an infallible reason.

    Reason and logic might factor into RPD mental simulation, but there’s a primacy of imagination and knowing deep down. Strength of will even?

    I am enough. My powers to decide can be totally excellent.

    Of course, that doesn’t mean my first guess won’t be totally wrong. But hopefully I’m not asked to fix something urgent that far exceeds my bounded rationality of this domain; which might ramify harm on real people if the mark is missed, goddess forbid and forgive.

    I did some additional reflection with the >Code community in their Slack on why I was feeling relieved by Klein’s ideas (Jessica Kerr hearted this so I’m feeling proud to share!):

    Thanks. This really struck me because I think I have internalized some idea that experts have these academically rigorous approaches to problems where they draw on frameworks and data and tools blah blah to come to conclusions.

    Then Rein chimes in:

    Yeah, the analytic decision-making mode is what we are taught, but it’s not what people actually use, especially in high-pressure situations.

    The me again:

    Oh right that was the other piece. [Making decisions under] high pressure. Which in corporate culture feels like all the time a lot.

    Then some other folks tried to pivot the conversation to talk about how web tech is not really that high “pressure” because bad decisions don’t kill people or necessarily tank the business immediately. So I continue:

    I understand where y’all went with that idea of “high pressure.” But even without the kinds of excess you describe — which I’m grateful not to feel at my (fairly spread out) 1000 person company — I still feel like I’m caught up in a heavy stream; not in a surprising or terribly uncomfortable way; but like a tireless machine of ambient conversation between me and my work and my coworkers that is always humming. So maybe “high pressure” in the sense of constant. Day to day I feel coerced into what feels like quick/on-the-spot decision-making, and always on guard for such; and that’s definitely challenging for me.

    Like, I’m a developer, and I have to make a ton of small decisions all day long when coding. Even medium-sized decisions. (Like the other day deciding whether to coordinate on a revision of a standard component with a team in Spain with similar needs, vs plowing ahead myself because I knew that process may take weeks and I needed output in days). And often these decisions have to happen within a matter of minutes because of the above normal buzz. I think I remember there was another episode where folks talked about how developers make more decisions than CEOs on any given day?

    I was finding comfort in this idea from Gary Klein because I can consider whatever my instinct draws on in that moment to be a practice of expertise. I can feel like an expert; and without any added insecurity for a lack of analytical rigor, if I had to explain my decision. I usually have confidence in my decisions, but I didn’t have a framing of it as expertness. A framing I was likely lacking, as Rein said, because I’ve been taught a dichotomy between analytical thinking and instinct, where there actually isn’t one; at least as I understand the idea without having read the book yet. But as someone relatively new in the industry — just around 6 years building web — being able to come into expertise or see it forming in my behavior is really nice.

    Earlier today, delving through Software Development and Reality Construction, I was delighted to discover another rebuke to my previous ideas of expertise as analytical reason qua database. It smacked of Klein and I wonder if Reinhard Keil-Slawik is influenced by Klein – or conversely – or perhaps they swarm with each other in that world of systems thinking, cybernetics, constructivism, etc…

    From Keil-Slawik’s essay Artifacts in Software Design:

    [Image: excerpt from Keil-Slawik on insight as enriched understanding]

    Keil-Slawik argues that expertise – ie “insight” – is an enriching of “understanding.” Understanding being a cognitive structure derived from the encounter with our environment which mashes up phenomena (physical, etc…) and circumstance. It’s not random access from a brain-as-database that has been populated with arrangements of stuff from “out there” – outside ourself/mind – (previous cognitive structures); as if insight is just a result of recall or trial; that smacks of an easy slide into decision paralysis.

    Rather it’s:

    [Image: further excerpt from Keil-Slawik’s essay]

    This notion seems to gel well with RPD. I wonder if Klein would welcome this cognitive science to explain how we do decisions.

    I’m tempted to accept this explanation pro-tem since the thought of unbounded accumulation of stuff in the brain, and lookup at exponential runtime, doesn’t satisfactorily explain the reliable guesses of experts under time pressure. This idea finds kinship with Nora Bateson’s characterization of sensemaking as Warm Data collection:

    “For some reason, you know what it would feel like to put your tongue on the rug — you almost know what it would taste like. You have been gathering the contrasts, comparisons, and differences that give you that sensing since you were born, maybe even through the sense making of prior generations. You have no files for those data.” 🚀

    Keil-Slawik and kin are proposing the idea of human expertise as an individual construction of meaning, as well as a co-constitutive emergence of our own body/mind wisdom and the “external memory” of symbols and tools we work with. The latter is an important point because it reinserts the importance of mnemonics as Turing machines, algebra, languages, frameworks, etc…. This affords us the benefit of a tradeoff: speed of resolution over perfectness. The example Keil-Slawik gives is of the ape who, if taught once to put sticks together to reach a banana, will subsequently devise similar means of combining objects to get the banana “without any hesitation…and any similar situation.”

    Monday September 7, 2020
  • poetry

    The awkwardness of downloading concurrent, asynchronous academic threads around 2003/4

    Halfway through _Companions in Conversation_ and I'm having this memory of being in college -- clacking away code points on the cheap silicons of a Micron computer -- practicing new knowledges in my class essays just like Haraway admits while writing the _Cyborg Manifesto_: "trying out some of the knowledges that hadn’t been mine that I was getting from my colleagues _[professors (in my case)]_ and the graduate students in the program." I remember deploying the token "implosion." I probably got it from Haraway.

    And yet – perhaps a small singe of regret in my tummy neuronics, and deja vu – this word play was happening while I was coming to grips with the nightmare material of Black and Brown people in America through black feminist and black feminist incarceration studies. Our class journaled, and so I produced heartbroken journaling about the new Jim Crow (before Michelle Alexander would be able to corrupt the nation), taking some liberties with text. I wonder, now, if the linguistic turns in those reflections foreclosed, too soon, real collaborative feeling; like community (or, because it’s hard to resist, kinship and worlding). All these students toiling away in separate rooms and library cubicles, synthesizing separately. Autopoietic onanisms. Needed more sympoiesis. I still regret a final project that was a bit too stylistic and “discursive” – performing the film student, trying to dazzle a little with Final Cut Pro; but some good calories lost. The art folks I admired made it hard for me to distinguish between medium and message, or situate with them with grace. (Not that Haraway was successful for everyone. But she kept playing whilst under the microscope.) I’m remembering a kind of safe feeling in poststructuralism. But perhaps that was just the safety of academia which easily postpones praxis and doing something while chatting in the corridors of monuments to robber barons. Because, like, I’m also remembering that the materiality was a welcome call to action. Perhaps I wielded it – some subconscious desire for participation in activist movement(s) for climate, economic, gender, racial justice – as an antidote to the first two years of cultural studies and dorm-chair politics. It’s hard to remember exactly if I was that wise; maybe the blunt horrors of the Bush regime’s colonialism/imperialism were enough of a catalytic flow through the student body’s progressive consciousness.

    Still, interesting to wonder, now, about the concurrent poststructural deluge of neologisms and deconstructing of gender, race, class oppressions, cyborg multiplicities back then; while faced with newly understood, centuries old urgencies of white supremacist capitalist patriarchy and Black holocaust. I’m at the same place now, returned, recurred. Which is somewhat shocking and all too expected in America.

    That said, the second time around, considering how Haraway’s work traveled and continues to travel in the last 15-20 years, I’m excited to be hailed again, thinking with her. While moving duckets to organizing, talking to white brothers about racism. Worlding assemblages seems really relevant still as the complexes of racial capitalism and white psychosis have become wider, deeper, and more micronic.

    Sunday August 30, 2020
  • habitations

    Byzantine algorithm studies: Using math to reverse an integer

    Making code do something requires exactness but that doesn’t necessarily mean programmers will express grammars with any particular concision or ease. Code is an outcome of social construction, background, perhaps aesthetic desire. Toy code problems reveal the apparatus, for sure. If you ask me to reverse an integer I’ll build a world of arrays – because I like them. Directionality makes me think of traversal, therefore arrays. I also recall the reverse() convenience immediately. However, as a result, I’m forced to deploy morphological contortions; which might feel icky for someone derived within an algorithmic hermitage, auto-recursively deriving pleasure therein.

    function reverseInt(int) {
      // STEP 1 - morph
      let intAsArrayOfChars = int.toString().split("");
    }

    Reversal is now nearby:

    function reverseInt(int) {
      // let intAsArrayOfChars = int.toString().split('')
    
      // STEP 2 - traverse, recombine
      let reversedIntString = intAsArrayOfChars.reverse().join("");
    }

    All that remains is a second transmogrification back to the initial type and signing:

    function reverseInt(int) {
      // let intAsArrayOfChars = int.toString().split('')
      // let reversedIntString = intAsArrayOfChars.reverse().join('')
    
      // STEP 3 - Sign
      let result = Number(reversedIntString) * Math.sign(int);
    }

    But it has come to my attention that a mathy person prefers preservation, whereby the input is arithmetically recombinated with the help of division and multiplication.

    function reverseInt(int) {
      let remainder = 0;
      let result = 0;
    
      while (int) {
        remainder = int % 10;
        result = result * 10 + remainder;
        int = parseInt(int / 10);
      }
    
      return result;
    }

    reverseInt(12345);
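
    Tracing reverseInt(12345) iteration by iteration makes the arithmetic legible:

    // int     remainder   result
    // 12345   5           5
    // 1234    4           54
    // 123     3           543
    // 12      2           5432
    // 1       1           54321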

    But there’s no translation from integer to array. No shift. Grammar school level maths instead. I guess this solution exalts type consistency. Perhaps belies an obsession with 10? The base binary pair. I can’t imagine the benefit of doing that in a commercial software project so it could only be an academic fancy, benchmarked under extreme conditions. It’s really clever, though.

    Also, been thinking about slice a lot today. Never stop dreaming:

    let int = 12345;
    let intAsStr = int.toString();
    
    new Array(intAsStr.length)
      .fill(null)
      .map((_, idx) => intAsStr.slice(-(idx + 1))[0])
      .join("");
    Thursday August 6, 2020
  • habitations

    What's a bug?

    …bugs are interesting and important in themselves: they define the boundary between what is understood and what is not. Hence, they show us where our weaknesses are, and provide opportunities for us to learn and grow.

    Software Development and Reality Construction
    Ch 5.1: The Denial of Error, Joseph A Goguen

    The rush to fix bugs to recoup customer satisfaction is absolutely necessary when writing for capital. I just worry that without the proper retrospection we won’t actually increase our understanding of the software, and therefore the recidivism rate of error will remain constant; or likely increase in the tail wind of growing complexity and entropy.

    Wednesday July 15, 2020
  • habitations

    Recursion is too easy for writing nested menus in UI

    Hannah Höch, Für ein Fest gemacht (Made for a Party), 1936 Collage

    When paired with component-based framework tools and HTML’s unordered lists. Since the rendering is done by the framework, we simply need to progressively stack render calls as we traverse the data. Such brutalist Dada.

    function List({ items }) {
      return items.map((item) => (
        <ul>
          <li>{item.name}</li>
          <li>{item.children && <List items={item.children} />}</li>
        </ul>
      ));
    }
    Sunday July 12, 2020
  • habitations

    A Recursive Validation Function with User-Defined Exceptions

    Every time I use a recursive function for something practical in commercial software my notional machine of its behavior is refined. This amounts to a small list of heuristics amassing in my mental pocket:

    1. “It’s a function that calls itself.”
    2. “You make your list of tasks first then start from the last and work your way back up” or “It’s like unpacking a box with other, smaller boxes, with other smaller boxes, with other…and then only start looking at the boxes from the smallest to the largest, one at a time” (🎩👌 Aditya Bhargava’s grokking algorithms)
    3. “It’s good for building up a list or value, kind of like reduce”
    4. “It can be less code but less performant.”

    After working on another problem recently that involved deep-diving a nested JavaScript object and executing validations against each level I’m adding:

    “Recursion is awkward if you need to break early.”

    In my particular case I needed to validate a recursive data structure representing an org chart of Employees and return invalid immediately if the code traversed an Employee with bad data – extra fields, missing required fields, fields of the wrong type, etc….

    Breaking out from a recursive function is not quite as straightforward as you’d think. Also, historically I was used to seeing recursive code employed for tasks that wanted the call stack to build up all the way through the data.

    Like, (deep) flattening an array:

    function deepFlatten(nestedArray, result = []) {
      for (let element of nestedArray) {
        if (Array.isArray(element)) {
          deepFlatten(element, result);
        } else {
          result.push(element);
        }
      }
    
      return result;
    }

    Or, fetching a complete set of data from a remote source in chunks:

    async function fetchAll(params, all = []) {
      let chunk = await fetch(params);
      let nextPage = chunk.nextPage;
      all = all.concat(chunk.data);
    
      if (nextPage) {
        let nextParams = { ...params, page: nextPage };
        return await fetchAll(nextParams, all);
      }
    
      return all;
    }

    What I quickly discovered is that just trying to capture and emit an error from a recursive call stack is already a bit funky. Simply returning false in your function doesn’t work because calls lower on the stack may return true; and since we’re (kind of) “building a value” it only matters what the final call returns. This approach won’t work:

    // Will only return false if the last call in the stack returns false
    function validate(data, schema) {
      for (let item of data) {
        for (let rule of schema) {
          let field = item[rule.name];
          let required = rule.required;
    
          if (required && !field) return false;
    
          // Recurse
          if (Array.isArray(field)) {
            validate(field, schema);
          }
        }
      }
    
      return true;
    }

    Using recursion is more like a leap of faith - you are handing over control to the JS engine over an unbounded data set; it’s quite reminiscent of the manner in which higher order functions operate with Array and Object collections. For example, forEach is a powerful and declarative alternative to for and for..of/in loops until you find yourself needing to skip over an iteration or break out of the loop. Keywords like continue and break are unavailable in Array and Object collection methods – these are closed iterators.
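
    A quick illustration of that closed-ness:

    // break works in a for..of loop…
    for (const n of [1, 2, 3]) {
      if (n === 2) break;
    }
    
    // …but forEach has no escape hatch; returning only skips the current call
    [1, 2, 3].forEach((n) => {
      if (n === 2) return;
      console.log(n); // logs 1, then 3
    });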

    Your only recourse in a recursive function is relying on outer calls – since the call stack is LIFO – to accumulate a result and pass it through each stack layer. So capturing and emitting an error from your recursive function might look like this:

    function validate(data, schema, errors = []) {
      for (let item of data) {
        for (let rule of schema) {
          let field = item[rule.name];
          let required = rule.required;

          if (required && !field) {
            // collect a descriptive error (any error shape works here)
            errors.push(new Error(`Missing required field: ${rule.name}`));
          }

          // Recurse
          if (Array.isArray(field)) {
            validate(field, schema, errors);
          }
        }
      }

      return errors;
    }

    If our program requirements suggest we want to parse the entire org chart for bad data, this function will give us a result array we can further process to report errors. But for my purpose, the potential cost of unnecessary runs while a large call stack clears for a large org chart is too great.

    In order to stop processing the org chart and return an invalid result early, we need a solution that stops execution entirely when the invalid check is entered in the outermost call. As it turns out, the solution ends up being rather elegant and simple, though counter-intuitive. Rather than returning (false, an error list, etc…), you can throw and thereby forcibly halt the engine’s execution of the code. Here’s an example with throw:

    function validate(data, schema) {
      for (let item of data) {
        for (let rule of schema) {
          let field = item[rule.name];
          let required = rule.required;

          // It's even one less character to write! 🤣
          // Also now we have total control over the exception content
          if (required && !field) throw new MissingFieldError(item, rule);

          // Recurse
          if (Array.isArray(field)) {
            validate(field, schema);
          }
        }
      }

      return true;
    }

    Day in, day out we work constantly with client applications that only throw as a result of unintended bugs in the program. But we can take advantage of this standard JavaScript behavior and erect an appropriate error boundary. Remember:

    Execution of the current function will stop (the statements after throw won’t be executed), and control will be passed to the first catch block in the call stack. 🔗

    Therefore we can rename and wrap our recursive function that throws, and put it inside an error boundary to achieve that early break we want. This approach even comes with the added advantage of declaring the content of our user-defined exception at throw site; eg, utilizing meaningful error constructors or factories like missingFieldError().

    function validate(data, schema) {
      try {
        validateInner(data, schema);
      } catch (error) {
        // returns new MissingFieldError()!
        return error;
      }

      return true;
    }

    Even more, the elegance of this design with an outer try/catch allows for separate testing of our validation business logic – the rules against which bad data throw – and error handling – what errors we emit for certain cases.
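
    For completeness, a minimal sketch of what that user-defined exception might look like – the exact shape here is my assumption:

    class MissingFieldError extends Error {
      constructor(item, rule) {
        super(`Missing required field: ${rule.name}`);
        this.name = 'MissingFieldError';
        this.item = item; // carry context for the error boundary to report on
        this.rule = rule;
      }
    }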

    Monday July 6, 2020
  • habitations

    Debugging TS in VSCode and Russell Ackoff's Problem Treatments

    When I’m drifting between jobs – say, during a global pandemic and massive civil rights upheaval – I tease away some time from protest and anti-racist organizing to study programming basics and plunge below the membranes of web frameworks. Of course, the first thing you realize after a few key clacks is the brutal lack of a full-featured dev environment. I think my industry folks who don’t spend free time on side projects can relate.

    I had a serious moment of angst this morning about whether I could achieve debugging for TypeScript files in VSCode within a reasonable amount of time – that is, an amount of time that wouldn’t deplete all my creative energy before writing any code. To my very pleasant surprise this was much easier than expected. You need two things.

    1. Add a task to run the compiler in watch mode to .vscode/tasks.json (this is totally boilerplate):
    {
      "version": "2.0.0",
      "tasks": [
        {
          "type": "typescript",
          "tsconfig": "tsconfig.json",
          "option": "watch",
          "problemMatcher": ["$tsc-watch"],
          "group": "build",
          "label": "tsc: watch - tsconfig.json"
        }
      ]
    }
    
    2. Add your outFiles (dist) target glob to your launch.json file:
    {
      // Use IntelliSense to learn about possible attributes.
      // Hover to view descriptions of existing attributes.
      // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
      "version": "0.2.0",
      "configurations": [
        {
          "name": "Launch server.js via nodemon",
          "type": "node",
          "request": "launch",
          "runtimeExecutable": "nodemon",
          "program": "${file}",
          "restart": true,
          "console": "integratedTerminal",
          "internalConsoleOptions": "neverOpen",
          "runtimeArgs": ["--experimental-modules"],
          "outFiles": ["${workspaceFolder}/out/**/*.js"] // <-- ADD YOUR OUTFILES GLOB HERE
        }
      ]
    }
    

    And voila, pausing on a breakpoint in a basic router I wrote while running a node server.

    Ok new topic…

    During episode 138 of the > Code podcast with guest Vaidehi Joshi (of BaseCS podcast fame amongst many other things), the discussion veers into the topic of problem solving (~20:30). Jamey and Vaidehi riff about that condition we all know and love – and never appreciate enough – whereby solutions come to you while away from the computer; while your brain is “percolating” (Jamey) or in a “mental marinade” (Vaidehi). Jacob Stoebel expands on that by describing another common mental turnabout we programmers do: we often catch ourselves down a rabbit hole trying to – big “S” – Solve the problem with our first instinct; but that increasingly deeper and muddier slog causes us to pause, shake off the adrenaline, climb back out of the hole, and ask ourselves: how do I get rid of this problem? Rein, per usual, picks up the thread and introduces the panel/listeners to a theorist named Russell Ackoff who minted a classification for approaches to problems called “Problem Treatments.”

    To list and paraphrase them:

    1. Absolution - Ignore the problem and hope it goes away

      It’s easy to joke about this one, but Rein actually provided a compelling case for this. For example, security engineers often make the tradeoff to ignore a security loophole if the threat isn’t great enough.

    2. Resolution - Doing something “good enough”

      This feels like a pretty standard principle for building product considering some of the typical aspects of commercial software work:
      a) Your team will never have a complete understanding of how your software will express itself once “running” or “used” by customers. Therefore perfection, completeness, wholeness are not desirable or ever, actually, achievable.

      b) Deadlines and other external stakeholder pressures will force you to sacrifice however your team defines “great” or “better” software

    3. Solution - Finding an optimal solution with experimentation or research

      There are touchstone moments in the development of software where this happens. When choosing a framework. When designing a styleguide. When choosing a caching strategy. When designing a game plan to migrate code.

    4. Dissolution - Redesigning the system so that the problem no longer exists

      This is what Jacob was hinting at. That pure land you inhabit when refactoring code to circumvent an immediate impasse in the software by reframing the problem. I fondly remember dissolving a particularly tricky conditional statement in a web client by working with a backend engineer to implement an enum for a resource type. This removed a bunch of object introspection (Law of Demeter violations, etc…) in favor of a simple switch statement.

    Wednesday June 17, 2020
  • habitations

    The explanation of Question 12 of Lydia Hallie's fabulous list of JS interview questions, and others

    Lydia Hallie’s list of JS interview questions is extensive and probably one of the most comprehensive resources out there. The laundry list of questions runs the gamut from basic to esoteric language features, each of which progresses you through various themes in what seem like meaningful sequences. It appears some modicum of pedagogy is involved, which avoids that oft-observed vacancy of substance typical of a listicle.

    Yesterday I was running through a block of questions with an ex-colleague and we noticed that some of the answers were a bit inchoate. It’s so easy for meaning to drift when talking about language features especially when it comes to JS. Consider navigating the following taxonomy – related to creating objects in JS – for readers of all skill levels without specious reasoning (the kind of short-hand you get the point or close-enough superstitious explanation we sometimes use; which, by the way, I think greatly hindered my early career development): object, constructor, function, class, prototype, instance, static, method, new, this. It’s quite hard but really worth being as correct as interpreting ECMAScript affords us.

    I decided to open up some discussion on Question 12 which tests one’s knowledge of the differentiated effects of using the new operator before a function call:

    https://github.com/lydiahallie/javascript-questions/issues/82#issuecomment-645022140

    Definitely called for a little revisit of YDKJS, no doubt.
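
    For a rough sense of the territory – my own sketch, not Lydia’s actual snippet – the difference hinges on what this binds to:

    function Person(name) {
      this.name = name;
    }

    const withNew = new Person('Sarah'); // `this` is a fresh object linked to Person.prototype
    const withoutNew = Person('Lydia');  // `this` is the global object (or undefined in strict mode)

    console.log(withNew.name); // 'Sarah'
    console.log(withoutNew);   // undefined – the function returns nothing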

    I also feel like I need some clarification on Question 14. What do we call the very root object in JS that all objects delegate to?

    https://github.com/lydiahallie/javascript-questions/issues/26#issuecomment-645058195

    Coming up with examples is hard. I think Question 11 could use a bit of rework too because there is a this scope issue that somewhat overloads the focus of the problem. Here I go again:

    https://github.com/lydiahallie/javascript-questions/issues/355#issuecomment-645066354

    Tuesday June 16, 2020
  • habitations

    Book quotes and commentary: _Software Theory_ by Federica Frabetti

    In which Federica Frabetti locates the “points of opacity” – malfunction – in/of software through study of the Garmisch (Germany) report – a foundational text of software from the first Conference on Software Engineering in 1968 organized by the NATO Science Committee. (Also careful you don’t mince the word “Garmisch” into Gramsci.)

    “…these two aspects…the necessity of calculating incalculable risks, and of attributing responsibility for them…is a point where software engineering ‘undoes itself’ precisely at the moment of its constitution.” -p75

    As in discovering responsibility outside of the software: in the likeness of user demands, society’s problems, ever-increasing demands on existing software from users.

    “…there are problems out there that software helps to solve.” -p79

    “…the system and its development are never clearly separated” -p78

    “…they must acknowledge that such a process is not in fact linear, but at the same time they have to avoid backward steps at all cost.” -p78

    And how professionalization, as in the planning and management of software development over time – or formalism, or “linearization”; where we literally write many things to organize time (external and internal design specs, JIRA stories, prototypes, then finally the code (I guess) etc…) – relegates software to the realm of instrumentality and further destabilizes what software is; albeit by attempting to create an interiorized structure that skirts isolation. Thus the evergreen frustrations of the business side.

    “…this appeal to the linearization of time is part of the general attempt to ‘control’ software – that is, to think of software in instrumental terms.”

    “…software is both a problem and a solution – indeed, it emerges at the point where the distinction between the two becomes undone and it exists only as the precarious stabilization of this distinction.” -p81

    Software development is a fleeting discipline. I guess a line of flight in the rhizome. So maybe we’re herding rats not cats. This partly explains, for me, why I can iterate again and again over a simple “implementation.” Because we’re never done, we’re always undone.

    Hence our continued, common problematic to characterize when commercial software development happens. Is it when you change all var to let? The moment your brain begins to mull over a requirement presented by your PM in sprint planning? The final inscription of the company’s mission statement? During a whiteboarding design exercise? The moments of realization you have in the shower, during a lunch conversation? The inscription of an idea you copy/steal from your last job without attribution? The deletion of unused code? The moment you decide to delete the unused code?

    Or perhaps the moment we recognize it is bound to break, malfunction, fail:

    “…software engineering performs an impossible expulsion of constitutive failure from technology, with this move establishing itself as a discipline.” -p76

    And further, what else software development is. Because it couldn’t just be about formalizing the process, right? Because how do we account for the incalculability of costs/time between problem and solution?

    “Thus, Dijkstra resorts to individual creativity – or, as Derrida would have it, ‘genius’ – as an explanation for what is in excess of a procedural method and constitutes a leap beyond the programmable.” -p82

    But others from the conference warn about the cult of genius. Interestingly, earlier this year I was mulling over a similar intervention four decades later:

    “The ‘excessive’ creativity of the ‘system type’ is in turn portrayed as ‘bad’ unexpected – something that exceeds the management of the project and threatens it.” -p84

    Replace “system type” with our modern-day 10x programmer or rockstar and it’s comforting to think that 50 years ago computer scientists were already wary of the danger this person posed to software development. It certainly seems like Frabetti is jabbing at that here.

    Software resists instrumentality, to be merely called “code”; it follows, then, that engineers resist becoming fungible resources in a complicated domain space. Software’s realization is a co-constitutive process of increasing understanding of the problem and the solving of the problem itself. I like the way this passage sums it up:

    “How can something based on the lack of knowledge be realized successfully? This paradox clarifies the particular understanding of time that programmers develop in their interaction with software. Not only does one find out what the system does only by constructing it; the original ignorance of what the system does is constitutive of the system…Infelicity is constitutive of the possibility of felicity…One always starts albeit one does not know what will be.” -p90

    Of course, the solution is never an end. Or have you ever built software that should not be extensible?

    Monday June 8, 2020
  • habitations

    Let's talk about Orchestration vs Separation of Concerns

    Check out the second – and I believe last – in this short series: https://dev.to/internetross/let-s-talk-about-orchestration-vs-separation-of-concerns-react-redux-edition-part-2-imo


    I’m emerging from the shadows, a bit, and trying out dev.to. Check out this first in a two part series: https://dev.to/internetross/let-s-privilege-orchestration-over-separation-of-concerns-4410

    Sunday April 26, 2020
  • habitations

    Preferring repetitive Action notifications over reuse

    Have you ever noticed yourself going to unnecessary lengths to avoid repetition in your code despite the fact that requirements actually represent a complicated world? I swear the longer I write code in the industry the more DRY feels like a leaky ideological imposition that drifts from “knowledge” duplication into code/documentation – ie text artifact – duplication. Sigh. This is a nuance we lose too easily by the ad nauseam peddling of the shorthand DRY; which I suppose also reveals the danger of the supremacist tendency of the axiomatic; axioms all too easily claim ontological root. Especially in the minds of early career folks or folks emerging from industry founding-father cargo cults.

    Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

    The Pragmatic Programmer, [emphasis mine]

    As we’re tapping along in an editor, the code that bears some resemblance to other code is, in fact, the precise nuance our applications need to create the richness of these membranes we build between people and machines. Here’s an example slightly altered.

    Let’s say we are working with an events management system where a new feature is being built to allow event organizers to administer marketing campaigns on their events. In the current world, an event organizer can do two things to a campaign once it’s added to the event:

    1. Activate the campaign.
    2. Select the campaign to be “featured” – this places the campaign into a larger sitewide collection of campaigns that are featured in certain areas of the product.

    A sketch of the stateless component in our graph might look like so (just relevant JSX and event handler):
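
    Something plausible, with CampaignItem and onCampaignChange as stand-in names of my own:

    function CampaignItem({ campaign, onCampaignChange }) {
      return (
        <li>
          <label>
            <input
              type="checkbox"
              checked={campaign.activated}
              onChange={() => onCampaignChange('activate', campaign)}
            />
            Active
          </label>
          <label>
            <input
              type="checkbox"
              checked={campaign.featured}
              onChange={() => onCampaignChange('feature', campaign)}
            />
            Featured
          </label>
        </li>
      );
    }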

    Then Product decides to add a new nicety whereby an event organizer, when activating a marketing campaign, will automatically trigger marking that campaign with the special “featured” flag if that campaign happens to be the only active marketing campaign on the event. This is effectively a third flow; of course to the user it’s just a list item with a couple check boxes (simple stuff, right?).

    As the developer I might think: this doesn’t seem all that hard! I already had the foresight to implement a flexible indirection – a consolidated event handler to coordinate the store notifications for user interactions in one place. This means all I have to do is flavor the conditional with another branch; and I can reuse the current dispatchers. I’m a fucking oracle.

    Whereby I try:
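
    Plausibly, reusing the same dispatchers inside a new branch (names are again my stand-ins):

    handleCampaignChange(type, campaign) {
      if (type === 'activate') {
        const hasNoActivated = this.props.campaigns.every((c) => !c.activated);
        this.props.updateCampaign({ ...campaign, activated: true });

        // the interior branch: a second, dependent store notification
        if (hasNoActivated) {
          this.props.updateFeaturedCampaigns(campaign.id);
        }
      } else if (type === 'feature') {
        this.props.updateFeaturedCampaigns(campaign.id);
      }
    }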

    I mean, that’s the right logic. All my test mocks are reusable. We are sailing along.

    Sort of.

    Then something nags at me looking at this code. It’s subtle, but there’s a risky assumption embedded here. Inside the interior if-then branch under the conditional expression hasNoActivated, we are sending a sequence of two distinct mutative notifications to our data store; moreover, the second depends on the first to complete successfully. Now, we might test this code in the browser and it works every time. Our tests pass.

    But.

    For how long?

    This code lives downstream of our next indirection – the container – that shepherds these messages to the store. So this code is trying to coordinate a subroutine of effectful functions (Actions) three (perhaps more) layers away from the store itself.

    Which raises some questions. For example, although the action notifiers I’m calling are synchronous, how confident am I in the inner workings of my component framework’s render engine and algorithm – particularly its collaboration with my data layer (ie Redux)? I might be forcing additional renders here. More importantly, how much can I trust other developers to maintain the correct sequencing in the container? Will a test help? Will a comment help? Will either of these prevent mistakes when additional behavior needs to be added?

    100% maybe.

    Software is weird. Code like this could become a productive, yet forgotten, corner of entropy. But silent risks, if recognized, must be avoided. We can do better, although part of reaching this better is yielding our mental model to become more reactive – to adapt, to embrace nuance, to become loose as the requirements become more complicated. My observation is that the code does not lack for defensive guards and documentation, but lacks an expanded set – single, unambiguous, authoritative – of blessed possibilities with distinct articulations. In other words, an expanded model – expressed in code – that captures more precisely the desired possibilities of what a user can do.

    Let’s try rewriting our component-level event handler to incorporate the new happy path into our domain’s lexicon: the Action notifiers, the names of things “that depend on when they are called or how many times they are called” (see Grokking Simplicity).
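
    Sketched with the same stand-in names, the rewrite routes the new case to its own notifier:

    handleCampaignActivate(campaign) {
      const data = { ...campaign, activated: true };
      const hasNoActivated = this.props.campaigns.every((c) => !c.activated);

      if (hasNoActivated) {
        // the new happy path earns its own name in the domain lexicon
        this.props.updateCampaignAndUpdateFeaturedCampaigns(data);
      } else {
        this.props.updateCampaign(data);
      }
    }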

    The modifications in this third example add a new Action notifier (handler) that represents the distinct type of mutation we want to effect against the store. In other words, a function that satisfies the new Product case and narrows attention – of the developer and program – on the new case by leveraging a semantic, structural change to the number of container handlers. (Side note: I fantasize that Zachary Tellman might describe the new handler name as a too “natural” yet still “consistent” name; see Elements of Clojure.)

    Furthermore, let’s observe that the payload semantics remain the same. Which means our data layer comprised of the container and action dispatching, only requires similarly modest adjustments to shepherd the same data. Lastly, though quite importantly, our container is now capable of dispatching messages to the store and effecting a mutation in just a single pass if we wish it. For example:

    
    updateCampaignAndUpdateFeaturedCampaigns(data) {
        dispatch(updateCampaigns(data));
    }
    
    // Whereby our reducer might return the state mutations composed as:
    
    {
        campaigns: {
            ...state.campaigns,
            [payload.id]: {
                id: payload.id,
                activated: true,
                // other fields...
            }
        },
        featuredCampaigns: [
            ...state.featuredCampaigns,
            payload.id
        ],
    }
    

    Overall this design provides increased confidence that this display component, our presentational leaf node, is more precisely responsible for render and event notification. Perhaps the subtweet here is keeping events and messages 1:1, even if you wind up with similar or duplicate lines of code.

    Wednesday April 1, 2020
  • habitations

    The will to better software companies

    Semilattice (a) and tree (b) structures. Source: A City is Not a Tree

    Lately I’ve been thinking about productivity and its evolutionary rhythm while scaling. Like, how easily the cadence ebbs. My lived experience as a software developer often feels like anfractuous GPS reroutes. For example, we have split teams to tackle different product areas in parallel. Usually this means working separately in our “owned” codebases. But software is never neat. So when the team in Mendoza needs review on a changeset that touches our code area, we’re talking temporal crossings to achieve conciliation.

    It’s late afternoon in Argentina wine country when I’m just digging my heels into a morning Philz roast.

    The additional burden of a cross-geo async request means that I’ll have to hunker down to decipher what their proposed code is doing and what it’s supposed to do – what might take a few minutes pairing could take tens of minutes. While async communication over Slack is ok, closing the mental gap in understanding the code changes is tough. We are facing a kind of Derridean différance. As if navigating the palimpsest wasn’t enough. Multiple personages reading and explaining the code text, scribbling comments, changing, re-reading the next day; discovering the truth in the spaces/tabs. Reading software is really tough stuff. This impact of distance and time spent to “get up to speed” (pun intended) is a delta that doesn’t always feel satisfyingly addressed by organizational planning.

    Does this sound like what you do for work?:

    Every software project is like a palimpsest where every developer scratches old text, and writes on top of it their perceived new solution, creating a manuscript where a mix of styles and languages reflect different understandings of the world, different world visions.

    And

    Différance is the systematic play of differences, of the traces of differences, of the spacing by means of which elements are related to each other. This spacing is the simultaneously active and passive (the a of différance indicates this indecision as concerns activity and passivity, that which cannot be governed by or distributed between the terms of this opposition) production of the intervals without which the “full” terms would not signify, would not function.

    Positions

    Holy complicatedness, richness, depth! No wonder you can’t just throw more devs at the problem. Though apparently that’s what men in the industry used to think.

    Not too long into my first software gig my team of 5 engineers organized a book club and tackled The Mythical Man Month. That’s when I really learned how dev teams became reduced to fungible “resources.” (It wasn’t long before I was at a happy hour and our head of sales was asking me, “Why can’t we just ship the work you do to India?” So apparently it’s a historically recurrent theme.) It was also my first introduction to the idea that you can’t just throw more people at software problems to achieve speed gains.

    Adding manpower to a late software project makes it later.

    The Mythical Man-Month

    If we believe Jessica Kerr and the other cyberneticians of our ilk, and on this point I might – software is a symmathesy: a learning system of learning parts. Brooks must have believed something similar; I just don’t remember his exact words.

    Nonetheless, I interpolate. Brooks' insights feel situated as early symmathecist thinking. Software is so highly sophisticated, so highly complicated!, that command and control thinking can’t intervene to establish order:

    As time passes, the system becomes less and less well-ordered. Sooner or later the fixing ceases to gain any ground. Each forward step is matched by a backward one.

    Ibid.

    So why does this happen? Like, despite Brooks' spacious, ethereal characterization of our work as only slightly removed from pure thought-stuff where we build castles in the air (words I do remember wonderingly)…we definitely live through a tangible stickiness, queues, waiting rooms; especially mid-to-late in the build cycle. Things get really hard. I swear there’s something even metallic about a build failure.

    In order to change our system, we need a mental model of it. Each developer has a mental model of the software we work on. Each developer’s mental model is necessarily incomplete and out of date. And, they’re each different from everyone else’s. (This is called Woods’ Law, if I recall correctly.) We spend time reconciling our mental models enough to communicate with each other; this is a coherence penalty.

    Well, we all know the drill and can recall with pretty sharp clarity what tends to happen. Making code changes later in a project becomes increasingly difficult because previous design decisions have constrained possibilities. More consistency is demanded; maybe not total, but if the component handlers should call to the reducer, why are you calling setState() directly in the handler? Because…the rush to the finish line produces anxiety that strains communication (within the already polynomial channeling). You shift from pensive philosopher to tactician; code breaker. Code review quality suffers, if only for the obvious reason that you need to double down on crafting and glueing code for more hours in the day. This, in turn, decreases mental reserves for context switching to someone else’s code. All that great senior leadership out the door. There’s simply less consensus building. Public Slack channels are quieter. Regardless how high-performing the team, I’m convinced there will be some communication degradation in the twilight hours.

    When I read Coda Hale’s recent piece Work Is Work all of this ugh was loaded into view again; and I really appreciate his analysis of how we end up in slog mode; and the antidote. Maybe the industry hadn’t received a really concise reflection in a while. Maybe it’s the mathematical proofing and formalism that adds a seductive element for the modern SWE struggling in their unwieldy symmathesies; what Karatani would call our “will to architecture”. This piece was going around the Greater than Code sphere for sure. For my philosopher/poet brain for sure. Ok, so similar to Brooks, but for orgs, Hale says:

    At some point in time, every organization realizes that it’s slowing down.

    Like Brooks, Hale is dealing with that slog. But with the scope zoomed out to help us understand how to manage and overcome the obstacle of big organizations growing at economies of scale. He enriches the idea of the mythical man month with Amdahl’s law:

    The work capacity of an organization scales, at most, linearly as new members are added.

    Ibid.

    Amdahl’s law: I’m still wrapping my head around it! But I believe the gist is: when we try to add more people to parallelize parts of the dev work, we won’t get dramatic speed gains over the sequential work of the smaller group. Amdahl observed this behavior while throwing additional processors at a computational task in a computer machine. The speedup from parallelizing is bounded by the work’s sequential fraction, so adding N new members yields diminishing returns that eventually top out and flatten. Rather than the speed boost our command-control leadership wills, we get the flip side, slog:
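
    For reference, the usual statement of the law – assuming a fraction p of the work parallelizes perfectly across N workers – is:

    S(N) = 1 / ((1 - p) + p / N)

    As N grows, S(N) tops out at 1 / (1 - p); even with p = 0.9, no army of workers buys more than a 10x gain.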

    Coherence costs grow quadratically as new members are added.

    Ibid.

    Slog = coherence costs! That’s a sweet little buttoned up phrase for it.

    Around the same time I read Hale’s piece I was picking up the aforementioned Kojin Karatani book, Architecture as Metaphor. So while I’m sitting with these ahas! about how the frustrating things at work are these somewhat well-understood “coherence costs” that occur on large teams like mine which can be observed with rigor from the outside (etic)…I’m suddenly finding myself in chapter 4 of Architecture as Metaphor called the Natural City where Karatani continues to build his argument for the ways that theory is entrapped again and again by the architectonic. But the thread I want to pull on is simply Karatani’s synopsis of Christopher Alexander’s The City Is Not A Tree. Lol, of course I should meet Alexander here. He is like one of the fathers of Software Patterns – see his 1996 keynote at OOPSLA – that would inspire so much of how our apps are built at scale today.

    In the seminal essay The City Is Not A Tree – which would inspire people across fields and which resonates with Herbert Simon’s notion of bounded rationality – Alexander proposed mathematically based, formal, structural ideas (just like Hale is doing) to describe the “natural city” as a semi-lattice; a concept from Set theory.

    A collection of sets forms a semilattice if and only if, when two overlapping sets belong to the collection, the set of elements common to both also belongs to the collection.

    A City is Not a Tree
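
    Concretely – a toy example of my own – if {1,2,3} and {2,3,4} both belong to the collection, the semilattice condition demands their overlap {2,3} belong too; a tree, by contrast, only admits sets that nest wholly or not at all.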

    He is thereby able to reveal the “natural city” as something we can think about rationally. Particularly, and importantly, rationally distinct from the nemesis “artificial city” – a command-and-control style approach to designing cities in a treelike fashion from those who don’t appreciate the richness, subtlety, and complexity in the overlaps that otherwise staves off “coming destruction” and “anarchy.”

    When we think in terms of trees we are trading the humanity and richness of the living city for a conceptual simplicity which benefits only designers, planners, administrators and developers. Every time a piece of a city is torn out, and a tree made to replace the semilattice that was there before, the city takes a further step toward dissociation.

    Ibid.

    And

    The semilattice…is the structure of a complex fabric; it is the structure of living things, of great paintings and symphonies.

    Ibid.

    My brain keyed on semilattices because they seemed to describe the structure of a software system I’ve been coming to know through Kerr and Bateson (opera, camerata, living) – not to mention the cutely convenient lexical overlap of mathematical terms in the definition; eg, a “Set” is a common data structure implemented by many high-level programming languages. Mendoza devs putting their text in our text. Some overlap of time zone. Knowledge overlaps as more tenured developers work in more areas of the codebase. Overlap at the coffee station, at the lunch table. Overlapping documentation, nearly duplicate. Also the incomplete lines of connection. The grassroots partnerships and friendships that emerge across teams. The polymorphic ensemble of patterns and style in our web applications: classes called as functions, state reducers and DOM events, REST with GraphQL, compositional components within hard-bound graphs. Microservices databases implementing Order and User over and over again. The semilattice seems very familiar.

    Therefore, in the context of a symmathesy, where Alexander might say our bounded rationality (not to mention hubris, or inexperience, or ego, or other ingressive tendencies) limits our ability to observe and study the subtlety and richness of the organization, throwing more men at the problem is short-sighted as fuck. Managers can’t govern from apart, at the top of the edifice. There are superlinear interactions to behold.

    Then the advice that Hale starts giving to tackle coherence costs makes a lot of sense:

    The only scalable strategy for containing coherence costs is to limit the number of people an individual needs to talk to in order to do their job to a constant factor.

    Limit the overlap!

    As with heavily layered applications, the more distance between those designing the organization and the work being done, the greater the risk of unmanaged points of contention. Top-down organizational methods can lead to subdivisions which seem like parallel efforts when listed on a slide but which are, in actuality, highly interdependent and interlocking. Staffing highly sequential efforts as if they were entirely parallel leads to catastrophe.

    Ibid.

    And

    As an organization hires more employees, work on productivity improvements must be a constant priority. Internal tooling, training, and services must be developed and fielded to ensure that all members are able to work on problems of continuously increasing impact. The ceaseless pursuit of force multipliers is the only possible route to superlinear productivity improvements as an organization grows.

    Ibid.

    In an essay I assume Hale drew from, and to which Jessica Kerr’s above quote references, Michael Nygard makes a similar suggestion:

    …take a look at your architecture, language, tools, and team. See where you spend time re-establishing coherence when people make changes to the system’s model of the world.

    I love all this. Of course, it’s hard. Alexander tested folks, experimented to discover that the tendency was to “reorganize” the mental model into a treelike structure (though perhaps he only interviewed a group of limited white men, or folks embedded in white supremacist society?) But I suspect some of the hardness has to do with:

    1. Our inability to see the costs (ie the complexity, simply constrained by our brain’s inherent limitation to expansively see/hold it all); for leadership looking at the org we can also probably say that their “mental model is necessarily incomplete and out of date”.

      It is similarly difficult to visualize the semi-lattice structure – where multiple sets overlap – of the natural city, and thus we tend to reduce it to a tree, the only structure we can visualize.

      A City is Not a Tree
    2. An ignorance that we have a semilattice at all. Who out there amongst leadership are systems thinkers? Embracers and students of variety, incompleteness, ambiguity, multiplicity? Who can embrace these, split things, and double down on rewiring the learning (neuronic) threads? Ensembles over A/agile. More of that interspecies contamination, contaminated diversity that Anna Tsing talks about. Mushrooms and the dying earth can help.

      Where a matrix indicates a high-touch relationship between two groups (e.g., a group of engineers working on a feature and the lawyers trying to ensure the legal compliance of that feature), efforts should be made to reduce the cost of that interaction by colocating their members (e.g., embed a lawyer with the engineers).

    Management or whatever version of leadership you have must stay on top of it. It’s some serious work. Maybe we need better integration tech; it’s like we are missing some connective tissue. Maybe it’s not tech at all.

    I remember being met with a lot of skepticism when I suggested to some engineers at my company that we create a meeting support team. It was a partially-formed thought, but the basic idea would be to make a self-selected group available to take notes for other people’s meetings. I don’t think many devs grasp the importance of their meeting communication as architectural design records; and similarly how hard it is to accurately and correctly document after the fact; or document in progress for someone who is also trying to participate. Living in entropy is easier than culture change. I also understand that no computer science education ever taught anyone this stuff was important, which is why Brooks’ observations went unheeded for 50 years.

    Postscript:

    Why did I keep trudging through Karatani while writing this essay?? I should have known better: a few chapters later I’m cry-laughing because I bump into that hopeful schizo theory which blew my mind like 20 years ago in undergrad. And now I’m like damn because Karatani is saying that the semilattice is probably not the ideal (post)structure I want to imagine upon my software company. Of course, this makes sense in Karatani’s argument because he wants to problematize architectonic thinking.

    Despite its appearance, [the semilattice] is orderly and centered.

    The Will to Architecture

    And

    What happens when the semilattice is broken?

    Ibid.

    By Deleuze and Guattari, of course. Whatever, I still think the semilattice is a good formal representation of why coherence costs emerge in the indeterminate, rich encounters of the software symmathesy; the gentle swish of overlapping domains; like at Hearst and Euclid. But the rhizomatic network requires consideration soon. I’m urged that way anyway – see my hints above about interspecies contamination and the anthropocene. I think the ensemble moves toward assemblage soon.

    Thursday March 12, 2020
  • habitations

    Some patriarchal intervention at Google I/O a while back

    About 11 years ago at Google I/O two dudes, Brian Fitzpatrick and Ben Collins-Sussman, presented The Myth of the Genius Programmer to an audience of mostly dudes. Well, we can surmise at the demographics because women were still being invented around this time.

    Women are a very recent invention.

    Ursula K. Le Guin, The Wave in the Mind

    Or, rather, reinvented since they had been denied participation in Western STEM since at least the middle 1700s:

    Women lost their security clearances very early in the stories of leading-edge science.

    Donna Haraway, Modest Witness

    Despite the missed opportunity to cite the origins of congressive world-making in non-human-men beings’ experience, indigenous practice, cooperative economics, anarchism, feminism, liberation theologies, etc…the talk is a pretty good orientation for ingressive types to think congressively about working with other people.

    That’s my subtext as I watch. I just wish men were erudite enough to understand their re-invention. Non-mens have to keep rewinding it back. For example:

    The men who wrote the [Agile] manifesto are unclear on whether they invited any women. Even so, the thing I notice from both the manifesto, the accompanying principles, and the fact that these 17 men call themselves ‘organizational anarchists’ is that what they came up with is inherently subversive, anti-authoritarian, and feminist. There is an emphasis on self-organising, collaboration, experimentation, welcoming change, and building high-trust and supportive relationships.

    Tech, like other industries, continues to produce the worst of/from man because its manifest teleology of Progress (what Donna Haraway calls “Euclidean tales…of tragic detumescence”, the “prick tale.” ) – sieved through history from the industrial revolution – is wrong for the precarity of the Anthropocene and a damaged earth. For richness of our symmathesies. We all suffer for it.

    Racism and sexism are the key ingredients in the alchemy that has transformed “Don’t be evil” to solving the hard problems of “Controlling the explosion of child porn on the platform” (Youtube, to name a recent example. The darkest of whoops).

    What these guys are hinting at is that a congressive mindset can help move us at the speed of trust, and therefore build better software together. So, yep, there was some movement going on a decade ago. Radical muds seeping into the granite of hard dick culture; white boys catching each other and putting together slides for the other boys.

    The talk is an introduction and invitation to trust-based collaboration and vulnerability. Touchstones include:

    • you are unique but not special (meh, reinscribing a sad-machismo Tyler Durden reference)
    • individual genius is a myth
    • ego is dangerous
    • stop hero-worshipping, Linus et al
    • coding is social
    • feedback is important
    • getting feedback early and often is good
    • being nice is important
    • code review is necessary
    • failing publicly is ok
    • keeping record of your failing is ok (keep documentation for failed designs)
    • being non-perfect publicly is ok
    • vulnerability at work is a strength
    • tools create a sociality
    • tools can unnecessarily restrict

    From the progenitor of the terms “ingressive” and “congressive” that I’ve been flinging about wildly:

    Ingressive is about going into things and not being waylaid by what people think or by emotions and congressive is about bringing people with you and bringing people together and unifying and making connections between things…. I’ve come to think that congressive behavior is basically better for society but that ingressive behavior is rewarded more by society because society is based on competition. It’s based on how you present yourself so for things like competing for a job, you have to be ingressive to put yourself forward for promotion and to talk about how great you are. Whereas, when you’re actually doing some work with people, then it’s really helpful to be congressive because then you bring people together, you understand people.

    It’s devastating to think about the enormity of emotional intelligence training men needed/need/will need; how expensive it is to refactor patriarchy’s structural grammar; that ambition which pits men against one another in an end game for supreme master and commander. In this ever-war even order and peace means the seizing of power; power must be assumed by the individual. It is not shared.

    Sure, there is a drip of undoing of Patriarchy when disenfranchising the myth of genius; but the non-human-men bear the major brunt of burden for maintaining the moral center and possible salvation.

    I’m reminded of another talk.

    Hillel Wayne turns a similar gaze navel-y and spotlights the peril of “charisma-driven development” we suffer in engineering culture. We might say another symptom of cock-sure genius arising. I love this talk. Recognizing the artifice of loudness and leaning, he looks at the data and calls bullshit on men. The data is good. It’s a tool. Nonetheless, I’m not sure Hillel’s Empiricism will be as powerfully feminist as interventions like Thomas’ archeology. Of course, Cheng’s metonymic replacements are interesting in that they make it easier to implicate women’s complicity in patriarchy. Hot take: overall let’s be skeptical of the neo-classical – logic can win – and neo-liberal – we can reach across the aisle – tactics.

    Friday January 24, 2020
  • habitations

    Thinking about heuristics for avoiding code duplication across the stack

    Doublemint

    At the office I’ve sprinkled some glue labor on a piece of documentation that attempts a set of heuristics for avoiding code duplication across client and api layers. I kept seeing this type of repetition occur and it was creating too much maintenance risk in the code.

    Here’s an example. Imagine a piece of code that takes inventory items from an order and derives a cost based on the order status. This calculation could be executed in the api layer, where it would be serialized into the response body adjacent to the original cost data. Something like:

    {
        inventory_items: [
            {
                order_id: 1234,
                cost: {
                    major: ...,
                    minor: ...,
                    // etc...
                },
                cost_display: 'MXN186.67' // <== augmented data
            }
        ]
    }
    

    This is a totally reasonable API design. However, a client developer could decide to do the calculation in the view code with a more raw serialization of the inventory item and order objects, and then merge that data:

    const composeInventoryItemForDisplay = (inventoryItem) => ({
        ...inventoryItem,
        costDisplay: calculateCostForDisplay(inventoryItem),
    });
    
    function calculateCostForDisplay(inventoryItem) {
        let newCost;
        const order = lookupOrder(inventoryItem.get('order_id'));
        const currency = order.get('currency');
        // subsequent processing steps...
        return newCost;
    }
    

    In a software system there are many floating nodes urged toward similar ends: teams, software that’s running, software that’s being worked on, etc… It’s hardly uncommon for both a front and back end team to write some code to solve a similar problem – what might even be a simple calculation – while totally blind to the other side tapping away. Frequent team project reassignments, geospatial and organizational distance, depth of expertise, weak cultural value around curious coding, etc… these types of contention costs create the regrettable complicatedness, the vivant natura, that results in risky duplications. Especially risky for inconsistencies that might develop inside essential software modules like pricing computations.

    Supplemental thoughts from Coda Hale’s new blog post going around:

    Contention costs grow superlinearly as new members are added.

    Coherence costs grow quadratically as new members are added.

    Limit the number of people an individual needs to talk to in order to do their job to a constant factor.

    I’m not sure I would have believed this type of thing would happen before I joined a larger company with an engineering group spread across geos; teams flung across the ownership matrix by the high winds of market shift. This investor and wall street casino shit drives tough team decisions. Lol team pet names: stunted protologisms. Like, what happened to the Squirrel team? Oh, they are the Crib Gto Pomp team now. Well, at least we are consistently circumlocutory at naming teams.

    But lo and behold, an example just like the above cost calculation exists. I saw something like it in our JavaScript code. Then I asked the folks who work on backend if the frontend really needed to be doing this work. We shouldn’t. We literally shouldn’t, because the computation for this existed in our backend service already. This is one reason we can’t have nice things.

    So I’ve been thinking a lot about an approach to code, a kind of mental shim, to mitigate the coherence costs across the stack. Even with a fluctuating number of devs with knowledge about the code. Maybe we should just call this predicament patchy; patchy coding. Here’s what I’ve got so far with a cute story and real life examples:

    Avoid duplicating code across the stack
    
    Consider the following scenario which is based on true events.
    
    One day Team RocketShip is working on a feature for Big Initiative X. They implement some computational logic in Ticket Availability Service which adds a new attribute on TicketClass. The code is great, it ships, customers are elated. But soon after, Team SpaceShip is tasked with another piece of Big Initiative X and needs to display the result of similar computational logic in a React app. However, Team SpaceShip doesn't know about Team RocketShip's previous implementation in TAS, and they decide they can actually write this computational logic into the client code using existing TicketClass data. The code is great, it ships, customers are elated.
    
    Uh oh. Now we have duplicate code on the backend and frontend. Which means code that is very likely to become out-of-sync and result in disruptive maintenance work or bugs in the future. But how do we avoid this situation? Unfortunately, this is one of those social (read: people) problems of software engineering. There's no way to reliably automate prevention with static analysis of the code.
    
    Heuristics:
    
        There > 1 attributes returned from the API that are similarly named
            Here's a real life example:
            TicketClass used to have attributes for auto_hide, hide, auto_hide_before, and auto_hide_after that client code in checkout utilized to compute the ticket's "hidden" status. In other words, client code asked of these associated attributes: should this ticket be unavailable for purchase. However, creating statically derived properties from a mixture of data may be an indication the backend can do this work for you, or already has.
                Ticket Availability Service already had code that computed this logic. Ultimately the frontend computation was replaced with a new attribute added to the TicketClass API.
        There > 1 attributes returned from the API that are related "communicationally," in that they reference similar types of data that are computed differently.
            Similar to the example above, the existence of auto_hide and hide attributes seems conceptually related – through "hide" – and can be a clue that there may be logic on the backend that could be reused for hidden status calculation.
        I need a static value that won't be impacted by user behavior after first load
    
    Make the backend do the work
    
    Heuristics:
    
        Will the iOS and Android teams have to implement this too?
            Always remember that the desktop website is just one place where we write code for consumers of our applications. If the feature you are building will likely end up on a device, it's likely the backend code can expose what you need in the API.
    
    Monday January 20, 2020
  • Maybe Eithers with Promises

    I want to take some bits to process a recent issue of JS Weekly that reposted a new piece by Eric Elliot about optional values in JavaScript. I read it. It lit up my memory.

    One of the first Lambda Cast episodes I listened to was #6: Null and Friends. At this point in time I’m near the start of my FP cultivations, sometime mid- last year I think. It’s probably around the time I was finishing Kyle Simpson’s Functional-Light JavaScript. I was fairly shocked to hear the bros discuss functional languages that are designed to keep null out of your programs. I could feel myself starting to consider the FP language hype, especially the hype around eliminating uncertainty.

    Because of JavaScript’s weak type system, optional values represent data that will likely cause problems somewhere in your software system. Basically the language lets us get away with answering I don’t know without any compile-time feedback. Spinning up the browser and pushing play is sometimes your only recourse.

    FUNCTION A: Alright here’s some data, can I have that back nice and neat please before I send it to the user.
    FUNCTION B: Yeah sure…wait…ermm…I don’t think so actually…yeah I don’t know.
    FUNCTION A: Ok great I’ll just show nothing to the user forever or maybe crash the system.
    FUNCTION B:

    Elliot is also quick to mention the case of uninitialized data. He lists the common progenitors of null:

    1. User input
    2. Database/network records
    3. Uninitialized state
    4. Functions which could return nothing

    I’ve got an example from the office, bear with me.

    In our application there is a form field. This field represents a maximum limit on the number of holds for your event ticket. There are two types of input this field can receive from a user: 1) an integer or 2) nothing (ie, be left empty). The latter signifies that the user desires to unrestrict the number of holds.

    Nullable fields! These are quite commonplace and a user would never think twice about the dark alchemy we’re performing behind the scenes. What the user doesn’t know is that this field maps to a database column that expects an integer (INT). Thus, when the user leaves the field empty and then submits the form, we need to make sure that a “no INT” is mutated into a representation of zero to slot in the POST body: {limit: 0}. What a twisting mind eff: the zero means unlimited. Leaving the field blank is not a lack, but bountiful! 1 x 0 = FUN. Nothing is not nothing. Therefore, a sensible default.

    
    function serializeFormData(formData) {
      const limit = formData.limit || 0; // the empty input string is nullable!
      const nextField = //...
      const body = {
        limit,
        nextField,
        // ...
      }
    
      return body;
    }
    

    Of course, a reverse alchemy must occur from the other direction when our React code initializes the form to begin with. We first hand our form builder an empty object – key/value pairs with field names as keys and undefineds as values. The builder then converts these undefineds to appropriate defaults.

    
    function getInitialFormData(initialData) {
      const limit = initialData.limit || '';
      const nextField = //...
      const initialFormData = {
        limit,
        nextField,
        // ...
      }
    
      return initialFormData;
    }
    
    

    Looks familiar.

    Thusly, the JavaScripter putzes around with a notion of nothingness by phase-shifting duck typed values.

    null <-> '' <-> 0

    (There’s probably cooler mathematical notation for this.)

    Unforch we have to use null and friends for lack of a better optional option. Like, for lack of formal invariants in-language. It’s impossible to truly prevent these meaningless nothings from entering our JavaScript programs. (Meaningless like may never receive meaning, ambiguous, undecided. Totally void 0: what a good euphemism from the grammar.) Like, you can’t serialize nothing for a value in an API response formatted as JSON.

    
    >>> json.dumps({name: }
      File "", line 1
        json.dumps({name: }
                         ^
    SyntaxError: invalid syntax
    

    Or try and stringify an Object back up again with the same:

    
    >> JSON.stringify({name: })
    SyntaxError: expected expression, got '}'
    

    Null and undefined are optional in JS but they are not illegal – unlike in Haskell, which wraps up the ambiguity in a fat Nothing.

    Elliot does an interesting rhetorical jiu jitsu by giving us new options for optional values. In lieu of eviscerating null from JS, we can work to push null to the edge of our programs with a handful of innovative approaches. In a sense we can ignore nullables and declutter areas of code which can just focus on data pipelining and other UI biz logic. Techniques include: constructing state machines – highly determined object interfaces – that error without values set to a wanted data type; ie something. We can also take advantage of that new new: Optional Chaining. And then there’s borrowing from FP. The last I love.
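
    Optional Chaining (with its cousin, nullish coalescing) collapses a ladder of null checks into one expression; a quick sketch against a made-up response shape:

    // response/data/user are hypothetical – any missing link short-circuits to undefined
    const name = response?.data?.user?.name ?? 'anonymous';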

    I’ve already been thinking about Maybes a lot recently. My last post was about using “maybe” in function names to negotiate the unrealistic binary of if/else with better signaling to other developers; like, a function may or may not complete its designated purpose if an async call fails or an argument is unacceptable. The real world is far too fuzzy. In contrast to the imperatives of JS, FP languages substitute nullable data with algebraic structures that encapsulate possibilities of existence or nothingness. For example, the actual Maybe data type represents either a Just (truth, success, completion) or Nothing (false, error, incompatibility). Data that’s wrapped in a Maybe and operated on won’t leak a nullable into our program, like the commonly observed undefined. Obviously implementations vary across libraries. Here’s a simple example from the Pratica library which demonstrates the way that using Maybe can simplify code:

    
    import { Maybe } from 'pratica';
    
    const data = await fetchAllPeople(...);
    
    Maybe(data)
      .map(people => people.filter(person => person.cool))
      .map(people => people.map(getNames))
      .map(names => names.map(name => name.toUpperCase()))
      .cata({
        Just: transformedData => render(transformedData),
        Nothing: () => console.log('Womp, no data returned from API.')
      })
    

    (Btw, “cata” stands for catamorphism and means to decompose the Maybe container into simple values. Honestly, I’m not good enough in the category theory yet to confidently distill it for you completely – pun intended – but that’s the gist.)
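
    To demystify a little, a toy Maybe with a cata can be tiny. This is a sketch, not Pratica’s actual internals:

    
    // Just carries a value through map; Nothing short-circuits it.
    // cata "folds" the container back down into plain values.
    const Just = (value) => ({
      map: (fn) => Just(fn(value)),
      cata: (handlers) => handlers.Just(value),
    });
    const Nothing = {
      map: () => Nothing,
      cata: (handlers) => handlers.Nothing(),
    };
    const Maybe = (value) => (value == null ? Nothing : Just(value));
    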

    A more basic JS solution might look like:

    
    const data = await fetchAllPeople(...);
    
    if (data) {
        const coolPeople = data.filter(person => person.cool);
    
        if (coolPeople.length) {
            const names = coolPeople.map(getNames);
            render(names.map(name => name.toUpperCase()));
        } else {
            console.log('Womp, no cool people.')
        }
    } else {
        console.log('Womp, no data returned from API.')
    }
    

    With the combination of FP-style data pipelining – allowed by Maybe’s monadic interface, I think? – and control flow encapsulated in the data type itself, we get a semantically rich and easy-to-read solution without nullables or exhausting boilerplate; ie, param existence checks.

    Where Elliot really surprised me was drawing a line between FP’s similar-to-Maybe data type Either and JS’s Promise. Tucking null away with Promises is super neat. Let’s see how that plays out in a sec.

    While Maybes represent one value or none, Just or Nothing, Either implementations are slightly different in that they represent one value or the other, but not both. If you’re familiar with bitwise XOR, it’s the same idea, except that in place of a Nothing or a noop, Eithers provide a secondary branch for the error case. Let’s see it in action.

    Take Elliot’s example of a small abstraction that hides null checking away in a kind of promisified ternary (which I’ve slightly modified):

    
    const exists = (x) => x !== null;
    const ifExists = (value) =>
      exists(value)
        ? Promise.resolve(value)
        : Promise.reject(`Invalid prop: ${value}`);
    
    ifExists(prop.name).then(renderName).catch(log);
    

    Here basic null checking and primitive if/else binaries are replaced with a more expressive, semantically rich statement of the logical disjunction: proceed this way on success, or that way on failure.

    Now, logging an error doesn’t get us very far from param checking and early returns. A slightly more interesting example might be something like:

    
    const inputExists = x => x !== '';
    const ifInputExists = value => inputExists(value) ?
      Promise.resolve(value) :
      Promise.reject(`Input is blank`);
    
    onInput((prevValue, nextValue) =>
        ifInputExists(nextValue)
            .then(validate)
            .catch(trackClearInput(prevValue)) // trackClearInput is curried: calling it returns the rejection handler
    );
    

    It’s hard to see the real power of this for a simple resolve/reject example. It just feels like a fancy if/else, right? But if we extrapolate from this base interesting things start to happen. Here’s a slightly modified version of an example from Practica’s docs with an imaginary Either that uses Promises under the hood and implements a chain behavior:

    
    const isPerson = p => p.name && p.age
      ? Promise.resolve(p)
      : Promise.reject('Not a person')
    
    const isOlderThan2 = p => p.age > 2
      ? Promise.resolve(p)
      : Promise.reject('Not older than 2')
    
    const isJason = p => p.name === 'jason'
      ? Promise.resolve(p)
      : Promise.reject('Not jason')
    
    const person = await fetchPerson(...);
    
    // If any check rejects, the catch handler is called. If isPerson
    // rejects, then isOlderThan2 and isJason won't even execute, and
    // the err msg would be 'Not a person'.
    Either(person)
      .chain(isPerson)
      .chain(isOlderThan2)
      .chain(isJason)
      .then(p => console.log('this person satisfies all the checks'))
      .catch(msg => console.log(msg));
    
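    For the curious, one way such an Either-over-Promises might be built – a hypothetical sketch, not Pratica’s real API – is to wrap a resolved Promise and let chain delegate to then:

    
    // chain delegates to Promise.prototype.then, so a rejection
    // anywhere skips the remaining links and falls through to catch.
    function Either(value) {
      const wrap = (promise) => ({
        chain: (fn) => wrap(promise.then(fn)),
        then: (fn) => wrap(promise.then(fn)),
        catch: (fn) => wrap(promise.catch(fn)),
      });
      return wrap(Promise.resolve(value));
    }
    
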

    Suffice to say I’m quite tickled by the re-purposing of Promises as Eithers. You can start to imagine how one might construct custom-fit control flow chains and layer cakes using thenable types that play nicely with other function composition and pipelining. I’m not always in love with (what feels like) the sacrificed readability of chaining over stacked lines of assigned returns or async/await. But seeing it in action using Pratica, I’m starting to believe more and more in its viability, even in a codebase touched by less experienced or new developers.

    Gripes with FP readability aside, it’s eye-opening to look at available JS language features and see them in a different light. Also, aside from the clever use of Promises, just getting the null check into an abstraction like exists(...) already has us using an FP mindset to build strong declarative (function-first) foundations.

    Wednesday December 11, 2019
  • habitations

    Maybe maybes

    Migrating my content from Medium was due, but this past week became a particularly opportune time to repost some old content on the subject of naming. Because I’m in one of those feels again, y’all. And it’s about one of the two hard things enshrined by Phil Karlton. Really we just float in a crisis of language trying to speak correct enough to make some money. Between thought and action. Between computer and person. Between persons.

    How do I communicate to the next developer that this function might do one thing, or it might do another?

    The other day I’m refactoring some code and picking apart a function to understand how it works. Let’s call this function saveThing. It’s about 60 lines. Kinda medium-sized but a lot of lines are neat procedural steps of computation. The part that’s quizzical to me is a condition that will exit the function if the user is missing a permission. So…sometimes this function won’t save a thing.
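
    The shape of it, roughly – a sketch from memory with invented names, not the real code:

    
    async function saveThing(user, thing) {
      // ...neat procedural steps of computation...
      if (!userHasPermission(user, 'save:thing')) { // hypothetical helper
        return; // sometimes this function won't save a thing!
      }
      // ...more steps, and eventually:
      await persist(thing); // hypothetical helper
    }
    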

    This is FUNKY. I wanted to rename the function to maybeSaveThing because the steadfast verbiage seemed incorrect. Like, literally it was signaling something wrong to me about its gist. I think I may have been particularly sensitive because I had already spent two days refactoring a bunch of code in this area and was becoming a bit exasperated with various kinds of misdirection. I really wanted the code to start being straight with me. But it’s when we’re weakest that we need the code to be brutally honest. Fixing critical bugs. Nearing deadlines. When stuff’s happening at home. In the world. Ironically, these are the modes that reveal the code’s honest readability.

    But this could be a matter of style. I’m not sure. I’m mulling it over. I have colleagues who think maybeSaveThing isn’t really necessary. The internal conditional is obvious enough once you peer inside. I think I’m going to fight with them more about it.

    I felt validated when a homie from a JS community I hang with on Slack gave me an example of production code from his place of work:

    componentDidUpdate() {
        this.maybeRedirect();
        this.updateTitle();
        this.maybeDoActivation();
        this.maybeInjectSampleTimelineItems();
    },
    

    I LOVE this. Don’t you think it reads nicely? Like, you can really understand the sequence. The maybes communicate a true story of many possibilities. I’m imagining refactoring this component transition block. These function names are helping me decide what to care about now or later; to maybe deal with them or not.

    Tuesday November 19, 2019
  • habitations

    “do” helper for emphasis in variable naming

    However you mentally wrap the process of realizing virtual things (writing, accretion, rose gardening), there’s no question literary-ness does imprint at some point on IDE surfaces. In my experience working in the higher stories of the tower (cloudgineering, we so high): up there with web pages, casually cast instructions in Ruby and JavaScript, cascades of hierarchical CSS notation, and declarative document formats (XML, HTML); up here you’re definitely notating control flows and arguments that require semblances of the English lang. Arguments. Computer languages.

    The other day I surprised myself by forming an excited opinion during code review about what a variable should be called. I guess the time had done arrived; I had spent just enough time in Ember and client state management to become OCD about the signifiers being passed around. And so I found myself feeling incomplete when I read over a line of code announcing a state change event: facetChanged. I can’t totally explain it but this wording felt insufficient. I instinctively wanted more. Needed surge troops. Something like…facetDidChange.

    The difference is slight, but important.

    (It’s hard, y’all. One of two things.)

    I researched the grammar to figure out why the variable was letting me down.

    Turns out the “do” helping verb isn’t a special thing. But when added to another verb in the preterite, it increases emphatic assertion, heaping deeper credibility on the thing that did happen. You can really feel it on your brain tongue, right? The additional slam of did in the phrase, like doors shutting – the hardness more certain.

    Or maybe the reassurance is there because of the influence of the black/southern dialect on modern American english; the Shakespearian-like emphasis where past finality is hammered home when something done happened, or 2x emphasized when someone done did.

    The code doth protest too much??

    Eh, I didn’t push for a rewrite in the code review. Naming is a bar brawl and sometimes you fight and sometimes you don’t. The author is balancing empathy for their colleagues with their own mental frame of the application at the moment they create the code; and then there are many an english word and conventional abbreviation, degrees of formality and tone. There’s a reason Steve McConnell dedicates 30 pages to The Power of Variable Names. And that section doesn’t even explicitly tackle naming receivers, subscribers, etc…

    Just tryna make sense of what these users be doing.

    Monday November 11, 2019