• habitations

    But night software is not like that. It’s not written for the day job. It’s not written to see the light of day at all. It’s not written to be looked at and scrutinized by anybody. It’s intimate and personal, it’s messy and buggy. To take a look is to transgress.

    Boundary cross. Sex.

    From a small heap of software that helps make software. By way of Devine Lu Linvega

    Monday November 20, 2023
  • habitations

    DEI Grieving and AI Skepticism

    IYKYK: DEI in tech is all but dead. If you’re an outsider, it’s important to understand that employee-led efforts to make workplaces less racist – which began in earnest at the onset of the Obama years – have been steadily blotted out since Trump. The Tech aristocracy has decided to abandon DEI budgets along with other workplace perks. As @betsythemuffin notes (in the thread I’ll be referencing below):

    DEI is seen by a sizable chunk of the funder class as an allowable luxury to keep the peons happy.

    It has been tragically hard to swallow this defeat. @betsythemuffin again:

    Regret that we didn’t do more to build material power when we had the chance.

    And yet, as the movement has regrouped and recentered to reposition the struggle, some have begun to take note of a problematic adaptation in the techy left’s rhetoric. @danilo started a thread on Mastodon that points out a newborn form of absolutist skepticism toward emerging technologies – notably AI (of course). Regardless of why the left slips into what @glyph calls a problem of “semasiology” around the term “AI”, the point is, the hurt seems to be cutting so deep that the movement has retreated into an unproductive preservationism.

    The abject and understandably heartbreaking defeat of DEI-type progress in tech has moved most leftist critique of the space into a technological conservatism built on denialist, magical thinking.

    You can’t successfully advocate for a dead end when the other side is investing in roads that lead to helpful places.

    You can’t win against technologies people find useful by pretending they aren’t useful.

    Check out the rest of the conversation. There are smart people in here thinking through this “AI” moment. The underlying question seems to be how we can continue the fight without completely disavowing what might be practical revolutions borne out by this albeit early phase of “AI” that is “finally bearing serious fruit.”

    If you’re wondering on what side of the line I fall: my major beef with the LLMs is their wanton, extractive consumption of human labor without citation or, god forbid, recompense. Of course, a time-honored tradition of slave traders and capitalists. Here is Marc Andreessen with a recent plea written to the US copyright office, hopeful we can all just ignore this blatant theft for the common good of the funder class. This is class warfare:

    Imposing the cost of actual or potential copyright liability on the creators of AI models will either kill or significantly hamper their development.

    I hold that citation is feminist since it combats the authoritative mansplain that obscures collaboration, hardens selfhood, and hoards ideas. Ergo I consider the de-linkified, referent vacuum of effluvium produced by ChatGPT et al. as patriarchal in its presentation of “knowledge” and preoccupation with mastery. I’m scare-quoting because philosophically I’m confused about what this content even is, it’s so unhinged. Although I recognize that the interplay of this text with the reader and their investigative context does produce meaning.

    UPDATE (minutes after writing the above): heh, Danilo actually expounds generously on the left’s false critique of AI in a long-form piece on his blog.

    Monday November 20, 2023
  • habitations

    Lament for Tafteria: sailing the Apollo's docs

    The Hospital Ship, Martin Bax

    Documentation, naming things. Hard. Let’s see this hardness compound at the intersection of these two phenomena in the real world.

    I’m casually scanning the Apollo docs, reading up on subscriptions because my wife is writing GraphQL client code at work. (I had vague memories of how we worked with distributed event streaming at my last job: NestJS + Kafka. We had consolidated APIs into a GraphQL layer, but worked with Kafka event streams through Nest with the kafka-js lib as a bridge. Point is, I wanted a light refresher on how this stuff works.)

    It’s a frustrating moment to jump between these two pieces of the documentation. (This is supposed to be a mature framework. Sobbing. Why can’t we have nice things?) From 2. Initialize a GraphQLWsLink to Enabling subscriptions.

    The former section, which walks us through client setup, provides a callout with a detour into the server docs for grabbing the connection url we need for configuring and instantiating our web socket linkage.

    Replace the value of the url option with your GraphQL server’s subscription-specific WebSocket endpoint. If you’re using Apollo Server, see Setting a subscription endpoint.

    If we are using Apollo’s server, then we must pass a url property when instantiating the GraphQLWsLink object.

    const wsLink = new GraphQLWsLink(createClient({
      url: 'ws://localhost:4000/subscriptions',
    }));
    

    HOWEVER, once we hyperlink over to the server docs, one simply cannot find – throughout the entire web document – a single mention of “endpoint.” We’re dropped into a section called “Enabling subscriptions” – instructions for setting up the server-side bits.

    Let down. Expectations missed. Problematic asymmetry, lack of consistency!

    Repletely kerfuffled, my synapses delivered a reminiscence of Tom Benner’s big-bird-bound Naming Things and his chapter on Consistency. (This so happens to be one of my favorite dogmas of Better Naming™️.) Well, the Apollo docs demonstrate a similar symptom of bad naming, just like the kinds of bad examples we oft find in code. Join me in considering disparate, hyperlinked documents as analogs to components of a software program that exchange messages. Readers of these texts suffer the bad naming with a familiar symptom: frustration. Capitalists lose the most to anemic productivity.

    If the hyperlink is a document-as-component API/interface, we readers should be allowed to move charismatically back and forth without being flung out of our flow state like starships intercepted at warp. Perhaps the doc could send me to a subsection of “Enabling subscriptions” titled/anchored “Server endpoint” (or the like). My instinct is that there’s a deeper scoop here.

    Of course, there’s an imperfect system to blame. When Trevor Scheer removed all mention of “endpoint” in the docs update preparing users for Apollo Server version 3 back in 2021 – Apollo Server would no longer include inherent support for websocket protocols – how closely did he collaborate with Stephen Barlow, who initially committed the instructions referencing endpoints a year or so before? Do the people matter? (They do.) And under whatever constraints they perform these literary efforts. We can wonder, imagine, daydream about the socio-technical system that produces docs for clients and servers, and how the basic capabilities of the web link them together. The bounded contexts. The challenges of producing universal languages, ubiquitous terms, across what are likely two disparate teams.

    There are some leaky abstractions, but it’s not a total system failure.

    I’m reading Bax.

    Dispatches, bulletins, papers, pinned up, spewed about The Hospital Ship.

    Page 15: A chronic shortage of pins to fix up the reports, so that in consequence they blew away and anyone in the stern of the ship could reach out a hand and collect bulletins from the air as they drifted by in an endless paper-chase over the stern of the boat and on into the sea.

    Page 98: Did the programmers hope to attract some reply? The Hopeful did not know what reply to make so they transmitted their call signal only, but there was no response.

    Euan wants to blame Tafteria, too bad.

    Friday November 10, 2023
  • habitations

    Closure was small, TypeScript is big (thinking w/ Dan Vanderkam)

    The Midnight, Endless Summer

    The dangers of premature optimization are well understood among software engineers who’ve been frustrated when the secondary effects of the effort end up compounding maintenance cost. (Does this principle play out in other milieux?) What’s perhaps more interesting to ponder is when “advanced optimizations” are decidedly beneficial but cannot be implemented because of larger system and social dynamics that require greater interop. Here I’m thinking with Dan Vanderkam’s post from back in September, The Saga of the Closure Compiler, and Why TypeScript Won.

    Dan skips gingerly back through recent web/computing history to discuss the (obvious in 20/20 hindsight) demise of Google’s Closure compiler. The hardware of the early aughts demanded Closure’s designers focus on minification. In contrast, if the typed project(s) (TypeScript) of today are going to succeed beyond niche usage, they must play well with an expansive ecosystem of other people’s code (le sigh!) which might not be typed. Despite the heroic campaigns of developers to slap types on their everythings (DefinitelyTyped), the JavaScript universe offers no guarantees to makers of tools. TypeScript’s declared horizon of

    Be a cross-platform development tool

    must be joined by

    Emit clean, idiomatic, recognizable JavaScript code

    and further qualified by the non-goal of

    Aggressively optimize the runtime performance of programs. Instead, emit idiomatic JavaScript code that plays well with the performance characteristics of runtime platforms.

    I wonder how many TypeScript developers have looked at the TypeScript Design Goals? Not to call out/in, just curious.

    I wonder, the Midnight’s 2016 Endless Summer. Saxophone crying out from the other room.

    Tuesday October 31, 2023
  • habitations

    Language ergonomic studies: Possession: Ruby, JavaScript, D/s, Mycology

    There exist happy little clouds of coincidence when studying new languages. And not just across the computer ones, mind you!

    Ruby and Indonesian share a frugal brevity and disinterest in flairs of punctuation when managing possession.

    Itu bukuku.
    

    vs

    That's my book.
    

    And…

    me = Person.find(my_user_id)
    my_books = me.books
    

    vs

    const me = await prisma.user.findUnique({
      where: {
        id: 99,
      },
      include: {
        books: true,
      },
    });
    const myBooks = me.books;
    

    Well, Rails AR query syntax vs Prisma (NodeJS). (Also, did I do a useful interpretation of linguistic possession in a soft coding expression??)

    This isn’t YAOFJS (Yet Another Opprobrium for JS). I don’t have a quarrel with English, either. Although I find it often fairly dull when it’s not rescued by the reticulations of regional and immigrant (forced or free) anastomosis of the tongue and mouth (and whatever other maneuvers contribute to nudging the Queen’s own verbiage (I’ve forgotten if there is anything before a body without organs, how is it done???)) that take root in everyday speech.

    Only a mere 30 pages (of 250) into Make the Golf Course a Public Sex Forest and you run headlong into Raechel Anne Jolie’s musings on D/s, mushrooms, and mouths. There’s probably something here to dwell on with respect to power, colonialism, oral sex, entanglement, Dulcinea Pitagora’s “subspace” in BDSM play (ohhh, wonderful that this term wasn’t entirely pre-figured by Roddenberry’s universe expanse), etc…

    Cue Butler’s Xenogenesis, ringing in my ears.

    Monday October 30, 2023
  • habitations

    Language ergonomic studies: Ruby vs JS one-liners

    Coming from JavaScript, I was not expecting Ruby’s Enumerable module to have methods for selecting min and max values from a collection. Especially with an added parameter that lets you “take” the n smallest or largest of them.

    For comparison, what if we needed the lowest x values from an array of numbers?

    In JavaScript we’d sort numerically (requiring the conspicuous umbrage of declaring, for the umpteenth time, a comparison function) and then slice out and sum the result.

    function twoLowest(numbers) {
      return numbers
        .sort((a, b) => a - b)
        .slice(0, 2)
        .reduce((a, b) => a + b, 0);
    }
    

    Anyway, JavaScript: written in 10 days.

    Matz giving us many ways to do things.

    I want to make Ruby users free. I want to give them the freedom to choose. People are different.

    From:

    🔗 The Philosophy of Ruby: A Conversation with Yukihiro Matsumoto, Part I

    def two_lowest(numbers)
      numbers.min(2).sum
    end
    

    Enumerable methods that can receive an operator as a symbol. Glacial autumn breeze whipped up from the 101. I don’t have a brevity fetish, like I know many of you do. Or the tendency to place the one-liner upon the highest of pedestals. It’s the obvious natural-language-style readability of the Ruby here that is simply remarkable.

    Achieving this is possible in JavaScript with more fanfare. But, alas, behold the maintenance cost of dressing up our array in Ruby’s fast-casual couture.

    class NumberArray {
      constructor(numbers) {
        this.value = numbers;
      }
    
      min(n) {
        function compareNumbers(a, b) {
          return a - b;
        }
        this.value = this.value.sort(compareNumbers).slice(0, n);
        return this;
      }
    
      sum() {
        function add(a, b) {
          return a + b;
        }
        return this.value.reduce(add, 0);
      }
    }
    
    function twoLowest(numbers) {
      return new NumberArray(numbers).min(2).sum();
    }
    
    
    Wednesday October 25, 2023
  • habitations

    Rspec vs js, let vs let

    I love comparison pieces like Steve Hicks' article What JavaScript Tests Could Learn From RSpec that juxtapose the syntaxes, rules, shibboleths of two programming languages. Comprehension comes more quickly.

    Interesting. In describing the perceived advantages of rspec’s let method to declutter code, I noticed Hicks never mentions this little maneuver he performs of encapsulating a result function – result and getResult for rubes and javs respectively. It would seem this strategy is a given for him. But in my experience it’s one I’ve shied away from in JavaScript tests – especially any of non-trivial sophistication.
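
    For illustration, the maneuver looks something like this – my own minimal sketch (the calculator module and values here are hypothetical, not Hicks’ verbatim code):

    const calculator = { multiply: (a, b) => a * b }; // hypothetical subject under test

    describe("multiply", () => {
      let first;
      let second;

      // The encapsulating maneuver: one function, invoked by every example.
      const getResult = () => calculator.multiply(first, second);

      beforeEach(() => {
        first = -1;
      });

      describe("when the second value is negative", () => {
        it("returns a positive number", () => {
          second = -3;
          expect(getResult()).toEqual(3);
        });
      });
    });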

    I prefer the subject invocation repeated in each it block. I find this repetition comforting. Like:

    
    describe("when the first value is negative", () => {
      describe("when the second value is negative", () => {
        it("returns a positive number", () => {
          const second = -3;
          const result = calculator.multiply(first, second)    
          expect(result).toEqual(3)
        })
      })
    
      describe("when the second value is positive", () => {
        it("returns a negative number", () => {
          const second = 3;
          const result = calculator.multiply(first, second)
          expect(result).toEqual(-3)
        })
      })
    })
    

    Also, unlike what I’m doing above, Hicks declares the value of second in beforeEach blocks to hint at how more sophisticated JS tests would be written, with heavier logic to execute before one or more test runs within the same scenario/context. In practice I find you often do both in JS tests – do preparation in before* blocks and declare single-use constants within it blocks (vs re-assigning/re-using).

    Generally I keep re-assigning/re-using variables to a minimum if they can be localized – it is confusing to follow in JS. Beware of cross-contamination. Pollination. Sex.

    But why: for some reason the concise syntax of let seems easier to track. Lack of curly braces?

    This doesn’t really bother me though:

    let sharedVar;
    
    before(() => {
      sharedVar = ...
    })
    
    describe(() => {
      it(() => {
        sharedVar = ...
      })
    })
    

    Is rspec let better than JS let? Of course of course, it depends. le sigh

    Even rspec maintainers caution us, as Hicks notes. Convenience always carries a caveat in programming. This is a substantial pillar of the “readability” discussion: durability of convenience.

    Note: let can enhance readability when used sparingly (1, 2, or maybe 3 declarations) in any given example group, but that can quickly degrade with overuse. YMMV.

    Cool find: givens. Shoutout to the translators, the transcribers, the interpreters.

    Wednesday October 25, 2023
  • habitations

    With Anne-Marie Willis, professor of design theory at the German University in Cairo.

    Another world is possible. But what of worlding? How to world?

    When a Farley’s barista is a strong current; wow the level of shine escaping her mouth. And I loved it. The slightest tickle of maple syrup was a great recommendation!

    Per usual I sent myself something to read and didn’t cite the source, so I’ve been having that weird kind of drifty, but tethered, Pong-ball experience. Was it mentioned by someone in a Paper in Systems discussion? Maybe some content from the fediverse’s systems thinkers, briefly held in place by my thumb.

    As I like to say (with often different verses), I do software development because it’s a phenomenological wonder of people, text, code, time travel, non-determinism – and always pluckily avoids fitting neatly into capitalist fetish.

    Like, software projects (already “bad” (incomplete, inaccurate) lexicon) of any substantial scope are only delivered on time accidentally.

    Frederica Frabetti supports this axiomatic chiste of software, noting in her book Software Theory:

    “The central problem of software development is thus the impossibility of following a sure path in which a system is completely and exhaustively specified before it can be realized.”

    and

    “Stability is something that happens to the system, rather than being scheduled and worked towards.”

    Look at how helpless software was to resist building complex systems that are designed to become eventually consistent.

    For me, software is philosophical-ing. An ontological milieu. Therefore Willis’ essay is soooo good for those of us who are philosophical-ing but not academically trained – and who can drift into this kind of theoretical reasoning easily, especially in an essay like Ontological Designing, which onboards us into hardcore theory with a ton of grace. Willis doesn’t need to spend her initial breaths defining “ontological” and Heidegger’s “being”, and (re)introducing us to the failure of the Western metaphysical tradition. But she is generous; and that’s likely the point. A broader appeal.

    Which, in turn, helps the mind Pong around during an essay about pervasive push and pull.

    I’m reading this essay about how we effect social change but also thinking through the systems reasoning and how it vibrates into software ontology. Well, there are references to IT infrastructure and other “equipment” of our present epoch – though they do not supersede other kinds of equipment. For Willis, our contemporary technologies can function both as examples of any designed material, and as juxtapositions with the immaterial (organizational structures, administrative systems, etc…), to demonstrate how they are equivalent as objects/outcomes of the ontological design circularity/looping, bi-directional reach – Heidegger grabbing Cartesian dualism by the shoulders and shaking it. (“Ontological designing refuses such one dimensional understandings of (human) being-in-the-world, which are worn-out fragments of enlightenment thinking and Christian morality sloppily stitched together.”) As Willis notes, Heidegger themself reaches for the simple household jug to work with in his pursuit of how things, thing.

    “The jug gathers and unites these.”

    These being: water, wine, sky, earth. Gathered/outpoured.

    But she talks about tech in a way that tickles, for sure. Especially its excess, as it frenetically infinite-loops rather than unlocking potential:

    Rather than inducing us into a world of multiple creative possibilities (as software advertisers would have it), [computers] design us as users into their horizons of possibility – which by the very nature of horizons (in Gadamer’s sense) always have a limit. In fact, the proliferation of options within even a basic operating system or software application becomes a tyranny of choice, a maze of seemingly endless possibilities, a dazzling instrumentation for its own sake, all means with no end in sight.

    Maybe free software never will be truly free.

    Tuesday July 4, 2023
  • habitations

    Naur, goodies, 1985 years after Jesus Christ

    There are so many goodies in Naur, 1985. Filter, some():

    • Declares there’s no right way to write software
    • (Further) rebukes the scientific method
    • Calls out lack of empirical study of software methods
    • The programmer as “manager” of computer activity

    Thank you ceejbot for further distillations.

    Thursday June 8, 2023
  • habitations

    Naur, the optimist, 1985 years after Jesus Christ

    The year is 1985. Certain kinds of optimism abound in programming circles. From Peter Naur’s Programming as Theory Building:

    "It may be noted that the need for revival of an entirely dead program probably will rarely arise, since it is hardly conceivable that the revival would be assigned to new programmers without at least some knowledge of the theory had by the original team."

    The infamous “shit mouse” bug that I pushed to production in 2018 – which subsequently became an iconic team joke with its own concomitant laptop sticker swag – was the direct result of software abandoned in the wake of absconding team members. Picking up dust-laden software seems like a common occurrence these days, no?

    In their reading-with of Naur, Ceejbot offers a valuable remediation technique for deleterious knowledge vacancies. It’s one that I personally strive for in everyday software practice: gross amounts of maintainer documentation.

    Don’t waste time documenting what can be seen through simple reading. Document why that function exists and what purpose it serves in the software. When might I call it? Does it have side effects? Is there anything important about the inputs and outputs that I might not be able to deduce by reading the source of the function? All of those things are clues about the thinking of the original author of the function that can help their successor figure out what that author’s theory of the program was.

    and

    …the program exists to solve a problem, some “affair of the world” that Naur refers to. What was that problem? Is there a concise statement of that problem anywhere? What approach did you take to solving that problem statement? What tradeoffs did you make and why? What values did you hold as you made those tradeoffs? Why did you organize the source code in that particular way? What belongs where?

    Thursday June 8, 2023
  • habitations

    Power Moves

    During her novel workshop on estate planning, Sarah Deluca of Move Money Shift Power poses the question:

    Is holding onto control after your death something you want?

    If part of our life’s work is to strengthen connection, relinquish power, share power, redistribute, does a Trust actually violate the moves we’ve been making at the speed of small “t” trust? Individualistic posthumous scheming is not the invention of communities, but of corrupted powerfuls. It would seem.

    A few days later I found myself in the next Paper in Systems discussion led by Dawn Ahukanna and Shauna Gordon-McKeon. On the table was Shauna’s essay Interpretive Labor: Bridging the Gap Between Map and Territory. Wherein you’ll find a rich investigation – and then interrogation – of the lopsided power distribution between those that labor to imagine, construct models, maps, and those that sit with the effects of the implementation – who interpret, navigate, bridge the chasm between theory and material outcomes.

    I was called in. Software engineers do wield an outsized power from behind the desk. Although the tower isn’t deathly bleached, because 1) we do some interpretive labor at the seam between business requirements and software building (system design, theory of, code/text writing), and 2) failures reverberate back to us pretty hard (midnight pages) – at least more acutely than through the bureaucratic layers. (Do the capitalists get to feel much of anything?) Nonetheless, we probably aren’t the ones screaming during the scream test. What do we do with this power?

    Jorge Luis Borges, Self Portrait
    Tuesday June 6, 2023
  • habitations

    Can you jam with the console cowboys in cyberspace?

    No. You can’t. Not yet.

    It’s quite possible that my work in computers today is a last-ditch effort to actualize a childhood fantasy of solving neighborhood crimes with my friends in Brooklyn with the help of a friendly ghost who communicates via word processor. You type, then we type, Ghostwriter.

    The way she caresses the monitor in this scene: longingly, tenderly, expectant. The arrogant gush of buzzwords. Unalloyed after school cool.

    Saturday April 29, 2023
  • habitations

    Redis Poem

    set things
    set them to expire
    set them to be exclusive
    set them to expire, be exclusive

    (inspired by a chat with mike b)

    Saturday January 8, 2022
  • habitations

    Well-known ways that JavaScript coerces objects to strings

    It’s a proper cliche of commercial computer programs to bind audit reporters alongside code at important relay nexuses. There is a panoply of reasons to extract information this way: producing audit trails for legal compliance, gathering product insights, collecting debug traces for quality monitoring, etc… The trick is wresting useful emissions from these reporters when they are working within a dynamically typed language that might unabashedly take liberties with ambiguous data types – like JavaScript. Typically you design your loggers to capture a Good-Enough Facsimile™️ of the current state of the world for later analysis[1]. This likely means grabbing the subject of the current operation as well as felicitous supporting actors like application and session context, and user info, in Object form. (Already a funky transmigration of Subject/Object.) Oftentimes the subject is the user – we love to know you, and know to love you. But, if we may clasp hands and walk/skip through a hallucination together into a fantasy e-commerce example, we can explore logging and value coercion of a familiar Order class. What would a blog about building websites be without a bit of retail market magic?

    Let’s start with the following modest invention, which terraforms[1] a theoretical backend piece of an online shop. And let’s imagine that from time to time, orders that come into this shop become stuck during the fulfillment process – for a variety of reasons, perhaps, because we are dealing with, say, medication items: insurance claims may be rejected, or the doctor or pharmacist discovers a conflict between the patient’s health attributes and the meds. In such a scenario, this code attempts a fully-automated retry of the stuck order and, in due course, at important nexuses, passes relevant contextual data to log handlers. From there other bots or real-life persons can take over.

    const oldestStuckOrder = await query(orders, { filter: 'oldest_stuck' });
    const logData = {
    	order: oldestStuckOrder, 
    	userInfo: { ... },
    	sessionInfo: { ... },
    };
    logger.info(`Start resolving stuck order: ${logData}`);
    const reason = determineStuckReason(oldestStuckOrder);
    
    if (reason) {
    	const result = resolveStuckOrder(oldestStuckOrder, reason);
    	const logDataWithResult = {
    		...logData,
    		reason,
    		result,
    	};
    	logger.info(`Finished resolving stuck order: ${logDataWithResult}`);
    } else {
    	const logDataWithReason = {
    		...logData,
    		reason,
    	}
    	logger.error(`Unable to resolve stuck order: ${logDataWithReason}`);
    }
    

    There are three points in this code where I’m sending a rich object into a logger service which only accepts a single string argument. In this fantasy, this particular team of software developers just wants the logger service API as simple and straightforward as can be: unary and stringy. I hope this code reads naturally, that it’s similar to something you’ve seen before. And…hopefully your gears are turning already and you are starting to see with me 𓁿; and you are beginning to feel awkward 🙇🏻‍♂️ about what representation for oldestStuckOrder or logDataWithReason this gussied-up console function will display. Won’t interpolating complex objects inside a template string force the engine to implicitly coerce the object into the obstinately churlish [object Object]?

    Scene opens, your PM marches up to your desk with a bemused frown:

    PM: What happened with Order #555-5555
    Me: The problem is that this order got into an illegal state!
    PM: Which order?
    Me: Oh, the object object order.
    PM: 😒

    JavaScript is funky-beautiful because the dynamically typed nature of the lang means you can smush values of mismatched types into the same context and your program won’t catastrophically fail when you run it. Loose assumptions loosely held, I guess. When we write JavaScript, we often take this for granted, and we exploit it for good. You probably know and understand, intuitively or consciously, the bunch of common “contexts” where the language affords this failsafe. Here’s a general breakdown:

    • expressions using arithmetic operators
    • comparison operators
    • test expressions (if statements, the second clause of a for loop header, the first clause of a ternary operator)
    • interpolated strings.

    That last one is a case in point; we can send an object into a string context – Finished resolving stuck order: ${logDataWithResult} – and get something workable out the other end:

    const logDataWithResult = {
      prop: 'prop',
      anotherProp: 'anotherProp',
    };
    console.log(`Finished resolving stuck order: ${logDataWithResult}`);
    // Finished resolving stuck order: [object Object]

    And there it is. Workable (quite generously). The famed proterozoic, bracketed notation of familiar churlish conceit and “bad parts” motifs. Obviously this is not the best guess we’d hope the engine to make when executing our logging function – we have lost all that rich order and user data! Our compliance trail is meaningless.

    But we shouldn’t despair quite yet. I’m happy to share that JavaScript exposes many an API for developers to control the return value of type conversions. If JavaScript is anything, it’s a fairly open language. Not necessarily open for expansion to the extent of, say, Clojure’s macros. But all things being mutable objects (with a few untouchable properties) and a handful of scalar values, you have a good deal of freedom. For coercing objects to string, the most famous method is probably toString(). In fact, JavaScript dogfoods its own toString() for object -> string conversion automatically in all those common contexts listed above. Whenever

    the object is to be represented as a text value or when an object is referred to in a manner in which a string is expected – MDN contributors

    Like between backticks.

    Now, if a fellow developer in our dreamy medication retail shop codebase has not already come along and monkey-patched the Order object’s toString() method, the default conversion algorithm defined for Object.prototype.toString() in the ECMAScript® 2021 spec, section 19.1.3.6, will kick in. Yep, we are going there. The algorithm is actually pretty easy to understand (though here’s to hoping your browser of choice plays along!). I invite you for a glance.

    Can you see where [object Object] comes from? If the unfamiliar dialect is a bit intimidating for this casual Sunday morning read, here’s what the above algorithm would look like if we implemented it in JavaScript[4]:

    import { SecretInternals } from 'secret-internals';
    
    const internalSlotsToTagMap = {
      ParameterMap: "Arguments",
      Call: "Function",
      ErrorData: "Error",
      BooleanData: "Boolean",
      NumberData: "Number",
      StringData: "String",
      DateValue: "Date",
      RegExpMatcher: "RegExp",
    };
    
    function toString(value) {
      if (value === undefined) return '[object Undefined]';
      if (value === null) return '[object Null]';
    
      let builtinTag;
      const innerValue = SecretInternals.Object.box(value);
    
      if (SecretInternals.isArray(innerValue)) {
        builtinTag = 'Array';
      } else {
        for (const [slot, tagName] of Object.entries(internalSlotsToTagMap)) {
          if (SecretInternals.Object.hasInternalSlot(innerValue, slot)) {
            builtinTag = tagName;
          }
        }
      }
    
      if (!builtinTag) {
        builtinTag = 'Object';
      }
    
      let tag = SecretInternals.Object.get(innerValue, '@@toStringTag');
    
      if (typeof tag !== 'string') {
        tag = builtinTag;
      }
    
      return `[object ${tag}]`;
    }
    

    For actual objects, not object-like things (Arrays), the algorithm falls through to step 14, where a temporary referent called builtinTag receives the value Object. This built-in tag is later used as the second part of the converted value.

    Despite the sarcastic jabs from the peanut gallery, what else would we expect the language to do? JavaScript’s unintentional emissions were designed for a platform that projects them through UIs for consumption by masses of retinas of human people – it’s Good Enough. The language keeps your program running with a type guess and leaves the contents of your value alone. It doesn’t, like, radically unpack and serialize your contents to JSON (what compute or privacy costs might lurk!); or try to set the built-in tag to the left-hand side of the last variable assignment statement: what a drastic move – the language doesn’t work this way; variable names are not synonymous with types! Variable assignment is void of any tautological binding! JavaScript just lets you take matters into your own hands.
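
    Taking matters into your own hands can be as blunt as serializing explicitly at the call site – a minimal sketch against the fantasy logger above (accepting the compute and privacy costs just flagged):

    const logData = {
      order: { id: '555-5555' },
      userInfo: { id: 123456 },
    };

    // We decide what crosses into string context, not the engine.
    logger.info(`Start resolving stuck order: ${JSON.stringify(logData)}`);
    // Start resolving stuck order: {"order":{"id":"555-5555"},"userInfo":{"id":123456}}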

    Until very recently I wasn’t aware of any techniques beyond an Object’s built-in toString() property to futz around with type conversions. But apparently there are a few modern well-known Symbols[5] that have entered the language to help. I’ve compiled an example with four different APIs I could track down. Any of these would play nicely for logging libraries, though the last only works in node, and there are nuances to each that you must consider for your use case.

    // 1. Override toString() (js, node)
    class Chicken {
      toString() {
        return '🐓';
      }
    }
    
    var chicken = new Chicken();
    console.log(`${chicken}`); // 🐓
    
    // 2. Symbol.toStringTag (js, node)
    class Chicken {
      get [Symbol.toStringTag]() {
        return '🐓';
      }
    }
    
    let chicken = new Chicken();
    console.log(`${chicken}`); // [object 🐓]
    
    // 3. Symbol.toPrimitive (js, node)
    class Chicken {
      [Symbol.toPrimitive](hint) {
        switch (hint) {
          case 'number':
          case 'string':
          case 'default':
            return '🐓';
          default:
            return null;
        }
      }
    }
    
    let chicken = new Chicken();
    console.log(`${chicken}`); // 🐓
    
    // 4. util.inspect.custom (node)
    const inspect = Symbol.for('nodejs.util.inspect.custom');
    const util = require('util');
    
    class Chicken {
      [inspect]() {
        return '🐓';
      }
    }
    
    const chicken = new Chicken();
    console.log(chicken); // 🐓
    console.log(`${chicken}`); // [object Object]
    util.inspect(chicken); // 🐓

    I haven’t formally surveyed this, but I can report anecdotally that of my current team of ~25 engineers who work on a universal JavaScript application, overriding toString() is the most commonly preferred strategy. My sneaking suspicion is that many JavaScript developers are not aware of the more contemporary Symbol methods/properties, even though these methods have been available in major browsers/node for ~4-5 years. Or maybe it’s simply a matter of many backend devs coming to backend JS from other languages and server environs. From what I understand, node has just started to finally emerge in the past few years as fit-enough for prod. JavaScript is vast territory, quickly expanding, in multiple runtimes – it takes years.

    As for nodejs.util.inspect.custom, I haven’t been around node land long enough to know if its usage is idiomatic.

    Still, preference for toString() may not simply be an issue of keeping up with the JS Joneses. As shown above, the outcomes of these different strategies are not identical. What’s more, to layer on the complexity, these options aren’t wholly independent. In fact, JavaScript combines these strategies together under the hood. Did you notice what was going on in step 15 of the conversion algorithm above? The spec requires that Object.prototype.toString look up the @@toStringTag symbol property on the object – these symbol members are in the DNA sequence now. When we take control back, understanding the spec is quite key: we can avoid mistakes like combining these two options, since overriding toString() always takes precedence. For example:

    class Chicken {
      get [Symbol.toStringTag]() {
        return 'Base 🐓';
      }
    
      toString() {
        return 'This is a 🐓';
      }
    }
    
    class JungleChicken extends Chicken {
      get [Symbol.toStringTag]() {
        return 'Jungle 🐓';
      }
    }
    
    const chicky = new Chicken();
    const jungleChicky = new JungleChicken();
    
    console.log(`${chicky}`); // This is a 🐓
    console.log(`${jungleChicky}`); // This is a 🐓

    However, say I were interested in simply tagging the string representation of my object to avoid exposing its value contents, while still presenting a semantically rich identifier to consumers. This would help us express a desire to maintain the default bracket output – [object …] – with the prepended “object” type, to maintain some consistency with how objects are stringified in our code. In that case, leveraging the well-known Symbol.toStringTag property would be the way to go. For example, the following logger from our e-commerce imaginary might obscure private user data like this:

    // .../jsonapi/resources/user.js
    class User {
      get [Symbol.toStringTag]() {
        return `User:${this.id}`;
      }
    }
    
    // Somewhere else…
    logger.error(`Unable to resolve stuck order: ${logDataWithReason}`);
    // Unable to resolve stuck order:
    // {
    //   order: {…},
    //   userInfo: [object User:123456],
    //   sessionInfo: {…},
    // };

    Your next option, empowering even more fine-grained control, is adding a Symbol.toPrimitive method to your object. Symbol.toPrimitive is a main line into the runtime’s coercion processing. After playing around a bit in a browser and a node repl, I’ve noticed that this Symbol will take precedence over a provided toString() override.

    class Chicken {
      toString() {
        return 'This is a 🐓';
      }
    
      get [Symbol.toStringTag]() {
        return '🐓';
      }
    
      [Symbol.toPrimitive](hint) {
        switch (hint) {
          case 'number':
          case 'string':
          case 'default':
            return 'This is a 🐓 primitive';
          default:
            return null;
        }
      }
    }
    
    const chicky = new Chicken();
    console.log(`${chicky}`); // This is a 🐓 primitive

    By using Symbol.toPrimitive, you’re basically instructing the runtime to LOOK HERE for all its object coercion preferences. What’s more, beyond Symbol.toStringTag’s mere label augmentation, you get a powerful indirection[2] to handle all primitive type coercion scenarios[1]. You’re also overriding internal language behavior which – I was surprised to learn – effectively resolves how to order the calls to Object.prototype.toString() and Object.prototype.valueOf(). Flip to section 7.1.1 of the spec to see how the ToPrimitive abstraction is designed to call a further nested OrdinaryToPrimitive abstraction for decision-making.

    Translated for comfort:

    import { SecretInternals } from 'secret-internals';
    
    function ordinaryToPrimitive(object, hint) {
      if (!SecretInternals.isObject(object)) {
        throw new TypeError();
      }
    
      if (hint !== 'string' && hint !== 'number') {
        throw new TypeError();
      }
    
      let methodNames = [];
    
      if (hint === 'string') {
        methodNames = ['toString', 'valueOf'];
      }
    
      if (hint === 'number') {
        methodNames = ['valueOf', 'toString'];
      }
    
      for (const methodName of methodNames) {
        if (SecretInternals.isCallable(object[methodName])) {
          const result = SecretInternals.call(object, methodName);
    
          if (!SecretInternals.isObject(result)) {
            return result;
          }
        }
      }
    
      throw new TypeError();
    }
    

    I think I like the idea of using these well-known symbols for custom object -> string representations, if for the collision protection alone.[3] What would it be like to reach for the powerhouse of Symbol.toPrimitive to hijack the runtime from eventually calling through to Object.prototype.toString()? Furtive, conspiratorial whispers to the interpreter 🤫. Even a partially implemented reducer will do, as I demonstrate in the chicken example above: the switch statement can gracefully sidestep the other type-hint cases and only target the string hint. But is grabbing for Symbol.toPrimitive overkill? toString() is tried and true: a pristine greenfield function block without arrogant “hints” and a naive switch statement without pattern matching 🙄 (are we there yet?). Could there be a non-trivial DX cost of confusing other developers if the other primitive case statements are fall-throughs?


    [1]: Whenever I think about how software tries to capture its own understanding of the world it creates, I’m brought back to systems thinkers like jessitron:
    
    "We don’t expect the world to have perfect consistency. Yet we wish it did, so we create facsimiles of certainty in our software. It’s impossible to model the entire world. Completeness and consistency are in conflict, sadly. Still, if we limit “complete” to a business domain, and to the boundaries of our company, this is possible. Theoretically."

    [2]: By “indirection” I really mean to invoke Zachary Tellman’s exegesis of semantic drifts in software industry lexicons from his book Elements of Clojure. His work is a really nice refinement:
    
    "Indirection provides separation between what and how. It exists wherever "how does this work?" is best answered, "it depends." This separation is useful when the underlying implementation is complicated or subject to change. It gives us the freedom to change incidental details in our software while maintaining its essential qualities. It also defines the layers of our software; indirection invites the reader to stop and explore no further. It tells us when we're allowed to be incurious."
    
    He goes on to discuss that conditionals are one of two primary devices for achieving successful indirections (the other being references). Conditionals are effective because they contain ordered, "closed" decision-making mechanisms that avoid conflicts, in contrast to tables with individuated keys:
    
    "Conditionals solve conflicts by making an explicit, fixed decision. Where conflicts are possible, we use conditionals because they are closed."

    [3]: That’s the primary purpose of these Symbols. For a deeper dive, see Support util.inspect.custom as a public symbol #20821 on GitHub.

    [4]: Try not to think too hard about the implications of a JS interpreter written in JS. But, ya know, Atwood’s law; you will, not surprisingly, find JS interpreters for JS out there, many built on Acorn, which is, itself, a parser written in JS. Man alive!

    [5]: I’m bemused and betwixed by this use of “well-known.” Does anyone know the origin of this qualifier?

    Sunday January 24, 2021
  • habitations

    Language ergonomic studies: Summing ranges and most frequent terms

    Pennies per day (by day count) in Clojure:

    (defn pennyPerDay [numDays]
      (reduce + (range 1 (+ numDays 1))))

    (pennyPerDay 30)

    Pennies per day (by day count) in JS:

    function pennyPerDay(numDays) {
      return Array(++numDays)
        .fill(0)
        .reduce((sum, _, i) => sum + i, 0);
    }

    pennyPerDay(30);

    Most frequent terms in JS:

    const termList = ['term1', 'term2', 'term3', 'term3'];
    
    function mostFrequentTerms(arr) {
      return arr.reduce((result, curr) => {
        result[curr] = ++result[curr] || 1;
        return result;
      }, {});
    }

    mostFrequentTerms(termList);

    Most frequent terms in a list with Clojure:

    (def term-list [:term1 :term2 :term3 :term3])
    
    (defn most-freq-terms [terms]
      (->> terms
           frequencies
           (sort-by val)
           reverse))
    
    (most-freq-terms term-list)
    
    Sunday November 8, 2020
  • habitations

    Language ergonomic studies: 52 card deck

    Javascript:

    const cards = () =>
      ['♥', '♠', '♣', '♦'].map((suite) =>
        ['2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K', 'A'].map(
          (card) => suite + card
        )
      );

    Clojure:

    (defn new-deck []
      (for [r [\♥ \♠ \♣ \♦]
            s [:2 :3 :4 :5 :6 :7 :8 :9 :10 :J :Q :K :A]]
        [r s]))

    💻

    Sunday October 18, 2020
  • habitations

    Zoom Doomed

    Not a day goes by when I don’t witness the over-signifying of “meeting.” “Meeting” invokes a collective sigh from laborers, since a better way seems a fairy tale. As a consequence, possibilities for elevated communication are damaged. But a meeting is so much more:

    Calling the coordinated exchange of messages or moderated access to shared material (like a blackboard) a conference or meeting seems to neglect a factor that is rather important in meetings. Often the explicit exchange of messages is relatively unimportant compared with the development of social relations in the group of participants that happens simply by the fact that they are close to each other as humans for a while.

    Perspectives and Metaphors for Human-Computer Interaction, Susanne Maaß and Horst Oberquelle

    Thus, we may be Zoom doomed. Perhaps. We need to hire facilitators and train our people in facilitation. It’s a technical skill that most technologists lack, profoundly.

    Wednesday September 30, 2020
  • habitations

    Not so byzantine algorithm studies: Using math to deliver your medication

    Like for many commercial software developers, math plays a sporadic role in my day-to-day work. That said, I certainly wouldn’t blithely demote knowing math below other techs like programming languages, web frameworks, and principles of software design/architecture – which already sets up a false dichotomy, anyway.

    Math presents a beautiful core proposition to software developers. Through its powers of abstracting and reducing real-world dynamics into expressible, repeatable, sequential logic – which can be lifted easily into our favorite coding grammars – math can be the companion dynamo we need for achieving levels of faithful verisimilitude to real-world complexities (Keep It Savvy Stupid).

    Italicizing knowing math is totally an intentional problematization to mess with the idea that knowing math is narrowly scoped to demonstrating technical prowess at calculation or proofs. I think software engineers can, and should, consider a relationship with math as knowing enough math; like, an ontological relationship with math’s formalism and general principles. Math: The Good Parts, so to speak. Otherwise the bravado of equating math and problem solving, or math as “hard” (dick) science of some kind, is doomed to engender an axiomatic blur that will gatekeep devs deemed unfit for lack of educational background or interest in fiddling with Greek letters. Keep these gates wide open, por favor!

    (I also think the focus on calculation in my early schooling, alienated from everyday material uses, made me quite Meursault for it – perhaps why I don’t feel the need to champion for being mathy. It took entering commercial software life much later in life to see how the dynamo helps us make stuff and therefore able to consider embracing math (and being embraced by math) again.)

    Our praxis: we can co-opt knowing math to ramify similarly to our desire for Senior Engineers™ to be T-shaped in their knowledge of the stack, so that they can press more heavily on architecture and design. For example, we can desire for React developers to:

    • Understand just enough graph theory to see component architectures as graphs;
    • Understand the implications of data pipelining and message passing in a graph (bonus points for being a student of Christopher Alexander’s semilattices)
    • Or understand just enough category theory to understand the heritage and benefits of composability of first-class functions – is JS even fun without a bit of FP? What else?

    This is all just to say that some business problems map really well to higher-order maths and devs should be prepared for it, at least through the design phase. Again, once we sit down to write code, we can easily offload the fastidious implementation details; those few lines of copy|paste from the math deep state.

    For example, take a common e-commerce retail problem like Distributed Order Management (DOM). Modern omnichannel selling combined with cloud computing open a broad field of possibilities to achieve low-cost order fulfillment. Affine cost structures – resulting from variable/variadic shipping dynamics – will increase complexity as an ecommerce company scales. We’re talking about quite a bit of real-world complexity to model and adapt to.

    What, then, when we’re tasked to create an algorithm to satisfice a version of this problem where you have these rough requirements:

    1. Orders must be completely fulfilled
    2. There will be variant shipping costs per supplier
    3. Orders can be fully splittable by a supplier

    This kind of thing will break your brain once you start to grasp for a rational boundary to contain the argument ramifications: supplier availability, supplier inventory availability, etc…

    I was presented with a challenge like this recently and it took me a couple hours just to understand what this problem domain even was; like, finding my way to DOM through off-by-one Google searches; a true StumbleUpon revival. Because I didn’t know the math well enough yet. Google is pretty fast though, and within minutes I practically broke out in a sweat after discovering multiple long-winded computer science papers written on the topic, filled with intimidating mathy notations. Then more OMGs and mon dieus as I careened sharply into Cantor’s theorem and Set Theory. Wait, am I doing math? Oh Meursault!

    More and more it seemed a fantasy that I’d be capable of solving Amazon within a reasonable amount of time, because this gestalt was increasing aggressively. Nonetheless, after some deep breaths and patience recouped, and having worked with enough devs without math experience, I began to acclimate to this world and realize any solve was going to be an approximation of some sort and probably wouldn’t require me to crack open a textbook; there was no silver bullet or excavation of secret proofs. Rather, this whole class of optimization maximizations applied to fulfillment problems is a rigorous academic field, but when it meets the metal it softens and warms for Good Enough™️ programming. Each potential brute-force linear assignment or dynamic programming algorithm was discarded. My inputs couldn’t be structured into a decision table or cost matrix amenable to path finding, traversal, or Cartesian products. Which meant I could rule out potentials like Kuhn’s Hungarian algorithm. In fact, the scope was something more akin to a set cover or networking problem – still a brave new world, but less and less unbounded the more I scoured the web. Ultimately, my task was gonna be something toward imagining all probabilities between order items and suppliers, and then reducing these matches against cost constraints. “All probabilities” was a strong clue.

    So, a bit surprisingly for someone not used to needing math everyday – and certainly not trying to fuck with this:

    [Figure: the “fully splittable” formulation. Eq. (5a) ensures every order is fulfilled, and (5b) limits service provided by a store.]

    I soon found myself drifting at a comfortable altitude through Probability and Combinatorics with the help of other cow path pavers – we are community-taught developers, after all! – crafting a studied, yet heuristic, approach from where I could thread my inputs through techs like combinations and permutations to make some educated guesses.

    The general step-rules of the algorithm gradually accreted into something resembling the following:

    1. Generate all possible combinations of order items
      • Order items are unique, therefore we are working with a Set. We can use the mathematical definition of a powerset and create a function which outputs a set of all subsets of any set S (see the sketch below, after this list):
      powerset([A, B, C]) === [[A], [B], [A, B], [C], [A, C], [B, C], [A, B, C]];
    2. Generate all possible combinations of order-item combinations that are no larger than the number of suppliers
      • Effectively, take the result of Step 1 as the input Set for another powerset that only returns combinations of order-item splits that can be fulfilled by available suppliers. For two suppliers:
      powersetBySize(powerset([A, B, C]), 2) ===
        [
          [["A"]],
          [["B"]],
          [["A"], ["B"]],
          [["A", "B"]],
          [["A"], ["A", "B"]],
          [["B"], ["A", "B"]],
          [["C"]],
          [["A"], ["C"]],
          [["B"], ["C"]],
          [["A", "B"], ["C"]],
          [["A", "C"]],
          [["A"], ["A", "C"]],
          [["B"], ["A", "C"]],
          [
            ["A", "B"],
            ["A", "C"],
          ],
          [["C"], ["A", "C"]],
          [["B", "C"]],
          [["A"], ["B", "C"]],
          [["B"], ["B", "C"]],
          [
            ["A", "B"],
            ["B", "C"],
          ],
          [["C"], ["B", "C"]],
          [
            ["A", "C"],
            ["B", "C"],
          ],
          [["A", "B", "C"]],
          [["A"], ["A", "B", "C"]],
          [["B"], ["A", "B", "C"]],
          [
            ["A", "B"],
            ["A", "B", "C"],
          ],
          [["C"], ["A", "B", "C"]],
          [
            ["A", "C"],
            ["A", "B", "C"],
          ],
          [
            ["B", "C"],
            ["A", "B", "C"],
          ],
        ];
      
    3. Generate all permutations of suppliers
    4. Generate all viable routes by matching between the sized combinations of order items (result of Step 2) to supplier permutations (result of Step 3)
      • Basically a fancy zipping computation
    5. Filter viable routes against both superficial and business constraints, like duplicated suppliers and supplier availability and/or inventory
    6. Compute the lowest cost route!

    Now there’s a well-suited mathy modeling!
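
    Here’s a minimal sketch of the first two steps – my own standalone implementations, where powersetBySize is a name I’m inventing to match the example output above:

    // Step 1: all non-empty subsets of a set of order items.
    function powerset(items) {
      return items
        .reduce(
          (subsets, item) => subsets.concat(subsets.map((set) => [...set, item])),
          [[]]
        )
        .filter((set) => set.length > 0);
    }

    // Step 2 (hypothetical name): all groupings of those subsets that use
    // no more groups than we have suppliers.
    function powersetBySize(subsets, size) {
      return powerset(subsets).filter((grouping) => grouping.length <= size);
    }

    powerset(['A', 'B', 'C']);
    // => [['A'], ['B'], ['A','B'], ['C'], ['A','C'], ['B','C'], ['A','B','C']]

    powersetBySize(powerset(['A', 'B', 'C']), 2).length;
    // => 28 groupings, as enumerated above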

    Some other thoughts

    Combinatoric applications for this algorithm are quite expensive: you can see how the cardinality flourishes pretty fast above, and my example is for a modest 3 order items and 2 suppliers. If those numbers increase by any measure, CPU will be tremendously exercised. (The runtimes of these functions are exponential or worse – a powerset of n items has 2^n members.) I can see why optimization becomes an attractively ripe apple for academicians. Quickly glossing, I can imagine optimizing the looping functions to break before completion when satisficing within a range of acceptance criteria; or structuring the data as Iterators and/or piping through transducers to minimize space complexity with lazy or eager techniques.
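
    For instance, a generator-based lazy powerset (a sketch of that idea, not what I shipped) lets a consumer break before all 2^n subsets materialize:

    // Lazily yield subsets one at a time via bitmasks instead of allocating them all.
    function* lazyPowerset(items) {
      const total = 2 ** items.length;
      for (let mask = 1; mask < total; mask++) {
        yield items.filter((_, i) => mask & (1 << i));
      }
    }

    // A consumer can break early once acceptance criteria are satisficed.
    for (const subset of lazyPowerset(['A', 'B', 'C'])) {
      if (subset.length === 2) break; // e.g., stop at the first pair
    }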

    By the way, JS has pretty underwhelming options for combinatorics. I found one library that was a bit awkward to use and ended up ditching it in favor of a few standalone implementations of powerset and permutations, so I could ensure the code would comply with how I was trying to express the above heuristic. Unsurprisingly, Python’s itertools has combinatoric functions built in and even provides recipes for common tooling you can build on primitives like permutations() and combinations(). For example, powerset():

    from itertools import chain, combinations
    
    def powerset(iterable):
        "powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)"
        s = list(iterable)
        return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
    

    Or simply import the blessed more-itertools library. Of course, this is expected for Python which is heavily developed for the data science world.

    Wednesday September 9, 2020
  • habitations

    The loss of logical purity primacy

    Femi Ogunbanjo & Hanne Klintoe, 1999, The Loss of Sexual Innocence

    Back in February, my entire notion of expertise and how experts make decisions got cracked open after listening to episode 169 of the Greater than Code podcast. While the podcast rolls I’m discovering my socialized construction of the expert: like we pump and grow some kind of muscled query power over an expanding brain-database, combined with a trained analytical rigor gained through extensive tutelage, study, and practice.

    I’m struck by how neo-classical cyberpunk-capitalist fantasy that notion sounds: a techno-positivist outcome of brain as database; the privileging of expertise as accumulation. The time-value of memory perhaps.

    It’s kind of wild: wondering about what sort of instinct decision-making really is in concrete terms. Thankfully we have the >Code crew to help us navigate Decision Theory from the safety of our own homes. The podcast panelists offer a different notion attributed to Gary Klein’s work Sources of Power: How People Make Decisions. Rein’s paraphrasing of this more probable notion for quick decisions under time pressure in our business environs:

    Expertise is an act of rapid mental simulation without formal analysis based on previous experience. It’s more akin to “instinct” than a cartesian scientific methodology.

    Gary Klein calls this model “recognition primed decision” or RPD. The key is that over time and through experience we continue to enhance our instincts around familiar problems, and replace outdated versions that don’t make sense in our newer model of the world. No need to increase disk/head space! Then, when facing a quick-decision moment, we can make strong guesses. Informed guesses, not overly formal; certainly not grooved to dreams of electric sheep or Moore’s law or, by necessity, scholastic architectures of cognition. But not totally informal, either; we are making smart connections.

    I find RPD comforting as a way to rethink my role as an expert despite fewer than a decade practicing my trade: a sophomoric “mid-level” software engineer. While RPD reveals the distinctions between less experienced and more experienced people – notably the latter’s exceeding alacrity to arrive at a first course of action – RPD is about evolving your pattern matching, identifying the least risky choice, and mentally modeling/imagining outcomes (see Variations).

    Embracing an RPD notion of expertise can ramify more permission for less skilled SWEs to face spontaneously emerging decision-making moments in the workplace with less imposter syndrome and tougher felicity. Fellow career middlers, if you have an idea, basically just trust your powers of decision. RPD also beseeches us to expel myths like (quickly riffing): Principal and Staff technologists have reached their eminent station by putting in the time to refine their powers of inspection, analysis, interpretation, evaluation; they’ve amassed all the good frameworks, methodologies, and systems into their memory bank; and the decisions they make are conclusive (foreclosing, narrowly, the scope of possibility) because of this special toolkit which is wielded through an infallible reason.

    Reason and logic might factor into RPD mental simulation, but there’s a primacy of imagination and knowing deep down. Strength of will even?

    I am enough. My powers to decide can be totally excellent.

    Of course, that doesn’t mean my first guess won’t be totally wrong. But hopefully I’m not asked to fix something urgent that far exceeds my bounded rationality of this domain; which might ramify harm on real people if the mark is missed, goddess forbid and forgive.

    I did some additional reflection with the >Code community in their Slack on why I was feeling relieved by Klein’s ideas (Jessica Kerr hearted this so I’m feeling proud to share!):

    Thanks. This really struck me because I think I have internalized some idea that experts have these academically rigorous approaches to problems where they draw on frameworks and data and tools blah blah to come to conclusions.

    Then Rein chimes in:

    Yeah, the analytic decision-making mode is what we are taught, but it’s not what people actually use, especially in high-pressure situations.

    Then me again:

    Oh right that was the other piece. [Making decisions under] high pressure. Which in corporate culture feels like all the time a lot.

    Then some other folks tried to pivot the conversation to talk about how web tech is not really that high “pressure” because bad decisions don’t kill people or necessarily tank the business immediately. So I continue:

    I understand where y’all went with that idea of “high pressure.” But even without the kinds of excess you describe — which I’m grateful not to feel at my (fairly spread out) 1000 person company — I still feel like I’m caught up in a heavy stream; not in a surprising or terribly uncomfortable way; but like a tireless machine of ambient conversation between me and my work and my coworkers that is always humming. So maybe “high pressure” in the sense of constant. Day to day I feel coerced into what feels like quick/on-the-spot decision-making, and always on guard for such; and that’s definitely challenging for me.

    Like, I’m a developer, and I have to make a ton of small decisions all day long when coding. Even medium-sized decisions. (Like the other day deciding whether to coordinate on a revision of a standard component with a team in Spain with similar needs, vs plowing ahead myself because I knew that process may take weeks and I needed output in days). And often these decisions have to happen within a matter of minutes because of the above normal buzz. I think I remember there was another episode where folks talked about how developers make more decisions than CEOs on any given day?

    I was finding comfort in this idea from Gary Klein because I can consider whatever my instinct draws on in that moment to be a practice of expertise. I can feel like an expert; and without any added insecurity for a lack of analytical rigor, if I had to explain my decision. I usually have confidence in my decisions, but I didn’t have a framing of it as expertness. A framing I was likely lacking, as Rein said, because I’ve been taught a dichotomy between analytical thinking and instinct, where there actually isn’t one; at least as I understand the idea without having read the book yet. But as someone relatively new in the industry — just around 6 years building web — being able to come into expertise or see it forming in my behavior is really nice.

    Earlier today, delving through Software Development and Reality Construction, I was delighted to discover another rebuke to my previous ideas of expertise as analytical reason qua database. It smacked of Klein and I wonder if Reinhard Keil-Slawik is influenced by Klein – or conversely – or perhaps they swarm with each other in that world of systems thinking, cybernetics, constructivism, etc…

    From Keil-Slawik’s essay Artifacts in Software Design:

    [Excerpt image: Keil-Slawik on insight]

    Keil-Slawik argues that expertise – ie “insight” – is an enrichment of “understanding,” understanding being a cognitive structure derived from our encounter with the environment, which mashes up phenomena (physical, etc…) and circumstance. It’s not random access from a brain-as-database that has been populated with arrangements of stuff from “out there” – outside ourself/mind – (previous cognitive structures); as if insight were just a result of recall or trial; that smacks of an easy slide into decision paralysis.

    Rather it’s:

    [Excerpt image: Keil-Slawik on insight, continued]

    This notion seems to gel well with RPD. I wonder if Klein would welcome this cognitive science to explain how we do decisions.

    I’m tempted to accept this explanation pro-tem since the thought of unbounded accumulation of stuff in the brain, and lookup at exponential runtime, doesn’t satisfactorily explain the reliable guesses of experts under time pressure. This idea finds kinship with Nora Bateson’s characterization of sensemaking as Warm Data collection:

    “For some reason, you know what it would feel like to put your tongue on the rug — you almost know what it would taste like. You have been gathering the contrasts, comparisons, and differences that give you that sensing since you were born, maybe even through the sense making of prior generations. You have no files for those data.” 🚀

    Keil-Slawik and kin are proposing the idea of human expertise as an individual construction of meaning, as well as a co-constitutive emergence of our own body/mind wisdom and the “external memory” of symbols and tools we work with. The latter is an important point because it reinserts the importance of mnemonics such as Turing machines, algebra, languages, frameworks, etc…. This affords us the benefit of a tradeoff: speed of resolution over perfectness. The example Keil-Slawik gives is of the ape who, if taught once to put sticks together to reach a banana, will subsequently devise similar means of combining objects to get the banana “without any hesitation…and any similar situation.”

    Monday September 7, 2020
  • habitations

    Byzantine algorithm studies: Using math to reverse an integer

    Making code do something requires exactness but that doesn’t necessarily mean programmers will express grammars with any particular concision or ease. Code is an outcome of social construction, background, perhaps aesthetic desire. Toy code problems reveal the apparatus, for sure. If you ask me to reverse an integer I’ll build a world of arrays – because I like them. Directionality makes me think of traversal, therefore arrays. I also recall the reverse() convenience immediately. However, as a result, I’m forced to deploy morphological contortions; which might feel icky for someone derived within an algorithmic hermitage, auto-recursively deriving pleasure therein.

    function reverseInt(int) {
      // STEP 1 - morph
      // (Math.abs keeps the minus sign out of the array so Step 3 can re-sign)
      let intAsArrayOfChars = Math.abs(int).toString().split("");
    }

    Reversal is now nearby:

    function reverseInt(int) {
      // STEP 1 - morph
      let intAsArrayOfChars = Math.abs(int).toString().split("");

      // STEP 2 - traverse, recombine
      let reversedIntString = intAsArrayOfChars.reverse().join("");
    }

    All that remains is a second transmogrification back to the initial type and signing:

    function reverseInt(int) {
      // STEP 1 - morph
      let intAsArrayOfChars = Math.abs(int).toString().split("");

      // STEP 2 - traverse, recombine
      let reversedIntString = intAsArrayOfChars.reverse().join("");

      // STEP 3 - sign
      let result = Number(reversedIntString) * Math.sign(int);
      return result;
    }

    But it has come to my attention that a mathy person prefers preservation, whereby the input is arithmetically recombined with the help of division and multiplication.

    function reverseInt(int) {
      let remainder = 0;
      let result = 0;

      while (int) {
        remainder = int % 10; // peel off the last digit
        result = result * 10 + remainder; // shift result left and append it
        int = parseInt(int / 10); // drop the last digit, truncating toward zero
      }

      return result;
    }

    reverseInt(12345); // => 54321

    But there’s no translation from integer to array. No shift. Grammar school level maths instead. I guess this solution exalts type consistency; and, to be fair, it avoids allocating intermediate strings and arrays. Perhaps belies an obsession with 10? The base binary pair. Still, I can’t imagine that mattering in a commercial software project, so it could only be an academic fancy, benchmarked under extreme conditions. It’s really clever, though.

    Also, been thinking about slice a lot today. Never stop dreaming:

    let int = 12345;
    let intAsStr = int.toString();

    new Array(intAsStr.length)
      .fill(null)
      .map((_, idx) => intAsStr.slice(-(idx + 1))[0])
      .join(""); // => "54321"
    Thursday August 6, 2020
  • habitations

    What's a bug?

    …bugs are interesting and important in themselves: they define the boundary between what is understood and what is not. Hence, they show us where our weaknesses are, and provide opportunities for us to learn and grow.

    Software Development and Reality Construction
    Ch 5.1: The Denial of Error, Joseph A Goguen

    The rush to fix bugs to recoup customer satisfaction is absolutely necessary when writing for capital. I just worry that without the proper retrospection we won’t actually increase our understanding of the software, and therefore the recidivism rate of error will remain constant; or likely increase in the tail wind of growing complexity and entropy.

    Wednesday July 15, 2020
  • habitations

    Recursion is too easy for writing nested menus in UI

    Hannah Höch, Für ein Fest gemacht (Made for a Party), 1936 Collage

    When paired with component-based framework tools and HTML’s unordered lists. Since the rendering is done by the framework, we simply need to progressively stack render calls as we traverse the data. Such brutalist Dada.

    function List({ items }) {
      return (
        <ul>
          {items.map((item) => (
            <li key={item.name}>
              {item.name}
              {item.children && <List items={item.children} />}
            </li>
          ))}
        </ul>
      );
    }
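
    Hypothetical usage, with menu data shaped the way the component expects (example data of my own invention):

    const menu = [
      { name: "File", children: [{ name: "New" }, { name: "Open" }] },
      { name: "Help" },
    ];

    // <List items={menu} /> renders nested <ul>s that mirror the data’s nesting:
    // File (with New and Open indented beneath), then Help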
    Saturday July 11, 2020
  • habitations

    A Recursive Validation Function with User-Defined Exceptions

    Every time I use a recursive function for something practical in commercial software, my notional machine of its behavior is refined. This amounts to a small list of heuristics amassing in my mental pocket:

    1. “It’s a function that calls itself.”
    2. “You make your list of tasks first then start from the last and work your way back up” or “It’s like unpacking a box with other, smaller boxes, with other smaller boxes, with other…and then only start looking at the boxes from the smallest to the largest, one at a time” (🎩👌 Aditya Bhargava’s grokking algorithms)
    3. “It’s good for building up a list or value, kind of like reduce”
    4. “It can be less code but less performant.”

    After working on another problem recently that involved deep-diving a nested JavaScript object and executing validations against each level I’m adding:

    “Recursion is awkward if you need to break early.”

    In my particular case I needed to validate a recursive data structure representing an org chart of Employees and return invalid immediately if the code traversed an Employee with bad data – extra fields, missing required fields, fields of the wrong type, etc….

    Breaking out from a recursive function is not quite as straightforward as you’d think. Also, historically I was used to seeing recursive code employed for tasks that wanted the call stack to build up all the way through the data.

    Like, (deep) flattening an array:

    function deepFlatten(nestedArray, result = []) {
      for (let element of nestedArray) {
        if (Array.isArray(element)) {
          deepFlatten(element, result);
        } else {
          result.push(element);
        }
      }

      return result;
    }

    Or, fetching a complete set of data from a remote source in chunks:

    async function fetchAll(params, all = []) {
      let chunk = await fetch(params);
      let nextPage = chunk.nextPage;
      all = all.concat(chunk.data);

      if (nextPage) {
        let nextParams = { ...params, page: nextPage };
        return await fetchAll(nextParams, all);
      }

      return all;
    }

    What I quickly discovered is that just trying to capture and emit an error from a recursive call stack is already a bit funky. Simply returning false in your function doesn’t work, because the return value of each recursive call is discarded; and since we’re (kind of) “building a value,” it only matters what the outermost call returns. This approach won’t work:

    // Broken: a false returned deep in the recursion is discarded,
    // so the outermost call can still fall through and return true
    function validate(data, schema) {
      for (let item of data) {
        for (let rule of schema) {
          let field = item[rule.name];
          let required = rule.required;

          if (required && !field) return false;

          // Recurse (return value ignored!)
          if (Array.isArray(field)) {
            validate(field, schema);
          }
        }
      }

      return true;
    }

    Using recursion is more like a leap of faith – you are handing over control to the JS engine over an unbounded data set; it’s quite reminiscent of the manner in which higher order functions operate with Array and Object collections. For example, forEach is a powerful and declarative alternative to for and for..of/in loops until you find yourself needing to skip over an iteration or break out of the loop. Keywords like continue and break are unavailable in Array and Object collection methods – these are closed iterators.
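
    A quick illustration of that closed-ness (my own snippet):

    // `break` works inside a for..of loop…
    for (let n of [1, 2, 3, 4]) {
      if (n > 2) break; // stops iterating once we pass 2
    }

    // …but a `break` inside a forEach callback is a SyntaxError,
    // because the callback body is a function, not a loop:
    // [1, 2, 3, 4].forEach((n) => { if (n > 2) break; });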

    Your only recourse in a recursive function is relying on an accumulator – since the call stack is LIFO – threaded through each stack layer to collect what you find. So capturing and emitting errors from your recursive function might look like this:

    function validate(data, schema, errors = []) {
      for (let item of data) {
        for (let rule of schema) {
          let field = item[rule.name];
          let required = rule.required;

          if (required && !field) {
            errors.push(new MissingFieldError(item, rule));
          }

          // Recurse, passing the shared errors accumulator down
          if (Array.isArray(field)) {
            validate(field, schema, errors);
          }
        }
      }

      return errors;
    }

    If our program requirements suggest we want to parse the entire org chart for bad data, this function will give us a result array we can further process to report errors. But for my purpose, there’s too big a potential cost of unnecessary runs while a large call stack is cleared for a large org chart.

    In order to stop processing the org chart and return an invalid result early, we need a solution that stops execution entirely when the invalid check is entered, no matter how deep in the call stack. Happily, the solution ends up being rather elegant and simple, though counter-intuitive. Rather than returning (false, an error list, etc…), you can throw and thereby forcibly halt the engine’s execution of the code. Here’s an example with throw:

    function validate(data, schema) {
      for (let item of data) {
        for (let rule of schema) {
          let field = item[rule.name];
          let required = rule.required;

          // It's even one less character to write! 🤣
          // Also now we have total control over the exception content
          if (required && !field) throw new MissingFieldError(item, rule);

          // Recurse
          if (Array.isArray(field)) {
            validate(field, schema);
          }
        }
      }

      return true;
    }

    Day in, day out we work constantly with client applications that only throw as a result of unintended bugs in the program. But we can take advantage of this standard JavaScript behavior and erect an appropriate error boundary. Remember:

    Execution of the current function will stop (the statements after throw won’t be executed), and control will be passed to the first catch block in the call stack. 🔗

    Therefore we can rename and wrap our recursive function that throws, and put it inside an error boundary to achieve that early break we want. This approach even comes with the added advantage of declaring the content of our user-defined exception at throw site; eg, utilizing meaningful error constructors or factories like missingFieldError().

    function validate(data, schema) {
      try {
        validateInner(data, schema);
      } catch (error) {
        // returns the thrown MissingFieldError!
        return error;
      }

      return true;
    }

    Even more, the elegance of this design with an outer try/catch allows for separate testing of our validation business logic – the rules against which bad data throw – and error handling – what errors we emit for certain cases.
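
    For completeness, a minimal sketch of the user-defined exception assumed above – the MissingFieldError name comes from the snippets, the fields are my guess:

    class MissingFieldError extends Error {
      constructor(item, rule) {
        super(`Missing required field: ${rule.name}`);
        this.name = "MissingFieldError";
        this.item = item; // the offending record, e.g. an Employee
        this.rule = rule; // the schema rule that failed
      }
    }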

    Sunday July 5, 2020
  • habitations

    Debugging TS in VSCode and Russell Ackoff's Problem Treatments

    When I’m drifting between jobs – say, during a global pandemic and massive civil rights upheaval – I tease away some time from protest and anti-racist organizing to study programming basics and plunge below the membranes of web frameworks. Of course, the first thing you realize after a few key clacks is the brutal lack of a full-featured dev environment. I think my industry folks who don’t spend free time on side projects can relate.

    I had a serious moment of angst this morning about whether I could achieve debugging for TypeScript files in VSCode within a reasonable amount of time – that is, an amount of time that wouldn’t deplete all my creative energy before writing any code. To my very pleasant surprise this was much easier than expected. You need two things.

    1. Add a task to run the compiler in watch mode (this is totally boilerplate):
    {
      "version": "2.0.0",
      "tasks": [
        {
          "type": "typescript",
          "tsconfig": "tsconfig.json",
          "option": "watch",
          "problemMatcher": ["$tsc-watch"],
          "group": "build",
          "label": "tsc: watch - tsconfig.json"
        }
      ]
    }
    
    2. Add your outFiles (dist) target glob to your launch.json file:
    {
      // Use IntelliSense to learn about possible attributes.
      // Hover to view descriptions of existing attributes.
      // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
      "version": "0.2.0",
      "configurations": [
        {
          "name": "Launch server.js via nodemon",
          "type": "node",
          "request": "launch",
          "runtimeExecutable": "nodemon",
          "program": "${file}",
          "restart": true,
          "console": "integratedTerminal",
          "internalConsoleOptions": "neverOpen",
          "runtimeArgs": ["--experimental-modules"],
          "outFiles": ["${workspaceFolder}/out/**/*.js"] // <-- ADD YOUR OUTFILES GLOB HERE
        }
      ]
    }
    

    And voila: pausing on a breakpoint in a basic router I wrote while running a node server.

    Ok new topic…

    During episode 138 of the >Code podcast with guest Vaidehi Joshi (of BaseCS podcast fame, amongst many other things), the discussion veers into the topic of problem solving (~20:30). Jamey and Vaidehi riff about that condition we all know and love – and never appreciate enough – whereby solutions come to you while away from the computer; while your brain is “percolating” (Jamey) or in a “mental marinade” (Vaidehi). Jacob Stoebel expands on that by describing another common mental turnabout we programmers do: we catch ourselves down a rabbit hole trying to – big “S” – Solve the problem with our first instinct; but that increasingly deeper and muddier slog causes us to pause, shake off the adrenaline, climb back out of the hole, and ask ourselves how do I get rid of this problem. Rein, per usual, picks up the thread and introduces the panel/listeners to a theorist named Russell Ackoff, who minted a classification for approaches to problems called “Problem Treatments.”

    To list and paraphrase them:

    1. Absolution - Ignore the problem and hope it goes away

      It’s easy to joke about this one, but Rein actually provided a compelling case for this. For example, security engineers often make the tradeoff to ignore a security loophole if the threat isn’t great enough.

    2. Resolution - Doing something “good enough”

      This feels like a pretty standard principle for building product considering some of the typical aspects of commercial software work:
      a) Your team will never have a complete understanding of how your software will express itself once “running” or “used” by customers. Therefore perfection, completeness, and wholeness are neither desirable nor, actually, ever achievable.

      b) Deadlines and other external stakeholder pressures will force you to sacrifice however your team defines “great” or “better” software.

    3. Solution - Finding an optimal solution with experimentation or research

      There are touchstone moments in the development of software where this happens. When choosing a framework. When designing a styleguide. When choosing a caching strategy. When designing a game plan to migrate code.

    4. Dissolution - Redesigning the system so that the problem no longer exists

      This is what Jacob was hinting at. That pure land you inhabit when refactoring code to circumvent an immediate impasse in the software by reframing the problem. I fondly remember dissolving a particularly tricky conditional statement in a web client by working with a backend engineer to implement an enum for a resource type. This removed a bunch of object introspection (Law of Demeter violations, etc…) in favor of a simple switch statement.
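
      Roughly the shape of that dissolve, sketched from memory with made-up names:

      // Before: introspecting nested objects to infer what kind of resource we have
      if (resource.attachment && resource.attachment.videoUrl) {
        renderVideo(resource);
      } else if (resource.attachment && resource.attachment.imageUrl) {
        renderImage(resource);
      }

      // After: the backend exposes an explicit type enum and the client just switches
      switch (resource.type) {
        case "VIDEO":
          renderVideo(resource);
          break;
        case "IMAGE":
          renderImage(resource);
          break;
      }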

    Tuesday June 16, 2020
  • habitations

    The explanation of Question 12 of Lydia Hallie's fabulous list of JS interview questions, and others

    Lydia Hallie’s list of JS interview questions is extensive and probably one of the most comprehensive resources out there. The laundry list of questions runs the gamut from basic to esoteric language features, each of which progresses you through various themes in what seem like meaningful sequences. It appears some modicum of pedagogy is involved, which avoids that oft vacancy of substance typical of a listicle.

    Yesterday I was running through a block of questions with an ex-colleague and we noticed that some of the answers were a bit inchoate. It’s so easy for meaning to drift when talking about language features, especially when it comes to JS. Consider navigating the following taxonomy – related to creating objects in JS – for readers of all skill levels without specious reasoning (the kind of short-hand, “you get the point,” close-enough, superstitious explanation we sometimes use; which, by the way, I think greatly hindered my early career development): object, constructor, function, class, prototype, instance, static, method, new, this. It’s quite hard, but really worth being as correct as interpreting ECMAScript affords us.

    I decided to open up some discussion on Question 12 which tests one’s knowledge of the differentiated effects of using the new operator before a function call:

    https://github.com/lydiahallie/javascript-questions/issues/82#issuecomment-645022140
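
    For the uninitiated, the distinction at play is roughly this (my own minimal snippet, not the question verbatim):

    function Person(firstName, lastName) {
      this.firstName = firstName;
      this.lastName = lastName;
    }

    const a = new Person("Ada", "Lovelace"); // `new` binds `this` to a fresh object and returns it
    const b = Person("Grace", "Hopper"); // without `new`, `this` is the global object (or undefined in strict mode)

    console.log(a); // Person { firstName: 'Ada', lastName: 'Lovelace' }
    console.log(b); // undefined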

    Definitely called for a little revisit of YDKJS, no doubt.

    I also feel like I need some clarification on Question 14. What do we call the very root object in JS that all objects delegate to?

    https://github.com/lydiahallie/javascript-questions/issues/26#issuecomment-645058195

    Coming up with examples is hard. I think Question 11 could use a bit of rework too because there is a this scope issue that somewhat overloads the focus of the problem. Here I go again:

    https://github.com/lydiahallie/javascript-questions/issues/355#issuecomment-645066354

    Tuesday June 16, 2020