If people cannot write well, they cannot think well, and if they cannot think well, others will do their thinking for them.

Writing is, for many, a formidable task. The difficulty often stems not from the mechanics of language but from a dual uncertainty: a lack of clarity about the subject and an anxiety about how one's words will shape perception. Countless pieces have been written on how to overcome these problems, and many of them are excellent.

However, my own relationship with writing is more fundamental. It is not an act of performance for an audience, nor a platform for preaching. It is, quite simply, the process of thinking made tangible - a personal ledger of the paths taken, the decisions made, and the conclusions reached. In that way, it is a deeply personal ritual.

The primary utility of this process is that writing is an unforgiving mirror for the mind. It reflects every gap in your understanding, every leap of faith you’ve passed off as logic. These gaps stare back at you, and the initial feeling can be one of intellectual humiliation. This exposure, however, is the entire point, and pushing past the discomfort is a prerequisite for genuine intellectual growth. It forces you to fill the void, to formulate the questions you hadn’t thought to ask, and in doing so, to uncover truths that were previously obscured.

I view writing as a materialized stream of consciousness. When the thinking is clear, the writing flows. When it is jagged, it serves as a diagnostic signal. This textual discomfort is invaluable; it is the friction of thought encountering an unresolved problem. From that discomfort emerge the precise questions needed to bridge the gaps in your own logic.

This reframes the common complaint of writer’s block. For the terminally curious, the problem is inverted: a paralysing overabundance of questions. Which thread to pull first? What is the optimal path through this maze of inquiry? The solution lies in a small shift in perspective. We tend to visualize knowledge exploration as a tree, with a single trunk and branching hierarchies that demand a specific starting point. A more accurate and liberating model is that of a graph - a web of interconnected nodes. It might be a Directed Acyclic Graph (DAG), where inquiry flows along lines of logical dependence, or even a cyclic graph, acknowledging that we must periodically revisit fundamental questions with new understanding. In such a structure, there is no single “correct” entry point. The crucial skill is not finding the perfect start, but navigating the graph effectively. At any node, one must develop the discipline to pause and ask the most critical question of all: “So what?”

Navigating this graph of ideas is the first challenge. The second, more demanding challenge is to ensure that every edge you draw between the nodes - every inference, every causal link - is faithful, not merely plausible. This brings me to a critical lens from my work in AI research: the distinction between plausibility and faithfulness. An argument is plausible if it seems reasonable, coherent, and could conceivably be true. An argument is faithful only if it accurately represents the underlying structure of the problem and the logical steps taken to arrive at the conclusion.

Much of our internal monologue - and consequently, our initial, unedited writing - is merely plausible. To achieve rigorous, faithful thinking, one must apply a form of Occam’s Razor to every claim, seeking the simplest, most direct line of reasoning that holds true.

This distinction is paramount when writing is used as a tool for thought. If the written artifact is flawed - that is, if it settles for plausibility over faithfulness - then the underlying thought process it represents is equally flawed. The reverse also holds: if your thinking has gaps, or if you didn’t question yourself enough while writing down your ideas, you will end up producing dry, merely plausible statements (much like AI does).


P.S. I am increasingly convinced that “plausibility vs faithfulness” is a fundamental challenge for AI systems, one whose negative effects compound across human thinking and AI research itself.