Some of my unstructured thoughts. Not really questions or answers. Not refined enough to be blog posts. Just a lot of my open-ended ruminations and rants into the void.
Tenet and generational conflict
I don't think I gave this movie enough credit when I first watched it. I was too focused on the mechanics of time travel in the film, rather than paying attention to the deeper allegorical themes at play.
There's one interesting concept in particular that I noticed in Tenet: our profound hatred of the past.
To summarize, Tenet's antagonist, Sator, makes contact with future humans, and they command him to eliminate all his present-day human compatriots by reversing the direction of entropy (which risks their own existence as well). Here, the central conflict emerges: the battle between the present and the future, with both sides vying to justify their existence at all costs.
I think there's something to be said about humanity's distaste for our past; this is evident in our narration of history as an inevitable march from primitive, brutish tribes to modern, progressive civilizations. We pathologize the past as an exceptional evil---of which all traces must be obliterated. We are willing to kill it off, even if it means risking our very own destruction.
There's an additional complication of information asymmetry at play in the film. None of the characters understand exactly why the future desires to kill off the present---much less why killing the present wouldn't get rid of the future as well. They are fighting with an inexorable temporal restriction on information (to use the film's language, "what happens will happen").
What's interesting is that these restrictions end up working to the advantage of our protagonist. Being unable to change what happens also implies that the future cannot change the past, despite its best efforts. For that reason, the success of the protagonist's mission appears to have this rather counter-intuitive cross-temporal existence -- which is taken to its extreme when it turns out the protagonist's right-hand man was apparently a time ghost.
Expanding the domain of human cognition
A university professor once told me that the economy "never grows, it only finds ways to monetize previously unmonetizable things"; I think a similar principle applies to technology. We don't invent "new things"; we merely subject a new domain to human cognition.
Take the weather, for example: before weather forecasts, we did not think about whether it'd be rainy later in the afternoon before walking outside -- but now that we have forecasts, this becomes something to think about. We've progressed far beyond that; not only do we have climate models that are precise over extraordinarily long time horizons, there are nations that possess the ability to engineer the weather itself.
Once you expand the domain of what counts as a human invention, this re-definition of technology as the expansion of human cognition has more profound implications. If you take stories to be a human invention, then by our definition, stories are a technology that provides us an entry tunnel to our deeper values and morals.
Framing technology through this language may explain the oft-cited contemporary malaise of "cognitive overload". If technology by definition expands the domain of human cognition, it is unsurprising that humans are experiencing the overuse of cognitive force.
Once we wielded the power to control the weather, the weather became our responsibility.
Second-order theories of morality and epistemic humility
I was captivated by an incredible podcast conversation between Alex O'Connor and David Wolpe last weekend. Wolpe describes what was to me a novel resolution to the problem of evil. In short, God must permit a universe with unnecessary suffering, else it is impossible for truly good humans to exist. If humans are only kind because there exists a certainty of negative consequence, then there are no naturally kind humans.
What I find novel about this hypothetical universe is the certainty of negative consequence. It seems that perfect knowledge of the future is the only difference between our universe and this hypothetical one---if you can see infinitely into the future, then you have certainty of whether certain actions will bring about negative consequences for yourself.
There is a grain of intuition that emerges from this emphasis on certainty: do fundamental moral values rely on the existence of imperfect information? If I were a selfish criminal and knew with absolute certainty that my neighbor would never find out about my thievery, my optimal strategy would be to steal from him every day. It is only because of my imperfect knowledge of my neighbor's intentions that I choose against it.
One piece of fiction comes to mind while ruminating on this topic: Dune. A theme in the novel that I haven't heard discussed is the corrupting nature of prescience. One critical plot device is that as Paul Atreides acquires the power of prescience, he becomes ever more willing to wager the lives of billions to achieve his objective.
There is something that feels intuitively wrong about the patterns of behavior that would likely emerge if people gambled at poker whilst having perfect knowledge of everyone else's cards. I suspect that the sight of people betting their family's life savings over a hand of cards would be gut-wrenching.
Following that train of intuition, some part of me wonders whether our imperfect knowledge of the world is an evolutionary advantage. Would it truly produce a more stable society if we had perfect knowledge of each other and our actions? I could see our inability to either hear each other's thoughts*** or see into the future as somewhat of an evolutionary equilibrium point. Any closer to prescience, and we might be running up against the collapse of civilization.
There seems to be an interesting relationship between the concept of optimal information distribution in a society and the metaphorization of the monetary system as a "database" of goods and services. What would be the optimal distribution of information in this database that would produce the most stable society?
***The Trisolarans from The Three-Body Problem come to mind. Maybe there's a science fiction angle to this.
Consciousness, Abstraction, and Computers
I always found it interesting how the term "abstraction" is thrown around without much thought amongst software engineers. Abstraction is a rather non-trivial philosophical concept; whether abstractions exist at all outside of our conscious experience is still a very open question.
I find it fascinating that abstractions "work". We can't quite describe what we're doing when we're generalizing an idea into its more abstract variant, but for some reason, this generalization appears to be necessary for us to develop any breakthrough in thinking. A question emerges: what would thinking look like sans any abstractions? Is it even possible to "think" without the orchestration of some limited set of abstract concepts?
It seems extraordinarily important to develop insights into the human capacity for abstraction in light of recent advances in artificial intelligence. As it concerns the capacity for human thought, our assessment of machines seems to be bottlenecked by our understanding of ourselves.
- Expansion of recursion to the study of consciousness
- Monism
- Panpsychism meets computers
- Artificial consciousness
- Quantum brains
- SOTA neural imaging for consciousness
Religion as a meta-cognitive structure
I have a hypothesis that the mass secularization of society may be partly explained by the Flynn effect: an explosion of abstract reasoning skills in the general populace is rendering the core function of religion increasingly obsolete.
It appears to me that one of the primary functions of religion is its ability to develop a meta-cognitive framework for thought -- in essence, describing how one should think about the world. Any system of morality or value appears to envelop the world in some sort of interpretive framework; religion is no different. It operates from a fundamentally high level of abstraction; you are describing thoughts that have not even emerged yet, using a maximally generalizable grammar to explain all of life. Throughout history, large collectives of humans have adopted a comparatively small number of religions, orchestrating society under a unitary collective abstraction. It may be the case that as individual humans possess greater and greater abstract thinking skills, more humans can orchestrate individual abstractions without the need for some collective abstraction like religion.
Rationalizing the irrational
I'm puzzled by people who attempt to memorize the digits of pi by remembering some logical relationship between the numbers (e.g. telling yourself a story about what number comes next, describing algebraic relationships between digits, etc.). By virtue of pi being an irrational number, we know that these methods are not properly describing the nature of the number -- you're taking a fixed-length random number and attempting to rationalize each digit.
This became a starting point for a few interesting thoughts:
- Are all the answers to our deepest problems contained in the number pi?
If the digits of pi serve as an infinite random number generator, then you pretty much have the proverbial infinite typewriting monkey contained within the number (this assumes pi is a normal number, which is widely conjectured but unproven). You can develop some encoding schema between the digits of pi (ex: put it in base 26 and have it start outputting English characters), and then reap the benefits of all the secret knowledge contained in the digits.
You can even index each digit of pi and count the units of time that pass before you reach a given "secret" contained in pi (a toy version of this encoding is sketched after this list). What's fascinating about this heuristic is that it appears to reframe solutions to our deepest scientific questions as a function of time and randomness. With enough randomness and time, you can solve any problem (classic Darwinian take).
- What could this thought experiment say about the way we narrate the past?
We know that our world is composed of entities (ex: pi, e, circles) that escape the language of rational numbers, so narrating the causal forces that drive this world might be as fruitless as predicting the next digit of pi after memorizing the many digits beforehand. Perhaps when we're developing a history of the past (taking some fixed pivot point in time and then developing a narrative that explains the digits before it), we're narrating time like someone trying to remember the digits of pi.
After much authorship, you may very well have created a story with internal logical consistency and complete historical information, but the story itself cannot tell you what will happen next, just as a story used to memorize the digits of pi won't be able to tell you the next digit.
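Here's a minimal sketch of what the encoding from the first thought could look like, written in Python and assuming the mpmath library for arbitrary-precision digits of pi. The specific scheme (mapping each pair of decimal digits onto a letter, mod 26) is an arbitrary illustrative choice of mine, not a true base-26 conversion.

```python
# A toy version of the "secrets in pi" encoding described above.
# Assumes the mpmath library; the two-digits-per-letter (mod 26) scheme
# is an arbitrary illustrative choice, not a true base-26 conversion.
from mpmath import mp

mp.dps = 10_000  # compute pi to 10,000 decimal places
pi_digits = mp.nstr(mp.pi, mp.dps).replace(".", "")  # "31415926535..."

def digits_to_letters(digits: str) -> str:
    """Map consecutive two-digit chunks of the digit string onto a-z."""
    return "".join(
        chr(ord("a") + int(digits[i:i + 2]) % 26)
        for i in range(0, len(digits) - 1, 2)
    )

stream = digits_to_letters(pi_digits)
print(stream[:40])         # the first 40 "letters" hiding in pi
# The index of a word's first appearance is its "address" in pi: a crude
# measure of how much time/randomness it takes to reach that secret.
print(stream.find("cat"))  # -1 if it never appears in this prefix
```

Under this scheme, the index returned by find is exactly the counting heuristic above: how deep into pi's randomness you have to travel before a given message first appears.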
Social Darwinism
I recently watched the Kingdom of the Planet of the Apes movie, and one element of the plot that stuck out was the primates insisting on "evolving" their society through the aid of human technology. The movie seems to have this embedded critique of agential theories of evolution - the idea that natural selection is a process propelled by conscious decisions rather than inarticulable environmental pressures. Along this path, the movie appears to also architect a critique of intellect as the primary bottleneck for civilizational advancement; the apes are clearly underdeveloped as ethical agents, but they have surpassed the average human in cognitive ability. There is more to be said about these two ideas as critiques of social Darwinism.
Willpower / Ego Depletion
Ever since I was in high school, I thought that willpower was a finite commodity. You had to budget the number of difficult tasks you completed throughout the day, otherwise you'd just run out of "will". At the time, I was compelled by a study that had subjects eat tasty / distasteful food before a difficult task and measured the likelihood of them completing that task.
Now, looking back, it seems like the minimal test case of having participants eat different foods before challenging tasks was too limited to generalize well to all domains involving "willpower". Additionally, it appears that this particular study has some replicability problems.
It appears that willpower-oriented explanations of human behavior are just observations of a placebo effect -- the more strongly you believe that willpower is limited, the less willpower you're likely to feel. I'd be fascinated to see whether people would live different lives if this belief were reversed.
Probability
Probability is not "real".
I feel like there's a tie between this and machine learning.
Social Media Musings
I think a more targeted analysis of what social media platforms are would be necessary for us to articulate what we intuitively feel is wrong about them.
Some dimensions of these platforms that make them suboptimal sources of meaningful knowledge:
- How we access the knowledge
One interpretation of test-oriented learning is the obstruction of knowledge: you're not given the answers. This learning method bakes in habitual time investment as a condition of accessing information.
The bet is that this restriction will force students to study a wider breadth of knowledge (as opposed to the limited number of answers on the test key) and nudge them into spending more time wrestling with the material. For foundational knowledge domains that generalize well, it intuitively feels like this methodology produces students who are well-trained in abstract reasoning.
It appears that social media doesn't quite pair well with that knowledge acquisition formula. There's a pretty wide diversity of posts optimized for short-term engagement. It's difficult to develop any good abstractions given the user experience presented by these platforms.
- What knowledge is shown on these platforms
A normal person's normal thought will not help you in your abnormal situation.
Most people on these platforms won't offer you amazing advice on problems beyond a certain point of specificity. Particular problems require particular information, with empirical tests and intuition built up over long periods of time.
I'm thinking about a concept that I like to call the "inversion of the knowledge-experience hierarchy".
We live in an age where kids are primarily consuming content made by other kids. You can say the same about teenagers and young adults. One possible side effect of this is the hollowing out of knowledge bases. We're not building on the paper trail of our forefathers; we're digging holes and then filling them in with short-term experiences. I'm curious (if not slightly worried) about where this will bring us in a few decades.
Now, you might ask: what made the past different in this regard? Surely peers were learning from other peers back before social media existed.
I'll say that one compelling distinction is that previously, information had to be accessed far more intentionally than it is today. Information was dispersed at a far lower throughput, with true physical barriers to access (e.g. internet timeouts / newspapers still on paper) that restricted its availability enough to require conscious choice.
I'd be interested if there's any literature on whether we're approaching maximal information throughput in humans. It seems like an upper limit must exist (if nothing else, the limitations of eyesight, hearing, and reading).
That is not to say that there is a limit to knowledge, which is a measure of information interpretation. An infinite number of interpretations may very well exist.
So what can we do about it?
Common objections to social media regulation that reduce to the inevitability of short-term human preference optimization are unpersuasive to me. Even though nicotine heavily spikes dopamine levels, we were largely able to phase out cigarettes with targeted public information campaigns (though it seems like vapes are making a comeback). I think we can begin to invest some thought into mirroring the most successful tactics of grassroots anti-drug movements for social media consumption (and we should obviously do away with the tactics that didn't work).