I'm pretty curious about the world. Here are some of the questions that percolate in my mind.

Is the world getting faster or are we just thinking in shorter time slices?

I've seen my share of doomsday internet articles describing the collapse of human attention spans. What's often implied by these authors is that humans aren't thinking at all anymore (the proverbial comparison between humans and goldfish comes to mind). I'm a little skeptical of this take. One possibility that I haven't seen addressed is whether humans are, in fact, still thinking, just in ever-shorter time slices.

This would explain the common complaint that the "world is moving too fast". Of course, if you think in increments of seconds, then your life would seem closer to an all-out sprint than a well-paced marathon. You've subjected your own life to a kind of time dilation paradox, engineering the seconds to feel long but the years short.

Anyone with a cursory understanding of greedy algorithms knows that optimizing for an ever-shortening time slice usually doesn't pan out to be the best long-term solution. It might not make sense for us to hold our lives hostage one second at a time.
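
To make the greedy-algorithm point concrete, here's the textbook coin-change example in a minimal Python sketch: always grabbing the biggest coin available (the locally optimal move) loses to planning over the whole amount. The denominations and amount are the standard classroom ones, chosen purely for illustration.

```python
# Toy illustration: greedy choices vs. planning ahead (classic coin change).
# With denominations {1, 3, 4}, making 6 greedily gives 4 + 1 + 1 (3 coins),
# while planning over the whole amount gives 3 + 3 (2 coins).

def greedy_coins(amount, denominations=(4, 3, 1)):
    """Always take the largest coin that fits (locally optimal, often globally suboptimal)."""
    coins = []
    for d in denominations:
        while amount >= d:
            coins.append(d)
            amount -= d
    return coins

def optimal_coins(amount, denominations=(4, 3, 1)):
    """Plan over the whole amount with dynamic programming."""
    best = {0: []}  # best[t] = shortest list of coins summing to t
    for total in range(1, amount + 1):
        candidates = [best[total - d] + [d] for d in denominations if total - d in best]
        if candidates:
            best[total] = min(candidates, key=len)
    return best.get(amount)

print(greedy_coins(6))   # [4, 1, 1]  (3 coins)
print(optimal_coins(6))  # [3, 3]     (2 coins)
```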

All that said, I'd be curious to see if there are any good psychological studies on self-assessments of productivity; whether there's been an observed shift toward smaller time units when people assess their own work output. I'd also be interested in seeing how this short-termism generalizes across a variety of lifestyle habits (whether that be exercise, diet, or sleep).

What's the optimal time distribution of money across our lifetimes?

We often hear that people on their deathbeds will never mention money as one of their top life priorities.

Some suggest that this is an indication that money has no intrinsic value. I don't share that view. Old people might not value money because there's no time left to spend it.

I believe there's a discussion to be had about variable depreciation rates of different life investments. Money might have an accelerating depreciation as life approaches its conclusion. Health might spike in value toward middle-age, but then decline later on. Family may have an ever-accelerating appreciation rate.

Once you realize that, you're forced to confront a two-pronged question: which investments to make, and when to make them.

Even if very imprecise, a back-of-envelope calculation of the net value of different life investments, given their depreciation rates, could be hugely beneficial for young people planning out the long term.
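
As a sketch of what that back-of-envelope calculation might look like, here are a few lines of Python that assign each "life investment" an age-dependent value curve. Every curve and number below is an invented placeholder, not a claim about the right values.

```python
# Back-of-envelope sketch: relative value of different life investments by age.
# All curves and constants are invented placeholders for illustration only.

AGES = range(20, 91, 10)

def money_value(age):
    # Assumed: money depreciates faster as life approaches its conclusion.
    return max(0.0, 1.0 - ((age - 20) / 70) ** 2)

def health_value(age):
    # Assumed: health peaks in value toward middle age, then declines.
    return max(0.0, 1.0 - abs(age - 50) / 50)

def family_value(age):
    # Assumed: family appreciates at an ever-accelerating rate.
    return ((age - 20) / 70) ** 2

for age in AGES:
    print(f"age {age}: money={money_value(age):.2f}  "
          f"health={health_value(age):.2f}  family={family_value(age):.2f}")
```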

Is it possible to statistically generalize breakthroughs in human creativity?

Recently, I've started to reason about machine learning (ML) via some crude first-principles reductions to statistics. Put concretely, I've just been asking myself the question: can this problem be solved by clever probability engineering? We obviously know that some problems (e.g. coin tosses, zero-knowledge proofs, quantum physics) don't have useful statistical patterns. Other problems do have statistical patterns, but might not be especially meaningful to generalize over a long time horizon (e.g. economic indicators, political polling). That said, there's quite a large cross-section of problems that we can solve using statistical generalization (e.g. genetics, disease spread).

One thread of questions I've encountered is whether human creativity itself lends itself to statistical analysis. Is it truly random? Can we truly articulate the process of true human creativity? And if we were to articulate such a process, would it cease to be creative?

It's a truly unbounded domain of intriguing loose ends. I read an article by David Chapman earlier this year about how there's no meaningful scientific method that could reasonably generalize the process of scientific breakthrough (written mostly in response to the suggestion that AI could develop an "automation of science"). I'm also sure there's a camp of people committed to some form of statistical determinism (Malcolm Gladwell seems like a good, if imperfect, example) who would dispute this claim, holding instead that history is a set of articulable inputs and outputs that can be described by some probability distribution converging to a historical equilibrium point.

I'll have to do much more thinking on this subject to develop a good approach toward an answer (defining "human creativity" would be a good start).

It's tough to say whether DALL-E or GPT truly shifts the needle on this question. These algorithms are outputting what the average human [1] thinks is "good" drawing and writing. Should we trust the average human to assess a breakthrough in human creativity? Probably not.

[1] To perform optimally, these models must converge to the statistical distribution of a human population's art preferences, which ends up representing the average human's creative preference. That said, that population could be a limited subset of humans, in which case we've opened up a political problem of who gets to decide what constitutes human creativity.

How rigorous is intelligence theory? Can we make it better?

Just a few months ago, I stumbled upon an interesting book by Dr. Yarden Katz critiquing the neutrality of intelligence theory, in particular the application of intelligence theory to modern artificial intelligence. His work disputes the belief that advanced deep learning algorithms can become a universal truth engine. He is skeptical of intelligence theory writ large (doubting whether it's ever possible to have a universal metric for intelligence that escapes its historical relationship to racial eugenics) and finds that modern AI's need for pre-computed symbolic systems to interpret training sets may always subject them to flawed human biases.

I read another article, by Dr. Blair Fix, supporting this skepticism about intelligence theory. His observation is essentially that the more "general" intelligence is, the less meaningful it becomes. If I were to give you a "general performance" test, you'd ask what exactly was being tested. Likewise, if I were to tell a college engineering student to develop an engine that "generally computes", they'd ask what exactly it was computing. Yet we don't seem to apply the same level of skepticism when people offer "general intelligence" examinations. Some part of me feels like we might be missing the forest for the trees when social scientists tout the incredible correlation coefficients of IQ studies.

That being said, I've come across other definitions of intelligence (e.g. the Hutter Prize, which frames intelligence as lossless knowledge compression) that seem to cut across the arbitrariness objection of both Katz and Fix. I haven't read these thoroughly enough to see whether they accomplish the objective of establishing a meaningful universal theory of intelligence, but I'm interested in learning more. I'd be fascinated to see whether intelligence can be quantified in a more meaningful sense (for example, expected knowledge output given a quantity of intelligence) and whether we can retroactively apply such a theory to the emergent behaviors of artificial intelligence algorithms.
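
To make the compression framing concrete, here's a crude toy in Python using zlib as a stand-in compressor: text with more learnable structure compresses much further than noise. This is only an illustration of the general idea, not the Hutter Prize's actual benchmark or scoring rules.

```python
# Crude illustration of the "intelligence as lossless compression" framing:
# the more structure a compressor can capture, the smaller the output.
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size (lower means more structure captured)."""
    return len(zlib.compress(data, 9)) / len(data)

structured = b"the cat sat on the mat. " * 200   # highly predictable text
random_ish = os.urandom(len(structured))         # incompressible noise

print("structured text ratio:", round(compression_ratio(structured), 3))
print("random bytes ratio:   ", round(compression_ratio(random_ish), 3))
```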

Are we feeling more lonely, or are we more scared to be alone?

I've seen and used a handful of social networking apps over the years; usually these consumer-facing products pitch themselves as a solution to our epidemic of loneliness.

I've become somewhat skeptical of this value proposition. It's not clear to me that expanding the pool of possible relationships would help lonely people stop feeling lonely. I don't think there's evidence to suggest that everyone's currently living in the wrong social circle, and that a service that matches people up properly would remove the most important bottleneck to renewed social purpose.

Of course, there are some cases where such a service might help put together small clusters of people with very abnormal interests; but frankly, most people aren't abnormal. There has to be some explanation for why people feel lonely even in situations where they're around people they'd get along with.

My hypothesis is that some of the loneliness epidemic can be explained by more people being scared of being alone. Much of the current work I've seen is on the supply side of social interactions (e.g. whether suburbs hurt our children's social lives), and not as much on the demand side (e.g. whether people are actually trying to forge new connections, and whether they're satisfied with the relationships they do have).

I came around to this hunch after learning about monks isolating themselves from society for years. These people don't have a problem with staying alone for long periods of time, and we don't seem to have a problem with them doing so either. Yet, if we met someone who spent most of their waking hours in their local library, we'd probably ask them to make a friend or two. It's interesting that we have different expectations in these scenarios. Because of digital work, it's more possible than ever for people to be comfortable alone, and I'm wondering if we're all just a little too slow to accept that.

I'm interested in whether this self-fulfilling fear of loneliness truly exists, and if so, what its possible causes are. My intuition tells me that this phenomenon (if it exists at all) might be explained by some mix of neurotic social comparison (e.g. social media) and cultural expectations (e.g. everyone wanting to be super popular).

Like most social science, figuring out a robust measurement methodology would be the hardest part of constructing an answer. I figure some combination of public surveys (e.g. whether people find themselves comparing their social activity to others') and small experiments (e.g. whether people actually end up feeling less lonely when all barriers to meeting each other are removed) would nudge me toward a root-cause assessment of our epidemic of loneliness.

Should we really be trying to encourage everyone to be leaders?

Every so often during my meditation sessions, I catch myself wondering whether leadership is an inherent virtue. It seems like we have an overabundance of people who firmly believe they're right and will get their way by any means necessary. There might be something to be said for the value of patience and putting others before yourself.

That being said, I do see where the leadership apologists are coming from. Much-needed social change has to come from somewhere, and accomplishing it requires people who are willing to break others out of the norm. Perhaps that's the kind of leader they have in mind.

Given all that, I suppose I'm interested in a few sub-questions:

Can large language models (LLMs) reason?

Large language models (LLMs) underpin state-of-the-art machine learning technologies like ChatGPT. They absorb a large text database to develop a mathematical model of language called a generative pre-trained transformer (GPT). They are then fine-tuned to produce different types of text (e.g. dialogue, novels, poetry). When performing these tasks, these models typically execute some variant of a "next word prediction" task: given some previous words, they predict what the next word in the sequence should be. This rather simple approach to language understanding has generated some pretty impressive results.
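
To make the next-word-prediction loop concrete, here's a minimal Python toy in which a bigram count table stands in for the learned distribution. A real GPT uses a neural network over subword tokens and a vastly larger corpus; the tiny corpus below is invented for illustration.

```python
# Toy sketch of the "next word prediction" loop behind LLM text generation.
# A bigram count table stands in for the learned probability distribution.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    if not counts:
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# "Generation": repeatedly feed the prediction back in as the new context.
word, output = "the", ["the"]
for _ in range(8):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))
```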

Somewhere around a year ago, I encountered the literature around these LLMs, and I've gone back and forth over whether they could possibly replicate human reasoning. On one hand, I'm persuaded by the Wittgensteinian interpretation of language, which arguably affirms that LLMs have the capacity for reasoning.

In short, a colloquial version of the argument goes something like this: whenever we are "reasoning", we are just developing various expressions of "reason" based on our language context. Your stubborn relatives can always "win" any debate against you by just changing the definition of words; your math teacher can flunk the entire class by cleverly phrasing a word problem. "Reason" has no meaning alone---it's always bound by some linguistic context that mediates its expression. Now, if we can develop a schema that can capture this linguistic context (e.g. GPTs), we have functionally developed a model that captures reasoning.

I thought this was game-set-match for the LLMs, but I've since encountered a wealth of literature that contradicts this simple argument. Noam Chomsky came out with an opinion piece disputing the reduction of language to a complex statistical engine. Functionally, these neural networks purely attempt to calculate the "probability" of a sentence given only the words before it, a task that doesn't really seem to make much sense on its own. "Biden passed the farm bill on October 22nd, 2023" isn't a more probable sequence of words than "Biden consorts with the aliens". They both obey grammar rules and the other conventions of language, so it's difficult to say that models that calculate the probability of sentences are truly developing a meaningful representation of language itself. Instead, they might just be learning other information from the text database that makes it seem like they truly understand language (and thus reason), when in reality they're just drawing extraneous correlations that suggest the President is more likely to pass a piece of legislation than collude with extraterrestrials.
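
For concreteness, the "probability of a sentence" these models estimate is just the chain rule of probability applied one word at a time, with the network trained to approximate each conditional factor:

\[ P(w_1, \ldots, w_n) = \prod_{i=1}^{n} P(w_i \mid w_1, \ldots, w_{i-1}) \]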

There are some other objections that I've encountered as well. Erik Larson's book speaks to the limitations of inductive systems like machine learning in simulating abductive reasoning. Perfect performance for a machine learning model would be no different from an infinite-dimensional regression: it's just a map of correlations from past data. It will not produce a theory that explains the multi-terabyte hydra of information through the language of causes and effects. There are some interesting arguments later in the book about whether this critique of machine learning can be applied to virtually any scientific field nowadays (most research papers in biomedicine seem to be regurgitations of convoluted statistics), so maybe it says more about the infiltration of myopic data science methodology across the sciences than about AI alone.

I'm still at a loss for who's right. I used to be a complete AI skeptic, but GPT-4 has radically changed my perspective on the matter. There's a good chance that we're just advanced statistical engines, in which case an LLM could simulate reasoning without any problem; but there's also a good chance that we're not, in which case an LLM will just continue to be an excellent auto-complete program, and nothing more. It's safe to say that this will be on my mind for a while.