The Three Sisters of the Blue Mountains, Katoomba, NSW. ©2026 Sidney Jeong, CC BY-SA 4.0.
Discomfort may largely originate as a disincentive mechanism of human cognition, which I think of as a biochemical Bayesian prediction model: observing data that does not fit the model is an error signal, and a failed prediction indicates a lower chance of survival. In this sense, ‘it would be’ and ‘it should be’ are both predictions, just of different kinds.
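To make that framing slightly more concrete, the prediction error can be written as Shannon surprisal, the negative log-probability the model assigned to what was actually observed. The Python sketch below is purely illustrative, with made-up probabilities; I am not claiming this is how the biochemistry actually works.

```python
import math

def surprisal(p_observation: float) -> float:
    """Prediction error as Shannon surprisal: -log p(observation | model).

    A toy stand-in for the 'discomfort' signal: the less likely the model
    thought the observation was, the larger the error.
    """
    return -math.log(p_observation)

# The model expected sunshine with 95% probability; what happens when it rains?
print(surprisal(0.95))  # ~0.05 -> tiny error, barely noticed
print(surprisal(0.05))  # ~3.0  -> large error, felt as discomfort
```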
In my observation, if discomfort is the error response, there are two ways to deal with it:

1. Refining the model to include the unexplained data, either by:
   a. modifying the model itself, or
   b. adding an ad-hoc explanation of why this data is an exception.
2. Ignoring the data and pretending the model describes reality with 100% accuracy.
Any scientist after the mid-20th century would know that option 2 is straight-up unscientific, and that option 1b is only permissible in a limited number of exceptions whose cost does not outweigh the cost of refactoring the model (or, at the very least, excessive use of option 1b eventually leads to a scientific revolution, according to Kuhn). However, because human cognition prioritises frugality, many people prefer option 2, then option 1b, and almost never option 1a.
But there is a small proportion of people whose incentive mechanism prioritises accuracy over frugality, and this group largely overlaps with the neurodivergent population. They sort and recognise data with an algorithm that somewhat resembles merge sort, while the majority of the population runs something more like bubble sort. Merge sort does O(n log n) work even in the best case, so on everyday, low-complexity tasks it is much heavier than bubble sort, whose best-case performance on the usually almost-sorted data of daily life is O(n); bubble sort is, in effect, optimised for everyday, easily predictable data. But as anyone who knows basic polynomial growth could predict, merge sort is far more efficient on complex tasks, because bubble sort's worst-case performance is O(n²), which means it literally degrades quadratically.
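To make the analogy concrete on the complexity side (this illustrates only the sorting algorithms, not cognition, and the comparison-counting harness is my own invention), the sketch below counts comparisons for both algorithms on an almost-sorted list and on a shuffled one:

```python
import random

def bubble_sort(a):
    """Bubble sort with early exit: ~O(n) on almost-sorted input, O(n^2) worst case."""
    a = list(a)
    comparisons = 0
    for i in range(len(a) - 1, 0, -1):
        swapped = False
        for j in range(i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:  # a full pass with no swaps: already sorted, stop early
            break
    return a, comparisons

def merge_sort(a):
    """Merge sort: ~n log n comparisons regardless of how sorted the input already is."""
    comparisons = 0

    def sort(xs):
        nonlocal comparisons
        if len(xs) <= 1:
            return xs
        mid = len(xs) // 2
        left, right = sort(xs[:mid]), sort(xs[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            comparisons += 1
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    return sort(list(a)), comparisons

n = 2000
almost_sorted = list(range(n))
almost_sorted[100], almost_sorted[101] = almost_sorted[101], almost_sorted[100]
shuffled = random.sample(range(n), n)

for name, data in [("almost sorted", almost_sorted), ("shuffled", shuffled)]:
    _, b = bubble_sort(data)
    _, m = merge_sort(data)
    print(f"{name:>13}: bubble sort {b:>9,} comparisons, merge sort {m:>7,} comparisons")
```

On the almost-sorted input, bubble sort's early exit finishes in roughly two passes while merge sort still pays its full n log n cost; on the shuffled input the positions reverse dramatically.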
In other words, viewed from a different angle, some people prioritise the sensitivity of their model at a slight cost in specificity: even though they must verify and update their findings to rule out false positives, they find hidden positives much faster and thus become the pioneering model. The majority of people, on the other hand, prioritise the specificity of their model at a slight cost in sensitivity: they only accept what is almost certainly positive, even if that means they produce a lot of false negatives.
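For readers who do not spend their days in confusion matrices, sensitivity and specificity are simple ratios, and the toy counts below (invented purely for illustration) show the trade-off I mean:

```python
def sensitivity(tp, fn):
    """True positive rate: the share of actual positives the model catches."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: the share of actual negatives the model correctly rejects."""
    return tn / (tn + fp)

# Hypothetical counts for 100 actual positives and 100 actual negatives.
# "Merge sort" style: flags aggressively, then verifies the flagged cases.
print(sensitivity(tp=95, fn=5), specificity(tn=70, fp=30))  # 0.95, 0.70
# "Bubble sort" style: only accepts what is almost certainly positive.
print(sensitivity(tp=60, fn=40), specificity(tn=98, fp=2))  # 0.60, 0.98
```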
I would argue that focusing on sensitivity is the better strategy, because you can always update and verify the findings afterwards, though that may be my own bias speaking, since I am like that myself. Moreover, I do acknowledge that not everyone has enough resources to run that algorithm in every moment of life, let alone the verification process: it is excruciatingly tiring, and when verification is skipped owing to resource scarcity, a lot of false positives are left unsanitised.
The problem of how to reconcile the two types of algorithms remains. The merge sort population has probably survived because it eventually confers an evolutionary benefit at the species level, especially in hunter-gatherer societies. But modern capitalist society (including state-driven capitalism under the false name of communism) needs you to be a replaceable part, so the bubble sort model is much preferred. And 21st-century late capitalism, after the neoliberal wave, goes even further and actively discourages the merge sort population, gaslighting them into believing bubble sort is the only correct way to sort data, because the merge sort population increases the risk that its exploitative nature will be seen and named correctly.
And that is where the discomfort comes back in. They stimulate your discomfort and encourage you to choose option 2 above, so they can erase the existence of the marginalised population without having blood on their hands. We need to examine whether our discomfort is really grounded in empirical fact, but as we saw above, the distribution of resources makes that examination hard to deploy universally. And I believe that is the foundational problem the egalitarian ideal tried to address, the one that would enable us to address the upper-layer algorithmic problems.