Everything on this site, organized by portal.
In a national claims database of over 113 million adult hospitalizations, a deep learning model that aggregates ICD-10-CM diagnosis codes with a permutation-invariant Deep Sets encoder improved prediction of 30-day unplanned readmission (AUC 0.7496 vs 0.6553 for the Charlson Comorbidity Index) and 30-day post-discharge in-hospital mortality (AUC 0.8557 vs 0.7844 for the age-adjusted CCI), outperforming both Charlson and Elixhauser comorbidity-index benchmarks.
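To make the aggregation step concrete, here is a minimal sketch of why a sum-pooled Deep Sets encoder is invariant to the ordering of a patient's diagnosis codes. This is not the paper's model: the embedding table, dimensions, and code indices below are toy values invented for illustration.

```c
#include <stdio.h>

#define EMB_DIM 4  /* toy embedding width; the actual model is far larger */
#define VOCAB   3  /* toy stand-in for the ICD-10-CM vocabulary */

/* Hypothetical embedding table: one learned row per diagnosis code. */
static const double emb[VOCAB][EMB_DIM] = {
    { 0.1, -0.2,  0.3,  0.0},
    { 0.5,  0.1, -0.4,  0.2},
    {-0.3,  0.6,  0.1, -0.1},
};

/* Deep Sets aggregation: embed each code, then pool with an elementwise
   sum. Because addition is commutative, the pooled vector is identical
   for any ordering of the codes -- the permutation invariance the
   abstract refers to. A downstream network then scores the pooled vector. */
static void pool_codes(const int *codes, int n, double out[EMB_DIM]) {
    for (int d = 0; d < EMB_DIM; d++) out[d] = 0.0;
    for (int i = 0; i < n; i++)
        for (int d = 0; d < EMB_DIM; d++)
            out[d] += emb[codes[i]][d];
}

int main(void) {
    int admission_a[] = {0, 2, 1};  /* same codes, different order */
    int admission_b[] = {1, 0, 2};
    double pa[EMB_DIM], pb[EMB_DIM];
    pool_codes(admission_a, 3, pa);
    pool_codes(admission_b, 3, pb);
    for (int d = 0; d < EMB_DIM; d++)
        printf("%+.2f %+.2f\n", pa[d], pb[d]);  /* columns match exactly */
    return 0;
}
```

The design point: because the pooled representation ignores code order, the model needs no canonical sorting of diagnosis lists and handles admissions with any number of codes.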
We systematically decompose the sources of SIMD speedup for ML-KEM (Kyber) on Intel x86-64 AVX2. By benchmarking four compilation variants, we demonstrate that GCC’s auto-vectorizer provides negligible benefit, and that hand-written AVX2 assembly delivers a – performance increase for core arithmetic operations. This drives an end-to-end KEM speedup of –.
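As a rough illustration of the kind of core arithmetic that hand-written AVX2 accelerates in ML-KEM, the sketch below contrasts a scalar coefficient-wise polynomial addition with a 16-way vectorized counterpart. This is an assumption-laden sketch, not the benchmarked implementation: the function names are mine, and modular reduction is deferred (lazy), as the reference code does for `poly_add`.

```c
#include <immintrin.h>  /* AVX2 intrinsics; compile with -O2 -mavx2 */
#include <stdint.h>
#include <stdio.h>

#define KYBER_N 256  /* ML-KEM polynomials: 256 int16_t coefficients */

/* Scalar baseline: one coefficient per iteration, reduction deferred. */
static void poly_add_scalar(int16_t r[KYBER_N], const int16_t a[KYBER_N],
                            const int16_t b[KYBER_N]) {
    for (int i = 0; i < KYBER_N; i++)
        r[i] = (int16_t)(a[i] + b[i]);
}

/* AVX2 variant: 16 coefficients per 256-bit register, so the whole
   polynomial takes 16 vector additions instead of 256 scalar ones. */
static void poly_add_avx2(int16_t r[KYBER_N], const int16_t a[KYBER_N],
                          const int16_t b[KYBER_N]) {
    for (int i = 0; i < KYBER_N; i += 16) {
        __m256i va = _mm256_loadu_si256((const __m256i *)&a[i]);
        __m256i vb = _mm256_loadu_si256((const __m256i *)&b[i]);
        _mm256_storeu_si256((__m256i *)&r[i], _mm256_add_epi16(va, vb));
    }
}

int main(void) {
    int16_t a[KYBER_N], b[KYBER_N], r1[KYBER_N], r2[KYBER_N];
    for (int i = 0; i < KYBER_N; i++) { a[i] = (int16_t)i; b[i] = (int16_t)(2 * i); }
    poly_add_scalar(r1, a, b);
    poly_add_avx2(r2, a, b);
    for (int i = 0; i < KYBER_N; i++)
        if (r1[i] != r2[i]) { puts("mismatch"); return 1; }
    puts("scalar and AVX2 results agree");
    return 0;
}
```

The large measured gains come from heavier kernels such as the NTT and Keccak, but the pattern is the same: replacing per-coefficient loops with wide vector operations that the auto-vectorizer fails to produce on its own.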
A compendium of years of informal, empirical experiments aimed at extending the efficacy of Anki beyond rote memorization to more intricate levels of learning.
Notes from Underground is widely admired as a cornerstone of literature, culture, and philosophy. This paper argues that its primary philosophical undercurrent is a rejection of logic and science as the be-all and end-all of modern life, traced through comparison with the more explicitly argued works of Dostoevsky’s contemporaries and successors: Nietzsche, Heidegger, Shestov, Ellul, Sartre, Camus, Husserl, and Arendt.
I met a traveller from an antique land, / Who said — “Two vast and trunkless legs of stone / Stand in the desert.”
Like as the waves make towards the pebbled shore, / So do our minutes hasten to their end.
AI labs are likely deliberately reluctant to scale because they are aware that any imminent shift to locally run models as the norm would render their compute redundant. We take Anthropic as a principal case study to validate this hypothesis.