- Weak lensing in the blue: a counter-intuitive strategy for stratospheric observations The statistical power of weak lensing measurements is principally driven by the number of high redshift galaxies whose shapes are resolved. Conventional wisdom and physical intuition suggest this is optimised by deep imaging at long (red or near IR) wavelengths, to avoid losing redshifted Balmer break and Lyman break galaxies. We use the synthetic Emission Line EL-COSMOS catalogue to simulate lensing observations using different filters, from various altitudes. Here we predict the number of exposures needed to achieve a target z > 0.3 source density, using off-the-shelf and custom filters. Ground-based observations are easily better at red wavelengths, as (more narrowly) are space-based observations. However, we find that SuperBIT, a diffraction-limited observatory operating in the stratosphere, should instead perform its lensing-quality observations at blue wavelengths. 27 authors · Oct 17, 2022
- Mangosteen: An Open Thai Corpus for Language Model Pretraining Pre-training data shapes a language model's quality, but raw web text is noisy and demands careful cleaning. Existing large-scale corpora rely on English-centric or language-agnostic pipelines whose heuristics do not capture Thai script or cultural nuances, leaving risky material such as gambling content untreated. Prior Thai-specific efforts customize pipelines or build new ones, yet seldom release their data or document design choices, hindering reproducibility and raising the question of how to construct a transparent, high-quality Thai corpus. We introduce Mangosteen: a 47 billion-token Thai corpus built through a Thai-adapted Dolma pipeline that includes custom rule-based language ID, revised C4/Gopher quality filters, and Thai-trained content filters, plus curated non-web sources such as Wikipedia, Royal Gazette texts, OCR-extracted books, and CC-licensed YouTube subtitles. Systematic ablations using GPT-2 show the pipeline trims CommonCrawl from 202M to 25M documents while raising SEA-HELM NLG from 3 to 11; an 8B-parameter SEA-LION model continually pre-trained on Mangosteen then surpasses SEA-LION-v3 and Llama-3.1 by about four points on Thai benchmarks. We release the full pipeline code, cleaning manifests, corpus snapshot, and all checkpoints, providing a fully reproducible foundation for future Thai and regional LLM research. 7 authors · Jul 19