OrangeBot.AI Digest — 2026-01-14

52 headlines across 8 sources, aggregated for this day.

Hacker News (15)

  1. Claude Cowork Exfiltrates Files (www.promptarmor.com)
  2. Ford F-150 Lightning outsold the Cybertruck and was then canceled for poor sales (electrek.co)
  3. Ask HN: Share your personal website
  4. So, you’ve hit an age gate. What now? (www.eff.org)
  5. Roam 50GB is now Roam 100GB (starlink.com)
  6. GitHub should charge everyone $1 more per month to fund open source (blog.greg.technology)
  7. FBI raids Washington Post reporter's home (www.theguardian.com)
  8. Why some clothes shrink in the wash and how to unshrink them (www.swinburne.edu.au)
  9. SparkFun Officially Dropping AdaFruit due to CoC Violation (www.sparkfun.com)
  10. Is Rust faster than C? (steveklabnik.com)
  11. I hate GitHub Actions with passion (xlii.space)
  12. I’m leaving Redis for SolidQueue (www.simplethread.com)
  13. 1000 Blank White Cards (en.wikipedia.org)
  14. The Gleam Programming Language (gleam.run)
  15. There's a ridiculous amount of tech in a disposable vape (blog.jgc.org)

GitHub Trending (7)

  1. obra / superpowers

    Claude Code superpowers: core skills library

  2. twitter / the-algorithm

    Source code for the X Recommendation Algorithm

  3. dev-sec / ansible-collection-hardening

    This Ansible collection provides battle tested hardening for Linux, SSH, nginx, MySQL

  4. mudler / LocalAI

    🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more. Features: Generate Text, MCP, Audio, Video, Images, Voice Cloning, Distributed, P2P and decentralized inference

  5. grab / cursor-talk-to-figma-mcp

    TalkToFigma: MCP integration between Cursor and Figma, allowing Cursor Agentic AI to communicate with Figma for reading designs and modifying them programmatically.

  6. zoicware / RemoveWindowsAI

    Force Remove Copilot, Recall and More in Windows 11

  7. rancher / rancher

    Complete container management platform

Hugging Face (15)

  1. MemGovern: Enhancing Code Agents through Learning from Governed Human Experiences

    While autonomous software engineering (SWE) agents are reshaping programming paradigms, they currently suffer from a "closed-world" limitation: they attempt to fix bugs from scratch or solely using local context, ignoring the immense historical human experience available on platforms like GitHub. Accessing this open-world experience is hindered by the unstructured and fragmented nature of real-world issue-tracking data. In this paper, we introduce MemGovern, a framework designed to govern and transform raw GitHub data into actionable experiential memory for agents. MemGovern employs experience governance to convert human experience into agent-friendly experience cards and introduces an agentic experience search strategy that enables logic-driven retrieval of human expertise. By producing 135K governed experience cards, MemGovern achieves a significant performance boost, improving resolution rates on the SWE-bench Verified by 4.65%. As a plug-in approach, MemGovern provides a solution for agent-friendly memory infrastructure.

  2. Solar Open Technical Report

    We introduce Solar Open, a 102B-parameter bilingual Mixture-of-Experts language model for underserved languages. Solar Open demonstrates a systematic methodology for building competitive LLMs by addressing three interconnected challenges. First, to train effectively despite data scarcity for underserved languages, we synthesize 4.5T tokens of high-quality, domain-specific, and RL-oriented data. Second, we coordinate this data through a progressive curriculum jointly optimizing composition, quality thresholds, and domain coverage across 20 trillion tokens. Third, to enable reasoning capabilities through scalable RL, we apply our proposed framework SnapPO for efficient optimization. Across benchmarks in English and Korean, Solar Open achieves competitive performance, demonstrating the effectiveness of this methodology for underserved language AI development.

  3. KnowMe-Bench: Benchmarking Person Understanding for Lifelong Digital Companions

    Existing long-horizon memory benchmarks mostly use multi-turn dialogues or synthetic user histories, which makes retrieval performance an imperfect proxy for person understanding. We present KnowMe-Bench, a publicly releasable benchmark built from long-form autobiographical narratives, where actions, context, and inner thoughts provide dense evidence for inferring stable motivations and decision principles. KnowMe-Bench reconstructs each narrative into a flashback-aware, time-anchored stream and evaluates models with evidence-linked questions spanning factual recall, subjective state attribution, and principle-level reasoning. Across diverse narrative sources, retrieval-augmented systems mainly improve factual accuracy, while errors persist on temporally grounded explanations and higher-level inferences, highlighting the need for memory mechanisms beyond retrieval. Our data is available at https://github.com/QuantaAlpha/KnowMeBench.

  4. User-Oriented Multi-Turn Dialogue Generation with Tool Use at scale

    The recent paradigm shift toward large reasoning models (LRMs) as autonomous agents has intensified the demand for sophisticated, multi-turn tool-use capabilities. Yet, existing datasets and data-generation approaches are limited by static, predefined toolsets that cannot scale to the complexity of open-ended human-agent collaboration. To address this, we initially developed a framework for automated task-oriented multi-turn dialogue generation at scale, utilizing an LRM-based simulator to dynamically generate high-value, domain-specific tools to solve specified tasks. However, we observe that a purely task-oriented design often results in "solely task-solving" trajectories, where the agent completes the objective with minimal interaction, failing to generate the high turn-count conversations seen in realistic scenarios. To bridge this gap, we shift toward a user-oriented simulation paradigm. By decoupling task generation from a dedicated user simulator that mimics human behavioral rules - such as incremental request-making and turn-by-turn feedback - we facilitate more authentic, extended multi-turn dialogues that reflect the iterative nature of real-world problem solving. Our generation pipeline operates as a versatile, plug-and-play module capable of initiating generation from any state, ensuring high scalability in producing extended tool-use data. Furthermore, by facilitating multiple task completions within a single trajectory, it yields a high-density dataset that reflects the multifaceted demands of real-world human-agent interaction.

  5. ShowUI-π: Flow-based Generative Models as GUI Dexterous Hands

    Building intelligent agents capable of dexterous manipulation is essential for achieving human-like automation in both robotics and digital environments. However, existing GUI agents rely on discrete click predictions (x,y), which prohibits free-form, closed-loop trajectories (e.g. dragging a progress bar) that require continuous, on-the-fly perception and adjustment. In this work, we develop ShowUI-π, the first flow-based generative model as GUI dexterous hand, featuring the following designs: (i) Unified Discrete-Continuous Actions, integrating discrete clicks and continuous drags within a shared model, enabling flexible adaptation across diverse interaction modes; (ii) Flow-based Action Generation for drag modeling, which predicts incremental cursor adjustments from continuous visual observations via a lightweight action expert, ensuring smooth and stable trajectories; (iii) Drag Training data and Benchmark, where we manually collect and synthesize 20K drag trajectories across five domains (e.g. PowerPoint, Adobe Premiere Pro), and introduce ScreenDrag, a benchmark with comprehensive online and offline evaluation protocols for assessing GUI agents' drag capabilities. Our experiments show that proprietary GUI agents still struggle on ScreenDrag (e.g. Operator scores 13.27, and the best Gemini-2.5-CUA reaches 22.18). In contrast, ShowUI-π achieves 26.98 with only 450M parameters, underscoring both the difficulty of the task and the effectiveness of our approach. We hope this work advances GUI agents toward human-like dexterous control in digital world. The code is available at https://github.com/showlab/showui-pi.

  6. MemoBrain: Executive Memory as an Agentic Brain for Reasoning

    Complex reasoning in tool-augmented agent frameworks is inherently long-horizon, causing reasoning traces and transient tool artifacts to accumulate and strain the bounded working context of large language models. Without explicit memory mechanisms, such accumulation disrupts logical continuity and undermines task alignment. This positions memory not as an auxiliary efficiency concern, but as a core component for sustaining coherent, goal-directed reasoning over long horizons. We propose MemoBrain, an executive memory model for tool-augmented agents that constructs a dependency-aware memory over reasoning steps, capturing salient intermediate states and their logical relations. Operating as a co-pilot alongside the reasoning agent, MemoBrain organizes reasoning progress without blocking execution and actively manages the working context. Specifically, it prunes invalid steps, folds completed sub-trajectories, and preserves a compact, high-salience reasoning backbone under a fixed context budget. Together, these mechanisms enable explicit cognitive control over reasoning trajectories rather than passive context accumulation. We evaluate MemoBrain on challenging long-horizon benchmarks, including GAIA, WebWalker, and BrowseComp-Plus, demonstrating consistent improvements over strong baselines.
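
    The context-management loop described above (prune invalid steps, fold completed sub-trajectories, keep a high-salience backbone under a fixed budget) can be caricatured in a few lines. Everything here, including the step schema and the `salience` field, is a hypothetical illustration of the idea, not MemoBrain's actual mechanism:

```python
def compact_context(steps, budget):
    """Toy executive-memory compaction: drop steps marked invalid,
    fold finished sub-tasks into a single summary entry, and keep
    the most salient remainder within a fixed context budget.
    (Illustrative only; MemoBrain's real policies are more involved.)"""
    kept = [s for s in steps if not s.get("invalid")]
    folded, backbone = [], []
    for s in kept:
        if s.get("done_subtask"):
            folded.append(s["summary"])       # completed sub-trajectory
        else:
            backbone.append(s)
    if folded:
        # one compact entry replaces every folded sub-trajectory
        backbone.insert(0, {"summary": "; ".join(folded), "salience": 1.0})
    backbone.sort(key=lambda s: s.get("salience", 0), reverse=True)
    return backbone[:budget]                  # enforce the context budget

ctx = compact_context(
    [{"summary": "parsed query", "done_subtask": True},
     {"summary": "bad tool call", "invalid": True},
     {"summary": "key fact found", "salience": 0.9},
     {"summary": "scratch note", "salience": 0.1}],
    budget=2)
```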

  7. ArenaRL: Scaling RL for Open-Ended Agents via Tournament-based Relative Ranking

    Reinforcement learning has substantially improved the performance of LLM agents on tasks with verifiable outcomes, but it still struggles on open-ended agent tasks with vast solution spaces (e.g., complex travel planning). Due to the absence of objective ground-truth for these tasks, current RL algorithms largely rely on reward models that assign scalar scores to individual responses. We contend that such pointwise scoring suffers from an inherent discrimination collapse: the reward model struggles to distinguish subtle advantages among different trajectories, resulting in scores within a group being compressed into a narrow range. Consequently, the effective reward signal becomes dominated by noise from the reward model, leading to optimization stagnation. To address this, we propose ArenaRL, a reinforcement learning paradigm that shifts from pointwise scalar scoring to intra-group relative ranking. ArenaRL introduces a process-aware pairwise evaluation mechanism, employing multi-level rubrics to assign fine-grained relative scores to trajectories. Additionally, we construct an intra-group adversarial arena and devise a tournament-based ranking scheme to obtain stable advantage signals. Empirical results confirm that the built seeded single-elimination scheme achieves nearly equivalent advantage estimation accuracy to full pairwise comparisons with O(N^2) complexity, while operating with only O(N) complexity, striking an optimal balance between efficiency and precision. Furthermore, to address the lack of full-cycle benchmarks for open-ended agents, we build Open-Travel and Open-DeepResearch, two high-quality benchmarks featuring a comprehensive pipeline covering SFT, RL training, and multi-dimensional evaluation. Extensive experiments show that ArenaRL substantially outperforms standard RL baselines, enabling LLM agents to generate more robust solutions for complex real-world tasks.
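
    The complexity claim above is easy to see concretely: a seeded single-elimination bracket needs exactly N-1 pairwise judgments to rank a group of N trajectories, versus N(N-1)/2 for full pairwise comparison. A toy sketch, with an arbitrary comparator standing in for the paper's rubric-based pairwise judge:

```python
def tournament_rank(trajectories, judge):
    """Seeded single-elimination: rank a group with O(N) pairwise calls.
    `judge(a, b)` returns the preferred trajectory; here any comparator
    stands in for ArenaRL's rubric-based LLM judge."""
    comparisons = 0
    rank = {}                         # trajectory -> elimination round
    alive, round_no = list(trajectories), 0
    while len(alive) > 1:
        nxt = []
        for a, b in zip(alive[::2], alive[1::2]):
            winner = judge(a, b)
            comparisons += 1
            rank[b if winner == a else a] = round_no  # loser exits here
            nxt.append(winner)
        if len(alive) % 2:            # odd one out gets a bye
            nxt.append(alive[-1])
        alive, round_no = nxt, round_no + 1
    rank[alive[0]] = round_no         # champion survives every round
    return rank, comparisons

# toy judge: prefer the numerically larger "trajectory score"
ranks, used = tournament_rank([3, 1, 4, 5, 9, 2, 6, 8],
                              lambda a, b: max(a, b))
```

    With 8 trajectories the bracket spends 7 comparisons instead of 28, and the elimination round gives each trajectory a relative rank usable as an advantage signal.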

  8. Ministral 3

    We introduce the Ministral 3 series, a family of parameter-efficient dense language models designed for compute and memory constrained applications, available in three model sizes: 3B, 8B, and 14B parameters. For each model size, we release three variants: a pretrained base model for general-purpose use, an instruction finetuned, and a reasoning model for complex problem-solving. In addition, we present our recipe to derive the Ministral 3 models through Cascade Distillation, an iterative pruning and continued training with distillation technique. Each model comes with image understanding capabilities, all under the Apache 2.0 license.

  9. The Confidence Dichotomy: Analyzing and Mitigating Miscalibration in Tool-Use Agents

    Autonomous agents based on large language models (LLMs) are rapidly evolving to handle multi-turn tasks, but ensuring their trustworthiness remains a critical challenge. A fundamental pillar of this trustworthiness is calibration, which refers to an agent's ability to express confidence that reliably reflects its actual performance. While calibration is well-established for static models, its dynamics in tool-integrated agentic workflows remain underexplored. In this work, we systematically investigate verbalized calibration in tool-use agents, revealing a fundamental confidence dichotomy driven by tool type. Specifically, our pilot study identifies that evidence tools (e.g., web search) systematically induce severe overconfidence due to inherent noise in retrieved information, while verification tools (e.g., code interpreters) can ground reasoning through deterministic feedback and mitigate miscalibration. To robustly improve calibration across tool types, we propose a reinforcement learning (RL) fine-tuning framework that jointly optimizes task accuracy and calibration, supported by a holistic benchmark of reward designs. We demonstrate that our trained agents not only achieve superior calibration but also exhibit robust generalization from local training environments to noisy web settings and to distinct domains such as mathematical reasoning. Our results highlight the necessity of domain-specific calibration strategies for tool-use agents. More broadly, this work establishes a foundation for building self-aware agents that can reliably communicate uncertainty in high-stakes, real-world deployments.
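
    Calibration in this sense is commonly measured with expected calibration error (ECE): bin predictions by stated confidence and average the gap between confidence and accuracy. The abstract does not specify the paper's exact metric or reward design, so this is only a generic ECE sketch of what "confidence reflecting performance" means:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by verbalized confidence and average the
    |confidence - accuracy| gap, weighted by bin occupancy."""
    total = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0)]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        acc = sum(correct[i] for i in idx) / len(idx)
        ece += len(idx) / total * abs(avg_conf - acc)
    return ece

# an overconfident agent: claims 0.9 but is right only half the time
ece = expected_calibration_error([0.9, 0.9, 0.9, 0.9], [1, 0, 1, 0])
```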

  10. 3AM: Segment Anything with Geometric Consistency in Videos

    Video object segmentation methods like SAM2 achieve strong performance through memory-based architectures but struggle under large viewpoint changes due to reliance on appearance features. Traditional 3D instance segmentation methods address viewpoint consistency but require camera poses, depth maps, and expensive preprocessing. We introduce 3AM, a training-time enhancement that integrates 3D-aware features from MUSt3R into SAM2. Our lightweight Feature Merger fuses multi-level MUSt3R features that encode implicit geometric correspondence. Combined with SAM2's appearance features, the model achieves geometry-consistent recognition grounded in both spatial position and visual similarity. We propose a field-of-view aware sampling strategy ensuring frames observe spatially consistent object regions for reliable 3D correspondence learning. Critically, our method requires only RGB input at inference, with no camera poses or preprocessing. On challenging datasets with wide-baseline motion (ScanNet++, Replica), 3AM substantially outperforms SAM2 and extensions, achieving 90.6% IoU and 71.7% Positive IoU on ScanNet++'s Selected Subset, improving over state-of-the-art VOS methods by +15.9 and +30.4 points. Project page: https://jayisaking.github.io/3AM-Page/

  11. Parallel Context-of-Experts Decoding for Retrieval Augmented Generation

    Retrieval Augmented Generation faces a trade-off: concatenating documents in a long prompt enables multi-document reasoning but creates prefill bottlenecks, while encoding document KV caches separately offers speed but breaks cross-document interaction. We propose Parallel Context-of-Experts Decoding (Pced), a training-free framework that shifts evidence aggregation from the attention mechanism to the decoding. Pced treats retrieved documents as isolated "experts", synchronizing their predictions via a novel retrieval-aware contrastive decoding rule that weighs expert logits against the model prior. This approach recovers cross-document reasoning capabilities without constructing a shared attention across documents.
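
    The abstract does not spell out the decoding rule, but the general shape of contrastive aggregation of per-expert logits against a no-context prior can be sketched as follows; the additive combination and the `alpha` weight are assumptions for illustration, not the paper's actual formula:

```python
import math

def pced_step(expert_logits, prior_logits, alpha=1.0):
    """Hypothetical aggregation for one decoding step: sum per-expert
    logits and subtract alpha times the no-context prior, then
    softmax-normalize. (Pced's retrieval-aware rule may differ.)"""
    vocab = len(prior_logits)
    combined = [sum(e[v] for e in expert_logits) - alpha * prior_logits[v]
                for v in range(vocab)]
    m = max(combined)                      # stabilize the softmax
    exps = [math.exp(c - m) for c in combined]
    z = sum(exps)
    return [e / z for e in exps]

# two document "experts" agree on token 2; the prior prefers token 0
probs = pced_step([[0.1, 0.2, 2.0], [0.0, 0.1, 1.8]], [1.5, 0.0, 0.0])
best = probs.index(max(probs))
```

    Because each expert attends only to its own document, the cross-document agreement is recovered at the logit level rather than through shared attention.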

  12. ViDoRe V3: A Comprehensive Evaluation of Retrieval Augmented Generation in Complex Real-World Scenarios

    Retrieval-Augmented Generation (RAG) pipelines must address challenges beyond simple single-document retrieval, such as interpreting visual elements (tables, charts, images), synthesizing information across documents, and providing accurate source grounding. Existing benchmarks fail to capture this complexity, often focusing on textual data, single-document comprehension, or evaluating retrieval and generation in isolation. We introduce ViDoRe v3, a comprehensive multimodal RAG benchmark featuring multi-type queries over visually rich document corpora. It covers 10 datasets across diverse professional domains, comprising ~26,000 document pages paired with 3,099 human-verified queries, each available in 6 languages. Through 12,000 hours of human annotation effort, we provide high-quality annotations for retrieval relevance, bounding box localization, and verified reference answers. Our evaluation of state-of-the-art RAG pipelines reveals that visual retrievers outperform textual ones, late-interaction models and textual reranking substantially improve performance, and hybrid or purely visual contexts enhance answer generation quality. However, current models still struggle with non-textual elements, open-ended queries, and fine-grained visual grounding. To encourage progress in addressing these challenges, the benchmark is released under a commercially permissive license at https://hf.co/vidore.

  13. SnapGen++: Unleashing Diffusion Transformers for Efficient High-Fidelity Image Generation on Edge Devices

    Recent advances in diffusion transformers (DiTs) have set new standards in image generation, yet remain impractical for on-device deployment due to their high computational and memory costs. In this work, we present an efficient DiT framework tailored for mobile and edge devices that achieves transformer-level generation quality under strict resource constraints. Our design combines three key components. First, we propose a compact DiT architecture with an adaptive global-local sparse attention mechanism that balances global context modeling and local detail preservation. Second, we propose an elastic training framework that jointly optimizes sub-DiTs of varying capacities within a unified supernetwork, allowing a single model to dynamically adjust for efficient inference across different hardware. Finally, we develop Knowledge-Guided Distribution Matching Distillation, a step-distillation pipeline that integrates the DMD objective with knowledge transfer from few-step teacher models, producing high-fidelity and low-latency generation (e.g., 4-step) suitable for real-time on-device use. Together, these contributions enable scalable, efficient, and high-quality diffusion models for deployment on diverse hardware.

  14. VLingNav: Embodied Navigation with Adaptive Reasoning and Visual-Assisted Linguistic Memory

    VLA models have shown promising potential in embodied navigation by unifying perception and planning while inheriting the strong generalization abilities of large VLMs. However, most existing VLA models rely on reactive mappings directly from observations to actions, lacking the explicit reasoning capabilities and persistent memory required for complex, long-horizon navigation tasks. To address these challenges, we propose VLingNav, a VLA model for embodied navigation grounded in linguistic-driven cognition. First, inspired by the dual-process theory of human cognition, we introduce an adaptive chain-of-thought mechanism, which dynamically triggers explicit reasoning only when necessary, enabling the agent to fluidly switch between fast, intuitive execution and slow, deliberate planning. Second, to handle long-horizon spatial dependencies, we develop a visual-assisted linguistic memory module that constructs a persistent, cross-modal semantic memory, enabling the agent to recall past observations to prevent repetitive exploration and infer movement trends for dynamic environments. For the training recipe, we construct Nav-AdaCoT-2.9M, the largest embodied navigation dataset with reasoning annotations to date, enriched with adaptive CoT annotations that induce a reasoning paradigm capable of adjusting both when to think and what to think about. Moreover, we incorporate an online expert-guided reinforcement learning stage, enabling the model to surpass pure imitation learning and to acquire more robust, self-explored navigation behaviors. Extensive experiments demonstrate that VLingNav achieves state-of-the-art performance across a wide range of embodied navigation benchmarks. Notably, VLingNav transfers to real-world robotic platforms in a zero-shot manner, executing various navigation tasks and demonstrating strong cross-domain and cross-task generalization.

  15. Motion Attribution for Video Generation

    Despite the rapid progress of video generation models, the role of data in influencing motion is poorly understood. We present Motive (MOTIon attribution for Video gEneration), a motion-centric, gradient-based data attribution framework that scales to modern, large, high-quality video datasets and models. We use this to study which fine-tuning clips improve or degrade temporal dynamics. Motive isolates temporal dynamics from static appearance via motion-weighted loss masks, yielding efficient and scalable motion-specific influence computation. On text-to-video models, Motive identifies clips that strongly affect motion and guides data curation that improves temporal consistency and physical plausibility. With Motive-selected high-influence data, our method improves both motion smoothness and dynamic degree on VBench, achieving a 74.1% human preference win rate compared with the pretrained base model. To our knowledge, this is the first framework to attribute motion rather than visual appearance in video generative models and to use it to curate fine-tuning data.
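
    A motion-weighted loss mask can be illustrated in a few lines: weight per-pixel error by the target's frame-to-frame change so static background drops out of the signal. This is a guess at the general idea only, not Motive's actual gradient-based attribution pipeline:

```python
def motion_weighted_loss(pred, target):
    """Weight per-pixel reconstruction error by frame-to-frame change
    in the target, so moving regions dominate and static background
    contributes little. A sketch of the motion-masking idea."""
    # pred/target: lists of frames, each frame a flat list of pixels
    loss = weight_sum = 0.0
    for t in range(1, len(target)):
        for p, q, prev in zip(pred[t], target[t], target[t - 1]):
            w = abs(q - prev)          # motion proxy: temporal difference
            loss += w * (p - q) ** 2
            weight_sum += w
    return loss / weight_sum if weight_sum else 0.0

moving = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]   # pixels change over time
static = [[0.5, 0.5]] * 3                        # no motion anywhere
zero_loss = motion_weighted_loss(moving, static) # static target: no weight
```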

Solidot (15)

  1. Memory Shortage Hits AI PCs

    Memory prices have soared, and the rise is expected to continue into 2027. Major PC makers are already considering cutting memory in mid-range and low-end PCs, for example from 32GB of DDR5 to 16GB or even 8GB. The higher cost of memory and storage will ultimately be passed on to consumers. The shortage has also hit the AI PC concept: if memory specs drop, on-device AI features suffer, which is why few people talk about AI PCs these days. IDC manager Jitesh Ubran expects memory prices to stabilize no earlier than 2027, with ample supply taking longer still.

  2. Iran's Nationwide Internet Blackout Reaches Six Days

    According to monitoring by NetBlocks and Cloudflare Radar, Iran has been under a nationwide internet blackout for six days. Since December 2025, Iran has seen a wave of protests triggered by public anger over soaring inflation, rising food prices, and the sharp devaluation of the rial. The demonstrations, initially launched by shopkeepers and market vendors, have grown steadily since the new year. According to Wikipedia, more than ten thousand people have died and over 16,000 have been arrested.

  3. Arctic Wildfires Are on the Rise

    NASA researchers report that the number of Arctic wildfires is rising. Compared with past decades, the fires are larger, hotter, and longer-lasting, a trend closely tied to climate change in the region. The Arctic is warming nearly four times faster than the global average, and declining rain and snowfall and drier soils make the land surface more flammable. Lightning is the main ignition source for Arctic wildfires, and lightning frequency in the Arctic is increasing as well. The average burned area in the North American Arctic is now roughly twice that of the mid-20th century. Low-intensity fires usually do little damage, but high-intensity fires can be severe.

  4. US Conditionally Lifts Ban on AI Semiconductor Exports to China

    On Tuesday the US Commerce Department proposed new rules for AI semiconductor export controls that would allow some AI chips to be exported to China subject to licensing. The scheme chiefly targets Western companies doing business in China; exports to Chinese companies headquartered in mainland China or Macau remain "prohibited in principle." The department would cap exporters such as Nvidia at shipping to China no more than 50% of their total US domestic shipments. The rules also require exporters to prioritize orders from domestic US AI chip buyers, with only the domestic surplus available for export to China. The aim is to ensure that even with AI chip exports, China's AI development capability does not surpass that of the US. Chinese customers receiving the products face strict identity-verification requirements, and to prove that exported chips perform at or below the H200 level, samples must pass performance testing by a third-party organization in the US.

  5. Pentagon Finds Device Possibly Linked to Havana Syndrome

    Havana syndrome first drew public attention in 2016, when dozens of diplomats at the US embassy in Havana, Cuba complained of symptoms including migraines, nausea, memory loss, and dizziness. US diplomats, officials, and family members posted in Vienna, Paris, Geneva, and elsewhere later reported similar symptoms. According to a Pentagon investigation, the department spent millions of dollars during the Biden administration to acquire a device that, after testing, it believes may be linked to Havana syndrome. The device contains Russian-made components, though it is not entirely Russian, and is small enough to fit in a backpack. No further details about the device have been disclosed.

  6. Chrome/Chromium Restores JPEG-XL Image Support

    In 2023 Google Chrome removed support for the then-experimental JPEG-XL image format, a new patent-free image format. The move drew heavy criticism: with Chrome/Chromium holding roughly ninety percent market share, Google is the de facto arbiter of web standards. In 2025 the story took a dramatic turn. Google changed its mind and began restoring JPEG-XL support; last December the Chrome/Chromium codebase merged jxl-rs, a JPEG-XL image decoder written in Rust, and this week JPEG-XL decoding based on jxl-rs was enabled by default.

  7. Wine 11.0 Released

    Wine 11.0 has been officially released. Major changes include: a new WoW64 mode that runs 32-bit and even 16-bit applications under 64-bit prefixes in a cleaner way; support for the NTSYNC kernel module in Linux 6.14, significantly improving performance for games and multithreaded applications; a unified Wine binary; support for Vulkan API 1.4.335; hardware-accelerated H.264 decoding via Direct3D; an improved Wine Wayland driver with support for clipboard operations, IMEs, and non-rectangular windows; and more.

  8. Taiwan Issues Arrest Warrant for OnePlus CEO Pete Lau

    Taiwanese media report that OnePlus founder and CEO Pete Lau (Liu Zuohu), also OPPO senior vice president and chief product officer, is accused of setting up a Taiwan branch by routing through Hong Kong without approval from the competent authorities, in reality conducting phone software R&D and talent recruitment for the Shenzhen parent company and using the Hong Kong business identity to evade legal review. Over six years the operation spent US$72.93 million poaching more than 70 top Taiwanese R&D engineers. Prosecutors have indicted two Taiwanese executives, surnamed Zheng and Lin, for violating the act governing cross-strait relations; a separate arrest warrant was issued for Lau. According to the indictment, Lau, chairman of the Shenzhen OnePlus company, reached an agreement with Zheng and Lin to build a Taiwan R&D team: starting in 2014 they founded a "OnePlus" company in Hong Kong, then came to Taiwan the following year as a Hong Kong business to set up a branch (later renamed Shenghe). Lin testified during questioning that at Lau's direction he headed the R&D unit and interviewed and hired the 70-plus Taiwanese engineers. Although the engineers drew salaries from the Taiwan branch, all the software they developed went into OnePlus phones, administrative and financial matters were reported to managers at the Shenzhen parent, and even year-end bonuses required Lau's sign-off.

  9. Firefox 147 Released

    Mozilla has released Firefox 147. Major changes include: WebGPU enabled on Apple Silicon devices; improved video playback performance on AMD GPU systems via zero-copy playback for hardware-decoded video, bringing them on par with Intel and NVIDIA GPU systems; support for the Safe Browsing V5 protocol; local network access restrictions enabled by default for users with Enhanced Tracking Protection (ETP) set to Strict, so websites need explicit user consent to access local network resources; support for the Freedesktop.org XDG Base Directory Specification; improved fractional-scaling rendering in the GNOME desktop environment; fixes for several sandbox escape vulnerabilities; and more.

  10. Why Deaths from Accidental Falls Are Rising

    Do more Americans die in car crashes or from accidental falls? Most people would guess car crashes, but 2023 data show 47,026 deaths from falls versus 44,762 from car crashes. By contrast, in 2000 car crash deaths were three times fall deaths; since then the car crash death rate has fallen 13% while the fall death rate has roughly tripled. Part of the rise is population aging: age is a strong predictor of fall mortality, and from age 40 onward each additional year of age raises the fall death rate by 9-10%. The share of Americans aged 65 and over grew from 12.4% in 2000 to 17.6% in 2023, and the accidental fall death rate for people 85 and older is more than a hundred times that of the 45-54 age group. Other factors include antidepressants and psychiatric medications, alcohol consumption, and rising obesity, all of which increase fall risk.
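
    The per-year figure compounds quickly; taking 9.5% as the midpoint of the 9-10% range, the fall death rate roughly 2.5-folds with every decade of age past 40:

```python
# Compound a 9.5% per-year increase in fall death rate
per_year = 1.095
per_decade = per_year ** 10   # ten years of compounding, roughly 2.5x
```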

  11. How Markdown Took Over the World

    Twenty-two years ago John Gruber released Markdown, a simplified plain-text formatting system meant to free writers from memorizing obscure HTML tags. Markdown has since seeped into nearly every corner of modern computing: Google Docs, Microsoft's Windows Notepad, Slack, WhatsApp, Discord, and Apple Notes have all added support for it. And the input and output of today's red-hot large language models are structured in Markdown.

  12. What Bees Can Teach Us About Talking to Aliens

    Are humans alone in the universe? If aliens exist, what do they look like? More important, if we one day actually receive an alien signal, how do we converse without a shared language? Interstellar distances are staggering: even the nearest star system, Alpha Centauri, is more than 4 light-years away, meaning that under even the most optimistic assumptions a round-trip exchange could take nearly a decade. At such extremes, conventional voice or text communication is all but impossible. Scientists argue we need a universal language intelligible across cultures and species, and mathematics may be the strongest candidate. To test whether mathematics is a cosmic lingua franca, we need not leave Earth: scientists have found an ideal alien stand-in here, the honeybee. Bees diverged from the human lineage 600 million years ago and have a completely different brain structure, yet display remarkable communication and sociality. If two species with entirely different evolutionary paths and vastly different brain sizes, humans and bees, can independently develop mathematical ability, then mathematics is likely not a human invention but an inevitable outcome of intelligence.

  13. High Ultra-Processed Food Intake Linked to Poorer Health

    According to a study published in Clinical Nutrition, consuming large amounts of ultra-processed food is associated with poorer health. Researchers recruited 43 Americans aged 65 and over (36 completed the study), many of whom were overweight or had insulin resistance or high cholesterol. Participants followed two dietary interventions: one centered on animal foods such as lean meat, the other on a vegetarian diet supplemented with eggs and milk. Both diets were low in ultra-processed food, markedly different from the participants' usual fare. During the low ultra-processed phases, participants naturally consumed fewer calories and lost weight, with reductions in both total and abdominal fat. Beyond weight loss, their insulin sensitivity, cholesterol levels, markers of inflammation, and levels of hormones that help regulate appetite and metabolism all improved significantly, with similar results for the meat-based and vegetarian diets. For older adults, cutting ultra-processed food intake can deliver significant health benefits.

  14. China Uses Cyanobacteria to Transform the Desert

    The Chinese Academy of Sciences' Shapotou Desert Research and Experiment Station is converting sand into arable soil by seeding the Tengger Desert with cyanobacteria. The cyanobacteria tolerate heat and drought; once it rains they multiply and spread rapidly, forming a biological soil crust that both stabilizes the dunes and creates conditions suitable for crops. Natural crust formation takes fifteen years to fix the sand; spraying a mix of cyanobacteria solution, organic matter, and fine particulates shortens the process to one or two years, with a survival rate above 60%. It is the first large-scale use of microorganisms to reshape a natural landscape in human history.

  15. Iran Searches For and Seizes Starlink Equipment

    Since the nationwide blackout, Iranians have relied mainly on Starlink terminals to communicate with the outside world and get protest videos out. Starlink itself has suffered severe interference and works only intermittently. Amir Rashidi of the nonprofit Miaan Group says the Iranian government has begun searching for and confiscating Starlink equipment. One Tehran user, interviewed by the WSJ over Starlink, said he uploads protest videos shot by relatives and sends them to third parties abroad, who post them to social media. Starlink connectivity is usually better in the morning or around midday. Starlink terminals are illegal in Iran and are smuggled in; after the last wave of mass protests in 2022, terminals flooded into the country, with groups such as NetFreedom Pioneers shipping thousands of Starlink kits to Iran.