OrangeBot.AI Digest — 2025-09-11

56 headlines across 8 sources, aggregated for this day.

Hacker News (15)

  1. Claude's memory architecture is the opposite of ChatGPT's (www.shloked.com)
  2. AirPods live translation blocked for EU users with EU Apple accounts (www.macrumors.com)
  3. Top model scores may be skewed by Git history leaks in SWE-bench (github.com)
  4. Native ACME support comes to Nginx (letsencrypt.org)
  5. Spiral (spiraldb.com)
  6. The US is now the largest investor in commercial spyware (arstechnica.com)
  7. CRISPR offers new hope for treating diabetes (www.wired.com)
  8. GrapheneOS accessed Android security patches but not allowed to publish sources (grapheneos.social)
  9. Behind the scenes of Bun Install (bun.com)
  10. GrapheneOS and forensic extraction of data (2024) (discuss.grapheneos.org)
  11. Gregg Kellogg has died (lists.w3.org)
  12. Ireland will not participate in Eurovision if Israel takes part (www.rte.ie)
  13. Reshaped is now open source (reshaped.so)
  14. Germany is not supporting ChatControl – blocking minority secured (digitalcourage.social)
  15. Hashed sorting is typically faster than hash tables (reiner.org)

GitHub Trending (15)

  1. Physical-Intelligence / openpi
  2. modelcontextprotocol / registry

    A community driven registry service for Model Context Protocol (MCP) servers.

  3. twitter / the-algorithm

    Source code for the X Recommendation Algorithm

  4. google / material-design-icons

    Material Design icons by Google (Material Symbols)

  5. ccfos / nightingale

    Nightingale is for monitoring and alerting what Grafana is for visualization.

  6. mxrch / GHunt

    🕵️‍♂️ Offensive Google framework.

  7. agno-agi / agno

    High-performance runtime for multi-agent systems. Build, run and manage secure multi-agent systems in your cloud.

  8. trueadm / ripple

    the elegant TypeScript UI framework

  9. aaPanel / BillionMail

    BillionMail gives you open-source MailServer, NewsLetter, Email Marketing — fully self-hosted, dev-friendly, and free from monthly fees. Join the discord: https://discord.gg/asfXzBUhZr

  10. MotiaDev / motia

    Modern Backend Framework that unifies APIs, background jobs, workflows, and AI Agents into a single core primitive with built-in observability and state management.

  11. heroui-inc / heroui

    🚀 Beautiful, fast and modern React UI library. (Previously NextUI)

  12. ClemensElflein / OpenMower

    Let's upgrade cheap off-the-shelf robotic mowers to modern, smart RTK GPS based lawn mowing robots!

  13. epfml / ML_course

    EPFL Machine Learning Course, Fall 2025

  14. NationalSecurityAgency / ghidra

    Ghidra is a software reverse engineering (SRE) framework

  15. supabase / supabase

    The Postgres development platform. Supabase gives you a dedicated Postgres database to build your web, mobile, and AI applications.

Hugging Face (11)

  1. A Survey of Reinforcement Learning for Large Reasoning Models

    In this paper, we survey recent advances in Reinforcement Learning (RL) for reasoning with Large Language Models (LLMs). RL has achieved remarkable success in advancing the frontier of LLM capabilities, particularly in addressing complex logical tasks such as mathematics and coding. As a result, RL has emerged as a foundational methodology for transforming LLMs into LRMs. With the rapid progress of the field, further scaling of RL for LRMs now faces foundational challenges not only in computational resources but also in algorithm design, training data, and infrastructure. To this end, it is timely to revisit the development of this domain, reassess its trajectory, and explore strategies to enhance the scalability of RL toward Artificial SuperIntelligence (ASI). In particular, we examine research applying RL to LLMs and LRMs for reasoning abilities, especially since the release of DeepSeek-R1, including foundational components, core problems, training resources, and downstream applications, to identify future opportunities and directions for this rapidly evolving area. We hope this review will promote future research on RL for broader reasoning models. Github: https://github.com/TsinghuaC3I/Awesome-RL-for-LRMs

  2. RewardDance: Reward Scaling in Visual Generation

    Reward Models (RMs) are critical for improving generation models via Reinforcement Learning (RL), yet the RM scaling paradigm in visual generation remains largely unexplored. This is primarily due to fundamental limitations in existing approaches: CLIP-based RMs suffer from architectural and input modality constraints, while prevalent Bradley-Terry losses are fundamentally misaligned with the next-token prediction mechanism of Vision-Language Models (VLMs), hindering effective scaling. More critically, the RLHF optimization process is plagued by the reward hacking issue, where models exploit flaws in the reward signal without improving true quality. To address these challenges, we introduce RewardDance, a scalable reward modeling framework that overcomes these barriers through a novel generative reward paradigm. By reformulating the reward score as the model's probability of predicting a "yes" token, indicating that the generated image outperforms a reference image according to specific criteria, RewardDance intrinsically aligns reward objectives with VLM architectures. This alignment unlocks scaling across two dimensions: (1) Model Scaling: Systematic scaling of RMs up to 26 billion parameters; (2) Context Scaling: Integration of task-specific instructions, reference examples, and chain-of-thought (CoT) reasoning. Extensive experiments demonstrate that RewardDance significantly surpasses state-of-the-art methods in text-to-image, text-to-video, and image-to-video generation. Crucially, we resolve the persistent challenge of "reward hacking": our large-scale RMs exhibit and maintain high reward variance during RL fine-tuning, proving their resistance to hacking and ability to produce diverse, high-quality outputs. This greatly relieves the mode collapse problem that plagues smaller models.
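    The generative reward described above, scoring a candidate by the judge model's probability of answering "yes", can be pictured with a toy sketch. The logits and token names below are invented for illustration; this is not RewardDance's actual implementation, just the softmax arithmetic the abstract describes:

    ```python
    import math

    def softmax(logits):
        """Numerically stable softmax over a dict of token -> logit."""
        m = max(logits.values())
        exps = {tok: math.exp(x - m) for tok, x in logits.items()}
        z = sum(exps.values())
        return {tok: v / z for tok, v in exps.items()}

    def generative_reward(logits, yes_token="yes"):
        """Reward = the model's probability of predicting the "yes" token,
        i.e. "the candidate image beats the reference on the given criteria"."""
        return softmax(logits)[yes_token]

    # Hypothetical next-token logits from a VLM judge.
    logits = {"yes": 2.0, "no": 0.5, "maybe": -1.0}
    reward = generative_reward(logits)  # a probability in (0, 1)
    ```

    Because the reward is read off the model's own next-token distribution rather than a separate regression head, it scales with the VLM itself, which is the point of the paper's paradigm.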

  3. 3D and 4D World Modeling: A Survey

    World modeling has become a cornerstone in AI research, enabling agents to understand, represent, and predict the dynamic environments they inhabit. While prior work largely emphasizes generative methods for 2D image and video data, they overlook the rapidly growing body of work that leverages native 3D and 4D representations such as RGB-D imagery, occupancy grids, and LiDAR point clouds for large-scale scene modeling. At the same time, the absence of a standardized definition and taxonomy for ``world models'' has led to fragmented and sometimes inconsistent claims in the literature. This survey addresses these gaps by presenting the first comprehensive review explicitly dedicated to 3D and 4D world modeling and generation. We establish precise definitions, introduce a structured taxonomy spanning video-based (VideoGen), occupancy-based (OccGen), and LiDAR-based (LiDARGen) approaches, and systematically summarize datasets and evaluation metrics tailored to 3D/4D settings. We further discuss practical applications, identify open challenges, and highlight promising research directions, aiming to provide a coherent and foundational reference for advancing the field. A systematic summary of existing literature is available at https://github.com/worldbench/survey

  4. AgentGym-RL: Training LLM Agents for Long-Horizon Decision Making through Multi-Turn Reinforcement Learning

    Developing autonomous LLM agents capable of making a series of intelligent decisions to solve complex, real-world tasks is a fast-evolving frontier. Like human cognitive development, agents are expected to acquire knowledge and skills through exploration and interaction with the environment. Despite advances, the community still lacks a unified, interactive reinforcement learning (RL) framework that can effectively train such agents from scratch -- without relying on supervised fine-tuning (SFT) -- across diverse and realistic environments. To bridge this gap, we introduce AgentGym-RL, a new framework to train LLM agents for multi-turn interactive decision-making through RL. The framework features a modular and decoupled architecture, ensuring high flexibility and extensibility. It encompasses a wide variety of real-world scenarios, and supports mainstream RL algorithms. Furthermore, we propose ScalingInter-RL, a training approach designed for exploration-exploitation balance and stable RL optimization. In early stages, it emphasizes exploitation by restricting the number of interactions, and gradually shifts towards exploration with larger horizons to encourage diverse problem-solving strategies. In this way, the agent develops more diverse behaviors and is less prone to collapse under long horizons. We perform extensive experiments to validate the stability and effectiveness of both the AgentGym-RL framework and the ScalingInter-RL approach. Our agents match or surpass commercial models on 27 tasks across diverse environments. We offer key insights and will open-source the complete AgentGym-RL framework -- including code and datasets -- to empower the research community in developing the next generation of intelligent agents.

  5. P3-SAM: Native 3D Part Segmentation

    Segmenting 3D assets into their constituent parts is crucial for enhancing 3D understanding, facilitating model reuse, and supporting various applications such as part generation. However, current methods suffer from poor robustness when dealing with complex objects and cannot fully automate the process. In this paper, we propose a native 3D point-promptable part segmentation model termed P3-SAM, designed to fully automate the segmentation of any 3D object into components. Inspired by SAM, P3-SAM consists of a feature extractor, multiple segmentation heads, and an IoU predictor, enabling interactive segmentation for users. We also propose an algorithm to automatically select and merge masks predicted by our model for part instance segmentation. Our model is trained on a newly built dataset containing nearly 3.7 million models with reasonable segmentation labels. Comparisons show that our method achieves precise segmentation results and strong robustness on complex objects, attaining state-of-the-art performance. Our code will be released soon.
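    The select-and-merge step can be sketched as a greedy pass: keep masks in order of predicted score and fold any heavily overlapping candidate into an existing part. This is a simplified stand-in, not P3-SAM's published algorithm; the masks, scores, and threshold are invented:

    ```python
    def iou(a, b):
        """IoU of two masks represented as sets of point indices."""
        union = len(a | b)
        return len(a & b) / union if union else 0.0

    def select_and_merge(masks, scores, merge_thresh=0.5):
        """Greedily keep masks by descending predicted score; a candidate
        that overlaps a kept mask above the threshold is merged into it."""
        order = sorted(range(len(masks)), key=lambda i: scores[i], reverse=True)
        kept = []
        for i in order:
            m = masks[i]
            for j, k in enumerate(kept):
                if iou(m, k) >= merge_thresh:
                    kept[j] = k | m  # merge into the existing part
                    break
            else:
                kept.append(m)
        return kept

    # Three toy masks over point ids; the first two mostly overlap.
    masks = [{1, 2, 3, 4}, {2, 3, 4, 5}, {10, 11}]
    scores = [0.9, 0.8, 0.95]
    parts = select_and_merge(masks, scores)  # two parts remain
    ```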

  6. Hunyuan-MT Technical Report

    In this report, we introduce Hunyuan-MT-7B, our first open-source multilingual translation model, which supports bidirectional translation across 33 major languages and places a special emphasis on translation between Mandarin and several ethnic minority languages as well as dialects. Furthermore, to serve and address diverse translation scenarios and enhance model performance at test time, we introduce Hunyuan-MT-Chimera-7B, a translation model inspired by the slow thinking mode. This model integrates multiple outputs generated by the Hunyuan-MT-7B model under varying parameter settings, thereby achieving performance superior to that of conventional slow-thinking models based on Chain-of-Thought (CoT). The development of our models follows a holistic training process specifically engineered for multilingual translation, which begins with general and MT-oriented pre-training to build foundational capabilities, proceeds to Supervised Fine-Tuning (SFT) for task-specific adaptation, and culminates in advanced alignment through Reinforcement Learning (RL) and weak-to-strong RL. Through comprehensive experimentation, we demonstrate that both Hunyuan-MT-7B and Hunyuan-MT-Chimera-7B significantly outperform all translation-specific models of comparable parameter size and most of the SOTA large models, particularly on the task of translation between Mandarin and minority languages as well as dialects. In the WMT2025 shared task (General Machine Translation), our models demonstrate state-of-the-art performance, ranking first in 30 out of 31 language pairs. This result highlights the robustness of our models across a diverse linguistic spectrum, encompassing high-resource languages such as Chinese, English, and Japanese, as well as low-resource languages including Czech, Marathi, Estonian, and Icelandic.

  7. The Majority is not always right: RL training for solution aggregation

    Scaling up test-time compute, by generating multiple independent solutions and selecting or aggregating among them, has become a central paradigm for improving large language models (LLMs) on challenging reasoning tasks. While most prior work relies on simple majority voting or reward model ranking to aggregate solutions, these approaches may only yield limited benefits. In this work, we propose to learn aggregation as an explicit reasoning skill: given a set of candidate solutions, we train an aggregator model to review, reconcile, and synthesize a final, correct answer using reinforcement learning from verifiable rewards. A key ingredient is careful balancing of easy and hard training examples, allowing the model to learn both to recover minority-but-correct answers as well as easy majority-correct answers. Empirically, we find our method, AggLM, outperforms both strong rule-based and reward-model baselines, across multiple benchmarks. Furthermore, it generalizes effectively to solutions from differing models, including stronger ones than contained in the training data, all while requiring substantially fewer tokens than majority voting with larger numbers of solutions.
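    The majority-voting baseline the paper compares against fits in a few lines; the sampled answers below are invented for illustration, and AggLM's contribution is precisely to replace this fixed rule with a trained aggregator that can recover minority-but-correct answers:

    ```python
    from collections import Counter

    def majority_vote(answers):
        """Return the most frequent final answer among sampled solutions.
        Ties resolve to the answer encountered first."""
        return Counter(answers).most_common(1)[0][0]

    # Five independently sampled solutions to the same problem.
    samples = ["42", "17", "42", "42", "17"]
    best = majority_vote(samples)  # "42"
    ```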

  8. <think> So let's replace this phrase with insult... </think> Lessons learned from generation of toxic texts with LLMs

    Modern Large Language Models (LLMs) are excellent at generating synthetic data. However, their performance in sensitive domains such as text detoxification has not received proper attention from the scientific community. This paper explores the possibility of using LLM-generated synthetic toxic data as an alternative to human-generated data for training models for detoxification. Using Llama 3 and Qwen activation-patched models, we generated synthetic toxic counterparts for neutral texts from ParaDetox and SST-2 datasets. Our experiments show that models fine-tuned on synthetic data consistently perform worse than those trained on human data, with a drop in performance of up to 30% in joint metrics. The root cause is identified as a critical lexical diversity gap: LLMs generate toxic content using a small, repetitive vocabulary of insults that fails to capture the nuances and variety of human toxicity. These findings highlight the limitations of current LLMs in this domain and emphasize the continued importance of diverse, human-annotated data for building robust detoxification systems.

  9. Statistical Methods in Generative AI

    Generative Artificial Intelligence is emerging as an important technology, promising to be transformative in many areas. At the same time, generative AI techniques are based on sampling from probabilistic models, and by default, they come with no guarantees about correctness, safety, fairness, or other properties. Statistical methods offer a promising potential approach to improve the reliability of generative AI techniques. In addition, statistical methods are also promising for improving the quality and efficiency of AI evaluation, as well as for designing interventions and experiments in AI. In this paper, we review some of the existing work on these topics, explaining both the general statistical techniques used, as well as their applications to generative AI. We also discuss limitations and potential future directions.

  10. EnvX: Agentize Everything with Agentic AI

    The widespread availability of open-source repositories has led to a vast collection of reusable software components, yet their utilization remains manual, error-prone, and disconnected. Developers must navigate documentation, understand APIs, and write integration code, creating significant barriers to efficient software reuse. To address this, we present EnvX, a framework that leverages Agentic AI to agentize GitHub repositories, transforming them into intelligent, autonomous agents capable of natural language interaction and inter-agent collaboration. Unlike existing approaches that treat repositories as static code resources, EnvX reimagines them as active agents through a three-phase process: (1) TODO-guided environment initialization, which sets up the necessary dependencies, data, and validation datasets; (2) human-aligned agentic automation, allowing repository-specific agents to autonomously perform real-world tasks; and (3) Agent-to-Agent (A2A) protocol, enabling multiple agents to collaborate. By combining large language model capabilities with structured tool integration, EnvX automates not just code generation, but the entire process of understanding, initializing, and operationalizing repository functionality. We evaluate EnvX on the GitTaskBench benchmark, using 18 repositories across domains such as image processing, speech recognition, document analysis, and video manipulation. Our results show that EnvX achieves a 74.07% execution completion rate and 51.85% task pass rate, outperforming existing frameworks. Case studies further demonstrate EnvX's ability to enable multi-repository collaboration via the A2A protocol. This work marks a shift from treating repositories as passive code resources to intelligent, interactive agents, fostering greater accessibility and collaboration within the open-source ecosystem.

  11. HumanAgencyBench: Scalable Evaluation of Human Agency Support in AI Assistants

    As humans delegate more tasks and decisions to artificial intelligence (AI), we risk losing control of our individual and collective futures. Relatively simple algorithmic systems already steer human decision-making, such as social media feed algorithms that lead people to unintentionally and absent-mindedly scroll through engagement-optimized content. In this paper, we develop the idea of human agency by integrating philosophical and scientific theories of agency with AI-assisted evaluation methods: using large language models (LLMs) to simulate and validate user queries and to evaluate AI responses. We develop HumanAgencyBench (HAB), a scalable and adaptive benchmark with six dimensions of human agency based on typical AI use cases. HAB measures the tendency of an AI assistant or agent to Ask Clarifying Questions, Avoid Value Manipulation, Correct Misinformation, Defer Important Decisions, Encourage Learning, and Maintain Social Boundaries. We find low-to-moderate agency support in contemporary LLM-based assistants and substantial variation across system developers and dimensions. For example, while Anthropic LLMs most support human agency overall, they are the least supportive LLMs in terms of Avoid Value Manipulation. Agency support does not appear to consistently result from increasing LLM capabilities or instruction-following behavior (e.g., RLHF), and we encourage a shift towards more robust safety and alignment targets.

Solidot (15)

  1. NASA bans Chinese citizens from its space programs

    NASA has barred Chinese citizens holding valid visas from entering its facilities and participating in its space programs. Chinese citizens who had worked on NASA projects as contractors or students found on September 5 that they could no longer access any NASA systems or facilities. NASA subsequently confirmed the ban, citing national security: "NASA has taken internal action pertaining to Chinese nationals, including restricting access to our facilities, materials, and network to ensure the security of our work." The US and China are both racing to return to the Moon, while the US Artemis lunar program faces cost overruns and delays.

  2. Why Netflix struggles to make high-quality films

    In February this year Netflix released the widely panned sci-fi film The Electric State, starring Chris Pratt and Millie Bobby Brown (Eleven in Stranger Things). The film would have been quickly forgotten were it not for its $320 million production budget, which bought Netflix a Metacritic score of 30/100 and a Rotten Tomatoes score of 14%. To fill its catalog, Netflix has funded a stream of low-quality original films. It has produced some high-quality films, such as The Irishman, but on review sites like IMDb, Letterboxd, and TMDB, Netflix films score far below theatrical releases. Netflix has worked with renowned directors including Martin Scorsese, Alfonso Cuarón, and Bradley Cooper, but most of those projects were one-offs, and top directors rarely return. Many directors now refuse to work with Netflix even when it offers a bigger budget. Weapons director Zach Cregger turned down a $50 million budget from Netflix in favor of Warner Bros.' $37 million plus a guaranteed theatrical release. Netflix offered $150 million for Emerald Fennell and Margot Robbie's adaptation of Wuthering Heights, but they still chose Warner Bros.' $80 million budget and theatrical guarantee.

  3. Gravitational waves confirm Hawking's black hole area theorem

    The Laser Interferometer Gravitational-Wave Observatory (LIGO) detected an unusually strong collision between two black holes, allowing physicists to test the black hole area theorem Stephen Hawking proposed in 1971. The theorem states that when two black holes merge, the event horizon of the resulting black hole (the boundary from which not even light can escape) cannot have an area smaller than the sum of the areas of the two original horizons. It echoes the second law of thermodynamics, which holds that entropy, the internal disorder of a system, never decreases. Merging black holes distort the fabric of the universe, producing tiny spacetime ripples known as gravitational waves that detectors can observe. The recent collision, designated GW250114, was nearly identical to the first gravitational-wave event observed in 2015: both involved black holes of 30-40 solar masses, about 1.3 billion light-years away. The upgraded LIGO detectors are now three times as sensitive as in 2015, capturing the waves in unprecedented detail. This allowed researchers to confirm through calculation that the horizon area did indeed grow after the merger, validating Hawking's theorem.

  4. French voice actor accuses Tomb Raider 4-6 Remastered of using AI to clone her voice

    Françoise Cadol, the French voice of the Tomb Raider series, sent a cease-and-desist letter to Aspyr, developer of Tomb Raider 4-6 Remastered, accusing it of using AI to copy her voice without informing her or telling players. She described the move as a betrayal and an utter lack of respect. Beyond French, players in regions including Brazil and Spain also believe their localized dubs were AI-generated from the original voice actors' recordings. Brazilian voice actor Lene Bastos received a reply from Aspyr: its investigation found that an external development partner had used generative AI to edit the original recordings without its knowledge. Aspyr said it had not authorized this and apologized for failing to catch the issue in review.

  5. Xiaohongshu ordered to rectify within a deadline

    In a brief statement, the Cyberspace Administration of China announced that Xiaohongshu, a social app with a predominantly female user base, has been ordered to rectify problems within a set deadline: "In response to the Xiaohongshu platform's failure to fulfill its primary responsibility for information content management, repeatedly featuring harmful content such as trending entries hyping celebrities' personal updates and trivia in key sections like its trending list and damaging the online ecosystem, the Cyberspace Administration of China directed the Shanghai cyberspace office, in accordance with the Provisions on the Governance of the Online Information Content Ecosystem and other regulations, to summon the platform, order rectification within a deadline, issue a warning, and strictly discipline those responsible."

  6. Oracle shares soar, making Larry Ellison the world's richest person

    Oracle's stock had its best single-day performance since 1992, surging 36% to $328 and adding $244 billion in market value, approaching the $1 trillion mark. The rally was driven by surging AI-fueled demand for cloud computing. It also added $100 billion to founder Larry Ellison's fortune, putting him ahead of Elon Musk as the world's richest person.

  7. Study finds beer drinkers are highly attractive to mosquitoes

    A music-festival study found that beer drinkers are highly attractive to mosquitoes; the report was posted on the preprint server bioRxiv. The study is largely anecdotal in nature. There is already extensive research on how mosquitoes track humans: the body emits a distinctive signature of odor, heat, and carbon dioxide that mosquitoes follow, and they sense it through more than one pathway. In this study, researchers brought thousands of female Anopheles mosquitoes to the annual Lowlands festival in the Netherlands, set up a temporary lab, and recruited 500 volunteers, who filled out a questionnaire about their eating and drinking habits at the festival and then placed an arm against a specially built cage full of mosquitoes, which could smell but not bite them. Cameras counted the mosquitoes landing on each volunteer's arm and compared the number with mosquitoes landing on a sugar feeder on the other side of the cage. The results: beer drinkers were 1.35 times as attractive to mosquitoes as non-drinkers; people who had shared a bed the previous night were also more attractive; and those who had showered or applied sunscreen were less attractive. The researchers deadpanned that mosquitoes are only interested in hedonists.

  8. NASA says Perseverance rover found a potential biosignature on Mars

    NASA announced that a rock sample the Perseverance rover collected last year from an ancient dried-up riverbed near Jezero Crater on Mars may preserve evidence of ancient microbial life. The sample, named "Sapphire Canyon," was taken from a rock called Cheyava Falls. A potential biosignature is a substance or structure that could have a biological origin but requires more data or further study before any conclusion about the presence of life can be drawn. The research was published in Nature.

  9. Disposable pandemic masks left behind a chemical time bomb

    At the peak of the COVID-19 pandemic, an estimated 129 billion disposable masks were used worldwide each month, made mainly of plastics such as polypropylene. With no recycling stream, most were either landfilled or scattered across the world; studies show large quantities of disposable masks in both terrestrial and aquatic environments. These discarded masks are now beginning to degrade and may become chemical time bombs. Researchers soaked different types of masks in water and found that all of them leached microplastics, with FFP2 and FFP3 masks releasing the most, four to six times as much as other types. Masks also release bisphenol B, an endocrine-disrupting chemical that acts like estrogen once absorbed by human and animal bodies. Based on mask production figures, the researchers estimate they will release 128-214 kg of bisphenol B into the environment.

  10. A baby's cry makes the human body heat up

    The sound of a crying baby triggers rapid emotional reactions in both men and women, to the point that their bodies warm up. Using thermal imaging, researchers found that playing recordings of infant cries sent blood rushing to listeners' faces, raising skin temperature. The effect was stronger and more synchronized when the cries signaled greater distress. The findings suggest humans respond automatically to specific acoustic features of infant cries, and more strongly when the infant is in pain; to raise the odds that infants receive care, evolution has made their cries impossible to ignore. According to the study, published in the Journal of The Royal Society Interface, men and women responded essentially the same, and the most distressed cries produced the largest changes in adults' facial temperature.

  11. Windows 10 refuses to die

    With only a month until Microsoft ends support for Windows 10, Statcounter data show the OS refusing to fade away. In August 2025, Windows 11's share fell four percentage points from 53% to 49%, while Windows 10's rose three points from 42% to 45%. The figures may be noisy, but they suggest that Windows 10 remains a perfectly functional operating system and that users are in no hurry to upgrade to the more hardware-demanding Windows 11. Unless they buy a new PC, most Windows 10 users will likely keep using it.

  12. A warmer climate may increase added-sugar intake

    Global warming may increase added-sugar intake in the form of sugary drinks and frozen desserts, especially among people of lower socioeconomic status. Within the range of 12°C to 30°C, each 1°C increase raises daily per-person added-sugar intake by 0.70 grams. The findings point to the need to mitigate the potential health risks of excessive added-sugar intake under future climate change scenarios. Temperature swings influence dietary choices: hotter weather increases the body's need for hydration and typically leads people to prefer chilled or sweetened products, particularly in regions already accustomed to high-sugar foods and drinks. Excessive added-sugar intake is linked to obesity, metabolic disease, and other health risks, but how climate change shapes dietary habits, and the potential health consequences, has been unclear. To assess how weather conditions affect added-sugar intake, Pan He of Cardiff University and colleagues analyzed US household food-purchase data from 2004 to 2019 and compared it with regional meteorological data, including temperature, wind speed, precipitation, and humidity. They found that added-sugar intake correlated positively with temperature within the 12-30°C range, driven mainly by greater consumption of sugary drinks (such as soda and juice) and frozen desserts (such as ice cream and gelato). The effect was more pronounced in lower-income and less-educated households. The authors also project that by 2095 (equivalent to 5°C above pre-industrial levels), national daily added-sugar intake could rise by 2.99 grams, with certain groups, including women and people with lower income or education, at higher risk.

  13. Ocean warming endangers Prochlorococcus, a keystone of the food web

    For decades, scientists believed that Prochlorococcus, Earth's most abundant phytoplankton, would benefit from a warming world. But according to a study published in Nature Microbiology, ocean warming endangers its survival; as a keystone species of the global food web, a decline in its numbers would have enormous impact. Prochlorococcus is tiny (0.6 micrometers), a photosynthetic picoplankton, the most abundant photosynthetic organism on Earth, and a major primary producer in the ocean, generating about a fifth of the planet's oxygen through photosynthesis. It converts sunlight and carbon dioxide into food at the base of marine ecosystems; nearly half the food in tropical oceans is produced by Prochlorococcus, and hundreds of species depend on it. According to the new study, if surface water temperatures exceed about 27.8°C, tropical Prochlorococcus populations could be cut in half over the next 75 years.

  14. Nintendo granted US patent on summoning creatures and making them fight

    Nintendo and The Pokémon Company have been granted US patent 12,403,397 by the USPTO, covering the game mechanic of summoning a creature and having it fight for the player. The patent could have a huge impact on the games industry, because similar mechanics have existed for decades and are widely used by developers: 1990s titles such as Diablo and early Final Fantasy games already let players use a skill or spell to summon a character to fight on their behalf. Nintendo and The Pokémon Company are currently in an infringement lawsuit against Pocketpair, developer of Palworld, which is accused of copying Pokémon game mechanics.

  15. Astronomers detect silane in a brown dwarf

    Astronomers have for the first time detected silane in the atmosphere of an ancient brown dwarf nicknamed "The Accident." Planetary scientists long predicted that silane should exist in the atmospheres of gas giants and play a key role in cloud formation, yet for decades it went undetected in Jupiter, Saturn, other brown dwarfs, and exoplanet gas giants alike. The Accident is about 50 light-years from Earth and likely formed 10-12 billion years ago, making it one of the oldest brown dwarfs known. Detecting silane in its atmosphere suggests that in such ancient environments, silicon tended to bond with hydrogen, forming light molecules that could rise into the upper atmosphere of a gas giant. In more recently formed bodies such as Jupiter and Saturn, silicon instead bonds with oxygen, forming heavier molecules that sink to deeper layers and making silane hard to detect. The discovery not only validates astronomers' theories of cloud formation on gas giants but also shows that the atmospheric chemistry of early planets may have differed sharply from today's solar system, hinting that worlds formed billions of years ago once looked very different.