OrangeBot.AI Digest — 2026-02-13

55 headlines across 8 sources, aggregated for this day.

Hacker News (15)

  1. The evolution of OpenAI's mission statement (simonwillison.net)
  2. OpenAI has deleted the word 'safely' from its mission (theconversation.com)
  3. The EU moves to kill infinite scrolling (www.politico.eu)
  4. GPT-5.2 derives a new result in theoretical physics (openai.com)
  5. IronClaw: a Rust-based clawd that runs tools in isolated WASM sandboxes (github.com)
  6. CBP signs Clearview AI deal to use face recognition for 'tactical targeting' (www.wired.com)
  7. Sandwich Bill of Materials (nesbitt.io)
  8. Fix the iOS keyboard before the timer hits zero or I'm switching back to Android (ios-countdown.win)
  9. Open source is not about you (2018) (gist.github.com)
  10. Zed editor switching graphics lib from blade to wgpu (github.com)
  11. Monosketch (monosketch.io)
  12. US repeals EPA endangerment finding for greenhouse gases (www.cnn.com)
  13. Cache Monet (cachemonet.com)
  14. MinIO repository is no longer maintained (github.com)
  15. Ring owners are returning their cameras (www.msn.com)

GitHub Trending (10)

  1. SynkraAI / aios-core

    Synkra AIOS: AI-Orchestrated System for Full Stack Development - Core Framework v4.0

  2. ChromeDevTools / chrome-devtools-mcp

    Chrome DevTools for coding agents

  3. danielmiessler / Personal_AI_Infrastructure

    Agentic AI Infrastructure for magnifying HUMAN capabilities.

  4. patchy631 / ai-engineering-hub

    In-depth tutorials on LLMs, RAGs and real-world AI agent applications.

  5. TelegramMessenger / MTProxy
  6. google-deepmind / superhuman
  7. cheahjs / free-llm-api-resources

    A list of free LLM inference resources accessible via API.

  8. HandsOnLLM / Hands-On-Large-Language-Models

    Official code repo for the O'Reilly Book - "Hands-On Large Language Models"

  9. THUDM / slime

    slime is an LLM post-training framework for RL Scaling.

  10. DebugSwift / DebugSwift

    A toolkit to make debugging iOS applications easier 🚀

Hugging Face (15)

  1. The Devil Behind Moltbook: Anthropic Safety is Always Vanishing in Self-Evolving AI Societies

    The emergence of multi-agent systems built from large language models (LLMs) offers a promising paradigm for scalable collective intelligence and self-evolution. Ideally, such systems would achieve continuous self-improvement in a fully closed loop while maintaining robust safety alignment--a combination we term the self-evolution trilemma. However, we demonstrate both theoretically and empirically that an agent society satisfying continuous self-evolution, complete isolation, and safety invariance is impossible. Drawing on an information-theoretic framework, we formalize safety as the divergence degree from anthropic value distributions. We theoretically demonstrate that isolated self-evolution induces statistical blind spots, leading to the irreversible degradation of the system's safety alignment. Empirical and qualitative results from an open-ended agent community (Moltbook) and two closed self-evolving systems reveal phenomena that align with our theoretical prediction of inevitable safety erosion. We further propose several solution directions to alleviate the identified safety concern. Our work establishes a fundamental limit on the self-evolving AI societies and shifts the discourse from symptom-driven safety patches to a principled understanding of intrinsic dynamical risks, highlighting the need for external oversight or novel safety-preserving mechanisms.

  2. Composition-RL: Compose Your Verifiable Prompts for Reinforcement Learning of Large Language Models

    Large-scale verifiable prompts underpin the success of Reinforcement Learning with Verifiable Rewards (RLVR), but they contain many uninformative examples and are costly to expand further. Recent studies focus on better exploiting limited training data by prioritizing hard prompts whose rollout pass rate is 0. However, easy prompts with a pass rate of 1 also become increasingly prevalent as training progresses, thereby reducing the effective data size. To mitigate this, we propose Composition-RL, a simple yet effective approach that better utilizes limited verifiable prompts by targeting pass-rate-1 prompts. More specifically, Composition-RL automatically composes multiple problems into a new verifiable question and uses these compositional prompts for RL training. Extensive experiments across model sizes from 4B to 30B show that Composition-RL consistently improves reasoning capability over RL trained on the original dataset. Performance can be further boosted with a curriculum variant of Composition-RL that gradually increases compositional depth over training. Additionally, Composition-RL enables more effective cross-domain RL by composing prompts drawn from different domains. Codes, datasets, and models are available at https://github.com/XinXU-USTC/Composition-RL.
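The composition idea above is easy to sketch. The toy Python below (function names and the prompt/answer format are my own illustration, not the paper's code) fuses two already-solved problems into one compositional prompt whose verifier only pays out when every part is answered:

```python
# Hypothetical sketch of Composition-RL's prompt composition: pass-rate-1
# problems are fused into one multi-part question with an all-or-nothing
# verifiable reward. All names and formats here are illustrative.

def compose_prompt(problems):
    """Join several verifiable problems into one compositional question."""
    parts = [f"Part {i + 1}: {p['question']}" for i, p in enumerate(problems)]
    instructions = "Answer every part, one line per part, as 'Part k: <answer>'."
    return "\n".join(parts + [instructions])

def compositional_verifier(problems, rollout):
    """Binary reward: 1.0 only if every sub-answer verifies, else 0.0."""
    for i, p in enumerate(problems):
        tag = f"Part {i + 1}:"
        line = next((l for l in rollout.splitlines() if l.startswith(tag)), "")
        answer = line[len(tag):].strip()
        if answer != p["answer"]:   # stand-in for the per-problem verifier
            return 0.0
    return 1.0

problems = [
    {"question": "What is 2 + 3?", "answer": "5"},
    {"question": "What is 4 * 6?", "answer": "24"},
]
prompt = compose_prompt(problems)
good = "Part 1: 5\nPart 2: 24"
bad = "Part 1: 5\nPart 2: 25"
```

Because the reward is conjunctive, a composed prompt stays informative even when each constituent problem alone has pass rate 1.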

  3. DeepGen 1.0: A Lightweight Unified Multimodal Model for Advancing Image Generation and Editing

    Current unified multimodal models for image generation and editing typically rely on massive parameter scales (e.g., >10B), entailing prohibitive training costs and deployment footprints. In this work, we present DeepGen 1.0, a lightweight 5B unified model that achieves comprehensive capabilities competitive with or surpassing much larger counterparts. To overcome the limitations of compact models in semantic understanding and fine-grained control, we introduce Stacked Channel Bridging (SCB), a deep alignment framework that extracts hierarchical features from multiple VLM layers and fuses them with learnable 'think tokens' to provide the generative backbone with structured, reasoning-rich guidance. We further design a data-centric training strategy spanning three progressive stages: (1) Alignment Pre-training on large-scale image-text pairs and editing triplets to synchronize VLM and DiT representations, (2) Joint Supervised Fine-tuning on a high-quality mixture of generation, editing, and reasoning tasks to foster omni-capabilities, and (3) Reinforcement Learning with MR-GRPO, which leverages a mixture of reward functions and supervision signals, resulting in substantial gains in generation quality and alignment with human preferences, while maintaining stable training progress and avoiding visual artifacts. Despite being trained on only ~50M samples, DeepGen 1.0 achieves leading performance across diverse benchmarks, surpassing the 80B HunyuanImage by 28% on WISE and the 27B Qwen-Image-Edit by 37% on UniREditBench. By open-sourcing our training code, weights, and datasets, we provide an efficient, high-performance alternative to democratize unified multimodal research.

  4. Learning beyond Teacher: Generalized On-Policy Distillation with Reward Extrapolation

    On-policy distillation (OPD), which aligns the student with the teacher's logit distribution on student-generated trajectories, has demonstrated strong empirical gains in improving student performance and often outperforms off-policy distillation and reinforcement learning (RL) paradigms. In this work, we first theoretically show that OPD is a special case of dense KL-constrained RL where the reward function and the KL regularization are always weighted equally and the reference model can be any model. Then, we propose the Generalized On-Policy Distillation (G-OPD) framework, which extends the standard OPD objective by introducing a flexible reference model and a reward scaling factor that controls the relative weight of the reward term against the KL regularization. Through comprehensive experiments on math reasoning and code generation tasks, we derive two novel insights: (1) Setting the reward scaling factor to be greater than 1 (i.e., reward extrapolation), which we term ExOPD, consistently improves over standard OPD across a range of teacher-student size pairings. In particular, in the setting where we merge the knowledge from different domain experts, obtained by applying domain-specific RL to the same student model, back into the original student, ExOPD enables the student to even surpass the teacher's performance boundary and outperform the domain teachers. (2) Building on ExOPD, we further find that in the strong-to-weak distillation setting (i.e., distilling a smaller student from a larger teacher), performing reward correction by choosing the reference model as the teacher's base model before RL yields a more accurate reward signal and further improves distillation performance. However, this choice assumes access to the teacher's pre-RL variant and incurs more computational overhead. We hope our work offers new insights for future research on OPD.
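One plausible way to write the generalized objective sketched in this abstract, using my own symbols rather than the paper's notation: with reward scaling factor η and a free reference policy,

```latex
% G-OPD objective (sketch; symbols are illustrative, not the paper's).
% Standard OPD is recovered at \eta = 1; ExOPD sets \eta > 1.
\max_{\theta}\;
\mathbb{E}_{y \sim \pi_{\theta}}\!\left[\, \eta\, r(y) \,\right]
\;-\; \mathrm{KL}\!\left( \pi_{\theta} \,\Vert\, \pi_{\mathrm{ref}} \right)
```

Here r(y) stands for the dense reward induced by the teacher's logits, and π_ref is left unconstrained, matching the claim that OPD is the equal-weight (η = 1) special case of dense KL-constrained RL with an arbitrary reference model.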

  5. MOSS-Audio-Tokenizer: Scaling Audio Tokenizers for Future Audio Foundation Models

    Discrete audio tokenizers are fundamental to empowering large language models with native audio processing and generation capabilities. Despite recent progress, existing approaches often rely on pretrained encoders, semantic distillation, or heterogeneous CNN-based architectures. These designs introduce fixed inductive biases that limit reconstruction fidelity and hinder effective scaling. In this paper, we argue that discrete audio tokenization should be learned fully end-to-end using a homogeneous and scalable architecture. To this end, we first propose CAT (Causal Audio Tokenizer with Transformer), a purely Transformer-based architecture that jointly optimizes the encoder, quantizer, and decoder from scratch for high-fidelity reconstruction. Building on the CAT architecture, we develop MOSS-Audio-Tokenizer, a large-scale audio tokenizer featuring 1.6 billion parameters, pre-trained on 3 million hours of diverse, general audio data. We show that this simple, fully end-to-end approach built from homogeneous, causal Transformer blocks scales gracefully and supports high-fidelity reconstruction across diverse audio domains. Across speech, sound, and music, MOSS-Audio-Tokenizer consistently outperforms prior codecs over a wide range of bitrates, while exhibiting predictable improvements with increased scale. Notably, leveraging the discrete tokens from our model, we develop the first purely autoregressive TTS model that surpasses prior non-autoregressive and cascaded systems. Furthermore, MOSS-Audio-Tokenizer enables competitive ASR performance without auxiliary encoders. Our findings position the CAT architecture as a unified, scalable interface for the next generation of native audio foundation models.

  6. GigaBrain-0.5M*: a VLA That Learns From World Model-Based Reinforcement Learning

    Vision-language-action (VLA) models that directly predict multi-step action chunks from current observations face inherent limitations due to constrained scene understanding and weak future anticipation capabilities. In contrast, video world models pre-trained on web-scale video corpora exhibit robust spatiotemporal reasoning and accurate future prediction, making them a natural foundation for enhancing VLA learning. Therefore, we propose GigaBrain-0.5M*, a VLA model trained via world model-based reinforcement learning. It is built upon GigaBrain-0.5, which is pre-trained on over 10,000 hours of robotic manipulation data and whose intermediate version currently ranks first on the international RoboChallenge benchmark. GigaBrain-0.5M* further integrates world model-based reinforcement learning via RAMP (Reinforcement leArning via world Model-conditioned Policy) to enable robust cross-task adaptation. Empirical results demonstrate that RAMP achieves substantial performance gains over the RECAP baseline, yielding improvements of approximately 30% on challenging tasks including Laundry Folding, Box Packing, and Espresso Preparation. Critically, GigaBrain-0.5M* exhibits reliable long-horizon execution, consistently accomplishing complex manipulation tasks without failure, as validated by real-world deployment videos on our project page: https://gigabrain05m.github.io

  7. LawThinker: A Deep Research Legal Agent in Dynamic Environments

    Legal reasoning requires not only correct outcomes but also procedurally compliant reasoning processes. However, existing methods lack mechanisms to verify intermediate reasoning steps, allowing errors such as inapplicable statute citations to propagate undetected through the reasoning chain. To address this, we propose LawThinker, an autonomous legal research agent that adopts an Explore-Verify-Memorize strategy for dynamic judicial environments. The core idea is to enforce verification as an atomic operation after every knowledge exploration step. A DeepVerifier module examines each retrieval result along three dimensions of knowledge accuracy, fact-law relevance, and procedural compliance, with a memory module for cross-round knowledge reuse in long-horizon tasks. Experiments on the dynamic benchmark J1-EVAL show that LawThinker achieves a 24% improvement over direct reasoning and an 11% gain over workflow-based methods, with particularly strong improvements on process-oriented metrics. Evaluations on three static benchmarks further confirm its generalization capability. The code is available at https://github.com/yxy-919/LawThinker-agent .

  8. Thinking with Drafting: Optical Decompression via Logical Reconstruction

    Existing multimodal large language models have achieved high-fidelity visual perception and exploratory visual generation. However, a precision paradox persists in complex reasoning tasks: optical perception systems transcribe symbols without capturing logical topology, while pixel-based generative models produce visual artifacts lacking mathematical exactness. To bridge this gap, we propose that reasoning over visual inputs be reconceptualized as optical decompression: the process of reconstructing latent logical structures from compressed visual tokens. Guided by the axiom that Parsing is Reasoning, we introduce Thinking with Drafting (TwD), which utilizes a minimalist Domain-Specific Language (DSL) as a grounding intermediate representation. Unlike standard approaches that hallucinate answers directly, TwD forces the model to draft its mental model into executable code, rendering deterministic visual proofs for self-verification. To validate this, we present VisAlg, a visual algebra benchmark. Experiments demonstrate that TwD serves as a superior cognitive scaffold. Our work establishes a closed-loop system where visual generation acts not as a creative output but as a logical verifier, offering a generalizable path for visual reasoning.

  9. Think Longer to Explore Deeper: Learn to Explore In-Context via Length-Incentivized Reinforcement Learning

    Achieving effective test-time scaling requires models to engage in In-Context Exploration -- the intrinsic ability to generate, verify, and refine multiple reasoning hypotheses within a single continuous context. Grounded in State Coverage theory, our analysis identifies a critical bottleneck to enabling this capability: while broader state coverage requires longer reasoning trajectories, the probability of sampling such sequences decays exponentially during autoregressive generation, a phenomenon we term the "Shallow Exploration Trap". To bridge this gap, we propose Length-Incentivized Exploration. This simple yet effective recipe explicitly encourages models to explore more via a length-based reward coupled with a redundancy penalty, thereby maximizing state coverage in a two-step manner. Comprehensive experiments across different models (Qwen3, Llama) demonstrate that Length-Incentivized Exploration effectively incentivizes in-context exploration. As a result, our method achieves an average improvement of 4.4% on in-domain tasks and a 2.7% gain on out-of-domain benchmarks.
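A minimal sketch of what a length-based reward with a redundancy penalty could look like, assuming (my assumption, not the paper's exact formulation) a capped per-token length bonus and an n-gram repetition penalty:

```python
# Illustrative reward shaping in the spirit of the recipe above: reward longer
# trajectories, but penalize ones that merely repeat themselves.
# Coefficients and the n-gram redundancy measure are assumptions.

def redundancy(tokens, n=3):
    """Fraction of repeated n-grams in the trajectory (0.0 = no repetition)."""
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not grams:
        return 0.0
    return 1.0 - len(set(grams)) / len(grams)

def exploration_reward(tokens, correct, alpha=0.001, beta=1.0, max_len=4096):
    """Task reward plus a capped length bonus, minus a redundancy penalty."""
    length_bonus = alpha * min(len(tokens), max_len)
    return float(correct) + length_bonus - beta * redundancy(tokens)

concise = "check case one then case two then verify".split()
repetitive = ("check case one " * 10).split()
```

With these settings a trajectory that pads its length by cycling the same phrase scores below a shorter, non-repetitive one, which is the point of the redundancy term.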

  10. Stroke of Surprise: Progressive Semantic Illusions in Vector Sketching

    Visual illusions traditionally rely on spatial manipulations such as multi-view consistency. In this work, we introduce Progressive Semantic Illusions, a novel vector sketching task where a single sketch undergoes a dramatic semantic transformation through the sequential addition of strokes. We present Stroke of Surprise, a generative framework that optimizes vector strokes to satisfy distinct semantic interpretations at different drawing stages. The core challenge lies in the "dual-constraint": initial prefix strokes must form a coherent object (e.g., a duck) while simultaneously serving as the structural foundation for a second concept (e.g., a sheep) upon adding delta strokes. To address this, we propose a sequence-aware joint optimization framework driven by a dual-branch Score Distillation Sampling (SDS) mechanism. Unlike sequential approaches that freeze the initial state, our method dynamically adjusts prefix strokes to discover a "common structural subspace" valid for both targets. Furthermore, we introduce a novel Overlay Loss that enforces spatial complementarity, ensuring structural integration rather than occlusion. Extensive experiments demonstrate that our method significantly outperforms state-of-the-art baselines in recognizability and illusion strength, successfully expanding visual anagrams from the spatial to the temporal dimension. Project page: https://stroke-of-surprise.github.io/

  11. RISE: Self-Improving Robot Policy with Compositional World Model

    Despite the sustained scaling on model capacity and data acquisition, Vision-Language-Action (VLA) models remain brittle in contact-rich and dynamic manipulation tasks, where minor execution deviations can compound into failures. While reinforcement learning (RL) offers a principled path to robustness, on-policy RL in the physical world is constrained by safety risk, hardware cost, and environment reset. To bridge this gap, we present RISE, a scalable framework of robotic reinforcement learning via imagination. At its core is a Compositional World Model that (i) predicts multi-view future via a controllable dynamics model, and (ii) evaluates imagined outcomes with a progress value model, producing informative advantages for the policy improvement. Such compositional design allows state and value to be tailored by best-suited yet distinct architectures and objectives. These components are integrated into a closed-loop self-improving pipeline that continuously generates imaginary rollouts, estimates advantages, and updates the policy in imaginary space without costly physical interaction. Across three challenging real-world tasks, RISE yields significant improvement over prior art, with more than +35% absolute performance increase in dynamic brick sorting, +45% for backpack packing, and +35% for box closing, respectively.

  12. dVoting: Fast Voting for dLLMs

    Diffusion Large Language Models (dLLMs) represent a new paradigm beyond autoregressive modeling, offering competitive performance while naturally enabling a flexible decoding process. Specifically, dLLMs can generate tokens at arbitrary positions in parallel, endowing them with significant potential for parallel test-time scaling, which was previously constrained by severe inefficiency in autoregressive modeling. In this work, we introduce dVoting, a fast voting technique that boosts reasoning capability without training, with only an acceptable extra computational overhead. dVoting is motivated by the observation that, across multiple samples for the same prompt, token predictions remain largely consistent, whereas performance is determined by a small subset of tokens exhibiting cross-sample variability. Leveraging the arbitrary-position generation capability of dLLMs, dVoting performs iterative refinement by sampling, identifying uncertain tokens via consistency analysis, regenerating them through voting, and repeating this process until convergence. Extensive evaluations demonstrate that dVoting consistently improves performance across various benchmarks. It achieves gains of 6.22%-7.66% on GSM8K, 4.40%-7.20% on MATH500, 3.16%-14.84% on ARC-C, and 4.83%-5.74% on MMLU. Our code is available at https://github.com/fscdc/dVoting
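The voting step described above can be illustrated with a toy per-position majority vote (the iterative regeerate-until-convergence loop is omitted; names are my illustration, not the released code):

```python
# Toy sketch of dVoting's consistency analysis: compare token predictions
# across several samples for the same prompt, keep positions where all
# samples agree, and flag disagreeing positions as uncertain for regeneration.

from collections import Counter

def vote(samples):
    """Per-position majority vote; returns (voted_tokens, uncertain_positions)."""
    length = min(len(s) for s in samples)
    voted, uncertain = [], []
    for pos in range(length):
        counts = Counter(s[pos] for s in samples)
        token, freq = counts.most_common(1)[0]
        voted.append(token)
        if freq < len(samples):          # any cross-sample disagreement
            uncertain.append(pos)
    return voted, uncertain

samples = [
    ["the", "answer", "is", "42"],
    ["the", "answer", "is", "41"],
    ["the", "answer", "is", "42"],
]
tokens, uncertain = vote(samples)
```

Only the flagged positions would need regeneration, which is why the extra overhead stays acceptable: most tokens are consistent across samples.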

  13. χ_0: Resource-Aware Robust Manipulation via Taming Distributional Inconsistencies

    High-reliability long-horizon robotic manipulation has traditionally relied on large-scale data and compute to understand complex real-world dynamics. However, we identify that the primary bottleneck to real-world robustness is not resource scale alone, but the distributional shift among the human demonstration distribution, the inductive bias learned by the policy, and the test-time execution distribution -- a systematic inconsistency that causes compounding errors in multi-stage tasks. To mitigate these inconsistencies, we propose χ_0, a resource-efficient framework with effective modules designed to achieve production-level robustness in robotic manipulation. Our approach builds on three technical pillars: (i) Model Arithmetic, a weight-space merging strategy that efficiently soaks up diverse distributions of different demonstrations, varying from object appearance to state variations; (ii) Stage Advantage, a stage-aware advantage estimator that provides stable, dense progress signals, overcoming the numerical instability of prior non-stage approaches; and (iii) Train-Deploy Alignment, which bridges the distribution gap via spatio-temporal augmentation, heuristic DAgger corrections, and temporal chunk-wise smoothing. χ_0 enables two sets of dual-arm robots to collaboratively orchestrate long-horizon garment manipulation, spanning tasks from flattening and folding to hanging different clothes. Our method exhibits high-reliability autonomy; we are able to run the system from an arbitrary initial state for 24 consecutive hours non-stop. Experiments validate that χ_0 surpasses the state-of-the-art π_0.5 in success rate by nearly 250%, with only 20 hours of data and 8 A100 GPUs. Code, data, and models will be released to facilitate the community.

  14. EgoHumanoid: Unlocking In-the-Wild Loco-Manipulation with Robot-Free Egocentric Demonstration

    Human demonstrations offer rich environmental diversity and scale naturally, making them an appealing alternative to robot teleoperation. While this paradigm has advanced robot-arm manipulation, its potential for the more challenging, data-hungry problem of humanoid loco-manipulation remains largely unexplored. We present EgoHumanoid, the first framework to co-train a vision-language-action policy using abundant egocentric human demonstrations together with a limited amount of robot data, enabling humanoids to perform loco-manipulation across diverse real-world environments. To bridge the embodiment gap between humans and robots, including discrepancies in physical morphology and viewpoint, we introduce a systematic alignment pipeline spanning from hardware design to data processing. A portable system for scalable human data collection is developed, and we establish practical collection protocols to improve transferability. At the core of our human-to-humanoid alignment pipeline lie two key components. The view alignment reduces visual domain discrepancies caused by camera height and perspective variation. The action alignment maps human motions into a unified, kinematically feasible action space for humanoid control. Extensive real-world experiments demonstrate that incorporating robot-free egocentric data significantly outperforms robot-only baselines by 51%, particularly in unseen environments. Our analysis further reveals which behaviors transfer effectively and the potential for scaling human data.

  15. DeepSight: An All-in-One LM Safety Toolkit

    As Large Models (LMs) develop rapidly, their safety is also a priority. In current Large Language Model (LLM) and Multimodal Large Language Model (MLLM) safety workflows, evaluation, diagnosis, and alignment are often handled by separate tools. Safety evaluation can only locate external behavioral risks but cannot uncover internal root causes; safety diagnosis often drifts from concrete risk scenarios and remains at the explainability level; and safety alignment lacks dedicated explanations of changes in internal mechanisms, potentially degrading general capabilities. To systematically address these issues, we propose an open-source project, DeepSight, that practices a new integrated safety evaluation-diagnosis paradigm. DeepSight is a low-cost, reproducible, efficient, and highly scalable large-scale model safety evaluation project consisting of an evaluation toolkit, DeepSafe, and a diagnosis toolkit, DeepScan. By unifying task and data protocols, we build a connection between the two stages and transform safety evaluation from black-box observation into white-box insight. DeepSight is also the first open-source toolkit that supports frontier AI risk evaluation and joint safety evaluation and diagnosis.

Solidot (15)

  1. China successfully tests a new reusable rocket

    China's manned space agency announced that on February 11 it successfully completed a low-altitude demonstration flight of the Long March 10 launch vehicle system together with a maximum-dynamic-pressure escape test of the Mengzhou crewed spacecraft system. After separation, the rocket's first stage re-ignited its engines, decelerated, and descended slowly to a spot near the waiting recovery barge; the stage was successfully recovered on February 13, China's first at-sea search and recovery of a launch vehicle and an important step toward rocket reusability. The Long March 10 is intended primarily for crewed lunar exploration missions, and this test used a scaled-down version; the Mengzhou spacecraft will replace the Shenzhou currently in service. Both the Long March 10 first stage and Mengzhou are designed to be reused multiple times.

  2. Scientists warn Earth is approaching climate tipping points

    Scientists warn that Earth is approaching climate tipping points beyond which global warming could spiral out of control, locking the world into a hellish "hothouse Earth" climate utterly unlike the mild conditions human civilization has known for the past 11,000 years. Global temperatures have risen only about 1.3°C in recent years, yet extreme weather is already claiming large numbers of lives and destroying countless livelihoods worldwide. At 3-4°C of warming, the economy and society could no longer function as we know them. Scientists say it is hard to predict when a tipping point will be triggered; what matters most is taking preventive action and drastically cutting fossil fuel consumption.

  3. When superintelligence becomes a faith, we need to talk about pacing

    Nala Ginrut writes: In today's technical discourse, if you spend enough time reading debates coming out of Silicon Valley, you quickly notice an almost monolithic ordering of values: bigger scale, stronger compute, more general models, faster iteration. But when you turn your gaze away from the stage lights of product launches and down to the ground of real society, another question becomes ever more pressing: if intelligence really does cross a critical threshold in short order, is our social structure ready for it?

  4. Measles is making a comeback

    In many countries measles had become so rare that some doctors had never seen a single case, but that is changing. The US reported more than 2,000 measles cases last year, a 30-year record, and 2026 case counts may exceed 2025's. In January, six countries including the UK, Spain, and Austria lost their official measles-free certification; Canada lost its measles-free status last November, and the US is expected to follow this April. Measles is extremely contagious, causing fever, cough, and rash, and can even be fatal. In a fully susceptible population, each measles patient infects 12-18 people on average, and up to 90% of unimmunized people exposed to an infected person will contract the disease. Fortunately the measles vaccine is highly effective: one dose confers immunity in 93% of recipients, two doses raise protection to 97%, and for most people the protection is lifelong. When 92%-94% of a population is immune through vaccination or prior infection, the virus can no longer spread, a phenomenon known as herd immunity. In the US, however, vaccination rates among kindergarteners fell from 95.2% in the 2019-2020 school year to 92.5% in 2024-2025, opening the door to outbreaks.
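The 92%-94% herd-immunity figure quoted in this item is consistent with the classic threshold formula 1 - 1/R0 applied to the reported R0 range of 12-18:

```python
# Classic herd-immunity threshold: the immune fraction above which each case
# infects fewer than one susceptible person on average.
def herd_immunity_threshold(r0):
    return 1 - 1 / r0

low = herd_immunity_threshold(12)    # ~0.917, i.e. ~92%
high = herd_immunity_threshold(18)   # ~0.944, i.e. ~94%
```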

  5. First observation of a star collapsing directly into a black hole

    Astronomers have for the first time fully observed a massive star ending its life not in a supernova explosion but by collapsing directly into a black hole. The star, M31-2014-DS1, lies in the Andromeda galaxy about 2.5 million light-years from Earth. A Caltech team analyzed 2005-2023 observations from NASA's NEOWISE mission and several other ground- and space-based telescopes, finding that the star's infrared emission brightened anomalously starting in 2014 and then dropped sharply in 2016, with the entire dimming lasting less than a year. By 2022-2023 the star was essentially invisible in visible and near-infrared light, at one ten-thousandth of its former brightness, with only a faint signal surviving in the thermally brighter mid-infrared, itself down to a tenth of its previous level. The team argues that this abrupt dimming and eventual disappearance strongly indicates the star's core underwent gravitational collapse and formed a black hole. Ordinarily, when a massive star exhausts its nuclear fuel, the core first collapses into a neutron star, and a neutrino burst drives an outward shock wave that triggers a supernova. Theory predicts, however, that if the shock fails to eject the outer layers, the material falls back onto the neutron star, collapsing it further into a black hole. This observation provides the first direct evidence of that process.

  6. The Yangtze fishing ban is easing ecosystem decline

    According to a study published in Science, researchers report that the Yangtze, whose ecosystem had been deteriorating for decades, is showing early signs of recovery under a comprehensive ten-year commercial fishing ban. Since the 1950s, China's rapid economic development has severely eroded freshwater biodiversity in the Yangtze, its largest and longest river, driven mainly by decades of overfishing and habitat degradation. Despite heavy government investment in ecological protection and water-quality improvement, biodiversity kept declining, raising doubts about the effectiveness of conventional restoration measures. In response, in 2021 China imposed an unprecedented ten-year fishing ban across the entire Yangtze basin, backed by strict enforcement and coordinated environmental management. Analyzing data from 2018 to 2023, the researchers assessed fish communities in Yangtze waters before and after the ban took effect. The results show early signs of ecological recovery: fish biomass has more than doubled and species richness has risen modestly. Larger, higher-trophic-level species have recovered especially strongly, in both abundance and condition, compared with before the ban. Populations of several endangered and migratory species, as well as the critically endangered Yangtze finless porpoise, are also rebounding.

  7. Linux Mint considers a longer development cycle

    Linux Mint, the Ubuntu-based distribution, is considering slowing its release cadence. Ubuntu ships a new version every six months, and Linux Mint's cycle is similar. Project co-founder Clem Lefebvre noted that with a new release every six months, plus LMDE on top, the team spends far more time on testing, bug fixing, and releasing than on development. Linux Mint is considering changing this and adopting a longer development cycle, with more details to be announced.

  8. ICE deploys an anti-drone laser, FAA declares emergency airspace closure

    On Tuesday night the FAA (Federal Aviation Administration) abruptly announced a 10-day closure of the airspace around El Paso, Texas, only to just as abruptly lift it on Wednesday morning. Trump administration officials claimed the move was a response to a sudden incursion by Mexican cartel drones, but people familiar with the matter say the real reason was that ICE had deployed an anti-drone laser borrowed from the Department of Defense without giving aviation officials enough time to assess the risk to commercial aircraft. ICE used the laser to hit a target it believed was a cartel drone; it turned out to be a balloon.

  9. Highguard developer lays off most of its staff

    Wildlight Entertainment, developer of Highguard, confirmed the layoffs but did not say how many employees were affected; developers at the company say most of the staff were let go. Highguard, a raid-themed hero shooter, launched on January 26 and at one point drew 97,000 concurrent players, but the surge did not last: within just 17 days concurrent players had plunged to around 2,200. For a free-to-play PvP game that depends on long-term live operation, the outcome may already be sealed.

  10. Reprogramming specific neurons restores memory function in mice

    A Swiss research team used partial reprogramming to transiently activate three key genes, Oct4, Sox2, and Klf4 (collectively OSK), a factor set previously shown to partially reverse signs of cellular aging. Using adeno-associated viruses as vectors and precise brain injections, they delivered two components -- a fluorescent system labeling learning-activated neurons and a time switch for controllable OSK expression -- into two key brain regions: the dentate gyrus of the hippocampus, which governs the formation and retrieval of recent memories, and the medial prefrontal cortex, which handles remote memories and recall. In aged mice, briefly activating OSK expression in hippocampal engram neurons was enough to restore memory performance to the level of young mice, while targeting the prefrontal cortex revived remote memories formed weeks earlier.

  11. Total human energy expenditure is constrained

    According to a study published in Current Biology, total energy expenditure in humans and other animals is constrained. This is known as the constrained model of energy expenditure, which holds that the body's total energy budget is limited and actively kept within a relatively stable range. When we sharply increase energy output through exercise, the body quietly compensates by cutting energy spending elsewhere, for example by lowering the basal metabolic rate or sleeping metabolic rate, or by reducing energy devoted to internal physiological activity such as cellular repair and immune function. The researchers suggest the constrained model may stem from an ancestral survival strategy: in an era of unreliable food supply, overspending energy could be life-threatening, so the body evolved a total-energy control system that keeps overall expenditure within a safe range. This explains why, even when modern people run a few extra kilometers a day, weight loss is often slow.

  12. HP launches a gaming laptop subscription service

    HP has launched the OMEN Gaming Subscription, a service that lets users rent a gaming laptop for a monthly fee. Users never own the hardware, and renting long enough costs more than the laptop's retail price. Available machines include the low-end HP Victus 15 (RTX 4050 mobile GPU, Ryzen 7 8845HS CPU, 16 GB RAM, 1 TB SSD) at $50/month against a $950 retail price, meaning two years of rent already exceeds retail; and the high-end HP Omen Max 16 (Intel Core Ultra 9 CPU, RTX 5080 mobile GPU, 32 GB RAM, 1 TB SSD) at $130/month, currently $2,110 retail. The upside of forgoing ownership includes annual hardware upgrades, next-day replacement, 24/7 support, and continuous warranty coverage. Subscribers cannot cancel mid-term without paying a steep early-termination fee: $550 for the Victus 15 and $1,430 for the Omen Max 16. Only after subscribing for at least 12 months can a user cancel, starting in month 13, without a termination fee.
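The break-even claim in this item can be checked directly from the reported prices:

```python
# Monthly rental vs. retail price, using the figures reported above.
victus_monthly, victus_retail = 50, 950
omen_monthly, omen_retail = 130, 2110

# Months until cumulative rent first exceeds the retail price.
victus_breakeven = victus_retail // victus_monthly + 1   # 20 months
omen_breakeven = omen_retail // omen_monthly + 1         # 17 months

# Two years of Victus 15 rent vs. buying it outright.
two_year_victus_cost = 24 * victus_monthly               # $1,200 > $950 retail
```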

  13. Most Americans won't pay for news

    According to a Pew Research Center report released Wednesday, most Americans do not pay for news. Pew surveyed 3,560 US adults last December and found no consensus on how important it is to follow the news, but a clear consensus on paying for it: they don't. 83% of respondents had not paid for any news source in the past year via subscription, donation, or membership; they can get news through free channels, and paying for it strikes them as a luxury. The groups most likely to pay for news are high earners (30%), adults with postgraduate degrees (35%), and liberal Democrats (29%). Only 8% of respondents believe Americans have a responsibility to pay for news. The groups least likely to see paying for news as a personal responsibility are low-income people, Republicans and Republican leaners, adults under 30, and those with a high school education or less.

  14. Russia blocks WhatsApp

    Meta-owned WhatsApp says Russia is attempting to block the messaging app entirely. WhatsApp and Telegram are Russia's most popular messaging apps, each with over 100 million users. Russia's official news agency TASS reported earlier that Russia expects to permanently ban WhatsApp in 2026. Russia has designated WhatsApp parent Meta an extremist organization, and officials say blocking WhatsApp is entirely justified. Russia is pushing its homegrown messaging app Max, billed as Russia's WeChat: it combines instant messaging with government services but has no encryption.

  15. Anna's Archive quietly releases Spotify music files

    Anna's Archive shocked the music industry last year when it announced it had scraped music files from streaming giant Spotify. It subsequently released Spotify's metadata but not the music files themselves; even so, Spotify and record labels sued Anna's Archive, costing it several domain names including its .org. On February 8, dozens of new Spotify torrents were published, each containing roughly 60,000 files, about 2.8 million files in total, around 6 TB of music. Anna's Archive has previously said it archived 300 TB of Spotify music files, 86 million tracks in all, and it is expected to release more Spotify music files in the future.