OrangeBot.AI Digest — 2026-01-09
57 headlines across 8 sources, aggregated for this day.
Hacker News(15)
- Exercise can be nearly as effective as therapy for depression (www.sciencedaily.com)
- JavaScript Demos in 140 Characters (beta.dwitter.net)
- SendGrid isn’t emailing about ICE or BLM – it’s a phishing attack (fredbenenson.com)
- Show HN: I made a memory game to teach you to play piano by ear (lend-me-your-ears.specr.net)
- The Vietnam government has banned rooted phones from using any banking app (xdaforums.com)
- Cloudflare CEO on the Italy fines (twitter.com)
- How will the miracle happen today? (kk.org)
- London–Calcutta bus service (en.wikipedia.org)
- Kagi releases alpha version of Orion for Linux (help.kagi.com)
- Linux Runs on Raspberry Pi RP2350's Hazard3 RISC-V Cores (2024) (www.hackster.io)
- MCP is a fad (tombedor.dev)
- How Samba Was Written (2003) (download.samba.org)
- European Commission issues call for evidence on open source (lwn.net)
- What happened to WebAssembly (emnudge.dev)
- Mathematics for Computer Science (2018) [pdf] (courses.csail.mit.edu)
GitHub Trending(12)
- ChromeDevTools / chrome-devtools-mcp
Chrome DevTools for coding agents
- anthropics / claude-code
Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster by executing routine tasks, explaining complex code, and handling git workflows - all through natural language commands.
- obra / superpowers
Claude Code superpowers: core skills library
- tailwindlabs / tailwindcss
A utility-first CSS framework for rapid UI development.
- netbirdio / netbird
Connect your devices into a secure WireGuard®-based overlay network with SSO, MFA and granular access controls.
- C4illin / ConvertX
💾 Self-hosted online file converter. Supports 1000+ formats ⚙️
- Lightricks / ComfyUI-LTXVideo
LTX-Video Support for ComfyUI
- MiroMindAI / MiroThinker
MiroThinker is an open-source search agent model, built for tool-augmented reasoning and real-world information seeking, aiming to match the deep research experience of OpenAI Deep Research and Gemini Deep Research.
- google / googletest
GoogleTest - Google Testing and Mocking Framework
- bytedance / UI-TARS-desktop
The Open-Source Multimodal AI Agent Stack: Connecting Cutting-Edge AI Models and Agent Infra
- Johnshall / Shadowrocket-ADBlock-Rules-Forever
Provides a variety of Shadowrocket rule sets with strong ad-filtering. Rules are rebuilt daily at 08:00.
- anomalyco / opencode
The open source coding agent.
Hugging Face(15)
- GDPO: Group reward-Decoupled Normalization Policy Optimization for Multi-reward RL Optimization
As language models become increasingly capable, users expect them to provide not only accurate responses but also behaviors aligned with diverse human preferences across a variety of scenarios. To achieve this, reinforcement learning (RL) pipelines have begun incorporating multiple rewards, each capturing a distinct preference, to guide models toward these desired behaviors. However, recent work has defaulted to applying Group Relative Policy Optimization (GRPO) in multi-reward settings without examining its suitability. In this paper, we demonstrate that directly applying GRPO to normalize distinct rollout reward combinations causes them to collapse into identical advantage values, reducing the resolution of the training signal and resulting in suboptimal convergence and, in some cases, early training failure. We then introduce Group reward-Decoupled Normalization Policy Optimization (GDPO), a new policy optimization method that resolves these issues by decoupling the normalization of individual rewards, more faithfully preserving their relative differences and enabling more accurate multi-reward optimization, along with substantially improved training stability. We compare GDPO with GRPO across three tasks: tool calling, math reasoning, and coding reasoning, evaluating both correctness metrics (accuracy, bug ratio) and constraint adherence metrics (format, length). Across all settings, GDPO consistently outperforms GRPO, demonstrating its effectiveness and generalizability for multi-reward reinforcement learning optimization.
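The collapse the abstract describes can be seen in a toy example: normalizing the summed reward within a group (GRPO-style) assigns identical advantages to rollouts whose distinct reward combinations happen to sum to the same total, while normalizing each reward channel separately before combining (the GDPO idea, as we read the abstract) keeps them apart. A minimal numpy sketch (function names are ours, not the paper's):

```python
import numpy as np

def grpo_advantages(rewards):
    # GRPO-style: sum the reward channels per rollout, then normalize
    # the combined scalar within the group.
    total = rewards.sum(axis=1)
    return (total - total.mean()) / (total.std() + 1e-8)

def gdpo_advantages(rewards):
    # GDPO-style (our sketch): normalize each reward channel across the
    # group separately, then combine, preserving per-reward differences.
    per_reward = (rewards - rewards.mean(axis=0)) / (rewards.std(axis=0) + 1e-8)
    return per_reward.sum(axis=1)

# Four rollouts, two reward channels. Rollouts 0 and 2 have different
# reward combinations but the same total, so GRPO gives them identical
# advantages; GDPO still distinguishes them.
rollouts = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 0.0]])
```

On this data, `grpo_advantages(rollouts)` gives rollouts 0 and 2 the same advantage, while `gdpo_advantages(rollouts)` separates them, which is the extra resolution in the training signal the paper argues for.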
- Learnable Multipliers: Freeing the Scale of Language Model Matrix Layers
Applying weight decay (WD) to matrix layers is standard practice in large-language-model pretraining. Prior work suggests that stochastic gradient noise induces a Brownian-like expansion of the weight matrices W, whose growth is counteracted by WD, leading to a WD-noise equilibrium with a certain weight norm ||W||. In this work, we view the equilibrium norm as a harmful artifact of the training procedure, and address it by introducing learnable multipliers to learn the optimal scale. First, we attach a learnable scalar multiplier to W and confirm that the WD-noise equilibrium norm is suboptimal: the learned scale adapts to data and improves performance. We then argue that individual row and column norms are similarly constrained, and free their scale by introducing learnable per-row and per-column multipliers. Our method can be viewed as a learnable, more expressive generalization of muP multipliers. It outperforms a well-tuned muP baseline, reduces the computational overhead of multiplier tuning, and surfaces practical questions such as forward-pass symmetries and the width-scaling of the learned multipliers. Finally, we validate learnable multipliers with both the Adam and Muon optimizers, where they show improvements in downstream evaluations matching the gain from switching from Adam to Muon.
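The layer structure is straightforward to picture: weight decay continues to regularize W, while separate learnable scale parameters absorb the optimal overall, per-row, and per-column norms. A minimal numpy sketch of such a layer (our illustration of the idea, not the paper's code; the forward pass computes s · diag(row) · (W · diag(col)) · x):

```python
import numpy as np

class ScaledLinear:
    """Linear layer whose scale is freed from the WD-noise equilibrium:
    weight decay would act on W, while a learnable scalar plus per-row
    and per-column multipliers learn the optimal scales directly."""

    def __init__(self, d_in, d_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)
        self.s = 1.0                   # learnable scalar multiplier
        self.row = np.ones(d_out)      # learnable per-row multipliers
        self.col = np.ones(d_in)       # learnable per-column multipliers

    def __call__(self, x):
        # effective weight: s * diag(row) @ W @ diag(col)
        return self.s * self.row * ((self.W * self.col) @ x)
```

In training, `s`, `row`, and `col` would be optimized alongside `W` but excluded from weight decay, so the equilibrium norm of `W` no longer pins the layer's effective scale.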
- RL-AWB: Deep Reinforcement Learning for Auto White Balance Correction in Low-Light Night-time Scenes
Nighttime color constancy remains a challenging problem in computational photography due to low-light noise and complex illumination conditions. We present RL-AWB, a novel framework combining statistical methods with deep reinforcement learning for nighttime white balance. Our method begins with a statistical algorithm tailored for nighttime scenes, integrating salient gray pixel detection with novel illumination estimation. Building on this foundation, we develop the first deep reinforcement learning approach for color constancy that leverages the statistical algorithm as its core, mimicking professional AWB tuning experts by dynamically optimizing parameters for each image. To facilitate cross-sensor evaluation, we introduce the first multi-sensor nighttime dataset. Experiment results demonstrate that our method achieves superior generalization capability across low-light and well-illuminated images. Project page: https://ntuneillee.github.io/research/rl-awb/
- RoboVIP: Multi-View Video Generation with Visual Identity Prompting Augments Robot Manipulation
The diversity, quantity, and quality of manipulation data are critical for training effective robot policies. However, due to hardware and physical setup constraints, collecting large-scale real-world manipulation data remains difficult to scale across diverse environments. Recent work uses text-prompt conditioned image diffusion models to augment manipulation data by altering the backgrounds and tabletop objects in the visual observations. However, these approaches often overlook the practical need for multi-view and temporally coherent observations required by state-of-the-art policy models. Further, text prompts alone cannot reliably specify the scene setup. To provide the diffusion model with explicit visual guidance, we introduce visual identity prompting, which supplies exemplar images as conditioning inputs to guide the generation of the desired scene setup. To this end, we also build a scalable pipeline to curate a visual identity pool from large robotics datasets. Using our augmented manipulation data to train downstream vision-language-action and visuomotor policy models yields consistent performance gains in both simulation and real-robot settings.
- RelayLLM: Efficient Reasoning via Collaborative Decoding
The use of Large Language Models (LLMs) for complex reasoning is often hindered by high computational costs and latency, while resource-efficient Small Language Models (SLMs) typically lack the necessary reasoning capacity. Existing collaborative approaches, such as cascading or routing, operate at a coarse granularity by offloading entire queries to LLMs, resulting in significant computational waste when the SLM is capable of handling the majority of reasoning steps. To address this, we propose RelayLLM, a novel framework for efficient reasoning via token-level collaborative decoding. Unlike routers, RelayLLM empowers the SLM to act as an active controller that dynamically invokes the LLM only for critical tokens via a special command, effectively "relaying" the generation process. We introduce a two-stage training framework, including warm-up and Group Relative Policy Optimization (GRPO), to teach the model to balance independence with strategic help-seeking. Empirical results across six benchmarks demonstrate that RelayLLM achieves an average accuracy of 49.52%, effectively bridging the performance gap between the two models. Notably, this is achieved by invoking the LLM for only 1.07% of the total generated tokens, offering a 98.2% cost reduction compared to performance-matched random routers.
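The control flow the abstract describes, where the small model decodes normally and hands over single critical tokens via a special command, can be sketched as a toy decoding loop (the relay token, function names, and stub model interface are ours):

```python
def relay_decode(slm_step, llm_step, prompt, max_new=20, relay_token="<RELAY>"):
    """Token-level collaboration sketch: the small model (slm_step) stays
    in control and emits relay_token when it wants the large model
    (llm_step) to produce the next, critical token. Both callables map a
    token list to a single next token (stand-ins for real models)."""
    tokens = list(prompt)
    llm_calls = 0
    while len(tokens) - len(prompt) < max_new:
        nxt = slm_step(tokens)
        if nxt == relay_token:
            nxt = llm_step(tokens)  # the LLM generates only this one token
            llm_calls += 1
        tokens.append(nxt)
        if nxt == "<EOS>":
            break
    return tokens[len(prompt):], llm_calls
```

Counting `llm_calls` against the total output length gives the invocation rate the paper reports (about 1% of tokens in their experiments).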
- AT^2PO: Agentic Turn-based Policy Optimization via Tree Search
LLM agents have emerged as powerful systems for tackling multi-turn tasks by interleaving internal reasoning and external tool interactions. Agentic Reinforcement Learning has recently drawn significant research attention as a critical post-training paradigm to further refine these capabilities. In this paper, we present AT^2PO (Agentic Turn-based Policy Optimization via Tree Search), a unified framework for multi-turn agentic RL that addresses three core challenges: limited exploration diversity, sparse credit assignment, and misaligned policy optimization. AT^2PO introduces a turn-level tree structure that jointly enables Entropy-Guided Tree Expansion for strategic exploration and Turn-wise Credit Assignment for fine-grained reward propagation from sparse outcomes. Complementing this, we propose Agentic Turn-based Policy Optimization, a turn-level learning objective that aligns policy updates with the natural decision granularity of agentic interactions. ATPO is orthogonal to tree search and can be readily integrated into any multi-turn RL pipeline. Experiments across seven benchmarks demonstrate consistent improvements over the state-of-the-art baseline by up to 1.84 percentage points on average, with ablation studies validating the effectiveness of each component. Our code is available at https://github.com/zzfoutofspace/ATPO.
- Token-Level LLM Collaboration via FusionRoute
Large language models (LLMs) exhibit strengths across diverse domains. However, achieving strong performance across these domains with a single general-purpose model typically requires scaling to sizes that are prohibitively expensive to train and deploy. On the other hand, while smaller domain-specialized models are much more efficient, they struggle to generalize beyond their training distributions. To address this dilemma, we propose FusionRoute, a robust and effective token-level multi-LLM collaboration framework in which a lightweight router simultaneously (i) selects the most suitable expert at each decoding step and (ii) contributes a complementary logit that refines or corrects the selected expert's next-token distribution via logit addition. Unlike existing token-level collaboration methods that rely solely on fixed expert outputs, we provide a theoretical analysis showing that pure expert-only routing is fundamentally limited: unless strong global coverage assumptions hold, it cannot in general realize the optimal decoding policy. By augmenting expert selection with a trainable complementary generator, FusionRoute expands the effective policy class and enables recovery of optimal value functions under mild conditions. Empirically, across both Llama-3 and Gemma-2 families and diverse benchmarks spanning mathematical reasoning, code generation, and instruction following, FusionRoute outperforms both sequence- and token-level collaboration, model merging, and direct fine-tuning, while remaining competitive with domain experts on their respective tasks.
- Few Tokens Matter: Entropy Guided Attacks on Vision-Language Models
Vision-language models (VLMs) achieve remarkable performance but remain vulnerable to adversarial attacks. Entropy, a measure of model uncertainty, is strongly correlated with the reliability of VLM. Prior entropy-based attacks maximize uncertainty at all decoding steps, implicitly assuming that every token contributes equally to generation instability. We show instead that a small fraction (about 20%) of high-entropy tokens, i.e., critical decision points in autoregressive generation, disproportionately governs output trajectories. By concentrating adversarial perturbations on these positions, we achieve semantic degradation comparable to global methods while using substantially smaller budgets. More importantly, across multiple representative VLMs, such selective attacks convert 35-49% of benign outputs into harmful ones, exposing a more critical safety risk. Remarkably, these vulnerable high-entropy forks recur across architecturally diverse VLMs, enabling feasible transferability (17-26% harmful rates on unseen targets). Motivated by these findings, we propose Entropy-bank Guided Adversarial attacks (EGA), which achieves competitive attack success rates (93-95%) alongside high harmful conversion, thereby revealing new weaknesses in current VLM safety mechanisms.
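The core selection step, which spends the perturbation budget only on the roughly 20% of decoding positions with the highest predictive entropy, is easy to sketch. A toy numpy version (the function name is ours; a real attack would perturb the image input with gradients taken at these positions):

```python
import numpy as np

def high_entropy_positions(token_probs, frac=0.2):
    """Return the decoding steps with the highest predictive entropy:
    the 'critical decision points' that disproportionately govern the
    output trajectory. token_probs has shape (steps, vocab_size)."""
    entropy = -(token_probs * np.log(token_probs + 1e-12)).sum(axis=1)
    k = max(1, int(frac * len(entropy)))
    return np.argsort(entropy)[-k:][::-1]  # highest-entropy steps first
```

A near-uniform next-token distribution (an uncertain fork) scores far higher than a confidently peaked one, so those steps are selected first.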
- VideoAuto-R1: Video Auto Reasoning via Thinking Once, Answering Twice
Chain-of-thought (CoT) reasoning has emerged as a powerful tool for multimodal large language models on video understanding tasks. However, its necessity and advantages over direct answering remain underexplored. In this paper, we first demonstrate that for RL-trained video models, direct answering often matches or even surpasses CoT performance, despite CoT producing step-by-step analyses at a higher computational cost. Motivated by this, we propose VideoAuto-R1, a video understanding framework that adopts a reason-when-necessary strategy. During training, our approach follows a Thinking Once, Answering Twice paradigm: the model first generates an initial answer, then performs reasoning, and finally outputs a reviewed answer. Both answers are supervised via verifiable rewards. During inference, the model uses the confidence score of the initial answer to determine whether to proceed with reasoning. Across video QA and grounding benchmarks, VideoAuto-R1 achieves state-of-the-art accuracy with significantly improved efficiency, reducing the average response length by ~3.3x, e.g., from 149 to just 44 tokens. Moreover, we observe a low rate of thinking-mode activation on perception-oriented tasks, but a higher rate on reasoning-intensive tasks. This suggests that explicit language-based reasoning is generally beneficial but not always necessary.
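The inference-time gate is simple: answer directly, and only pay for reasoning when the first answer's confidence is low. A sketch of the reason-when-necessary policy (the interface and threshold value are assumptions on our part):

```python
def answer_with_optional_reasoning(direct_fn, reason_fn, query, threshold=0.8):
    """'Thinking once, answering twice' at inference: produce a direct
    answer with a confidence score; invoke the costly reasoning path only
    when confidence falls below the threshold, returning a reviewed answer.
    direct_fn(query) -> (answer, confidence); reason_fn(query, answer) -> answer."""
    answer, confidence = direct_fn(query)
    if confidence >= threshold:
        return answer, False        # reasoning skipped
    return reason_fn(query, answer), True
```

Tracking how often the second return value is True recovers the thinking-mode activation rate the paper reports: low on perception-oriented tasks, higher on reasoning-intensive ones.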
- VerseCrafter: Dynamic Realistic Video World Model with 4D Geometric Control
Video world models aim to simulate dynamic, real-world environments, yet existing methods struggle to provide unified and precise control over camera and multi-object motion, as videos inherently represent dynamics in the projected 2D image plane. To bridge this gap, we introduce VerseCrafter, a 4D-aware video world model that enables explicit and coherent control over both camera and object dynamics within a unified 4D geometric world state. Our approach is centered on a novel 4D Geometric Control representation, which encodes the world state through a static background point cloud and per-object 3D Gaussian trajectories. This representation captures not only an object's path but also its probabilistic 3D occupancy over time, offering a flexible, category-agnostic alternative to rigid bounding boxes or parametric models. These 4D controls are rendered into conditioning signals for a pretrained video diffusion model, enabling the generation of high-fidelity, view-consistent videos that precisely adhere to the specified dynamics. However, another major challenge lies in the scarcity of large-scale training data with explicit 4D annotations. We address this by developing an automatic data engine that extracts the required 4D controls from in-the-wild videos, allowing us to train our model on a massive and diverse dataset.
- The Illusion of Specialization: Unveiling the Domain-Invariant "Standing Committee" in Mixture-of-Experts Models
Mixture of Experts models are widely assumed to achieve domain specialization through sparse routing. In this work, we question this assumption by introducing COMMITTEEAUDIT, a post hoc framework that analyzes routing behavior at the level of expert groups rather than individual experts. Across three representative models and the MMLU benchmark, we uncover a domain-invariant Standing Committee. This is a compact coalition of routed experts that consistently captures the majority of routing mass across domains, layers, and routing budgets, even when architectures already include shared experts. Qualitative analysis further shows that Standing Committees anchor reasoning structure and syntax, while peripheral experts handle domain-specific knowledge. These findings reveal a strong structural bias toward centralized computation, suggesting that specialization in Mixture of Experts models is far less pervasive than commonly believed. This inherent bias also indicates that current training objectives, such as load-balancing losses that enforce uniform expert utilization, may be working against the model's natural optimization path, thereby limiting training efficiency and performance.
- Plenoptic Video Generation
Camera-controlled generative video re-rendering methods, such as ReCamMaster, have achieved remarkable progress. However, despite their success in the single-view setting, these works often struggle to maintain consistency across multi-view scenarios. Ensuring spatio-temporal coherence in hallucinated regions remains challenging due to the inherent stochasticity of generative models. To address this, we introduce PlenopticDreamer, a framework that synchronizes generative hallucinations to maintain spatio-temporal memory. The core idea is to train a multi-in-single-out video-conditioned model in an autoregressive manner, aided by a camera-guided video retrieval strategy that adaptively selects salient videos from previous generations as conditional inputs. In addition, our training incorporates progressive context-scaling to improve convergence, self-conditioning to enhance robustness against long-range visual degradation caused by error accumulation, and a long-video conditioning mechanism to support extended video generation. Extensive experiments on the Basic and Agibot benchmarks demonstrate that PlenopticDreamer achieves state-of-the-art video re-rendering, delivering superior view synchronization, high-fidelity visuals, accurate camera control, and diverse view transformations (e.g., third-person to third-person, and head-view to gripper-view in robotic manipulation). Project page: https://research.nvidia.com/labs/dir/plenopticdreamer/
- Agent-as-a-Judge
LLM-as-a-Judge has revolutionized AI evaluation by leveraging large language models for scalable assessments. However, as evaluands become increasingly complex, specialized, and multi-step, the reliability of LLM-as-a-Judge has become constrained by inherent biases, shallow single-pass reasoning, and the inability to verify assessments against real-world observations. This has catalyzed the transition to Agent-as-a-Judge, where agentic judges employ planning, tool-augmented verification, multi-agent collaboration, and persistent memory to enable more robust, verifiable, and nuanced evaluations. Despite the rapid proliferation of agentic evaluation systems, the field lacks a unified framework to navigate this shifting landscape. To bridge this gap, we present the first comprehensive survey tracing this evolution. Specifically, we identify key dimensions that characterize this paradigm shift and establish a developmental taxonomy. We organize core methodologies and survey applications across general and professional domains. Furthermore, we analyze frontier challenges and identify promising research directions, ultimately providing a clear roadmap for the next generation of agentic evaluation.
- CoV: Chain-of-View Prompting for Spatial Reasoning
Embodied question answering (EQA) in 3D environments often requires collecting context that is distributed across multiple viewpoints and partially occluded. However, most recent vision-language models (VLMs) are constrained to a fixed and finite set of input views, which limits their ability to acquire question-relevant context at inference time and hinders complex spatial reasoning. We propose Chain-of-View (CoV) prompting, a training-free, test-time reasoning framework that transforms a VLM into an active viewpoint reasoner through a coarse-to-fine exploration process. CoV first employs a View Selection agent to filter redundant frames and identify question-aligned anchor views. It then performs fine-grained view adjustment by interleaving iterative reasoning with discrete camera actions, obtaining new observations from the underlying 3D scene representation until sufficient context is gathered or a step budget is reached. We evaluate CoV on OpenEQA across four mainstream VLMs and obtain an average +11.56% improvement in LLM-Match, with a maximum gain of +13.62% on Qwen3-VL-Flash. CoV further exhibits test-time scaling: increasing the minimum action budget yields an additional +2.51% average improvement, peaking at +3.73% on Gemini-2.5-Flash. On ScanQA and SQA3D, CoV delivers strong performance (e.g., 116 CIDEr / 31.9 EM@1 on ScanQA and 51.1 EM@1 on SQA3D). Overall, these results suggest that question-aligned view selection coupled with open-view search is an effective, model-agnostic strategy for improving spatial reasoning in 3D EQA without additional training.
- Re-Align: Structured Reasoning-guided Alignment for In-Context Image Generation and Editing
In-context image generation and editing (ICGE) enables users to specify visual concepts through interleaved image-text prompts, demanding precise understanding and faithful execution of user intent. Although recent unified multimodal models exhibit promising understanding capabilities, these strengths often fail to transfer effectively to image generation. We introduce Re-Align, a unified framework that bridges the gap between understanding and generation through structured reasoning-guided alignment. At its core lies the In-Context Chain-of-Thought (IC-CoT), a structured reasoning paradigm that decouples semantic guidance and reference association, providing clear textual target and mitigating confusion among reference images. Furthermore, Re-Align introduces an effective RL training scheme that leverages a surrogate reward to measure the alignment between structured reasoning text and the generated image, thereby improving the model's overall performance on ICGE tasks. Extensive experiments verify that Re-Align outperforms competitive methods of comparable model scale and resources on both in-context image generation and editing tasks.
Solidot(15)
- Urban traffic is the main source of atmospheric plastic particles
Plastic particles are now everywhere. According to a study published in Science Advances, researchers at the Institute of Earth Environment, Chinese Academy of Sciences, analyzed microplastic and nanoplastic concentrations in the air of two major cities, Guangzhou and Xi'an. The two represent different city types: Guangzhou is a humid coastal metropolis, while Xi'an is a semi-arid inland city. Guangzhou's air contained 1.8 × 10⁵ microplastic and 5.0 × 10⁴ nanoplastic particles per cubic meter, versus 1.4 × 10⁵ and 3.0 × 10⁴ in Xi'an. The study also found that rainfall is the main mechanism removing plastic particles from the atmosphere.
- Eric Schmidt funds four telescope projects
Before World War II, most of the world's telescopes were funded by wealthy patrons interested in astronomy; after the war, governments provided most of the funding for astronomical instruments. That pattern may now be reversing. Former Google CEO Eric Schmidt and his wife Wendy Schmidt have announced funding for four telescope projects. The most striking is a space telescope named Lazuli: if launched and deployed successfully, it could succeed the Hubble Space Telescope with more advanced and more powerful observing capabilities. The instruments are collectively called the Schmidt Observatory System. The Schmidts did not disclose the funding amount, but it is likely at least $500 million.
- Memory makers' profits soar along with memory prices
Soaring memory and storage prices have hit DIY PC builders hard, while memory manufacturers post record profits. Samsung Electronics forecast operating profit of 19.9-20.1 trillion won (about $13.8 billion) for Q4 2025, up from just 6.49 trillion won in Q4 2024. Its memory business dominates its bottom line: in 2023, billions of dollars in memory-division losses dragged Samsung's profit sharply down. SK Hynix called its Q3 2025 revenue "its highest quarterly result to date," with operating profit of 11.38 trillion won (about $7.8 billion), up from 7.03 trillion won in Q3 2024. Micron's revenue for its fiscal Q1 2026 jumped to $5.24 billion from $1.87 billion a year earlier. Today's memory price spike is attributed to the combined effects of scalper hoarding and the AI boom.
- Why are TVs so cheap?
During Black Friday 2001, a 50-inch TV sold for $1,100 at Best Buy; today one costs under $200 (for a non-smart model). Over the past 25 years, the price per unit of screen area has fallen by more than 90%, chiefly because LCD technology evolved from a niche product into a mass-produced commodity. In 2004, LCDs held just 5% of the TV market; by 2018, their share exceeded 95%. The main driver of falling costs was the growth of substrate glass: first-generation mother glass measured about 12 × 16 inches, while Gen 10.5 measures 116 × 133 inches, nearly 100 times the area. This scaling delivered substantial savings, because equipment costs grew far more slowly than glass area.
- UnionTech chairman fires kernel engineer who asked what to do without a suit
According to reports on social media, the chairman of UnionTech fired a kernel engineer who asked what to do if he had no suit. UnionTech develops UOS, a Linux-based distribution. The company recently notified employees, on short notice, that they must bring their own suits to the annual meeting. The notice came out of nowhere, and a core engineer on the Linux kernel team asked in the group chat, "What if I don't have a suit?" The chairman then fired him. The incident led netizens to quip that UOS stands for Uniform Operating System rather than its intended Unity Operating System.
- Iran goes offline
According to monitoring by Netblocks, Iran is in the middle of a nationwide internet blackout that has now lasted more than 12 hours. Since December 2025, Iran has seen a wave of protests triggered by public anger over soaring inflation, rising food prices, and the sharp depreciation of the Iranian rial. The demonstrations, initially launched by shopkeepers and market vendors, have grown steadily since the new year; at least dozens of people have died and more than a thousand have been arrested.
- People who stop taking GLP-1 weight-loss drugs regain the weight within two years
If you lose weight with a GLP-1 drug such as Ozempic, you will need to stay on it to keep the weight off. According to a study published in the BMJ, people who stop taking the drugs regain their original weight in under two years. The study found that GLP-1 drugs produce an average weight loss of 8.3 kg, but after discontinuation users regain an average of 0.4 kg per month: 4.8 kg within a year, with full regain after about 1.7 years. The rebound is four times faster than after diet- or exercise-based weight loss. The researchers say this is not a flaw of the drugs but reflects the nature of obesity as a chronic, relapsing disease.
- Gmail gets deep Gemini AI integration
Even if you don't want AI features, Microsoft and Google will push AI into your hands anyway, adding one more user to their AI services to justify the massive AI investments and the case for investing even more. Google's official blog announced that Gmail has entered the Gemini era. The new Gemini-based features include AI Overviews for inbox search, Help Me Write for polishing or drafting emails, Suggested Replies for one-click responses, Proofread for grammar correction (paid subscribers only), and AI Inbox, which organizes the inbox by importance.
- Cloudflare denies the Venezuela BGP route leak was an anomaly
The day before the US raid on Venezuela, AS8048, the autonomous system of the country's state-owned telecom CANTV, experienced a BGP route leak. The incident fueled speculation that the US might have mounted a man-in-the-middle attack, rerouting BGP traffic to collect intelligence. US CDN provider Cloudflare responded to the speculation on its official blog, saying its data shows AS8048 has experienced 11 BGP route leaks since last December. That pattern points to deficiencies in AS8048's route import/export policies, suggesting the leak was likely a technical problem rather than malicious activity.
- Bose open sources before end of support
Last October, Bose announced it would end support for its SoundTouch Wi-Fi speakers and soundbars on February 18, 2026, after which the products would no longer receive security or software updates and would lose cloud connectivity and the companion app. Without the app, SoundTouch devices would also lose integrations such as Spotify. The SoundTouch line, launched in 2013 and 2015, was not cheap, ranging from $399 to $1,500, and the announcement angered Bose customers. Now Bose has some good news: after support ends, SoundTouch will still support AirPlay and Spotify Connect, and a May 6 update will add limited local control that no longer requires a cloud connection. Bose has also open sourced the SoundTouch API.
- Korean memory makers prepare another 70% price hike
Memory prices rose 50% in late 2025, and they are set to rise a further 70% in early 2026. Korean media report that the two largest memory suppliers, Samsung Electronics and SK Hynix, plan to raise server memory chip prices by up to 70%. Samsung, SK Hynix, and Micron dominate the global memory market, and to meet AI demand they have all shifted capacity toward the more profitable AI data center market, leaving PC and smartphone memory in short supply and driving prices sharply higher in a short time. IDC estimates the impact could last into 2027.
- Japan's Chubu Electric falsified earthquake risk data in nuclear safety review
Japan's nuclear regulator has canceled the safety reviews of two Chubu Electric Power reactors because the company falsified earthquake risk data. Since the 2011 Fukushima nuclear accident, fewer than a quarter of Japan's commercial reactors have been in operation. Chubu Electric had applied to restart units 3 and 4 of the Hamaoka nuclear plant, which sits in the projected epicentral zone of a potential Nankai Trough megaquake. The Nuclear Regulation Authority received a whistleblower report last February alleging that Chubu Electric had for years falsified earthquake risk data for the two reactors. The company has admitted the matter, explaining that when establishing the design-basis ground motion, "the wave closest to the average was to be selected as the representative wave from 20 sets of ground motions computed under different conditions, but the representative wave is suspected to have been deliberately chosen. The practice existed before 2018." President Kingo Hayashi said he is considering a thorough reorganization of the nuclear division.
- Kernel bugs take an average of 2.1 years to be found
According to the Linux kernel's git history, the kernel has fixed 125,183 bugs to date, and the average bug is discovered 2.1 years after being introduced. Lifetimes vary by subsystem: CAN bus driver bugs take an average of 4.2 years to find, and SCTP networking bugs 4 years. The longest-lived was a buffer overflow in ethtool that survived 20.7 years. Kernel security has improved markedly in recent years: bugs introduced in 2010 took nearly 10 years on average to find, while bugs introduced in 2024 are found within 5 months. Put another way, 0% of the bugs introduced in 2010 were found the same year, versus 69% for 2022.
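The headline statistic is a simple average once you have, for each fixed bug, the dates of the introducing and fixing commits; the kernel's "Fixes:" commit trailers make that pairing possible. A sketch of the averaging step (the function name is ours):

```python
from datetime import datetime

def mean_bug_lifetime_years(pairs):
    """Given (introduced, fixed) datetime pairs, e.g. recovered from the
    kernel git history by resolving each 'Fixes: <sha>' trailer to the
    commit that introduced the bug, return the mean lifetime in years."""
    days = [(fixed - introduced).days for introduced, fixed in pairs]
    return sum(days) / len(days) / 365.25

# One bug introduced on 2010-01-01 and fixed on 2012-02-07 lived ~2.1 years.
example = [(datetime(2010, 1, 1), datetime(2012, 2, 7))]
```

In a real analysis, the pairs would come from something like `git log --grep='Fixes:'` plus a lookup of each referenced commit's date.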
- US National Weather Service used AI to generate nonexistent towns
The US National Weather Service pulled an AI-generated weather forecast graphic after the AI invented towns that do not exist. The graphic claimed a 10% chance of strong winds in Orangeotild, Idaho, while Whata Bod would be unaffected; neither place exists, and both were fabricated by the AI. This is not the agency's first such mistake; it has been experimenting with AI for everything from nowcasting to graphic design. It says AI is generally not used for public-facing content, but such uses are not prohibited.
- Weekend catch-up sleep may help protect teens from depression
Research suggests that adolescents who make up on weekends for sleep lost during the week may see improved mental health. In the 16-24 age group, those who caught up on sleep on weekends had a 41% lower risk of depressive symptoms than those who did not. Adolescents are prone to sleep problems and at elevated risk of depression, and weekday sleep deprivation is widespread: schoolwork, socializing, extracurriculars, and part-time jobs eat into their time and energy, shortening sleep. The researchers analyzed data on 16- to 24-year-olds from the 2021-2023 National Health and Nutrition Examination Survey. Participants reported their weekday and weekend bed and wake times, from which the researchers computed weekend catch-up sleep: the difference between average weekend and average weekday sleep duration. The ideal schedule for adolescents is roughly an 11 p.m. bedtime and an 8 a.m. wake time, which conflicts with the early start times of many US high schools. Many sleep specialists and clinicians support public health initiatives to delay school start times.