
OrangeBot.AI Digest — 2026-01-05

53 headlines across 4 sources, aggregated for this day.

Hacker News (15)

  1. Google broke my heart (perishablepress.com)
  2. There were BGP anomalies during the Venezuela blackout (loworbitsecurity.com)
  3. Pebble Round 2 (repebble.com)
  4. Try to take my position: The best promotion advice I ever got (andrew.grahamyooll.com)
  5. X blames users for Grok-generated CSAM; no fixes announced (arstechnica.com)
  6. Show HN: Tailsnitch – A security auditor for Tailscale (github.com)
  7. Murder-suicide case shows OpenAI selectively hides data after users die (arstechnica.com)
  8. RevisionDojo, a YC startup, is running astroturfing campaigns targeting kids
  9. Show HN: DoNotNotify – log and intelligently block notifications on Android (donotnotify.com)
  10. All AI Videos Are Harmful (2025) (idiallo.com)
  11. I switched from VSCode to Zed (tenthousandmeters.com)
  12. Jensen: 'We've done our country a great disservice' by offshoring (www.barchart.com)
  13. It's hard to justify Tahoe icons (tonsky.me)
  14. Microsoft Office renamed to “Microsoft 365 Copilot app” (www.office.com)
  15. Anna's Archive loses .org domain after surprise suspension (torrentfreak.com)

GitHub Trending (10)

  1. anomalyco / opencode

    The open source coding agent.

  2. usememos / memos

    An open-source, self-hosted note-taking service. Your thoughts, your data, your control — no tracking, no ads, no subscription fees.

  3. OpenBB-finance / OpenBB

    Financial data platform for analysts, quants and AI agents.

  4. ourongxing / newsnow

    Elegant reading of real-time and hottest news

  5. virattt / ai-hedge-fund

    An AI Hedge Fund Team

  6. python / cpython

    The Python programming language

  7. microsoft / VibeVoice

    Open-Source Frontier Voice AI

  8. 3b1b / manim

    Animation engine for explanatory math videos

  9. maplibre / maplibre-gl-js

    MapLibre GL JS - Interactive vector tile maps in the browser

  10. anthropics / claude-code

    Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster by executing routine tasks, explaining complex code, and handling git workflows - all through natural language commands.

Hugging Face (13)

  1. NeoVerse: Enhancing 4D World Model with in-the-wild Monocular Videos

    In this paper, we propose NeoVerse, a versatile 4D world model that is capable of 4D reconstruction, novel-trajectory video generation, and rich downstream applications. We first identify a common limitation of scalability in current 4D world modeling methods, caused either by expensive and specialized multi-view 4D data or by cumbersome training pre-processing. In contrast, our NeoVerse is built upon a core philosophy that makes the full pipeline scalable to diverse in-the-wild monocular videos. Specifically, NeoVerse features pose-free feed-forward 4D reconstruction, online monocular degradation pattern simulation, and other well-aligned techniques. These designs empower NeoVerse with versatility and generalization to various domains. Meanwhile, NeoVerse achieves state-of-the-art performance in standard reconstruction and generation benchmarks. Our project page is available at https://neoverse-4d.github.io

  2. Youtu-Agent: Scaling Agent Productivity with Automated Generation and Hybrid Policy Optimization

    Existing Large Language Model (LLM) agent frameworks face two significant challenges: high configuration costs and static capabilities. Building a high-quality agent often requires extensive manual effort in tool integration and prompt engineering, while deployed agents struggle to adapt to dynamic environments without expensive fine-tuning. To address these issues, we propose Youtu-Agent, a modular framework designed for the automated generation and continuous evolution of LLM agents. Youtu-Agent features a structured configuration system that decouples execution environments, toolkits, and context management, enabling flexible reuse and automated synthesis. We introduce two generation paradigms: a Workflow mode for standard tasks and a Meta-Agent mode for complex, non-standard requirements, capable of automatically generating tool code, prompts, and configurations. Furthermore, Youtu-Agent establishes a hybrid policy optimization system: (1) an Agent Practice module that enables agents to accumulate experience and improve performance through in-context optimization without parameter updates; and (2) an Agent RL module that integrates with distributed training frameworks to enable scalable and stable reinforcement learning of Youtu-Agents in an end-to-end, large-scale manner. Experiments demonstrate that Youtu-Agent achieves state-of-the-art performance on WebWalkerQA (71.47%) and GAIA (72.8%) using open-weight models. Our automated generation pipeline achieves an over 81% tool synthesis success rate, while the Practice module improves performance on AIME 2024/2025 by +2.7% and +5.4% respectively. Moreover, our Agent RL training achieves a 40% speedup with steady performance improvement on 7B LLMs, enhancing coding/reasoning and searching capabilities by up to 35% and 21% on maths and general/multi-hop QA benchmarks respectively.

  3. Avatar Forcing: Real-Time Interactive Head Avatar Generation for Natural Conversation

    Talking head generation creates lifelike avatars from static portraits for virtual communication and content creation. However, current models do not yet convey the feeling of truly interactive communication, often generating one-way responses that lack emotional engagement. We identify two key challenges toward truly interactive avatars: generating motion in real-time under causal constraints and learning expressive, vibrant reactions without additional labeled data. To address these challenges, we propose Avatar Forcing, a new framework for interactive head avatar generation that models real-time user-avatar interactions through diffusion forcing. This design allows the avatar to process real-time multimodal inputs, including the user's audio and motion, with low latency for instant reactions to both verbal and non-verbal cues such as speech, nods, and laughter. Furthermore, we introduce a direct preference optimization method that leverages synthetic losing samples constructed by dropping user conditions, enabling label-free learning of expressive interaction. Experimental results demonstrate that our framework enables real-time interaction with low latency (approximately 500ms), achieving 6.8X speedup compared to the baseline, and produces reactive and expressive avatar motion, which is preferred over 80% against the baseline.

  4. SenseNova-MARS: Empowering Multimodal Agentic Reasoning and Search via Reinforcement Learning

    While Vision-Language Models (VLMs) can solve complex tasks through agentic reasoning, their capabilities remain largely constrained to text-oriented chain-of-thought or isolated tool invocation. They fail to exhibit the human-like proficiency required to seamlessly interleave dynamic tool manipulation with continuous reasoning, particularly in knowledge-intensive and visually complex scenarios that demand coordinated external tools such as search and image cropping. In this work, we introduce SenseNova-MARS, a novel Multimodal Agentic Reasoning and Search framework that empowers VLMs with interleaved visual reasoning and tool-use capabilities via reinforcement learning (RL). Specifically, SenseNova-MARS dynamically integrates the image search, text search, and image crop tools to tackle fine-grained and knowledge-intensive visual understanding challenges. In the RL stage, we propose the Batch-Normalized Group Sequence Policy Optimization (BN-GSPO) algorithm to improve the training stability and advance the model's ability to invoke tools and reason effectively. To comprehensively evaluate the agentic VLMs on complex visual tasks, we introduce the HR-MMSearch benchmark, the first search-oriented benchmark composed of high-resolution images with knowledge-intensive and search-driven questions. Experiments demonstrate that SenseNova-MARS achieves state-of-the-art performance on open-source search and fine-grained image understanding benchmarks. Specifically, on search-oriented benchmarks, SenseNova-MARS-8B scores 67.84 on MMSearch and 41.64 on HR-MMSearch, surpassing proprietary models such as Gemini-3-Flash and GPT-5. SenseNova-MARS represents a promising step toward agentic VLMs by providing effective and robust tool-use capabilities. To facilitate further research in this field, we will release all code, models, and datasets.

  5. Taming Hallucinations: Boosting MLLMs' Video Understanding via Counterfactual Video Generation

    Multimodal Large Language Models (MLLMs) have made remarkable progress in video understanding. However, they suffer from a critical vulnerability: an over-reliance on language priors, which can lead to visual ungrounded hallucinations, especially when processing counterfactual videos that defy common sense. This limitation, stemming from the intrinsic data imbalance between text and video, is challenging to address due to the substantial cost of collecting and annotating counterfactual data. To address this, we introduce DualityForge, a novel counterfactual data synthesis framework that employs controllable, diffusion-based video editing to transform real-world videos into counterfactual scenarios. By embedding structured contextual information into the video editing and QA generation processes, the framework automatically produces high-quality QA pairs together with original-edited video pairs for contrastive training. Based on this, we build DualityVidQA, a large-scale video dataset designed to reduce MLLM hallucinations. In addition, to fully exploit the contrastive nature of our paired data, we propose Duality-Normalized Advantage Training (DNA-Train), a two-stage SFT-RL training regime where the RL phase applies pair-wise ℓ1 advantage normalization, thereby enabling a more stable and efficient policy optimization. Experiments on DualityVidQA-Test demonstrate that our method substantially reduces model hallucinations on counterfactual videos, yielding a relative improvement of 24.0% over the Qwen2.5-VL-7B baseline. Moreover, our approach achieves significant gains across both hallucination and general-purpose benchmarks, indicating strong generalization capability. We will open-source our dataset and code.

  6. AdaGaR: Adaptive Gabor Representation for Dynamic Scene Reconstruction

    Reconstructing dynamic 3D scenes from monocular videos requires simultaneously capturing high-frequency appearance details and temporally continuous motion. Existing methods using single Gaussian primitives are limited by their low-pass filtering nature, while standard Gabor functions introduce energy instability. Moreover, lack of temporal continuity constraints often leads to motion artifacts during interpolation. We propose AdaGaR, a unified framework addressing both frequency adaptivity and temporal continuity in explicit dynamic scene modeling. We introduce Adaptive Gabor Representation, extending Gaussians through learnable frequency weights and adaptive energy compensation to balance detail capture and stability. For temporal continuity, we employ Cubic Hermite Splines with Temporal Curvature Regularization to ensure smooth motion evolution. An Adaptive Initialization mechanism combining depth estimation, point tracking, and foreground masks establishes stable point cloud distributions in early training. Experiments on Tap-Vid DAVIS demonstrate state-of-the-art performance (PSNR 35.49, SSIM 0.9433, LPIPS 0.0723) and strong generalization across frame interpolation, depth consistency, video editing, and stereo view synthesis. Project page: https://jiewenchan.github.io/AdaGaR/
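    The temporal-continuity piece rests on standard cubic Hermite splines. A minimal, illustrative version (not the paper's code) interpolates a quantity between keyframes; chaining segments with shared tangents is what gives C1-smooth motion:

    ```python
    def cubic_hermite(p0, p1, m0, m1, t):
        """Evaluate a cubic Hermite segment at t in [0, 1], given endpoint
        values p0, p1 and tangents m0, m1. Chaining segments with shared
        tangents yields C1-continuous (smooth) trajectories."""
        h00 = 2*t**3 - 3*t**2 + 1   # standard Hermite basis polynomials
        h10 = t**3 - 2*t**2 + t
        h01 = -2*t**3 + 3*t**2
        h11 = t**3 - t**2
        return h00*p0 + h10*m0 + h01*p1 + h11*m1

    # with zero tangents the curve eases smoothly from p0 to p1
    mid = cubic_hermite(0.0, 1.0, 0.0, 0.0, 0.5)  # -> 0.5
    ```

    The paper's Temporal Curvature Regularization would then penalize large second derivatives along such a spline, which this sketch does not include.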

  7. Nested Learning: The Illusion of Deep Learning Architectures

    Despite recent progress, particularly in developing Language Models, there are fundamental challenges and unanswered questions about how such models can continually learn/memorize, self-improve, and find effective solutions. In this paper, we present a new learning paradigm, called Nested Learning (NL), that coherently represents a machine learning model with a set of nested, multi-level, and/or parallel optimization problems, each with its own context flow. Through the lens of NL, existing deep learning methods learn from data by compressing their own context flow, and in-context learning naturally emerges in large models. NL suggests a philosophy to design more expressive learning algorithms with more levels, resulting in higher-order in-context learning and potentially unlocking effective continual learning capabilities. We advocate for NL by presenting three core contributions: (1) Expressive Optimizers: We show that known gradient-based optimizers, such as Adam, SGD with Momentum, etc., are in fact associative memory modules that aim to compress the gradients' information (by gradient descent). Building on this insight, we present other more expressive optimizers with deep memory and/or more powerful learning rules; (2) Self-Modifying Learning Module: Taking advantage of NL's insights on learning algorithms, we present a sequence model that learns how to modify itself by learning its own update algorithm; and (3) Continuum Memory System: We present a new formulation for memory systems that generalizes the traditional long/short-term memory viewpoint. Combining our self-modifying sequence model with the continuum memory system, we present a continual learning module, called Hope, showing promising results in language modeling, knowledge incorporation, few-shot generalization, continual learning, and long-context reasoning tasks.

  8. Deep Delta Learning

    The efficacy of deep residual networks is fundamentally predicated on the identity shortcut connection. While this mechanism effectively mitigates the vanishing gradient problem, it imposes a strictly additive inductive bias on feature transformations, thereby limiting the network's capacity to model complex state transitions. In this paper, we introduce Deep Delta Learning (DDL), a novel architecture that generalizes the standard residual connection by modulating the identity shortcut with a learnable, data-dependent geometric transformation. This transformation, termed the Delta Operator, constitutes a rank-1 perturbation of the identity matrix, parameterized by a reflection direction vector k(X) and a gating scalar β(X). We provide a spectral analysis of this operator, demonstrating that the gate β(X) enables dynamic interpolation between identity mapping, orthogonal projection, and geometric reflection. Furthermore, we restructure the residual update as a synchronous rank-1 injection, where the gate acts as a dynamic step size governing both the erasure of old information and the writing of new features. This unification empowers the network to explicitly control the spectrum of its layer-wise transition operator, enabling the modeling of complex, non-monotonic dynamics while preserving the stable training characteristics of gated residual architectures.
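    The claimed spectral behavior of the Delta Operator is easy to verify numerically. The sketch below (an illustration of the rank-1 form described in the abstract, not the paper's code) applies (I − β k kᵀ) to a vector and checks the identity, projection, and reflection regimes:

    ```python
    import numpy as np

    def delta_operator(x, k, beta):
        """Apply the rank-1 perturbation of the identity, (I - beta * k k^T) x,
        with k normalized to unit length. beta sweeps the operator's spectrum:
        beta = 0 -> identity, beta = 1 -> orthogonal projection (component
        along k erased), beta = 2 -> reflection across the hyperplane
        orthogonal to k."""
        k = k / np.linalg.norm(k)
        return x - beta * k * (k @ x)

    x = np.array([1.0, 2.0, 3.0])
    k = np.array([1.0, 0.0, 0.0])
    assert np.allclose(delta_operator(x, k, 0.0), [1.0, 2.0, 3.0])   # identity
    assert np.allclose(delta_operator(x, k, 1.0), [0.0, 2.0, 3.0])   # projection
    assert np.allclose(delta_operator(x, k, 2.0), [-1.0, 2.0, 3.0])  # reflection
    ```

    In the paper both k and β are data-dependent (k(X), β(X)); here they are fixed inputs purely to expose the operator's spectrum.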

  9. The Reasoning-Creativity Trade-off: Toward Creativity-Driven Problem Solving

    State-of-the-art large language model (LLM) pipelines rely on bootstrapped reasoning loops: sampling diverse chains of thought and reinforcing the highest-scoring ones, mainly optimizing correctness. We analyze how this design choice is sensitive to the collapse of the model's distribution over reasoning paths, slashing semantic entropy and undermining creative problem-solving. To analyze this failure, we introduce Distributional Creative Reasoning (DCR), a unified variational objective that casts training as gradient flow through probability measures on solution traces. STaR, GRPO, and DPO, as well as entropy bonuses, and other methods, all constitute special cases of the same loss. The framework delivers three core results: (i) the diversity decay theorem, describing how correctness-based objectives lead to distinct modes of diversity decay for STaR, GRPO, and DPO; (ii) designs that ensure convergence to a stable and diverse policy, effectively preventing collapse; and (iii) simple, actionable recipes to achieve this in practice. DCR thus offers the first principled recipe for LLMs that remain both correct and creative.

  10. Diversity or Precision? A Deep Dive into Next Token Prediction

    Recent advancements have shown that reinforcement learning (RL) can substantially improve the reasoning abilities of large language models (LLMs). The effectiveness of such RL training, however, depends critically on the exploration space defined by the pre-trained model's token-output distribution. In this paper, we revisit the standard cross-entropy loss, interpreting it as a specific instance of policy gradient optimization applied within a single-step episode. To systematically study how the pre-trained distribution shapes the exploration potential for subsequent RL, we propose a generalized pre-training objective that adapts on-policy RL principles to supervised learning. By framing next-token prediction as a stochastic decision process, we introduce a reward-shaping strategy that explicitly balances diversity and precision. Our method employs a positive reward scaling factor to control probability concentration on ground-truth tokens and a rank-aware mechanism that treats high-ranking and low-ranking negative tokens asymmetrically. This allows us to reshape the pre-trained token-output distribution and investigate how to provide a more favorable exploration space for RL, ultimately enhancing end-to-end reasoning performance. Contrary to the intuition that higher distribution entropy facilitates effective exploration, we find that imposing a precision-oriented prior yields a superior exploration space for RL.
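    A toy version of such a reward-shaped next-token objective might look like the following. All parameter names and values here (pos_scale, top/tail penalties, the top-k cutoff) are illustrative assumptions, not the paper's:

    ```python
    import numpy as np

    def shaped_nll(logits, target, pos_scale=2.0, top_penalty=0.5,
                   tail_penalty=0.05, k=5):
        """Hypothetical reward-shaped next-token loss (a sketch of the idea,
        not the paper's objective). pos_scale concentrates probability on the
        ground-truth token; high-ranking negative tokens are penalized more
        than tail ones (the rank-aware asymmetry)."""
        p = np.exp(logits - logits.max())
        p = p / p.sum()
        loss = pos_scale * -np.log(p[target])   # scaled positive reward
        order = np.argsort(-p)                  # negatives ranked by probability
        negs = [i for i in order if i != target]
        for rank, tok in enumerate(negs):
            w = top_penalty if rank < k else tail_penalty
            loss += w * p[tok]                  # discourage mass on negatives
        return loss

    logits = np.array([2.0, 1.0, 0.0, -1.0])
    l_good = shaped_nll(logits, target=0)   # ground truth already top-ranked
    l_bad = shaped_nll(logits, target=3)    # ground truth buried in the tail
    ```

    Tuning pos_scale up sharpens the distribution around ground-truth tokens, which is the "precision-oriented prior" the abstract argues yields a better RL exploration space.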

  11. InfoSynth: Information-Guided Benchmark Synthesis for LLMs

    Large language models (LLMs) have demonstrated significant advancements in reasoning and code generation. However, efficiently creating new benchmarks to evaluate these capabilities remains a challenge. Traditional benchmark creation relies on manual human effort, a process that is both expensive and time-consuming. Furthermore, existing benchmarks often contaminate LLM training data, necessitating novel and diverse benchmarks to accurately assess their genuine capabilities. This work introduces InfoSynth, a novel framework for automatically generating and evaluating reasoning benchmarks guided by information-theoretic principles. We propose metrics based on KL-divergence and entropy to quantify benchmark novelty and diversity without relying on costly model evaluations. Building on this framework, we develop an end-to-end pipeline that synthesizes robust Python coding problems from seed datasets using genetic algorithms and iterative code feedback. Our method generates accurate test cases and solutions to new problems 97% of the time, and the synthesized benchmarks consistently exhibit higher novelty and diversity compared to their seed datasets. Moreover, our algorithm provides a method for controlling the novelty/diversity and difficulty of generated problems. InfoSynth offers a scalable, self-verifying pipeline for constructing high-quality, novel and diverse benchmarks for LLMs. Project Page: https://ishirgarg.github.io/infosynth_web/
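    The paper's exact metrics are not given here, but KL-divergence and entropy over, say, problem-feature frequencies can indeed be computed without any model calls. A generic sketch, with the feature choice left as an assumption:

    ```python
    import math
    from collections import Counter

    def entropy(labels):
        """Shannon entropy (bits) of a label distribution -- a simple proxy
        for the diversity of a benchmark's problems."""
        n = len(labels)
        return -sum(c/n * math.log2(c/n) for c in Counter(labels).values())

    def kl_divergence(p_counts, q_counts, eps=1e-9):
        """KL(P || Q) over a shared vocabulary of features -- a simple proxy
        for the novelty of a synthesized benchmark P relative to its seed
        dataset Q. eps smooths unseen features."""
        vocab = set(p_counts) | set(q_counts)
        n_p, n_q = sum(p_counts.values()), sum(q_counts.values())
        kl = 0.0
        for w in vocab:
            p = p_counts.get(w, 0) / n_p + eps
            q = q_counts.get(w, 0) / n_q + eps
            kl += p * math.log2(p / q)
        return kl
    ```

    A synthesized set whose feature distribution matches its seed scores a KL near zero (low novelty); a uniform spread over many features maximizes entropy (high diversity).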

  12. MorphAny3D: Unleashing the Power of Structured Latent in 3D Morphing

    3D morphing remains challenging due to the difficulty of generating semantically consistent and temporally smooth deformations, especially across categories. We present MorphAny3D, a training-free framework that leverages Structured Latent (SLAT) representations for high-quality 3D morphing. Our key insight is that intelligently blending source and target SLAT features within the attention mechanisms of 3D generators naturally produces plausible morphing sequences. To this end, we introduce Morphing Cross-Attention (MCA), which fuses source and target information for structural coherence, and Temporal-Fused Self-Attention (TFSA), which enhances temporal consistency by incorporating features from preceding frames. An orientation correction strategy further mitigates the pose ambiguity within the morphing steps. Extensive experiments show that our method generates state-of-the-art morphing sequences, even for challenging cross-category cases. MorphAny3D further supports advanced applications such as decoupled morphing and 3D style transfer, and can be generalized to other SLAT-based generative models. Project page: https://xiaokunsun.github.io/MorphAny3D.github.io/.

  13. Fast-weight Product Key Memory

    Sequence modeling layers in modern language models typically face a trade-off between storage capacity and computational efficiency. While Softmax attention offers unbounded storage at prohibitive quadratic costs, linear variants provide efficiency but suffer from limited, fixed-size storage. We propose Fast-weight Product Key Memory (FwPKM), a novel architecture that resolves this tension by transforming the sparse Product Key Memory (PKM) from a static module into a dynamic, "fast-weight" episodic memory. Unlike PKM, FwPKM updates its parameters dynamically at both training and inference time via local chunk-level gradient descent, allowing the model to rapidly memorize and retrieve new key-value pairs from input sequences. Experiments reveal that FwPKM functions as an effective episodic memory that complements the semantic memory of standard modules, yielding significant perplexity reductions on long-context datasets. Notably, in Needle in a Haystack evaluations, FwPKM generalizes to 128K-token contexts despite being trained on only 4K-token sequences.
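    The core mechanism, updating a memory's weights by local gradient descent on the current chunk, can be sketched with a plain linear key-value memory. This illustrates the general fast-weight idea only, not FwPKM's sparse product-key structure:

    ```python
    import numpy as np

    def fast_weight_update(W, keys, values, lr=0.1):
        """One chunk-level gradient step on a linear key-value memory (sketch).
        Descends on ||keys @ W - values||^2 over the chunk, so the memory
        rapidly absorbs new key-value pairs at training and inference time."""
        err = keys @ W - values
        grad = keys.T @ err / len(keys)
        return W - lr * grad

    rng = np.random.default_rng(0)
    keys = rng.standard_normal((8, 4))      # a chunk of new pairs to memorize
    values = rng.standard_normal((8, 4))
    W = np.zeros((4, 4))
    before = np.linalg.norm(keys @ W - values)
    for _ in range(100):                    # repeated local steps on the chunk
        W = fast_weight_update(W, keys, values)
    after = np.linalg.norm(keys @ W - values)
    # retrieval error shrinks as the chunk is written into the fast weights
    ```

    FwPKM applies this kind of local update to a sparse PKM lookup table rather than a dense matrix, which is what keeps the per-step cost sub-quadratic.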

Solidot (15)

  1. Reddit overtakes TikTok as the UK's fourth most-visited social media platform

    Reddit has overtaken TikTok to become the fourth most-visited social media platform in the UK. British users are Reddit's second-largest audience after US users, and their number has grown 88% over the past two years. Ofcom data shows that two-thirds of UK internet users now visit Reddit, up from one-third in 2023. Reddit is especially popular among young Britons: among UK users aged 18-24 it is the sixth most-visited website, up from tenth a year earlier. Factors behind its rise include Google's algorithm changes favoring forum-style content, which is exactly what Reddit is. In the AI era, users are also increasingly turning to human-written content, a trend Reddit benefits from. More than half of Reddit's UK users are women.

  2. Tests show Windows 11 is the slowest of six Windows versions

    Windows 11 is the only Windows version Microsoft currently supports, but its higher hardware requirements, system bloat, and AI features have earned it a poor reputation among users. A test installed the latest releases of Windows XP, Windows Vista, Windows 7, Windows 8.1, Windows 10, and Windows 11 on six old ThinkPad X220 laptops, each with an Intel Core i5-2520M CPU, 8GB of RAM, and a 256GB drive. The results: Windows 11 booted slowest; its install footprint was 37.3GB, slightly below Windows Vista's 37.8GB and Windows 7's 44.6GB; it used the most memory, 3.3GB and up to 3.7GB; and it stuttered more readily on the old hardware.

  3. Japan to use AI to accelerate manga translation

    Japanese manga is popular overseas, but many readers consume pirated copies. ABJ, an industry association of publishers and others, surveyed about 900 manga-focused pirate sites hosting Japanese publications and found that in June 2025 alone these sites drew a combined 2.8 billion visits from readers in 123 countries and regions, with cumulative reads of 1.4 billion volumes. Annual losses are estimated at 8.5 trillion yen. Japan hopes AI can accelerate manga translation so that legitimate works circulate overseas in multiple languages, keeping readers from drifting to pirate sites. At present only about one-tenth of the manga published in Japan each year is translated into English.

  4. China's broadcast regulator cracks down on AI "remix" videos

    The National Radio and Television Administration announced a month-long special campaign against "AI mogai" (AI remix) videos. The regulator said: "With the rapid development of generative AI, some online accounts abuse AI tools to subversively alter, bizarrely deconstruct, and vulgarly adapt classic films, TV series, and animation. Such content seriously departs from the spirit of the original works, disrupts the order of online dissemination, encourages infringement, harms the industry's development, and interferes with minors' formation of a correct understanding of culture and reality." The campaign focuses on removing "AI-remixed" videos based on TV adaptations of the Four Great Classical Novels and on historical, revolutionary, and heroic-figure subjects, in three categories: (1) videos that seriously violate the spirit and character images of the originals, overturning basic understanding and deconstructing common consensus; (2) content that depicts gore and violence or lurid vulgarity, promotes wrong values, or offends public order and morals; (3) content that appropriates or tampers with Chinese culture, producing clearly distorted perceptions of real historical settings and the symbols of Chinese civilization and undermining cultural identity. The campaign will also remove "cult" animations generated by altering cartoon characters that children know and love.

  5. BYD's EV sales jump to world's largest

    BYD's 2025 electric-vehicle sales surpassed Tesla's, making it the world's largest EV seller. BYD's electric passenger-car sales grew 28% year over year in 2025 to 2.25 million units, while Tesla's sales are expected to fall 8% to 1.64 million. Including plug-in hybrids and other models, BYD's overall 2025 new-car sales grew 8% to 4.6 million units. BYD's sales growth in China has slowed, with September sales below the same month a year earlier. At a shareholders' meeting in December 2025, chairman Wang Chuanfu said of the domestic decline that BYD's technological lead is smaller than in previous years, the market impact of its technical achievements has faded, and the industry has grown increasingly homogenized, changes he called consistent with the cyclical nature of product and technology development.

  6. Volkswagen's EVs bring back physical buttons

    Volkswagen has revealed the interior of its upcoming electric ID. Polo, showing a full return of physical controls: not just buttons and switches but even a knob for audio. VW chief designer Andreas Mindt said last year that physical buttons for important functions would return to its cars. The ID. Polo has physical buttons below the display and clear, easy-to-use buttons on the steering wheel. The model goes on sale in Europe this year.

  7. Expensive food linked to childhood obesity

    According to a study published in Global Food Security, food price increases during economic crises are associated with childhood obesity, because most families opt for cheap, calorie-dense foods over nutritious ones. The researchers examined the impact of the late-1990s Asian financial crisis on Indonesian children and found that the crisis-driven rise in food prices worsened chronic child malnutrition, raising the stunting rate by 3.5 percentage points. Severely affected children not only grew up shorter than unaffected peers but were also more prone to obesity, because cheap high-calorie foods lack important micronutrients. Urban populations and people with less education were hit hardest by the price increases. The researchers note that nutritional deprivation in early childhood can have lifelong effects, and that the same economic crisis can worsen both malnutrition and obesity.

  8. GLP-1 weight-loss drugs change more than body weight

    According to a study published in PLOS Global Public Health, GLP-1 weight-loss drugs such as Ozempic and Zepbound change more than body weight: they act as a powerful social and psychological intervention that reshapes daily life. The researchers found that the vast majority of users learn about GLP-1 drugs through online platforms such as TikTok, Reddit, and Instagram, which can offer useful health advice but can also spread misinformation. Many users experience severe side effects such as nausea, vomiting, dizziness, fatigue, and headaches, yet would rather take sick leave or skip social events than stop the medication. People considered medically healthy also flock to the drugs, and even countries with little obesity, such as Japan, have embraced them, driven, it is believed, by weight anxiety. Many users do not follow prescribed dosing, raising safety concerns.

  9. Cheap solar power is changing life in Africa

    Cheap solar power is transforming life in Africa, where blackouts are frequent. In South Africa, for example, solar has risen from nearly zero in 2019 to about 10% of electricity generation, most of it privately owned. Over the past decade the US has ramped up fossil-fuel exports while China has focused on dominating renewables. Today most of the world's solar panels, electric vehicles, and batteries are made by Chinese companies, so much so that they are cutting prices sharply and scrambling for buyers. According to an analysis of Chinese export data for the first ten months of 2025 by the UK energy tracker Ember, Africa's solar imports from China rose 50%.

  10. Microsoft ends phone activation for Windows 10/11

    Windows users report on official forums and social media that Microsoft has ended phone activation for Windows 10/11, meaning the operating system can now only be activated online. Previously Windows 10/11 supported offline activation by phone via Start > Settings > System > Activation, where a phone option was available in the activation menu. Users report that when attempting phone activation they now hear an automated message saying the feature has been discontinued and that "product activation support has moved online."

  11. SpaceX to lower the orbits of four thousand Starlink satellites in 2026

    Michael Nicolls, SpaceX's vice president of Starlink, announced that to improve space safety the company will lower the orbits of about 4,400 Starlink satellites from 550 km to 480 km over the course of 2026. A lower orbit lets a failed satellite deorbit and re-enter the atmosphere quickly, avoiding the added collision risk and debris a malfunctioning satellite would otherwise create. Low Earth orbit has grown increasingly crowded in recent years: SpaceX has launched more than ten thousand Starlink satellites, other broadband constellations are accelerating their launches, and the congestion has raised fears of Kessler Syndrome. Kessler Syndrome, or collision cascading, is a hypothesis proposed by American scientist Donald J. Kessler in 1978: once the density of objects in low Earth orbit passes a threshold, debris from collisions triggers further collisions in a cascade. If it ever occurred, SpaceX, as the largest satellite broadband operator, would clearly suffer the most.

  12. Titan may not have a global subsurface ocean

    The dense atmosphere and methane lakes of Titan, Saturn's largest moon, have long fascinated scientists and fueled speculation that it could sustain life. According to a study published in Nature, NASA JPL re-analyzed Titan data collected by the Cassini probe and concluded that a global subsurface ocean does not exist. The researchers believe liquid inside Titan instead takes the form of localized pockets of meltwater. Heated by tidal energy, these pockets slowly rise through the ice toward the surface, potentially carrying organic molecules up from below and mixing them with material delivered to the surface by meteorite impacts. The researchers stress this does not rule out the possibility of basic life forms. The analysis suggests Titan should have regions of liquid water, possibly as warm as 20°C, capable of transporting nutrients from the rocky core through the high-pressure ice layer to the solid ice crust at the surface.

  13. Russian module on the ISS stops leaking air

    After nearly five years, the Russian segment of the International Space Station has finally stopped leaking air. The leak was in the PrK module between the Progress airlock and the Zvezda service module, caused by tiny structural cracks. The problem raised serious concern in 2024 when the leak rate doubled. For the past five years Russian cosmonauts have hunted for the tiny leak points: they periodically closed the PrK hatch and reopened it, locating leaks by traces of accumulated dust, then applied a sealant called Germetall-1 to the cracks. They would then close the hatch again, monitor PrK's internal pressure, and resume searching for other leaks.

  14. Poor sleep quality linked to accelerated brain aging

    According to a large nine-year study that tracked more than 27,500 middle-aged and older adults, poor sleep quality is associated with accelerated brain aging, and one possible underlying cause is chronic inflammation. The researchers scored participants' sleep on five dimensions, including chronotype and duration, insomnia, snoring, and daytime sleepiness, then scanned their brains with MRI and used a machine-learning model to estimate biological brain age. Each one-point drop in the healthy-sleep score widened the gap between brain age and chronological age by about six months; the worst sleepers had brains biologically about one year older than their actual age. Using biomarkers such as C-reactive protein levels and white-blood-cell counts to measure low-grade inflammation, the researchers found that inflammation accounted for more than 10% of the association between poor sleep patterns and brain aging.

  15. Asus announces price increases on some products from January 5

    Asus has notified its partners that it will adjust, that is, raise, prices on some products from January 5, citing rising costs of core components such as memory and storage amid structural volatility in the global supply chain. That volatility stems from the global AI frenzy. With memory and drive prices surging over the past two months, IDC and other analysts already forecast that the global PC market may shrink in 2026, with shipments declining.