WEEK · 2026-W02

Weekly Digest — 2026-W02

140 unique stories (2026-01-05 to 2026-01-11), aggregated across 8 sources.

Hacker News (42)

  1. Google broke my heart (perishablepress.com)
  2. There were BGP anomalies during the Venezuela blackout (loworbitsecurity.com)
  3. Pebble Round 2 (repebble.com)
  4. Try to take my position: The best promotion advice I ever got (andrew.grahamyooll.com)
  5. X blames users for Grok-generated CSAM; no fixes announced (arstechnica.com)
  6. Show HN: Tailsnitch – A security auditor for Tailscale (github.com)
  7. Stop Doom Scrolling, Start Doom Coding: Build via the terminal from your phone (github.com)
  8. Spherical Snake (kevinalbs.com)
  9. Video Game Websites in the early 00s (www.webdesignmuseum.org)
  10. Opus 4.5 is not the normal AI agent experience that I have had thus far (burkeholland.github.io)
  11. Volkswagen Brings Back Physical Buttons (www.caranddriver.com)
  12. Why is the Gmail app 700 MB? (akr.am)

GitHub Trending (28)

  1. anomalyco / opencode

    The open source coding agent.

  2. usememos / memos

    An open-source, self-hosted note-taking service. Your thoughts, your data, your control — no tracking, no ads, no subscription fees.

  3. OpenBB-finance / OpenBB

    Financial data platform for analysts, quants and AI agents.

  4. ourongxing / newsnow

    Elegant reading of real-time and hottest news

  5. virattt / ai-hedge-fund

    An AI Hedge Fund Team

  6. python / cpython

    The Python programming language

  7. protocolbuffers / protobuf

    Protocol Buffers - Google's data interchange format

  8. Lissy93 / web-check

    🕵️‍♂️ All-in-one OSINT tool for analysing any website

  9. microsoft / PowerToys

    Microsoft PowerToys is a collection of utilities that help you customize Windows and streamline everyday tasks

  10. anthropics / claude-code-action
  11. microsoft / BitNet

    Official inference framework for 1-bit LLMs

  12. marcelscruz / public-apis

    A collaborative list of public APIs for developers

Hugging Face (32)

  1. NeoVerse: Enhancing 4D World Model with in-the-wild Monocular Videos

    In this paper, we propose NeoVerse, a versatile 4D world model that is capable of 4D reconstruction, novel-trajectory video generation, and rich downstream applications. We first identify a common limitation of scalability in current 4D world modeling methods, caused either by expensive and specialized multi-view 4D data or by cumbersome training pre-processing. In contrast, our NeoVerse is built upon a core philosophy that makes the full pipeline scalable to diverse in-the-wild monocular videos. Specifically, NeoVerse features pose-free feed-forward 4D reconstruction, online monocular degradation pattern simulation, and other well-aligned techniques. These designs empower NeoVerse with versatility and generalization to various domains. Meanwhile, NeoVerse achieves state-of-the-art performance in standard reconstruction and generation benchmarks. Our project page is available at https://neoverse-4d.github.io

  2. Youtu-Agent: Scaling Agent Productivity with Automated Generation and Hybrid Policy Optimization

    Existing Large Language Model (LLM) agent frameworks face two significant challenges: high configuration costs and static capabilities. Building a high-quality agent often requires extensive manual effort in tool integration and prompt engineering, while deployed agents struggle to adapt to dynamic environments without expensive fine-tuning. To address these issues, we propose Youtu-Agent, a modular framework designed for the automated generation and continuous evolution of LLM agents. Youtu-Agent features a structured configuration system that decouples execution environments, toolkits, and context management, enabling flexible reuse and automated synthesis. We introduce two generation paradigms: a Workflow mode for standard tasks and a Meta-Agent mode for complex, non-standard requirements, capable of automatically generating tool code, prompts, and configurations. Furthermore, Youtu-Agent establishes a hybrid policy optimization system: (1) an Agent Practice module that enables agents to accumulate experience and improve performance through in-context optimization without parameter updates; and (2) an Agent RL module that integrates with distributed training frameworks to enable scalable and stable reinforcement learning of any Youtu-Agent in an end-to-end, large-scale manner. Experiments demonstrate that Youtu-Agent achieves state-of-the-art performance on WebWalkerQA (71.47%) and GAIA (72.8%) using open-weight models. Our automated generation pipeline achieves an over 81% tool-synthesis success rate, while the Practice module improves performance on AIME 2024/2025 by +2.7% and +5.4% respectively. Moreover, our Agent RL training achieves a 40% speedup with steady performance improvement on 7B LLMs, enhancing coding/reasoning and search capabilities by up to 35% and 21% respectively on math and general/multi-hop QA benchmarks.

  3. Avatar Forcing: Real-Time Interactive Head Avatar Generation for Natural Conversation

    Talking head generation creates lifelike avatars from static portraits for virtual communication and content creation. However, current models do not yet convey the feeling of truly interactive communication, often generating one-way responses that lack emotional engagement. We identify two key challenges toward truly interactive avatars: generating motion in real-time under causal constraints and learning expressive, vibrant reactions without additional labeled data. To address these challenges, we propose Avatar Forcing, a new framework for interactive head avatar generation that models real-time user-avatar interactions through diffusion forcing. This design allows the avatar to process real-time multimodal inputs, including the user's audio and motion, with low latency for instant reactions to both verbal and non-verbal cues such as speech, nods, and laughter. Furthermore, we introduce a direct preference optimization method that leverages synthetic losing samples constructed by dropping user conditions, enabling label-free learning of expressive interaction. Experimental results demonstrate that our framework enables real-time interaction with low latency (approximately 500 ms), achieving a 6.8x speedup over the baseline, and produces reactive and expressive avatar motion that is preferred over the baseline in more than 80% of comparisons.

  4. SenseNova-MARS: Empowering Multimodal Agentic Reasoning and Search via Reinforcement Learning

    While Vision-Language Models (VLMs) can solve complex tasks through agentic reasoning, their capabilities remain largely constrained to text-oriented chain-of-thought or isolated tool invocation. They fail to exhibit the human-like proficiency required to seamlessly interleave dynamic tool manipulation with continuous reasoning, particularly in knowledge-intensive and visually complex scenarios that demand coordinated external tools such as search and image cropping. In this work, we introduce SenseNova-MARS, a novel Multimodal Agentic Reasoning and Search framework that empowers VLMs with interleaved visual reasoning and tool-use capabilities via reinforcement learning (RL). Specifically, SenseNova-MARS dynamically integrates the image search, text search, and image crop tools to tackle fine-grained and knowledge-intensive visual understanding challenges. In the RL stage, we propose the Batch-Normalized Group Sequence Policy Optimization (BN-GSPO) algorithm to improve the training stability and advance the model's ability to invoke tools and reason effectively. To comprehensively evaluate the agentic VLMs on complex visual tasks, we introduce the HR-MMSearch benchmark, the first search-oriented benchmark composed of high-resolution images with knowledge-intensive and search-driven questions. Experiments demonstrate that SenseNova-MARS achieves state-of-the-art performance on open-source search and fine-grained image understanding benchmarks. Specifically, on search-oriented benchmarks, SenseNova-MARS-8B scores 67.84 on MMSearch and 41.64 on HR-MMSearch, surpassing proprietary models such as Gemini-3-Flash and GPT-5. SenseNova-MARS represents a promising step toward agentic VLMs by providing effective and robust tool-use capabilities. To facilitate further research in this field, we will release all code, models, and datasets.

  5. Taming Hallucinations: Boosting MLLMs' Video Understanding via Counterfactual Video Generation

    Multimodal Large Language Models (MLLMs) have made remarkable progress in video understanding. However, they suffer from a critical vulnerability: an over-reliance on language priors, which can lead to visual ungrounded hallucinations, especially when processing counterfactual videos that defy common sense. This limitation, stemming from the intrinsic data imbalance between text and video, is challenging to address due to the substantial cost of collecting and annotating counterfactual data. To address this, we introduce DualityForge, a novel counterfactual data synthesis framework that employs controllable, diffusion-based video editing to transform real-world videos into counterfactual scenarios. By embedding structured contextual information into the video editing and QA generation processes, the framework automatically produces high-quality QA pairs together with original-edited video pairs for contrastive training. Based on this, we build DualityVidQA, a large-scale video dataset designed to reduce MLLM hallucinations. In addition, to fully exploit the contrastive nature of our paired data, we propose Duality-Normalized Advantage Training (DNA-Train), a two-stage SFT-RL training regime where the RL phase applies pair-wise ℓ1 advantage normalization, thereby enabling a more stable and efficient policy optimization. Experiments on DualityVidQA-Test demonstrate that our method substantially reduces model hallucinations on counterfactual videos, yielding a relative improvement of 24.0% over the Qwen2.5-VL-7B baseline. Moreover, our approach achieves significant gains across both hallucination and general-purpose benchmarks, indicating strong generalization capability. We will open-source our dataset and code.
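
    One plausible reading of the "pair-wise ℓ1 advantage normalization" in DNA-Train's RL phase is to rescale each original/counterfactual pair's advantages to unit ℓ1 norm, so every pair contributes comparably to the policy update. A minimal sketch under that assumption (the paper's exact formulation may differ):

```python
def pairwise_l1_normalize(adv_orig, adv_edit, eps=1e-8):
    """Rescale the advantages of one original/counterfactual video pair
    to unit l1 norm, so each pair carries equal weight in the update.
    This is an assumed interpretation, not the paper's verified formula."""
    norm = abs(adv_orig) + abs(adv_edit) + eps
    return adv_orig / norm, adv_edit / norm

a, b = pairwise_l1_normalize(0.6, -0.2)
print(round(a, 3), round(b, 3))  # -> 0.75 -0.25
```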

  6. AdaGaR: Adaptive Gabor Representation for Dynamic Scene Reconstruction

    Reconstructing dynamic 3D scenes from monocular videos requires simultaneously capturing high-frequency appearance details and temporally continuous motion. Existing methods using single Gaussian primitives are limited by their low-pass filtering nature, while standard Gabor functions introduce energy instability. Moreover, lack of temporal continuity constraints often leads to motion artifacts during interpolation. We propose AdaGaR, a unified framework addressing both frequency adaptivity and temporal continuity in explicit dynamic scene modeling. We introduce Adaptive Gabor Representation, extending Gaussians through learnable frequency weights and adaptive energy compensation to balance detail capture and stability. For temporal continuity, we employ Cubic Hermite Splines with Temporal Curvature Regularization to ensure smooth motion evolution. An Adaptive Initialization mechanism combining depth estimation, point tracking, and foreground masks establishes stable point cloud distributions in early training. Experiments on Tap-Vid DAVIS demonstrate state-of-the-art performance (PSNR 35.49, SSIM 0.9433, LPIPS 0.0723) and strong generalization across frame interpolation, depth consistency, video editing, and stereo view synthesis. Project page: https://jiewenchan.github.io/AdaGaR/

  7. NextFlow: Unified Sequential Modeling Activates Multimodal Understanding and Generation

    We present NextFlow, a unified decoder-only autoregressive transformer trained on 6 trillion interleaved text-image discrete tokens. By leveraging a unified vision representation within a unified autoregressive architecture, NextFlow natively activates multimodal understanding and generation capabilities, unlocking abilities of image editing, interleaved content and video generation. Motivated by the distinct nature of modalities - where text is strictly sequential and images are inherently hierarchical - we retain next-token prediction for text but adopt next-scale prediction for visual generation. This departs from traditional raster-scan methods, enabling the generation of 1024x1024 images in just 5 seconds - orders of magnitude faster than comparable AR models. We address the instabilities of multi-scale generation through a robust training recipe. Furthermore, we introduce a prefix-tuning strategy for reinforcement learning. Experiments demonstrate that NextFlow achieves state-of-the-art performance among unified models and rivals specialized diffusion baselines in visual quality.

  8. Can LLMs Predict Their Own Failures? Self-Awareness via Internal Circuits

    Large language models (LLMs) generate fluent and complex outputs but often fail to recognize their own mistakes and hallucinations. Existing approaches typically rely on external judges, multi-sample consistency, or text-based self-critique, which incur additional compute or correlate weakly with true correctness. We ask: can LLMs predict their own failures by inspecting internal states during inference? We introduce Gnosis, a lightweight self-awareness mechanism that enables frozen LLMs to perform intrinsic self-verification by decoding signals from hidden states and attention patterns. Gnosis passively observes internal traces, compresses them into fixed-budget descriptors, and predicts correctness with negligible inference cost, adding only ~5M parameters and operating independently of sequence length. Across math reasoning, open-domain question answering, and academic knowledge benchmarks, and over frozen backbones ranging from 1.7B to 20B parameters, Gnosis consistently outperforms strong internal baselines and large external judges in both accuracy and calibration. Moreover, it generalizes zero-shot to partial generations, enabling early detection of failing trajectories and compute-aware control. These results show that reliable correctness cues are intrinsic to the generation process and can be extracted efficiently without external supervision.
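
    The core idea — decode a correctness signal from compressed internal descriptors with a tiny probe — can be illustrated with a toy logistic-regression probe on synthetic vectors. Everything below is a stand-in: the real Gnosis operates on a frozen LLM's hidden states and attention patterns, not on Gaussian clusters:

```python
import math, random

def train_probe(features, labels, lr=0.1, epochs=200):
    """Train a tiny logistic-regression probe mapping a hidden-state
    descriptor vector to P(answer is correct), via plain SGD."""
    dim = len(features[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            g = 1.0 / (1.0 + math.exp(-z)) - y  # gradient of log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic "descriptors": correct answers cluster near +1, wrong near -1.
random.seed(0)
feats = [[random.gauss(1, 0.3), random.gauss(1, 0.3)] for _ in range(50)] + \
        [[random.gauss(-1, 0.3), random.gauss(-1, 0.3)] for _ in range(50)]
labels = [1] * 50 + [0] * 50
w, b = train_probe(feats, labels)
print(predict(w, b, [1.0, 1.0]) > 0.9, predict(w, b, [-1.0, -1.0]) < 0.1)
```

    The probe adds only `dim + 1` parameters and never touches the backbone, mirroring the "frozen LLM, ~5M extra parameters" framing of the abstract at toy scale.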

  9. K-EXAONE Technical Report

    This technical report presents K-EXAONE, a large-scale multilingual language model developed by LG AI Research. K-EXAONE is built on a Mixture-of-Experts architecture with 236B total parameters, activating 23B parameters during inference. It supports a 256K-token context window and covers six languages: Korean, English, Spanish, German, Japanese, and Vietnamese. We evaluate K-EXAONE on a comprehensive benchmark suite spanning reasoning, agentic, general, Korean, and multilingual abilities. Across these evaluations, K-EXAONE demonstrates performance comparable to open-weight models of similar size. K-EXAONE, designed to advance AI for a better life, is positioned as a powerful proprietary AI foundation model for a wide range of industrial and research applications.

  10. DreamID-V: Bridging the Image-to-Video Gap for High-Fidelity Face Swapping via Diffusion Transformer

    Video Face Swapping (VFS) requires seamlessly injecting a source identity into a target video while meticulously preserving the original pose, expression, lighting, background, and dynamic information. Existing methods struggle to maintain identity similarity and attribute preservation while preserving temporal consistency. To address this challenge, we propose a comprehensive framework to seamlessly transfer the strengths of Image Face Swapping (IFS) to the video domain. We first introduce a novel data pipeline SyncID-Pipe that pre-trains an Identity-Anchored Video Synthesizer and combines it with IFS models to construct bidirectional ID quadruplets for explicit supervision. Building upon paired data, we propose the first Diffusion Transformer-based framework DreamID-V, employing a core Modality-Aware Conditioning module to discriminatively inject multi-modal conditions. Meanwhile, we propose a Synthetic-to-Real Curriculum mechanism and an Identity-Coherence Reinforcement Learning strategy to enhance visual realism and identity consistency under challenging scenarios. To address the issue of limited benchmarks, we introduce IDBench-V, a comprehensive benchmark encompassing diverse scenes. Extensive experiments demonstrate DreamID-V outperforms state-of-the-art methods and further exhibits exceptional versatility, which can be seamlessly adapted to various swap-related tasks.

  11. VAR RL Done Right: Tackling Asynchronous Policy Conflicts in Visual Autoregressive Generation

    Visual generation is dominated by three paradigms: AutoRegressive (AR), diffusion, and Visual AutoRegressive (VAR) models. Unlike AR and diffusion, VARs operate on heterogeneous input structures across their generation steps, which creates severe asynchronous policy conflicts. This issue becomes particularly acute in reinforcement learning (RL) scenarios, leading to unstable training and suboptimal alignment. To resolve this, we propose a novel framework to enhance Group Relative Policy Optimization (GRPO) by explicitly managing these conflicts. Our method integrates three synergistic components: 1) a stabilizing intermediate reward to guide early-stage generation; 2) a dynamic time-step reweighting scheme for precise credit assignment; and 3) a novel mask propagation algorithm, derived from principles of Reward Feedback Learning (ReFL), designed to isolate optimization effects both spatially and temporally. Our approach demonstrates significant improvements in sample quality and objective alignment over the vanilla GRPO baseline, enabling robust and effective optimization for VAR models.
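
    The vanilla GRPO baseline this paper builds on computes group-relative advantages by normalizing each rollout's reward against its own group. A minimal sketch of that standard step (the paper's three additions — intermediate rewards, time-step reweighting, and mask propagation — are not shown here):

```python
import statistics

def group_relative_advantages(rewards, eps=1e-8):
    """Standard GRPO advantage: normalize each rollout's reward by the
    mean and population std of its own group of rollouts."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Four rollouts sampled from the same prompt, scored by a reward model.
advs = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
print([round(a, 3) for a in advs])  # -> [1.414, -1.414, 0.0, 0.0]
```

    Because advantages are relative within a group, no learned value function is needed; the conflicts the abstract describes arise when these per-group signals must be credited across VAR's heterogeneous generation scales.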

  12. GARDO: Reinforcing Diffusion Models without Reward Hacking

    Fine-tuning diffusion models via online reinforcement learning (RL) has shown great potential for enhancing text-to-image alignment. However, since precisely specifying a ground-truth objective for visual tasks remains challenging, the models are often optimized using a proxy reward that only partially captures the true goal. This mismatch often leads to reward hacking, where proxy scores increase while real image quality deteriorates and generation diversity collapses. While common solutions add regularization against the reference policy to prevent reward hacking, they compromise sample efficiency and impede the exploration of novel, high-reward regions, as the reference policy is usually sub-optimal. To address the competing demands of sample efficiency, effective exploration, and mitigation of reward hacking, we propose Gated and Adaptive Regularization with Diversity-aware Optimization (GARDO), a versatile framework compatible with various RL algorithms. Our key insight is that regularization need not be applied universally; instead, it is highly effective to selectively penalize a subset of samples that exhibit high uncertainty. To address the exploration challenge, GARDO introduces an adaptive regularization mechanism wherein the reference model is periodically updated to match the capabilities of the online policy, ensuring a relevant regularization target. To address the mode collapse issue in RL, GARDO amplifies the rewards for high-quality samples that also exhibit high diversity, encouraging mode coverage without destabilizing the optimization process. Extensive experiments across diverse proxy rewards and hold-out unseen metrics consistently show that GARDO mitigates reward hacking and enhances generation diversity without sacrificing sample efficiency or exploration, highlighting its effectiveness and robustness.
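
    GARDO's two mechanisms — regularization gated on sample uncertainty, and reward amplification for diverse high-quality samples — might be shaped roughly as below. All thresholds, signal names, and the additive form are assumptions for illustration, not the paper's actual formulation:

```python
def gardo_shaped_rewards(rewards, uncertainties, diversities,
                         kl_penalties, unc_gate=0.7, div_bonus=0.5):
    """Toy GARDO-style reward shaping (an assumed reading of the abstract):
    KL regularization is applied only to samples whose uncertainty exceeds
    a gate, and high-reward, high-diversity samples get an amplified
    reward to encourage mode coverage."""
    shaped = []
    for r, u, d, kl in zip(rewards, uncertainties, diversities, kl_penalties):
        out = r
        if u > unc_gate:       # gated, selective regularization
            out -= kl
        if r > 0 and d > 0.5:  # diversity-aware amplification
            out += div_bonus * d
        shaped.append(out)
    return shaped

# Two equally-rewarded samples: only the uncertain one is regularized,
# and only the diverse one is amplified.
out = gardo_shaped_rewards([1.0, 1.0], [0.9, 0.1], [0.8, 0.2], [0.3, 0.3])
print([round(v, 3) for v in out])  # -> [1.1, 1.0]
```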

Solidot (38)

  1. Reddit overtakes TikTok as the UK's fourth most-visited social media platform

    Reddit has overtaken TikTok to become the fourth most-visited social media platform in the UK. British users are the second-largest visitor group after American users, and the UK user base has grown 88% over the past two years. Ofcom data shows that two-thirds of UK internet users now visit Reddit, up from one-third in 2023. Reddit is especially popular among young Britons: among 18-24-year-old UK users it is the sixth most-visited website, up from tenth a year ago. Factors behind Reddit's rise include Google's algorithm changes that surface forum-style content, and Reddit is exactly that kind of forum-based social media. In the AI era, users are also increasingly turning to human-written content, a trend Reddit benefits from. More than half of Reddit's UK users are women.

  2. Tests show Windows 11 is the slowest of six Windows versions

    Windows 11 is currently the only Windows version Microsoft supports, but its higher hardware requirements, heavier system footprint, and AI features have earned it a poor reputation among users. Testing the latest releases of Windows XP, Windows Vista, Windows 7, Windows 8.1, Windows 10, and Windows 11 on six old ThinkPad X220 laptops (Intel Core i5-2520M CPU, 8 GB RAM, 256 GB storage) showed that Windows 11 boots the slowest; its install size of 37.3 GB is slightly below Windows Vista's 37.8 GB and Windows 7's 44.6 GB; its memory usage of 3.3 GB (up to 3.7 GB) is the highest; and it is more prone to stuttering on old hardware.

  3. Japan to use AI to accelerate manga translation

    Japanese manga is popular overseas, but many readers consume pirated copies. ABJ, an industry association formed by publishers and others, surveyed about 900 piracy sites centered on manga and carrying Japanese publications, and found that in June 2025 alone these sites received 2.8 billion visits from readers in 123 countries and regions, with a cumulative readership of 1.4 billion volumes. The annual losses are estimated at 8.5 trillion yen. Japan hopes to use AI to accelerate manga translation and distribute legitimate works overseas in multiple languages, preventing readers from drifting to piracy sites. Currently only about 10% of the manga published in Japan each year is translated into English.

  4. China's broadcast regulator cracks down on "AI-mangled" videos

    The National Radio and Television Administration (NRTA) announced a month-long special campaign against "AI-mangled" (AI 魔改) videos. The NRTA said that as generative AI develops rapidly, some online accounts abuse AI tools to radically alter, bizarrely deconstruct, and vulgarize classic films, TV dramas, and animation; such content severely betrays the spirit of the original works, disrupts the order of online distribution, encourages infringement, harms the industry, and interferes with minors' formation of a correct cultural and factual understanding. The campaign focuses on removing "AI-mangled" videos based on TV dramas drawn from the Four Great Classical Novels, historical themes, revolutionary themes, and heroic figures, in three categories: videos that severely betray the original's spirit and characters, overturn basic understanding, and deconstruct common consensus; videos that dwell on gore, violence, or sensational vulgarity, promote wrong values, and violate public order and morals; and videos that misappropriate or distort Chinese culture, creating a clearly distorted perception of real history and markers of Chinese civilization and undermining cultural identity. The campaign will also remove cult-style animations generated by altering animated characters well known and loved by children.

  5. BYD's EV sales rise to No. 1 worldwide

    BYD's 2025 electric-vehicle sales surpassed Tesla's, making it the world's largest EV seller. BYD's electric passenger-car sales grew 28% year on year to 2.25 million units in 2025, while Tesla's sales are expected to fall 8% to 1.64 million. Including plug-in hybrids and other models, BYD's overall new-car sales in 2025 grew 8% to 4.6 million units. BYD's sales growth in China has already slowed, however, with September's monthly sales below the same month a year earlier. At a shareholders' meeting in December 2025, BYD chairman Wang Chuanfu said of the domestic decline that the company's technology lead is not what it was a few years ago, the market's excitement over its technical achievements has waned, and the industry is becoming increasingly homogeneous — a shift, he said, consistent with the cyclical nature of product and technology development.

  6. Volkswagen's EVs bring back physical buttons

    Germany's Volkswagen unveiled the interior of its upcoming ID. Polo electric car, showing a full return of physical controls: the car has not only physical buttons and switches but even a knob for audio control. VW chief designer Andreas Mindt said last year that physical buttons for important functions would be reintroduced. The ID. Polo has physical buttons below the display and clear, easy-to-use buttons on the steering wheel. The model goes on sale in Europe this year.

  7. Human ancestors walked upright 7 million years ago

    According to a study published in Science Advances, strong anatomical evidence from 7-million-year-old fossils indicates that Sahelanthropus tchadensis, an ape-like species with a small brain, could walk upright. That means human ancestors walked upright far earlier than expected. Paleontologists from the University of Poitiers in France discovered the Sahelanthropus tchadensis fossils, dating back 7 million years, in the Djurab Desert of Chad. Whether these fossils belong to a direct human ancestor or an extinct side branch of apes has long been disputed, with one key point of contention being whether Sahelanthropus could walk upright. Using advanced 3D imaging and other techniques, the researchers analyzed the species' limb-bone fossils and found three key features supporting bipedal walking. First, a tubercle on the front of the proximal femur: small but important, it is the attachment point of the iliofemoral ligament, the body's strongest ligament and a key to upright walking, and a feature so far observed only in hominins. Second, a natural rotational twist of the femur (femoral anteversion) within the hominin range, which helps the leg swing forward for efficient walking. Third, gluteal muscles similar to those of early hominins, able to stabilize the hip joint and aid standing, walking, and running. The latter two features had been noted in earlier studies; the new research confirms them.

  8. Dell revives the XPS brand

    At last year's CES, Dell announced a controversial decision: it rebranded its product lines, retiring the decades-old XPS, Inspiron, and Latitude names in favor of Dell, Dell Pro, and Dell Pro Max, each with three tiers: Base, Plus, and Premium. Dell said the move would make it easier for customers to find AI PCs that meet their needs: Dell-branded PCs target entertainment, education, and work; Dell Pro targets productivity; Dell Pro Max aims for maximum performance. A year later, at CES 2026, Dell admitted that dropping XPS was a mistake and decided to revive it, positioning it as its premium laptop line; it has no plans to revive Inspiron or Latitude.

  9. Sega co-founder David Rosen dies

    Sega co-founder David Rosen has died at the age of 95. Rosen was a US Air Force pilot stationed in Japan during the Korean War. He stayed after the war out of love for the country, founding Rosen Enterprises in 1954; in 1965 it merged with another company, Nihon Goraku Bussan, whose coin-operated game business, Service Games, gave the new company its abbreviated name, Sega. Over the following 15 years Sega moved from importing games to designing its own, and from jukeboxes and pinball to arcade games, while also building arcades. Rosen served on Sega's board until 1996, then retired. During his tenure Sega's arcade business led the industry, though its console business lost out to Nintendo.

  10. American universities are still working well

    US opinion polls show Americans' attitudes toward higher education are deteriorating. Pew Research Center found the share of adults who consider college "very important" fell from 70% in 2013 to 35% today; an NBC poll shows the share who think a college degree is "not worth it" rose from 40% to 63% over the same period. Enrollment data tell a very different story, however. Four-year colleges awarded 2 million bachelor's degrees in 2023, up from 1.6 million in 2010, and the share of 25-year-olds holding a bachelor's degree has grown steadily over the past 15 years. Economically, higher education remains highly attractive: bachelor's degree holders earn on average about 70% more than high-school graduates with similar work experience. Factoring in scholarships, tuition at US public four-year colleges has fallen more than 20% since 2015. Even accounting for student loans, college graduates net about $8,000 more per year than those without a degree. Part of the perception gap may stem from misunderstanding college pricing: nearly half of US adults believe everyone pays the same tuition, but in fact fewer than 20% of families pay the official sticker price.

  11. BGP routing anomalies preceded the Venezuela incident

    According to Cloudflare Radar monitoring, the day before the US launched its surprise strike on Venezuela, AS8048, the autonomous system of the state-owned telecom CANTV, experienced a route leak. When BGP traffic travels from point A to point B, it can be rerouted through point C; whoever controls point C, even for just a few hours, could in theory collect a large amount of intelligence — something very useful to government agencies. On January 2, AS8048's traffic was routed along a path it would not normally take, and the transit provider on that path, Sparkle, is widely regarded as insecure, i.e., it has not deployed BGP security measures such as RPKI filtering. What exactly happened remains unclear.
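
    A route leak of this kind can be flagged with a crude heuristic: compare the observed AS path against the origin's known transit set. In the sketch below, AS8048 is CANTV and AS6762 is commonly attributed to Sparkle; the other ASNs, and the "known provider" set itself, are invented for illustration:

```python
def looks_like_route_leak(as_path, expected_transits, origin_asn):
    """Rough heuristic: flag a BGP route to `origin_asn` whose AS path
    traverses a transit AS outside the origin's known provider set."""
    if not as_path or as_path[-1] != origin_asn:
        return False  # not a route to this origin
    transits = set(as_path[1:-1])  # drop the collector's peer and the origin
    return bool(transits - expected_transits)

known_providers = {64601, 64602}  # assumed provider set, illustration only
print(looks_like_route_leak([64500, 64601, 8048], known_providers, 8048))        # -> False
print(looks_like_route_leak([64500, 6762, 64601, 8048], known_providers, 8048))  # -> True
```

    Real leak detection (as in Cloudflare Radar) also weighs RPKI validation state and historical path baselines; this sketch only captures the "unexpected transit in the middle of the path" signal.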

  12. Anna's Archive .org domain suspended

    The main domain of the shadow library Anna's Archive, annas-archive.org, has been suspended. The operators said on their subreddit that this relates to the site's recent release of a 300 TB Spotify archive. Public Interest Registry (PIR), the US nonprofit that manages the .org domain, rarely suspends domains, so its action against annas-archive.org was presumably in response to a court order. The operators advise users to visit the site's other domains, such as annas-archive.in or annas-archive.pm.