OrangeBot.AI Digest — 2025-12-15

60 headlines across 4 sources, aggregated for this day.

Hacker News (15)

  1. Upcoming Changes to Let's Encrypt Certificates (community.letsencrypt.org)
  2. “Super secure” messaging app leaks everyone's phone number (ericdaigle.ca)
  3. Problems with D-Bus on the Linux desktop (blog.vaxry.net)
  4. US Tech Force (techforce.gov)
  5. Pro-democracy HK tycoon Jimmy Lai convicted in national security trial (www.bbc.com)
  6. Thousands of U.S. farmers have Parkinson's. They blame a deadly pesticide (www.mlive.com)
  7. It seems that OpenAI is scraping [certificate transparency] logs (benjojo.co.uk)
  8. Carrier Landing in Top Gun for the NES (relaxing.run)
  9. I'm Kenyan. I don't write like ChatGPT, ChatGPT writes like me (marcusolang.substack.com)
  10. Avoid UUID Version 4 Primary Keys in Postgres (andyatkinson.com)
  11. Rob Reiner has died (www.hollywoodreporter.com)
  12. Unscii (viznut.fi)
  13. Adafruit: Arduino’s Rules Are ‘Incompatible With Open Source’ (thenewstack.io)
  14. Read Something Wonderful (readsomethingwonderful.com)
  15. Arborium: Tree-sitter code highlighting with Native and WASM targets (arborium.bearcove.eu)

GitHub Trending (15)

  1. simstudioai / sim

    Open-source platform to build and deploy AI agent workflows.

  2. ZJU-LLMs / Foundations-of-LLMs

    A book for Learning the Foundations of LLMs

  3. jellyfin / jellyfin-desktop

    Jellyfin Desktop Client

  4. shadcn-ui / ui

    A set of beautifully-designed, accessible components and a code distribution platform. Works with your favorite frameworks. Open Source. Open Code.

  5. CopilotKit / CopilotKit

    React UI + elegant infrastructure for AI Copilots, AI chatbots, and in-app AI agents. The Agentic Frontend 🪁

  6. obsproject / obs-studio

    OBS Studio - Free and open source software for live streaming and screen recording

  7. Morganamilo / paru

    Feature packed AUR helper

  8. HKUDS / DeepCode

    DeepCode: Open Agentic Coding (Paper2Code & Text2Web & Text2Backend)

  9. Raphire / Win11Debloat

    A simple, lightweight PowerShell script to remove pre-installed apps, disable telemetry, as well as perform various other changes to customize, declutter and improve your Windows experience. Win11Debloat works for both Windows 10 and Windows 11.

  10. openai / codex

    Lightweight coding agent that runs in your terminal

  11. virattt / ai-hedge-fund

    An AI Hedge Fund Team

  12. theOehrly / Fast-F1

    FastF1 is a python package for accessing and analyzing Formula 1 results, schedules, timing data and telemetry

  13. C4illin / ConvertX

    💾 Self-hosted online file converter. Supports 1000+ formats ⚙️

  14. daytonaio / daytona

    Daytona is a Secure and Elastic Infrastructure for Running AI-Generated Code

  15. public-apis / public-apis

    A collective list of free APIs

Hugging Face (15)

  1. EgoX: Egocentric Video Generation from a Single Exocentric Video

    Egocentric perception enables humans to experience and understand the world directly from their own point of view. Translating exocentric (third-person) videos into egocentric (first-person) videos opens up new possibilities for immersive understanding but remains highly challenging due to extreme camera pose variations and minimal view overlap. This task requires faithfully preserving visible content while synthesizing unseen regions in a geometrically consistent manner. To achieve this, we present EgoX, a novel framework for generating egocentric videos from a single exocentric input. EgoX leverages the pretrained spatio-temporal knowledge of large-scale video diffusion models through lightweight LoRA adaptation and introduces a unified conditioning strategy that combines exocentric and egocentric priors via width- and channel-wise concatenation. Additionally, a geometry-guided self-attention mechanism selectively attends to spatially relevant regions, ensuring geometric coherence and high visual fidelity. Our approach achieves coherent and realistic egocentric video generation while demonstrating strong scalability and robustness across unseen and in-the-wild videos.
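
What "width- and channel-wise concatenation" means can be shown with plain array operations. The tensor layout below is an assumption for illustration only; EgoX's actual latent shapes are not given in the abstract:

```python
import numpy as np

# Hypothetical latent layout (time, channels, height, width); the real
# EgoX tensor shapes are not specified here. This only illustrates the
# two concatenation axes the abstract names.
ego = np.zeros((8, 16, 32, 32))  # egocentric latent being denoised
exo = np.ones((8, 16, 32, 32))   # exocentric conditioning latent

width_concat = np.concatenate([ego, exo], axis=-1)  # frames side by side
chan_concat = np.concatenate([ego, exo], axis=1)    # stacked channel-wise

print(width_concat.shape)  # (8, 16, 32, 64)
print(chan_concat.shape)   # (8, 32, 32, 32)
```

Either way the diffusion model sees the exocentric prior alongside the egocentric target; the two axes just trade off spatial versus feature mixing.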

  2. DentalGPT: Incentivizing Multimodal Complex Reasoning in Dentistry

    Reliable interpretation of multimodal data in dentistry is essential for automated oral healthcare, yet current multimodal large language models (MLLMs) struggle to capture fine-grained dental visual details and lack sufficient reasoning ability for precise diagnosis. To address these limitations, we present DentalGPT, a specialized dental MLLM developed through high-quality domain knowledge injection and reinforcement learning. Specifically, we constructed the largest annotated multimodal dataset for dentistry to date by aggregating over 120k dental images paired with detailed descriptions that highlight diagnostically relevant visual features. Training on this dataset significantly enhances the MLLM's visual understanding of dental conditions, while the subsequent reinforcement learning stage further strengthens its capability for multimodal complex reasoning. Comprehensive evaluations on intraoral and panoramic benchmarks, along with dental subsets of medical VQA benchmarks, show that DentalGPT achieves superior performance in disease classification and dental VQA tasks, outperforming many state-of-the-art MLLMs despite having only 7B parameters. These results demonstrate that high-quality dental data combined with staged adaptation provides an effective pathway for building capable and domain-specialized dental MLLMs.

  3. SVG-T2I: Scaling Up Text-to-Image Latent Diffusion Model Without Variational Autoencoder

    Visual generation grounded in Visual Foundation Model (VFM) representations offers a highly promising unified pathway for integrating visual understanding, perception, and generation. Despite this potential, training large-scale text-to-image diffusion models entirely within the VFM representation space remains largely unexplored. To bridge this gap, we scale the SVG (Self-supervised representations for Visual Generation) framework, proposing SVG-T2I to support high-quality text-to-image synthesis directly in the VFM feature domain. By leveraging a standard text-to-image diffusion pipeline, SVG-T2I achieves competitive performance, reaching 0.75 on GenEval and 85.78 on DPG-Bench. This performance validates the intrinsic representational power of VFMs for generative tasks. We fully open-source the project, including the autoencoder and generation model, together with their training, inference, evaluation pipelines, and pre-trained weights, to facilitate further research in representation-driven visual generation.

  4. V-RGBX: Video Editing with Accurate Controls over Intrinsic Properties

    Large-scale video generation models have shown remarkable potential in modeling photorealistic appearance and lighting interactions in real-world scenes. However, a closed-loop framework that jointly understands intrinsic scene properties (e.g., albedo, normal, material, and irradiance), leverages them for video synthesis, and supports editable intrinsic representations remains unexplored. We present V-RGBX, the first end-to-end framework for intrinsic-aware video editing. V-RGBX unifies three key capabilities: (1) video inverse rendering into intrinsic channels, (2) photorealistic video synthesis from these intrinsic representations, and (3) keyframe-based video editing conditioned on intrinsic channels. At the core of V-RGBX is an interleaved conditioning mechanism that enables intuitive, physically grounded video editing through user-selected keyframes, supporting flexible manipulation of any intrinsic modality. Extensive qualitative and quantitative results show that V-RGBX produces temporally consistent, photorealistic videos while propagating keyframe edits across sequences in a physically plausible manner. We demonstrate its effectiveness in diverse applications, including object appearance editing and scene-level relighting, surpassing the performance of prior methods.

  5. Sliding Window Attention Adaptation

    The self-attention mechanism in Transformer-based Large Language Models (LLMs) scales quadratically with input length, making long-context inference expensive. Sliding window attention (SWA) reduces this cost to linear complexity, but naively enabling complete SWA at inference-time for models pretrained with full attention (FA) causes severe long-context performance degradation due to training-inference mismatch. This makes us wonder: Can FA-pretrained LLMs be well adapted to SWA without pretraining? We investigate this by proposing Sliding Window Attention Adaptation (SWAA), a set of practical recipes that combine five methods for better adaptation: (1) applying SWA only during prefilling; (2) preserving "sink" tokens; (3) interleaving FA/SWA layers; (4) chain-of-thought (CoT); and (5) fine-tuning. Our experiments show that SWA adaptation is feasible while non-trivial: no single method suffices, yet specific synergistic combinations effectively recover the original long-context performance. We further analyze the performance-efficiency trade-offs of different SWAA configurations and provide recommended recipes for diverse scenarios. Our code is available at https://github.com/yuyijiong/sliding-window-attention-adaptation
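
The "sink token" and sliding-window ideas behind recipes (1)-(3) can be sketched as an attention mask. This is a generic illustration of SWA with attention sinks, not SWAA's actual implementation:

```python
import numpy as np

def swa_mask(seq_len, window, n_sink=0):
    """Causal sliding-window attention mask with "sink" tokens.
    mask[i, j] is True when query i may attend to key j: j must be
    causal (j <= i) and either within the last `window` positions
    or one of the first `n_sink` always-visible sink tokens."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (((i - j) < window) | (j < n_sink))

m = swa_mask(6, window=2, n_sink=1)
# Query 5 attends to key 0 (sink) and keys 4-5 (window); 1-3 are masked.
print(m[5].astype(int))  # [1 0 0 0 1 1]
```

Applying such a mask only during prefilling, or only in some layers, corresponds to recipes (1) and (3); full attention is the special case `window >= seq_len`.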

  6. PersonaLive! Expressive Portrait Image Animation for Live Streaming

    Current diffusion-based portrait animation models predominantly focus on enhancing visual quality and expression realism, while overlooking generation latency and real-time performance, which restricts their application range in the live streaming scenario. We propose PersonaLive, a novel diffusion-based framework towards streaming real-time portrait animation with multi-stage training recipes. Specifically, we first adopt hybrid implicit signals, namely implicit facial representations and 3D implicit keypoints, to achieve expressive image-level motion control. Then, a fewer-step appearance distillation strategy is proposed to eliminate appearance redundancy in the denoising process, greatly improving inference efficiency. Finally, we introduce an autoregressive micro-chunk streaming generation paradigm equipped with a sliding training strategy and a historical keyframe mechanism to enable low-latency and stable long-term video generation. Extensive experiments demonstrate that PersonaLive achieves state-of-the-art performance with up to 7-22x speedup over prior diffusion-based portrait animation models.

  7. Exploring MLLM-Diffusion Information Transfer with MetaCanvas

    Multimodal learning has rapidly advanced visual understanding, largely via multimodal large language models (MLLMs) that use powerful LLMs as cognitive cores. In visual generation, however, these powerful core models are typically reduced to global text encoders for diffusion models, leaving most of their reasoning and planning ability unused. This creates a gap: current multimodal LLMs can parse complex layouts, attributes, and knowledge-intensive scenes, yet struggle to generate images or videos with equally precise and structured control. We propose MetaCanvas, a lightweight framework that lets MLLMs reason and plan directly in spatial and spatiotemporal latent spaces and interface tightly with diffusion generators. We empirically implement MetaCanvas on three different diffusion backbones and evaluate it across six tasks, including text-to-image generation, text/image-to-video generation, image/video editing, and in-context video generation, each requiring precise layouts, robust attribute binding, and reasoning-intensive control. MetaCanvas consistently outperforms global-conditioning baselines, suggesting that treating MLLMs as latent-space planners is a promising direction for narrowing the gap between multimodal understanding and generation.

  8. MeshSplatting: Differentiable Rendering with Opaque Meshes

    Primitive-based splatting methods like 3D Gaussian Splatting have revolutionized novel view synthesis with real-time rendering. However, their point-based representations remain incompatible with mesh-based pipelines that power AR/VR and game engines. We present MeshSplatting, a mesh-based reconstruction approach that jointly optimizes geometry and appearance through differentiable rendering. By enforcing connectivity via restricted Delaunay triangulation and refining surface consistency, MeshSplatting creates end-to-end smooth, visually high-quality meshes that render efficiently in real-time 3D engines. On Mip-NeRF360, it boosts PSNR by +0.69 dB over the current state-of-the-art MiLo for mesh-based novel view synthesis, while training 2x faster and using 2x less memory, bridging neural rendering and interactive 3D graphics for seamless real-time scene interaction. The project page is available at https://meshsplatting.github.io/.

  9. Structure From Tracking: Distilling Structure-Preserving Motion for Video Generation

    Reality is a dance between rigid constraints and deformable structures. For video models, that means generating motion that preserves fidelity as well as structure. Despite progress in diffusion models, producing realistic structure-preserving motion remains challenging, especially for articulated and deformable objects such as humans and animals. Scaling training data alone, so far, has failed to resolve physically implausible transitions. Existing approaches rely on conditioning with noisy motion representations, such as optical flow or skeletons extracted using an external imperfect model. To address these challenges, we introduce an algorithm to distill structure-preserving motion priors from an autoregressive video tracking model (SAM2) into a bidirectional video diffusion model (CogVideoX). With our method, we train SAM2VideoX, which contains two innovations: (1) a bidirectional feature fusion module that extracts global structure-preserving motion priors from a recurrent model like SAM2; (2) a Local Gram Flow loss that aligns how local features move together. Experiments on VBench and in human studies show that SAM2VideoX delivers consistent gains (+2.60% on VBench, 21-22% lower FVD, and 71.4% human preference) over prior baselines. Specifically, on VBench, we achieve 95.51%, surpassing REPA (92.91%) by 2.60%, and reduce FVD to 360.57, a 21.20% and 22.46% improvement over REPA- and LoRA-finetuning, respectively. The project website can be found at https://sam2videox.github.io/.

  10. LEO-RobotAgent: A General-purpose Robotic Agent for Language-driven Embodied Operator

    We propose LEO-RobotAgent, a general-purpose language-driven intelligent agent framework for robots. Under this framework, LLMs can operate different types of robots to complete unpredictable complex tasks across various scenarios. This framework features strong generalization, robustness, and efficiency. The application-level system built around it can fully enhance bidirectional human-robot intent understanding and lower the threshold for human-robot interaction. Regarding robot task planning, the vast majority of existing studies focus on the application of large models in single-task scenarios and for single robot types. These algorithms often have complex structures and lack generalizability. Thus, the proposed LEO-RobotAgent framework is designed to be as streamlined as possible, enabling large models to independently think, plan, and act within this clear framework. We provide a modular and easily registrable toolset, allowing large models to flexibly call various tools to meet different requirements. Meanwhile, the framework incorporates a human-robot interaction mechanism, enabling the algorithm to collaborate with humans like a partner. Experiments have verified that this framework can be easily adapted to mainstream robot platforms including unmanned aerial vehicles (UAVs), robotic arms, and wheeled robots, and can efficiently execute a variety of carefully designed tasks with different complexity levels. Our code is available at https://github.com/LegendLeoChen/LEO-RobotAgent.
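
A "modular and easily registrable toolset" of the kind the abstract describes is often built as a decorator-based registry. The names below (`register_tool`, `dispatch`, the example tools) are illustrative, not LEO-RobotAgent's actual API:

```python
# Minimal sketch of a registrable toolset for an LLM agent; the tool
# names and dispatch format here are hypothetical.
TOOLS = {}

def register_tool(name):
    """Decorator that adds a function to the tool registry under `name`."""
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@register_tool("move_to")
def move_to(x, y):
    return f"moving to ({x}, {y})"

@register_tool("grasp")
def grasp(obj):
    return f"grasping {obj}"

def dispatch(call):
    """Execute a tool call emitted by the model,
    e.g. {"tool": "grasp", "args": {"obj": "cup"}}."""
    return TOOLS[call["tool"]](**call["args"])

print(dispatch({"tool": "grasp", "args": {"obj": "cup"}}))  # grasping cup
```

New robot platforms then only need to register their own tools; the planning loop stays unchanged.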

  11. Causal Judge Evaluation: Calibrated Surrogate Metrics for LLM Systems

    LLM-as-judge evaluation has become the de facto standard for scaling model assessment, but the practice is statistically unsound: uncalibrated scores can invert preferences, naive confidence intervals on uncalibrated scores achieve near-0% coverage, and importance-weighted estimators collapse under limited overlap despite high effective sample size (ESS). We introduce Causal Judge Evaluation (CJE), a framework that fixes all three failures. On n=4,961 Chatbot Arena prompts (after filtering from 5k), CJE achieves 99% pairwise ranking accuracy at full sample size (94% averaged across configurations), matching oracle quality, at 14x lower cost (for ranking 5 policies) by calibrating a 16x cheaper judge on just 5% oracle labels (~250 labels). CJE combines three components: (i) AutoCal-R, reward calibration via mean-preserving isotonic regression; (ii) SIMCal-W, weight stabilization via stacking of S-monotone candidates; and (iii) Oracle-Uncertainty Aware (OUA) inference that propagates calibration uncertainty into confidence intervals. We formalize the Coverage-Limited Efficiency (CLE) diagnostic, which explains why IPS-style estimators fail even when ESS exceeds 90%: the logger rarely visits regions where target policies concentrate. Key findings: SNIPS inverts rankings even with reward calibration (38% pairwise, negative Kendall's tau) due to weight instability; calibrated IPS remains near-random (47%) despite weight stabilization, consistent with CLE; OUA improves coverage from near-0% to ~86% (Direct) and ~96% (stacked-DR), where naive intervals severely under-cover.
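
The isotonic-regression calibration at the heart of AutoCal-R can be sketched with the classic pool-adjacent-violators algorithm. This is a generic PAV sketch (mean-preserving by construction, since each pooled block takes its block mean), not CJE's actual code:

```python
def pav(y):
    """Pool Adjacent Violators: isotonic (non-decreasing) fit to y.
    Each pooled block takes the mean of its members, so the fit has
    the same overall mean as y, i.e. it is mean-preserving."""
    out = []
    for v in y:
        out.append([v, 1])  # [block mean, block size]
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            m2, n2 = out.pop()
            m1, n1 = out.pop()
            out.append([(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2])
    return [m for m, n in out for _ in range(n)]

# Judge scores ordered by the oracle quantity they should increase with;
# calibration pools the out-of-order pair into its mean.
scores = [0.2, 0.5, 0.4, 0.9]
fit = pav(scores)
print(fit)  # [0.2, 0.45, 0.45, 0.9]
```

The pooled fit keeps the ordering information a ranking needs while removing the judge's non-monotone noise, which is why uncalibrated scores can invert preferences but calibrated ones need not.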

  12. CLINIC: Evaluating Multilingual Trustworthiness in Language Models for Healthcare

    Integrating language models (LMs) in healthcare systems holds great promise for improving medical workflows and decision-making. However, a critical barrier to their real-world adoption is the lack of reliable evaluation of their trustworthiness, especially in multilingual healthcare settings. Existing LMs are predominantly trained in high-resource languages, making them ill-equipped to handle the complexity and diversity of healthcare queries in mid- and low-resource languages, posing significant challenges for deploying them in global healthcare contexts where linguistic diversity is key. In this work, we present CLINIC, a Comprehensive Multilingual Benchmark to evaluate the trustworthiness of language models in healthcare. CLINIC systematically benchmarks LMs across five key dimensions of trustworthiness: truthfulness, fairness, safety, robustness, and privacy, operationalized through 18 diverse tasks, spanning 15 languages (covering all the major continents), and encompassing a wide array of critical healthcare topics like disease conditions, preventive actions, diagnostic tests, treatments, surgeries, and medications. Our extensive evaluation reveals that LMs struggle with factual correctness, demonstrate bias across demographic and linguistic groups, and are susceptible to privacy breaches and adversarial attacks. By highlighting these shortcomings, CLINIC lays the foundation for enhancing the global reach and safety of LMs in healthcare across diverse languages.

  13. Fairy2i: Training Complex LLMs from Real LLMs with All Parameters in {±1, ±i}

    Large language models (LLMs) have revolutionized artificial intelligence, yet their massive memory and computational demands necessitate aggressive quantization, increasingly pushing representations toward the theoretical limit of a single bit. While complex-valued LLMs, such as iFairy, offer a superior chance for low-bit representation compared to real-valued counterparts, they require training from scratch, preventing the utilization of the vast ecosystem of pre-trained real-valued foundation models. Here we present Fairy2i, a universal framework that transforms pre-trained real-valued layers into an equivalent widely-linear complex form, enabling extremely low-bit quantization while reusing existing checkpoints. By proving a lossless mathematical equivalence between real and widely-linear maps, we convert standard Transformers into the complex domain and employ a phase-aware quantization scheme with a highly efficient codebook of fourth roots of unity. Furthermore, we introduce a recursive residual quantization mechanism that iteratively minimizes quantization error, allowing inference to proceed via efficient multiplication-free accumulation. We demonstrate that Fairy2i restores the performance of LLaMA-2 7B at an effective 2-bit precision to levels nearly comparable with full-precision baselines, significantly outperforming state-of-the-art real-valued binary and ternary quantization methods. This work bridges the gap between the representational efficiency of complex-valued arithmetic and the practical utility of pre-trained models, paving a new way for efficient inference on commodity hardware.
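
The "fourth roots of unity" codebook and the recursive residual idea can be sketched in a few lines. The projection-based scale below is our own illustrative choice, not necessarily Fairy2i's scheme:

```python
ROOTS = [1, 1j, -1, -1j]  # the fourth roots of unity, {+1, +i, -1, -i}

def nearest_root(w):
    """Phase-aware rounding: the root of unity closest in angle to w."""
    return min(ROOTS, key=lambda r: abs(w / abs(w) - r))

def residual_quantize(w, steps=3):
    """Sketch of recursive residual quantization: each step emits a
    (scale, root) pair approximating the remaining residual, then
    recurses on what is left. The projection-based scale is an
    illustrative choice, not the paper's exact algorithm."""
    terms, resid = [], w
    for _ in range(steps):
        if abs(resid) < 1e-12:
            break
        r = nearest_root(resid)
        s = (resid * r.conjugate()).real  # project residual onto r
        terms.append((s, r))
        resid -= s * r
    return terms, resid

terms, resid = residual_quantize(2 + 1j)
print(terms)  # [(2.0, 1), (1.0, 1j)]
```

Because every codeword is ±1 or ±i, multiplying by it only swaps or negates real and imaginary parts, which is what makes multiplication-free accumulation possible.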

  14. Task adaptation of Vision-Language-Action model: 1st Place Solution for the 2025 BEHAVIOR Challenge

    We present a vision-action policy that won 1st place in the 2025 BEHAVIOR Challenge - a large-scale benchmark featuring 50 diverse long-horizon household tasks in photo-realistic simulation, requiring bimanual manipulation, navigation, and context-aware decision making. Building on the Pi0.5 architecture, we introduce several innovations. Our primary contribution is correlated noise for flow matching, which improves training efficiency and enables correlation-aware inpainting for smooth action sequences. We also apply learnable mixed-layer attention and System 2 stage tracking for ambiguity resolution. Training employs multi-sample flow matching to reduce variance, while inference uses action compression and challenge-specific correction rules. Our approach achieves 26% q-score across all 50 tasks on both public and private leaderboards.

  15. Scaling Behavior of Discrete Diffusion Language Models

    Modern LLM pre-training consumes vast amounts of compute and training data, making the scaling behavior, or scaling laws, of different models a key distinguishing factor. Discrete diffusion language models (DLMs) have been proposed as an alternative to autoregressive language models (ALMs). However, their scaling behavior has not yet been fully explored, with prior work suggesting that they require more data and compute to match the performance of ALMs. We study the scaling behavior of DLMs on different noise types by smoothly interpolating between masked and uniform diffusion while paying close attention to crucial hyperparameters such as batch size and learning rate. Our experiments reveal that the scaling behavior of DLMs strongly depends on the noise type and is considerably different from ALMs. While all noise types converge to similar loss values in compute-bound scaling, we find that uniform diffusion requires more parameters and less data for compute-efficient training compared to masked diffusion, making it a promising candidate in data-bound settings. We scale our uniform diffusion model up to 10B parameters trained for 10^22 FLOPs, confirming the predicted scaling behavior and making it the largest publicly known uniform diffusion model to date.
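
For a sense of scale, the common C ≈ 6·N·D rule of thumb (an assumption here; the paper's own FLOP accounting may differ) puts the 10B-parameter, 10^22-FLOP run at roughly 170B tokens:

```python
# Back-of-envelope token count under the common C ≈ 6*N*D rule of thumb
# (an assumption here; the paper's own FLOP accounting may differ).
C = 1e22         # training compute, FLOPs
N = 10e9         # parameters
D = C / (6 * N)  # tokens seen
print(f"{D:.2e} tokens")  # 1.67e+11 tokens (~167B)
```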

Solidot (15)

  1. After DRAM and SSDs, hard disk drives are rising in price too

    Following DRAM and solid-state drives, mechanical hard drives have also begun climbing in price over the past few months. In October-December, 3.5-inch 1TB drives for desktop PCs and surveillance cameras rose about 4% over the previous quarter to around $53.00, and 2.5-inch 1TB laptop drives rose about 3% to around $50.00, the largest increases since Q4 2023. China is accelerating purchases of PC HDDs, mainly for surveillance cameras. HDD prices are expected to keep rising for some time.

  2. GNOME Shell extensions ban AI-generated submissions

    Facing a flood of AI-generated GNOME Shell extensions, the GNOME project has announced it will reject such submissions. The developers say using AI as a learning aid or as a development tool such as code completion is not prohibited, but extension authors should be able to reasonably explain the code they submit. Submissions showing telltale signs of AI generation, such as large amounts of unnecessary code, inconsistent coding style, or use of fabricated APIs, will be rejected. GNOME developers note that some authors who use AI do not understand their own code.

  3. TIME's Person of the Year: the architects of AI

    TIME's 2025 Person of the Year is the chief architects of the AI era: Nvidia CEO Jensen Huang, AMD CEO Lisa Su, xAI CEO Elon Musk, Meta CEO Mark Zuckerberg, OpenAI CEO Sam Altman, Fei-Fei Li (often called the godmother of AI), Anthropic CEO Dario Amodei, and Google DeepMind CEO Demis Hassabis. TIME said that, for better or worse, these people dominated this year's headlines: they ushered in the era of machine intelligence, inspiring both awe and anxiety, reshaping the status quo and expanding what is possible.

  4. Denmark plans strict limits on social media for under-15s

    Following Australia, Denmark plans to strictly limit social media use by those under 15. The Danish government has reached an agreement with the three governing coalition parties and two opposition parties in parliament, and the plan could become law as early as mid-2026. The proposed measures would give some parents the right to allow their children to use social media from age 13, but the full plan has not yet been published. Last month Denmark's Ministry of Digital Affairs announced a new app called "digital evidence", expected to launch next spring, which is likely the core of the plan: the app will present proof of age to ensure users comply with social media age limits. Malaysia and Norway are taking similar measures.

  5. iRobot files for bankruptcy reorganization

    iRobot, maker of the Roomba robot vacuum, has filed for bankruptcy reorganization. Under the restructuring agreement, control of iRobot passes to its main contract manufacturer, Shenzhen PICEA Robotics, and its subsidiary Santrum Hong Kong. iRobot's core business has been squeezed by competition from Chinese-made robot vacuums; an attempted sale to e-commerce giant Amazon fell through for lack of EU regulatory approval. Valued at as much as $3.56 billion in 2021, the company is now worth only about $140 million. Most iRobot products sold in the US are made in Vietnam, and the 46% US tariff on Vietnamese goods added $23 million to its costs this year. Over its 35 years, iRobot has built more than 50 million robot vacuums.

  6. CEOs plan to keep increasing AI spending in 2026

    Consultancy Teneo surveyed more than 350 CEOs of public companies, each with annual revenue above $1 billion. 68% plan to increase AI spending in 2026, even as respondents report that fewer than half of their current AI projects have returned more than they cost. CEOs say AI has been most successful in marketing and customer service, and faces challenges in high-stakes areas such as security, legal, and HR. Teneo also surveyed about 400 institutional investors: 53% expect AI investments to begin paying off within six months, while 84% of CEOs at large companies (annual revenue of $10 billion or more) believe AI investments will take more than six months to pay off. In addition, 67% of CEOs believe AI will increase entry-level headcount, and 58% believe it will increase the number of leadership roles.

  7. LG TV webOS update adds Copilot AI that cannot be uninstalled

    Microsoft appears to be pushing its AI apps onto televisions. Users on Reddit report that after a webOS update, their LG TVs now include Microsoft's Copilot AI, and the app cannot be uninstalled. What Copilot can actually do on a TV remains unclear. Beyond Copilot, LG may have added other AI features to webOS, including a setting called Live Plus: when enabled, the TV can recognize what is shown on screen and use that viewing data for personalized recommendations and advertising. That feature can be turned off under Settings > All Settings > General > Additional Settings.

  8. Astronomers image a Tatooine-like exoplanet orbiting two stars

    Astronomers have directly imaged an exoplanet that, like Tatooine in Star Wars, orbits a pair of stars. Directly imaging any exoplanet is rare; imaging one that circles two stars at once is rarer still. Remarkably, the planet, HD 143811 AB b, orbits its parent binary at about 64 AU, making it the closest-in circumbinary planet yet found by direct imaging, with an orbital radius about six times smaller than previous such planets. The planet had in fact been hiding in observation data taken years ago. It has a mass of about 6 Jupiters, an age of roughly 13 million years, and a temperature higher than that of any planet in the solar system. The architecture of the HD 143811 system is also striking: the two stars orbit each other tightly (0.18 AU), completing a revolution every 18 days, while the planet circles the pair with a semi-major axis of about 64 AU and a period of about 330 years, a timescale similar to Pluto's orbit around the Sun.
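
The quoted orbit is self-consistent under Kepler's third law. The implied total stellar mass below is our own inference, not a figure from the article:

```python
# Kepler's third law in solar-system units (P in years, a in AU, M in
# solar masses): P**2 = a**3 / M. The article gives a ≈ 64 AU and
# P ≈ 330 yr; the implied combined mass of the binary is our inference.
a, P = 64, 330
M = a**3 / P**2
print(round(M, 2))  # 2.41 solar masses, plausible for a close binary
```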

  9. Global EV sales up 21% so far this year

    Benchmark Mineral Intelligence reports that global EV sales reached 2 million in November 2025 and 18.5 million year-to-date, up 21% over the same period of 2024. Europe posted the strongest November growth, up 36% year-over-year (battery EVs up 35%, plug-in hybrids up 39%), with 3.8 million EVs sold so far this year, up 33%. North American sales have slipped since the EV tax credit ended on September 30; year-to-date sales are down 1% from 2024. China still far outsells the rest of the world, with year-to-date sales up 19% to 11.6 million. BYD set an export record of 131,935 EVs in November, has sold 200,000 vehicles in Europe this year, doubled its sales in Southeast Asia, and grew South American sales by more than 50%. Outside China, Europe, and North America, EV sales this year are up 48% over 2024 to 1.5 million.

  10. Linux 6.19-rc1 released; Loongson adds 32-bit LoongArch32 support

    Linus Torvalds normally tags new kernel release candidates on Sunday, which, since he lives in North America, lands on Monday morning Beijing time. This week, however, Torvalds was in Japan for the Linux Plumbers Conference and the Linux kernel Maintainers Summit, where Sunday falls a day ahead of the US, so Linux 6.19-rc1 shipped on local Sunday. Torvalds noted this may catch out those who habitually send pull requests at the last minute. Linux 6.19 brings driver, subsystem, and architecture updates; one notable addition is Loongson's support for the 32-bit LoongArch32 architecture. Most CPU architectures moved from 32-bit to 64-bit; Loongson is going the other way, from 64-bit to 32-bit.

  11. Long-term energy drink consumption gave a man a stroke

    BMJ Case Reports this week described an unusual case: a man suffered a stroke attributed to long-term energy drink consumption. The man, in his 50s, was hospitalized with sudden complete numbness on his left side and ataxia; his blood pressure was 254/150 mm Hg (normal is around 120/80). He did not smoke, drink alcohol, or abuse any drugs, was otherwise healthy, and all routine test results were normal. But a brain scan showed evidence of arterial spasm, which is strongly associated with hypertension, and an MRI showed tissue death in the thalamus. He underwent stroke rehabilitation and was discharged three days later with his blood pressure down to 170/80 mm Hg, still high but no longer critical. At follow-up visits over the next three months his blood pressure climbed again, and he was rehospitalized once for hypertension. When doctors asked about his lifestyle, he revealed that he drank an average of eight cans of high-potency energy drink a day. Each can was labeled as containing 160 mg of caffeine, a stimulant that raises blood pressure. A regular cup of coffee contains about 90 mg of caffeine, so eight cans add up to 1,280 mg, the equivalent of about 14 cups of coffee. The doctors note that other ingredients in energy drinks may also act as stimulants, and earlier research found that continuous energy drink consumption has a cumulative effect on blood pressure. On their advice he stopped drinking them; within a week his blood pressure returned to normal, and it has stayed normal eight years on.
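
The caffeine arithmetic in the report is easy to verify:

```python
# Checking the report's caffeine arithmetic.
cans, mg_per_can = 8, 160
mg_per_coffee = 90                   # typical cup of coffee
total = cans * mg_per_can
print(total)                         # 1280 mg per day
print(round(total / mg_per_coffee))  # about 14 cups-of-coffee equivalent
```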

  12. Chinese satellite in near miss with a Starlink satellite

    On December 10, CAS Space's Kinetica-1 (Lijian-1) Y11 rocket successfully placed 9 satellites into orbit from the Dongfeng launch site: the UAE's 813 satellite, Jilin Gaofen-07B01, 07C01, and 07D01, Dongpo-15, Yuxing-2 09, Yixian-A, SPNEX, and Slippers2Sat. On Friday SpaceX disclosed that one of the satellites had a near miss with Starlink satellite STARLINK-6079 (56120), the two passing within just 200 meters of each other. Michael Nicolls, SpaceX's VP of Starlink engineering, complained that the launch had not been coordinated in advance with satellites already in orbit. CAS Space says it is investigating the incident. Orbits are growing crowded, largely because of SpaceX: fewer than 3,400 operational satellites were in orbit in 2020, versus more than 13,000 today, most of them SpaceX Starlink broadband satellites, which number about 9,300, with more than 3,000 launched by SpaceX this year alone.

  13. China leads in nearly 90% of critical technology fields

    According to a report by the think tank Australian Strategic Policy Institute (ASPI), China leads in nearly 90% of critical technology fields. ASPI assessed research across 74 current and emerging technologies: China ranked first in 66, including nuclear energy, synthetic biology, and small satellites, while the US ranked first in 8, including quantum computing and geoengineering. The results show a dramatic reversal of the two countries' positions: in the early 2000s the US led in 90% of these fields while China led in fewer than 5%. ASPI analyzed a database of more than 9 million publications, ranking countries by the affiliations of authors of the top 10% most-cited papers over the past five years. Steven Hai, a political economist at Xi'an Jiaotong-Liverpool University in Suzhou, cautioned that the analysis should not be read as a collapse of American strength.

  14. US auto alliance urges government to block Chinese automakers from building US plants

    An industry alliance of major automakers including General Motors, Ford, Toyota, Volkswagen, Hyundai, and Stellantis has urged the US government to block Chinese automakers and battery manufacturers from building plants in the US, calling China a "clear and present threat" to the American auto industry. The alliance called on members of Congress to preserve the ban on importing IT technology and services from China, a ban that effectively prohibits importing vehicles from Chinese manufacturers. The group argues that no amount of domestic investment by US automakers and battery producers can offset the effects of China's subsidy-fueled, long-running global oversupply, which could lead to dumping; Congress and the Trump administration must prevent that from happening in the US market.

  15. Russian ransomware group stores its master key in plaintext

    After months of silence, the pro-Russian hacking group CyberVolk has launched a Telegram-based ransomware-as-a-service, CyberVolk 2.x (aka VolkLocker). The Telegram-based service lowers the barrier to entry, but the good news is that the developers slipped up while testing the program and hard-coded the master key into the executable, which means victims can decrypt their files without paying the ransom. VolkLocker does not generate encryption keys dynamically; the hard-coded master key is written in plaintext to the %TEMP% folder. The ransomware was found to encrypt files with AES-256-GCM (Galois/Counter Mode).