OrangeBot.AI Digest — 2025-12-01
60 headlines across 8 sources, aggregated for this day.
Hacker News (15)
- How to Attend Meetings – Internal guidelines from the New York Times (docs.google.com)
- Ghostty compiled to WASM with xterm.js API compatibility (github.com)
- High-income job losses are cooling housing demand (jbrec.com)
- India orders smartphone makers to preload state-owned cyber safety app (www.reuters.com)
- DeepSeek-v3.2: Pushing the frontier of open large language models [pdf] (huggingface.co)
- Google unkills JPEG XL? (tonisagrista.com)
- Ask HN: Who is hiring? (December 2025)
- Cartographers Have Been Hiding Covert Illustrations Inside of Switzerland's Maps (eyeondesign.aiga.org)
- Why xor eax, eax? (xania.org)
- Self-hosting a Matrix server for 5 years (yaky.dev)
- UK Government plans new powers to label dissenting movements as 'subversion' (netpol.org)
- Games using anti-cheats and their compatibility with GNU/Linux or Wine/Proton (areweanticheatyet.com)
- DeepSeekMath-V2: Towards Self-Verifiable Mathematical Reasoning (huggingface.co)
- Google Antigravity just deleted the contents of a whole drive (old.reddit.com)
- Regarding Thien-Thi Nguyen
GitHub Trending (15)
- sansan0 / TrendRadar
🎯 Say goodbye to information overload: AI helps you make sense of trending news, with lightweight public-opinion monitoring and analysis. Multi-platform trend aggregation plus MCP-based AI analysis tools. Monitors 35 platforms (Douyin, Zhihu, Bilibili, Wallstreetcn, Cailian Press, and more), with smart filtering, automatic push notifications, and conversational AI analysis (mine news in natural language: trend tracking, sentiment analysis, similarity search, 13 tools in all). Supports push via WeCom / personal WeChat / Feishu / DingTalk / Telegram / email / ntfy / bark / Slack; 30-second web deployment, phone notifications within 1 minute, no programming required. Docker deployment supported. ⭐ Make the algorithm work for you; use AI to understand what's trending
- google / adk-go
An open-source, code-first Go toolkit for building, evaluating, and deploying sophisticated AI agents with flexibility and control.
- TapXWorld / ChinaTextbook
PDF textbooks for all levels: primary, middle, and high school, plus university.
- yeongpin / cursor-free-vip
[Supports 0.49.x] (Resets the Cursor AI machine ID & bypasses the token limit.) Automatically resets the machine ID and unlocks Pro features for free, working around: "You've reached your trial request limit." / "Too many free trial accounts used on this machine. Please upgrade to pro. We have this limit in place to prevent abuse. Please let us know if you believe this is a mistake."
- nvm-sh / nvm
Node Version Manager - POSIX-compliant bash script to manage multiple active node.js versions
- traefik / traefik
The Cloud Native Application Proxy
- HKUDS / LightRAG
[EMNLP2025] "LightRAG: Simple and Fast Retrieval-Augmented Generation"
- bobeff / open-source-games
A list of open source games.
- volcengine / verl
verl: Volcano Engine Reinforcement Learning for LLMs
- GibsonAI / Memori
Open-Source Memory Engine for LLMs, AI Agents & Multi-Agent Systems
- yangshun / tech-interview-handbook
Curated coding interview preparation materials for busy software engineers
- microsoft / call-center-ai
Place a phone call from an AI agent in a single API call, or call the bot directly on its configured phone number!
- MustardChef / WSABuilds
Run Windows Subsystem For Android on your Windows 10 and Windows 11 PC using prebuilt binaries with Google Play Store (MindTheGapps) and/or Magisk or KernelSU (root solutions) built in.
- playcanvas / engine
Powerful web graphics runtime built on WebGL, WebGPU, WebXR and glTF
- iptv-org / iptv
Collection of publicly available IPTV channels from all over the world
Hugging Face (15)
- Z-Image: An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer
The landscape of high-performance image generation models is currently dominated by proprietary systems, such as Nano Banana Pro and Seedream 4.0. Leading open-source alternatives, including Qwen-Image, Hunyuan-Image-3.0 and FLUX.2, are characterized by massive parameter counts (20B to 80B), making them impractical for inference and fine-tuning on consumer-grade hardware. To address this gap, we propose Z-Image, an efficient 6B-parameter foundation generative model built upon a Scalable Single-Stream Diffusion Transformer (S3-DiT) architecture that challenges the "scale-at-all-costs" paradigm. By systematically optimizing the entire model lifecycle -- from a curated data infrastructure to a streamlined training curriculum -- we complete the full training workflow in just 314K H800 GPU hours (approx. $630K). Our few-step distillation scheme with reward post-training further yields Z-Image-Turbo, offering both sub-second inference latency on an enterprise-grade H800 GPU and compatibility with consumer-grade hardware (<16GB VRAM). Additionally, our omni-pre-training paradigm also enables efficient training of Z-Image-Edit, an editing model with impressive instruction-following capabilities. Both qualitative and quantitative experiments demonstrate that our model achieves performance comparable to or surpassing that of leading competitors across various dimensions. Most notably, Z-Image exhibits exceptional capabilities in photorealistic image generation and bilingual text rendering, delivering results that rival top-tier commercial models, thereby demonstrating that state-of-the-art results are achievable with significantly reduced computational overhead. We publicly release our code, weights, and online demo to foster the development of accessible, budget-friendly, yet state-of-the-art generative models.
- REASONEDIT: Towards Reasoning-Enhanced Image Editing Models
Recent advances in image editing models have shown remarkable progress. A common architectural design couples a multimodal large language model (MLLM) encoder with a diffusion decoder, as seen in systems such as Step1X-Edit and Qwen-Image-Edit, where the MLLM encodes both the reference image and the instruction but remains frozen during training. In this work, we demonstrate that unlocking the reasoning capabilities of the MLLM can further push the boundaries of editing models. Specifically, we explore two reasoning mechanisms, thinking and reflection, which enhance instruction understanding and editing accuracy. Building on this, our proposed framework enables image editing in a thinking-editing-reflection loop: the thinking mechanism leverages the world knowledge of the MLLM to interpret abstract instructions, while the reflection mechanism reviews editing results, automatically corrects unintended manipulations, and identifies the stopping round. Extensive experiments demonstrate that our reasoning approach achieves significant performance gains, with improvements on ImgEdit (+4.3%), GEdit (+4.7%), and Kris (+8.2%) when initializing our DiT from Step1X-Edit (ReasonEdit-S), and also outperforms previous open-source methods on both GEdit and Kris when integrated with Qwen-Image-Edit (ReasonEdit-Q).
- AnyTalker: Scaling Multi-Person Talking Video Generation with Interactivity Refinement
Recently, multi-person video generation has started to gain prominence. While a few preliminary works have explored audio-driven multi-person talking video generation, they often face challenges due to the high costs of diverse multi-person data collection and the difficulty of driving multiple identities with coherent interactivity. To address these challenges, we propose AnyTalker, a multi-person generation framework that features an extensible multi-stream processing architecture. Specifically, we extend Diffusion Transformer's attention block with a novel identity-aware attention mechanism that iteratively processes identity-audio pairs, allowing arbitrary scaling of drivable identities. Besides, training multi-person generative models demands massive multi-person data. Our proposed training pipeline depends solely on single-person videos to learn multi-person speaking patterns and refines interactivity with only a few real multi-person clips. Furthermore, we contribute a targeted metric and dataset designed to evaluate the naturalness and interactivity of the generated multi-person videos. Extensive experiments demonstrate that AnyTalker achieves remarkable lip synchronization, visual quality, and natural interactivity, striking a favorable balance between data costs and identity scalability.
- Vision Bridge Transformer at Scale
We introduce Vision Bridge Transformer (ViBT), a large-scale instantiation of Brownian Bridge Models designed for conditional generation. Unlike traditional diffusion models that transform noise into data, Bridge Models directly model the trajectory between inputs and outputs, creating an efficient data-to-data translation paradigm. By scaling these models to 20B and 1.3B parameters, we demonstrate their effectiveness for image and video translation tasks. To support this scale, we adopt a Transformer architecture and propose a variance-stabilized velocity-matching objective for robust training. Together, these advances highlight the power of scaling Bridge Models for instruction-based image editing and complex video translation.
- Architecture Decoupling Is Not All You Need For Unified Multimodal Model
Unified multimodal models for image generation and understanding represent a significant step toward AGI and have attracted widespread attention from researchers. The main challenge of this task lies in the difficulty of establishing an optimal training paradigm due to inherently conflicting targets in understanding and generation tasks. To alleviate these conflicts and pursue higher performance, many researchers adopt varying degrees of model decoupling (e.g., dual image encoders, MoE/MoT architectures, or a frozen MLLM). However, excessive model decoupling can lead to the loss of interleaved generation ability, undermining the original intent of unified models. In this work, we aim to explore how to mitigate task conflicts without resorting to model decoupling. First, we analyze why decoupling alleviates conflicts by studying the cross-modal attention behavior of models. We observe that model decoupling essentially drives models toward task-specific multimodal interaction patterns, as seen in Qwen-VL and HunyuanImage, and that the more thorough the decoupling, the more consistent the behavior becomes. Motivated by this observation, we propose an Attention Interaction Alignment (AIA) loss, which explicitly learns task-specific multimodal interaction patterns during training. To demonstrate the generalizability of our AIA loss, we apply it to Emu3 and Janus-Pro during the SFT and post-training stages, respectively. Without bells and whistles, AIA not only refines cross-modal attention patterns but also boosts both generation and understanding performance.
- DeepSeekMath-V2: Towards Self-Verifiable Mathematical Reasoning
Large language models have made significant progress in mathematical reasoning, which serves as an important testbed for AI and could impact scientific research if further advanced. By scaling reasoning with reinforcement learning that rewards correct final answers, LLMs have improved from poor performance to saturating quantitative reasoning competitions like AIME and HMMT in one year. However, this approach faces fundamental limitations. Pursuing higher final answer accuracy doesn't address a key issue: correct answers don't guarantee correct reasoning. Moreover, many mathematical tasks like theorem proving require rigorous step-by-step derivation rather than numerical answers, making final answer rewards inapplicable. To push the limits of deep reasoning, we believe it is necessary to verify the comprehensiveness and rigor of mathematical reasoning. Self-verification is particularly important for scaling test-time compute, especially for open problems without known solutions. Towards self-verifiable mathematical reasoning, we investigate how to train an accurate and faithful LLM-based verifier for theorem proving. We then train a proof generator using the verifier as the reward model, and incentivize the generator to identify and resolve as many issues as possible in their own proofs before finalizing them. To maintain the generation-verification gap as the generator becomes stronger, we propose to scale verification compute to automatically label new hard-to-verify proofs, creating training data to further improve the verifier. Our resulting model, DeepSeekMath-V2, demonstrates strong theorem-proving capabilities, achieving gold-level scores on IMO 2025 and CMO 2024 and a near-perfect 118/120 on Putnam 2024 with scaled test-time compute.
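The generator-verifier interaction the abstract describes (reward the generator for resolving the issues its verifier flags before finalizing a proof) can be sketched as a loop. All components below are toy stand-ins of my own devising, not DeepSeek's models:

```python
# Schematic of using a verifier as the reward signal for a proof generator:
# the generator drafts a proof, the verifier flags remaining issues, and the
# generator revises until no issues remain (or a round budget is exhausted).
# Proof steps are strings; a trailing "?" marks an unresolved step.

def verifier(proof):
    # Return the list of proof steps still flagged as unresolved.
    return [step for step in proof if step.endswith("?")]

def generator_refine(proof, issues):
    # Resolve each flagged step; here "resolving" just replaces "?" with ".".
    flagged = set(issues)
    return [s[:-1] + "." if s in flagged else s for s in proof]

def finalize(proof, max_rounds=5):
    for _ in range(max_rounds):
        issues = verifier(proof)
        if not issues:            # reward condition: no remaining issues
            return proof, True
        proof = generator_refine(proof, issues)
    return proof, False

proof, accepted = finalize(["lemma holds?", "by induction."])
```

The abstract's further step, scaling verification compute to label hard proofs and retrain the verifier, would correspond to periodically replacing `verifier` with a stronger one as the generator improves.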
- DiP: Taming Diffusion Models in Pixel Space
Diffusion models face a fundamental trade-off between generation quality and computational efficiency. Latent Diffusion Models (LDMs) offer an efficient solution but suffer from potential information loss and non-end-to-end training. In contrast, existing pixel space models bypass VAEs but are computationally prohibitive for high-resolution synthesis. To resolve this dilemma, we propose DiP, an efficient pixel space diffusion framework. DiP decouples generation into a global and a local stage: a Diffusion Transformer (DiT) backbone operates on large patches for efficient global structure construction, while a co-trained lightweight Patch Detailer Head leverages contextual features to restore fine-grained local details. This synergistic design achieves computational efficiency comparable to LDMs without relying on a VAE. DiP achieves up to 10× faster inference than previous methods while increasing the total parameter count by only 0.3%, and reaches a 1.79 FID score on ImageNet 256×256.
- DualVLA: Building a Generalizable Embodied Agent via Partial Decoupling of Reasoning and Action
To build a generalizable Vision-Language-Action (VLA) model with strong reasoning ability, a common strategy is to first train a specialist VLA on robot demonstrations to acquire reliable manipulation skills, and then incorporate mixed annotated robot data together with multimodal data to restore broader reasoning capabilities. However, we observe that the resulting reasoning VLA often suffers from degraded action performance compared to the specialist model before fine-tuning, a phenomenon we refer to as action degeneration. To address this issue, we propose DualVLA, which enhances action performance through carefully designed post-training while still preserving reasoning capability. We first introduce a dual-layer data pruning method that removes redundant embodied reasoning, preventing it from adversely influencing action learning. To further strengthen action generation, we design a dual-teacher adaptive distillation strategy that assigns different supervision signals to different data domains while maintaining reasoning ability. To fill the evaluation gap for generalist VLAs, we also propose VLA Score, which decouples VLA capability into reasoning, intention, action, and alignment dimensions for a more fine-grained assessment. Experiments show that DualVLA achieves an average success rate of 61.0 in SimplerEnv and an average score of 65.4 across eight competitive multimodal benchmarks, demonstrating a stronger balance between precise action execution and multimodal understanding. Project Website: https://costaliya.github.io/DualVLA/.
- Adversarial Flow Models
We present adversarial flow models, a class of generative models that unifies adversarial models and flow models. Our method supports native one-step or multi-step generation and is trained using the adversarial objective. Unlike traditional GANs, where the generator learns an arbitrary transport plan between the noise and the data distributions, our generator learns a deterministic noise-to-data mapping, which is the same optimal transport as in flow-matching models. This significantly stabilizes adversarial training. Also, unlike consistency-based methods, our model directly learns one-step or few-step generation without needing to learn the intermediate timesteps of the probability flow for propagation. This saves model capacity, reduces training iterations, and avoids error accumulation. Under the same 1NFE setting on ImageNet-256px, our B/2 model approaches the performance of consistency-based XL/2 models, while our XL/2 model creates a new best FID of 2.38. We additionally show the possibility of end-to-end training of 56-layer and 112-layer models through depth repetition without any intermediate supervision, and achieve FIDs of 2.08 and 1.94 using a single forward pass, surpassing their 2NFE and 4NFE counterparts.
- Every Token Counts: Generalizing 16M Ultra-Long Context in Large Language Models
This work explores the challenge of building "Machines that Can Remember", framing long-term memory as the problem of efficient ultra-long context modeling. We argue that this requires three key properties: sparsity, random-access flexibility, and length generalization. To address ultra-long-context modeling, we leverage Hierarchical Sparse Attention (HSA), a novel attention mechanism that satisfies all three properties. We integrate HSA into Transformers to build HSA-UltraLong, which is an 8B-parameter MoE model trained on over 8 trillion tokens and is rigorously evaluated on different tasks with in-domain and out-of-domain context lengths to demonstrate its capability in handling ultra-long contexts. Results show that our model performs comparably to full-attention baselines on in-domain lengths while achieving over 90% accuracy on most in-context retrieval tasks with contexts up to 16M. This report outlines our experimental insights and open problems, contributing a foundation for future research in ultra-long context modeling.
- Decoupled DMD: CFG Augmentation as the Spear, Distribution Matching as the Shield
Diffusion model distillation has emerged as a powerful technique for creating efficient few-step and single-step generators. Among these, Distribution Matching Distillation (DMD) and its variants stand out for their impressive performance, which is widely attributed to their core mechanism of matching the student's output distribution to that of a pre-trained teacher model. In this work, we challenge this conventional understanding. Through a rigorous decomposition of the DMD training objective, we reveal that in complex tasks like text-to-image generation, where CFG is typically required for desirable few-step performance, the primary driver of few-step distillation is not distribution matching, but a previously overlooked component we identify as CFG Augmentation (CA). We demonstrate that this term acts as the core "engine" of distillation, while the Distribution Matching (DM) term functions as a "regularizer" that ensures training stability and mitigates artifacts. We further validate this decoupling by demonstrating that while the DM term is a highly effective regularizer, it is not unique; simpler non-parametric constraints or GAN-based objectives can serve the same stabilizing function, albeit with different trade-offs. This decoupling of labor motivates a more principled analysis of the properties of both terms, leading to a more systematic and in-depth understanding. This new understanding further enables us to propose principled modifications to the distillation process, such as decoupling the noise schedules for the engine and the regularizer, leading to further performance gains. Notably, our method has been adopted by the Z-Image ( https://github.com/Tongyi-MAI/Z-Image ) project to develop a top-tier 8-step image generation model, empirically validating the generalization and robustness of our findings.
- RefineBench: Evaluating Refinement Capability of Language Models via Checklists
Can language models (LMs) self-refine their own responses? This question is increasingly relevant as a wide range of real-world user interactions involve refinement requests. However, prior studies have largely tested LMs' refinement abilities on verifiable tasks such as competition math or symbolic reasoning with simplified scaffolds, whereas users often pose open-ended queries and provide varying degrees of feedback on what they desire. The recent advent of reasoning models that exhibit self-reflection patterns in their chains-of-thought further motivates this question. To analyze this, we introduce RefineBench, a benchmark of 1,000 challenging problems across 11 domains paired with a checklist-based evaluation framework. We evaluate two refinement modes: (1) guided refinement, where an LM is provided natural language feedback, and (2) self-refinement, where LMs attempt to improve without guidance. In the self-refinement setting, even frontier LMs such as Gemini 2.5 Pro and GPT-5 achieve modest baseline scores of 31.3% and 29.1%, respectively, and most models fail to consistently improve across iterations (e.g., Gemini-2.5-Pro gains only +1.8%, while DeepSeek-R1 declines by -0.1%). By contrast, in guided refinement, both proprietary LMs and large open-weight LMs (>70B) can leverage targeted feedback to refine responses to near-perfect levels within five turns. These findings suggest that frontier LMs require breakthroughs to self-refine their incorrect responses, and that RefineBench provides a valuable testbed for tracking progress.
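The gap between the two refinement modes in the abstract can be illustrated with a toy sketch: without feedback the model has no signal about what is missing, while checklist feedback points directly at the gaps. The checklist, scorer, and refiners below are invented stand-ins, not the RefineBench framework:

```python
# Toy contrast between self-refinement (no feedback) and guided refinement
# (checklist feedback). An "answer" is a list of content items; the score is
# the fraction of checklist items the answer covers.

CHECKLIST = ["mentions complexity", "gives example"]

def score(answer):
    return sum(item in answer for item in CHECKLIST) / len(CHECKLIST)

def self_refine(answer):
    # Without feedback, this stub model cannot tell what is missing,
    # so the answer does not improve (mirroring the flat self-refine curves).
    return answer

def guided_refine(answer):
    # With checklist feedback, append every missing item.
    missing = [item for item in CHECKLIST if item not in answer]
    return answer + missing

draft = ["mentions complexity"]
```

Under this caricature, `score(self_refine(draft))` stays at the draft's level while `score(guided_refine(draft))` reaches full coverage, which is the qualitative pattern the benchmark reports for frontier models.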
- Captain Safari: A World Engine
World engines aim to synthesize long, 3D-consistent videos that support interactive exploration of a scene under user-controlled camera motion. However, existing systems struggle under aggressive 6-DoF trajectories and complex outdoor layouts: they lose long-range geometric coherence, deviate from the target path, or collapse into overly conservative motion. To this end, we introduce Captain Safari, a pose-conditioned world engine that generates videos by retrieving from a persistent world memory. Given a camera path, our method maintains a dynamic local memory and uses a retriever to fetch pose-aligned world tokens, which then condition video generation along the trajectory. This design enables the model to maintain stable 3D structure while accurately executing challenging camera maneuvers. To evaluate this setting, we curate OpenSafari, a new in-the-wild FPV dataset containing high-dynamic drone videos with verified camera trajectories, constructed through a multi-stage geometric and kinematic validation pipeline. Across video quality, 3D consistency, and trajectory following, Captain Safari substantially outperforms state-of-the-art camera-controlled generators. It reduces MEt3R from 0.3703 to 0.3690, improves AUC@30 from 0.181 to 0.200, and yields substantially lower FVD than all camera-controlled baselines. More importantly, in a 50-participant, 5-way human study where annotators select the best result among five anonymized models, 67.6% of preferences favor our method across all axes. Our results demonstrate that pose-conditioned world memory is a powerful mechanism for long-horizon, controllable video generation and provide OpenSafari as a challenging new benchmark for future world-engine research.
- Nemotron-Flash: Towards Latency-Optimal Hybrid Small Language Models
Efficient deployment of small language models (SLMs) is essential for numerous real-world applications with stringent latency constraints. While previous work on SLM design has primarily focused on reducing the number of parameters to achieve parameter-optimal SLMs, parameter efficiency does not necessarily translate into proportional real-device speed-ups. This work aims to identify the key determinants of SLMs' real-device latency and offer generalizable principles and methodologies for SLM design and training when real-device latency is the primary consideration. Specifically, we identify two central architectural factors: depth-width ratios and operator choices. The former is crucial for small-batch-size latency, while the latter affects both latency and large-batch-size throughput. In light of this, we first study latency-optimal depth-width ratios, with the key finding that although deep-thin models generally achieve better accuracy under the same parameter budget, they may not lie on the accuracy-latency trade-off frontier. Next, we explore emerging efficient attention alternatives to evaluate their potential as candidate building operators. Using the identified promising operators, we construct an evolutionary search framework to automatically discover latency-optimal combinations of these operators within hybrid SLMs, thereby advancing the accuracy-latency frontier. In addition to architectural improvements, we further enhance SLM training using a weight normalization technique that enables more effective weight updates and improves final convergence. Combining these methods, we introduce a new family of hybrid SLMs, called Nemotron-Flash, which significantly advances the accuracy-efficiency frontier of state-of-the-art SLMs, e.g., achieving over +5.5% average accuracy, 1.3x/1.9x lower latency, and 18.7x/45.6x higher throughput compared to Qwen3-1.7B/0.6B, respectively.
- World in a Frame: Understanding Culture Mixing as a New Challenge for Vision-Language Models
In a globalized world, cultural elements from diverse origins frequently appear together within a single visual scene. We refer to these as culture mixing scenarios, yet how Large Vision-Language Models (LVLMs) perceive them remains underexplored. We investigate culture mixing as a critical challenge for LVLMs and examine how current models behave when cultural items from multiple regions appear together. To systematically analyze these behaviors, we construct CultureMix, a food Visual Question Answering (VQA) benchmark with 23k diffusion-generated, human-verified culture mixing images across four subtasks: (1) food-only, (2) food+food, (3) food+background, and (4) food+food+background. Evaluating 10 LVLMs, we find consistent failures to preserve individual cultural identities in mixed settings. Models show strong background reliance, with accuracy dropping 14% when cultural backgrounds are added to food-only baselines, and they produce inconsistent predictions for identical foods across different contexts. To address these limitations, we explore three robustness strategies. We find that supervised fine-tuning on a diverse culture mixing dataset substantially improves model consistency and reduces background sensitivity. We call for increased attention to culture mixing scenarios as a critical step toward developing LVLMs capable of operating reliably in culturally diverse real-world environments.
Solidot (15)
- Raspberry Pi raises prices as memory prices soar
Raspberry Pi announced price increases on some Raspberry Pi 4 and 5 models due to the recent surge in memory prices, and at the same time introduced a 1GB version of the Raspberry Pi 5 priced at $45. Raspberry Pi 4 and 5 prices rise by as much as $25 and as little as $5: the 4GB Raspberry Pi 4 goes from $55 to $60, and the 16GB Raspberry Pi 5 from $120 to $145. Raspberry Pi says the recent memory price surge is driven by the AI boom and that it will lower prices once conditions ease.
- The vampire squid sheds light on the origin of octopuses
The vampire squid is a deep-sea cephalopod belonging to the superorder Octopodiformes; its ancestors moved to the deep sea during the Jurassic to escape predation by plesiosaurs, and its form has remained unchanged for a hundred million years, earning it the label of living fossil. A Japanese research team unexpectedly caught a vampire squid in Suruga Bay and, after sequencing it, found that its genome exceeds 11 billion base pairs, more than twice the size of the largest known octopus genome. Despite its name, the vampire squid is neither an octopus nor a squid, let alone a vampire; it is the last and only survivor of an ancient lineage whose other members have all disappeared. Its history traces back 183 million years; it retains many ancestral traits while having evolved a scavenging lifestyle adapted to the dark deep sea. Its genome is far larger than those of squids and octopuses, with 62% consisting of repetitive sequences. The vampire squid belongs to Octopodiformes but retains part of the chromosome structure of Decapodiformes. The researchers say the vampire squid lets us directly observe the earliest stages of cephalopod evolution.
- Over 30 million user accounts leaked at South Korean e-commerce giant
South Korean e-commerce giant Coupang suffered a leak of information on more than 30 million user accounts. The exposed personal information includes names, email addresses, phone numbers, postal addresses, and even some order records. Under South Korea's Personal Information Protection Act, companies that violate the law can be fined up to 3% of revenue. Coupang's cumulative revenue for the first three quarters of this year was 36.3 trillion won; excluding business units with little connection to the leak, the figure is 31 trillion won, and annualized, the fine could reach 1.2 trillion won. According to Coupang's report to police, the leak was not the result of a hack but of exfiltration by a Chinese national employed by the company, who had already left the company and the country.
- Official SmartTube APK files found to contain malware
The SmartTube developer announced last week that the app's signing key had been compromised, released a new version signed with a new key, and urged users to switch to it. SmartTube is a popular alternative to the YouTube app on Android TV and Fire TV devices. The developer disclosed that the computer used to build the official APK files had been compromised, causing malware to be implanted into some APK releases. It is not yet clear which APK version was the first to contain malware; SmartTube v30.43 and v30.47 on APKMirror have both been flagged as infected. The developer says all older SmartTube releases have been removed from the project's GitHub repository, the infected build machine has been dealt with, and the old signing key has been retired. SmartTube v30.56 is the first release built on a clean machine with the new key.
- Several Japanese news organizations demand Perplexity stop using their articles
Japan's Kyodo News, the Mainichi Shimbun, and the Sankei Shimbun on Monday sent letters of protest to AI search company Perplexity, demanding that it immediately stop using their published articles on the grounds that the unauthorized use infringes their copyrights. The Yomiuri Shimbun, the Asahi Shimbun, and Nikkei had previously made similar demands and filed lawsuits. Kyodo said in its letter that over roughly one year starting in August 2024, Perplexity accessed "47NEWS", a news site carrying articles from Kyodo and its member newspapers, hundreds of thousands of times in total. The letter stresses that Perplexity collected and copied news content without permission and used it to generate answers, infringing copyright. It also notes that Perplexity's answers cited Kyodo articles as their sources while giving false information that differed from the articles' content, damaging the credibility and brand value of Kyodo's news products.
- Launch accident leaves Russia without its only crewed launch pad
On November 27 Russia successfully launched the Soyuz MS-28 crewed spacecraft from the Baikonur Cosmodrome. But the mobile service structure beneath launch pad 31/6 was severely damaged when the rocket's exhaust toppled it from a height, and the pad cannot be used until it is repaired; experts estimate repairs will take anywhere from a few months to three years. Baikonur is currently the only site from which Russia can launch Soyuz crewed spacecraft and Progress uncrewed cargo ships to the International Space Station, and cargo ship MS-33 had been scheduled to launch on December 21. Russia has other launch sites, but they are either at unsuitable latitudes (such as Plesetsk), not certified for crewed flight (such as Vostochny), or retired and handed over to museums (such as Baikonur's Gagarin pad).
- NixOS 25.11 released
NixOS, the distribution based on the Nix package manager, has released NixOS 25.11, codenamed Xantusia. Highlights include: nixos-init, a Rust-based init system with no bash dependency; a beta of the COSMIC desktop environment; FirewallD support; GNOME 49, which drops X11 session support (users can still run X11 applications through XWayland); the Papers document viewer replacing Evince; the Showtime player replacing Totem; nixos-rebuild-ng enabled by default; Syncthing 2.0.0; LLVM 21; and more.
- Linux kernel 6.18 released
Linus Torvalds announced the release of Linux kernel 6.18 on the kernel mailing list. Highlights include: removal of the Bcachefs filesystem, the Rust Binder driver, support for managing namespaces via file handles, support for the AccECN congestion control protocol, initial support for signed BPF programs, improved memory management with sheaves, better control of transparent huge pages, and more. See KernelNewbies' 6.18 page for details.
- Reddit migrates its comments backend from Python to a Go microservice
Reddit has completed migrating its comments backend from Python to a Go microservice, a move aimed at improving performance and reliability. Reddit, part social news site and part forum, is the seventh most visited website in the world. The Reddit team says the new architecture simplifies the comment system's dependency chain while maintaining complete event-delivery guarantees to downstream systems. The move to a domain-specific microservice architecture also lays the groundwork for further decomposing the platform's other core services. Reddit senior software engineer Katie Shannon says that compared with the old Python system, latency for the key write operations (the create, update, and increment endpoints) has been halved; the previous system's latency had peaked at 15 seconds.
- Norwegian wealth fund backs proposal requiring Microsoft to disclose risks of business in countries with poor human-rights records
Norway's sovereign wealth fund announced that at Microsoft's annual general meeting it will support a shareholder proposal requiring Microsoft to produce a report on the risks of doing business in countries with poor human-rights records. Microsoft's management opposes the proposal. The fund will also vote against re-appointing CEO Satya Nadella as board chair and against his compensation package. The fund, worth $2 trillion, holds Microsoft shares valued at $50 billion, second only to its Nvidia holdings, making it Microsoft's eighth-largest shareholder.
- The battle over Africa's IPv4 addresses
Lu Heng grew up in Shipu, a fishing village in Ningbo, Zhejiang, and was online from an early age. In college he sold World of Warcraft time cards, and while living in the Netherlands he founded an internet services company. In his twenties he hit on an idea that could make him rich: acquire millions of unused African IPv4 addresses, then lease them at high prices to companies outside Africa that desperately need them. IPv4 addresses are in short supply everywhere else; the transition from IPv4 to IPv6 has not gone as smoothly as expected, half of internet traffic still runs over IPv4, and Africa is the only continent with enough IPv4 addresses left to allocate. In 2013 Lu registered a company, Cloud Innovation, in the island nation of Seychelles and applied to the African Network Information Centre (Afrinic) for IP addresses. Between 2013 and 2016 Afrinic allocated 6.2 million IPv4 addresses to Cloud Innovation, more than are held by Nigeria, Africa's most populous country. Cloud Innovation transferred its IPv4 addresses to companies Lu founded in Hong Kong, such as Larus, which in turn leased the addresses to other companies. Lu claims his affiliated companies control more than ten million IPv4 addresses. The internet's designers probably never anticipated that anyone would monetize IP addresses, and Lu's activities have sparked controversy and conflict. In 2020 Afrinic demanded the addresses back, saying they should not be used outside Africa. To keep his holdings, Lu sued Afrinic in Mauritius; in July 2021 a local court froze Afrinic's bank accounts, paralyzing the organization and crippling its ability to allocate new IPv4 addresses. Lu has also used a barrage of lawsuits to try to silence his critics, claiming their criticism damages his reputation.
- Exercise is less beneficial in highly polluted environments
Exercise is good for health, but its benefits are greatly diminished in heavily polluted environments. Researchers analyzed more than a decade of health data on over 1.5 million adults from the UK, Taiwan, mainland China, Denmark, the US, and elsewhere. They found that adults who exercised (for example, jogged) for at least two and a half hours per week had a 30% lower risk of death during the study period than those who exercised less. But in areas where PM2.5 levels exceed 25 μg/m³, the protective effect of exercise fell to 12%-15%, and it weakened further in more polluted areas with PM2.5 above 35 μg/m³. The researchers note that their data came mainly from high-income countries; in low-income countries with PM2.5 levels above 50 μg/m³, the effect could be even more pronounced.
- Astronomers observe a coronal mass ejection from a red dwarf
Astronomers have for the first time observed a coronal mass ejection (CME) from a red dwarf, the first direct detection of a high-energy Type II radio burst from a nearby star. The source is the M-type red dwarf StKM-1262, with only 60% of the Sun's mass, located about 130 light-years away on the edge of the constellation Draco. The captured CME was ten thousand to a hundred thousand times more powerful than a typical solar CME; its characteristics and intensity resemble the Sun's Type II bursts, yet such bursts account for only 0.05% of all solar CME events, making this an extreme event. Extrapolating from the burst's speed and emission frequency, by the time the ejected plasma reached the inner edge of an M dwarf's habitable zone (0.2 AU), its particle density would be enough to compress the magnetosphere of an Earth-like magnetized planet down to the planet's surface, a devastating blow to the planet's atmosphere.
- Dutch universities rethink their dependence on Microsoft software
Earlier this year, after the International Criminal Court in The Hague issued arrest warrants on war-crimes charges for Israeli Prime Minister Netanyahu and former defense minister Yoav Gallant, US President Trump sanctioned chief prosecutor Karim Khan and others, and Microsoft promptly blocked Khan's email account, forcing him to switch to the Swiss email service Proton. The episode has prompted governments and educational institutions across Europe to rethink their dependence on US tech companies. Among them are the Dutch universities, whose students, faculty, and IT administrators rely heavily on Microsoft software and which store large amounts of data in Microsoft's cloud services. Seven Dutch universities and one university college have been placed on a sanctions list by the US state of Florida for cutting or freezing ties with Israeli institutions, and under the erratic Trump administration, Dutch educational institutions could face punishment at any time. But can Dutch universities break free of Microsoft? Professors point out that abandoning Microsoft software would bring teaching and research to an immediate halt, argue that dependence on tech giants fundamentally conflicts with the public values of freedom, independence, autonomy, and equality, and call for building autonomous IT infrastructure in cooperation with other European universities.
- Airbus updates software on six thousand aircraft worldwide after intense solar radiation found capable of corrupting flight-control data
In October a JetBlue Airways Airbus flying a US-Mexico route suffered a sudden loss of altitude and made an emergency landing in Florida; at least 15 people were injured. The investigation found that intense solar radiation can corrupt data critical to flight-control functions. To ensure flight safety, Airbus announced an emergency software update for roughly 6,000 aircraft worldwide, mainly A320s along with some A318, A319, and A321 models. Of these, 5,100 aircraft need only the software update to return to service, while 900 older aircraft require replacement of an onboard computer and cannot fly again until the replacement is complete.