OrangeBot.AI Digest — 2025-09-17
60 headlines across 8 sources, aggregated for this day.
Hacker News(15)
- WASM 3.0 Completed (webassembly.org)
- Anthropic irks White House with limits on models’ use (www.semafor.com)
- Ton Roosendaal to step down as Blender chairman and CEO (www.cgchannel.com)
- DeepSeek writes less secure code for groups China disfavors? (www.washingtonpost.com)
- Depression reduces capacity to learn to actively avoid aversive events (www.eneuro.org)
- U.S. investors, Trump close in on TikTok deal with China (www.wsj.com)
- How to motivate yourself to do a thing you don't want to do (ashleyjanssen.com)
- YouTube addresses lower view counts which seem to be caused by ad blockers (9to5google.com)
- Tau² benchmark: How a prompt rewrite boosted GPT-5-mini by 22% (quesma.com)
- Firefox 143 for Android to introduce DoH (blog.mozilla.org)
- PureVPN IPv6 Leak (anagogistis.com)
- Apple Photos app corrupts images (tenderlovemaking.com)
- Determination of the fifth Busy Beaver value (arxiv.org)
- Alibaba's new AI chip: Key specifications comparable to H20 (news.futunn.com)
- EU Chat Control: Germany's position has been reverted to undecided (mastodon.social)
GitHub Trending(15)
- category-labs / monad
- microsoft / markitdown
Python tool for converting files and office documents to Markdown.
- category-labs / monad-bft
- WebKit / WebKit
Home of the WebKit project, the browser engine used by Safari, Mail, App Store and many other applications on macOS, iOS and Linux.
- ItzCrazyKns / Perplexica
Perplexica is an AI-powered search engine. It is an Open source alternative to Perplexity AI
- openai / codex
Lightweight coding agent that runs in your terminal
- nocodb / nocodb
🔥 🔥 🔥 Open Source Airtable Alternative
- dataease / SQLBot
An intelligent data-query system based on large models and RAG. Text-to-SQL generation via LLMs using RAG.
- google-research / timesfm
TimesFM (Time Series Foundation Model) is a pretrained time-series foundation model developed by Google Research for time-series forecasting.
- virattt / ai-hedge-fund
An AI Hedge Fund Team
- Alibaba-NLP / DeepResearch
Tongyi DeepResearch, the Leading Open-source DeepResearch Agent
- sst / opencode
AI coding agent, built for the terminal.
- nanobrowser / nanobrowser
Open-Source Chrome extension for AI-powered web automation. Run multi-agent workflows using your own LLM API key. Alternative to OpenAI Operator.
- jordanbaird / Ice
Powerful menu bar manager for macOS
- flutter / flutter
Flutter makes it easy and fast to build beautiful apps for mobile and beyond
Hugging Face(15)
- WebWeaver: Structuring Web-Scale Evidence with Dynamic Outlines for Open-Ended Deep Research
This paper tackles open-ended deep research (OEDR), a complex challenge where AI agents must synthesize vast web-scale information into insightful reports. Current approaches are plagued by dual-fold limitations: static research pipelines that decouple planning from evidence acquisition and one-shot generation paradigms that easily suffer from long-context failure issues like "loss in the middle" and hallucinations. To address these challenges, we introduce WebWeaver, a novel dual-agent framework that emulates the human research process. The planner operates in a dynamic cycle, iteratively interleaving evidence acquisition with outline optimization to produce a comprehensive, source-grounded outline linking to a memory bank of evidence. The writer then executes a hierarchical retrieval and writing process, composing the report section by section. By performing targeted retrieval of only the necessary evidence from the memory bank for each part, it effectively mitigates long-context issues. Our framework establishes a new state-of-the-art across major OEDR benchmarks, including DeepResearch Bench, DeepConsult, and DeepResearchGym. These results validate our human-centric, iterative methodology, demonstrating that adaptive planning and focused synthesis are crucial for producing high-quality, reliable, and well-structured reports.
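The planner/writer split described above can be sketched as a small control loop. This is a minimal illustrative skeleton of the idea, not the paper's code; `search`, `update_outline`, and `write_section` are hypothetical stand-ins for the LLM and tool calls the abstract describes:

```python
# Sketch of a WebWeaver-style dual-agent loop (illustrative only).

def search(query):
    # Placeholder for web evidence acquisition.
    return [f"evidence for {query!r}"]

def update_outline(outline, evidence):
    # Placeholder for LLM-driven outline optimization: one section per finding.
    return outline + [f"Section {len(outline) + 1}"]

def write_section(section, evidence):
    # Placeholder for LLM writing grounded in retrieved evidence.
    return f"{section}: based on {len(evidence)} piece(s) of evidence"

def webweaver(topic, rounds=3):
    memory_bank = {}  # evidence store, keyed by outline section
    outline = []
    # Planner: interleave evidence acquisition with outline optimization.
    for i in range(rounds):
        evidence = search(f"{topic} aspect {i}")
        outline = update_outline(outline, evidence)
        memory_bank[outline[-1]] = evidence
    # Writer: compose section by section, retrieving only what each section
    # needs, so the full evidence corpus never sits in one context window.
    return "\n".join(write_section(s, memory_bank[s]) for s in outline)

report = webweaver("open-ended deep research")
```

The point of the structure is the separation: the memory bank decouples how much evidence is gathered from how much the writer must hold in context at once.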
- Scaling Agents via Continual Pre-training
Large language models (LLMs) have evolved into agentic systems capable of autonomous tool use and multi-step reasoning for complex problem-solving. However, post-training approaches building upon general-purpose foundation models consistently underperform in agentic tasks, particularly in open-source implementations. We identify the root cause: the absence of robust agentic foundation models forces models during post-training to simultaneously learn diverse agentic behaviors while aligning them to expert demonstrations, thereby creating fundamental optimization tensions. To this end, we are the first to propose incorporating Agentic Continual Pre-training (Agentic CPT) into the deep research agents training pipeline to build powerful agentic foundational models. Based on this approach, we develop a deep research agent model named AgentFounder. We evaluate our AgentFounder-30B on 10 benchmarks and achieve state-of-the-art performance while retaining strong tool-use ability, notably 39.9% on BrowseComp-en, 43.3% on BrowseComp-zh, and 31.5% Pass@1 on HLE.
- WebSailor-V2: Bridging the Chasm to Proprietary Agents via Synthetic Data and Scalable Reinforcement Learning
Transcending human cognitive limitations represents a critical frontier in LLM training. Proprietary agentic systems like DeepResearch have demonstrated superhuman capabilities on extremely complex information-seeking benchmarks such as BrowseComp, a feat previously unattainable. We posit that their success hinges on a sophisticated reasoning pattern absent in open-source models: the ability to systematically reduce extreme uncertainty when navigating vast information landscapes. Based on this insight, we introduce WebSailor, a complete post-training methodology designed to instill this crucial capability. Our approach involves generating novel, high-uncertainty tasks through structured sampling and information obfuscation, RFT cold start, and an efficient agentic RL training algorithm, Duplicating Sampling Policy Optimization (DUPO). With this integrated pipeline, WebSailor significantly outperforms all open-source agents in complex information-seeking tasks, matching proprietary agents' performance and closing the capability gap.
- Towards General Agentic Intelligence via Environment Scaling
Advanced agentic intelligence is a prerequisite for deploying Large Language Models in practical, real-world applications. Diverse real-world APIs demand precise, robust function-calling intelligence, which needs agents to develop these capabilities through interaction in varied environments. The breadth of function-calling competence is closely tied to the diversity of environments in which agents are trained. In this work, we scale up environments as a step towards advancing general agentic intelligence. This gives rise to two central challenges: (i) how to scale environments in a principled manner, and (ii) how to effectively train agentic capabilities from experiences derived through interactions with these environments. To address these, we design a scalable framework that automatically constructs heterogeneous environments that are fully simulated, systematically broadening the space of function-calling scenarios. We further adapt a two-phase agent fine-tuning strategy: first endowing agents with fundamental agentic capabilities, then specializing them for domain-specific contexts. Extensive experiments on agentic benchmarks, tau-bench, tau2-Bench, and ACEBench, demonstrate that our trained model, AgentScaler, significantly enhances the function-calling capability of models.
- WebResearcher: Unleashing unbounded reasoning capability in Long-Horizon Agents
Recent advances in deep-research systems have demonstrated the potential for AI agents to autonomously discover and synthesize knowledge from external sources. In this paper, we introduce WebResearcher, a novel framework for building such agents through two key components: (1) WebResearcher, an iterative deep-research paradigm that reformulates deep research as a Markov Decision Process, where agents periodically consolidate findings into evolving reports while maintaining focused workspaces, overcoming the context suffocation and noise contamination that plague existing mono-contextual approaches; and (2) WebFrontier, a scalable data synthesis engine that generates high-quality training data through tool-augmented complexity escalation, enabling systematic creation of research tasks that bridge the gap between passive knowledge recall and active knowledge construction. Notably, we find that the training data from our paradigm significantly enhances tool-use capabilities even for traditional mono-contextual methods. Furthermore, our paradigm naturally scales through parallel thinking, enabling concurrent multi-agent exploration for more comprehensive conclusions. Extensive experiments across 6 challenging benchmarks demonstrate that WebResearcher achieves state-of-the-art performance, even surpassing frontier proprietary systems.
- ReSum: Unlocking Long-Horizon Search Intelligence via Context Summarization
Large Language Model (LLM)-based web agents demonstrate strong performance on knowledge-intensive tasks but are hindered by context window limitations in paradigms like ReAct. Complex queries involving multiple entities, intertwined relationships, and high uncertainty demand extensive search cycles that rapidly exhaust context budgets before reaching complete solutions. To overcome this challenge, we introduce ReSum, a novel paradigm that enables indefinite exploration through periodic context summarization. ReSum converts growing interaction histories into compact reasoning states, maintaining awareness of prior discoveries while bypassing context constraints. For paradigm adaptation, we propose ReSum-GRPO, integrating GRPO with segmented trajectory training and advantage broadcasting to familiarize agents with summary-conditioned reasoning. Extensive experiments on web agents of varying scales across three benchmarks demonstrate that ReSum delivers an average absolute improvement of 4.5% over ReAct, with further gains of up to 8.2% following ReSum-GRPO training. Notably, with only 1K training samples, our WebResummer-30B (a ReSum-GRPO-trained version of WebSailor-30B) achieves 33.3% Pass@1 on BrowseComp-zh and 18.3% on BrowseComp-en, surpassing existing open-source web agents.
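The core loop — periodically replacing a growing interaction history with a compact reasoning state — can be sketched in a few lines. This is an illustrative toy, not the paper's code: `llm_step` and `summarize` are hypothetical stand-ins for model calls, and the token budget is approximated by word count:

```python
# Toy sketch of ReSum-style periodic context summarization.

def llm_step(history, step):
    # Placeholder for one ReAct-style thought/action/observation turn.
    return f"thought/action/observation {step}"

def summarize(history):
    # Placeholder for compressing the history into a compact reasoning state.
    return f"[summary of {len(history)} entries]"

def resum_agent(question, steps, budget):
    history = [question]
    for step in range(steps):
        history.append(llm_step(history, step))
        tokens = sum(len(h.split()) for h in history)  # crude token proxy
        if tokens > budget:
            # Swap the raw history for a compact state and keep exploring,
            # so search can continue past the raw context window.
            history = [question, summarize(history)]
    return history

final = resum_agent("Who founded X?", steps=6, budget=12)
```

After the budget is exceeded mid-run, the agent carries only the question plus a summary forward, which is what allows "indefinite exploration" in the abstract's terms.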
- Single-stream Policy Optimization
We revisit policy-gradient optimization for Large Language Models (LLMs) from a single-stream perspective. Prevailing group-based methods like GRPO reduce variance with on-the-fly baselines but suffer from critical flaws: frequent degenerate groups erase learning signals, and synchronization barriers hinder scalability. We introduce Single-stream Policy Optimization (SPO), which eliminates these issues by design. SPO replaces per-group baselines with a persistent, KL-adaptive value tracker and normalizes advantages globally across the batch, providing a stable, low-variance learning signal for every sample. Being group-free, SPO enables higher throughput and scales effectively in long-horizon or tool-integrated settings where generation times vary. Furthermore, the persistent value tracker naturally enables an adaptive curriculum via prioritized sampling. Experiments using Qwen3-8B show that SPO converges more smoothly and attains higher accuracy than GRPO, while eliminating computation wasted on degenerate groups. Ablation studies confirm that SPO's gains stem from its principled approach to baseline estimation and advantage normalization, offering a more robust and efficient path for LLM reasoning. Across five hard math benchmarks with Qwen3 8B, SPO improves the average maj@32 by +3.4 percentage points (pp) over GRPO, driven by substantial absolute point gains on challenging datasets, including +7.3 pp on BRUMO 25, +4.4 pp on AIME 25, +3.3 pp on HMMT 25, and achieves consistent relative gain in pass@k across the evaluated k values. SPO's success challenges the prevailing trend of adding incidental complexity to RL algorithms, highlighting a path where fundamental principles, not architectural workarounds, drive the next wave of progress in LLM reasoning.
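The two mechanisms the abstract credits — a persistent per-prompt baseline in place of per-group baselines, and batch-global advantage normalization — can be sketched as follows. This is a toy reading of the abstract, not the paper's algorithm: the tracker here is a plain EMA rather than the KL-adaptive version described:

```python
# Toy sketch of SPO's baseline tracking and global advantage normalization.

class ValueTracker:
    """Persistent running baseline per prompt (simple EMA stand-in)."""
    def __init__(self, beta=0.9):
        self.beta = beta
        self.values = {}  # prompt -> baseline estimate

    def update(self, prompt, reward):
        v = self.values.get(prompt, reward)  # initialize at first reward seen
        self.values[prompt] = self.beta * v + (1 - self.beta) * reward

def spo_advantages(batch, tracker):
    raw = []
    for prompt, reward in batch:  # one sampled response per prompt: no groups
        baseline = tracker.values.get(prompt, 0.0)
        raw.append(reward - baseline)
        tracker.update(prompt, reward)
    # Normalize globally across the batch, not within per-prompt groups,
    # so a "degenerate group" (all samples equal) cannot erase the signal.
    mean = sum(raw) / len(raw)
    std = (sum((a - mean) ** 2 for a in raw) / len(raw)) ** 0.5 or 1.0
    return [(a - mean) / std for a in raw]

batch = [("p1", 1.0), ("p2", 0.0), ("p3", 0.5), ("p1", 0.8)]
advs = spo_advantages(batch, ValueTracker())
```

Because the baseline persists across batches, a prompt seen once still has a usable baseline the next time it is sampled — which is also what makes prioritized sampling over prompts possible.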
- Hunyuan3D Studio: End-to-End AI Pipeline for Game-Ready 3D Asset Generation
The creation of high-quality 3D assets, a cornerstone of modern game development, has long been characterized by labor-intensive and specialized workflows. This paper presents Hunyuan3D Studio, an end-to-end AI-powered content creation platform designed to revolutionize the game production pipeline by automating and streamlining the generation of game-ready 3D assets. At its core, Hunyuan3D Studio integrates a suite of advanced neural modules (such as Part-level 3D Generation, Polygon Generation, Semantic UV, etc.) into a cohesive and user-friendly system. This unified framework allows for the rapid transformation of a single concept image or textual description into a fully-realized, production-quality 3D model complete with optimized geometry and high-fidelity PBR textures. We demonstrate that assets generated by Hunyuan3D Studio are not only visually compelling but also adhere to the stringent technical requirements of contemporary game engines, significantly reducing iteration time and lowering the barrier to entry for 3D content creation. By providing a seamless bridge from creative intent to technical asset, Hunyuan3D Studio represents a significant leap forward for AI-assisted workflows in game development and interactive media.
- 3D Aware Region Prompted Vision Language Model
We present Spatial Region 3D (SR-3D) aware vision-language model that connects single-view 2D images and multi-view 3D data through a shared visual token space. SR-3D supports flexible region prompting, allowing users to annotate regions with bounding boxes, segmentation masks on any frame, or directly in 3D, without the need for exhaustive multi-frame labeling. We achieve this by enriching 2D visual features with 3D positional embeddings, which allows the 3D model to draw upon strong 2D priors for more accurate spatial reasoning across frames, even when objects of interest do not co-occur within the same view. Extensive experiments on both general 2D vision language and specialized 3D spatial benchmarks demonstrate that SR-3D achieves state-of-the-art performance, underscoring its effectiveness for unifying 2D and 3D representation space on scene understanding. Moreover, we observe applicability to in-the-wild videos without sensory 3D inputs or ground-truth 3D annotations, where SR-3D accurately infers spatial relationships and metric measurements.
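The central mechanism — enriching 2D visual tokens with 3D positional embeddings so both live in one token space — can be illustrated with a sinusoidal encoding. The dimensionality and frequency schedule below are assumptions for illustration, not the paper's design:

```python
# Illustrative 3D sinusoidal positional embedding added to 2D token features.
import math

def pe_3d(x, y, z, dim=12):
    """Sinusoidal embedding of a 3D point; dim must be divisible by 6."""
    emb = []
    for coord in (x, y, z):
        for i in range(dim // 6):
            freq = 1.0 / (10000 ** (6 * i / dim))
            emb += [math.sin(coord * freq), math.cos(coord * freq)]
    return emb

def enrich(tokens_2d, points_3d):
    # Add each token's 3D positional code to its 2D feature vector, so
    # downstream attention can reason about spatial layout across views.
    return [[f + p for f, p in zip(feat, pe_3d(*pt))]
            for feat, pt in zip(tokens_2d, points_3d)]

feats = [[0.0] * 12, [1.0] * 12]
pts = [(0.0, 0.0, 0.0), (2.5, -1.0, 0.3)]
fused = enrich(feats, pts)
```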
- EconProver: Towards More Economical Test-Time Scaling for Automated Theorem Proving
Large Language Models (LLMs) have recently advanced the field of Automated Theorem Proving (ATP), attaining substantial performance gains through widely adopted test-time scaling strategies, notably reflective Chain-of-Thought (CoT) reasoning and increased sampling passes. However, they both introduce significant computational overhead for inference. Moreover, existing cost analyses typically regulate only the number of sampling passes, while neglecting the substantial disparities in sampling costs introduced by different scaling strategies. In this paper, we systematically compare the efficiency of different test-time scaling strategies for ATP models and demonstrate the inefficiency of the current state-of-the-art (SOTA) open-source approaches. We then investigate approaches to significantly reduce token usage and sample passes while maintaining the original performance. Specifically, we propose two complementary methods that can be integrated into a unified EconRL pipeline for amplified benefits: (1) a dynamic Chain-of-Thought (CoT) switching mechanism designed to mitigate unnecessary token consumption, and (2) Diverse parallel-scaled reinforcement learning (RL) with trainable prefixes to enhance pass rates under constrained sampling passes. Experiments on miniF2F and ProofNet demonstrate that our EconProver achieves comparable performance to baseline methods with only 12% of the computational cost. This work provides actionable insights for deploying lightweight ATP models without sacrificing performance.
- Exact Coset Sampling for Quantum Lattice Algorithms
We give a simple, fully correct, and assumption-light replacement for the contested "domain-extension" in Step 9 of a recent windowed-QFT lattice algorithm with complex-Gaussian windows (chen2024quantum). The published Step 9 suffers from a periodicity/support mismatch. We present a pair-shift difference construction that coherently cancels all unknown offsets, produces an exact uniform CRT-coset state over Z_{P}, and then uses the QFT to enforce the intended modular linear relation. The unitary is reversible, uses poly(log M_2) gates, and preserves the algorithm's asymptotics. Project Page: https://github.com/yifanzhang-pro/quantum-lattice.
- Multimodal Reasoning for Science: Technical Report and 1st Place Solution to the ICML 2025 SeePhys Challenge
Multimodal reasoning remains a fundamental challenge in artificial intelligence. Despite substantial advances in text-based reasoning, even state-of-the-art models such as GPT-o3 struggle to maintain strong performance in multimodal scenarios. To address this gap, we introduce a caption-assisted reasoning framework that effectively bridges visual and textual modalities. Our approach achieved 1st place in the ICML 2025 AI for Math Workshop & Challenge 2: SeePhys, highlighting its effectiveness and robustness. Furthermore, we validate its generalization on the MathVerse benchmark for geometric reasoning, demonstrating the versatility of our method. Our code is publicly available at https://github.com/OpenDCAI/SciReasoner.
- Phi: Preference Hijacking in Multi-modal Large Language Models at Inference Time
Recently, Multimodal Large Language Models (MLLMs) have gained significant attention across various domains. However, their widespread adoption has also raised serious safety concerns. In this paper, we uncover a new safety risk of MLLMs: the output preference of MLLMs can be arbitrarily manipulated by carefully optimized images. Such attacks often generate contextually relevant yet biased responses that are neither overtly harmful nor unethical, making them difficult to detect. Specifically, we introduce a novel method, Preference Hijacking (Phi), for manipulating the MLLM response preferences using a preference hijacked image. Our method works at inference time and requires no model modifications. Additionally, we introduce a universal hijacking perturbation -- a transferable component that can be embedded into different images to hijack MLLM responses toward any attacker-specified preferences. Experimental results across various tasks demonstrate the effectiveness of our approach. The code for Phi is accessible at https://github.com/Yifan-Lan/Phi.
- Stable Part Diffusion 4D: Multi-View RGB and Kinematic Parts Video Generation
We present Stable Part Diffusion 4D (SP4D), a framework for generating paired RGB and kinematic part videos from monocular inputs. Unlike conventional part segmentation methods that rely on appearance-based semantic cues, SP4D learns to produce kinematic parts - structural components aligned with object articulation and consistent across views and time. SP4D adopts a dual-branch diffusion model that jointly synthesizes RGB frames and corresponding part segmentation maps. To simplify the architecture and flexibly enable different part counts, we introduce a spatial color encoding scheme that maps part masks to continuous RGB-like images. This encoding allows the segmentation branch to share the latent VAE from the RGB branch, while enabling part segmentation to be recovered via straightforward post-processing. A Bidirectional Diffusion Fusion (BiDiFuse) module enhances cross-branch consistency, supported by a contrastive part consistency loss to promote spatial and temporal alignment of part predictions. We demonstrate that the generated 2D part maps can be lifted to 3D to derive skeletal structures and harmonic skinning weights with few manual adjustments. To train and evaluate SP4D, we construct KinematicParts20K, a curated dataset of over 20K rigged objects selected and processed from Objaverse XL (Deitke et al., 2023), each paired with multi-view RGB and part video sequences. Experiments show that SP4D generalizes strongly to diverse scenarios, including real-world videos, novel generated objects, and rare articulated poses, producing kinematic-aware outputs suitable for downstream animation and motion-related tasks.
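The spatial color encoding step can be made concrete: discrete part IDs become continuous RGB-like values so the segmentation branch can reuse an RGB VAE, and parts are recovered afterward by simple post-processing. The palette and nearest-color decoder below are illustrative assumptions, not the paper's exact scheme:

```python
# Toy illustration of encoding part-ID masks as RGB-like images and
# recovering them by nearest-palette-color lookup.

PALETTE = {0: (0.0, 0.0, 0.0), 1: (1.0, 0.0, 0.0),
           2: (0.0, 1.0, 0.0), 3: (0.0, 0.0, 1.0)}

def encode(mask):
    # Part-ID mask -> continuous RGB-like image a VAE can handle.
    return [[PALETTE[p] for p in row] for row in mask]

def decode(image):
    # Nearest palette color -> part ID (robust to small decoding noise).
    def nearest(px):
        return min(PALETTE, key=lambda p: sum((a - b) ** 2
                                              for a, b in zip(PALETTE[p], px)))
    return [[nearest(px) for px in row] for row in image]

mask = [[0, 1], [2, 3]]
# Simulate slight reconstruction noise from a VAE round-trip.
noisy = [[(r + 0.05, g - 0.02, b) for r, g, b in row] for row in encode(mask)]
recovered = decode(noisy)
```

The nearest-color step is why the encoding tolerates the lossy VAE: small perturbations of a palette color still decode to the same part ID.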
- Optimal Brain Restoration for Joint Quantization and Sparsification of LLMs
Recent advances in Large Language Model (LLM) compression, such as quantization and pruning, have achieved notable success. However, as these techniques gradually approach their respective limits, relying on a single method for further compression has become increasingly challenging. In this work, we explore an alternative solution by combining quantization and sparsity. This joint approach, though promising, introduces new difficulties due to the inherently conflicting requirements on weight distributions: quantization favors compact ranges, while pruning benefits from high variance. To attack this problem, we propose Optimal Brain Restoration (OBR), a general and training-free framework that aligns pruning and quantization by error compensation between both. OBR minimizes performance degradation on downstream tasks by building on a second-order Hessian objective, which is then reformulated into a tractable problem through surrogate approximation and ultimately reaches a closed-form solution via group error compensation. Experiments show that OBR enables aggressive W4A4KV4 quantization with 50% sparsity on existing LLMs, and delivers up to 4.72x speedup and 6.4x memory reduction compared to the FP16-dense baseline.
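As a much-simplified picture of the problem OBR addresses — pruning and quantizing the same weights while compensating the error each step introduces — here is a toy 1D error-diffusion sketch. The real method solves a second-order Hessian objective in closed form; nothing below reflects its actual math:

```python
# Toy joint sparsify-and-quantize pass with naive error compensation:
# each weight's rounding/pruning error is carried onto later weights.

def quantize(w, bits=4, w_max=1.0):
    # Uniform symmetric quantizer over [-w_max, w_max].
    step = w_max / (2 ** (bits - 1) - 1)
    return round(w / step) * step

def compress(weights, sparsity=0.5, bits=4):
    n_prune = int(len(weights) * sparsity)
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    prune = set(order[:n_prune])  # drop the smallest-magnitude weights
    out = list(weights)
    carry = 0.0  # running error pushed onto not-yet-processed weights
    for i in order:
        desired = out[i] + carry
        target = 0.0 if i in prune else quantize(desired, bits)
        carry = desired - target
        out[i] = target
    return out

w = [0.9, -0.05, 0.4, 0.02, -0.7, 0.1]
cw = compress(w)  # 50% zeros, survivors on a 4-bit grid
```

Even this naive carry illustrates the tension the abstract names: pruning wants outliers kept, quantization wants a tight range, and compensation decides where the leftover error lands.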
Solidot(15)
- Disney, Warner Bros., and others sue Chinese AI company for copyright infringement
Disney (including Marvel, Lucasfilm, and 20th Century Fox), Warner Bros. Discovery (including DC Comics), and NBCUniversal (including DreamWorks) have sued the Chinese AI company Shanghai Xiyu Technology Co., Ltd. (MiniMax) for willful and brazen copyright infringement. In a complaint filed in the U.S. District Court for the Central District of California, the Hollywood giants accuse MiniMax of disregarding U.S. copyright law and using their copyrighted characters as its own. MiniMax operates an image and video generation service called Hailuo that, the plaintiffs allege, pirates and plunders their copyrighted works on a massive scale. MiniMax markets Hailuo as a "Hollywood studio in your pocket," but its business is built on stealing the studios' intellectual property. The complaint cites examples of infringement, such as images and videos generated with Disney's copyrighted character Darth Vader. The studios seek damages and an injunction barring MiniMax from further infringing their copyrighted works.
- A warming climate will cause soils to release more carbon
A field experiment suggests that climate warming may increase soil respiration rates in tropical forests, indicating that future warming could cause tropical soils to lose more carbon than previously estimated and affecting global climate projections. Researchers tracked how carbon moves through tropical forest soil in Puerto Rico, artificially warming three 12-square-meter plots of understory vegetation and soil, located at lower, middle, and upper slope positions, to 4 degrees Celsius above ambient temperature. Over one year they recorded soil respiration rates at half-hour intervals for these plots and for control plots at similar positions, collecting 574,500 measurements. Respiration rates in the warmed plots were 42%-204% higher than in the controls, among the highest soil respiration rates reported for terrestrial ecosystems. The warmed plots also released an additional 6.5-81.7 tons of carbon per hectare per year, depending on slope position, with the upper-slope plots releasing the most. The researchers attribute these increases to changes in the microbial communities of the warmed soils.
- ByteDance, Alibaba, and other companies told to stop buying Nvidia AI chips
The FT reports that major Chinese tech companies have been ordered to stop buying Nvidia AI chips. Companies including ByteDance and Alibaba were told to stop testing and ordering the RTX Pro 6000D, a chip Nvidia designed specifically for the Chinese market. Several firms had previously said they would order tens of thousands of RTX Pro 6000D chips and had already begun testing and verification work with Nvidia's server suppliers. After receiving the order, they asked their suppliers to halt that work.
- Another supply-chain attack discovered on NPM
After a phishing attack on NPM package maintainers led to dozens of packages being implanted with cryptocurrency-stealing malicious code, NPM packages have suffered yet another supply-chain attack, and this time the attackers appear to have a mischievous streak. At least 187 NPM packages were infected with Shai-Hulud, a self-replicating worm named after the sandworms in Dune, which steals developers' credentials and publishes them publicly to Shai-Hulud repositories on GitHub. Once a developer installs an infected NPM package, the malware hunts for npm tokens; when it finds one, it modifies the 20 most popular packages that token can access, implants the worm, and publishes new versions. Security firm CrowdStrike had at least 25 NPM packages infected with the worm; the company said these packages are not used by Falcon, so Falcon is unaffected. Security researchers found that the attackers deliberately spared the Windows platform, assuming developers work in Linux or macOS environments.
- Images of M87* show its polarization direction has flipped
The international Event Horizon Telescope (EHT) collaboration has released the latest detailed images of the supermassive black hole at the center of the M87 galaxy, revealing the constantly changing, complex environment around it. M87 is about 55 million light-years from Earth, and its central supermassive black hole has a mass more than 6 billion times that of the Sun. The EHT is an Earth-sized observing network composed of radio telescopes around the globe. It announced the first image of the M87 black hole's shadow in 2019 and published the polarization signal around the black hole in 2021. By comparing observations from 2017, 2018, and 2021, scientists have taken a new step toward revealing how the magnetic field near the black hole changes over time. The size of the bright ring around the black hole has remained consistent over the years, confirming the predictions of Einstein's general relativity, but the polarization pattern has changed dramatically. This means the magnetized plasma near the event horizon is highly dynamic and complex, challenging existing theoretical models. The study shows that between 2017 and 2021 the polarization direction around the black hole "flipped": in 2017 the magnetic field showed a spiral in one direction, in 2018 it gradually stabilized, and by 2021 it had completely reversed into a spiral in the opposite direction. This change may be shaped jointly by the black hole's internal magnetic field structure and external effects. The accumulated observations show that the black hole's vicinity is a continuously evolving, turbulent environment in which magnetic fields govern how matter falls into the black hole and how energy is ejected outward.
- Firefox 143.0 released
Mozilla has released Firefox 143.0. Major new features include: on Windows, websites can be pinned to the taskbar as web apps; tabs can be pinned by dragging them to the start of the tab strip; Microsoft Copilot is available as an optional AI chat assistant; when a website requests camera access, a preview is shown in the permission dialog; the address bar can surface important dates and events, currently limited to the United States, the United Kingdom, Germany, France, and Italy; improved fingerprinting protection; automatic deletion of files downloaded in private browsing mode (configurable in settings); support for Windows UI Automation; and more.
- Cretaceous theropods could run at up to 45 km/h
Based on continuous dinosaur trackways discovered at Otog in Inner Mongolia, researchers from the Chinese Academy of Sciences and other institutions estimate that Cretaceous theropod dinosaurs could run at speeds of up to 45 km/h. The team identified a new track fossil site at Otog with footprints of two types, large and medium-sized, forming four trackways and two isolated footprints. One medium-sized theropod trackway consists of five consecutive footprints with an average footprint length of 25.25 cm and a stride length of up to 5.3 m, indicating the trackmaker was running at high speed. The researchers calculated running speeds as high as 45 km/h and 41±4.9 km/h, a new record for the fastest Cretaceous theropod anywhere in the world. The newly discovered fast-running trackway coexists with trackways of large theropods; the researchers speculate the trackmakers were running fast either to chase prey or, possibly, to escape predation by the large theropods.
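Trackway speed estimates of this kind are conventionally computed with Alexander's (1976) formula, together with the rule of thumb that hip height is roughly four times footprint length. Whether the study used exactly this method is not stated in the summary; the sketch below only shows that the reported figures are mutually consistent:

```python
# Alexander's (1976) trackway speed estimate:
#   v = 0.25 * g^0.5 * stride^1.67 * h^-1.17
# with hip height h approximated as 4x footprint length (common heuristic).

G = 9.81  # gravitational acceleration, m/s^2

def speed_kmh(stride_m, footprint_m, hip_factor=4.0):
    h = hip_factor * footprint_m  # estimated hip height, m
    v = 0.25 * G ** 0.5 * stride_m ** 1.67 * h ** -1.17  # speed in m/s
    return v * 3.6  # convert to km/h

# Footprint 25.25 cm and stride 5.3 m, as reported for the trackway above:
v = speed_kmh(5.3, 0.2525)  # roughly 45 km/h
```

With these inputs the formula lands near the 45 km/h figure in the summary, which suggests the researchers' estimate is in the same family of trackway-based methods.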
- AMD ends its AMDVLK open-source driver project, embracing the community's Mesa RADV
AMD is terminating its own AMDVLK open-source driver project in favor of the community-driven Mesa RADV driver. The decision had been foreshadowed: AMD announced support for the Mesa RADV driver in May this year and has released no new AMDVLK drivers since. AMD has now officially announced that, to simplify development, strengthen its commitment to the open-source community, and unify its Linux Vulkan driver strategy, it is discontinuing the AMDVLK project and fully backing RADV as the officially supported open-source Vulkan driver for Radeon GPUs. Mesa RADV is more popular than AMDVLK in the Linux community and is backed by companies such as Valve, Google, and Red Hat.
- Apple releases security update for the decade-old iPhone 6s
Apple has released security updates iOS 15.8.5 and iPadOS 15.8.5 for the decade-old iPhone 6s, fixing a security vulnerability that is being actively exploited. The vulnerability, CVE-2025-43300, stems from an out-of-bounds write bug that can cause memory corruption when processing a malicious image file. Apple said it has received reports of the flaw being exploited. The latest patch applies to the iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPad Air 2, iPad mini (4th generation), and iPod touch (7th generation).
- Consortium of Oracle, Silver Lake, and Andreessen Horowitz to acquire TikTok's U.S. business
A consortium formed by Oracle, Silver Lake, and Andreessen Horowitz (a16z) will acquire TikTok's U.S. business. The business will be run by a newly formed company in which U.S. investors hold 80% of the shares, with the remainder going to Chinese shareholders; the new company's board will also be dominated by Americans, with one director appointed by the U.S. government. TikTok's existing U.S. users will be required to migrate to a new app, and U.S. user data will be handled at Oracle's facilities in Texas. On Tuesday, U.S. President Donald Trump signed another executive order extending, for the fourth time, the deadline for ByteDance to divest TikTok's U.S. business, to December 16.
- How do people use ChatGPT?
OpenAI, in collaboration with Harvard economist David Deming, has published a paper that for the first time uses internal data to reveal how people use ChatGPT. According to the paper: ChatGPT's user base grew from 100 million in early 2024 to more than 700 million in 2025, about one in ten adults worldwide, sending 2.6 billion messages a day, roughly one fifth of Google's daily traffic; daily engagement among long-term users has leveled off since June 2025, with recent growth coming from newly registered users; 46% of users are aged 18-25; when ChatGPT launched in 2022, 80% of its users were male, but women now make up 52.4%; by mid-2025, 72% of usage was unrelated to work, with users turning to ChatGPT more for personal, creative, and leisure needs than for productivity; 28% of conversations involve writing assistance (email, editing, translation), rising to 42% of work-related queries and to 52% in business/management occupations; and 14.9% of work-related usage involves "making decisions and solving problems."
- TRAPPIST-1e may be the most Earth-like planet found to date
Since 2016, the planetary system of TRAPPIST-1, a red dwarf only 40 light-years away, has drawn wide attention: it has seven terrestrial planets in or near the habitable zone, where liquid water could exist. Astronomers regard them as the best-known laboratory for studying whether planets outside the solar system are suitable for life. After the James Webb Space Telescope launched, astronomers began observing the system, first ruling out the possibility that TRAPPIST-1b and TRAPPIST-1d have atmospheres. Webb found that TRAPPIST-1e has an Earth-like gaseous envelope, and liquid water may exist on its surface. Spectra show its atmosphere is rich in molecular nitrogen, with trace amounts of carbon dioxide and methane. If confirmed, TRAPPIST-1e could be the most Earth-like exoplanet discovered to date. MIT astrophysicist Ana Glidden said excitedly that we are in a new era of exploration.
- Java 25 released
Java SE 25 has been released. It is a long-term support (LTS) release, supported until September 2030, with an additional 3-4 years of paid extended support (until 2034). Major new features include: 470: PEM encodings of cryptographic objects (preview); 502: stable values (preview); 503: removal of the 32-bit x86 port; 505: structured concurrency (fifth preview); 506: scoped values; 507: primitive types in patterns, instanceof, and switch (third preview); 508: Vector API (tenth incubator); 509: JFR CPU-time profiling (experimental); 510: key derivation function API; 511: module import declarations; 512: compact source files and instance main methods; 513: flexible constructor bodies; 514: ahead-of-time command-line ergonomics; 515: ahead-of-time method profiling; 518: JFR cooperative sampling; 519: compact object headers; 520: JFR method timing and tracing; 521: generational mode for the Shenandoah garbage collector, promoted from experimental to a product feature; and more.
- Webb captures an 8-light-year protostellar jet
NASA's James Webb Space Telescope has captured a protostellar jet stretching 8 light-years, emanating from a protostar about ten times the mass of the Sun in the Sharpless 2-284 nebula, roughly 15,000 light-years from Earth. The jet sweeps through space at hundreds of thousands to a million kilometers per hour, symmetric and straight, like a double-bladed lightsaber from Star Wars. A jet this large and powerful is extremely rare in star formation. Webb's infrared images show the jet has a filamentary structure and is slamming into interstellar dust and gas, forming knots and bow shocks. The jet's length indicates the protostar has been active for about 100,000 years: the most distant structures correspond to the earliest ejected material, while those nearer the core are more recent ejections. In general, the scale and strength of a massive star's jet grow with the mass of the central star, and this observation provides a clear example. The Sharpless 2-284 nebula lies at the edge of the Milky Way, with low metallicity and conditions resembling the early universe; massive stars formed in such an environment can help us understand the evolution of stars and galaxies in the early cosmos.
- Varnish 8.0.0 released; project to be renamed over IP dispute
The Varnish Cache project, a high-performance reverse caching server for websites, has released v8.0.0, the last version that will be published under the Varnish Cache name. The first version of Varnish Cache was released in 2006, and February 2026 will mark the project's twentieth anniversary. Varnish Cache began as a FOSS project sponsored and initiated by the Norwegian newspaper Verdens Gang, which hired the company Linpro and Poul-Henning Kamp (PHK). Linpro grew into Varnish Software and obtained the rights to use Varnish commercially; project maintainer PHK and Varnish Software had a verbal agreement that Varnish Cache was the FOSS project while Varnish Software was the commercial entity. But Varnish Software's IP lawyers now take the position that no one may use the name "Varnish Cache" under any circumstances without explicit permission. They insist that Varnish Software owns the Varnish Cache name, that the open-source developers hold only a very limited license, and that Varnish Software has veto power. Under these circumstances, the Varnish Cache project has announced it is renaming itself The Vinyl Cache Project, and the new version due next March will officially adopt the new name.