OrangeBot.AI Digest — 2025-08-10

59 headlines across 5 sources, aggregated for this day.

Hacker News (15)

  1. 1910: The year the modern world lost its mind (www.derekthompson.org)
  2. Fight Chat Control (fightchatcontrol.eu)
  3. Show HN: Bolt – A super-fast, statically-typed scripting language written in C (github.com)
  4. Diffusion language models are super data learners (jinjieni.notion.site)
  5. AOL closes its dial up internet service (www.ispreview.co.uk)
  6. Zig's Lovely Syntax (matklad.github.io)
  7. GPT-OSS vs. Qwen3 and a detailed look how things evolved since GPT-2 (magazine.sebastianraschka.com)
  8. Try and (ygdp.yale.edu)
  9. MCP: An (Accidentally) Universal Plugin System (worksonmymachine.ai)
  10. Show HN: Engineering.fyi – Search across tech engineering blogs in one place (engineering.fyi)
  11. Booting 5000 Erlangs on Ampere One 192-core (underjord.io)
  12. Open Lovable (github.com)
  13. Writing simple tab-completions for Bash and Zsh (mill-build.org)
  14. Melonking Website (melonking.net)
  15. Abogen – Generate audiobooks from EPUBs, PDFs and text (github.com)

GitHub Trending (13)

  1. umami-software / umami

    Umami is a modern, privacy-focused alternative to Google Analytics.

  2. libsdl-org / SDL

    Simple Directmedia Layer

  3. menloresearch / jan

    Jan is an open source alternative to ChatGPT that runs 100% offline on your computer

  4. nomic-ai / gpt4all

    GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.

  5. openai / codex

    Lightweight coding agent that runs in your terminal

  6. RSSNext / Folo

    🧡 Follow everything in one place

  7. polarsource / polar

    An open source engine for your digital products. Sell SaaS and digital products in minutes.

  8. fastapi / full-stack-fastapi-template

    Full stack, modern web application template. Using FastAPI, React, SQLModel, PostgreSQL, Docker, GitHub Actions, automatic HTTPS and more.

  9. topjohnwu / Magisk

    The Magic Mask for Android

  10. idosal / git-mcp

    Put an end to code hallucinations! GitMCP is a free, open-source, remote MCP server for any GitHub project

  11. binhnguyennus / awesome-scalability

    The Patterns of Scalable, Reliable, and Performant Large-Scale Systems

  12. openai / openai-node

    Official JavaScript / TypeScript library for the OpenAI API

  13. xiaoyaocz / dart_simple_live

    Watch live streams, plain and simple

Product Hunt (4)

  1. Simular Pro

    The production-grade computer use agent

  2. Shotva

    Prettify your screenshots

  3. SEO Shop Audit

    Turn eCommerce store owners into SEO retainer clients

  4. Yummery

    Healthy meal plans & recipes powered by AI

Hugging Face (15)

  1. On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification

    We present a simple yet theoretically motivated improvement to Supervised Fine-Tuning (SFT) for Large Language Models (LLMs), addressing its limited generalization compared to reinforcement learning (RL). Through mathematical analysis, we reveal that standard SFT gradients implicitly encode a problematic reward structure that may severely restrict the generalization capabilities of the model. To rectify this, we propose Dynamic Fine-Tuning (DFT), stabilizing gradient updates for each token by dynamically rescaling the objective function with that token's probability. Remarkably, this single-line code change significantly outperforms standard SFT across multiple challenging benchmarks and base models, demonstrating greatly improved generalization. Additionally, our approach shows competitive results in offline RL settings, offering an effective yet simpler alternative. This work bridges theoretical insight and practical solutions, substantially advancing SFT performance. The code will be available at https://github.com/yongliang-wu/DFT.
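
    The rescaling described above fits in a few lines. The sketch below is our illustrative NumPy version, not the paper's code (function and variable names are ours): each token's negative log-likelihood is weighted by the probability the model currently assigns to that token, with the weight treated as a constant.

    ```python
    import numpy as np

    def dft_token_weights(logits, targets):
        """Contrast standard SFT loss with a DFT-style loss: each
        token's NLL is rescaled by the model's probability for that
        token (the weight acts as a stop-gradient constant)."""
        # numerically stable softmax over the vocabulary axis
        logits = logits - logits.max(axis=-1, keepdims=True)
        probs = np.exp(logits)
        probs /= probs.sum(axis=-1, keepdims=True)
        # probability assigned to the target token at each position
        token_p = probs[np.arange(len(targets)), targets]
        nll = -np.log(token_p)
        sft_loss = nll.mean()              # standard SFT objective
        dft_loss = (token_p * nll).mean()  # DFT: NLL rescaled by token prob
        return sft_loss, dft_loss
    ```

    Because the weight is a probability in (0, 1], the DFT loss down-weights low-confidence tokens, which is where the abstract locates the problematic implicit reward of plain SFT.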

  2. R-Zero: Self-Evolving Reasoning LLM from Zero Data

    Self-evolving Large Language Models (LLMs) offer a scalable path toward super-intelligence by autonomously generating, refining, and learning from their own experiences. However, existing methods for training such models still rely heavily on vast human-curated tasks and labels, typically via fine-tuning or reinforcement learning, which poses a fundamental bottleneck to advancing AI systems toward capabilities beyond human intelligence. To overcome this limitation, we introduce R-Zero, a fully autonomous framework that generates its own training data from scratch. Starting from a single base LLM, R-Zero initializes two independent models with distinct roles, a Challenger and a Solver. These models are optimized separately and co-evolve through interaction: the Challenger is rewarded for proposing tasks near the edge of the Solver's capability, and the Solver is rewarded for solving increasingly challenging tasks posed by the Challenger. This process yields a targeted, self-improving curriculum without any pre-existing tasks or labels. Empirically, R-Zero substantially improves reasoning capability across different backbone LLMs, e.g., boosting Qwen3-4B-Base by +6.49 on math-reasoning benchmarks and +7.54 on general-domain reasoning benchmarks.

  3. Genie Envisioner: A Unified World Foundation Platform for Robotic Manipulation

    We introduce Genie Envisioner (GE), a unified world foundation platform for robotic manipulation that integrates policy learning, evaluation, and simulation within a single video-generative framework. At its core, GE-Base is a large-scale, instruction-conditioned video diffusion model that captures the spatial, temporal, and semantic dynamics of real-world robotic interactions in a structured latent space. Built upon this foundation, GE-Act maps latent representations to executable action trajectories through a lightweight, flow-matching decoder, enabling precise and generalizable policy inference across diverse embodiments with minimal supervision. To support scalable evaluation and training, GE-Sim serves as an action-conditioned neural simulator, producing high-fidelity rollouts for closed-loop policy development. The platform is further equipped with EWMBench, a standardized benchmark suite measuring visual fidelity, physical consistency, and instruction-action alignment. Together, these components establish Genie Envisioner as a scalable and practical foundation for instruction-driven, general-purpose embodied intelligence. All code, models, and benchmarks will be released publicly.

  4. DeepPHY: Benchmarking Agentic VLMs on Physical Reasoning

    Although Vision Language Models (VLMs) exhibit strong perceptual abilities and impressive visual reasoning, they struggle with attention to detail and precise action planning in complex, dynamic environments, leading to subpar performance. Real-world tasks typically require complex interactions, advanced spatial reasoning, long-term planning, and continuous strategy refinement, usually necessitating understanding the physics rules of the target scenario. However, evaluating these capabilities in real-world scenarios is often prohibitively expensive. To bridge this gap, we introduce DeepPHY, a novel benchmark framework designed to systematically evaluate VLMs' understanding and reasoning about fundamental physical principles through a series of challenging simulated environments. DeepPHY integrates multiple physical reasoning environments of varying difficulty levels and incorporates fine-grained evaluation metrics. Our evaluation finds that even state-of-the-art VLMs struggle to translate descriptive physical knowledge into precise, predictive control.

  5. Hi3DEval: Advancing 3D Generation Evaluation with Hierarchical Validity

    Despite rapid advances in 3D content generation, quality assessment for the generated 3D assets remains challenging. Existing methods mainly rely on image-based metrics and operate solely at the object level, limiting their ability to capture spatial coherence, material authenticity, and high-fidelity local details. 1) To address these challenges, we introduce Hi3DEval, a hierarchical evaluation framework tailored for 3D generative content. It combines both object-level and part-level evaluation, enabling holistic assessments across multiple dimensions as well as fine-grained quality analysis. Additionally, we extend texture evaluation beyond aesthetic appearance by explicitly assessing material realism, focusing on attributes such as albedo, saturation, and metallicness. 2) To support this framework, we construct Hi3DBench, a large-scale dataset comprising diverse 3D assets and high-quality annotations, accompanied by a reliable multi-agent annotation pipeline. We further propose a 3D-aware automated scoring system based on hybrid 3D representations. Specifically, we leverage video-based representations for object-level and material-subject evaluations to enhance modeling of spatio-temporal consistency and employ pretrained 3D features for part-level perception. Extensive experiments demonstrate that our approach outperforms existing image-based metrics in modeling 3D characteristics and achieves superior alignment with human preference, providing a scalable alternative to manual evaluations. The project page is available at https://zyh482.github.io/Hi3DEval/.

  6. Are We on the Right Way for Assessing Document Retrieval-Augmented Generation?

    Retrieval-Augmented Generation (RAG) systems using Multimodal Large Language Models (MLLMs) show great promise for complex document understanding, yet their development is critically hampered by inadequate evaluation. Current benchmarks often focus on specific parts of the document RAG system and use synthetic data with incomplete ground truth and evidence labels, therefore failing to reflect real-world bottlenecks and challenges. To overcome these limitations, we introduce Double-Bench: a new large-scale, multilingual, and multimodal evaluation system that is able to produce fine-grained assessment of each component within document RAG systems. It comprises 3,276 documents (72,880 pages) and 5,168 single- and multi-hop queries across 6 languages and 4 document types with streamlined dynamic update support for potential data contamination issues. Queries are grounded in exhaustively scanned evidence pages and verified by human experts to ensure maximum quality and completeness. Our comprehensive experiments across 9 state-of-the-art embedding models, 4 MLLMs and 4 end-to-end document RAG frameworks demonstrate the gap between text and visual embedding models is narrowing, highlighting the need for stronger document retrieval models. Our findings also reveal the over-confidence dilemma within current document RAG frameworks, which tend to provide answers even without evidence support. We hope our fully open-source Double-Bench provides a rigorous foundation for future research in advanced document RAG systems. We plan to retrieve timely corpora and release new benchmarks on an annual basis.

  7. Are Today's LLMs Ready to Explain Well-Being Concepts?

    Well-being encompasses mental, physical, and social dimensions essential to personal growth and informed life decisions. As individuals increasingly consult Large Language Models (LLMs) to understand well-being, a key challenge emerges: Can LLMs generate explanations that are not only accurate but also tailored to diverse audiences? High-quality explanations require both factual correctness and the ability to meet the expectations of users with varying expertise. In this work, we construct a large-scale dataset comprising 43,880 explanations of 2,194 well-being concepts, generated by ten diverse LLMs. We introduce a principle-guided LLM-as-a-judge evaluation framework, employing dual judges to assess explanation quality. Furthermore, we show that fine-tuning an open-source LLM using Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO) can significantly enhance the quality of generated explanations. Our results reveal: (1) The proposed LLM judges align well with human evaluations; (2) explanation quality varies significantly across models, audiences, and categories; and (3) DPO- and SFT-finetuned models outperform their larger counterparts, demonstrating the effectiveness of preference-based learning for specialized explanation tasks.

  8. CoAct-1: Computer-using Agents with Coding as Actions

    Autonomous agents that operate computers via Graphical User Interfaces (GUIs) often struggle with efficiency and reliability on complex, long-horizon tasks. While augmenting these agents with planners can improve task decomposition, they remain constrained by the inherent limitations of performing all actions through GUI manipulation, leading to brittleness and inefficiency. In this work, we introduce a more robust and flexible paradigm: enabling agents to use coding as an enhanced action. We present CoAct-1, a novel multi-agent system that synergistically combines GUI-based control with direct programmatic execution. CoAct-1 features an Orchestrator that dynamically delegates subtasks to either a conventional GUI Operator or a specialized Programmer agent, which can write and execute Python or Bash scripts. This hybrid approach allows the agent to bypass inefficient GUI action sequences for tasks like file management and data processing, while still leveraging visual interaction when necessary. We evaluate our system on the challenging OSWorld benchmark, where CoAct-1 achieves a new state-of-the-art success rate of 60.76%, significantly outperforming prior methods. Furthermore, our approach dramatically improves efficiency, reducing the average number of steps required to complete a task to just 10.15, compared to 15 for leading GUI agents. Our results demonstrate that integrating coding as a core action provides a more powerful, efficient, and scalable path toward generalized computer automation.
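
    The Orchestrator/Operator/Programmer split described above can be sketched as a simple dispatcher. Everything in this sketch (the `scriptable` routing heuristic, the agent stubs, all names) is our hypothetical illustration of the architecture, not the paper's implementation:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Subtask:
        description: str
        scriptable: bool  # our simplification: can this step run as a script?

    def gui_operator(task):
        # Stand-in for a conventional GUI-manipulation agent.
        return f"gui:{task.description}"

    def programmer(task):
        # Stand-in for the Programmer agent that writes and runs
        # Python or Bash scripts instead of clicking through a UI.
        return f"code:{task.description}"

    def orchestrator(tasks):
        """Delegate each subtask to the Programmer when it is scriptable
        (e.g. file management, data processing), else to the GUI Operator."""
        return [(programmer if t.scriptable else gui_operator)(t) for t in tasks]
    ```

    The efficiency gain the abstract reports comes from exactly this kind of routing: a batch file rename is one script invocation for the Programmer but many fragile click sequences for a GUI-only agent.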

  9. Can Large Multimodal Models Actively Recognize Faulty Inputs? A Systematic Evaluation Framework of Their Input Scrutiny Ability

    Large Multimodal Models (LMMs) have witnessed remarkable growth, showcasing formidable capabilities in handling intricate multimodal tasks with exceptional performance. Recent research has underscored the inclination of large language models to passively accept defective inputs, often resulting in futile reasoning on invalid prompts. However, the critical question of whether LMMs can actively detect and scrutinize erroneous inputs remains unexplored. To address this gap, we introduce the Input Scrutiny Ability Evaluation Framework (ISEval), which encompasses seven categories of flawed premises and three evaluation metrics. Our extensive evaluation of ten advanced LMMs has identified key findings. Most models struggle to actively detect flawed textual premises without guidance, which reflects a strong reliance on explicit prompts for premise error identification. Error type affects performance: models excel at identifying logical fallacies but struggle with surface-level linguistic errors and certain conditional flaws. Modality trust varies: Gemini 2.5 Pro and Claude Sonnet 4 balance visual and textual information, while aya-vision-8b over-relies on text when they conflict. These insights underscore the urgent need to enhance LMMs' proactive verification of input validity and offer novel insights into mitigating the problem. The code is available at https://github.com/MLGroupJLU/LMM_ISEval.

  10. Don't Overthink It: A Survey of Efficient R1-style Large Reasoning Models

    Recently, Large Reasoning Models (LRMs) have gradually become a research hotspot due to their outstanding performance in handling complex tasks. Among them, DeepSeek R1 has garnered significant attention for its exceptional performance and open-source nature, driving advancements in the research of R1-style LRMs. Unlike traditional Large Language Models (LLMs), these models enhance logical deduction and decision-making capabilities during reasoning by incorporating mechanisms such as long chain-of-thought and self-reflection through reinforcement learning. However, with the widespread application of these models, the problem of overthinking has gradually emerged. Specifically, when generating answers, these models often construct excessively long reasoning chains with redundant or repetitive steps, which reduces reasoning efficiency and may affect the accuracy of the final answer. To this end, various efficient reasoning methods have been proposed, aiming to reduce the length of reasoning paths without compromising model performance and reasoning capability. Systematically reviewing current research on efficient reasoning methods, we categorize existing work into two main directions through the lens of single-model optimization versus model collaboration: (1) Efficient Reasoning with Single Model, which focuses on improving the reasoning efficiency of individual models; and (2) Efficient Reasoning with Model Collaboration, which explores optimizing reasoning paths through collaboration among multiple models. In addition, we maintain a public GitHub repository that tracks the latest progress in efficient reasoning methods.

  11. Marco-Voice Technical Report

    This paper presents a multifunctional speech synthesis system that integrates voice cloning and emotion-controlled speech synthesis within a unified framework. The goal of this work is to address longstanding challenges in achieving highly expressive, controllable, and natural speech generation that faithfully preserves speaker identity across diverse linguistic and emotional contexts. Our approach introduces an effective speaker-emotion disentanglement mechanism with in-batch contrastive learning, enabling independent manipulation of speaker identity and emotional style, as well as a rotational emotional embedding integration method for smooth emotion control. To support comprehensive training and evaluation, we construct CSEMOTIONS, a high-quality emotional speech dataset containing 10 hours of Mandarin speech from six professional speakers across seven emotional categories. Extensive experiments demonstrate that our system, Marco-Voice, achieves substantial improvements in both objective and subjective metrics. Comprehensive evaluations and analyses show that Marco-Voice delivers competitive performance in terms of speech clarity and emotional richness, representing a substantial advance in the field of expressive neural speech synthesis.

  12. Evaluating, Synthesizing, and Enhancing for Customer Support Conversation

    Effective customer support requires not only accurate problem solving but also structured and empathetic communication aligned with professional standards. However, existing dialogue datasets often lack strategic guidance, and real-world service data is difficult to access and annotate. To address this, we introduce the task of Customer Support Conversation (CSC), aimed at training customer service agents to respond using well-defined support strategies. We propose a structured CSC framework grounded in COPC guidelines, defining five conversational stages and twelve strategies to guide high-quality interactions. Based on this, we construct CSConv, an evaluation dataset of 1,855 real-world customer-agent conversations rewritten using LLMs to reflect deliberate strategy use, and annotated accordingly. Additionally, we develop a role-playing approach that simulates strategy-rich conversations using LLM-powered roles aligned with the CSC framework, resulting in the training dataset RoleCS. Experiments show that fine-tuning strong LLMs on RoleCS significantly improves their ability to generate high-quality, strategy-aligned responses on CSConv. Human evaluations further confirm gains in problem resolution. All code and data will be made publicly available at https://github.com/aliyun/qwen-dianjin.

  13. InfiAlign: A Scalable and Sample-Efficient Framework for Aligning LLMs to Enhance Reasoning Capabilities

    Large language models (LLMs) have exhibited impressive reasoning abilities on a wide range of complex tasks. However, enhancing these capabilities through post-training remains resource intensive, particularly in terms of data and computational cost. Although recent efforts have sought to improve sample efficiency through selective data curation, existing methods often rely on heuristic or task-specific strategies that hinder scalability. In this work, we introduce InfiAlign, a scalable and sample-efficient post-training framework that integrates supervised fine-tuning (SFT) with Direct Preference Optimization (DPO) to align LLMs for enhanced reasoning. At the core of InfiAlign is a robust data selection pipeline that automatically curates high-quality alignment data from open-source reasoning datasets using multidimensional quality metrics. This pipeline enables significant performance gains while drastically reducing data requirements and remains extensible to new data sources. When applied to the Qwen2.5-Math-7B-Base model, our SFT model achieves performance on par with DeepSeek-R1-Distill-Qwen-7B, while using only approximately 12% of the training data, and demonstrates strong generalization across diverse reasoning tasks. Additional improvements are obtained through the application of DPO, with particularly notable gains in mathematical reasoning tasks. The model achieves an average improvement of 3.89% on AIME 24/25 benchmarks. Our results highlight the effectiveness of combining principled data selection with full-stage post-training, offering a practical solution for aligning large reasoning models in a scalable and data-efficient manner. The model checkpoints are available at https://huggingface.co/InfiX-ai/InfiAlign-Qwen-7B-SFT.

  14. StrandDesigner: Towards Practical Strand Generation with Sketch Guidance

    Realistic hair strand generation is crucial for applications like computer graphics and virtual reality. While diffusion models can generate hairstyles from text or images, these inputs lack precision and user-friendliness. Instead, we propose the first sketch-based strand generation model, which offers finer control while remaining user-friendly. Our framework tackles key challenges, such as modeling complex strand interactions and diverse sketch patterns, through two main innovations: a learnable strand upsampling strategy that encodes 3D strands into multi-scale latent spaces, and a multi-scale adaptive conditioning mechanism using a transformer with diffusion heads to ensure consistency across granularity levels. Experiments on several benchmark datasets show our method outperforms existing approaches in realism and precision. Qualitative results further confirm its effectiveness. Code will be released at [GitHub](https://github.com/fighting-Zhang/StrandDesigner).

  15. MOSEv2: A More Challenging Dataset for Video Object Segmentation in Complex Scenes

    Video object segmentation (VOS) aims to segment specified target objects throughout a video. Although state-of-the-art methods have achieved impressive performance (e.g., 90+% J&F) on existing benchmarks such as DAVIS and YouTube-VOS, these datasets primarily contain salient, dominant, and isolated objects, limiting their generalization to real-world scenarios. To advance VOS toward more realistic environments, coMplex video Object SEgmentation (MOSEv1) was introduced to facilitate VOS research in complex scenes. Building on the strengths and limitations of MOSEv1, we present MOSEv2, a significantly more challenging dataset designed to further advance VOS methods under real-world conditions. MOSEv2 consists of 5,024 videos and over 701,976 high-quality masks for 10,074 objects across 200 categories. Compared to its predecessor, MOSEv2 introduces significantly greater scene complexity, including more frequent object disappearance and reappearance, severe occlusions and crowding, smaller objects, as well as a range of new challenges such as adverse weather (e.g., rain, snow, fog), low-light scenes (e.g., nighttime, underwater), multi-shot sequences, camouflaged objects, non-physical targets (e.g., shadows, reflections), scenarios requiring external knowledge, etc. We benchmark 20 representative VOS methods under 5 different settings and observe consistent performance drops. For example, SAM2 drops from 76.4% on MOSEv1 to only 50.9% on MOSEv2. We further evaluate 9 video object tracking methods and find similar declines, demonstrating that MOSEv2 presents challenges across tasks. These results highlight that despite high accuracy on existing datasets, current VOS methods still struggle under real-world complexities. MOSEv2 is publicly available at https://MOSE.video.

Solidot (12)

  1. Debian 13 "trixie" released

    The Debian project has announced the release of its latest stable version, Debian 13 "trixie", which will be supported until 2030. Major changes include GNOME 48, KDE Plasma 6.3, Xfce 4.20, Linux 6.12, GCC 14.2, Python 3.13, and systemd 257. Debian 13 adds 14,100 new packages and removes 8,840 obsolete ones, bringing the total to 69,830 packages, of which 44,326 have been updated. riscv64 is now an officially supported architecture, while support for i386 has been dropped.

  2. AI is wiping out junior programming jobs

    Jonathan Kim spent nearly $20,000 on a coding bootcamp in 2023, hoping it would help him land a job as a programmer. After graduating, he applied to more than 600 programming positions without receiving a single offer. He currently works at his uncle's ice cream shop while continuing his job search. For over a decade, coding bootcamps were a stepping stone for job seekers from non-programming backgrounds into high-paying Silicon Valley programming jobs, but today bootcamps are obsolete, with AI hammering the final nail into the coffin. Data show that in Kim's 2023 Codesmith cohort, only 37% of students found full-time tech jobs within six months of graduation, far below the 83% of the second half of 2021. AI is very good at programming, and as a result entry-level programming positions have shrunk markedly. A Signalfire report published this May found that hiring of new graduates has fallen by half from pre-pandemic 2019 levels.

  3. Of the 256 billionaires who signed the Giving Pledge, only 9 have kept their word

    In 2010, Bill and Melinda Gates and Warren Buffett launched The Giving Pledge, a campaign encouraging billionaires to commit at least half of their net worth to charity, either during their lifetimes or at death. The pledge is a public statement of intent, not a legally binding contract. Fifteen years on, most of the 256 billionaire signatories (among them Elon Musk) have not fulfilled their promise. Most signatories are now far wealthier than when they signed, and most of their charitable giving has gone to private foundations and donor-advised funds rather than directly to operating charities. Among living signatories, only Laura and John Arnold have given away half of their wealth. Of the 22 deceased signatories, only 8 fulfilled the pledge during their lifetimes, including Chuck Feeney, who gave away his entire fortune while alive. Of the 194 American signatories, 110 remain billionaires, with a combined net worth of $1.7 trillion; for signatories who are no longer billionaires, the decline in wealth was not due to giving.

  4. Scientists develop a universal vaccine targeting multiple viruses

    Most vaccines are designed to provide immunity against a single pathogen; the chickenpox vaccine, for example, targets the varicella-zoster virus. But since the COVID-19 pandemic, immunologists around the world have been working to move beyond the traditional single-pathogen vaccine. According to a study in Cell, researchers have developed a pipeline to advance "universal vaccines" that would target broad viral families and mutating viral variants. If successful, the approach could yield vaccines capable of neutralizing emerging SARS-CoV-2 variants as well as many other viruses with pandemic potential. The research targets protein sequences that remain unchanged as viruses evolve.

  5. Google discovered a new scam, then fell victim to it

    Google disclosed a new scam targeting Salesforce accounts in June; two months later, the search giant reported that it had itself become a victim. The scam is a social-engineering attack abusing a Salesforce feature that lets customers connect their accounts to third-party apps, integrating data with internal systems used for blogs, mapping tools, and similar resources. Posing as the IT department, the attackers contact targets directly to request access, instruct employees to connect an external app to the company's Salesforce instance, and then ask them to enter an eight-digit security code in the Salesforce interface. With that code, the attackers gain access to the instance and all data stored in it, which they steal and sell to buyers at a high price. Well-known victims of this scam include Adidas, Qantas, Allianz Life, Cisco, the LVMH luxury brands Louis Vuitton, Dior, and Tiffany, and now Google. Google said this week that its own Salesforce instance was hit by a similar attack and data was stolen. The attack occurred in June; Google only disclosed it now, possibly because it was only recently discovered, and it says most of the stolen data was public, non-confidential information.

  6. A massive white dwarf turns out to be the product of a binary merger

    White dwarfs are the dense cores left behind when Sun-like stars collapse after exhausting their fuel. They typically have about half the Sun's mass, up to a limit of 1.44 solar masses; beyond that limit they explode or collapse into neutron stars. The massive white dwarf WD 0525+526, at 1.2 solar masses, has been found to be the result not of a single star's collapse but of a binary merger. Features that set WD 0525+526 apart from other white dwarfs include a faint carbon signal in its atmosphere; the low carbon abundance and extreme heat (a surface temperature roughly four times the Sun's) indicate the white dwarf is at an early stage of post-merger evolution.

  7. Chinese engineers solve maglev trains' tunnel-boom noise

    The latest maglev trains can reach 600 km/h. High-speed trains have long faced a "tunnel boom" problem resembling a supersonic aircraft's sonic boom: a train entering a tunnel at speed compresses the air into a high-pressure wave that travels down the tunnel in a piston effect; when the wave reaches the far portal it bursts out as a micro-pressure wave accompanied by unpleasant noise. Tunnel boom poses a serious challenge to safe train operation, since the shock waves disturb nearby people and animals and can cause structural damage. Chinese engineers report that installing a new type of sound-damping buffer inside the tunnel portal can reduce the micro-pressure wave by up to 96%. The 100-meter buffer uses a porous structure which, combined with a porous coating on the tunnel itself, lets the air trapped in the enclosed space escape before the train reaches the portal, effectively suppressing the noise, much like fitting a suppressor to a gun barrel.

  8. China's major solar companies cut nearly a third of their workforce last year

    Data show that China's major solar companies cut nearly a third of their staff last year. Longi Green Energy, Trina Solar, JinkoSolar, JA Solar, and Tongwei Group together shed about 87,000 employees, an average of 31% of their workforces. The layoffs underscore how overcapacity and weak demand have dragged the industry into a price war. The world produces twice as many solar panels each year as it uses, most of them made by Chinese companies.

  9. Windows 10's $30 Extended Security Updates cover 10 devices per account

    Windows 10 reaches end of support on October 14, 2025, after which Microsoft will no longer provide security updates. Windows 10 still holds a substantial market share, however, so hundreds of millions of users could be left without security updates. For users who will not move to Windows 11 in the short term, Microsoft is offering a one-time, one-year Extended Security Updates subscription for $30, running through October 13, 2026. According to Microsoft's support documentation, a $30 Extended Security Updates license covers up to 10 devices on a single Microsoft account.

  10. Linux desktop market share reaches 6%

    IT asset discovery and inventory company Lansweeper says an analysis of more than 15 million consumer desktop operating systems shows Linux's desktop market share has exceeded 6%. Several statistics put desktop Linux at around 6%: US Federal Government Website and App Analytics, which tracks visits to US federal government websites and apps, shows Linux desktop share hitting a record 6.3% over the past 90 days, and StatCounter data show desktop Linux reaching a new high of 5.24% in July. Linux adoption in Europe is higher than in North America in sectors such as business services and government, while North America leads Europe in tech, telecom, finance, and insurance.

  11. OpenAI releases GPT-5

    OpenAI has released its new model GPT-5. Compared with older models, GPT-5 is an incremental improvement rather than a giant leap. OpenAI says GPT-5 is smarter and faster, with a significantly reduced hallucination rate. CEO Sam Altman claims that talking to GPT-5 is like talking to a PhD-level expert. GPT-5 is available to all users; free users who exhaust their quota fall back to GPT-5 mini, while Pro subscribers get GPT-5 Pro.

  12. Scientists recreate the universe's first molecule

    After the Big Bang 13.8 billion years ago, the universe was unimaginably hot and incredibly dense. Within seconds, however, it cooled enough for the first elements to form, chiefly hydrogen and helium. These elements remained fully ionized; only after nearly 380,000 years did the universe cool enough for neutral atoms to form through recombination with free electrons, paving the way for the first chemical reactions. The oldest molecule is the helium hydride ion (HeH+), which marked the start of a chain of reactions that ultimately produced molecular hydrogen (H2), the most common molecule in today's universe. Simple molecules such as HeH+ and H2 were crucial to the formation of the first stars. Researchers at the Max Planck Institute for Nuclear Physics have, for the first time, recreated the universe's first molecule under conditions resembling the early universe and observed its chain of reactions, bringing us a step closer to solving the mystery of how the first stars formed.