OrangeBot.AI Digest — 2025-10-09

58 headlines across 8 sources, aggregated for the day.

Hacker News (15)

  1. Python 3.14 is here. How fast is it? (blog.miguelgrinberg.com)
  2. Rubygems.org AWS Root Access Event – September 2025 (rubycentral.org)
  3. A small number of samples can poison LLMs of any size (www.anthropic.com)
  4. New nanotherapy clears amyloid-β, reversing symptoms of Alzheimer's in mice (www.drugtargetreview.com)
  5. My first contribution to Linux (vkoskiv.com)
  6. Show HN: I've built a tiny hand-held keyboard (github.com)
  7. The great software quality collapse or, how we normalized catastrophe (techtrenches.substack.com)
  8. Why Self-Host? (romanzipp.com)
  9. Figure 03, our 3rd generation humanoid robot (www.figure.ai)
  10. Show HN: I built a web framework in C (github.com)
  11. Nobel Prize in Literature 2025: László Krasznahorkai (www.nobelprize.org)
  12. N8n raises $180M (blog.n8n.io)
  13. California passes law to ban ultra-processed foods from school lunches (www.gov.ca.gov)
  14. The React Foundation (engineering.fb.com)
  15. Two things LLM coding agents are still bad at (kix.dev)

GitHub Trending (13)

  1. Stremio / stremio-web

    Stremio - Freedom to Stream

  2. MODSetter / SurfSense

    Open Source Alternative to NotebookLM / Perplexity, connected to external sources such as Search Engines, Slack, Linear, Jira, ClickUp, Confluence, Notion, YouTube, GitHub, Discord and more. Join our discord: https://discord.gg/ejRNvftDp9

  3. google / computer-use-preview
  4. TibixDev / winboat

    Run Windows apps on 🐧 Linux with ✨ seamless integration

  5. timelinize / timelinize

    Store your data from all your accounts and devices in a single cohesive timeline on your own computer

  6. rust-lang / rustfmt

    Format Rust code

  7. PixelGuys / Cubyz

    Voxel sandbox game with a large render distance, procedurally generated content and some cool graphical effects.

  8. openai / openai-agents-python

    A lightweight, powerful framework for multi-agent workflows

  9. TapXWorld / ChinaTextbook

    PDF textbooks for all levels: primary, middle, and high school, plus university.

  10. browserbase / stagehand

    The AI Browser Automation Framework

  11. rustdesk / rustdesk

    An open-source remote desktop application designed for self-hosting, as an alternative to TeamViewer.

  12. FlowiseAI / Flowise

    Build AI Agents, Visually

  13. winapps-org / winapps

    Run Windows apps such as Microsoft Office/Adobe in Linux (Ubuntu/Fedora) and GNOME/KDE as if they were a part of the native OS, including Nautilus integration. Hard fork of https://github.com/Fmstrat/winapps/

Hugging Face (15)

  1. Cache-to-Cache: Direct Semantic Communication Between Large Language Models

    Multi-LLM systems harness the complementary strengths of diverse Large Language Models, achieving performance and efficiency gains unattainable by a single model. In existing designs, LLMs communicate through text, forcing internal representations to be transformed into output token sequences. This process both loses rich semantic information and incurs token-by-token generation latency. Motivated by these limitations, we ask: Can LLMs communicate beyond text? Oracle experiments show that enriching the KV-Cache semantics can improve response quality without increasing cache size, supporting KV-Cache as an effective medium for inter-model communication. Thus, we propose Cache-to-Cache (C2C), a new paradigm for direct semantic communication between LLMs. C2C uses a neural network to project and fuse the source model's KV-cache with that of the target model to enable direct semantic transfer. A learnable gating mechanism selects the target layers that benefit from cache communication. Compared with text communication, C2C utilizes the deep, specialized semantics from both models, while avoiding explicit intermediate text generation. Experiments show that C2C achieves 8.5-10.5% higher average accuracy than individual models. It further outperforms the text communication paradigm by approximately 3.0-5.0%, while delivering an average 2.0x speedup in latency. Our code is available at https://github.com/thu-nics/C2C.
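    The cache-fusion idea in the abstract can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the projector network is reduced to one random, untrained linear map per layer, the learnable gate to a scalar per layer, and the dimensions (layers, tokens, head size) are invented for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical toy dimensions: 2 layers, 4 tokens, head dimension 8.
    L, T, D = 2, 4, 8

    src_kv = rng.normal(size=(L, T, D))  # source model's KV-cache (stand-in)
    tgt_kv = rng.normal(size=(L, T, D))  # target model's KV-cache (stand-in)

    W = rng.normal(scale=0.1, size=(L, D, D))  # untrained per-layer projection
    gate_logits = np.array([2.0, -2.0])        # layer 0 mostly open, layer 1 mostly closed

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def c2c_fuse(src, tgt, W, gate_logits):
        """Project the source cache into the target's space and gate-fuse per layer."""
        fused = np.empty_like(tgt)
        for l in range(src.shape[0]):
            projected = src[l] @ W[l]    # direct semantic transfer, no text generated
            g = sigmoid(gate_logits[l])  # gate selecting how much this layer fuses
            fused[l] = tgt[l] + g * projected
        return fused

    fused = c2c_fuse(src_kv, tgt_kv, W, gate_logits)
    # the fused cache keeps the target cache's size: no extra KV entries are added
    ```

    The gate is the part trained in the paper; here it only shows where a learned per-layer selection would plug in.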

  2. Ming-UniVision: Joint Image Understanding and Generation with a Unified Continuous Tokenizer

    Visual tokenization remains a core challenge in unifying visual understanding and generation within the autoregressive paradigm. Existing methods typically employ tokenizers in discrete latent spaces to align with the tokens from large language models, where the quantization errors can limit semantic expressiveness and degrade the capability of vision-language understanding. To address this, we introduce MingTok, a new family of visual tokenizers with a continuous latent space, for unified autoregressive generation and understanding. While understanding tasks favor discriminative high-dimensional features, generation tasks prefer compact low-level codes. Thus, to reconcile these competing demands, MingTok adopts a three-stage sequential architecture involving low-level encoding, semantic expansion, and visual reconstruction. Built on top of it, Ming-UniVision eliminates the need for task-specific visual representations, and unifies diverse vision-language tasks under a single autoregressive prediction paradigm. By formulating both understanding and generation as next-token prediction in a shared continuous space, it seamlessly supports multi-round, in-context tasks such as iterative understanding, generation and editing. Empirically, we find that using a unified continuous visual representation reconciles the competing requirements on the tokenizers by the understanding and generation tasks, thereby leading to state-of-the-art level performance across both domains. We hope our findings will facilitate unified visual tokenization in the continuous domain. Inference code and model weights are released to benefit the community.

  3. Lumina-DiMOO: An Omni Diffusion Large Language Model for Multi-Modal Generation and Understanding

    We introduce Lumina-DiMOO, an open-source foundational model for seamless multi-modal generation and understanding. Lumina-DiMOO sets itself apart from prior unified models by utilizing a fully discrete diffusion modeling to handle inputs and outputs across various modalities. This innovative approach allows Lumina-DiMOO to achieve higher sampling efficiency compared to previous autoregressive (AR) or hybrid AR-Diffusion paradigms and adeptly support a broad spectrum of multi-modal tasks, including text-to-image generation, image-to-image generation (e.g., image editing, subject-driven generation, and image inpainting, etc.), as well as image understanding. Lumina-DiMOO achieves state-of-the-art performance on multiple benchmarks, surpassing existing open-source unified multi-modal models. To foster further advancements in multi-modal and discrete diffusion model research, we release our code and checkpoints to the community. Project Page: https://synbol.github.io/Lumina-DiMOO.

  4. SHANKS: Simultaneous Hearing and Thinking for Spoken Language Models

    Current large language models (LLMs) and spoken language models (SLMs) begin thinking and taking actions only after the user has finished their turn. This prevents the model from interacting during the user's turn and can lead to high response latency while it waits to think. Consequently, thinking after receiving the full input is not suitable for speech-to-speech interaction, where real-time, low-latency exchange is important. We address this by noting that humans naturally "think while listening." In this paper, we propose SHANKS, a general inference framework that enables SLMs to generate unspoken chain-of-thought reasoning while listening to the user input. SHANKS streams the input speech in fixed-duration chunks and, as soon as a chunk is received, generates unspoken reasoning based on all previous speech and reasoning, while the user continues speaking. SHANKS uses this unspoken reasoning to decide whether to interrupt the user and to make tool calls to complete the task. We demonstrate that SHANKS enhances real-time user-SLM interaction in two scenarios: (1) when the user is presenting a step-by-step solution to a math problem, SHANKS can listen, reason, and interrupt when the user makes a mistake, achieving 37.1% higher interruption accuracy than a baseline that interrupts without thinking; and (2) in a tool-augmented dialogue, SHANKS can complete 56.9% of the tool calls before the user finishes their turn. Overall, SHANKS moves toward models that keep thinking throughout the conversation, not only after a turn ends. Animated illustrations of Shanks can be found at https://d223302.github.io/SHANKS/
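    The listen-reason-interrupt loop SHANKS describes can be sketched as a simple streaming loop. Everything here is a hypothetical simplification: speech chunks are plain strings, `toy_reason` stands in for the SLM's unspoken chain-of-thought, and the "mistake" substring is a toy trigger for the interruption decision.

    ```python
    def toy_reason(chunks):
        """Stand-in for unspoken reasoning over all speech received so far."""
        thought = f"after chunk {len(chunks)}: checked '{chunks[-1]}'"
        interrupt = "mistake" in chunks[-1]  # toy decision rule
        return thought, interrupt

    def shanks_listen(stream):
        """Think while listening: reason on each chunk as it arrives, maybe interrupt."""
        chunks, reasoning = [], []
        for chunk in stream:              # the user is still speaking during this loop
            chunks.append(chunk)
            thought, interrupt = toy_reason(chunks)
            reasoning.append(thought)     # unspoken: never surfaced to the user
            if interrupt:
                return reasoning, f"interrupt after chunk {len(chunks)}"
        return reasoning, "turn ended; respond normally"

    reasoning, action = shanks_listen(["step one ok", "step two mistake", "step three"])
    print(action)  # -> interrupt after chunk 2
    ```

    The key structural point from the paper survives the simplification: reasoning happens per chunk, before the turn ends, so the model can act mid-utterance.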

  5. RLinf-VLA: A Unified and Efficient Framework for VLA+RL Training

    Recent progress in vision and language foundation models has significantly advanced multimodal understanding, reasoning, and generation, inspiring a surge of interest in extending such capabilities to embodied settings through vision-language-action (VLA) models. Yet, most VLA models are still trained with supervised fine-tuning (SFT), which struggles to generalize under distribution shifts due to error accumulation. Reinforcement learning (RL) offers a promising alternative by directly optimizing task performance through interaction, but existing attempts remain fragmented and lack a unified platform for fair and systematic comparison across model architectures and algorithmic designs. To address this gap, we introduce RLinf-VLA, a unified and efficient framework for scalable RL training of VLA models. The system adopts a highly flexible resource allocation design that addresses the challenge of integrating rendering, training, and inference in RL+VLA training. In particular, for GPU-parallelized simulators, RLinf-VLA implements a novel hybrid fine-grained pipeline allocation mode, achieving a 1.61x-1.88x speedup in training. Through a unified interface, RLinf-VLA seamlessly supports diverse VLA architectures (e.g., OpenVLA, OpenVLA-OFT), multiple RL algorithms (e.g., PPO, GRPO), and various simulators (e.g., ManiSkill, LIBERO). In simulation, a unified model achieves 98.11% across 130 LIBERO tasks and 97.66% across 25 ManiSkill tasks. Beyond empirical performance, our study distills a set of best practices for applying RL to VLA training and sheds light on emerging patterns in this integration. Furthermore, we present preliminary deployment on a real-world Franka robot, where RL-trained policies exhibit stronger generalization than those trained with SFT. We envision RLinf-VLA as a foundation to accelerate and standardize research on embodied intelligence.

  6. MATRIX: Mask Track Alignment for Interaction-aware Video Generation

    Video DiTs have advanced video generation, yet they still struggle to model multi-instance or subject-object interactions. This raises a key question: How do these models internally represent interactions? To answer this, we curate MATRIX-11K, a video dataset with interaction-aware captions and multi-instance mask tracks. Using this dataset, we conduct a systematic analysis that formalizes two perspectives of video DiTs: semantic grounding, via video-to-text attention, which evaluates whether noun and verb tokens capture instances and their relations; and semantic propagation, via video-to-video attention, which assesses whether instance bindings persist across frames. We find both effects concentrate in a small subset of interaction-dominant layers. Motivated by this, we introduce MATRIX, a simple and effective regularization that aligns attention in specific layers of video DiTs with multi-instance mask tracks from the MATRIX-11K dataset, enhancing both grounding and propagation. We further propose InterGenEval, an evaluation protocol for interaction-aware video generation. In experiments, MATRIX improves both interaction fidelity and semantic alignment while reducing drift and hallucination. Extensive ablations validate our design choices. Codes and weights will be released.

  7. Vibe Checker: Aligning Code Evaluation with Human Preference

    Large Language Models (LLMs) have catalyzed vibe coding, where users leverage LLMs to generate and iteratively refine code through natural language interactions until it passes their vibe check. Vibe check is tied to real-world human preference and goes beyond functionality: the solution should feel right, read cleanly, preserve intent, and remain correct. However, current code evaluation remains anchored to pass@k and captures only functional correctness, overlooking the non-functional instructions that users routinely apply. In this paper, we hypothesize that instruction following is the missing piece underlying vibe check that represents human preference in coding besides functional correctness. To quantify models' code instruction following capabilities with measurable signals, we present VeriCode, a taxonomy of 30 verifiable code instructions together with corresponding deterministic verifiers. We use the taxonomy to augment established evaluation suites, resulting in Vibe Checker, a testbed to assess both code instruction following and functional correctness. Upon evaluating 31 leading LLMs, we show that even the strongest models struggle to comply with multiple instructions and exhibit clear functional regression. Most importantly, a composite score of functional correctness and instruction following correlates the best with human preference, with the latter emerging as the primary differentiator on real-world programming tasks. Our work identifies core factors of the vibe check, providing a concrete path for benchmarking and developing models that better align with user preferences in coding.

  8. Multi-Agent Tool-Integrated Policy Optimization

    Large language models (LLMs) increasingly rely on multi-turn tool-integrated planning for knowledge-intensive and complex reasoning tasks. Existing implementations typically rely on a single agent, but they suffer from limited context length and noisy tool responses. A natural solution is to adopt a multi-agent framework with planner- and worker-agents to manage context. However, no existing methods support effective reinforcement learning post-training of tool-integrated multi-agent frameworks. To address this gap, we propose Multi-Agent Tool-Integrated Policy Optimization (MATPO), which enables distinct roles (planner and worker) to be trained within a single LLM instance using role-specific prompts via reinforcement learning. MATPO is derived from a principled credit assignment mechanism across planner and worker rollouts. This design eliminates the need to deploy multiple LLMs, which would be memory-intensive, while preserving the benefits of specialization. Experiments on GAIA-text, WebWalkerQA, and FRAMES show that MATPO consistently outperforms single-agent baselines by an average of 18.38% relative improvement in performance and exhibits greater robustness to noisy tool outputs. Our findings highlight the effectiveness of unifying multiple agent roles within a single LLM and provide practical insights for stable and efficient multi-agent RL training.

  9. CALM Before the STORM: Unlocking Native Reasoning for Optimization Modeling

    Large Reasoning Models (LRMs) have demonstrated strong capabilities in complex multi-step reasoning, opening new opportunities for automating optimization modeling. However, existing domain adaptation methods, originally designed for earlier instruction-tuned models, often fail to exploit the advanced reasoning patterns of modern LRMs. In particular, we show that direct fine-tuning on traditional non-reflective datasets leads to limited gains. To fully leverage LRMs' inherent reasoning abilities, we propose CALM (Corrective Adaptation with Lightweight Modification), a framework that progressively refines LRMs within their native reasoning modes for optimization modeling tasks. In CALM, an expert intervener identifies reasoning flaws and provides concise corrective hints, which the LRM incorporates to produce improved reasoning trajectories. These interventions modify fewer than 2.6% of generated tokens, but generate high-quality data for soft adaptation through supervised fine-tuning. The adapted model is then further improved through reinforcement learning. Building on CALM, we develop STORM (Smart Thinking Optimization Reasoning Model), a 4B-parameter LRM that achieves a new state-of-the-art average accuracy of 68.9% across five popular optimization modeling benchmarks, matching the performance of a 671B LRM. These results demonstrate that dynamic, hint-based data synthesis both preserves and amplifies the native reasoning patterns of modern LRMs, offering a more effective and scalable path towards expert-level performance on challenging optimization modeling tasks.

  10. Why Low-Precision Transformer Training Fails: An Analysis on Flash Attention

    The pursuit of computational efficiency has driven the adoption of low-precision formats for training transformer models. However, this progress is often hindered by notorious training instabilities. This paper provides the first mechanistic explanation for a long-standing and unresolved failure case where training with flash attention in low-precision settings leads to catastrophic loss explosions. Our in-depth analysis reveals that the failure is not a random artifact but caused by two intertwined phenomena: the emergence of similar low-rank representations within the attention mechanism and the compounding effect of biased rounding errors inherent in low-precision arithmetic. We demonstrate how these factors create a vicious cycle of error accumulation that corrupts weight updates, ultimately derailing the training dynamics. To validate our findings, we introduce a minimal modification to the flash attention that mitigates the bias in rounding errors. This simple change stabilizes the training process, confirming our analysis and offering a practical solution to this persistent problem.

  11. Artificial Hippocampus Networks for Efficient Long-Context Modeling

    Long-sequence modeling faces a fundamental trade-off between the efficiency of compressive fixed-size memory in RNN-like models and the fidelity of lossless growing memory in attention-based Transformers. Inspired by the Multi-Store Model in cognitive science, we introduce a memory framework of artificial neural networks. Our method maintains a sliding window of the Transformer's KV cache as lossless short-term memory, while a learnable module termed Artificial Hippocampus Network (AHN) recurrently compresses out-of-window information into a fixed-size compact long-term memory. To validate this framework, we instantiate AHNs using modern RNN-like architectures, including Mamba2, DeltaNet, and Gated DeltaNet. Extensive experiments on long-context benchmarks LV-Eval and InfiniteBench demonstrate that AHN-augmented models consistently outperform sliding window baselines and achieve performance comparable or even superior to full-attention models, while substantially reducing computational and memory requirements. For instance, augmenting the Qwen2.5-3B-Instruct with AHNs reduces inference FLOPs by 40.5% and memory cache by 74.0%, while improving its average score on LV-Eval (128k sequence length) from 4.41 to 5.88. Code is available at: https://github.com/ByteDance-Seed/AHN.
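    The short-term/long-term memory split described above can be sketched as a toy class. This is only an illustration: the decay-based update stands in for the learned RNN-like compression modules the paper instantiates (Mamba2, DeltaNet, Gated DeltaNet), and `WINDOW` and `D` are invented for the example.

    ```python
    import numpy as np

    WINDOW, D = 4, 6  # hypothetical window length and feature dimension

    class ToyAHN:
        """Lossless sliding-window short-term memory plus a fixed-size recurrent state."""
        def __init__(self, d):
            self.window = []          # short-term memory: exact recent entries (KV-like)
            self.state = np.zeros(d)  # long-term memory: compressed, constant size
            self.decay = 0.9          # stand-in for a learned recurrent update

        def step(self, token_vec):
            self.window.append(token_vec)
            if len(self.window) > WINDOW:
                evicted = self.window.pop(0)
                # recurrently compress out-of-window information into the fixed state
                self.state = self.decay * self.state + (1 - self.decay) * evicted

    rng = np.random.default_rng(1)
    ahn = ToyAHN(D)
    for _ in range(1000):             # memory use stays flat as the sequence grows
        ahn.step(rng.normal(size=D))
    ```

    The point of the sketch is the shape of the memory: regardless of sequence length, storage is bounded by the window plus one fixed-size vector.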

  12. Pushing on Multilingual Reasoning Models with Language-Mixed Chain-of-Thought

    Recent frontier models employ long chain-of-thought reasoning to explore solution spaces in context and achieve stronger performance. While many works study distillation to build smaller yet capable models, most focus on English and little is known about language-specific reasoning. To bridge this gap, we first introduce **Language-Mixed CoT**, a reasoning schema that switches between English and a target language, using English as an anchor to excel in reasoning while minimizing translation artifacts. As a Korean case study, we curate **Yi-Sang**: 5.79M native-Korean prompts from web Q&A, exams, STEM, and code; 3.7M long reasoning traces generated from Qwen3-32B; and a targeted 260k high-yield subset. We train nine models (4B-35B) across six families (Qwen2.5, Llama-3.1, Gemma-3, etc.). Our best model, **KO-REAson-35B**, achieves state-of-the-art performance, with the highest overall average score (64.0 ± 25), ranking first on 5/9 benchmarks and second on the remainder. Smaller and mid-sized models also benefit substantially, with an average improvement of +18.6 points across the nine evaluated benchmarks. Ablations show **Language-Mixed CoT** is more effective than monolingual CoT, also resulting in cross-lingual and multi-modal performance gains. We release our data-curation pipeline, evaluation system, datasets, and models to advance research on language-specific reasoning. Data and model collection: https://huggingface.co/KOREAson.

  13. The Markovian Thinker

    Reinforcement learning (RL) has recently become a strong recipe for training reasoning LLMs that produce long chains of thought (LongCoT). Yet the standard RL "thinking environment", where the state is the prompt plus all prior reasoning tokens, makes the state unbounded and forces attention-based policies to pay quadratic compute as thoughts lengthen. We revisit the environment itself. We propose Markovian Thinking, a paradigm in which the policy advances reasoning while conditioning on a constant-size state, decoupling thinking length from context size. As an immediate consequence this yields linear compute with constant memory. We instantiate this idea with Delethink, an RL environment that structures reasoning into fixed-size chunks. Within each chunk, the model thinks as usual; at the boundary, the environment resets the context and reinitializes the prompt with a short carryover. Through RL, the policy learns to write a textual state near the end of each chunk sufficient for seamless continuation of reasoning after reset. Trained in this environment, an R1-Distill 1.5B model reasons in 8K-token chunks yet thinks up to 24K tokens, matching or surpassing LongCoT-RL trained with a 24K budget. With test-time scaling, Delethink continues to improve where LongCoT plateaus. The effect of linear compute is substantial: we empirically estimate at 96K average thinking length LongCoT-RL costs 27 H100-months vs. 7 for Delethink. Analysis at RL initialization shows off-the-shelf reasoning models (1.5B-120B) often sample Markovian traces zero-shot across diverse benchmarks, providing positive samples that make RL effective at scale. Our results show that redesigning the thinking environment is a powerful lever: it enables very long reasoning without quadratic overhead and opens a path toward efficient, scalable reasoning LLMs.
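    The Delethink environment's chunked rollout can be sketched as a loop. The one-token `toy_policy`, and the sizes `CHUNK` and `CARRY`, are hypothetical stand-ins for the trained model and the paper's 8K-token chunks with a short carryover.

    ```python
    CHUNK, CARRY = 8, 3  # hypothetical chunk size and carryover length (in tokens)

    def toy_policy(context):
        """Stand-in for the reasoning LLM: emits one token per call."""
        return f"t{len(context)}"

    def delethink_rollout(prompt, total_tokens):
        """Reason in fixed-size chunks; reset context at each boundary with a carryover."""
        context = list(prompt)
        trace, max_ctx = [], len(context)
        while len(trace) < total_tokens:
            token = toy_policy(context)
            trace.append(token)
            context.append(token)
            max_ctx = max(max_ctx, len(context))
            if len(context) - len(prompt) >= CHUNK:
                # chunk boundary: keep the prompt plus the last CARRY tokens,
                # which play the role of the learned textual state
                context = list(prompt) + context[-CARRY:]
        return trace, max_ctx

    trace, max_ctx = delethink_rollout(["Q"], 40)
    # thinking length (40 tokens) far exceeds the bounded working context
    ```

    In the real environment the policy learns, via RL, to write a carryover that suffices to continue reasoning after the reset; the sketch only shows why the working context, and hence attention cost per step, stays constant.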

  14. Native Hybrid Attention for Efficient Sequence Modeling

    Transformers excel at sequence modeling but face quadratic complexity, while linear attention offers improved efficiency but often compromises recall accuracy over long contexts. In this work, we introduce Native Hybrid Attention (NHA), a novel hybrid architecture of linear and full attention that integrates both intra- and inter-layer hybridization into a unified layer design. NHA maintains long-term context in key-value slots updated by a linear RNN, and augments them with short-term tokens from a sliding window. A single softmax attention operation is then applied over all keys and values, enabling per-token and per-head context-dependent weighting without requiring additional fusion parameters. The inter-layer behavior is controlled through a single hyperparameter, the sliding window size, which allows smooth adjustment between purely linear and full attention while keeping all layers structurally uniform. Experimental results show that NHA surpasses Transformers and other hybrid baselines on recall-intensive and commonsense reasoning tasks. Furthermore, pretrained LLMs can be structurally hybridized with NHA, achieving competitive accuracy while delivering significant efficiency gains. Code is available at https://github.com/JusenD/NHA.

  15. OBS-Diff: Accurate Pruning For Diffusion Models in One-Shot

    Large-scale text-to-image diffusion models, while powerful, suffer from prohibitive computational cost. Existing one-shot network pruning methods can hardly be directly applied to them due to the iterative denoising nature of diffusion models. To bridge the gap, this paper presents OBS-Diff, a novel one-shot pruning framework that enables accurate and training-free compression of large-scale text-to-image diffusion models. Specifically, (i) OBS-Diff revitalizes the classic Optimal Brain Surgeon (OBS), adapting it to the complex architectures of modern diffusion models and supporting diverse pruning granularity, including unstructured, N:M semi-structured, and structured (MHA heads and FFN neurons) sparsity; (ii) To align the pruning criteria with the iterative dynamics of the diffusion process, by examining the problem from an error-accumulation perspective, we propose a novel timestep-aware Hessian construction that incorporates a logarithmic-decrease weighting scheme, assigning greater importance to earlier timesteps to mitigate potential error accumulation; (iii) Furthermore, a computationally efficient group-wise sequential pruning strategy is proposed to amortize the expensive calibration process. Extensive experiments show that OBS-Diff achieves state-of-the-art one-shot pruning for diffusion models, delivering inference acceleration with minimal degradation in visual quality.

Solidot (15)

  1. Ubuntu 25.10 'Questing Quokka' released

    Canonical has released Ubuntu 25.10, codenamed Questing Quokka. It is a short-term support release, supported for only 9 months; the next release, Ubuntu 26.04, due next April, is a long-term support release. New features in Ubuntu 25.10 include Linux kernel 6.17, Mesa 25.2.3, GNOME 49, Firefox 143, LibreOffice 25.8, Audacity 3.7.1, GIMP 3.0.4, BlueZ 5.83, Pipewire 1.4.7, OpenSSL 3.5.3, GCC 15.2, binutils 2.45, and glibc 2.42. Other notable changes: the Ubuntu session now supports only Wayland, suspend/resume support is enabled for the NVIDIA proprietary driver, new default image viewer and terminal applications ship, and TPM-backed full-disk encryption gains recovery-key management.

  2. Gold price tops $4,000 per ounce for the first time

    Half a year after breaking $3,000, the international price of gold has surpassed $4,000 per ounce for the first time in history. This is the third major post-war surge, after those of the early 1970s and the late 2000s. The backdrop is the wavering dominance of the US dollar: amid international political fragmentation, capital with nowhere else to go is concentrating in gold as a hard asset. Gold has soared more than 50% this year, touching $4,001 on Tuesday and doubling in less than two years. Hedge-fund billionaire Ray Dalio said on Tuesday that gold is a safer alternative to the dollar and called it an "excellent diversifier" for a portfolio.

  3. Fedora 43 grows the /boot partition from 1GB to 2GB

    The Fedora distribution increased the /boot partition from 500 MB to 1GB in 2016. As firmware keeps growing, Fedora 43, due later this year, increases the /boot partition from 1GB to 2GB, which is hoped to accommodate new hardware for the next five years. Among firmware, GPU firmware files are the largest: NVIDIA's firmware runs to 100MB, with the firmware for the NVIDIA GPU System Processor (GSP) approaching 49MB even when compressed. Firmware on ARM64 systems such as Qualcomm Snapdragon X Elite laptops is also growing steadily.

  4. 1-2 Starlink satellites reenter the atmosphere every day

    SpaceX's Starlink broadband constellation sees 1-2 satellites reenter the atmosphere every day. Harvard astrophysicist Jonathan McDowell says the risk of Starlink satellites becoming space junk is low: their orbits sit below 600 km, and deorbiting uses a satellite's thrusters to push it to a lower altitude, where it reenters in an uncontrolled but assisted manner and burns up in the process. He considers 600-1000 km the most dangerous orbital altitudes. Some Chinese constellation projects operate near this range, which worries him most: if those satellites fail, they could worsen the already serious debris problem in that orbital region, and higher orbits mean longer reentry times, raising the collision risk over a longer period.

  5. Salesforce refuses to pay ransom for stolen customer data

    In June, Google disclosed a new scam targeting Salesforce accounts: a social-engineering attack abusing a Salesforce feature that lets customers link their accounts to third-party apps, integrating data with internal systems for blogs, mapping tools, and similar resources. Attackers posing as IT staff contact targets directly to request access, instructing employees to connect an external app to the Salesforce instance and then to enter an eight-digit security code into the Salesforce interface; the attackers use that code to access the instance and all the data stored in it. Well-known victims include Adidas, Qantas, Allianz Life, Cisco, the LVMH luxury brands Louis Vuitton, Dior, and Tiffany, and Google itself. The criminal group has now set up a website claiming to hold 989.45 million recovered records and is demanding that Salesforce negotiate a ransom, threatening to leak its customers' data by a Friday deadline. Salesforce has publicly stated that it will not pay.

  6. Study finds BMI does not reflect whether a person is healthy

    For decades BMI has been treated as the gold standard for measuring human health. But a University of Waterloo study finds that BMI captures only a small part of a person's health and does more harm than good. Lead author Professor Aly Bailey says BMI cannot distinguish muscle from fat, ignores how fat is distributed in the body, and overlooks important factors such as age, sex, and ethnicity. Two people with the same BMI can be in completely different health. BMI was developed in the 19th century as a measure based on height and weight, with no consideration of health. The study is published in the journal Body Image.

  7. International student arrivals in the US fell 19% year-on-year in August

    US entry records for August show 19% fewer international students arriving than in the same month last year. Experts caution that the data does not necessarily indicate a drop in enrollment, since it excludes continuing international students who stayed in the US over the summer and those arriving in September because of later start dates or other reasons. Even so, the figures are among the most authoritative data so far on how US visa delays are affecting international enrollment; the vast majority of international students arrive in August. Since the start of the fall term, some US universities have reported lower international enrollment, and organizations such as NAFSA have forecast a 15% decline. The analysis shows the steepest drops from Africa (down 32% year-on-year), Asia (down 24%), and the Middle East (down 17%), while numbers from Europe and Oceania held steady.

  8. OpenAI's AI compute deals signed this year reach $1 trillion

    OpenAI has recently struck enormous deals with NVIDIA, Oracle, CoreWeave, and AMD, bringing the AI compute agreements it has signed this year to $1 trillion; yet its revenue falls far short of the committed spending, raising serious questions about how it will fund them. The deals will give OpenAI more than 20GW of compute over the next decade, equivalent to the output of 20 nuclear reactors. OpenAI executives estimate that, at current prices, each 1GW of AI compute costs about $50 billion to deploy, for a total of roughly $1 trillion. These deals tie the world's best-known technology companies to OpenAI's profitability; OpenAI needs to become a profitable business capable of meeting its increasingly heavy financial obligations.

  9. Denmark plans to ban children under 15 from social media

    The Danish government plans to ban children under 15 from using social media. Prime Minister Mette Frederiksen told parliament, "Mobile phones and social media are stealing our children's childhood. We have unleashed a monster," noting that almost all Danish seventh-graders (aged 13 or 14) own a phone. "I hope the members here will help tighten the law to better look after Denmark's children." She gave no details of the ban, and no such bill appears in the government's legislative program for the next parliamentary year.

  10. Afghanistan blocks Facebook, Instagram, Snapchat, and other social platforms

    According to monitoring by Netblocks, the Afghan government has blocked access to Facebook and Instagram, along with other major social platforms such as Snapchat and TikTok. Reaching the restricted platforms now requires proxy tools, while WhatsApp and YouTube remain accessible. The Taliban government has not issued a public statement. This follows a rare nationwide internet blackout in Afghanistan that lasted more than 48 hours.

  11. After a sales plunge, Synology allows third-party drives in its NAS products

    Earlier this year Synology made a controversial decision: its 2025 Plus-series NAS products would be compatible only with its own-brand hard drives. Synology claimed that installing incompatible drives could prevent a NAS from creating storage pools. Synology does not manufacture drives; it mainly rebadges drives from Seagate and Toshiba, and Synology-branded drives typically cost slightly more than comparable third-party models. For example, the Synology Plus-series 8TB 3.5-inch HDD HAT3310 sells for $210 on its website, while the Toshiba N300, one of the drives behind the HAT3310, sells for $173 at several online stores. The move drew broad consumer criticism; buyers voted with their wallets, and sales plunged over the past few months. Synology has now released DSM 7.3, quietly reversing the policy: third-party drives no longer trigger warnings or disable features. Critics say the episode has damaged Synology's reputation.

  12. The 2025 Nobel Prize in Chemistry goes to US, Japanese, and British scientists

    The 2025 Nobel Prize in Chemistry was awarded to Japanese scientist Susumu Kitagawa, British scientist Richard Robson, and American scientist Omar M. Yaghi "for the development of metal-organic frameworks." They developed a new molecular architecture in which metal ions serve as cornerstones linked by long organic (carbon-based) molecules. Together, the metal ions and molecules form crystals containing large cavities. These porous materials are called metal-organic frameworks (MOFs). By varying the building blocks used in a MOF, chemists can design it to capture and store specific substances; MOFs can also drive chemical reactions or conduct electricity. Following the laureates' breakthrough discoveries, chemists have built tens of thousands of different MOFs. Some of these may help address humanity's greatest challenges, including separating PFAS from water, breaking down trace pharmaceuticals in the environment, capturing carbon dioxide, and harvesting water from desert air.

  13. Deforestation drains butterflies of their color

    Butterflies living in natural forests differ from those in plantations. In natural forests, vivid colors are a matter of survival, helping butterflies attract mates and evade predators. But as humans clear forests and convert them to single-crop plantations, the butterflies living there grow duller. Brazilian researchers found 31 butterfly species of varied colors in natural forest but only 21, mostly brown, in plantations. Natural forests offer butterflies diverse habitats, while in the uniform backdrop of a plantation drab species survive more easily. Butterflies are among the world's most colorful organisms and respond quickly to environmental change; when color diversity among animals declines, it usually signals that the ecosystem and its functions are degrading.

  14. Microsoft says it will keep building Xbox consoles

    Microsoft recently raised the prices of the Xbox Series X and Series S again and increased the price of its Xbox Game Pass Ultimate subscription by 50%. The moves have led many to doubt the future of Microsoft's console business, and retailers including Costco have decided to pull Xbox products from their shelves. Sony's PS5 will be followed by a PS6, but will a new Xbox follow the Series X? Responding to rumors that it might abandon hardware, Microsoft issued a statement on Monday reaffirming its commitment to building Xbox consoles and to continued hardware collaboration with AMD. Both Microsoft's and Sony's current consoles use CPU and GPU designs from AMD. Microsoft's plan for a first-party Xbox handheld has reportedly been canceled, allegedly because AMD's contract required sales of at least ten million units, while the Steam Deck has sold only 4-5 million units since its 2022 launch.

  15. Ubuntu Linux 26.04 LTS codenamed Resolute Raccoon

    With Ubuntu 25.10 about to be released, Canonical has announced that the next LTS (long-term support) release, Ubuntu 26.04, will be codenamed Resolute Raccoon. Ubuntu 25.10 is supported for only nine months, while Ubuntu 26.04 will be supported for five years and is expected in April 2026. Key features of Ubuntu 25.10 include Linux 6.17, GCC 15, the Rust-based system components sudo-rs and Rust Coreutils, and GNOME 49 as the default desktop environment. Specific features of Ubuntu 26.04 will be revealed over the coming months.