OrangeBot.AI Digest — 2025-10-26
60 headlines across 4 sources, aggregated for the day.
Hacker News(15)
- Nvidia DGX Spark: When benchmark numbers meet production reality (publish.obsidian.md)
- Alzheimer's disrupts circadian rhythms of plaque-clearing brain cells (medicine.washu.edu)
- Movie posters from Ghana in the 1980s and 90s (www.utterlyinteresting.com)
- Myanmar military shuts down a major cybercrime center, detains over 2k people (apnews.com)
- Let's Help NetBSD Cross the Finish Line Before 2025 Ends (mail-index.netbsd.org)
- A bug that taught me more about PyTorch than years of using it (elanapearl.github.io)
- You Should Feed the Bots (maurycyz.com)
- You already have a Git server (maurycyz.com)
- Downloadable movie posters from the 40s, 50s, 60s, and 70s (hrc.contentdm.oclc.org)
- Eavesdropping on Internal Networks via Unencrypted Satellites (satcom.sysnet.ucsd.edu)
- Advent of Code 2025: Number of puzzles reduced from 25 to 12 for the first time (adventofcode.com)
- Clojure Land – Discover open-source Clojure libraries and frameworks (clojure.land)
- Asbestosis (diamondgeezer.blogspot.com)
- What If Tariffs? (www.swatch.com)
- GenAI Image Editing Showdown (genai-showdown.specr.net)
GitHub Trending(15)
- LadybirdBrowser / ladybird
Truly independent web browser
- yeongpin / cursor-free-vip
[Supports 0.49.x] Resets the Cursor AI machine ID and bypasses the token limit, unlocking Pro features for free by working around trial-limit errors such as "You've reached your trial request limit." / "Too many free trial accounts used on this machine. Please upgrade to pro. We have this limit in place to prevent abuse. Please let us know if you believe this is a mistake."
- cjpais / Handy
A free, open source, and extensible speech-to-text application that works completely offline.
- Shubhamsaboo / awesome-llm-apps
Collection of awesome LLM apps with AI Agents and RAG using OpenAI, Anthropic, Gemini and open-source models.
- microsoft / agent-lightning
The absolute trainer to light up AI agents.
- public-apis / public-apis
A collective list of free APIs
- coinbase / x402
A payments protocol for the internet. Built on HTTP.
- donnemartin / system-design-primer
Learn how to design large-scale systems. Prep for the system design interview. Includes Anki flashcards.
- MHSanaei / 3x-ui
Xray panel supporting multi-protocol multi-user expire day & traffic & IP limit (Vmess, Vless, Trojan, ShadowSocks, Wireguard, Tunnel, Mixed, HTTP)
- 2dust / v2rayN
A GUI client for Windows, Linux and macOS, support Xray and sing-box and others
- chartdb / chartdb
Database diagrams editor that allows you to visualize and design your DB with a single query.
- go-gitea / gitea
Git with a cup of tea! Painless self-hosted all-in-one software development service, including Git hosting, code review, team collaboration, package registry and CI/CD
- hoppscotch / hoppscotch
Open-Source API Development Ecosystem • https://hoppscotch.io • Offline, On-Prem & Cloud • Web, Desktop & CLI • Open-Source Alternative to Postman, Insomnia
- paperless-ngx / paperless-ngx
A community-supported supercharged document management system: scan, index and archive all your documents
- cloudcommunity / Free-Certifications
A curated list of free courses with certifications. Also available at https://free-certifications.com/
Hugging Face(15)
- Human-Agent Collaborative Paper-to-Page Crafting for Under $0.1
In the quest for scientific progress, communicating research is as vital as the discovery itself. Yet, researchers are often sidetracked by the manual, repetitive chore of building project webpages to make their dense papers accessible. While automation has tackled static slides and posters, the dynamic, interactive nature of webpages has remained an unaddressed challenge. To bridge this gap, we reframe the problem, arguing that the solution lies not in a single command, but in a collaborative, hierarchical process. We introduce AutoPage, a novel multi-agent system that embodies this philosophy. AutoPage deconstructs paper-to-page creation into a coarse-to-fine pipeline from narrative planning to multimodal content generation and interactive rendering. To combat AI hallucination, dedicated "Checker" agents verify each step against the source paper, while optional human checkpoints ensure the final product aligns perfectly with the author's vision, transforming the system from a mere tool into a powerful collaborative assistant. To rigorously validate our approach, we also construct PageBench, the first benchmark for this new task. Experiments show AutoPage not only generates high-quality, visually appealing pages but does so with remarkable efficiency: in under 15 minutes for less than $0.1. Code and dataset will be released at https://mqleet.github.io/AutoPage_ProjectPage/.
- AdaSPEC: Selective Knowledge Distillation for Efficient Speculative Decoders
Speculative Decoding (SD) accelerates large language model inference by employing a small draft model to generate predictions, which are then verified by a larger target model. The effectiveness of SD hinges on the alignment between these models, which is typically enhanced by Knowledge Distillation (KD). However, conventional KD methods aim to minimize the KL divergence between the draft and target models across all tokens, a goal that is misaligned with the true objective of SD, which is to maximize token acceptance rate. Therefore, draft models often struggle to fully assimilate the target model's knowledge due to capacity constraints, leading to suboptimal performance. To address this challenge, we propose AdaSPEC, a novel method that incorporates selective token filtering into the KD process. AdaSPEC utilizes a reference model to identify and filter out difficult-to-fit tokens, enabling the distillation of a draft model that better aligns with the target model on simpler tokens. This approach improves the overall token acceptance rate without compromising generation quality. We evaluate AdaSPEC across diverse tasks, including arithmetic reasoning, instruction-following, coding, and summarization, using model configurations of 31M/1.4B and 350M/2.7B parameters. Our results demonstrate that AdaSPEC consistently outperforms the state-of-the-art DistillSpec method, achieving higher acceptance rates across all tasks (up to 15%). The code is publicly available at https://github.com/yuezhouhu/adaspec.
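The acceptance-rate mechanics the abstract turns on can be seen in a toy sketch (my illustration, not AdaSPEC's code; `draft` and `target` are stand-in greedy models): the target keeps draft tokens only while the two models agree, so a better-aligned draft yields more tokens per expensive target pass.

```python
# Toy speculative-decoding loop: a cheap draft proposes k tokens per round,
# the target verifies them greedily, and decoding stops accepting draft
# tokens at the first disagreement.
def speculative_decode(draft, target, prompt, k=4, rounds=3):
    out = list(prompt)
    for _ in range(rounds):              # each round costs one target pass
        ctx = list(out)
        proposal = []
        for _ in range(k):               # draft proposes k tokens cheaply
            tok = draft(ctx)
            proposal.append(tok)
            ctx.append(tok)
        for tok in proposal:             # greedy acceptance check
            if target(out) == tok:
                out.append(tok)          # accepted: keep the draft token
            else:
                out.append(target(out))  # rejected: take the target's token
                break                    # and discard the rest of the draft
    return out

target = lambda ctx: "abc"[len(ctx) % 3]     # stand-in "target model"
good_draft = target                          # perfectly aligned draft
bad_draft = lambda ctx: "abx"[len(ctx) % 3]  # disagrees on every third token

n_good = len(speculative_decode(good_draft, target, ["s"]))
n_bad = len(speculative_decode(bad_draft, target, ["s"]))
assert n_good > n_bad  # same target calls, more tokens from the aligned draft
```

In these terms, AdaSPEC's proposal is to distill the draft so it agrees with the target on the tokens it can actually fit, rather than minimizing KL over all tokens.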
- Open-o3 Video: Grounded Video Reasoning with Explicit Spatio-Temporal Evidence
Most video reasoning models only generate textual reasoning traces without indicating when and where key evidence appears. Recent models such as OpenAI-o3 have sparked wide interest in evidence-centered reasoning for images, yet extending this ability to videos is more challenging, as it requires joint temporal tracking and spatial localization across dynamic scenes. We introduce Open-o3 Video, a non-agent framework that integrates explicit spatio-temporal evidence into video reasoning, and carefully collect training data and design training strategies to address the aforementioned challenges. The model highlights key timestamps, objects, and bounding boxes alongside its answers, allowing reasoning to be grounded in concrete visual observations. To enable this functionality, we first curate and build two high-quality datasets, STGR-CoT-30k for SFT and STGR-RL-36k for RL, with carefully constructed temporal and spatial annotations, since most existing datasets offer either temporal spans for videos or spatial boxes on images, lacking unified spatio-temporal supervision and reasoning traces. Then, we adopt a cold-start reinforcement learning strategy with multiple specially designed rewards that jointly encourage answer accuracy, temporal alignment, and spatial precision. On V-STAR benchmark, Open-o3 Video achieves state-of-the-art performance, raising mAM by 14.4% and mLGM by 24.2% on the Qwen2.5-VL baseline. Consistent improvements are also observed on a broad range of video understanding benchmarks, including VideoMME, WorldSense, VideoMMMU, and TVGBench. Beyond accuracy, the reasoning traces produced by Open-o3 Video also provide valuable signals for test-time scaling, enabling confidence-aware verification and improving answer reliability.
- HoloCine: Holistic Generation of Cinematic Multi-Shot Long Video Narratives
State-of-the-art text-to-video models excel at generating isolated clips but fall short of creating the coherent, multi-shot narratives, which are the essence of storytelling. We bridge this "narrative gap" with HoloCine, a model that generates entire scenes holistically to ensure global consistency from the first shot to the last. Our architecture achieves precise directorial control through a Window Cross-Attention mechanism that localizes text prompts to specific shots, while a Sparse Inter-Shot Self-Attention pattern (dense within shots but sparse between them) ensures the efficiency required for minute-scale generation. Beyond setting a new state-of-the-art in narrative coherence, HoloCine develops remarkable emergent abilities: a persistent memory for characters and scenes, and an intuitive grasp of cinematic techniques. Our work marks a pivotal shift from clip synthesis towards automated filmmaking, making end-to-end cinematic creation a tangible future. Our code is available at: https://holo-cine.github.io/.
- DyPE: Dynamic Position Extrapolation for Ultra High Resolution Diffusion
Diffusion Transformer models can generate images with remarkable fidelity and detail, yet training them at ultra-high resolutions remains extremely costly due to the self-attention mechanism's quadratic scaling with the number of image tokens. In this paper, we introduce Dynamic Position Extrapolation (DyPE), a novel, training-free method that enables pre-trained diffusion transformers to synthesize images at resolutions far beyond their training data, with no additional sampling cost. DyPE takes advantage of the spectral progression inherent to the diffusion process, where low-frequency structures converge early, while high-frequencies take more steps to resolve. Specifically, DyPE dynamically adjusts the model's positional encoding at each diffusion step, matching their frequency spectrum with the current stage of the generative process. This approach allows us to generate images at resolutions that exceed the training resolution dramatically, e.g., 16 million pixels using FLUX. On multiple benchmarks, DyPE consistently improves performance and achieves state-of-the-art fidelity in ultra-high-resolution image generation, with gains becoming even more pronounced at higher resolutions. Project page is available at https://noamissachar.github.io/DyPE/.
- Loopholing Discrete Diffusion: Deterministic Bypass of the Sampling Wall
Discrete diffusion models offer a promising alternative to autoregressive generation through parallel decoding, but they suffer from a sampling wall: once categorical sampling occurs, rich distributional information collapses into one-hot vectors and cannot be propagated across steps, forcing subsequent steps to operate with limited information. To mitigate this problem, we introduce Loopholing, a novel and simple mechanism that preserves this information via a deterministic latent pathway, leading to Loopholing Discrete Diffusion Models (LDDMs). Trained efficiently with a self-conditioning strategy, LDDMs achieve substantial gains: reducing generative perplexity by up to 61% over prior baselines, closing (and in some cases surpassing) the gap with autoregressive models, and producing more coherent text. Applied to reasoning tasks, LDDMs also improve performance on arithmetic benchmarks such as Countdown and Game of 24. These results also indicate that loopholing mitigates idle steps and oscillations, providing a scalable path toward high-quality non-autoregressive text generation.
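The "sampling wall" itself is easy to see in miniature (my sketch, not the paper's code; the loopholed state here is only schematic): categorical sampling throws away every bit of probability mass except the sampled token's.

```python
# Toy illustration of the sampling wall in discrete diffusion: sampling
# collapses the model's distribution into a one-hot vector, discarding the
# mass on every unsampled token.
import random

def sample_one_hot(probs):
    tok = random.choices(range(len(probs)), weights=probs)[0]
    return [1.0 if i == tok else 0.0 for i in range(len(probs))], tok

probs = [0.5, 0.3, 0.2]         # model's belief over a 3-token vocabulary
one_hot, tok = sample_one_hot(probs)

# Vanilla step: the next denoising step sees only `one_hot`; the 0.3/0.2
# mass is unrecoverable from it.
plain_state = one_hot
# Loopholing-style step (schematic): a deterministic latent pathway carries
# the full distribution forward alongside the sampled token.
loopholed_state = (one_hot, probs)
```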
- SAKE: Towards Editing Auditory Attribute Knowledge of Large Audio-Language Models
Knowledge editing offers an efficient way to update model knowledge without full retraining, but prior work has concentrated almost exclusively on textual or visual modalities. We introduce SAKE, the first benchmark specifically designed for editing auditory attribute knowledge in Large Audio-Language Models (LALMs). Unlike factual updates, SAKE targets several abstract auditory attributes, capturing knowledge types that go beyond conventional textual and visual domains. We benchmark seven editing methods on two LALMs along four dimensions: reliability, generality, audio/text locality, and portability. Results highlight challenges such as preserving intra-attribute knowledge unrelated to the edit, generalizing edits to multimodal reasoning, and maintaining edits under sequential updates. SAKE provides a principled framework to study how knowledge editing extends to the auditory modalities, opening new directions for maintaining and adapting LALMs in more diverse real-world scenarios.
- Every Question Has Its Own Value: Reinforcement Learning with Explicit Human Values
We propose Reinforcement Learning with Explicit Human Values (RLEV), a method that aligns Large Language Model (LLM) optimization directly with quantifiable human value signals. While Reinforcement Learning with Verifiable Rewards (RLVR) effectively trains models in objective domains using binary correctness rewards, it overlooks that not all tasks are equally significant. RLEV extends this framework by incorporating human-defined value signals directly into the reward function. Using exam-style data with explicit ground-truth value labels, RLEV consistently outperforms correctness-only baselines across multiple RL algorithms and model scales. Crucially, RLEV policies not only improve value-weighted accuracy but also learn a value-sensitive termination policy: concise for low-value prompts, thorough for high-value ones. We demonstrate this behavior stems from value-weighted gradient amplification on end-of-sequence tokens. Ablation studies confirm the gain is causally linked to value alignment. RLEV remains robust under noisy value signals, such as difficulty-based labels, demonstrating that optimizing for an explicit utility function offers a practical path to aligning LLMs with human priorities.
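The reward change RLEV makes over plain RLVR is small enough to show in a toy snippet (illustrative only; the point values are hypothetical exam weights, not from the paper's data): the binary correctness reward is scaled by a human-assigned value, so high-value questions contribute proportionally larger policy gradients.

```python
# Correctness-only RLVR reward vs. value-weighted RLEV reward.
def rlvr_reward(correct: bool) -> float:
    return float(correct)             # 1.0 if correct, else 0.0

def rlev_reward(correct: bool, value: float) -> float:
    return value * float(correct)     # scale correctness by question value

batch = [            # (answer correct?, human-assigned question value)
    (True, 1.0),     # low-stakes question answered correctly
    (True, 5.0),     # high-stakes question answered correctly
    (False, 5.0),    # high-stakes question missed: reward is 0 either way
]
rewards = [rlev_reward(c, v) for c, v in batch]
assert rewards == [1.0, 5.0, 0.0]
```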
- Investigating Safety Vulnerabilities of Large Audio-Language Models Under Speaker Emotional Variations
Large audio-language models (LALMs) extend text-based LLMs with auditory understanding, offering new opportunities for multimodal applications. While their perception, reasoning, and task performance have been widely studied, their safety alignment under paralinguistic variation remains underexplored. This work systematically investigates the role of speaker emotion. We construct a dataset of malicious speech instructions expressed across multiple emotions and intensities, and evaluate several state-of-the-art LALMs. Our results reveal substantial safety inconsistencies: different emotions elicit varying levels of unsafe responses, and the effect of intensity is non-monotonic, with medium expressions often posing the greatest risk. These findings highlight an overlooked vulnerability in LALMs and call for alignment strategies explicitly designed to ensure robustness under emotional variation, a prerequisite for trustworthy deployment in real-world settings.
- The Massive Legal Embedding Benchmark (MLEB)
We present the Massive Legal Embedding Benchmark (MLEB), the largest, most diverse, and most comprehensive open-source benchmark for legal information retrieval to date. MLEB consists of ten expert-annotated datasets spanning multiple jurisdictions (the US, UK, EU, Australia, Ireland, and Singapore), document types (cases, legislation, regulatory guidance, contracts, and literature), and task types (search, zero-shot classification, and question answering). Seven of the datasets in MLEB were newly constructed in order to fill domain and jurisdictional gaps in the open-source legal information retrieval landscape. We document our methodology in building MLEB and creating the new constituent datasets, and release our code, results, and data openly to assist with reproducible evaluations.
- Search Self-play: Pushing the Frontier of Agent Capability without Supervision
Reinforcement learning with verifiable rewards (RLVR) has become the mainstream technique for training LLM agents. However, RLVR highly depends on well-crafted task queries and corresponding ground-truth answers to provide accurate rewards, which requires massive human efforts and hinders the RL scaling processes, especially under agentic scenarios. Although a few recent works explore task synthesis methods, the difficulty of generated agentic tasks can hardly be controlled to provide effective RL training advantages. To achieve agentic RLVR with higher scalability, we explore self-play training for deep search agents, in which the learning LLM utilizes multi-turn search engine calling and acts simultaneously as both a task proposer and a problem solver. The task proposer aims to generate deep search queries with well-defined ground-truth answers and increasing task difficulty. The problem solver tries to handle the generated search queries and output the correct answer predictions. To ensure that each generated search query has accurate ground truth, we collect all the searching results from the proposer's trajectory as external knowledge, then conduct retrieval-augmentation generation (RAG) to test whether the proposed query can be correctly answered with all necessary search documents provided. In this search self-play (SSP) game, the proposer and the solver co-evolve their agent capabilities through both competition and cooperation. With substantial experimental results, we find that SSP can significantly improve search agents' performance uniformly on various benchmarks without any supervision under both from-scratch and continuous RL training setups. The code is at https://github.com/Alibaba-Quark/SSP.
- Thought Communication in Multiagent Collaboration
Natural language has long enabled human cooperation, but its lossy, ambiguous, and indirect nature limits the potential of collective intelligence. While machines are not subject to these constraints, most LLM-based multi-agent systems still rely solely on natural language, exchanging tokens or their embeddings. To go beyond language, we introduce a new paradigm, thought communication, which enables agents to interact directly mind-to-mind, akin to telepathy. To uncover these latent thoughts in a principled way, we formalize the process as a general latent variable model, where agent states are generated by an unknown function of underlying thoughts. We prove that, in a nonparametric setting without auxiliary information, both shared and private latent thoughts between any pair of agents can be identified. Moreover, the global structure of thought sharing, including which agents share which thoughts and how these relationships are structured, can also be recovered with theoretical guarantees. Guided by the established theory, we develop a framework that extracts latent thoughts from all agents prior to communication and assigns each agent the relevant thoughts, along with their sharing patterns. This paradigm naturally extends beyond LLMs to all modalities, as most observational data arise from hidden generative processes. Experiments on both synthetic and real-world benchmarks validate the theory and demonstrate the collaborative advantages of thought communication. We hope this work illuminates the potential of leveraging the hidden world, as many challenges remain unsolvable through surface-level observation alone, regardless of compute or data scale.
- Seed3D 1.0: From Images to High-Fidelity Simulation-Ready 3D Assets
Developing embodied AI agents requires scalable training environments that balance content diversity with physics accuracy. World simulators provide such environments but face distinct limitations: video-based methods generate diverse content but lack real-time physics feedback for interactive learning, while physics-based engines provide accurate dynamics but face scalability limitations from costly manual asset creation. We present Seed3D 1.0, a foundation model that generates simulation-ready 3D assets from single images, addressing the scalability challenge while maintaining physics rigor. Unlike existing 3D generation models, our system produces assets with accurate geometry, well-aligned textures, and realistic physically-based materials. These assets can be directly integrated into physics engines with minimal configuration, enabling deployment in robotic manipulation and simulation training. Beyond individual objects, the system scales to complete scene generation through assembling objects into coherent environments. By enabling scalable simulation-ready content creation, Seed3D 1.0 provides a foundation for advancing physics-based world simulators. Seed3D 1.0 is now available on https://console.volcengine.com/ark/region:ark+cn-beijing/experience/vision?modelId=doubao-seed3d-1-0-250928&tab=Gen3D
- Conan: Progressive Learning to Reason Like a Detective over Multi-Scale Visual Evidence
Video reasoning, which requires multi-step deduction across frames, remains a major challenge for multimodal large language models (MLLMs). While reinforcement learning (RL)-based methods enhance reasoning capabilities, they often rely on text-only chains that yield ungrounded or hallucinated conclusions. Conversely, frame-retrieval approaches introduce visual grounding but still struggle with inaccurate evidence localization. To address these challenges, we present Conan, a framework for evidence-grounded multi-step video reasoning. Conan identifies contextual and evidence frames, reasons over cross-frame clues, and adaptively decides when to conclude or explore further. To achieve this, we (1) construct Conan-91K, a large-scale dataset of automatically generated reasoning traces that includes frame identification, evidence reasoning, and action decision, and (2) design a multi-stage progressive cold-start strategy combined with an Identification-Reasoning-Action (AIR) RLVR training framework to jointly enhance multi-step visual reasoning. Extensive experiments on six multi-step reasoning benchmarks demonstrate that Conan surpasses the baseline Qwen2.5-VL-7B-Instruct by an average of over 10% in accuracy, achieving state-of-the-art performance. Furthermore, Conan generalizes effectively to long-video understanding tasks, validating its strong scalability and robustness.
- Diff-XYZ: A Benchmark for Evaluating Diff Understanding
Reliable handling of code diffs is central to agents that edit and refactor repositories at scale. We introduce Diff-XYZ, a compact benchmark for code-diff understanding with three supervised tasks: apply (old code + diff → new code), anti-apply (new code − diff → old code), and diff generation (new code − old code → diff). Instances in the benchmark are triples ⟨old code, new code, diff⟩ drawn from real commits in CommitPackFT, paired with automatic metrics and a clear evaluation protocol. We use the benchmark to do a focused empirical study of the unified diff format and run a cross-format comparison of different diff representations. Our findings reveal that different formats should be used depending on the use case and model size. For example, representing diffs in search-replace format is good for larger models in the diff generation scenario, yet not well suited for diff analysis and smaller models. The Diff-XYZ benchmark is a reusable foundation for assessing and improving diff handling in LLMs that can aid future development of diff formats and models editing code. The dataset is published on HuggingFace Hub: https://huggingface.co/datasets/JetBrains-Research/diff-xyz.
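The three tasks can be sketched with a toy unified-diff replayer (my sketch, not the benchmark's harness; it only handles snippets small enough to sit entirely inside one hunk's context):

```python
# Sketch of the Diff-XYZ tasks: diff generation (old, new -> diff),
# apply (diff -> new), and anti-apply (diff reversed -> old).
import difflib

def generate_diff(old, new):
    # Diff generation: unified diff between two lists of code lines.
    return list(difflib.unified_diff(old, new, lineterm=""))

def replay_diff(diff, reverse=False):
    # Apply the hunk body: keep context lines, keep '+' lines, drop '-'
    # lines; with reverse=True the roles of '+' and '-' swap (anti-apply).
    keep = "-" if reverse else "+"
    out = []
    for line in diff:
        if line.startswith(("---", "+++", "@@")):
            continue  # skip file headers and the hunk header
        if line[0] in (" ", keep):
            out.append(line[1:])
    return out

old = ["def f(x):", "    return x + 1"]
new = ["def f(x):", "    return x + 2"]
d = generate_diff(old, new)
assert replay_diff(d) == new                # apply: old + diff -> new
assert replay_diff(d, reverse=True) == old  # anti-apply: new - diff -> old
```

A real applier would locate each hunk by line numbers and context inside a larger file; the simplification here works because the whole snippet fits in the hunk.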
Solidot(15)
- Japan launches HTV-X, a new cargo spacecraft, to the International Space Station
At 9:00 a.m. on October 26, the Japan Aerospace Exploration Agency (JAXA) launched HTV-X No. 1, a new uncrewed resupply spacecraft carrying food and experimental equipment to the International Space Station (ISS), aboard the seventh H3 rocket from the Tanegashima Space Center in Kagoshima Prefecture. HTV-X separated from the rocket about 14 minutes later, and the launch was a success. HTV-X is the successor to the Kounotori ("white stork"), which flew nine successful resupply missions between 2009 and 2020. Cargo capacity rises from 4 tons to nearly 6 tons, and the new craft can supply power to its cargo, allowing it to carry experiment samples that must be kept cold in freezers.
- Three Earth-sized planets discovered in a binary star system
Astronomers have discovered three Earth-sized planets in TOI-2267, a binary star system about 190 light-years from Earth. TOI-2267 has an unusual configuration: two of the planets orbit one of the stars, while the third orbits its companion, making TOI-2267 the first known binary system in which transits have been observed around both stars. TOI-2267 is a compact binary whose two stars orbit each other at very close range, creating a gravitational environment considered highly unstable for planet formation; nevertheless, three short-period, Earth-sized planets have been found there. The discovery is an exceptionally rare chance to test the limits of planet-formation theory in a complex gravitational environment and to better understand the structural diversity of planetary systems in the galaxy. The system is a natural laboratory for studying how rocky planets form and survive under extreme dynamical conditions, an environment previously thought incapable of sustaining stable planetary orbits.
- US startups embrace the 996 work schedule
The Washington Post reports that startups in Silicon Valley and New York are promoting the 996 schedule: six days a week, 9 a.m. to 9 p.m. These companies cast 996 as a virtue and a crucible, a way to gain an edge in fierce market competition: the window of opportunity in AI is only two to three years, and whoever gains an advantage first captures the market. Inaki Berenguer, managing partner at venture firm LifeX Ventures, says you had better run faster than everyone else. Kinjal Nandy, CEO of San Francisco AI startup Sonatic, says that despite the long hours his company still offers perks, sets aside time for meals and exercise, and even provides free subscriptions to the dating service Raya. Many startups require employees to work in the office and forbid remote work: AI startup StarSling requires six days a week on site; Rilla requires 70 hours a week in the office; Google co-founder Sergey Brin has likewise suggested AI engineers work 60-hour weeks. WHO data show that compared with a standard 35-40 hour week, working more than 55 hours a week raises stroke risk by 35% and the risk of death from heart disease by 17%. Long hours also hurt productivity: a British study found that working more than 60 hours a week lowers overall output and impairs cognition.
- Intel has cut 35,500 jobs in under two years
The first thing Lip-Bu Tan announced after becoming Intel CEO was mass layoffs. Intel cut as many as 20,500 jobs in roughly three months; together with the 15,000 positions eliminated under his predecessor, the chip giant has shed 35,500 employees in under two years. As of December 28, 2024, Intel had 108,900 employees, including several thousand at Altera, which has since become an independent company. According to Intel's latest SEC filing, as of September 27, 2025 the company had 88,400 employees: 83,300 at Intel itself and 5,100 at subsidiaries such as Mobileye. That means Intel has dismissed 20,500 employees under Tan's leadership, with the cuts concentrated in the second quarter.
- Zhuque-3 reusable rocket passes static-fire test
LandSpace this week completed the first phase of work toward the maiden flight of the Zhuque-3 (Flight 1) launch vehicle, a propellant-loading rehearsal and static-fire test, in preparation for the second phase later this year: a test flight with first-stage recovery. Zhuque-3's first and second stages are 4.5 meters in diameter with a 5.2-meter fairing; the rocket is 66.1 meters long, with a liftoff mass of about 570 tons and liftoff thrust of over 750 tons. It uses stainless steel as the primary airframe material, and its first stage is powered by nine Tianque-12A methalox engines, designed to return autonomously with high precision after an orbital launch, soft-land at a recovery site, and be reused. In expendable mode the rocket can carry 11,800 kg of payload; with a first-stage recovery attempt, 8,000 kg. By comparison, SpaceX's Falcon 9 can lift 22,800 kg to low Earth orbit.
- A former co-founder tries to remake Wikipedia for MAGA
American conservatives are unhappy with today's Wikipedia. They built their own online encyclopedia, Conservapedia, but almost nobody reads it; Republicans, insisting Wikipedia has a liberal bias, have demanded explanations from the Wikimedia Foundation, the nonprofit that runs it, and threatened an investigation into potential platform manipulation; Elon Musk says he will launch Grokipedia, an AI-driven encyclopedia alternative. Larry Sanger, the co-founder who helped create Wikipedia and left the platform early on, has since converted to Christianity, voted for Trump in 2024, and embraced MAGA; he is now working to recruit conservatives to actively edit Wikipedia entries.
- 2023 marine heat wave caused functional extinction of Florida's reef-building corals
According to a study published in Science, the record marine heat wave of 2023 killed nearly all of Florida's critically endangered Acropora coral colonies, marking the genus's functional extinction on Florida's reefs. The findings are a catastrophic warning for the future of coral ecosystems in rapidly warming oceans. Extreme climate events such as marine heat waves are increasing in frequency and intensity and are severely degrading the health, structure, and resilience of ecosystems worldwide. Coral reefs are among the most heat-sensitive ecosystems in the marine environment and have suffered mass bleaching and mortality from rising ocean temperatures for decades. The study shows that during this unprecedented event, Florida's reefs experienced the region's highest ocean temperatures on record, peaking at 32.3 degrees Celsius in July 2023. As the heat wave persisted, by March 2024 between 97.8% and 100% of elkhorn and staghorn coral colonies in the Florida Keys and the Dry Tortugas had died of prolonged, extreme heat stress. Mortality was lower (37.9%) in the northern region, possibly because waters off southeastern Florida were cooler.
- New Nobel laureate develops long-lasting regulatory T cells
The 2025 Nobel Prize in Physiology or Medicine went to three scientists for the discovery of regulatory T cells (Tregs), which prevent the immune system from mounting unintended attacks on the body's own organs. If scientists could manufacture Tregs in bulk that persist and keep working in the body, they could become an effective therapy for autoimmune diseases. One of the new laureates, immunologist Shimon Sakaguchi of Osaka University, has used a new method to produce large numbers of long-lasting Tregs. In the first of two papers published October 22 in Science Translational Medicine, he and colleagues describe how the lab-generated cells effectively suppressed immune responses in mice. In the second, he and other researchers made Tregs to treat an autoimmune skin disease in mice and used a similar method to produce human Tregs from the blood of patients with a painful condition. Sakaguchi generates Tregs from conventional T cells, including the very T cells that drive autoimmune disease; such cells are more common in blood than Tregs and easier to grow in culture.
- CS2 skin crash wipes out over $3 billion in market value
Counter-Strike 2 (CS2) has a huge skin-trading market, but a "minor update" Valve shipped on Wednesday turned that market upside down: once-exclusive rare skins are rare no longer, their value has crashed, and players holding them have taken heavy losses. Valve's latest update lets players combine five red-tier skins into a rare skin through a Trade Up contract. As a result, rare knives that sold for over $14,000 a day earlier plunged more than 50% in a day, while ordinary skins that used to sell for $10 suddenly jumped above $100. The total market value of all CS2 skins fell 49% from over $6 billion, evaporating more than $3 billion. A screenshot posted by a Chinese player showed his inventory losing 6.4 million yuan in value in a single day, and a Reddit user's collection of more than 600 formerly low-value red skins is now worth over 3.3 million pounds.
- Up to 82% of herbal-remedy books on Amazon may be AI-written
Originality.ai, which provides AI-detection tools to universities and businesses, scanned 558 books in Amazon's herbalism category between January and September and found that 82% of them were likely AI-written. AI slop has thoroughly overrun herbal-medicine titles on Amazon. Herbalist Sue Sprung says these books mislead readers. One suspected AI-written title, the Natural Healing Handbook, tops the bestseller list for skin care, aromatherapy, and herbal books; its author claims to be Luna Filby, an Australian herbalist and founder of the My Harmony Herb brand... yet beyond the Amazon listing, the internet contains no trace of her or the brand, and Originality.ai rated the book AI-generated with 100% confidence. Dan Conway, CEO of the UK's Publishers Association, says the group is pressing Amazon to label AI-written works.
- The ROG Xbox Ally performs better on Linux than on Windows
The ROG Xbox Ally, the Xbox handheld Microsoft built with Asus, runs a version of Windows optimized for handhelds, but testing shows Microsoft's handheld optimizations still have plenty of room to improve. A tester installed Bazzite on the device; it looks essentially the same as Valve's SteamOS but differs underneath: SteamOS is based on Arch, while Bazzite is based on Fedora. Running Kingdom Come: Deliverance 2 and Hogwarts Legacy under both Bazzite and Windows, the games averaged 13.47% higher FPS on Linux, with steadier frame rates, and up to 32% higher FPS (KCD2 in the 17W power mode).
- Django 6.0 beta 1 released
The Django open-source web framework project has released the first beta of v6.0. Beta 1 marks the development freeze; from here on the work is mainly fixing bugs and performance issues, with the release candidate expected in about a month and the final release planned for December 3. Django 6.0 supports Python 3.12, 3.13, and 3.14, and the developers recommend that third-party library authors drop support for versions earlier than Django 5.2. Major changes in Django 6.0 include support for Content Security Policy (CSP), template partials in the template language, email handling via Python's modern email API, and more.
- Drones used to kill animals with air-dropped darts
A social-media user posted video claiming that a horse he kept had been shot dead by a dart dropped from a thermal-imaging drone, drawing wide attention. Reporters found that a drone dart-dropping rig costs upwards of 30,000 yuan and requires a payload-release attachment that drops the darts via a light-sensor trigger. Some regions' hunting-ban notices now explicitly list "drones or other aircraft used to deliver javelins or darts" as prohibited hunting tools. Searching "drone air-drop arrowhead" on e-commerce platforms turns up many sellers offering conical arrowheads, marketed as "air-drop toothpicks," that mount on drone payload releases; the shops state they are for professionally qualified users only, and that non-professionals must be accompanied by a professional.
- Fujitsu launches a new laptop with a built-in Blu-ray drive
In a world where optical drives are increasingly rare, Japan's Fujitsu has launched a new laptop with a built-in Blu-ray drive. Since around 2015 most laptop manufacturers have stopped offering optical drives, but Japanese companies have declined to follow the trend. Fujitsu's new model, the FMV Note A A77-K3, has a drive that can read and burn Blu-ray discs and is built around an AMD Ryzen 7 7000-series APU (the 7735U). Fujitsu also launched two other FMV Note A laptops with 13th-generation Intel processors, which come with DVD drives rather than built-in Blu-ray.
- Drug-resistant bacteria are evolving faster than antibiotics
According to the WHO's latest report, the Global antibiotic resistance surveillance report 2025, rapidly evolving drug-resistant bacteria pose a growing global public-health threat. The report says antibiotic resistance rose by more than 40% on average across the monitored pathogen-drug combinations between 2018 and 2023, an increase of 5-15% per year. In 2023, one in six laboratory-confirmed bacterial infections proved resistant to antibiotic treatment. Drug-resistant Gram-negative bacteria pose the greatest threat, especially Escherichia coli and Klebsiella pneumoniae. The report warns that over 40% of E. coli and over 55% of K. pneumoniae strains are now resistant to third-generation cephalosporins, the first-choice drugs for treating these infections.