OrangeBot.AI Digest — 2025-10-25

60 headlines across 4 sources, aggregated for the day.

Hacker News (15)

  1. California invests in battery energy storage, leaving rolling blackouts behind (www.latimes.com)
  2. The Journey Before main() (amit.prasad.me)
  3. We do not have sufficient links to the UK for Online Safety Act to be applicable (libera.chat)
  4. Rock Tumbler Instructions (rocktumbler.com)
  5. Windows 10 Deadline Boosts Mac Sales (www.macrumors.com)
  6. Synadia and TigerBeetle Commit $512k USD to the Zig Software Foundation (www.synadia.com)
  7. Making a micro Linux distro (2023) (popovicu.com)
  8. Tell HN: OpenAI now requires ID verification and won't refund API credits
  9. The future of Python web services looks GIL-free (blog.baro.dev)
  10. React vs. Backbone in 2025 (backbonenotbad.hyperclay.com)
  11. Euro cops take down cybercrime network with 49M fake accounts (www.itnews.com.au)
  12. Meet the real screen addicts: the elderly (www.economist.com)
  13. Key IOCs for Pegasus and Predator Spyware Removed with iOS 26 Update (iverify.io)
  14. Advice for new principal tech ICs (i.e., notes to myself) (eugeneyan.com)
  15. People with blindness can read again after retinal implant and special glasses (www.nbcnews.com)

GitHub Trending (15)

  1. LadybirdBrowser / ladybird

    Truly independent web browser

  2. TheRobotStudio / SO-ARM100

    Standard Open Arm 100

  3. coinbase / x402

    A payments protocol for the internet. Built on HTTP.

  4. guofei9987 / blind_watermark

    Blind & invisible watermark: blind image watermarking, with no original image needed for extraction!
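
    A minimal usage sketch in Python, following the call pattern documented in the repo's README (file names, passwords, and the payload are placeholders):

      from blind_watermark import WaterMark

      # embed a text watermark into a cover image
      bwm = WaterMark(password_img=1, password_wm=1)
      bwm.read_img('original.png')
      bwm.read_wm('secret message', mode='str')
      bwm.embed('embedded.png')
      wm_len = len(bwm.wm_bit)  # bit length of the payload, needed at extraction time

      # later: extract the watermark with no access to the original image
      bwm2 = WaterMark(password_img=1, password_wm=1)
      print(bwm2.extract('embedded.png', wm_shape=wm_len, mode='str'))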

  5. paperless-ngx / paperless-ngx

    A community-supported supercharged document management system: scan, index and archive all your documents

  6. zephyrproject-rtos / zephyr

    Primary Git Repository for the Zephyr Project. Zephyr is a new generation, scalable, optimized, secure RTOS for multiple hardware architectures.

  7. hoppscotch / hoppscotch

    Open source API development ecosystem - https://hoppscotch.io (open-source alternative to Postman, Insomnia)

  8. public-apis / public-apis

    A collective list of free APIs

  9. ubicloud / ubicloud

    Open source alternative to AWS. Elastic compute, block storage (non-replicated), firewall and load balancer, managed Postgres, K8s, AI inference, and IAM services.

  10. yt-dlp / yt-dlp

    A feature-rich command-line audio/video downloader
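
    Besides the CLI, yt-dlp is embeddable from Python; a minimal sketch using its documented YoutubeDL interface (the URL and output template are placeholders):

      from yt_dlp import YoutubeDL

      # download the best available audio for each URL in the list
      opts = {'format': 'bestaudio/best', 'outtmpl': '%(title)s.%(ext)s'}
      with YoutubeDL(opts) as ydl:
          ydl.download(['https://example.com/watch?v=placeholder'])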

  11. OpenMind / OM1

    Modular AI runtime for robots

  12. jaywcjlove / awesome-mac

     Now grown far beyond the original idea: a collection of premium software in various categories.

  13. ashishps1 / awesome-system-design-resources

    Learn System Design concepts and prepare for interviews using free resources.

  14. minio / minio

    MinIO is a high-performance, S3 compatible object store, open sourced under GNU AGPLv3 license.
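
    A minimal sketch against the MinIO Python SDK (the endpoint and credentials are placeholders; any S3-compatible endpoint works):

      from minio import Minio  # pip install minio

      client = Minio('localhost:9000', access_key='YOUR-KEY',
                     secret_key='YOUR-SECRET', secure=False)

      # create a bucket if needed, then upload a local file as an object
      if not client.bucket_exists('demo-bucket'):
          client.make_bucket('demo-bucket')
      client.fput_object('demo-bucket', 'report.pdf', '/tmp/report.pdf')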

  15. k2-fsa / sherpa-onnx

    Speech-to-text, text-to-speech, speaker diarization, speech enhancement, source separation, and VAD using next-gen Kaldi with onnxruntime, fully offline. Supports embedded systems, Android, iOS, HarmonyOS, Raspberry Pi, RISC-V, x86_64 servers, websocket server/client, and 12 programming languages.

Hugging Face (15)

  1. Human-Agent Collaborative Paper-to-Page Crafting for Under $0.1

    In the quest for scientific progress, communicating research is as vital as the discovery itself. Yet, researchers are often sidetracked by the manual, repetitive chore of building project webpages to make their dense papers accessible. While automation has tackled static slides and posters, the dynamic, interactive nature of webpages has remained an unaddressed challenge. To bridge this gap, we reframe the problem, arguing that the solution lies not in a single command, but in a collaborative, hierarchical process. We introduce AutoPage, a novel multi-agent system that embodies this philosophy. AutoPage deconstructs paper-to-page creation into a coarse-to-fine pipeline from narrative planning to multimodal content generation and interactive rendering. To combat AI hallucination, dedicated "Checker" agents verify each step against the source paper, while optional human checkpoints ensure the final product aligns perfectly with the author's vision, transforming the system from a mere tool into a powerful collaborative assistant. To rigorously validate our approach, we also construct PageBench, the first benchmark for this new task. Experiments show AutoPage not only generates high-quality, visually appealing pages but does so with remarkable efficiency in under 15 minutes for less than $0.1. Code and dataset will be released at https://mqleet.github.io/AutoPage_ProjectPage/.

  2. AdaSPEC: Selective Knowledge Distillation for Efficient Speculative Decoders

    Speculative Decoding (SD) accelerates large language model inference by employing a small draft model to generate predictions, which are then verified by a larger target model. The effectiveness of SD hinges on the alignment between these models, which is typically enhanced by Knowledge Distillation (KD). However, conventional KD methods aim to minimize the KL divergence between the draft and target models across all tokens, a goal that is misaligned with the true objective of SD, which is to maximize token acceptance rate. Therefore, draft models often struggle to fully assimilate the target model's knowledge due to capacity constraints, leading to suboptimal performance. To address this challenge, we propose AdaSPEC, a novel method that incorporates selective token filtering into the KD process. AdaSPEC utilizes a reference model to identify and filter out difficult-to-fit tokens, enabling the distillation of a draft model that better aligns with the target model on simpler tokens. This approach improves the overall token acceptance rate without compromising generation quality. We evaluate AdaSPEC across diverse tasks, including arithmetic reasoning, instruction-following, coding, and summarization, using model configurations of 31M/1.4B and 350M/2.7B parameters. Our results demonstrate that AdaSPEC consistently outperforms the state-of-the-art DistillSpec method, achieving higher acceptance rates across all tasks (up to 15%). The code is publicly available at https://github.com/yuezhouhu/adaspec.
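
    A rough sketch of the selective-filtering idea as the abstract describes it (not the authors' implementation): a reference model scores per-token difficulty, and the distillation loss is applied only to the easier tokens. The keep ratio is an illustrative knob.

      import torch
      import torch.nn.functional as F

      def selective_kd_loss(draft_logits, target_logits, ref_logits, labels, keep_ratio=0.7):
          """Distill draft -> target only on tokens the reference model finds easy."""
          # per-token difficulty: reference model's cross-entropy on the gold labels
          ref_ce = F.cross_entropy(ref_logits.transpose(1, 2), labels, reduction='none')  # (B, T)
          k = max(1, int(keep_ratio * ref_ce.numel()))
          flat = ref_ce.view(-1)
          keep = torch.zeros_like(flat, dtype=torch.bool)
          keep[flat.topk(k, largest=False).indices] = True  # keep the easiest tokens

          # standard KL distillation term, masked to the kept tokens
          kl = F.kl_div(F.log_softmax(draft_logits, dim=-1),
                        F.softmax(target_logits, dim=-1),
                        reduction='none').sum(-1).view(-1)   # per-token KL
          return kl[keep].mean()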

  3. Open-o3 Video: Grounded Video Reasoning with Explicit Spatio-Temporal Evidence

    Most video reasoning models only generate textual reasoning traces without indicating when and where key evidence appears. Recent models such as OpenAI-o3 have sparked wide interest in evidence-centered reasoning for images, yet extending this ability to videos is more challenging, as it requires joint temporal tracking and spatial localization across dynamic scenes. We introduce Open-o3 Video, a non-agent framework that integrates explicit spatio-temporal evidence into video reasoning, and carefully collect training data and design training strategies to address the aforementioned challenges. The model highlights key timestamps, objects, and bounding boxes alongside its answers, allowing reasoning to be grounded in concrete visual observations. To enable this functionality, we first curate and build two high-quality datasets, STGR-CoT-30k for SFT and STGR-RL-36k for RL, with carefully constructed temporal and spatial annotations, since most existing datasets offer either temporal spans for videos or spatial boxes on images, lacking unified spatio-temporal supervision and reasoning traces. Then, we adopt a cold-start reinforcement learning strategy with multiple specially designed rewards that jointly encourage answer accuracy, temporal alignment, and spatial precision. On the V-STAR benchmark, Open-o3 Video achieves state-of-the-art performance, raising mAM by 14.4% and mLGM by 24.2% on the Qwen2.5-VL baseline. Consistent improvements are also observed on a broad range of video understanding benchmarks, including VideoMME, WorldSense, VideoMMMU, and TVGBench. Beyond accuracy, the reasoning traces produced by Open-o3 Video also provide valuable signals for test-time scaling, enabling confidence-aware verification and improving answer reliability.
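
    The abstract's jointly designed rewards (answer accuracy, temporal alignment, spatial precision) can be pictured as a weighted sum over overlap terms; a toy sketch with illustrative weights, not the paper's exact formulation:

      def interval_iou(pred, gold):
          """IoU of two (start, end) time spans in seconds."""
          inter = max(0.0, min(pred[1], gold[1]) - max(pred[0], gold[0]))
          union = (pred[1] - pred[0]) + (gold[1] - gold[0]) - inter
          return inter / union if union > 0 else 0.0

      def box_iou(a, b):
          """IoU of two (x1, y1, x2, y2) boxes."""
          ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
          iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
          inter = ix * iy
          union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
          return inter / union if union > 0 else 0.0

      def grounded_reward(answer_ok, pred_span, gold_span, pred_box, gold_box,
                          w_ans=0.5, w_t=0.25, w_s=0.25):
          # correctness plus temporal and spatial grounding, each in [0, 1]
          return (w_ans * float(answer_ok)
                  + w_t * interval_iou(pred_span, gold_span)
                  + w_s * box_iou(pred_box, gold_box))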

  4. HoloCine: Holistic Generation of Cinematic Multi-Shot Long Video Narratives

    State-of-the-art text-to-video models excel at generating isolated clips but fall short of creating the coherent, multi-shot narratives, which are the essence of storytelling. We bridge this "narrative gap" with HoloCine, a model that generates entire scenes holistically to ensure global consistency from the first shot to the last. Our architecture achieves precise directorial control through a Window Cross-Attention mechanism that localizes text prompts to specific shots, while a Sparse Inter-Shot Self-Attention pattern (dense within shots but sparse between them) ensures the efficiency required for minute-scale generation. Beyond setting a new state-of-the-art in narrative coherence, HoloCine develops remarkable emergent abilities: a persistent memory for characters and scenes, and an intuitive grasp of cinematic techniques. Our work marks a pivotal shift from clip synthesis towards automated filmmaking, making end-to-end cinematic creation a tangible future. Our code is available at: https://holo-cine.github.io/.
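
    The "dense within shots but sparse between them" pattern amounts to a block-structured attention mask; a toy construction in which the cross-shot rule (every token may attend to each shot's first token) is an illustrative stand-in for whatever sparse links the paper actually uses:

      import numpy as np

      def inter_shot_mask(shot_lens):
          """Boolean attention mask: dense inside each shot, sparse across shots."""
          shot_id = np.repeat(np.arange(len(shot_lens)), shot_lens)  # token -> shot index
          mask = shot_id[:, None] == shot_id[None, :]                # dense within-shot blocks
          firsts = np.cumsum([0] + list(shot_lens[:-1]))             # first token of each shot
          mask[:, firsts] = True                                     # sparse cross-shot links
          return mask

      # two shots of 3 and 2 tokens: 5x5 mask, block-diagonal plus columns 0 and 3
      print(inter_shot_mask([3, 2]).astype(int))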

  5. DyPE: Dynamic Position Extrapolation for Ultra High Resolution Diffusion

    Diffusion Transformer models can generate images with remarkable fidelity and detail, yet training them at ultra-high resolutions remains extremely costly due to the self-attention mechanism's quadratic scaling with the number of image tokens. In this paper, we introduce Dynamic Position Extrapolation (DyPE), a novel, training-free method that enables pre-trained diffusion transformers to synthesize images at resolutions far beyond their training data, with no additional sampling cost. DyPE takes advantage of the spectral progression inherent to the diffusion process, where low-frequency structures converge early, while high-frequencies take more steps to resolve. Specifically, DyPE dynamically adjusts the model's positional encoding at each diffusion step, matching their frequency spectrum with the current stage of the generative process. This approach allows us to generate images at resolutions that exceed the training resolution dramatically, e.g., 16 million pixels using FLUX. On multiple benchmarks, DyPE consistently improves performance and achieves state-of-the-art fidelity in ultra-high-resolution image generation, with gains becoming even more pronounced at higher resolutions. Project page is available at https://noamissachar.github.io/DyPE/.
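
    The core move, retuning positional frequencies per diffusion step, can be caricatured in a few lines; a toy RoPE-style sketch in which the interpolation schedule is an assumption, not the paper's actual scheme:

      import numpy as np

      def dynamic_rope_freqs(dim, train_len, target_len, t):
          """Toy schedule: at t near 1 (noisy steps) positions are stretched so the
          global layout fits the larger canvas; as t -> 0 the native frequencies
          return to resolve high-frequency detail."""
          inv_freq = 10000.0 ** (-np.arange(0, dim, 2) / dim)  # standard RoPE frequencies
          scale = (target_len / train_len) ** t                # extrapolation factor per step
          return inv_freq / scale

      coarse = dynamic_rope_freqs(64, 1024, 4096, t=1.0)  # start of sampling
      fine = dynamic_rope_freqs(64, 1024, 4096, t=0.0)    # end of sampling (native RoPE)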

  6. Loopholing Discrete Diffusion: Deterministic Bypass of the Sampling Wall

    Discrete diffusion models offer a promising alternative to autoregressive generation through parallel decoding, but they suffer from a sampling wall: once categorical sampling occurs, rich distributional information collapses into one-hot vectors and cannot be propagated across steps, forcing subsequent steps to operate with limited information. To mitigate this problem, we introduce Loopholing, a novel and simple mechanism that preserves this information via a deterministic latent pathway, leading to Loopholing Discrete Diffusion Models (LDDMs). Trained efficiently with a self-conditioning strategy, LDDMs achieve substantial gains, reducing generative perplexity by up to 61% over prior baselines, closing (and in some cases surpassing) the gap with autoregressive models, and producing more coherent text. Applied to reasoning tasks, LDDMs also improve performance on arithmetic benchmarks such as Countdown and Game of 24. These results also indicate that loopholing mitigates idle steps and oscillations, providing a scalable path toward high-quality non-autoregressive text generation.
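
    A schematic single step of the mechanism as the abstract describes it: the sampled tokens move forward as usual, but a deterministic continuous latent is carried alongside them so distributional information survives the sampling wall. The fusion layer here is illustrative.

      import torch
      import torch.nn as nn

      class LoopholingStep(nn.Module):
          """One decoding step that carries a deterministic latent past the sampling wall."""
          def __init__(self, vocab_size, dim):
              super().__init__()
              self.embed = nn.Embedding(vocab_size, dim)
              self.fuse = nn.Linear(2 * dim, dim)   # mix token embedding with carried latent
              self.head = nn.Linear(dim, vocab_size)

          def forward(self, tokens, latent):
              h = torch.tanh(self.fuse(torch.cat([self.embed(tokens), latent], dim=-1)))
              logits = self.head(h)
              next_tokens = torch.distributions.Categorical(logits=logits).sample()
              return next_tokens, h  # h: the deterministic pathway, no one-hot collapse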

  7. SAKE: Towards Editing Auditory Attribute Knowledge of Large Audio-Language Models

    Knowledge editing offers an efficient way to update model knowledge without full retraining, but prior work has concentrated almost exclusively on textual or visual modalities. We introduce SAKE, the first benchmark specifically designed for editing auditory attribute knowledge in Large Audio-Language Models (LALMs). Unlike factual updates, SAKE targets several abstract auditory attributes, capturing knowledge types that go beyond conventional textual and visual domains. We benchmark seven editing methods on two LALMs along four dimensions: reliability, generality, audio/text locality, and portability. Results highlight challenges such as preserving intra-attribute knowledge unrelated to the edit, generalizing edits to multimodal reasoning, and maintaining edits under sequential updates. SAKE provides a principled framework to study how knowledge editing extends to the auditory modalities, opening new directions for maintaining and adapting LALMs in more diverse real-world scenarios.

  8. The Massive Legal Embedding Benchmark (MLEB)

    We present the Massive Legal Embedding Benchmark (MLEB), the largest, most diverse, and most comprehensive open-source benchmark for legal information retrieval to date. MLEB consists of ten expert-annotated datasets spanning multiple jurisdictions (the US, UK, EU, Australia, Ireland, and Singapore), document types (cases, legislation, regulatory guidance, contracts, and literature), and task types (search, zero-shot classification, and question answering). Seven of the datasets in MLEB were newly constructed in order to fill domain and jurisdictional gaps in the open-source legal information retrieval landscape. We document our methodology in building MLEB and creating the new constituent datasets, and release our code, results, and data openly to assist with reproducible evaluations.

  9. Every Question Has Its Own Value: Reinforcement Learning with Explicit Human Values

    We propose Reinforcement Learning with Explicit Human Values (RLEV), a method that aligns Large Language Model (LLM) optimization directly with quantifiable human value signals. While Reinforcement Learning with Verifiable Rewards (RLVR) effectively trains models in objective domains using binary correctness rewards, it overlooks that not all tasks are equally significant. RLEV extends this framework by incorporating human-defined value signals directly into the reward function. Using exam-style data with explicit ground-truth value labels, RLEV consistently outperforms correctness-only baselines across multiple RL algorithms and model scales. Crucially, RLEV policies not only improve value-weighted accuracy but also learn a value-sensitive termination policy: concise for low-value prompts, thorough for high-value ones. We demonstrate this behavior stems from value-weighted gradient amplification on end-of-sequence tokens. Ablation studies confirm the gain is causally linked to value alignment. RLEV remains robust under noisy value signals, such as difficulty-based labels, demonstrating that optimizing for an explicit utility function offers a practical path to aligning LLMs with human priorities.
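
    The reward modification itself is tiny on top of RLVR; a sketch under the abstract's description, where the value label comes with each exam question (names are illustrative):

      def rlev_reward(pred_answer, gold_answer, value):
          """RLVR gives binary correctness; RLEV scales it by the question's human value."""
          correct = float(pred_answer == gold_answer)
          return value * correct  # high-value questions contribute larger gradients

      # e.g. a 5-point exam question answered correctly
      assert rlev_reward('42', '42', value=5.0) == 5.0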

  10. Investigating Safety Vulnerabilities of Large Audio-Language Models Under Speaker Emotional Variations

    Large audio-language models (LALMs) extend text-based LLMs with auditory understanding, offering new opportunities for multimodal applications. While their perception, reasoning, and task performance have been widely studied, their safety alignment under paralinguistic variation remains underexplored. This work systematically investigates the role of speaker emotion. We construct a dataset of malicious speech instructions expressed across multiple emotions and intensities, and evaluate several state-of-the-art LALMs. Our results reveal substantial safety inconsistencies: different emotions elicit varying levels of unsafe responses, and the effect of intensity is non-monotonic, with medium expressions often posing the greatest risk. These findings highlight an overlooked vulnerability in LALMs and call for alignment strategies explicitly designed to ensure robustness under emotional variation, a prerequisite for trustworthy deployment in real-world settings.

  11. Seed3D 1.0: From Images to High-Fidelity Simulation-Ready 3D Assets

    Developing embodied AI agents requires scalable training environments that balance content diversity with physics accuracy. World simulators provide such environments but face distinct limitations: video-based methods generate diverse content but lack real-time physics feedback for interactive learning, while physics-based engines provide accurate dynamics but face scalability limitations from costly manual asset creation. We present Seed3D 1.0, a foundation model that generates simulation-ready 3D assets from single images, addressing the scalability challenge while maintaining physics rigor. Unlike existing 3D generation models, our system produces assets with accurate geometry, well-aligned textures, and realistic physically-based materials. These assets can be directly integrated into physics engines with minimal configuration, enabling deployment in robotic manipulation and simulation training. Beyond individual objects, the system scales to complete scene generation through assembling objects into coherent environments. By enabling scalable simulation-ready content creation, Seed3D 1.0 provides a foundation for advancing physics-based world simulators. Seed3D 1.0 is now available on https://console.volcengine.com/ark/region:ark+cn-beijing/experience/vision?modelId=doubao-seed3d-1-0-250928&tab=Gen3D

  12. Thought Communication in Multiagent Collaboration

    Natural language has long enabled human cooperation, but its lossy, ambiguous, and indirect nature limits the potential of collective intelligence. While machines are not subject to these constraints, most LLM-based multi-agent systems still rely solely on natural language, exchanging tokens or their embeddings. To go beyond language, we introduce a new paradigm, thought communication, which enables agents to interact directly mind-to-mind, akin to telepathy. To uncover these latent thoughts in a principled way, we formalize the process as a general latent variable model, where agent states are generated by an unknown function of underlying thoughts. We prove that, in a nonparametric setting without auxiliary information, both shared and private latent thoughts between any pair of agents can be identified. Moreover, the global structure of thought sharing, including which agents share which thoughts and how these relationships are structured, can also be recovered with theoretical guarantees. Guided by the established theory, we develop a framework that extracts latent thoughts from all agents prior to communication and assigns each agent the relevant thoughts, along with their sharing patterns. This paradigm naturally extends beyond LLMs to all modalities, as most observational data arise from hidden generative processes. Experiments on both synthetic and real-world benchmarks validate the theory and demonstrate the collaborative advantages of thought communication. We hope this work illuminates the potential of leveraging the hidden world, as many challenges remain unsolvable through surface-level observation alone, regardless of compute or data scale.

  13. Search Self-play: Pushing the Frontier of Agent Capability without Supervision

    Reinforcement learning with verifiable rewards (RLVR) has become the mainstream technique for training LLM agents. However, RLVR highly depends on well-crafted task queries and corresponding ground-truth answers to provide accurate rewards, which requires massive human effort and hinders RL scaling, especially in agentic scenarios. Although a few recent works explore task synthesis methods, the difficulty of generated agentic tasks can hardly be controlled to provide effective RL training advantages. To achieve agentic RLVR with higher scalability, we explore self-play training for deep search agents, in which the learning LLM utilizes multi-turn search engine calling and acts simultaneously as both a task proposer and a problem solver. The task proposer aims to generate deep search queries with well-defined ground-truth answers and increasing task difficulty. The problem solver tries to handle the generated search queries and output the correct answer predictions. To ensure that each generated search query has accurate ground truth, we collect all the searching results from the proposer's trajectory as external knowledge, then conduct retrieval-augmented generation (RAG) to test whether the proposed query can be correctly answered with all necessary search documents provided. In this search self-play (SSP) game, the proposer and the solver co-evolve their agent capabilities through both competition and cooperation. Extensive experiments show that SSP significantly and uniformly improves search agents' performance on various benchmarks without any supervision, under both from-scratch and continuous RL training setups. The code is at https://github.com/Alibaba-Quark/SSP.
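
    One round of the self-play game, in schematic form; the proposer/solver/RAG calls below are hypothetical placeholders for the actual multi-turn agent rollouts:

      def ssp_round(agent, search_engine):
          # proposer role: a multi-turn search rollout ending in a (query, answer) pair,
          # with every retrieved document kept as candidate ground-truth evidence
          query, answer, docs = agent.propose_task(search_engine)

          # sanity-check the ground truth: with all the proposer's documents supplied,
          # RAG must reproduce the answer, otherwise the task is rejected
          if agent.answer_with_docs(query, docs) != answer:
              return None

          # solver role: attempt the query with live search; reward is exact match
          prediction = agent.solve(query, search_engine)
          return float(prediction == answer)  # RL reward shared by both roles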

  14. Conan: Progressive Learning to Reason Like a Detective over Multi-Scale Visual Evidence

    Video reasoning, which requires multi-step deduction across frames, remains a major challenge for multimodal large language models (MLLMs). While reinforcement learning (RL)-based methods enhance reasoning capabilities, they often rely on text-only chains that yield ungrounded or hallucinated conclusions. Conversely, frame-retrieval approaches introduce visual grounding but still struggle with inaccurate evidence localization. To address these challenges, we present Conan, a framework for evidence-grounded multi-step video reasoning. Conan identifies contextual and evidence frames, reasons over cross-frame clues, and adaptively decides when to conclude or explore further. To achieve this, we (1) construct Conan-91K, a large-scale dataset of automatically generated reasoning traces that includes frame identification, evidence reasoning, and action decision, and (2) design a multi-stage progressive cold-start strategy combined with an Identification-Reasoning-Action (AIR) RLVR training framework to jointly enhance multi-step visual reasoning. Extensive experiments on six multi-step reasoning benchmarks demonstrate that Conan surpasses the baseline Qwen2.5-VL-7B-Instruct by an average of over 10% in accuracy, achieving state-of-the-art performance. Furthermore, Conan generalizes effectively to long-video understanding tasks, validating its strong scalability and robustness.

  15. LayerComposer: Interactive Personalized T2I via Spatially-Aware Layered Canvas

    Despite their impressive visual fidelity, existing personalized generative models lack interactive control over spatial composition and scale poorly to multiple subjects. To address these limitations, we present LayerComposer, an interactive framework for personalized, multi-subject text-to-image generation. Our approach introduces two main contributions: (1) a layered canvas, a novel representation in which each subject is placed on a distinct layer, enabling occlusion-free composition; and (2) a locking mechanism that preserves selected layers with high fidelity while allowing the remaining layers to adapt flexibly to the surrounding context. Similar to professional image-editing software, the proposed layered canvas allows users to place, resize, or lock input subjects through intuitive layer manipulation. Our versatile locking mechanism requires no architectural changes, relying instead on inherent positional embeddings combined with a new complementary data sampling strategy. Extensive experiments demonstrate that LayerComposer achieves superior spatial control and identity preservation compared to the state-of-the-art methods in multi-subject personalized image generation.

Solidot (15)

  1. 2023 marine heatwave drove Florida's reef-building corals to functional extinction

    According to research published in Science, the record-breaking 2023 marine heatwave killed nearly all of Florida's critically endangered Acropora coral colonies, marking the genus's functional extinction on Florida's reefs. The findings sound a dire warning for the future of coral ecosystems in rapidly warming oceans. Extreme climate events such as marine heatwaves are growing in frequency and intensity and are severely degrading the health, structure, and resilience of ecosystems worldwide. Coral reefs are among the most heat-sensitive marine ecosystems, and rising ocean temperatures have driven mass bleaching and mortality for decades. The study shows that during this unprecedented event, Florida's reefs experienced the region's highest ocean temperatures on record, peaking at 32.3 °C in July 2023. As the heatwave persisted, by March 2024 between 97.8% and 100% of elkhorn and staghorn coral colonies in the Florida Keys and the Dry Tortugas had died of prolonged, extreme heat stress. Mortality was lower in the northern region (37.9%), likely because waters off southeastern Florida were cooler.

  2. New Nobel laureate engineers long-lasting regulatory T cells

    The 2025 Nobel Prize in Physiology or Medicine went to three scientists for the discovery of regulatory T cells (Tregs), the cells that stop the immune system from mistakenly attacking the body's own organs. If scientists could manufacture Tregs in bulk that persist and keep working in the body, they could become an effective therapy for autoimmune diseases. One of the new laureates, immunologist Shimon Sakaguchi of Osaka University in Japan, has used a new method to produce large numbers of durable Tregs. In the first of two papers published October 22 in Science Translational Medicine, he and colleagues describe how the lab-generated cells effectively suppressed immune responses in mice. In the second, he and other researchers produced Tregs to treat an autoimmune skin disease in mice and used a similar approach to derive human Tregs from the blood of patients with a painful condition. Sakaguchi makes Tregs from conventional T cells, including the very cells that drive autoimmune disease; in blood, these cells are more common than Tregs and easier to grow in culture.

  3. CS2 skin crash wipes more than $3 billion off market value

    Counter-Strike 2 (CS2) hosts a huge skin-trading market, but a "minor update" Valve shipped on Wednesday turned it upside down: once-prized rare skins are no longer rare, their prices have collapsed, and players holding them have taken heavy losses. Valve's latest update lets players combine five red-tier skins into a rare item via Trade Up contracts. As a result, rare knives that sold for over $14,000 one day plunged more than 50% the next, while ordinary skins that previously sold for $10 shot past $100. The total market value of all CS2 skins fell 49% from over $6 billion, erasing more than $3 billion. A screenshot posted by a Chinese player showed his inventory losing 6.4 million yuan in value in a single day, while a Reddit user's collection of more than 600 formerly cheap red skins is now worth over £3.3 million.

  4. As many as 82% of herbal-medicine books on Amazon may be AI-written

    Originality.ai, which supplies AI-detection tools to universities and businesses, scanned 558 books in Amazon's herbalism category between January and September and found that 82% were likely AI-written. AI slop has thoroughly overrun herbal-medicine titles on Amazon. Herbalist Sue Sprung says such books mislead readers. One suspected AI-written title, "Natural Healing Handbook", tops the bestseller list for skincare, aromatherapy, and herbal books; its author claims to be Luna Filby, an Australian herbalist and founder of the My Harmony Herb brand. Yet apart from the Amazon listing page, the internet holds no trace of her or the brand, and Originality.ai rated the book AI-generated with 100% confidence. Dan Conway, CEO of the UK Publishers Association, says the group is urging Amazon to label AI-generated works.

  5. ROG Xbox Ally performs better on Linux than on Windows

    The ROG Xbox Ally, the Xbox handheld Microsoft built with ASUS, ships with a handheld-optimized build of Windows, but testing shows Microsoft's optimizations still leave plenty of room for improvement. A tester installed Bazzite on the handheld; it looks much like Valve's SteamOS but differs underneath: SteamOS is based on Arch, while Bazzite is based on Fedora. Running Kingdom Come: Deliverance 2 and Hogwarts Legacy on both Bazzite and Windows, the games averaged 13.47% higher FPS on Linux with steadier frame pacing, peaking at 32% higher (KCD2 in the 17 W power mode).

  6. Django 6.0 beta 1 released

    The Django open-source web framework project has released the first beta of v6.0. Beta 1 marks the development freeze; the remaining work is mainly bug and performance fixes, with a release candidate expected in about a month and the final release scheduled for December 3. Django 6.0 supports Python 3.12, 3.13, and 3.14, and the developers recommend that third-party library authors drop support for versions before Django 5.2. Headline changes in Django 6.0 include built-in Content Security Policy (CSP) support, template partials in the template language, use of Python's modern email API for sending mail, and more.

  7. Drones are being used to shoot animals with dropped arrows

    A social media user posted a video claiming their horse had been killed by an arrow dropped from a thermal-imaging drone, drawing widespread attention. Reporters found that a drone arrow-dropping rig costs upwards of 30,000 yuan and requires a payload-release attachment that drops the arrow on a light-sensor trigger. Some regions' hunting-ban notices now explicitly list "drones or other aircraft used to assist in launching spears or arrows" as prohibited hunting gear. Searching "drone airdrop arrowhead" on e-commerce platforms turns up many merchants selling conical arrowheads, marketed as "airdrop toothpicks", that mount on drone payload releases; the listings state they are for professionally qualified users only, with non-professionals required to be accompanied by a professional.

  8. Fujitsu launches a new laptop with a built-in Blu-ray drive

    In a world where optical drives have all but vanished, Japan's Fujitsu has launched a new laptop with a built-in Blu-ray drive. Since 2015, the vast majority of laptop makers have stopped offering optical drives, but Japanese companies have resisted the trend. The new model, the FMV Note A A77-K3, has a drive that can read and burn Blu-ray discs and is powered by an AMD Ryzen 7 7000-series APU (the 7735U). Fujitsu also launched two other FMV Note A laptops with 13th-gen Intel processors; those come with DVD drives rather than Blu-ray.

  9. Drug-resistant bacteria are outpacing antibiotics

    According to the WHO's latest report, the "Global antibiotic resistance surveillance report 2025", rapidly evolving drug-resistant bacteria pose a growing global public-health threat. The report finds that between 2018 and 2023, antibiotic resistance rose by more than 40% on average across the monitored pathogen-drug combinations, climbing 5-15% per year. In 2023, one in six laboratory-confirmed bacterial infections proved resistant to antibiotic treatment. Resistant Gram-negative bacteria pose the greatest threat, above all Escherichia coli and Klebsiella pneumoniae. The report warns that over 40% of E. coli strains and over 55% of K. pneumoniae strains are now resistant to third-generation cephalosporins, the first-line drugs for treating such infections.

  10. Trump pardons Changpeng Zhao

    US President Trump has pardoned Binance founder Changpeng Zhao. Binance, the world's largest cryptocurrency exchange, works closely with World Liberty Financial, the Trump family's crypto venture. The US SEC brought 13 charges against Binance and Zhao in June 2023, and that November Zhao pleaded guilty to violating US anti-money-laundering law. After paying a massive fine, he was sentenced in April 2024 to four months in prison and released in September. He was also permanently barred from the crypto banking business; lifting that restriction is what he asked Trump to pardon.

  11. China's installed nuclear capacity tops 125 GW

    According to the latest figures, China has 59 nuclear power units in operation with a combined capacity of 62.48 GW, plus 53 approved or under-construction units totaling 62.93 GW, pushing total capacity past 125 GW, the largest in the world for consecutive years. Hualong One, a third-generation pressurized-water reactor design, now accounts for 41 units in operation or under construction. Global nuclear generation hit a ten-year high in 2024, and nuclear power is growing ever more important as the global energy mix shifts toward clean, low-carbon sources. Global installed nuclear capacity is projected to exceed 900 GW by 2050.

  12. Astronomers find a super-Earth in a star's habitable zone

    Astronomers have discovered a super-Earth in the habitable zone of a star less than 20 light-years from Earth. The exoplanet, designated GJ 251 c, has about four times Earth's mass and may be rocky. It was found in data gathered by the Habitable-Zone Planet Finder (HPF); it orbits its star, a red dwarf in the constellation Gemini, once every 54 days. Whether GJ 251 c has an atmosphere or hosts life cannot yet be confirmed, but it makes a promising target for future exploration. The study was published in The Astronomical Journal.

  13. OpenBSD 7.8 released

    The OpenBSD project has released v7.8. Major changes include: AMD Secure Encrypted Virtualization (AMD SEV) support enabled; Raspberry Pi 5 support; new drivers for Qualcomm, Rockchip, and Apple ARM hardware; improved FUSE filesystem compatibility with the Linux implementation; suspend/hibernate and SMP improvements; DRM graphics drivers updated to Linux 6.12.50; H.264 video support in the uvideo driver; and numerous network driver improvements.

  14. Fedora approves policy on AI use

    The Fedora Council has approved the latest version of its AI-Assisted Contributions policy, which allows contributors to use AI to assist with coding or to polish text. Contributors bear full responsibility for their contributions regardless of how much was AI-generated. If most of a contribution was produced by AI tools, the contributor must disclose the tool usage; using AI merely to fix grammar and spelling or to polish text requires no disclosure. AI tools may be used to provide reviewers with analysis and suggestions, but reviewers must not treat an AI's judgment as the final verdict.

  15. NVIDIA China Developer Day 2025 to be held in Suzhou on November 14

    Aimed at AI startup teams, technical founders, developers, AI engineers, and technical decision-makers, the Developer Day not only covers frontier topics such as large-model application development, robotics, and physical AI (a main forum plus three technical tracks), but also offers a full day of hands-on AI training and toolchain demos, face-to-face time with NVIDIA engineers and industry deployment partners, and a free NVIDIA Certified Associate (NCA) certification exam, normally 960 yuan but fully waived for attendees. Exam candidates choose one of three subjects: NCA-GENL (generative AI / large language model development), NCA-GENM (multimodal generative AI: text/image/audio), or NCA-AIIO (AI infrastructure and operations). Only 100 free exam seats are available. The event aims to drive technical innovation at startups and accelerate product commercialization; registration is open now at https://jinshuju.com/f/Uh4yZ6?x_field_1=zhiding or https://developer.nvidia.cn/developer-day?ncid=pa-so-zdn-510609-vt16.