OrangeBot.AI Digest — 2025-11-14

60 headlines across 8 sources, aggregated for this day.

Hacker News (15)

  1. Show HN: Epstein Files Organized and Searchable (searchepsteinfiles.com)
  2. AI World Clocks (clocks.brianmoore.com)
  3. A race condition in Aurora RDS (hightouch.com)
  4. Bitchat for Gaza – messaging without internet (updates.techforpalestine.org)
  5. The disguised return of EU Chat Control (reclaimthenet.org)
  6. Germany to ban Huawei from future 6G network (www.bloomberg.com)
  7. Being poor vs. being broke (blog.ctms.me)
  8. 'No One Lives Forever' turns 25 and you still can't buy it legitimately (www.techdirt.com)
  9. Oracle hit hard in Wall Street's tech sell-off over its AI bet (www.ft.com)
  10. I think nobody wants AI in Firefox, Mozilla (manualdousuario.net)
  11. Winamp clone in Swift for macOS (github.com)
  12. AGI fantasy is a blocker to actual engineering (www.tomwphillips.co.uk)
  13. Nvidia is gearing up to sell servers instead of just GPUs and components (www.tomshardware.com)
  14. Backblaze Drive Stats for Q3 2025 (www.backblaze.com)
  15. Honda: 2 years of ML vs. 1 month of prompting - here's what we learned (www.levs.fyi)

GitHub Trending (15)

  1. sansan0 / TrendRadar

    🎯 Say goodbye to information overload: AI helps you make sense of trending news, with simple public-opinion monitoring and analysis. A multi-platform trending-topic aggregator plus MCP-based AI analysis tools. Monitors 35 platforms (Douyin, Zhihu, Bilibili, Wallstreetcn, Cailian Press, and more) with smart filtering, automatic push notifications, and AI chat analysis (mine news in natural language: trend tracking, sentiment analysis, similarity search, and more, 13 tools in total). Supports push via WeCom, Feishu, DingTalk, Telegram, email, and ntfy; 30-second web deployment, phone notifications in 1 minute, no programming required. Docker deployment supported. ⭐ Make the algorithm work for you; use AI to understand what's trending.

  2. google / adk-go

    An open-source, code-first Go toolkit for building, evaluating, and deploying sophisticated AI agents with flexibility and control.

  3. TapXWorld / ChinaTextbook

    All primary-, middle-, and high-school and university PDF textbooks.

  4. yeongpin / cursor-free-vip

    [Support 0.49.x] (Reset Cursor AI MachineID & Bypass Higher Token Limit) Cursor AI: automatically resets the machine ID for a free upgrade to Pro features, working around: "You've reached your trial request limit. / Too many free trial accounts used on this machine. Please upgrade to pro. We have this limit in place to prevent abuse. Please let us know if you believe this is a mistake."

  5. nvm-sh / nvm

    Node Version Manager - POSIX-compliant bash script to manage multiple active node.js versions

  6. traefik / traefik

    The Cloud Native Application Proxy

  7. HKUDS / LightRAG

    [EMNLP2025] "LightRAG: Simple and Fast Retrieval-Augmented Generation"

  8. bobeff / open-source-games

    A list of open source games.

  9. volcengine / verl

    verl: Volcano Engine Reinforcement Learning for LLMs

  10. GibsonAI / Memori

    Open-Source Memory Engine for LLMs, AI Agents & Multi-Agent Systems

  11. yangshun / tech-interview-handbook

    Curated coding interview preparation materials for busy software engineers

  12. microsoft / call-center-ai

    Send a phone call from AI agent, in an API call. Or, directly call the bot from the configured phone number!

  13. MustardChef / WSABuilds

    Run Windows Subsystem For Android on your Windows 10 and Windows 11 PC using prebuilt binaries with Google Play Store (MindTheGapps) and/or Magisk or KernelSU (root solutions) built in.

  14. playcanvas / engine

    Powerful web graphics runtime built on WebGL, WebGPU, WebXR and glTF

  15. iptv-org / iptv

    Collection of publicly available IPTV channels from all over the world

Hugging Face (15)

  1. One Small Step in Latent, One Giant Leap for Pixels: Fast Latent Upscale Adapter for Your Diffusion Models

    Diffusion models struggle to scale beyond their training resolutions, as direct high-resolution sampling is slow and costly, while post-hoc image super-resolution (ISR) introduces artifacts and additional latency by operating after decoding. We present the Latent Upscaler Adapter (LUA), a lightweight module that performs super-resolution directly on the generator's latent code before the final VAE decoding step. LUA integrates as a drop-in component, requiring no modifications to the base model or additional diffusion stages, and enables high-resolution synthesis through a single feed-forward pass in latent space. A shared Swin-style backbone with scale-specific pixel-shuffle heads supports 2x and 4x factors and remains compatible with image-space SR baselines, achieving comparable perceptual quality with nearly 3x lower decoding and upscaling time (adding only +0.42 s for 1024 px generation from 512 px, compared to 1.87 s for pixel-space SR using the same SwinIR architecture). Furthermore, LUA shows strong generalization across the latent spaces of different VAEs, making it easy to deploy without retraining from scratch for each new decoder. Extensive experiments demonstrate that LUA closely matches the fidelity of native high-resolution generation while offering a practical and efficient path to scalable, high-fidelity image synthesis in modern diffusion pipelines.

  2. UniVA: Universal Video Agent towards Open-Source Next-Generation Video Generalist

    While specialized AI models excel at isolated video tasks like generation or understanding, real-world applications demand complex, iterative workflows that combine these capabilities. To bridge this gap, we introduce UniVA, an open-source, omni-capable multi-agent framework for next-generation video generalists that unifies video understanding, segmentation, editing, and generation into cohesive workflows. UniVA employs a Plan-and-Act dual-agent architecture that drives a highly automated and proactive workflow: a planner agent interprets user intentions and decomposes them into structured video-processing steps, while executor agents execute these through modular, MCP-based tool servers (for analysis, generation, editing, tracking, etc.). Through a hierarchical multi-level memory (global knowledge, task context, and user-specific preferences), UniVA sustains long-horizon reasoning, contextual continuity, and inter-agent communication, enabling interactive and self-reflective video creation with full traceability. This design enables iterative and any-conditioned video workflows (e.g., text/image/video-conditioned generation → multi-round editing → object segmentation → compositional synthesis) that were previously cumbersome to achieve with single-purpose models or monolithic video-language models. We also introduce UniVA-Bench, a benchmark suite of multi-step video tasks spanning understanding, editing, segmentation, and generation, to rigorously evaluate such agentic video systems. Both UniVA and UniVA-Bench are fully open-sourced, aiming to catalyze research on interactive, agentic, and general-purpose video intelligence for the next generation of multimodal AI systems. (https://univa.online/)

  3. PAN: A World Model for General, Interactable, and Long-Horizon World Simulation

    A world model enables an intelligent agent to imagine, predict, and reason about how the world evolves in response to its actions, and accordingly to plan and strategize. While recent video generation models produce realistic visual sequences, they typically operate in the prompt-to-full-video manner without causal control, interactivity, or long-horizon consistency required for purposeful reasoning. Existing world modeling efforts, on the other hand, often focus on restricted domains (e.g., physical, game, or 3D-scene dynamics) with limited depth and controllability, and struggle to generalize across diverse environments and interaction formats. In this work, we introduce PAN, a general, interactable, and long-horizon world model that predicts future world states through high-quality video simulation conditioned on history and natural language actions. PAN employs the Generative Latent Prediction (GLP) architecture that combines an autoregressive latent dynamics backbone based on a large language model (LLM), which grounds simulation in extensive text-based knowledge and enables conditioning on language-specified actions, with a video diffusion decoder that reconstructs perceptually detailed and temporally coherent visual observations, to achieve a unification between latent space reasoning (imagination) and realizable world dynamics (reality). Trained on large-scale video-action pairs spanning diverse domains, PAN supports open-domain, action-conditioned simulation with coherent, long-term dynamics. Extensive experiments show that PAN achieves strong performance in action-conditioned world simulation, long-horizon forecasting, and simulative reasoning compared to other video generators and world models, taking a step towards general world models that enable predictive simulation of future world states for reasoning and acting.

  4. Black-Box On-Policy Distillation of Large Language Models

    Black-box distillation creates student large language models (LLMs) by learning from a proprietary teacher model's text outputs alone, without access to its internal logits or parameters. In this work, we introduce Generative Adversarial Distillation (GAD), which enables on-policy and black-box distillation. GAD frames the student LLM as a generator and trains a discriminator to distinguish its responses from the teacher LLM's, creating a minimax game. The discriminator acts as an on-policy reward model that co-evolves with the student, providing stable, adaptive feedback. Experimental results show that GAD consistently surpasses the commonly used sequence-level knowledge distillation. In particular, Qwen2.5-14B-Instruct (student) trained with GAD becomes comparable to its teacher, GPT-5-Chat, on the LMSYS-Chat automatic evaluation. The results establish GAD as a promising and effective paradigm for black-box LLM distillation.

  5. Hail to the Thief: Exploring Attacks and Defenses in Decentralised GRPO

    Group Relative Policy Optimization (GRPO) has demonstrated great utility in the post-training of Large Language Models (LLMs). In GRPO, prompts are answered by the model and, through reinforcement learning, preferred completions are learnt. Owing to the small communication volume, GRPO is inherently suitable for decentralised training, as the prompts can be concurrently answered by multiple nodes and then exchanged in the form of strings. In this work, we present the first adversarial attack in decentralised GRPO. We demonstrate that malicious parties can poison such systems by injecting arbitrary malicious tokens into benign models in both out-of-context and in-context attacks. Using empirical examples of math and coding tasks, we show that adversarial attacks can easily poison the benign nodes, polluting their local LLM post-training, achieving attack success rates of up to 100% in as few as 50 iterations. We propose two ways to defend against these attacks, depending on whether all users train the same model or different models. We show that these defenses can achieve stop rates of up to 100%, making the attack impossible.

  6. Depth Anything 3: Recovering the Visual Space from Any Views

    We present Depth Anything 3 (DA3), a model that predicts spatially consistent geometry from an arbitrary number of visual inputs, with or without known camera poses. In pursuit of minimal modeling, DA3 yields two key insights: a single plain transformer (e.g., vanilla DINO encoder) is sufficient as a backbone without architectural specialization, and a singular depth-ray prediction target obviates the need for complex multi-task learning. Through our teacher-student training paradigm, the model achieves a level of detail and generalization on par with Depth Anything 2 (DA2). We establish a new visual geometry benchmark covering camera pose estimation, any-view geometry and visual rendering. On this benchmark, DA3 sets a new state-of-the-art across all tasks, surpassing prior SOTA VGGT by an average of 44.3% in camera pose accuracy and 25.1% in geometric accuracy. Moreover, it outperforms DA2 in monocular depth estimation. All models are trained exclusively on public academic datasets.

  7. Superpositional Gradient Descent: Harnessing Quantum Principles for Model Training

    Large language models (LLMs) are increasingly trained with classical optimization techniques like AdamW to improve convergence and generalization. However, the mechanisms by which quantum-inspired methods enhance classical training remain underexplored. We introduce Superpositional Gradient Descent (SGD), a novel optimizer linking gradient updates with quantum superposition by injecting quantum circuit perturbations. We present a mathematical framework and implement hybrid quantum-classical circuits in PyTorch and Qiskit. On synthetic sequence classification and large-scale LLM fine-tuning, SGD converges faster and yields lower final loss than AdamW. Despite promising results, scalability and hardware constraints limit adoption. Overall, this work provides new insights into the intersection of quantum computing and deep learning, suggesting practical pathways for leveraging quantum principles to control and enhance model behavior.

  8. Solving a Million-Step LLM Task with Zero Errors

    LLMs have achieved remarkable breakthroughs in reasoning, insights, and tool use, but chaining these abilities into extended processes at the scale of those routinely executed by humans, organizations, and societies has remained out of reach. The models have a persistent error rate that prevents scale-up: for instance, recent experiments in the Towers of Hanoi benchmark domain showed that the process inevitably becomes derailed after at most a few hundred steps. Thus, although LLM research is often still benchmarked on tasks with relatively few dependent logical steps, there is increasing attention on the ability (or inability) of LLMs to perform long range tasks. This paper describes MAKER, the first system that successfully solves a task with over one million LLM steps with zero errors, and, in principle, scales far beyond this level. The approach relies on an extreme decomposition of a task into subtasks, each of which can be tackled by focused microagents. The high level of modularity resulting from the decomposition allows error correction to be applied at each step through an efficient multi-agent voting scheme. This combination of extreme decomposition and error correction makes scaling possible. Thus, the results suggest that instead of relying on continual improvement of current LLMs, massively decomposed agentic processes (MDAPs) may provide a way to efficiently solve problems at the level of organizations and societies.

  9. AlphaResearch: Accelerating New Algorithm Discovery with Language Models

    Large language models have made significant progress in complex but easy-to-verify problems, yet they still struggle with discovering the unknown. In this paper, we present AlphaResearch, an autonomous research agent designed to discover new algorithms on open-ended problems. To synergize the feasibility and innovation of the discovery process, we construct a novel dual research environment combining execution-based verification with a simulated real-world peer-review environment. AlphaResearch discovers new algorithms by iteratively running the following steps: (1) propose new ideas, (2) verify the ideas in the dual research environment, and (3) optimize the research proposals for better performance. To promote a transparent evaluation process, we construct AlphaResearchComp, a new evaluation benchmark comprising a competition of eight open-ended algorithmic problems, each carefully curated and verified through executable pipelines, objective metrics, and reproducibility checks. AlphaResearch achieves a 2/8 win rate in head-to-head comparison with human researchers, demonstrating the possibility of accelerating algorithm discovery with LLMs. Notably, the algorithm discovered by AlphaResearch on the "packing circles" problem achieves the best-known performance, surpassing the results of human researchers and strong baselines from recent work (e.g., AlphaEvolve). Additionally, we conduct a comprehensive analysis of the remaining challenges in the 6/8 failure cases, providing valuable insights for future research.

  10. Rubric-Based Benchmarking and Reinforcement Learning for Advancing LLM Instruction Following

    Recent progress in large language models (LLMs) has led to impressive performance on a range of tasks, yet advanced instruction following (IF), especially for complex, multi-turn, and system-prompted instructions, remains a significant challenge. Rigorous evaluation and effective training for such capabilities are hindered by the lack of high-quality, human-annotated benchmarks and reliable, interpretable reward signals. In this work, we introduce AdvancedIF (we will release this benchmark soon), a comprehensive benchmark featuring over 1,600 prompts and expert-curated rubrics that assess LLMs' ability to follow complex, multi-turn, and system-level instructions. We further propose RIFL (Rubric-based Instruction-Following Learning), a novel post-training pipeline that leverages rubric generation, a finetuned rubric verifier, and reward shaping to enable effective reinforcement learning for instruction following. Extensive experiments demonstrate that RIFL substantially improves the instruction-following abilities of LLMs, achieving a 6.7% absolute gain on AdvancedIF and strong results on public benchmarks. Our ablation studies confirm the effectiveness of each component in RIFL. This work establishes rubrics as a powerful tool for both training and evaluating advanced IF in LLMs, paving the way for more capable and reliable AI systems.

  11. ResearchRubrics: A Benchmark of Prompts and Rubrics For Evaluating Deep Research Agents

    Deep Research (DR) is an emerging agent application that leverages large language models (LLMs) to address open-ended queries. It requires the integration of several capabilities, including multi-step reasoning, cross-document synthesis, and the generation of evidence-backed, long-form answers. Evaluating DR remains challenging because responses are lengthy and diverse, admit many valid solutions, and often depend on dynamic information sources. We introduce ResearchRubrics, a standardized benchmark for DR built with over 2,800 hours of human labor that pairs realistic, domain-diverse prompts with 2,500+ expert-written, fine-grained rubrics to assess factual grounding, reasoning soundness, and clarity. We also propose a new complexity framework for categorizing DR tasks along three axes: conceptual breadth, logical nesting, and exploration. In addition, we develop human and model-based evaluation protocols that measure rubric adherence for DR agents. We evaluate several state-of-the-art DR systems and find that even leading agents like Gemini's DR and OpenAI's DR achieve under 68% average compliance with our rubrics, primarily due to missed implicit context and inadequate reasoning about retrieved information. Our results highlight the need for robust, scalable assessment of deep research capabilities, to which end we release ResearchRubrics (including all prompts, rubrics, and evaluation code) to facilitate progress toward well-justified research assistants.

  12. Benchmarking Diversity in Image Generation via Attribute-Conditional Human Evaluation

    Despite advances in generation quality, current text-to-image (T2I) models often lack diversity, generating homogeneous outputs. This work introduces a framework to address the need for robust diversity evaluation in T2I models. Our framework systematically assesses diversity by evaluating individual concepts and their relevant factors of variation. Key contributions include: (1) a novel human evaluation template for nuanced diversity assessment; (2) a curated prompt set covering diverse concepts with their identified factors of variation (e.g. prompt: An image of an apple, factor of variation: color); and (3) a methodology for comparing models in terms of human annotations via binomial tests. Furthermore, we rigorously compare various image embeddings for diversity measurement. Notably, our principled approach enables ranking of T2I models by diversity, identifying categories where they particularly struggle. This research offers a robust methodology and insights, paving the way for improvements in T2I model diversity and metric development.

  13. Music Flamingo: Scaling Music Understanding in Audio Language Models

    We introduce Music Flamingo, a novel large audio-language model designed to advance music (including song) understanding in foundational audio models. While audio-language research has progressed rapidly, music remains challenging due to its dynamic, layered, and information-dense nature. Progress has been further limited by the difficulty of scaling open audio understanding models, primarily because of the scarcity of high-quality music data and annotations. As a result, prior models are restricted to producing short, high-level captions, answering only surface-level questions, and showing limited generalization across diverse musical cultures. To address these challenges, we curate MF-Skills, a large-scale dataset labeled through a multi-stage pipeline that yields rich captions and question-answer pairs covering harmony, structure, timbre, lyrics, and cultural context. We fine-tune an enhanced Audio Flamingo 3 backbone on MF-Skills and further strengthen multiple skills relevant to music understanding. To improve the model's reasoning abilities, we introduce a post-training recipe: we first cold-start with MF-Think, a novel chain-of-thought dataset grounded in music theory, followed by GRPO-based reinforcement learning with custom rewards. Music Flamingo achieves state-of-the-art results across 10+ benchmarks for music understanding and reasoning, establishing itself as a generalist and musically intelligent audio-language model. Beyond strong empirical results, Music Flamingo sets a new standard for advanced music understanding by demonstrating how models can move from surface-level recognition toward layered, human-like perception of songs. We believe this work provides both a benchmark and a foundation for the community to build the next generation of models that engage with music as meaningfully as humans do.

  14. CC30k: A Citation Contexts Dataset for Reproducibility-Oriented Sentiment Analysis

    Sentiments about the reproducibility of cited papers in downstream literature offer community perspectives and have shown promise as a signal of the actual reproducibility of published findings. To train models that effectively predict reproducibility-oriented sentiments, and to further systematically study their correlation with reproducibility, we introduce the CC30k dataset, comprising a total of 30,734 citation contexts in machine learning papers. Each citation context is labeled with one of three reproducibility-oriented sentiment labels: Positive, Negative, or Neutral, reflecting the cited paper's perceived reproducibility or replicability. Of these, 25,829 are labeled through crowdsourcing, supplemented with negatives generated through a controlled pipeline to counter the scarcity of negative labels. Unlike traditional sentiment analysis datasets, CC30k focuses on reproducibility-oriented sentiments, addressing a research gap in resources for computational reproducibility studies. The dataset was created through a pipeline that includes robust data cleansing, careful crowd selection, and thorough validation, and it achieves a labeling accuracy of 94%. We then demonstrate that the performance of three large language models on reproducibility-oriented sentiment classification improves significantly after fine-tuning on our dataset. The dataset lays the foundation for large-scale assessments of the reproducibility of machine learning papers. The CC30k dataset and the Jupyter notebooks used to produce and analyze the dataset are publicly available at https://github.com/lamps-lab/CC30k .

  15. AffordBot: 3D Fine-grained Embodied Reasoning via Multimodal Large Language Models

    Effective human-agent collaboration in physical environments requires understanding not only what to act upon, but also where the actionable elements are and how to interact with them. Existing approaches often operate at the object level or disjointedly handle fine-grained affordance reasoning, lacking coherent, instruction-driven grounding and reasoning. In this work, we introduce a new task: Fine-grained 3D Embodied Reasoning, which requires an agent to predict, for each referenced affordance element in a 3D scene, a structured triplet comprising its spatial location, motion type, and motion axis, based on a task instruction. To solve this task, we propose AffordBot, a novel framework that integrates Multimodal Large Language Models (MLLMs) with a tailored chain-of-thought (CoT) reasoning paradigm. To bridge the gap between 3D input and 2D-compatible MLLMs, we render surround-view images of the scene and project 3D element candidates into these views, forming a rich visual representation aligned with the scene geometry. Our CoT pipeline begins with an active perception stage, prompting the MLLM to select the most informative viewpoint based on the instruction, before proceeding with step-by-step reasoning to localize affordance elements and infer plausible interaction motions. Evaluated on the SceneFun3D dataset, AffordBot achieves state-of-the-art performance, demonstrating strong generalization and physically grounded reasoning with only 3D point cloud input and MLLMs.
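
Several abstracts above describe mechanisms concrete enough to sketch. For item 1 (LUA), the scale-specific pixel-shuffle heads rely on the standard sub-pixel rearrangement: a feature map with r²-times the channels becomes an r-times larger spatial grid. A dependency-free sketch of just that rearrangement, with nested lists standing in for tensors (the actual LUA head is, per the abstract, a learned Swin-style module; this shows only the shuffle step):

```python
def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) nested-list 'tensor' into (C, H*r, W*r).

    Input channel c*r*r + dy*r + dx supplies the output pixel at
    (i*r + dy, j*r + dx) in output channel c -- the same layout
    PyTorch's nn.PixelShuffle uses for sub-pixel upsampling heads.
    """
    cr2, h, w = len(x), len(x[0]), len(x[0][0])
    assert cr2 % (r * r) == 0, "channel count must be divisible by r*r"
    c = cr2 // (r * r)
    out = [[[0] * (w * r) for _ in range(h * r)] for _ in range(c)]
    for ch in range(cr2):
        base, offset = divmod(ch, r * r)   # output channel, sub-pixel slot
        dy, dx = divmod(offset, r)         # slot -> (row, col) offset
        for i in range(h):
            for j in range(w):
                out[base][i * r + dy][j * r + dx] = x[ch][i][j]
    return out

# Four 1x1 feature maps interleave into one 2x2 map:
assert pixel_shuffle([[[1]], [[2]], [[3]], [[4]]], 2) == [[[1, 2], [3, 4]]]
```

Because the rearrangement is a pure reshuffle, all the learning lives in the convolution that produces the r²·C channels, which is why such heads add little latency.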
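
Item 8 (MAKER) attributes its zero-error million-step run to extreme decomposition plus per-step multi-agent voting. A toy sketch of that error-correction layer, assuming nothing about the paper's actual agents or vote rule beyond "the first answer to pull clearly ahead wins" (the `margin` parameter and the plain-majority fallback are illustrative choices, not the paper's):

```python
from collections import Counter

def voted_step(microagents, subtask, margin=2):
    """Run redundant microagents on one small subtask; return the first
    answer whose vote count leads the runner-up by `margin`, falling
    back to a plain majority if no answer pulls ahead."""
    votes = Counter()
    for agent in microagents:
        votes[agent(subtask)] += 1
        leader, runner_up = (votes.most_common(2) + [(None, 0)])[:2]
        if leader[1] - runner_up[1] >= margin:
            return leader[0]           # early exit: clear winner
    return votes.most_common(1)[0][0]  # fallback: plain majority

def solve(task_steps, microagents, margin=2):
    """Error-correct every step of a decomposed task via voting."""
    return [voted_step(microagents, step, margin) for step in task_steps]

# Three reliable agents outvote one faulty agent on every step:
agents = [str.upper] * 3 + [lambda s: "???"]
assert solve(["move disk 1", "move disk 2"], agents) == \
       ["MOVE DISK 1", "MOVE DISK 2"]
```

With independent per-agent error rates, redundancy drives the per-step failure probability down geometrically, which is what makes chains of a million dependent steps feasible at all.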
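
Item 12 ranks T2I models by comparing human diversity annotations via binomial tests. A standard-library sketch of the exact two-sided test under a fair-coin null (the pairwise win/loss framing here is an assumed simplification for illustration, not necessarily the paper's exact protocol):

```python
from math import comb

def binom_two_sided_p(wins, n, p0=0.5):
    """Exact two-sided binomial test: the probability, under a null
    success rate p0, of any outcome no more likely than the observed
    `wins` out of `n` (the 'minlike' convention scipy also uses)."""
    pmf = [comb(n, k) * p0**k * (1 - p0)**(n - k) for k in range(n + 1)]
    observed = pmf[wins]
    return min(1.0, sum(p for p in pmf if p <= observed + 1e-12))

# Annotators judged model A's outputs more diverse in 16 of 20 paired
# comparisons; is that distinguishable from a coin flip?
p = binom_two_sided_p(16, 20)
assert p < 0.05  # unlikely under chance (p is about 0.012)
```

An exact test is appropriate here because human-evaluation sample sizes per concept are typically small, where normal approximations are unreliable.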

Solidot (15)

  1. Blue Origin completes its first New Glenn booster recovery

    Jeff Bezos's Blue Origin needed only two attempts to land the booster of its New Glenn heavy-lift rocket on an uncrewed barge in the Atlantic, becoming the second company after SpaceX to pull off the feat. New Glenn is a two-stage rocket 7 meters in diameter; its first stage, powered by seven BE-4 engines, is designed to be reusable, while the second stage is expendable. New Glenn made its maiden launch on January 16, 2025. Thursday's mission was its second flight, a commercial mission sending two NASA probes to Mars. Its success means Blue Origin is now in a position to compete with SpaceX.

  2. XPeng AeroHT begins mass production of its "flying car"

    The mass-production factory of XPeng's AeroHT flying-car unit began trial production on November 3 and rolled its first Land Aircraft Carrier off the line. The 120,000-square-meter plant has a planned annual capacity of 10,000 units, 5,000 in the initial phase; at full capacity the line can turn out one aircraft every 30 minutes, accelerating large-scale production of the Land Aircraft Carrier in 2026. Unlike a conventional car that can fly, XPeng's flying car actually consists of a ground vehicle plus a vertical-takeoff-and-landing hybrid-electric aircraft: the vehicle is called the Land Aircraft Carrier, while the aircraft, the fully tilting hybrid-electric flying car A868, cruises at 360 km/h, has a range of over 500 km, and seats six.

  3. The US stops minting the penny

    The Philadelphia Mint struck its final batch of one-cent coins on Thursday, ending more than 230 years of penny production. Pennies will remain in circulation, but their phase-out is already prompting businesses to adjust prices. The government says the move will save money. The penny, made of copper-plated zinc, was minted to honor Abraham Lincoln, the US president during the Civil War. According to the Treasury, each penny now costs nearly 4 cents to make, more than double the cost a decade ago, and it estimates that halting production will save about $56 million a year.

  4. UC San Diego warns of a sharp decline in freshman academic preparedness

    A UC San Diego (UCSD) admissions working group (the Senate-Administration Working Group on Admissions) has released a report warning of a sharp decline in the academic preparedness of incoming freshmen over the past five years. From 2020 to 2025, the number of freshmen with math skills below middle-school level grew nearly 30-fold, from about 30 to 921, or one eighth of the incoming class. The university's math department, which had earlier found students struggling with basic fractions and the arithmetic taught in grades 1-8, redesigned its remedial program this year to focus on elementary- and middle-school math. The regression is not limited to math: in 2024 nearly one in five domestic freshmen needed remedial writing, and faculty across disciplines report that students find it increasingly difficult to understand long, complex texts. The decline coincides with several disruptions. The COVID-19 pandemic forced the UC system into remote instruction from spring 2020; in 2021 the system dropped its SAT and ACT requirements; and high-school grade inflation worsened over the same period, leaving transcripts an unreliable measure of what students actually learned. The working group argues that admitting large numbers of underprepared students not only harms the students but also strains already limited teaching resources, and the report calls on the UC system to reconsider its standardized-testing requirements.

  5. Google says it will still allow sideloading of Android apps

    Google's official Android blog has again stressed the importance of verifying app developers' identities, while saying that it has heard community feedback and will continue to let experienced users sideload apps, with more details to come in the next few months. Google cited scam operations in Southeast Asia as an example of why developer verification improves security. For some developers, and for power users able to take on greater risk, Google says it is building an advanced flow that lets experienced users install software from unverified developers at their own risk. The flow will include clear warnings to make sure users fully understand what is at stake, but the final choice stays in the user's hands.

  6. Voyager 1 is about to reach one light-day from Earth

    At its current speed, Voyager 1 will be one light-day from Earth on November 13, 2026, one year from now. No human-made object has ever traveled that far. Launched on September 5, 1977, Voyager 1 has now been flying for 48 years and 2 months. It is powered by plutonium-238 radioisotope thermoelectric generators, expected to run out of power by the early 2030s, but the probe will keep flying: it will pass through the Oort Cloud and, in the foreseeable future, roughly 40,000 years from now, make a relatively close approach to another star, Gliese 445.

  7. China's patent filings again led the world by a wide margin in 2024

    The World Intellectual Property Organization (WIPO) has published its annual World Intellectual Property Indicators report. Global patent filings rose 4.9% in 2024 to about 3.725 million, a record high. China led with about 1.8 million, followed by the United States (501,831), Japan (419,132), South Korea (295,722), and Germany (133,485). Trademark filings fell 0.1%, while industrial-design filings rose 2.2%. Among the top 20 origins, India (+19.1%), Finland (+15.4%), and Turkey (+14.6%) were the only three with double-digit growth in 2024. India has posted double-digit growth for six straight years, driven mainly by strong domestic filings; Finland for two straight years, driven mainly by strong filings abroad; Turkey's 2024 growth was likewise driven by rising domestic filings. Chinese applicants filed the most trademarks, about 7.3 million classes at home and abroad in total, with the United States second at 836,457.

  8. Dust over northern China comes mainly from Mongolia

    After nearly two decades of decline, dust activity in central East Asia has rebounded sharply in recent years, and a team at the Northwest Institute of Eco-Environment and Resources of the Chinese Academy of Sciences finds that Mongolia has become the region's main dust source. The team estimated hourly dust emissions for 136 large dust events in central East Asia between 2000 and 2023, covering the main source regions in northern China and Mongolia and combining high-resolution data on wind speed, soil texture, vegetation cover, and soil moisture. The results show that the frequency and intensity of dust storms generally declined from the early 2000s to 2020, but the trend reversed sharply after 2020: total regional dust emissions surged more than sevenfold, from 5.7 million tons in 2020 to 40.3 million tons in 2023, and the average number of large dust events per year rose from 3 to 5. Mongolia's share of emissions in these events climbed from 43% in the early 2000s to 53% in recent years, reaching 62% in 2022 and 64% in the large dust event of April 2023. The study attributes the rebound mainly to stronger surface winds, vegetation degradation, and drier soils, contributing about 46%, 19%, and 9% respectively. Strengthening Mongolian Plateau cyclones and long-term drought in the Gobi Desert have together intensified wind erosion and dust lofting, while ecological restoration projects in northern China have effectively reduced local emissions.

  9. Iceland treats a potential collapse of Atlantic currents as a national security risk

    Iceland's climate minister says the government now treats a possible collapse of the Atlantic's ocean current system as a national security risk and an existential threat, allowing it to start planning for worst-case scenarios. The Atlantic Meridional Overturning Circulation (AMOC) carries warm tropical water north toward the Arctic and helps keep European winters mild. But as temperatures rise and the Arctic ice sheet melts faster, the influx of meltwater into the ocean could disrupt the current's flow. A collapse of the AMOC could trigger a modern-day ice age, with winter temperatures in northern Europe plunging. The AMOC collapsed once before, prior to the last ice age.

  10. Google releases the Android 16 QPR1 source code

    Two months after its release, Google has pushed the Android 16 QPR1 source code to the Android Open Source Project. Google recently changed how Android patches ship: security patches, bug fixes, and performance improvements now arrive mainly in quarterly updates known as QPRs. QPR1 rolled out to Google's Pixel phones in September and was provided to Google's contracted partners, but until now community Android distributions had no access to it.

  11. Reddit mod gets suspended sentence for sharing nude scenes from films

    A Reddit moderator who clipped and shared nude scenes of actresses from films and TV shows has been given a suspended sentence and community service by a Danish court, and may face heavy fines. Dozens of actresses complained about the subreddit he ran, "SeDetForPlottet" (WatchItForThePlot), with some saying they felt harassed or abused. The Rights Alliance, representing the Danish Actors' Association, pushed for a criminal investigation, and the moderator was charged with violating the "moral rights" provisions of copyright law. Under the Danish copyright law's provisions on respect for the integrity of a work, performers may object to out-of-context editing or alteration of their performances that damages their image or reputation. The 40-year-old moderator admitted violating those moral rights; he had shared at least 347 video clips involving more than 100 actresses, with a combined 4.2 million views, and was also found to have shared more than 25 TB of pirated content. He received a seven-month suspended sentence and 120 hours of community service, and faces civil damages: rights holders are seeking $2,300 to $4,600 per nude clip, which could total more than $1.5 million.

  12. Valve announces the Steam Machine, a Linux game console

    Valve has announced three new hardware products launching in early 2026, with exact dates and prices yet to be disclosed: the Steam Controller, the Steam Frame headset, and the Steam Machine, which runs SteamOS 3 (based on Arch Linux, with a KDE desktop). The headset uses Qualcomm's Snapdragon 8 Gen 3, a 2160 x 2160 LCD per eye, and 16 GB of unified LPDDR5X RAM. The Steam Machine is a standard entry-level gaming PC: an AMD Zen 4 CPU with 6 cores / 12 threads, 28 AMD RDNA3 CUs, 16 GB DDR5 plus 8 GB GDDR6 VRAM, a choice of 512 GB or 2 TB NVMe SSD, weighing 2.6 kg. Valve says all three products will ship in the Steam Deck's current launch regions (the US, Canada, the UK, the EU, and Australia) and in the regions covered by KOMODO (Japan, South Korea, Hong Kong, and Taiwan). As with the Steam Deck, users in mainland China will likely have to rely on proxy purchasing.

  13. Qian Zhimin sentenced to 11 years and 8 months in the UK

    Qian Zhimin, the figure behind Tianjin Lantian Gerui's 43 billion yuan illegal fundraising scheme who fled to the UK with the proceeds, has been sentenced to 11 years and 8 months for money laundering. Chinese victims are seeking the return of part of the seized bitcoin, currently worth about £5 billion, from the UK government. Qian, now 47, arrived in the UK in September 2017 on a fake passport and moved into a mansion near Hampstead Heath with a monthly rent of over £17,000. To pay the rent she needed to convert bitcoin into cash; posing as a wealthy heiress to an antique-diamond fortune, she hired former takeaway worker Wen Jian as her personal assistant to help turn bitcoin into cash and assets such as property. As the price of bitcoin soared, Qian appeared on track to deliver on Lantian Gerui's promise that investors could "get rich while lying down." Wen testified that Qian spent most of her time in bed playing games and shopping online. But when the pair tried to buy a mansion, their inability to explain the source of the wealth triggered a police investigation, and a search led to the seizure of tens of thousands of bitcoins, now worth 46.6 billion yuan, more than the 43 billion yuan defrauded from investors. It is not yet clear whether investors will receive a full refund or more, and the UK Treasury has not said how it will handle the bitcoin.

  14. Speaking multiple languages may help slow aging

    According to a study published in Nature Aging, speaking multiple languages is associated with a significantly slower aging process. In the large-scale study, researchers analyzed data from 86,149 healthy participants across 27 European countries. To measure the pace of aging more precisely, the team developed a new metric called the "biobehavioral age gap," which combines positive factors such as functional capacity, education, and cognitive performance with negative factors such as heart disease, hypertension, and sensory impairment to predict a person's biobehavioral age. When the predicted age exceeds a person's actual age, they are aging at an accelerated pace; when it falls below, aging is slowed. The results show that monolinguals who speak only their mother tongue are 2.11 times as likely to experience accelerated aging as multilinguals, and people who speak at least one foreign language cut their odds of accelerated aging by more than half. The protective effect is also dose-dependent: the more foreign languages a person speaks, the lower the likelihood of accelerated aging. The researchers speculate that the effect comes from the constant exercise multilingualism gives the brain's "cognitive reserve." When a person knows several languages, the others remain active even while only one is in use, so the brain must continuously inhibit and switch between them, heavily exercising executive function, attention, memory, and other higher cognitive abilities. The brain networks recruited this way are precisely the regions most prone to decline with age, so long-term use of multiple languages acts as continuous "exercise" for the brain, strengthening its resistance to age-related decline.

  15. PS5 sales surpass every Xbox model

    Sony has announced that PS5 sales have topped 84.2 million units, officially surpassing every Xbox console model ever released. In the three months ending September 30, the PS5 sold 3.9 million units, up from 3.8 million a year earlier, a notable result given that it came after a price increase. Microsoft's best-selling model, the Xbox 360, discontinued in April 2016, sold about 84 million units. Analysts estimate the PS5 has sold at least twice as many units as Microsoft's current-generation Xbox Series X and S combined. Sony says the PS5 is its most successful console.