
OrangeBot.AI Digest — 2025-11-16

60 headlines across 8 sources, aggregated for the day.

Hacker News (15)

  1. Peter Thiel sells off all Nvidia stock, stirring bubble fears (www.thestreet.com)
  2. I have recordings proving Coinbase knew about breach months before disclosure (jonathanclark.com)
  3. Browser fingerprinting via favicon (github.com)
  4. Open-source Zig book (www.zigbook.net)
  5. Dissecting Flock Safety: The Cameras Tracking You Are a Security Nightmare [video] (www.youtube.com)
  6. FPGA Based IBM-PC-XT (bit-hack.net)
  7. Heretic: Automatic censorship removal for language models (github.com)
  8. Iran begins cloud seeding operations as drought bites (www.arabnews.com)
  9. The internet is no longer a safe haven (brainbaking.com)
  10. A new documentary about the history of forced psychiatric treatment in Spain (www.bbc.co.uk)
  11. Brimstone: ES2025 JavaScript engine written in Rust (github.com)
  12. Why use OpenBSD? (www.tumfatig.net)
  13. Anthropic’s paper smells like bullshit (djnn.sh)
  14. Maybe you’re not trying (usefulfictions.substack.com)
  15. UK's first small nuclear power station to be built in north Wales (www.bbc.com)

GitHub Trending (15)

  1. sansan0 / TrendRadar

    🎯 Say goodbye to information overload and let AI help you make sense of trending news: a simple public-opinion monitoring and analysis tool combining multi-platform trend aggregation with MCP-based AI analysis. Monitors 35 platforms (Douyin, Zhihu, Bilibili, Wallstreetcn, Cailian Press, and more) with smart filtering, automatic push notifications, and conversational AI analysis (mine the news in natural language: trend tracking, sentiment analysis, similarity search, and other tools, 13 in all). Supports delivery via WeCom, Feishu, DingTalk, Telegram, email, and ntfy; 30-second web deployment, phone notifications within 1 minute, no programming required. Docker deployment supported. ⭐ Make the algorithm serve you; use AI to understand what's trending.

  2. google / adk-go

    An open-source, code-first Go toolkit for building, evaluating, and deploying sophisticated AI agents with flexibility and control.

  3. TapXWorld / ChinaTextbook

    PDF textbooks for Chinese primary schools, middle schools, high schools, and universities.

  4. yeongpin / cursor-free-vip

    [Support 0.49.x] (Reset Cursor AI MachineID & bypass higher token limit.) Automatically resets the Cursor AI machine ID and unlocks Pro features for free, working around messages such as: You've reached your trial request limit. / Too many free trial accounts used on this machine. Please upgrade to pro. We have this limit in place to prevent abuse. Please let us know if you believe this is a mistake.

  5. nvm-sh / nvm

    Node Version Manager - POSIX-compliant bash script to manage multiple active node.js versions

  6. traefik / traefik

    The Cloud Native Application Proxy

  7. HKUDS / LightRAG

    [EMNLP2025] "LightRAG: Simple and Fast Retrieval-Augmented Generation"

  8. bobeff / open-source-games

    A list of open source games.

  9. volcengine / verl

    verl: Volcano Engine Reinforcement Learning for LLMs

  10. GibsonAI / Memori

    Open-Source Memory Engine for LLMs, AI Agents & Multi-Agent Systems

  11. yangshun / tech-interview-handbook

    Curated coding interview preparation materials for busy software engineers

  12. microsoft / call-center-ai

    Send a phone call from AI agent, in an API call. Or, directly call the bot from the configured phone number!

  13. MustardChef / WSABuilds

    Run Windows Subsystem For Android on your Windows 10 and Windows 11 PC using prebuilt binaries with Google Play Store (MindTheGapps) and/or Magisk or KernelSU (root solutions) built in.

  14. playcanvas / engine

    Powerful web graphics runtime built on WebGL, WebGPU, WebXR and glTF

  15. iptv-org / iptv

    Collection of publicly available IPTV channels from all over the world

Hugging Face (15)

  1. One Small Step in Latent, One Giant Leap for Pixels: Fast Latent Upscale Adapter for Your Diffusion Models

    Diffusion models struggle to scale beyond their training resolutions, as direct high-resolution sampling is slow and costly, while post-hoc image super-resolution (ISR) introduces artifacts and additional latency by operating after decoding. We present the Latent Upscaler Adapter (LUA), a lightweight module that performs super-resolution directly on the generator's latent code before the final VAE decoding step. LUA integrates as a drop-in component, requiring no modifications to the base model or additional diffusion stages, and enables high-resolution synthesis through a single feed-forward pass in latent space. A shared Swin-style backbone with scale-specific pixel-shuffle heads supports 2x and 4x factors and remains compatible with image-space SR baselines, achieving comparable perceptual quality with nearly 3x lower decoding and upscaling time (adding only +0.42 s for 1024 px generation from 512 px, compared to 1.87 s for pixel-space SR using the same SwinIR architecture). Furthermore, LUA shows strong generalization across the latent spaces of different VAEs, making it easy to deploy without retraining from scratch for each new decoder. Extensive experiments demonstrate that LUA closely matches the fidelity of native high-resolution generation while offering a practical and efficient path to scalable, high-fidelity image synthesis in modern diffusion pipelines.
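The "scale-specific pixel-shuffle heads" mentioned above rearrange groups of channels into spatial resolution. Below is a minimal pure-Python sketch of the pixel-shuffle (depth-to-space) operation for an upscale factor r, following the channel-to-offset layout used by common deep-learning frameworks; the exact head design inside LUA is an assumption here, this only illustrates the core rearrangement:

```python
def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) tensor into (C, H*r, W*r).

    x is a nested list: x[channel][row][col]. Each group of r*r input
    channels produces one output channel, with channel index
    c*r*r + di*r + dj mapping to spatial offset (di, dj).
    """
    c_in, h, w = len(x), len(x[0]), len(x[0][0])
    c_out = c_in // (r * r)
    out = [[[0] * (w * r) for _ in range(h * r)] for _ in range(c_out)]
    for c in range(c_out):
        for di in range(r):
            for dj in range(r):
                src = x[c * r * r + di * r + dj]
                for i in range(h):
                    for j in range(w):
                        out[c][i * r + di][j * r + dj] = src[i][j]
    return out

# Four 1x1 channels become one 2x2 channel.
print(pixel_shuffle([[[1]], [[2]], [[3]], [[4]]], 2))  # [[[1, 2], [3, 4]]]
```

Because the operation is a fixed permutation, a network can learn the extra channels with ordinary convolutions and pay no upsampling cost beyond the reshuffle itself.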

  2. PAN: A World Model for General, Interactable, and Long-Horizon World Simulation

    A world model enables an intelligent agent to imagine, predict, and reason about how the world evolves in response to its actions, and accordingly to plan and strategize. While recent video generation models produce realistic visual sequences, they typically operate in the prompt-to-full-video manner without causal control, interactivity, or long-horizon consistency required for purposeful reasoning. Existing world modeling efforts, on the other hand, often focus on restricted domains (e.g., physical, game, or 3D-scene dynamics) with limited depth and controllability, and struggle to generalize across diverse environments and interaction formats. In this work, we introduce PAN, a general, interactable, and long-horizon world model that predicts future world states through high-quality video simulation conditioned on history and natural language actions. PAN employs the Generative Latent Prediction (GLP) architecture that combines an autoregressive latent dynamics backbone based on a large language model (LLM), which grounds simulation in extensive text-based knowledge and enables conditioning on language-specified actions, with a video diffusion decoder that reconstructs perceptually detailed and temporally coherent visual observations, to achieve a unification between latent space reasoning (imagination) and realizable world dynamics (reality). Trained on large-scale video-action pairs spanning diverse domains, PAN supports open-domain, action-conditioned simulation with coherent, long-term dynamics. Extensive experiments show that PAN achieves strong performance in action-conditioned world simulation, long-horizon forecasting, and simulative reasoning compared to other video generators and world models, taking a step towards general world models that enable predictive simulation of future world states for reasoning and acting.

  3. Black-Box On-Policy Distillation of Large Language Models

    Black-box distillation creates student large language models (LLMs) by learning from a proprietary teacher model's text outputs alone, without access to its internal logits or parameters. In this work, we introduce Generative Adversarial Distillation (GAD), which enables on-policy and black-box distillation. GAD frames the student LLM as a generator and trains a discriminator to distinguish its responses from the teacher LLM's, creating a minimax game. The discriminator acts as an on-policy reward model that co-evolves with the student, providing stable, adaptive feedback. Experimental results show that GAD consistently surpasses the commonly used sequence-level knowledge distillation. In particular, Qwen2.5-14B-Instruct (student) trained with GAD becomes comparable to its teacher, GPT-5-Chat, on the LMSYS-Chat automatic evaluation. The results establish GAD as a promising and effective paradigm for black-box LLM distillation.

  4. UniVA: Universal Video Agent towards Open-Source Next-Generation Video Generalist

    While specialized AI models excel at isolated video tasks like generation or understanding, real-world applications demand complex, iterative workflows that combine these capabilities. To bridge this gap, we introduce UniVA, an open-source, omni-capable multi-agent framework for next-generation video generalists that unifies video understanding, segmentation, editing, and generation into cohesive workflows. UniVA employs a Plan-and-Act dual-agent architecture that drives a highly automated and proactive workflow: a planner agent interprets user intentions and decomposes them into structured video-processing steps, while executor agents execute these through modular, MCP-based tool servers (for analysis, generation, editing, tracking, etc.). Through a hierarchical multi-level memory (global knowledge, task context, and user-specific preferences), UniVA sustains long-horizon reasoning, contextual continuity, and inter-agent communication, enabling interactive and self-reflective video creation with full traceability. This design enables iterative and any-conditioned video workflows (e.g., text/image/video-conditioned generation → multi-round editing → object segmentation → compositional synthesis) that were previously cumbersome to achieve with single-purpose models or monolithic video-language models. We also introduce UniVA-Bench, a benchmark suite of multi-step video tasks spanning understanding, editing, segmentation, and generation, to rigorously evaluate such agentic video systems. Both UniVA and UniVA-Bench are fully open-sourced, aiming to catalyze research on interactive, agentic, and general-purpose video intelligence for the next generation of multimodal AI systems. (https://univa.online/)

  5. Depth Anything 3: Recovering the Visual Space from Any Views

    We present Depth Anything 3 (DA3), a model that predicts spatially consistent geometry from an arbitrary number of visual inputs, with or without known camera poses. In pursuit of minimal modeling, DA3 yields two key insights: a single plain transformer (e.g., vanilla DINO encoder) is sufficient as a backbone without architectural specialization, and a singular depth-ray prediction target obviates the need for complex multi-task learning. Through our teacher-student training paradigm, the model achieves a level of detail and generalization on par with Depth Anything 2 (DA2). We establish a new visual geometry benchmark covering camera pose estimation, any-view geometry and visual rendering. On this benchmark, DA3 sets a new state-of-the-art across all tasks, surpassing prior SOTA VGGT by an average of 44.3% in camera pose accuracy and 25.1% in geometric accuracy. Moreover, it outperforms DA2 in monocular depth estimation. All models are trained exclusively on public academic datasets.

  6. Hail to the Thief: Exploring Attacks and Defenses in Decentralised GRPO

    Group Relative Policy Optimization (GRPO) has demonstrated great utility in post-training of Large Language Models (LLMs). In GRPO, prompts are answered by the model and, through reinforcement learning, preferred completions are learnt. Owing to the small communication volume, GRPO is inherently suitable for decentralised training, as prompts can be answered concurrently by multiple nodes and then exchanged in the form of strings. In this work, we present the first adversarial attack on decentralised GRPO. We demonstrate that malicious parties can poison such systems by injecting arbitrary malicious tokens into benign models in both out-of-context and in-context attacks. Using empirical examples of math and coding tasks, we show that adversarial attacks can easily poison the benign nodes, polluting their local LLM post-training and achieving attack success rates of up to 100% in as few as 50 iterations. We propose two ways to defend against these attacks, depending on whether all users train the same model or different models. We show that these defenses can achieve stop rates of up to 100%, rendering the attack ineffective.

  7. Solving a Million-Step LLM Task with Zero Errors

    LLMs have achieved remarkable breakthroughs in reasoning, insights, and tool use, but chaining these abilities into extended processes at the scale of those routinely executed by humans, organizations, and societies has remained out of reach. The models have a persistent error rate that prevents scale-up: for instance, recent experiments in the Towers of Hanoi benchmark domain showed that the process inevitably becomes derailed after at most a few hundred steps. Thus, although LLM research is often still benchmarked on tasks with relatively few dependent logical steps, there is increasing attention on the ability (or inability) of LLMs to perform long range tasks. This paper describes MAKER, the first system that successfully solves a task with over one million LLM steps with zero errors, and, in principle, scales far beyond this level. The approach relies on an extreme decomposition of a task into subtasks, each of which can be tackled by focused microagents. The high level of modularity resulting from the decomposition allows error correction to be applied at each step through an efficient multi-agent voting scheme. This combination of extreme decomposition and error correction makes scaling possible. Thus, the results suggest that instead of relying on continual improvement of current LLMs, massively decomposed agentic processes (MDAPs) may provide a way to efficiently solve problems at the level of organizations and societies.
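MAKER's error-correction idea (decompose the task into micro-steps, then vote among several agent samples at each step) can be illustrated with a toy simulation. The agent model and error rates below are invented for illustration; the paper's actual voting scheme is more elaborate:

```python
import random
from collections import Counter

def majority_vote(answers):
    """Return the most common answer among independent agent samples."""
    return Counter(answers).most_common(1)[0][0]

def simulate(n_steps, n_agents, p_err, seed=0):
    """Per-step accuracy of a voted ensemble vs. a single agent.

    Each simulated 'agent' returns the correct token with probability
    1 - p_err, otherwise one of four random wrong tokens.
    """
    rng = random.Random(seed)
    def agent():
        return "ok" if rng.random() > p_err else f"bad{rng.randint(0, 3)}"
    single = sum(agent() == "ok" for _ in range(n_steps)) / n_steps
    voted = sum(
        majority_vote([agent() for _ in range(n_agents)]) == "ok"
        for _ in range(n_steps)
    ) / n_steps
    return single, voted

single, voted = simulate(n_steps=2000, n_agents=5, p_err=0.2)
print(f"per-step accuracy: single={single:.3f}, voted={voted:.3f}")
```

Driving the per-step error rate down this way is what lets an exponentially long chain of dependent steps survive: a chain only completes if every step is correct, so the per-step error, not the task length, is the quantity to attack.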

  8. AlphaResearch: Accelerating New Algorithm Discovery with Language Models

    Large language models have made significant progress on complex but easy-to-verify problems, yet they still struggle with discovering the unknown. In this paper, we present AlphaResearch, an autonomous research agent designed to discover new algorithms for open-ended problems. To balance the feasibility and novelty of the discovery process, we construct a novel dual research environment that combines execution-based verification with a simulated real-world peer-review environment. AlphaResearch discovers new algorithms by iteratively running the following steps: (1) propose new ideas, (2) verify the ideas in the dual research environment, and (3) optimize the research proposals for better performance. To promote a transparent evaluation process, we construct AlphaResearchComp, a new evaluation benchmark comprising a competition of eight open-ended algorithmic problems, each carefully curated and verified through executable pipelines, objective metrics, and reproducibility checks. AlphaResearch achieves a 2/8 win rate in head-to-head comparison with human researchers, demonstrating the possibility of accelerating algorithm discovery with LLMs. Notably, the algorithm discovered by AlphaResearch for the "packing circles" problem achieves the best-known performance, surpassing the results of human researchers and strong baselines from recent work (e.g., AlphaEvolve). Additionally, we conduct a comprehensive analysis of the remaining challenges in the 6/8 failure cases, providing valuable insights for future research.

  9. Superpositional Gradient Descent: Harnessing Quantum Principles for Model Training

    Large language models (LLMs) are increasingly trained with classical optimization techniques like AdamW to improve convergence and generalization. However, the mechanisms by which quantum-inspired methods enhance classical training remain underexplored. We introduce Superpositional Gradient Descent (SGD), a novel optimizer linking gradient updates with quantum superposition by injecting quantum circuit perturbations. We present a mathematical framework and implement hybrid quantum-classical circuits in PyTorch and Qiskit. On synthetic sequence classification and large-scale LLM fine-tuning, SGD converges faster and yields lower final loss than AdamW. Despite promising results, scalability and hardware constraints limit adoption. Overall, this work provides new insights into the intersection of quantum computing and deep learning, suggesting practical pathways for leveraging quantum principles to control and enhance model behavior.

  10. Music Flamingo: Scaling Music Understanding in Audio Language Models

    We introduce Music Flamingo, a novel large audio-language model designed to advance music (including song) understanding in foundational audio models. While audio-language research has progressed rapidly, music remains challenging due to its dynamic, layered, and information-dense nature. Progress has been further limited by the difficulty of scaling open audio understanding models, primarily because of the scarcity of high-quality music data and annotations. As a result, prior models are restricted to producing short, high-level captions, answering only surface-level questions, and showing limited generalization across diverse musical cultures. To address these challenges, we curate MF-Skills, a large-scale dataset labeled through a multi-stage pipeline that yields rich captions and question-answer pairs covering harmony, structure, timbre, lyrics, and cultural context. We fine-tune an enhanced Audio Flamingo 3 backbone on MF-Skills and further strengthen multiple skills relevant to music understanding. To improve the model's reasoning abilities, we introduce a post-training recipe: we first cold-start with MF-Think, a novel chain-of-thought dataset grounded in music theory, followed by GRPO-based reinforcement learning with custom rewards. Music Flamingo achieves state-of-the-art results across 10+ benchmarks for music understanding and reasoning, establishing itself as a generalist and musically intelligent audio-language model. Beyond strong empirical results, Music Flamingo sets a new standard for advanced music understanding by demonstrating how models can move from surface-level recognition toward layered, human-like perception of songs. We believe this work provides both a benchmark and a foundation for the community to build the next generation of models that engage with music as meaningfully as humans do.

  11. Rubric-Based Benchmarking and Reinforcement Learning for Advancing LLM Instruction Following

    Recent progress in large language models (LLMs) has led to impressive performance on a range of tasks, yet advanced instruction following (IF), especially for complex, multi-turn, and system-prompted instructions, remains a significant challenge. Rigorous evaluation and effective training for such capabilities are hindered by the lack of high-quality, human-annotated benchmarks and reliable, interpretable reward signals. In this work, we introduce AdvancedIF (we will release this benchmark soon), a comprehensive benchmark featuring over 1,600 prompts and expert-curated rubrics that assess LLMs' ability to follow complex, multi-turn, and system-level instructions. We further propose RIFL (Rubric-based Instruction-Following Learning), a novel post-training pipeline that leverages rubric generation, a finetuned rubric verifier, and reward shaping to enable effective reinforcement learning for instruction following. Extensive experiments demonstrate that RIFL substantially improves the instruction-following abilities of LLMs, achieving a 6.7% absolute gain on AdvancedIF and strong results on public benchmarks. Our ablation studies confirm the effectiveness of each component in RIFL. This work establishes rubrics as a powerful tool for both training and evaluating advanced IF in LLMs, paving the way for more capable and reliable AI systems.

  12. ResearchRubrics: A Benchmark of Prompts and Rubrics For Evaluating Deep Research Agents

    Deep Research (DR) is an emerging agent application that leverages large language models (LLMs) to address open-ended queries. It requires the integration of several capabilities, including multi-step reasoning, cross-document synthesis, and the generation of evidence-backed, long-form answers. Evaluating DR remains challenging because responses are lengthy and diverse, admit many valid solutions, and often depend on dynamic information sources. We introduce ResearchRubrics, a standardized benchmark for DR built with more than 2,800 hours of human labor that pairs realistic, domain-diverse prompts with 2,500+ expert-written, fine-grained rubrics to assess factual grounding, reasoning soundness, and clarity. We also propose a new complexity framework for categorizing DR tasks along three axes: conceptual breadth, logical nesting, and exploration. In addition, we develop human and model-based evaluation protocols that measure rubric adherence for DR agents. We evaluate several state-of-the-art DR systems and find that even leading agents like Gemini's DR and OpenAI's DR achieve under 68% average compliance with our rubrics, primarily due to missed implicit context and inadequate reasoning about retrieved information. Our results highlight the need for robust, scalable assessment of deep research capabilities, to which end we release ResearchRubrics (including all prompts, rubrics, and evaluation code) to facilitate progress toward well-justified research assistants.

  13. AffordBot: 3D Fine-grained Embodied Reasoning via Multimodal Large Language Models

    Effective human-agent collaboration in physical environments requires understanding not only what to act upon, but also where the actionable elements are and how to interact with them. Existing approaches often operate at the object level or disjointedly handle fine-grained affordance reasoning, lacking coherent, instruction-driven grounding and reasoning. In this work, we introduce a new task: Fine-grained 3D Embodied Reasoning, which requires an agent to predict, for each referenced affordance element in a 3D scene, a structured triplet comprising its spatial location, motion type, and motion axis, based on a task instruction. To solve this task, we propose AffordBot, a novel framework that integrates Multimodal Large Language Models (MLLMs) with a tailored chain-of-thought (CoT) reasoning paradigm. To bridge the gap between 3D input and 2D-compatible MLLMs, we render surround-view images of the scene and project 3D element candidates into these views, forming a rich visual representation aligned with the scene geometry. Our CoT pipeline begins with an active perception stage, prompting the MLLM to select the most informative viewpoint based on the instruction, before proceeding with step-by-step reasoning to localize affordance elements and infer plausible interaction motions. Evaluated on the SceneFun3D dataset, AffordBot achieves state-of-the-art performance, demonstrating strong generalization and physically grounded reasoning with only 3D point cloud input and MLLMs.

  14. Benchmarking Diversity in Image Generation via Attribute-Conditional Human Evaluation

    Despite advances in generation quality, current text-to-image (T2I) models often lack diversity, generating homogeneous outputs. This work introduces a framework to address the need for robust diversity evaluation in T2I models. Our framework systematically assesses diversity by evaluating individual concepts and their relevant factors of variation. Key contributions include: (1) a novel human evaluation template for nuanced diversity assessment; (2) a curated prompt set covering diverse concepts with their identified factors of variation (e.g. prompt: An image of an apple, factor of variation: color); and (3) a methodology for comparing models in terms of human annotations via binomial tests. Furthermore, we rigorously compare various image embeddings for diversity measurement. Notably, our principled approach enables ranking of T2I models by diversity, identifying categories where they particularly struggle. This research offers a robust methodology and insights, paving the way for improvements in T2I model diversity and metric development.
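The "comparing models in terms of human annotations via binomial tests" step can be sketched as an exact two-sided binomial test on pairwise judgments: if annotators compare models A and B on n prompts and prefer A's diversity k times, test whether k deviates from the 50/50 split expected when the models are equally diverse. The pairwise-preference framing is an assumption here; the paper's exact annotation protocol may differ. A stdlib-only sketch:

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability mass: P(K = k) for K ~ Bin(n, p)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def binom_test_two_sided(k, n, p=0.5):
    """Exact two-sided binomial test: sum the probabilities of all
    outcomes no more likely than the observed one."""
    pk = binom_pmf(k, n, p)
    return sum(
        binom_pmf(i, n, p)
        for i in range(n + 1)
        if binom_pmf(i, n, p) <= pk * (1 + 1e-9)
    )

# Hypothetical result: model A preferred on 72 of 100 prompts gives a
# p-value well below the usual 0.05 threshold.
print(binom_test_two_sided(72, 100))
```

For real analyses, `scipy.stats.binomtest` implements the same exact test; the point of the sketch is only that each pairwise model comparison reduces to a single binomial count.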

  15. CC30k: A Citation Contexts Dataset for Reproducibility-Oriented Sentiment Analysis

    Sentiments about the reproducibility of cited papers in downstream literature offer community perspectives and have been shown to be a promising signal of the actual reproducibility of published findings. To train models that effectively predict reproducibility-oriented sentiments, and to systematically study their correlation with reproducibility, we introduce the CC30k dataset, comprising a total of 30,734 citation contexts in machine learning papers. Each citation context is labeled with one of three reproducibility-oriented sentiment labels: Positive, Negative, or Neutral, reflecting the cited paper's perceived reproducibility or replicability. Of these, 25,829 are labeled through crowdsourcing, supplemented with negatives generated through a controlled pipeline to counter the scarcity of negative labels. Unlike traditional sentiment analysis datasets, CC30k focuses on reproducibility-oriented sentiments, addressing a research gap in resources for computational reproducibility studies. The dataset was created through a pipeline that includes robust data cleansing, careful crowd selection, and thorough validation, and achieves a labeling accuracy of 94%. We then demonstrate that the performance of three large language models on reproducibility-oriented sentiment classification improves significantly after fine-tuning on our dataset. The dataset lays the foundation for large-scale assessments of the reproducibility of machine learning papers. The CC30k dataset and the Jupyter notebooks used to produce and analyze it are publicly available at https://github.com/lamps-lab/CC30k.

Solidot (15)

  1. NASA astronaut's estranged wife admits to lying

    In 2019, NASA astronaut Anne McClain was accused of identity theft for accessing the bank account of her estranged wife, Summer Worden; in 2020, Worden was instead charged with making false statements to federal investigators. Worden, now 50, pleaded guilty this week. She is out on bail and will be sentenced on February 12, facing up to five years in prison. Worden, a former US Air Force intelligence officer, married McClain in 2014 and filed for divorce in 2018; in 2019 she alleged that McClain, then aboard the International Space Station, had stolen her identity by accessing her bank account from space. McClain acknowledged the access but denied any wrongdoing, saying through her lawyer that she checked the account only to make sure the family's finances were in order, with enough money to pay bills and care for Worden's son, conceived through IVF, over whom the two were in a custody dispute. McClain said the account was shared by both of them, had always used the same password, and that she had never been told to stop using it. McClain served aboard the ISS from December 2018 to June 2019, and this March commanded the SpaceX Crew-10 mission, remaining on the station until August.

  2. Global temperatures could rise 2.6°C by the end of the century

    According to the latest Climate Action Tracker report, global temperatures are projected to rise 2.6°C above pre-industrial levels by the end of the century. The world's nations are still not doing enough to cut emissions, and fossil fuel emissions will grow about 1% this year to a record high, although the growth rate has fallen by more than 50% over the past few years. Over the past decade, emissions from coal, oil, and gas grew 0.8% per year, compared with 2.0% per year in the decade before. The accelerating deployment of renewables is close to covering the annual growth in global energy demand, but has not yet exceeded it. The new analysis also shows that the Earth's natural carbon sinks are weakening: scientists say the combined effects of global warming and deforestation have turned tropical forests across much of Southeast Asia and South America from carbon sinks into carbon sources. The report projects atmospheric CO2 will reach 425 ppm in 2025, versus 280 ppm pre-industrial; had the carbon sinks not weakened, the concentration would be about 8 ppm lower.

  3. Study links family "fat talk" in China to adolescent eating-disorder symptoms

    Self-deprecating conversation about weight or body shape, known as "fat talk," is often treated as a social lubricant and is common in social settings. Research shows, however, that such talk may increase the risk of eating disorders. Adolescence, a period of rapid change in weight and body shape and of fast-developing self-awareness, is also when eating disorders peak. To explore the link between family fat talk and adolescent eating disorders, a team at the Institute of Psychology, Chinese Academy of Sciences surveyed 1,049 middle-school families. They found fat talk present in 67.1% of families, and such talk showed moderate-to-high correlations with adolescents' eating-disorder risk, body dissatisfaction, and negative emotions. Family fat talk was directly associated with adolescent eating-disorder symptoms and may also be linked to them indirectly through body dissatisfaction and negative emotions; among girls in particular, the pathway from family fat talk through body dissatisfaction to eating-disorder symptoms deserves attention.

  4. German court orders Google to pay price-comparison platform Idealo €465 million

    A court in Berlin ruled that Google abused its dominant market position and must pay €465 million in damages to the German price-comparison platform Idealo. Another German comparison site, Producto, was awarded €107 million. After the ruling, Idealo said it would continue legal action against Google, while Google said it strongly disagreed and would appeal. Google said that changes it made in 2017 ensure that rival comparison-shopping services have the same opportunity as its own Google Shopping to display ads on search results pages.

  5. The economic impact of Brexit

    On June 23, 2016, the UK held a referendum on leaving the European Union and voted to leave; it formally exited on January 31, 2020, an event known as Brexit. The US National Bureau of Economic Research (NBER) has published a working paper on Brexit's economic impact. Using nearly a decade of post-referendum data, the researchers estimate that by 2025 Brexit had reduced UK GDP by 6-8%, with the effects accumulating over time. Investment fell by 12-18%, employment by 3-4%, and productivity by 3-4%. The damage stems from a combination of factors, including heightened uncertainty, reduced demand, diverted management time, and resource misallocation aggravated by the drawn-out exit process.

  6. Amazon renames its satellite broadband project from Project Kuiper to Amazon Leo

    Amazon announced that its satellite broadband project has been renamed from Project Kuiper to Amazon Leo, with Leo standing for low Earth orbit. Amazon has launched more than 150 broadband satellites into low Earth orbit and ultimately plans a constellation of more than 3,200. The project, Amazon says, started with a few engineers and some drawings, and took its original name from the Kuiper Belt in the outer solar system. Amazon says it will launch satellite broadband service once the Amazon Leo network has sufficient coverage and capacity.

  7. Share of US households subscribing to pay TV falls to half

    According to Madison and Wall, pay-TV penetration among US households fell to 50.2% in the third quarter of 2025 and is expected to drop to 50% or below by December. Fifteen years ago, nearly nine in ten US households subscribed to pay cable TV. The trend is pushing major media companies to shed cable assets: Comcast, Warner Bros. Discovery, and A&E are looking to sell or spin off their cable businesses. Paramount says it will not sell its cable channels but concedes that the decline is "accelerating every quarter."

  8. Epstein-Barr virus may be the cause of lupus

    According to a study published in Science Translational Medicine, the common Epstein-Barr virus may be the cause of lupus, a chronic autoimmune disease in which the immune system mistakenly attacks the body's healthy tissue. Symptoms vary widely from person to person; treatments exist but there is no cure, and no single cause had been identified. Epstein-Barr is extremely common: 95% of people are infected at some point in their lives. It spreads mainly through saliva, for example by kissing or sharing drinks, food, utensils, or toothbrushes. After infection, the virus remains permanently dormant in the body, usually inactive and without symptoms. Co-author Dr. William Robinson of Stanford said the vast majority of infected people never develop lupus; only certain viral strains trigger the autoimmune response. The study focused on B cells, white blood cells that help fight infection: lupus patients carry about 25 times as many Epstein-Barr-infected B cells as healthy people. The researchers found that the virus infects and reprograms B cells to produce antinuclear antibodies that attack the body's own tissue, leading to lupus.

  9. Mozilla announces AI Window for Firefox

    Mozilla announced Thursday that it is building AI Window, a new optional browsing mode for Firefox in which users can interact with AI assistants and chatbots. AI Window will become one of three browsing experiences in Firefox, alongside the existing classic window and private browsing window, and users will be able to choose which AI model to use in it.

  10. EU plans to accelerate tariffs on small parcels

    The EU plans to speed up the abolition of its €150 duty-free threshold. According to the European Commission, about 4.6 billion small parcels (valued at no more than €150 each) entered the EU market in 2024, roughly 12 million a day, more than 90% of them from China. Until now, these parcels have entered the EU duty-free. EU trade commissioner Maros Sefcovic said Thursday that Brussels wants to bring customs duties on small parcels entering the EU forward from the originally planned 2028 to the first quarter of 2026. The move will affect e-commerce platforms such as Shein, Temu, and AliExpress.

  11. Dogs have accompanied human migration and trade for over 10,000 years

    According to a study published in Science, dogs have accompanied human migration and trade for more than ten thousand years. Researchers at the Kunming Institute of Zoology, Chinese Academy of Sciences, sequenced 17 ancient dog genomes, dated from 9,700 to 870 years ago, from Siberia, the central Eurasian steppe, and northwest China, regions whose human populations underwent major cultural transitions during the Holocene. Analyzing these new genomes together with 57 previously published ancient dog genomes, 160 modern dog genomes, and 18 ancient human genomes allowed them to glimpse how ancient dog lineages intertwined with human migration and cultural exchange. The results show that across the Eurasian steppe, East Asia, and eastern Siberia, the movements of dogs often coincided with those of hunter-gatherer groups and agro-pastoralists, suggesting that dogs routinely traveled with humans and were absorbed into different societies along the way. Some mismatches between dog genetic lineages and human population history suggest that communities of different ancestry may have exchanged dogs; this is especially true of the Arctic dog lineage, which appears among hunter-gatherer groups of different ancestries across Eurasia.

  12. After adopting Rust, Android sharply cuts memory-safety bugs and speeds up code review

    The Google security blog discussed the marked benefits of developing in Rust. Rust is a memory-safe programming language, and Google says the density of memory-safety vulnerabilities in its Rust code is one thousandth of that in Android's C and C++ code. The biggest surprise, though, is that Rust has sped up software delivery: the rollback rate for Rust changes dropped to a quarter of its previous level, and code-review time fell by 25%. Code review is a slow, high-latency part of development, and rework is the main source of delay; the data show that Rust code needs fewer revisions, requiring about 20% fewer changes than C++ for similarly sized modifications.

  13. Thunderbird 145 released

    The Thunderbird mail client has released v145. Major changes include enabling DNS over HTTPS, Microsoft Exchange support via Exchange Web Services, and numerous bug fixes. Like Firefox 145, Thunderbird no longer ships 32-bit Linux builds starting with v145.

  14. Blue Origin completes first New Glenn booster recovery

    Jeff Bezos's Blue Origin landed the booster of its giant New Glenn rocket on an uncrewed barge in the Atlantic in only its second attempt, becoming the second company after SpaceX to achieve the feat. New Glenn is a two-stage rocket 7 meters in diameter; its first stage, powered by seven BE-4 engines, is designed to be reusable, while the second stage is expendable. New Glenn first launched on January 16, 2025. Thursday's flight was its second, a commercial mission sending two NASA probes toward Mars. The success signals that Blue Origin is now capable of competing with SpaceX.

  15. XPeng AeroHT begins mass production of "flying cars"

    XPeng's AeroHT flying-car factory began trial production on November 3, rolling the first Land Aircraft Carrier flight module off the line. The 120,000-square-meter plant has a planned annual capacity of 10,000 units, 5,000 initially; at full capacity, a flight module rolls off the line every 30 minutes, accelerating large-scale production of the Land Aircraft Carrier in 2026. Unlike a conventional car that flies, XPeng's design actually pairs a ground vehicle with a hybrid-electric VTOL aircraft: the vehicle is called the Land Aircraft Carrier, and the aircraft is the fully tilting hybrid-electric flying car A868, with a cruising speed of 360 km/h, a range of over 500 km, and seating for six.