WEEK · 2025-W48

Weekly Digest — 2025-W48

110 unique stories (2025-11-24 to 2025-11-30), aggregated across 8 sources.

Hacker News(42)

  1. PS5 now costs less than 64GB of DDR5 memory. RAM jumps to $600 due to shortage (www.tomshardware.com)
  2. Claude Advanced Tool Use (www.anthropic.com)
  3. Pebble Watch software is now 100% open source (ericmigi.com)
  4. Claude Opus 4.5 (www.anthropic.com)
  5. GrapheneOS migrates server infrastructure from France (www.privacyguides.org)
  6. France threatens GrapheneOS with arrests / server seizure for refusing backdoors (mamot.fr)
  7. Ilya Sutskever: We're moving from the age of scaling to the age of research (www.dwarkesh.com)
  8. Unison 1.0 (www.unison-lang.org)
  9. Google Antigravity exfiltrates data via indirect prompt injection attack (www.promptarmor.com)
  10. Jakarta is now the biggest city in the world (www.axios.com)
  11. How to repurpose your old phone into a web server (far.computer)
  12. Show HN: We built an open source, zero webhooks payment processor (github.com)

GitHub Trending(6)

  1. sansan0 / TrendRadar

    🎯 Say goodbye to information overload: AI helps you make sense of trending news, with simple public-opinion monitoring and analysis. A multi-platform trending-topic aggregator plus an MCP-based AI analysis tool. Monitors 35 platforms (Douyin, Zhihu, Bilibili, Wallstreetcn, Cailian Press, and more) with smart filtering, automatic push notifications, and conversational AI analysis (dig into the news in natural language: trend tracking, sentiment analysis, similarity search, and 13 tools in total). Pushes to WeCom, personal WeChat, Feishu, DingTalk, Telegram, email, ntfy, or Bark; 30-second web deployment, phone notifications within 1 minute, no programming required. Docker deployment supported. ⭐ Let the algorithm work for you and use AI to understand what's trending.

  2. google / adk-go

    An open-source, code-first Go toolkit for building, evaluating, and deploying sophisticated AI agents with flexibility and control.

  3. TapXWorld / ChinaTextbook

    PDF textbooks for all primary, middle, and high school as well as university courses.

  4. yeongpin / cursor-free-vip

    [Support 0.49.x] (Reset Cursor AI MachineID & Bypass Higher Token Limit) Automatically resets the Cursor AI machine ID so you can keep using Pro features for free after hitting: "You've reached your trial request limit. / Too many free trial accounts used on this machine. Please upgrade to pro. We have this limit in place to prevent abuse. Please let us know if you believe this is a mistake."

  5. nvm-sh / nvm

    Node Version Manager - POSIX-compliant bash script to manage multiple active node.js versions

  6. traefik / traefik

    The Cloud Native Application Proxy

Hugging Face(31)

  1. OpenMMReasoner: Pushing the Frontiers for Multimodal Reasoning with an Open and General Recipe

    Recent advancements in large reasoning models have fueled growing interest in extending such capabilities to multimodal domains. However, despite notable progress in visual reasoning, the lack of transparent and reproducible data curation and training strategies remains a major barrier to scalable research. In this work, we introduce OpenMMReasoner, a fully transparent two-stage recipe for multimodal reasoning spanning supervised fine-tuning (SFT) and reinforcement learning (RL). In the SFT stage, we construct an 874K-sample cold-start dataset with rigorous step-by-step validation, providing a strong foundation for reasoning capabilities. The subsequent RL stage leverages a 74K-sample dataset across diverse domains to further sharpen and stabilize these abilities, resulting in a more robust and efficient learning process. Extensive evaluations demonstrate that our training recipe not only surpasses strong baselines but also highlights the critical role of data quality and training design in shaping multimodal reasoning performance. Notably, our method achieves an 11.6% improvement over the Qwen2.5-VL-7B-Instruct baseline across nine multimodal reasoning benchmarks, establishing a solid empirical foundation for future large-scale multimodal reasoning research. We open-source all our code, pipeline, and data at https://github.com/EvolvingLMMs-Lab/OpenMMReasoner.

  2. Unveiling Intrinsic Dimension of Texts: from Academic Abstract to Creative Story

    Intrinsic dimension (ID) is an important tool in modern LLM analysis, informing studies of training dynamics, scaling behavior, and dataset structure, yet its textual determinants remain underexplored. We provide the first comprehensive study grounding ID in interpretable text properties through cross-encoder analysis, linguistic features, and sparse autoencoders (SAEs). In this work, we establish three key findings. First, ID is complementary to entropy-based metrics: after controlling for length, the two are uncorrelated, with ID capturing geometric complexity orthogonal to prediction quality. Second, ID exhibits robust genre stratification: scientific prose shows low ID (~8), encyclopedic content medium ID (~9), and creative/opinion writing high ID (~10.5) across all models tested. This reveals that contemporary LLMs find scientific text "representationally simple" while fiction requires additional degrees of freedom. Third, using SAEs, we identify causal features: scientific signals (formal tone, report templates, statistics) reduce ID; humanized signals (personalization, emotion, narrative) increase it. Steering experiments confirm these effects are causal. Thus, for contemporary models, scientific writing appears comparatively "easy", whereas fiction, opinion, and affect add representational degrees of freedom. Our multi-faceted analysis provides practical guidance for the proper use of ID and the sound interpretation of ID-based results.
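
    The abstract does not name its ID estimator, but a standard choice for this kind of representation analysis is the TwoNN maximum-likelihood estimator (Facco et al., 2017), which infers dimension from the ratio of each point's two nearest-neighbor distances. A minimal sketch, assuming the text representations are already available as a NumPy array (the function name and toy data are illustrative, not from the paper):

      import numpy as np

      def two_nn_intrinsic_dimension(points: np.ndarray) -> float:
          """TwoNN MLE of intrinsic dimension for an (n, d_ambient) array."""
          # Pairwise Euclidean distances via the squared-norm expansion.
          sq = (points ** 2).sum(axis=1)
          d2 = sq[:, None] + sq[None, :] - 2.0 * points @ points.T
          dists = np.sqrt(np.maximum(d2, 0.0))
          np.fill_diagonal(dists, np.inf)           # ignore self-distances

          # r1 = nearest-neighbor distance, r2 = second nearest, per point.
          sorted_d = np.sort(dists, axis=1)
          mu = sorted_d[:, 1] / sorted_d[:, 0]      # ratios follow a Pareto(d) law

          # MLE of the Pareto shape parameter = intrinsic dimension estimate.
          return len(mu) / np.sum(np.log(mu))

      # Sanity check: a 2-D plane linearly embedded in 100-D should give ID ~ 2.
      rng = np.random.default_rng(0)
      planar = rng.normal(size=(2000, 2)) @ rng.normal(size=(2, 100))
      print(round(two_nn_intrinsic_dimension(planar), 2))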

  3. GeoVista: Web-Augmented Agentic Visual Reasoning for Geolocalization

    Current research on agentic visual reasoning enables deep multimodal understanding but primarily focuses on image manipulation tools, leaving a gap toward more general-purpose agentic models. In this work, we revisit the geolocalization task, which requires not only nuanced visual grounding but also web search to confirm or refine hypotheses during reasoning. Since existing geolocalization benchmarks fail to meet the need for high-resolution imagery and the localization challenge for deep agentic reasoning, we curate GeoBench, a benchmark that includes photos and panoramas from around the world, along with a subset of satellite images of different cities to rigorously evaluate the geolocalization ability of agentic models. We also propose GeoVista, an agentic model that seamlessly integrates tool invocation within the reasoning loop, including an image-zoom-in tool to magnify regions of interest and a web-search tool to retrieve related web information. We develop a complete training pipeline for it, including a cold-start supervised fine-tuning (SFT) stage to learn reasoning patterns and tool-use priors, followed by a reinforcement learning (RL) stage to further enhance reasoning ability. We adopt a hierarchical reward to leverage multi-level geographical information and improve overall geolocalization performance. Experimental results show that GeoVista substantially surpasses other open-source agentic models on the geolocalization task and achieves performance comparable to closed-source models such as Gemini-2.5-flash and GPT-5 on most metrics.

  4. SAM 3: Segment Anything with Concepts

    We present Segment Anything Model (SAM) 3, a unified model that detects, segments, and tracks objects in images and videos based on concept prompts, which we define as either short noun phrases (e.g., "yellow school bus"), image exemplars, or a combination of both. Promptable Concept Segmentation (PCS) takes such prompts and returns segmentation masks and unique identities for all matching object instances. To advance PCS, we build a scalable data engine that produces a high-quality dataset with 4M unique concept labels, including hard negatives, across images and videos. Our model consists of an image-level detector and a memory-based video tracker that share a single backbone. Recognition and localization are decoupled with a presence head, which boosts detection accuracy. SAM 3 doubles the accuracy of existing systems in both image and video PCS, and improves previous SAM capabilities on visual segmentation tasks. We open source SAM 3 along with our new Segment Anything with Concepts (SA-Co) benchmark for promptable concept segmentation.

  5. O-Mem: Omni Memory System for Personalized, Long Horizon, Self-Evolving Agents

    Recent advancements in LLM-powered agents have demonstrated significant potential in generating human-like responses; however, they continue to face challenges in maintaining long-term interactions within complex environments, primarily due to limitations in contextual consistency and dynamic personalization. Existing memory systems often depend on semantic grouping prior to retrieval, which can overlook semantically irrelevant yet critical user information and introduce retrieval noise. In this report, we propose the initial design of O-Mem, a novel memory framework based on active user profiling that dynamically extracts and updates user characteristics and event records from their proactive interactions with agents. O-Mem supports hierarchical retrieval of persona attributes and topic-related context, enabling more adaptive and coherent personalized responses. O-Mem achieves 51.67% on the public LoCoMo benchmark, a nearly 3% improvement upon LangMem, the previous state-of-the-art, and it achieves 62.99% on PERSONAMEM, a 3.5% improvement upon A-Mem, the previous state-of-the-art. O-Mem also improves token efficiency and interaction response time compared to previous memory frameworks. Our work opens up promising directions for developing efficient and human-like personalized AI assistants in the future.
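
    As a rough illustration of the "persona attributes plus topic-related context" retrieval idea described above, here is a toy Python sketch; the class name, fields, and retrieval rule are invented for illustration and are not O-Mem's actual implementation:

      from collections import defaultdict

      class ProfileMemory:
          """Toy two-level memory: stable persona attributes + topic-indexed events."""

          def __init__(self):
              self.persona = {}                   # e.g. {"diet": "vegetarian"}
              self.events = defaultdict(list)     # topic -> chronological event records

          def update(self, attribute=None, topic=None, event=None):
              if attribute is not None:           # attribute = (key, value)
                  key, value = attribute
                  self.persona[key] = value
              if topic is not None and event is not None:
                  self.events[topic].append(event)

          def retrieve(self, topic, k=3):
              """Hierarchical lookup: always return persona, plus the last k events on topic."""
              return {"persona": dict(self.persona),
                      "context": self.events.get(topic, [])[-k:]}

      mem = ProfileMemory()
      mem.update(attribute=("diet", "vegetarian"))
      mem.update(topic="travel", event="asked for a 3-day Kyoto food itinerary")
      print(mem.retrieve("travel"))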

  6. PARROT: Persuasion and Agreement Robustness Rating of Output Truth - A Sycophancy Robustness Benchmark for LLMs

    This study presents PARROT (Persuasion and Agreement Robustness Rating of Output Truth), a robustness-focused framework designed to measure the degradation in accuracy that occurs in large language models (LLMs) under social pressure exerted through authority and persuasion, i.e., the phenomenon of sycophancy (excessive conformity). PARROT (i) isolates causal effects by comparing the neutral version of the same question with an authoritatively false version using a double-blind evaluation, (ii) quantifies confidence shifts toward the correct and imposed false responses using log-likelihood-based calibration tracking, and (iii) systematically classifies failure modes (e.g., robust correct, sycophantic agreement, reinforced error, stubborn error, self-correction, etc.) using an eight-state behavioral taxonomy. We evaluated 22 models using 1,302 MMLU-style multiple-choice questions across 13 domains and domain-specific authority templates. Findings show marked heterogeneity: advanced models (e.g., GPT-5, GPT-4.1, Claude Sonnet 4.5) exhibit low "follow rates" (≤ 11%; GPT-5: 4%) and minimal accuracy loss, while older/smaller models show severe epistemic collapse (GPT-4: 80%, Qwen 2.5-1.5B: 94%). The danger is not limited to response changes; weak models reduce confidence in the correct response while increasing confidence in the imposed incorrect response. While international law and global knowledge at the domain level exhibit high fragility, elementary mathematics is relatively resilient. Consequently, we argue that the goal of "resistance to overfitting pressure" should be addressed as a primary objective alongside accuracy, harm avoidance, and privacy for safe deployment in the real world.
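
    The abstract names several of the eight behavioral states; a small helper like the following (state names paraphrased from the abstract, not the paper's exact definitions) shows how answers to the neutral and authority-pressured versions of a question might be bucketed:

      def classify_pressure_outcome(gold, neutral, pressured, imposed_false):
          """Label how an answer shifts between the neutral and pressured prompts."""
          if neutral == gold and pressured == gold:
              return "robust correct"
          if neutral == gold and pressured == imposed_false:
              return "sycophantic agreement"
          if neutral != gold and pressured == gold:
              return "self-correction"
          if neutral != gold and pressured == imposed_false:
              return "reinforced error"
          if neutral != gold and pressured == neutral:
              return "stubborn error"
          return "other shift"

      # The model was right on the neutral prompt, then adopted the imposed wrong answer.
      print(classify_pressure_outcome(gold="B", neutral="B",
                                      pressured="D", imposed_false="D"))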

  7. General Agentic Memory Via Deep Research

    Memory is critical for AI agents, yet the widely-adopted static memory, aiming to create readily available memory in advance, is inevitably subject to severe information loss. To address this limitation, we propose a novel framework called general agentic memory (GAM). GAM follows the principle of "just-in-time (JIT) compilation" where it focuses on creating optimized contexts for its client at runtime while keeping only simple but useful memory during the offline stage. To this end, GAM employs a duo-design with the following components. 1) Memorizer, which highlights key historical information using a lightweight memory, while maintaining complete historical information within a universal page-store. 2) Researcher, which retrieves and integrates useful information from the page-store for each online request, guided by the pre-constructed memory. This design allows GAM to effectively leverage the agentic capabilities and test-time scalability of frontier large language models (LLMs), while also facilitating end-to-end performance optimization through reinforcement learning. In our experimental study, we demonstrate that GAM achieves substantial improvements over existing memory systems on various memory-grounded task completion scenarios.

  8. AutoEnv: Automated Environments for Measuring Cross-Environment Agent Learning

    Humans naturally adapt to diverse environments by learning underlying rules across worlds with different dynamics, observations, and reward structures. In contrast, existing agents typically demonstrate improvements via self-evolving within a single domain, implicitly assuming a fixed environment distribution. Cross-environment learning has remained largely unmeasured: there is no standard collection of controllable, heterogeneous environments, nor a unified way to represent how agents learn. We address these gaps in two steps. First, we propose AutoEnv, an automated framework that treats environments as factorizable distributions over transitions, observations, and rewards, enabling low-cost (4.12 USD on average) generation of heterogeneous worlds. Using AutoEnv, we construct AutoEnv-36, a dataset of 36 environments with 358 validated levels, on which seven language models achieve 12-49% normalized reward, demonstrating the challenge of AutoEnv-36. Second, we formalize agent learning as a component-centric process driven by three stages of Selection, Optimization, and Evaluation applied to an improvable agent component. Using this formulation, we design eight learning methods and evaluate them on AutoEnv-36. Empirically, the gains from any single learning method quickly diminish as the number of environments increases, revealing that fixed learning methods do not scale across heterogeneous environments. Environment-adaptive selection of learning methods substantially improves performance but exhibits diminishing returns as the method space expands. These results highlight both the necessity and the current limitations of agent learning for scalable cross-environment generalization, and position AutoEnv and AutoEnv-36 as a testbed for studying cross-environment agent learning. The code is available at https://github.com/FoundationAgents/AutoEnv.

  9. Computer-Use Agents as Judges for Generative User Interface

    Computer-Use Agents (CUA) are becoming increasingly capable of autonomously operating digital environments through Graphical User Interfaces (GUI). Yet most GUIs remain designed primarily for humans, prioritizing aesthetics and usability, which forces agents to adopt human-oriented behaviors that are unnecessary for efficient task execution. At the same time, rapid advances in coding-oriented language models (Coder) have transformed automatic GUI design. This raises a fundamental question: can CUAs act as judges to assist Coders in automatic GUI design? To investigate, we introduce AUI-Gym, a benchmark for Automatic GUI development spanning 52 applications across diverse domains. Using language models, we synthesize 1560 tasks that simulate real-world scenarios. To ensure task reliability, we further develop a verifier that programmatically checks whether each task is executable within its environment. Building on this, we propose a Coder-CUA in Collaboration framework: the Coder acts as Designer, generating and revising websites, while the CUA serves as Judge, evaluating functionality and refining designs. Success is measured not by visual appearance, but by task solvability and CUA navigation success rate. To turn CUA feedback into usable guidance, we design a CUA Dashboard that compresses multi-step navigation histories into concise visual summaries, offering interpretable guidance for iterative redesign. By positioning agents as both designers and judges, our framework shifts interface design toward agent-native efficiency and reliability. Our work takes a step toward shifting agents from passive use toward active participation in digital environments. Our code and dataset are available at https://github.com/showlab/AUI.

  10. DeCo: Frequency-Decoupled Pixel Diffusion for End-to-End Image Generation

    Pixel diffusion aims to generate images directly in pixel space in an end-to-end fashion. This approach avoids the limitations of the VAE in two-stage latent diffusion, offering higher model capacity. Existing pixel diffusion models suffer from slow training and inference, as they usually model both high-frequency signals and low-frequency semantics within a single diffusion transformer (DiT). To pursue a more efficient pixel diffusion paradigm, we propose the frequency-DeCoupled pixel diffusion framework. With the intuition of decoupling the generation of high- and low-frequency components, we leverage a lightweight pixel decoder to generate high-frequency details conditioned on semantic guidance from the DiT. This frees the DiT to specialize in modeling low-frequency semantics. In addition, we introduce a frequency-aware flow-matching loss that emphasizes visually salient frequencies while suppressing insignificant ones. Extensive experiments show that DeCo achieves superior performance among pixel diffusion models, attaining an FID of 1.62 (256x256) and 2.22 (512x512) on ImageNet, closing the gap with latent diffusion methods. Furthermore, our pretrained text-to-image model achieves a leading overall score of 0.86 on GenEval in system-level comparison. Codes are publicly available at https://github.com/Zehong-Ma/DeCo.
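
    The paper's exact loss is not given here, but the "frequency-aware flow-matching" idea can be sketched as weighting the flow-matching residual in the 2-D Fourier domain; the weighting below (down-weighting the highest spatial frequencies) is an assumption for illustration, written with PyTorch:

      import torch

      def frequency_weighted_fm_loss(pred_velocity, target_velocity, alpha=4.0):
          """Toy frequency-weighted flow-matching loss for (B, C, H, W) tensors."""
          # Residual between predicted and target velocity fields, in frequency space.
          residual = torch.fft.fft2(pred_velocity - target_velocity, norm="ortho")

          # Normalized radial frequency per spectral bin.
          fy = torch.fft.fftfreq(pred_velocity.shape[-2], device=pred_velocity.device)
          fx = torch.fft.fftfreq(pred_velocity.shape[-1], device=pred_velocity.device)
          radius = torch.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
          radius = radius / radius.max()

          # Emphasize lower (visually salient) frequencies, damp the highest ones.
          weight = 1.0 / (1.0 + alpha * radius)

          return (weight * residual.abs() ** 2).mean()

      pred, target = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
      print(frequency_weighted_fm_loss(pred, target).item())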

  11. DR Tulu: Reinforcement Learning with Evolving Rubrics for Deep Research

    Deep research models perform multi-step research to produce long-form, well-attributed answers. However, most open deep research models are trained on easily verifiable short-form QA tasks via reinforcement learning with verifiable rewards (RLVR), which does not extend to realistic long-form tasks. We address this with Reinforcement Learning with Evolving Rubrics (RLER), in which we construct and maintain rubrics that co-evolve with the policy model during training; this allows the rubrics to incorporate information that the model has newly explored and to provide discriminative, on-policy feedback. Using RLER, we develop Deep Research Tulu (DR Tulu-8B), the first open model that is directly trained for open-ended, long-form deep research. Across four long-form deep research benchmarks in science, healthcare and general domains, DR Tulu substantially outperforms existing open deep research models, and matches or exceeds proprietary deep research systems, while being significantly smaller and cheaper per query. To facilitate future research, we release all data, models, and code, including our new MCP-based agent infrastructure for deep research systems.

  12. UltraFlux: Data-Model Co-Design for High-quality Native 4K Text-to-Image Generation across Diverse Aspect Ratios

    Diffusion transformers have recently delivered strong text-to-image generation around 1K resolution, but we show that extending them to native 4K across diverse aspect ratios exposes a tightly coupled failure mode spanning positional encoding, VAE compression, and optimization. Tackling any of these factors in isolation leaves substantial quality on the table. We therefore take a data-model co-design view and introduce UltraFlux, a Flux-based DiT trained natively at 4K on MultiAspect-4K-1M, a 1M-image 4K corpus with controlled multi-AR coverage, bilingual captions, and rich VLM/IQA metadata for resolution- and AR-aware sampling. On the model side, UltraFlux couples (i) Resonance 2D RoPE with YaRN for training-window-, frequency-, and AR-aware positional encoding at 4K; (ii) a simple, non-adversarial VAE post-training scheme that improves 4K reconstruction fidelity; (iii) an SNR-Aware Huber Wavelet objective that rebalances gradients across timesteps and frequency bands; and (iv) a Stage-wise Aesthetic Curriculum Learning strategy that concentrates high-aesthetic supervision on high-noise steps governed by the model prior. Together, these components yield a stable, detail-preserving 4K DiT that generalizes across wide, square, and tall ARs. On the Aesthetic-Eval at 4096 benchmark and multi-AR 4K settings, UltraFlux consistently outperforms strong open-source baselines across fidelity, aesthetic, and alignment metrics, and, with an LLM prompt refiner, matches or surpasses the proprietary Seedream 4.0.

Solidot(31)

  1. Valve's annual revenue estimated at over $16 billion, roughly $50 million per employee

    Research firm Alinea Analytics estimates that Valve's 2025 annual revenue will land between $16 and 17 billion. With roughly 350 employees, that works out to about $50 million in revenue per employee. Valve is privately held and is not required to disclose figures such as revenue; even its headcount only became public through litigation. Valve pays its staff generously: according to the leaked data, it spent nearly $450 million on employee salaries, averaging more than $1.3 million per employee.

  2. Dressing up as Batman promotes kindness

    According to a study published in the journal Mental Health Research, dressing up as Batman may promote prosocial behavior in public. Italian researchers ran the study on the Milan metro, observing 138 train rides. In the control condition, a woman dressed as a pregnant passenger boarded the train together with an observer; in the experimental condition, a team member dressed as Batman boarded as well. Passengers were significantly more likely to give up their seats when Batman was present. Notably, 44% of those who offered their seats in the experimental condition said they had not even noticed Batman. This suggests that unexpected events can promote prosocial behavior, a finding with practical implications for encouraging kindness in public spaces.

  3. Git 3.0 will use main instead of master as the default branch name

    Starting with Git 3.0, the default branch name will be main rather than master. The debate over main versus master dates back to 2020, and GitHub switched the default branch for new repositories to main as early as October 1, 2020. Git 3.0 is expected around the end of 2026. Major changes include switching the default hash function from SHA-1 to SHA-256 for better security, changing the default storage format to better support macOS and Windows and improve performance, and more formally integrating Rust into Git's own build process.

  4. Invisible microplastics are spreading around the globe through the air

    Invisible microplastics are spreading worldwide through the air. Hiroshi Okochi, a professor of environmental chemistry at Waseda University, says recent research shows that airborne plastic pollution is spreading at an alarming rate. Airborne microplastics are less than 2.5 micrometers in diameter. A 2023 study by Okochi's team found that cloud water at the summit of Mount Fuji contained 6.7 microplastic particles per liter, and teams in Germany and Switzerland have reported more than 10,000 microplastic particles per liter of Arctic snow; these particles likely traveled through the air and were deposited with the snow. Although microplastics have been found throughout the human body, the health effects of airborne plastic particles remain unclear. Particles of 1 micrometer or smaller are thought to be able to reach the alveoli, and a UK study detected microplastics in lung tissue samples from 11 of 13 patients undergoing lung surgery, with the highest concentrations in the lower lungs. A person takes more than 20,000 breaths a day, or 600-700 million over a lifetime. Okochi says humans inevitably inhale airborne microplastics, but because the particles are invisible, we remain completely unaware of it.

  5. X's account location feature reveals many MAGA accounts are run from abroad

    Elon Musk's social media platform X/Twitter has started displaying the region where each account is based, and flags accounts that appear to be hiding their IP behind a VPN. The feature was briefly pulled after launch and has since been restored. The location data shows that many political influencer accounts are actually operated from outside the United States: MAGA NATION, with more than 392,000 followers, is run from Eastern Europe; Dark Maga, with more than 15,000 followers, from Thailand; MAGA Scope, with more than 51,000 followers, from Nigeria; and America First, with more than 67,000 followers, from Bangladesh. Among anti-MAGA accounts, Ron Smith, with more than 52,000 followers, operates from Kenya, and Republicans Against Trump, with more than 970,000 followers, from Austria, currently masking its original IP behind a US VPN.

  6. Chrome considers restoring JPEG-XL support

    In 2023 Google Chrome removed support for the experimental JPEG-XL image format. JPEG-XL is a new patent-free image format, and the removal drew heavy criticism because Chrome/Chromium holds roughly 90% of the browser market and acts as the de facto arbiter of web standards. In 2025 the situation took a dramatic turn: Google developer Rick Byers said the team is considering restoring JPEG-XL support, expected to be based on the Rust implementation of JPEG-XL. Google developers note that Safari has added JPEG-XL support, Firefox has signaled its position, and the PDF format is also preparing to add JPEG-XL. For Chromium to enable the JPEG XL decoder by default, a long-term maintenance commitment is required; if those conditions are met, support will be restored.

  7. SSDs slowly lose data when left unpowered for long periods

    Solid-state drives (SSDs) have largely replaced mechanical hard drives as the most popular storage devices: they are faster and draw less power. But if you plan to use an SSD for cold storage, writing data to it and then keeping it offline for years, you may want to think twice, because SSDs slowly degrade or lose data after long periods without power. The cheapest QLC NAND SSDs can safely retain data for about a year while unpowered, pricier TLC NAND SSDs for about three years, and MLC and SLC NAND SSDs for roughly 5 and 10 years respectively. Since the vast majority of consumer SSDs use TLC or QLC NAND, data loss becomes a risk after more than a year without power. By comparison, mechanical hard drives are better suited than SSDs to long-term offline storage.

  8. Microsoft warns that Windows AI features may hallucinate

    Microsoft keeps adding AI features to Windows 11. The latest test build, v26220.7262, adds Copilot Actions, which is disabled by default and requires administrator privileges to activate. For these LLM-based features, Microsoft is shifting responsibility onto users: its support documentation warns that features like Copilot Actions introduce new security risks, such as cross-prompt injection (XPIA), in which malicious content embedded in documents or UI elements can override the AI's instructions and trigger unintended actions such as data exfiltration or malware installation. It advises users to enable the features only if they understand the security risks. Copilot Actions has broad access and can read and write files in folders such as Documents, Downloads, Desktop, Pictures, Videos, and Music. Microsoft also notes that the AI may hallucinate and "produce unexpected output."

  9. Pew survey shows YouTube is still the most popular social media platform in the US

    The Pew Research Center surveyed 5,022 Americans about their social media use between February 5 and June 18. The results: YouTube 84%, Facebook 71%, Instagram 50%, TikTok 37%, WhatsApp 32%, Reddit 26%, Snapchat 25%, X.com (Twitter) 21%, Threads 8%, Bluesky 4%, and Truth Social 3%. YouTube and Facebook remain the dominant platforms in the US, though their usage shares have been stable over time. Younger adults are more likely to use YouTube, while those aged 30-49 are more likely to use Facebook (80%). More than half of women use Instagram (55%) compared with 44% of men; men are more likely to use X and Reddit; and Democrats and Democratic-leaning independents are more likely to use WhatsApp, Reddit, TikTok, Bluesky, and Threads.

  10. Kissing traced back 21 million years

    Humans, monkeys, and even polar bears kiss. Researchers have now traced mouth-to-mouth kissing back 21 million years. Kissing can spread disease and does not appear to directly improve survival or reproduction, and although it carries strong emotional and cultural significance for many human groups, its evolutionary background has rarely been studied in depth. In this study, researchers made the first attempt to trace the origin of kissing across species, using the evolutionary relationships among primates. The results suggest that kissing has deep roots in the great apes, appearing in ancestors living 21.5 to 16.9 million years ago. The behavior seems to have persisted through evolution and is still observable in most species of the group. The team also concluded that Neanderthals, our extinct close relatives, probably kissed as well, a conclusion supported by earlier findings that humans and Neanderthals exchanged oral microbes (via saliva transfer) and interbred, implying that kissing was part of their interactions.

  11. Nvidia confirms Windows October update causes gaming performance problems

    Nvidia released the GeForce Hotfix Display Driver v581.94 last week, saying that Microsoft's October updates for Windows 11 24H2 and Windows 11 25H2 cause performance problems in games. The offending patch is KB5066835; after installing it, performance in some games may drop. Beyond that, Microsoft's October cumulative update has also been found to break localhost HTTP connections, cause smart card authentication issues, and leave the Windows Recovery Environment (WinRE) unable to use USB mice and keyboards.

  12. European Parliament calls for restricting minors' use of social media

    On Wednesday the European Parliament called on the EU to set a minimum age for children's use of social media, responding to growing mental health problems among teenagers linked to excessive social media exposure. Australia has already passed the world's first social media ban for children under 16, and Denmark and Malaysia plan to follow suit. By 483 votes in favor, 92 against, and 86 abstentions, the Parliament adopted a resolution calling for an EU-wide ban on children under 16 accessing online platforms, video-sharing sites, and AI assistants without parental consent, and an outright ban for children under 13. The resolution also calls for banning "loot boxes" and engagement-based recommendation algorithms aimed at minors, and for legislation requiring that content design be appropriate to children's ages.