OrangeBot.AI Digest — 2025-08-03

66 headlines across 8 sources, aggregated for the day.

Hacker News (15)

  1. Shrinking freshwater availability increasing land contribution to sea level rise (news.asu.edu)
  2. The Dollar Is Dead (mathmeetsmoney.substack.com)
  3. Modern Node.js Patterns (kashw1n.com)
  4. Yosemite embodies the long war over US national park privatization (theconversation.com)
  5. Persona vectors: Monitoring and controlling character traits in language models (www.anthropic.com)
  6. UN report finds UN reports are not widely read (www.reuters.com)
  7. Palantir: The Most Evil Company (politicaleconomist.substack.com)
  8. How to make almost anything (2019) (fab.cba.mit.edu)
  9. Tokens are getting more expensive (ethanding.substack.com)
  10. 2,500-year-old Siberian 'ice mummy' had intricate tattoos, imaging reveals (www.bbc.com)
  11. If you're remote, ramble (stephango.com)
  12. Self-employed, self-exhausted (theisolationjournals.substack.com)
  13. A Real PowerBook: The Macintosh Application Environment on a Pa-RISC Laptop (oldvcr.blogspot.com)
  14. Build Your Own Minisforum N5 Inspired Mini NAS (jackharvest.com)
  15. Seed7 – Extensible Programming Language (seed7.net)

GitHub Trending (15)

  1. dyad-sh / dyad

    Free, local, open-source AI app builder | v0 / lovable / Bolt alternative | 🌟 Star if you like it!

  2. wg-easy / wg-easy

    The easiest way to run WireGuard VPN + Web-based Admin UI.

  3. eclipse-sumo / sumo

    Eclipse SUMO is an open source, highly portable, microscopic and continuous traffic simulation package designed to handle large networks. It allows for intermodal simulation including pedestrians and comes with a large set of tools for scenario creation.

  4. trekhleb / javascript-algorithms

    📝 Algorithms and data structures implemented in JavaScript with explanations and links to further readings

  5. XTLS / Xray-core

    Xray, Penetrates Everything. Also the best v2ray-core. Where the magic happens. An open platform for various uses.

  6. jellyfin / jellyfin

    The Free Software Media System - Server Backend & API

  7. rasbt / LLMs-from-scratch

    Implement a ChatGPT-like LLM in PyTorch from scratch, step by step

  8. LadybirdBrowser / ladybird

    Truly independent web browser

  9. sst / opencode

    AI coding agent, built for the terminal.

  10. TideDra / zotero-arxiv-daily

    Recommend new arXiv papers of your interest daily according to your Zotero library.

  11. TandoorRecipes / recipes

    Application for managing recipes, planning meals, building shopping lists and much much more!

  12. MotiaDev / motia

    Modern Backend Framework that unifies APIs, background jobs, workflows, and AI agents into a single cohesive system with built-in observability and state management.

  13. reflex-dev / reflex

    🕸️ Web apps in pure Python 🐍

  14. flydelabs / flyde

    Open-source Visual programming for backend logic that integrates with existing codebases. Flyde bridges the gap between technical and non-technical team members. Product managers, designers, and backend developers can collaborate on the same visual flows.

  15. pointfreeco / swift-composable-architecture

    A library for building applications in a consistent and understandable way, with composition, testing, and ergonomics in mind.

Product Hunt (7)

  1. SEO Speed Test

    Google & ChatGPT ignore slow pages, check if yours is fast!

  2. Cipher by Byterover

    Open-source, shared memory for coding agents

  3. Watchman AI

    Capturing invisible B2B buyers with AI agents

  4. Hypertune

    Type-safe feature flags, optimized for React and Next.js

  5. Deposure

    Launch your APIs live effortlessly

  6. Standout

    Your personal AI headhunter on WhatsApp to find startup jobs

  7. Seed Diffusion

    A faster, more holistic way to generate code

Hugging Face (15)

  1. Seed-Prover: Deep and Broad Reasoning for Automated Theorem Proving

    LLMs have demonstrated strong mathematical reasoning abilities by leveraging reinforcement learning with long chain-of-thought, yet they continue to struggle with theorem proving due to the lack of clear supervision signals when solely using natural language. Dedicated domain-specific languages like Lean provide clear supervision via formal verification of proofs, enabling effective training through reinforcement learning. In this work, we propose Seed-Prover, a lemma-style whole-proof reasoning model. Seed-Prover can iteratively refine its proof based on Lean feedback, proved lemmas, and self-summarization. To solve IMO-level contest problems, we design three test-time inference strategies that enable both deep and broad reasoning. Seed-Prover proves 78.1% of formalized past IMO problems, saturates MiniF2F, and achieves over 50% on PutnamBench, outperforming the previous state-of-the-art by a large margin. To address the lack of geometry support in Lean, we introduce a geometry reasoning engine Seed-Geometry, which outperforms previous formal geometry engines. We use these two systems to participate in IMO 2025 and fully prove 5 out of 6 problems. This work represents a significant advancement in automated mathematical reasoning, demonstrating the effectiveness of formal verification with long chain-of-thought reasoning.
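
    To make the "clear supervision via formal verification" the abstract relies on concrete, a toy illustration (unrelated to Seed-Prover's actual code): a Lean proof either passes the kernel's type check or it does not, which is exactly the binary reward signal that makes reinforcement learning tractable here.

```lean
-- Toy lemma: Lean's kernel either accepts this term as a proof of the
-- statement or rejects it, a clean pass/fail supervision signal.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```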

  2. Phi-Ground Tech Report: Advancing Perception in GUI Grounding

    With the development of multimodal reasoning models, Computer Use Agents (CUAs), akin to Jarvis from "Iron Man", are becoming a reality. GUI grounding is a core component for CUAs to execute actual actions, similar to mechanical control in robotics, and it directly leads to the success or failure of the system. It determines actions such as clicking and typing, as well as related parameters like the coordinates for clicks. Current end-to-end grounding models still achieve less than 65% accuracy on challenging benchmarks like ScreenSpot-pro and UI-Vision, indicating they are far from being ready for deployment, as a single misclick can result in unacceptable consequences. In this work, we conduct an empirical study on the training of grounding models, examining details from data collection to model training. Ultimately, we developed the Phi-Ground model family, which achieves state-of-the-art performance across all five grounding benchmarks for models under 10B parameters in agent settings. In the end-to-end model setting, our model still achieves SOTA results with scores of 43.2 on ScreenSpot-pro and 27.2 on UI-Vision. We believe that the various details discussed in this paper, along with our successes and failures, not only clarify the construction of grounding models but also benefit other perception tasks. Project homepage: https://zhangmiaosen2000.github.io/Phi-Ground/

  3. RecGPT Technical Report

    Recommender systems are among the most impactful applications of artificial intelligence, serving as critical infrastructure connecting users, merchants, and platforms. However, most current industrial systems remain heavily reliant on historical co-occurrence patterns and log-fitting objectives, i.e., optimizing for past user interactions without explicitly modeling user intent. This log-fitting approach often leads to overfitting to narrow historical preferences, failing to capture users' evolving and latent interests. As a result, it reinforces filter bubbles and long-tail phenomena, ultimately harming user experience and threatening the sustainability of the whole recommendation ecosystem. To address these challenges, we rethink the overall design paradigm of recommender systems and propose RecGPT, a next-generation framework that places user intent at the center of the recommendation pipeline. By integrating large language models (LLMs) into key stages of user interest mining, item retrieval, and explanation generation, RecGPT transforms log-fitting recommendation into an intent-centric process. To effectively align general-purpose LLMs to the above domain-specific recommendation tasks at scale, RecGPT incorporates a multi-stage training paradigm, which integrates reasoning-enhanced pre-alignment and self-training evolution, guided by a Human-LLM cooperative judge system. Currently, RecGPT has been fully deployed on the Taobao App. Online experiments demonstrate that RecGPT achieves consistent performance gains across stakeholders: users benefit from increased content diversity and satisfaction, while merchants and the platform gain greater exposure and conversions. These comprehensive improvements across all stakeholders validate that LLM-driven, intent-centric design can foster a more sustainable and mutually beneficial recommendation ecosystem.

  4. iLRM: An Iterative Large 3D Reconstruction Model

    Feed-forward 3D modeling has emerged as a promising approach for rapid and high-quality 3D reconstruction. In particular, directly generating explicit 3D representations, such as 3D Gaussian splatting, has attracted significant attention due to its fast and high-quality rendering, as well as numerous applications. However, many state-of-the-art methods, primarily based on transformer architectures, suffer from severe scalability issues because they rely on full attention across image tokens from multiple input views, resulting in prohibitive computational costs as the number of views or image resolution increases. Toward a scalable and efficient feed-forward 3D reconstruction, we introduce an iterative Large 3D Reconstruction Model (iLRM) that generates 3D Gaussian representations through an iterative refinement mechanism, guided by three core principles: (1) decoupling the scene representation from input-view images to enable compact 3D representations; (2) decomposing fully-attentional multi-view interactions into a two-stage attention scheme to reduce computational costs; and (3) injecting high-resolution information at every layer to achieve high-fidelity reconstruction. Experimental results on widely used datasets, such as RE10K and DL3DV, demonstrate that iLRM outperforms existing methods in both reconstruction quality and speed. Notably, iLRM exhibits superior scalability, delivering significantly higher reconstruction quality under comparable computational cost by efficiently leveraging a larger number of input views.

  5. villa-X: Enhancing Latent Action Modeling in Vision-Language-Action Models

    Visual-Language-Action (VLA) models have emerged as a popular paradigm for learning robot manipulation policies that can follow language instructions and generalize to novel scenarios. Recent work has begun to explore the incorporation of latent actions, an abstract representation of visual change between two frames, into VLA pre-training. In this paper, we introduce villa-X, a novel Visual-Language-Latent-Action (ViLLA) framework that advances latent action modeling for learning generalizable robot manipulation policies. Our approach improves both how latent actions are learned and how they are incorporated into VLA pre-training. Together, these contributions enable villa-X to achieve superior performance across simulated environments including SIMPLER and LIBERO, as well as on two real-world robot setups including gripper and dexterous hand manipulation. We believe the ViLLA paradigm holds significant promise, and that our villa-X provides a strong foundation for future research.

  6. C3: A Bilingual Benchmark for Spoken Dialogue Models Exploring Challenges in Complex Conversations

    Spoken Dialogue Models (SDMs) have recently attracted significant attention for their ability to generate voice responses directly to users' spoken queries. Despite their increasing popularity, there exists a gap in research focused on comprehensively understanding their practical effectiveness in comprehending and emulating human conversations. This is especially true compared to text-based Large Language Models (LLMs), which benefit from extensive benchmarking. Human voice interactions are inherently more complex than text due to characteristics unique to spoken dialogue. Ambiguity poses one challenge, stemming from semantic factors like polysemy, as well as phonological aspects such as heterographs, heteronyms, and stress patterns. Additionally, context-dependency, like omission, coreference, and multi-turn interaction, adds further complexity to human conversational dynamics. To illuminate the current state of SDM development and to address these challenges, we present a benchmark dataset in this paper, which comprises 1,079 instances in English and Chinese. Accompanied by an LLM-based evaluation method that closely aligns with human judgment, this dataset facilitates a comprehensive exploration of the performance of SDMs in tackling these practical challenges.

  7. Persona Vectors: Monitoring and Controlling Character Traits in Language Models

    Large language models interact with users through a simulated 'Assistant' persona. While the Assistant is typically trained to be helpful, harmless, and honest, it sometimes deviates from these ideals. In this paper, we identify directions in the model's activation space-persona vectors-underlying several traits, such as evil, sycophancy, and propensity to hallucinate. We confirm that these vectors can be used to monitor fluctuations in the Assistant's personality at deployment time. We then apply persona vectors to predict and control personality shifts that occur during training. We find that both intended and unintended personality changes after finetuning are strongly correlated with shifts along the relevant persona vectors. These shifts can be mitigated through post-hoc intervention, or avoided in the first place with a new preventative steering method. Moreover, persona vectors can be used to flag training data that will produce undesirable personality changes, both at the dataset level and the individual sample level. Our method for extracting persona vectors is automated and can be applied to any personality trait of interest, given only a natural-language description.
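
    The monitoring idea can be sketched numerically. A minimal, hypothetical sketch (not the paper's implementation; all names, shapes, and the mean-difference estimator are assumptions): estimate a persona vector as the difference of mean hidden activations between trait-exhibiting and neutral responses, then score a new activation by projecting onto it.

```python
import numpy as np

def persona_vector(trait_acts, neutral_acts):
    """Mean-difference direction in activation space, unit-normalized."""
    v = trait_acts.mean(axis=0) - neutral_acts.mean(axis=0)
    return v / np.linalg.norm(v)

def trait_score(activation, vector):
    """Scalar projection of one activation onto the persona vector."""
    return float(activation @ vector)

# Synthetic activations: the "trait" responses are shifted along one axis.
rng = np.random.default_rng(0)
neutral = rng.normal(size=(100, 64))
trait = neutral + 2.0 * np.eye(64)[0]

v = persona_vector(trait, neutral)
print(trait_score(trait[0], v) > trait_score(neutral[0], v))  # True
```

    In this toy setup the projection cleanly separates the two populations; in a real model one would read activations from a chosen layer and could also subtract multiples of v ("steering") as the abstract describes.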

  8. Scalable Multi-Task Reinforcement Learning for Generalizable Spatial Intelligence in Visuomotor Agents

    While Reinforcement Learning (RL) has achieved remarkable success in language modeling, its triumph hasn't yet fully translated to visuomotor agents. A primary challenge in RL models is their tendency to overfit specific tasks or environments, thereby hindering the acquisition of generalizable behaviors across diverse settings. This paper provides a preliminary answer to this challenge by demonstrating that RL-finetuned visuomotor agents in Minecraft can achieve zero-shot generalization to unseen worlds. Specifically, we explore RL's potential to enhance generalizable spatial reasoning and interaction capabilities in 3D worlds. To address challenges in multi-task RL representation, we analyze and establish cross-view goal specification as a unified multi-task goal space for visuomotor policies. Furthermore, to overcome the significant bottleneck of manual task design, we propose automated task synthesis within the highly customizable Minecraft environment for large-scale multi-task RL training, and we construct an efficient distributed RL framework to support this. Experimental results show RL significantly boosts interaction success rates by 4× and enables zero-shot generalization of spatial reasoning across diverse environments, including real-world settings. Our findings underscore the immense potential of RL training in 3D simulated environments, especially those amenable to large-scale task generation, for significantly advancing visuomotor agents' spatial reasoning.

  9. TARS: MinMax Token-Adaptive Preference Strategy for Hallucination Reduction in MLLMs

    Multimodal large language models (MLLMs) enable vision-language reasoning, yet often generate plausible outputs that are factually incorrect or visually ungrounded, thereby compromising their reliability. Direct preference optimization (DPO) is a common strategy for correcting hallucinations by aligning model outputs with human preferences. Existing DPO strategies typically treat hallucination-related preferences as fixed targets, relying on static supervision signals during training. This approach tends to overfit to superficial linguistic cues in preference data, leading to distributional rigidity and spurious correlations that impair grounding in causally relevant visual information. To overcome this limitation, we propose TARS, a token-adaptive preference strategy that reformulates DPO as a min-max optimization problem. TARS maximizes token-level distributional shifts under semantic constraints to simulate alignment uncertainty, and simultaneously minimizes the expected preference loss under these controlled perturbations. This joint objective preserves causal grounding while mitigating overfitting to preference patterns, thereby reducing hallucinations in multimodal reasoning. We evaluate TARS on multiple hallucination benchmarks and find consistently strong performance. Using only 4.8k preference samples and no expert feedback, TARS reduces hallucination rates from 26.4% to 13.2% and decreases cognition value from 2.5 to 0.4. It outperforms standard DPO and matches GPT-4o on several key metrics.

  10. NeRF Is a Valuable Assistant for 3D Gaussian Splatting

    We introduce NeRF-GS, a novel framework that jointly optimizes Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS). This framework leverages the inherent continuous spatial representation of NeRF to mitigate several limitations of 3DGS, including sensitivity to Gaussian initialization, limited spatial awareness, and weak inter-Gaussian correlations, thereby enhancing its performance. In NeRF-GS, we revisit the design of 3DGS and progressively align its spatial features with NeRF, enabling both representations to be optimized within the same scene through shared 3D spatial information. We further address the formal distinctions between the two approaches by optimizing residual vectors for both implicit features and Gaussian positions to enhance the personalized capabilities of 3DGS. Experimental results on benchmark datasets show that NeRF-GS surpasses existing methods and achieves state-of-the-art performance. This outcome confirms that NeRF and 3DGS are complementary rather than competing, offering new insights into hybrid approaches that combine 3DGS and NeRF for efficient 3D scene representation.

  11. AgroBench: Vision-Language Model Benchmark in Agriculture

    Precise automated understanding of agricultural tasks such as disease identification is essential for sustainable crop production. Recent advances in vision-language models (VLMs) are expected to further expand the range of agricultural tasks by facilitating human-model interaction through easy, text-based communication. Here, we introduce AgroBench (Agronomist AI Benchmark), a benchmark for evaluating VLMs across seven agricultural topics, covering key areas in agricultural engineering and relevant to real-world farming. Unlike recent agricultural VLM benchmarks, AgroBench is annotated by expert agronomists. Our AgroBench covers a state-of-the-art range of categories, including 203 crop categories and 682 disease categories, to thoroughly evaluate VLM capabilities. In our evaluation on AgroBench, we reveal that VLMs have room for improvement in fine-grained identification tasks. Notably, in weed identification, most open-source VLMs perform close to random. With our wide range of topics and expert-annotated categories, we analyze the types of errors made by VLMs and suggest potential pathways for future VLM development. Our dataset and code are available at https://dahlian00.github.io/AgroBenchPage/.

  12. Beyond Linear Bottlenecks: Spline-Based Knowledge Distillation for Culturally Diverse Art Style Classification

    Art style classification remains a formidable challenge in computational aesthetics due to the scarcity of expertly labeled datasets and the intricate, often nonlinear interplay of stylistic elements. While recent dual-teacher self-supervised frameworks reduce reliance on labeled data, their linear projection layers and localized focus struggle to model global compositional context and complex style-feature interactions. We enhance the dual-teacher knowledge distillation framework to address these limitations by replacing conventional MLP projection and prediction heads with Kolmogorov-Arnold Networks (KANs). Our approach retains complementary guidance from two teacher networks, one emphasizing localized texture and brushstroke patterns, the other capturing broader stylistic hierarchies while leveraging KANs' spline-based activations to model nonlinear feature correlations with mathematical precision. Experiments on WikiArt and Pandora18k demonstrate that our approach outperforms the base dual teacher architecture in Top-1 accuracy. Our findings highlight the importance of KANs in disentangling complex style manifolds, leading to better linear probe accuracy than MLP projections.

  13. On the Expressiveness of Softmax Attention: A Recurrent Neural Network Perspective

    Since its introduction, softmax attention has become the backbone of modern transformer architectures due to its expressiveness and scalability across a wide range of tasks. However, the main drawback of softmax attention is the quadratic memory requirement and computational complexity with respect to the sequence length. By replacing the softmax nonlinearity, linear attention and similar methods have been introduced to avoid the quadratic bottleneck of softmax attention. Despite these linear forms of attention being derived from the original softmax formulation, they typically lag in terms of downstream accuracy. While strong intuition of the softmax nonlinearity on the query and key inner product suggests that it has desirable properties compared to other nonlinearities, the question of why this discrepancy exists still remains unanswered. This work demonstrates that linear attention is an approximation of softmax attention by deriving the recurrent form of softmax attention. Using this form, each part of softmax attention can be described in the language of recurrent neural networks (RNNs). Describing softmax attention as an RNN allows for the ablation of the components of softmax attention to understand the importance of each part and how they interact. In this way, our work helps explain why softmax attention is more expressive than its counterparts.
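
    The derivation the abstract gestures at can be summarized in two steps (notation illustrative, not the paper's): causal softmax attention is a ratio of running sums over past keys, and replacing exp(qᵀk) with a factorized kernel φ(q)ᵀφ(k) collapses those sums into a constant-size recurrent state, which is exactly linear attention.

```latex
o_t = \frac{\sum_{i \le t} \exp(q_t^\top k_i)\, v_i}
           {\sum_{i \le t} \exp(q_t^\top k_i)}
\quad\xrightarrow{\;\exp(q^\top k)\,\approx\,\phi(q)^\top \phi(k)\;}\quad
S_t = S_{t-1} + \phi(k_t)\, v_t^\top,\qquad
z_t = z_{t-1} + \phi(k_t),\qquad
o_t = \frac{S_t^\top \phi(q_t)}{z_t^\top \phi(q_t)}
```

    The left form must keep all past keys (quadratic cost); the right form carries only (S_t, z_t), so the approximation is what buys linear complexity at the cost of expressiveness.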

  14. Flow Equivariant Recurrent Neural Networks

    Data arrives at our senses as a continuous stream, smoothly transforming from one instant to the next. These smooth transformations can be viewed as continuous symmetries of the environment that we inhabit, defining equivalence relations between stimuli over time. In machine learning, neural network architectures that respect symmetries of their data are called equivariant and have provable benefits in terms of generalization ability and sample efficiency. To date, however, equivariance has been considered only for static transformations and feed-forward networks, limiting its applicability to sequence models, such as recurrent neural networks (RNNs), and corresponding time-parameterized sequence transformations. In this work, we extend equivariant network theory to this regime of 'flows': one-parameter Lie subgroups capturing natural transformations over time, such as visual motion. We begin by showing that standard RNNs are generally not flow equivariant: their hidden states fail to transform in a geometrically structured manner for moving stimuli. We then show how flow equivariance can be introduced, and demonstrate that these models significantly outperform their non-equivariant counterparts in terms of training speed, length generalization, and velocity generalization, on both next step prediction and sequence classification. We present this work as a first step towards building sequence models that respect the time-parameterized symmetries which govern the world around us.

  15. Enhanced Arabic Text Retrieval with Attentive Relevance Scoring

    Arabic poses a particular challenge for natural language processing (NLP) and information retrieval (IR) due to its complex morphology, optional diacritics and the coexistence of Modern Standard Arabic (MSA) and various dialects. Despite the growing global significance of Arabic, it is still underrepresented in NLP research and benchmark resources. In this paper, we present an enhanced Dense Passage Retrieval (DPR) framework developed specifically for Arabic. At the core of our approach is a novel Attentive Relevance Scoring (ARS) that replaces standard interaction mechanisms with an adaptive scoring function that more effectively models the semantic relevance between questions and passages. Our method integrates pre-trained Arabic language models and architectural refinements to improve retrieval performance and significantly increase ranking accuracy when answering Arabic questions. The code is made publicly available at https://github.com/Bekhouche/APR.

Solidot (14)

  1. India to penalize universities with excessive paper retractions

    India's national university rankings will penalize universities whose researchers have large numbers of retracted papers. The move aims to curb the growing problem of retractions caused by scientific misconduct: some retractions stem from honest errors, but others result from deliberate misconduct. According to a Retraction Watch analysis of its retraction database covering the past 30 years, India trails only China and the United States in the number of retracted papers. The US sees fewer than 1 retraction per 1,000 published papers, China more than 3 per 1,000, and India 2 per 1,000. Most retractions in India and China are due to scientific misconduct or research-integrity issues.

  2. Belgium restricts access to the Internet Archive's Open Library

    A commercial court in Brussels, Belgium has issued an injunction restricting access to shadow libraries, affecting Anna's Archive, Libgen, OceanofPDF, Z-Library, and the Internet Archive's Open Library. Beyond ISPs, search engines, DNS resolvers, advertisers, domain registrars, content delivery networks (CDNs), and hosting providers must all take action to restrict access to these sites. Open Library, founded by the late Aaron Swartz, Internet Archive founder Brewster Kahle, and others, aims to archive every published book and lets readers borrow them online. Like other e-libraries, it lends out only one copy of each book at a time. The difference is that its e-books are not licensed: it creates the digital editions by scanning the books itself.

  3. Google changes its plan to shut down goo.gl short links

    The search giant announced last year that it would shut down the Google URL Shortener service (goo.gl/*) on August 25, 2025, at which point all goo.gl links would stop responding. With less than a month to go, after developers, educators, journalists, and others who depend on goo.gl links voiced concerns, Google changed its mind and adopted a softer stance: it will disable only goo.gl links that have seen no activity since late 2024, while links that are being actively used or clicked will continue to work.

  4. 17-year-old Hannah Cairo solves a 40-year-old math conjecture

    In February 2025, Hannah Cairo posted a paper on the preprint server arXiv resolving the 40-year-old Mizohata-Takeuchi conjecture. She was only 17 and largely self-taught, and the result stunned the mathematical community. Cairo proved the conjecture false. She grew up in Nassau in the Bahamas, where her father, a programmer, had taken a job, prompting the family to move there. She has a brother three years older and a brother eight years younger, and in the Bahamas the children were all homeschooled. Cairo learned math through Khan Academy's online courses and had finished calculus by age 11. Her parents arranged remote tutoring from several math professors, but she remained largely self-taught, so much so that one of them, Amir Aazami of Clark University, felt guilty accepting payment. By 14 she had completed upper-level undergraduate math courses. In 2021 the COVID-19 pandemic stranded the family at her grandparents' home in Chicago, which turned out to be a blessing: she began expanding her mathematical circle and meeting more and more peers. In 2023 she applied to many universities, but most rejected her because she had not finished high school. She followed her brother to UC Berkeley and took advanced math courses, among them a graduate course on Fourier restriction theory taught by Ruixiang Zhang. A few weeks in, Zhang assigned a simplified version of the Mizohata-Takeuchi conjecture as homework, mainly to encourage students to explore advanced techniques in the field. She solved the exercise and, with Zhang's encouragement, kept exploring, eventually constructing a function that disproved the Mizohata-Takeuchi conjecture. After completing the proof, she decided to skip undergraduate study and go straight to a math PhD. Having never finished college, she was again rejected by most of the universities she applied to; only the University of Maryland and Johns Hopkins offered admission. She chose Maryland, starting this fall; when she finishes, it will be her first degree.

  5. American politicians are older than those of any other country

    American politicians are older than their counterparts anywhere else in the world. According to a study published in the Journal of Public Economics, researchers at Stanford and UC Berkeley argue that one major reason lies in the age of the people who donate to political campaigns. By linking US campaign contributions to voter-registration records, the researchers found that the median age of donating voters is 66. Older donors are considerably more conservative than younger ones, are more likely to give to candidates close to their own age, and give larger amounts.

  6. Man's cross tattoo mysteriously vanishes, then his skin begins to necrotize

    The journal JAMA Otolaryngology–Head & Neck Surgery reported an unusual case: the cross tattoo on the neck of a 20-year-old Chinese man mysteriously disappeared after five months, after which the skin developed severe necrotic ulceration and inflammation. The case was so peculiar that the physicians said it expands the scope of tattoo-related pathology. They found no sign of infection; the red pigment used in the tattoo had vanished from the skin, leaving scarring only where the ulcer did not reach. When the body rejects a tattoo, the abnormal immune response usually stays in the upper layers of the skin and almost never causes tissue necrosis. This patient's lesion, however, was deep: a clearly invasive, crusted, hemorrhagic necrotic ulcer. The doctors also found swelling on both sides of the neck lesion; MRI revealed a large mass, and a CT scan showed thrombosis in the internal jugular veins on both sides of the neck. Surgeons excised the ulcer and the mass and closed off the clotted veins. The physicians proposed several possible causes: an abnormal immune response to the tattoo, or chronic inflammation originating from the tattoo that eroded the vein walls and caused cell death.

  7. Survey finds trust in AI coding tools falling even as usage rises

    A Stack Overflow survey of 49,000 programmers found that in 2025, 80% of developers use AI coding assistants in their workflow, yet developer trust in their accuracy has fallen from around 40% in previous years to 29% this year. 45% of respondents said their biggest frustration with AI coding tools is solutions that are "almost right, but not quite": compared with obviously wrong answers, almost-correct ones can introduce hidden bugs or other hard-to-spot problems that take time to resolve. More than a third of developers said they now visit Stack Overflow in part to look up AI-related problems. The problems with large language models cannot be fully solved, since they stem from how the models work. Reasons developers keep using them include managers requiring it, and the fact that the tools remain useful as long as they are not misused.

  8. WMO confirms the longest lightning flash on record

    The World Meteorological Organization (WMO) has confirmed a new world record for the longest lightning flash: a distance of 829 km. The megaflash occurred in October 2017, stretching from eastern Texas to near Kansas City — roughly the distance between Paris and Venice in Europe. Covering that distance would take about 8 to 9 hours by car, or at least 90 minutes on a commercial flight. The new record carries a margin of error of ±8 km. It is 61 km longer than the previous record of 768 ± 8 km (477.2 ± 5 mi), set on April 29, 2020 across parts of the southern United States.

  9. Australia's first civilian rocket crashes 14 seconds after liftoff

    Australia's first domestically built civilian rocket, Eris, crashed 14 seconds after liftoff, but its maker, Gilmour Space Technologies, called the launch a success: all engines ignited and the rocket cleared the launch pad. CEO Adam Gilmour said he was satisfied with the outcome, having previously noted that reaching orbit on a first launch is almost unheard of. The Eris rocket targets the small-satellite launch market. Standing 23 meters tall, it lifted off Wednesday from a launch site near the Queensland town of Bowen, climbed above the launch tower, hovered briefly, then fell back, caught fire, and exploded, after a flight of 14 seconds. The Australian government contributed AU$5 million to the project.

  10. Google to use AI to estimate the age of US users

    Google announced it will use AI to estimate whether US users are 18 or older. Age estimation will roll out over the coming weeks, initially affecting only a small number of users, with plans to expand later. Google says it will judge a user's age from signals such as what they have searched for or the types of YouTube videos they have watched. If Google determines a user is under 18, it will apply the same restrictions it imposes on minor accounts.

  11. Apple's quarterly revenue in China reaches $15.37 billion

    Apple reported its strongest quarterly revenue growth since 2021, with iPhone sales up 13% and total revenue up 10%. CEO Tim Cook said roughly one percentage point of the revenue growth could be attributed to consumers buying more products ahead of potential tariffs. The iPhone remains Apple's most important product, with sales up 13% year over year to $44.58 billion. Apple's sales in the Chinese market grew 4% year over year to $15.37 billion; Cook said a major factor was China's consumer subsidy program, which has been very helpful for the company's products.

  12. Battlefield 6's community level editor uses the open-source Godot engine

    EA has ambitious hopes that Battlefield 6 will attract tens of millions of players and run as a long-term live service. The game will offer a free battle-royale mode as well as a level editor called Battlefield Portal, which lets players create custom maps and game modes and is built on the open-source Godot game engine. Content players create in Godot is converted through a translation layer into Frostbite 4, the proprietary engine Battlefield 6 runs on. The Blender Foundation previously built a small game using the Godot engine and the Blender 3D software; it is not yet clear whether EA or DICE will donate to the Godot project.

  13. Brazil releases lab-grown mosquitoes to curb dengue outbreaks

    To stop mosquitoes from spreading the dengue virus, Brazil will release millions of lab-grown mosquitoes carrying Wolbachia bacteria, which prevent the insects from harboring the virus. The program aims to protect 140 million residents across 40 cities over the next decade. Brazil previously trialed releases of Wolbachia-carrying mosquitoes in the city of Niteroi with striking results: dengue cases fell by about 90%. Nearly all mosquitoes in the city now carry Wolbachia, and cases of chikungunya and Zika have dropped by more than 96% and 99% respectively. Wolbachia occurs naturally in roughly half of all insect species; it blocks the dengue virus from replicating inside the mosquito, effectively curbing its transmission.

  14. Nvidia announces timeline for ending driver support for older GPU architectures

    Nvidia announced that starting in October 2025, new Game Ready driver updates will no longer support the Maxwell, Pascal, or Volta GPU architectures. That means older cards such as the GeForce GTX 1060 will stop receiving driver releases optimized for new games. Nvidia also said it will end all Windows 10 driver support in October 2026, a year after Microsoft's official Windows 10 end-of-support date; after that, Windows 10 users who want new drivers for newer cards will need to upgrade to Windows 11. Nvidia says it will continue releasing quarterly security updates for Maxwell, Pascal, and Volta cards through October 2028.