WEEK · 2025-W33

Weekly Digest — 2025-W33

185 unique stories (2025-08-11 – 2025-08-17), aggregated across 8 sources.

Hacker News (42)

  1. Token growth indicates future AI spend per dev (blog.kilocode.ai)
  2. Claude is the drug, Cursor is the dealer (middlelayer.substack.com)
  3. Wikipedia loses challenge against Online Safety Act (www.bbc.com)
  4. GitHub is no longer independent at Microsoft after CEO resignation (www.theverge.com)
  5. Auf Wiedersehen, GitHub (github.blog)
  6. Meta Leaks Part 1: Israel and Meta (archive.org)
  7. Debian GNU/Hurd 2025 released (lists.debian.org)
  8. Multimodal WFH setup: flight sim, EE lab, and music studio in 60 sq ft / 5.5 m² (www.sdo.group)
  9. Show HN: Omnara – Run Claude Code from anywhere (github.com)
  10. Show HN: Building a web search engine from scratch with 3B neural embeddings (blog.wilsonl.in)
  11. Claude Sonnet 4 now supports 1M tokens of context (www.anthropic.com)
  12. Perplexity Makes Longshot $34.5B Offer for Chrome (www.wsj.com)

GitHub Trending (32)

  1. nomic-ai / gpt4all

    GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.

  2. tadata-org / fastapi_mcp

    Expose your FastAPI endpoints as Model Context Protocol (MCP) tools, with Auth!

  3. trailofbits / buttercup
  4. patchy631 / ai-engineering-hub

    In-depth tutorials on LLMs, RAGs and real-world AI agent applications.

  5. openai / codex

    Lightweight coding agent that runs in your terminal

  6. menloresearch / jan

    Jan is an open source alternative to ChatGPT that runs 100% offline on your computer

  7. ubicloud / ubicloud

    Open source alternative to AWS. Elastic compute, block storage (non replicated), firewall and load balancer, managed Postgres, K8s, AI inference, and IAM services.

  8. microsoft / poml

    Prompt Orchestration Markup Language

  9. denizsafak / abogen

    Generate audiobooks from EPUBs, PDFs and text with synchronized captions.

  10. umami-software / umami

    Umami is a modern, privacy-focused alternative to Google Analytics.

  11. unslothai / notebooks

    100+ Fine-tuning LLM Notebooks on Google Colab, Kaggle, and more.

  12. apple / embedding-atlas

    Embedding Atlas is a tool that provides interactive visualizations for large embeddings. It allows you to visualize, cross-filter, and search embeddings and metadata.

Product Hunt (42)

  1. nFactorial AI

    Video calls with world's best minds as your personal tutors

  2. My Juno Health: AI Doctor

    Smarter Health. Sharper Mind. Reach Your Peak Productivity

  3. Hyprnote

    AI Notepad for Private Meetings — fully on your device

  4. Dad Reply

    Auto-respond with a 👍 - Minimal effort - Maximum ambiguity

  5. SuperCraft

    Figma for designing physical products

  6. Weave

    The coolest, bestest, newest AI product for engineers.

  7. Airbook AI

    Cursor for Analytics

  8. Recall

    Chat with everything you’ve read, heard, watched, or noted

  9. Finden

    AI workspace to unify, automate, and run your business

  10. v0.app by Vercel

    The AI builder for everyone

  11. Sellinger AI

    Autonomous AI LinkedIn Outreach

  12. PopHop

    Turn your audience into an active economy

Hugging Face (31)

  1. GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models

    We present GLM-4.5, an open-source Mixture-of-Experts (MoE) large language model with 355B total parameters and 32B activated parameters, featuring a hybrid reasoning method that supports both thinking and direct response modes. Through multi-stage training on 23T tokens and comprehensive post-training with expert model iteration and reinforcement learning, GLM-4.5 achieves strong performance across agentic, reasoning, and coding (ARC) tasks, scoring 70.1% on TAU-Bench, 91.0% on AIME 24, and 64.2% on SWE-bench Verified. With far fewer parameters than several competitors, GLM-4.5 ranks 3rd overall among all evaluated models and 2nd on agentic benchmarks. We release both GLM-4.5 (355B parameters) and a compact version, GLM-4.5-Air (106B parameters), to advance research in reasoning and agentic AI systems. Code, models, and more information are available at https://github.com/zai-org/GLM-4.5.

  2. Voost: A Unified and Scalable Diffusion Transformer for Bidirectional Virtual Try-On and Try-Off

    Virtual try-on aims to synthesize a realistic image of a person wearing a target garment, but accurately modeling garment-body correspondence remains a persistent challenge, especially under pose and appearance variation. In this paper, we propose Voost, a unified and scalable framework that jointly learns virtual try-on and try-off with a single diffusion transformer. By modeling both tasks jointly, Voost enables each garment-person pair to supervise both directions and supports flexible conditioning over generation direction and garment category, enhancing garment-body relational reasoning without task-specific networks, auxiliary losses, or additional labels. In addition, we introduce two inference-time techniques: attention temperature scaling for robustness to resolution or mask variation, and self-corrective sampling that leverages bidirectional consistency between tasks. Extensive experiments demonstrate that Voost achieves state-of-the-art results on both try-on and try-off benchmarks, consistently outperforming strong baselines in alignment accuracy, visual fidelity, and generalization.
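
    The "attention temperature scaling" mentioned above can be illustrated generically: dividing attention logits by a temperature flattens (τ > 1) or sharpens (τ < 1) the attention distribution. This is a minimal NumPy sketch of the general technique, not Voost's actual implementation; all names and shapes here are my own.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v, temperature=1.0):
    """Scaled dot-product attention with an extra temperature knob.

    temperature > 1 flattens the attention weights (more tolerant of
    resolution or mask shifts); temperature < 1 sharpens them.
    """
    d = q.shape[-1]
    scores = q @ k.T / (temperature * np.sqrt(d))
    return softmax(scores) @ v

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))   # 4 query tokens, dim 8
k = rng.normal(size=(6, 8))   # 6 key tokens
v = rng.normal(size=(6, 8))
out = attention(q, k, v, temperature=1.5)
```

A higher temperature spreads each query's weight over more keys, which is the intuition behind using it for robustness when the token grid changes at inference time.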

  3. InfiGUI-G1: Advancing GUI Grounding with Adaptive Exploration Policy Optimization

    The emergence of Multimodal Large Language Models (MLLMs) has propelled the development of autonomous agents that operate on Graphical User Interfaces (GUIs) using pure visual input. A fundamental challenge is robustly grounding natural language instructions. This requires a precise spatial alignment, which accurately locates the coordinates of each element, and, more critically, a correct semantic alignment, which matches the instructions to the functionally appropriate UI element. Although Reinforcement Learning with Verifiable Rewards (RLVR) has proven to be effective at improving spatial alignment for these MLLMs, we find that inefficient exploration bottlenecks semantic alignment, which prevents models from learning difficult semantic associations. To address this exploration problem, we present Adaptive Exploration Policy Optimization (AEPO), a new policy optimization framework. AEPO employs a multi-answer generation strategy to enforce broader exploration, which is then guided by a theoretically grounded Adaptive Exploration Reward (AER) function derived from first principles of efficiency (η = U/C). Our AEPO-trained models, InfiGUI-G1-3B and InfiGUI-G1-7B, establish new state-of-the-art results across multiple challenging GUI grounding benchmarks, achieving significant relative improvements of up to 9.0% against the naive RLVR baseline on benchmarks designed to test generalization and semantic understanding. Resources are available at https://github.com/InfiXAI/InfiGUI-G1.

  4. Memp: Exploring Agent Procedural Memory

    Agents based on Large Language Models (LLMs) excel at diverse tasks, yet they suffer from brittle procedural memory that is manually engineered or entangled in static parameters. In this work, we investigate strategies to endow agents with a learnable, updatable, and lifelong procedural memory. We propose Memp, which distills past agent trajectories into both fine-grained, step-by-step instructions and higher-level, script-like abstractions, and explore the impact of different strategies for Build, Retrieval, and Update of procedural memory. Coupled with a dynamic regimen that continuously updates, corrects, and deprecates its contents, this repository evolves in lockstep with new experience. Empirical evaluation on TravelPlanner and ALFWorld shows that as the memory repository is refined, agents achieve steadily higher success rates and greater efficiency on analogous tasks. Moreover, procedural memory built from a stronger model retains its value: migrating the procedural memory to a weaker model yields substantial performance gains.
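
    The Build/Retrieve/Update loop described above can be sketched as a tiny in-memory store. This is a toy illustration in the spirit of Memp, not the paper's system: the exact-match retrieval, the failure threshold, and all names here are my own assumptions (a real system would retrieve by embedding similarity).

```python
from dataclasses import dataclass

@dataclass
class Procedure:
    steps: list      # fine-grained step-by-step instructions
    script: str      # higher-level script-like abstraction
    successes: int = 0
    failures: int = 0

class ProceduralMemory:
    """Toy Build / Retrieve / Update cycle for agent procedural memory."""

    def __init__(self):
        self.store = {}  # task signature -> Procedure

    def build(self, task, trajectory, script):
        # Distill a past trajectory into a reusable procedure.
        self.store[task] = Procedure(steps=list(trajectory), script=script)

    def retrieve(self, task):
        # Exact-match lookup; real systems use similarity search.
        return self.store.get(task)

    def update(self, task, success):
        # Reinforce useful procedures; deprecate ones that keep failing.
        proc = self.store[task]
        if success:
            proc.successes += 1
        else:
            proc.failures += 1
            if proc.failures > proc.successes + 2:
                del self.store[task]

mem = ProceduralMemory()
mem.build("book_flight", ["open site", "search", "pay"], "search then pay")
```

The deprecation rule in `update` mirrors the abstract's "continuously updates, corrects, and deprecates its contents", with an arbitrary threshold chosen for illustration.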

  5. Pruning the Unsurprising: Efficient Code Reasoning via First-Token Surprisal

    Recently, Large Reasoning Models (LRMs) have demonstrated remarkable capabilities in code reasoning by scaling up the length of Chain-of-Thought (CoT). However, excessively long reasoning traces introduce substantial challenges in terms of training cost, inference latency, and deployment feasibility. While various CoT compression approaches have emerged to address this challenge, they face inherent trade-offs: token-level methods often disrupt syntactic and logical coherence, while step-level methods based on perplexity fail to reliably capture the logically critical reasoning steps. In this paper, we propose ASAP (Anchor-guided, Surprisal-based Pruning), a novel coarse-to-fine framework for CoT compression. ASAP first performs anchor-guided pruning to preserve the core reasoning structure, which efficiently reduces the search space for subsequent processing. It then enables a logic-aware pruning by selecting logically essential reasoning steps based on a novel first-token surprisal metric. Finally, ASAP teaches models to autonomously generate and leverage these concise CoTs at inference time, enabling efficient reasoning in coding tasks. Experiments show that ASAP achieves state-of-the-art accuracy across multiple code generation benchmarks while substantially reducing training and inference costs. On the challenging LiveCodeBench v4_v5 benchmark, our approach reduces token generation by 23.5% and inference latency by 43.5% compared to the strongest baseline, while achieving a competitive accuracy of 36.19% in Pass@1. Our results highlight a promising direction for building powerful and efficient LRMs.
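
    The core signal ASAP prunes on, first-token surprisal, is just the negative log-probability of a reasoning step's first token given the preceding context. This sketch shows the metric itself; the probabilities, step names, and threshold below are invented for illustration and are not from the paper.

```python
import math

def first_token_surprisal(p_first_token):
    """Surprisal of a reasoning step's first token: -log2 p(token | context).

    High surprisal means the step adds information the model did not
    anticipate, so it is kept; low-surprisal steps are pruning candidates.
    """
    return -math.log2(p_first_token)

# Hypothetical first-token probabilities for four reasoning steps
steps = {
    "restate problem": 0.9,   # predictable -> low surprisal
    "key insight": 0.05,      # surprising  -> high surprisal
    "boilerplate": 0.8,
    "edge case": 0.1,
}
keep = [s for s, p in steps.items() if first_token_surprisal(p) > 1.0]
```

With these toy numbers, the predictable "restate problem" and "boilerplate" steps fall below the (illustrative) 1-bit threshold and would be pruned, while the informative steps survive.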

  6. GENIE: Gaussian Encoding for Neural Radiance Fields Interactive Editing

    Neural Radiance Fields (NeRF) and Gaussian Splatting (GS) have recently transformed 3D scene representation and rendering. NeRF achieves high-fidelity novel view synthesis by learning volumetric representations through neural networks, but its implicit encoding makes editing and physical interaction challenging. In contrast, GS represents scenes as explicit collections of Gaussian primitives, enabling real-time rendering, faster training, and more intuitive manipulation. This explicit structure has made GS particularly well-suited for interactive editing and integration with physics-based simulation. In this paper, we introduce GENIE (Gaussian Encoding for Neural Radiance Fields Interactive Editing), a hybrid model that combines the photorealistic rendering quality of NeRF with the editable and structured representation of GS. Instead of using spherical harmonics for appearance modeling, we assign each Gaussian a trainable feature embedding. These embeddings are used to condition a NeRF network based on the k nearest Gaussians to each query point. To make this conditioning efficient, we introduce Ray-Traced Gaussian Proximity Search (RT-GPS), a fast nearest Gaussian search based on a modified ray-tracing pipeline. We also integrate a multi-resolution hash grid to initialize and update Gaussian features. Together, these components enable real-time, locality-aware editing: as Gaussian primitives are repositioned or modified, their interpolated influence is immediately reflected in the rendered output. By combining the strengths of implicit and explicit representations, GENIE supports intuitive scene manipulation, dynamic interaction, and compatibility with physical simulation, bridging the gap between geometry-based editing and neural rendering. The code can be found at https://github.com/MikolajZielinski/genie.
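
    The "condition on the k nearest Gaussians" idea can be sketched as inverse-distance feature interpolation. This brute-force NumPy stand-in replaces the paper's ray-traced proximity search (RT-GPS) with a plain sort, and all shapes, names, and the weighting scheme are my own assumptions.

```python
import numpy as np

def knn_gaussian_features(query, centers, features, k=3, eps=1e-8):
    """Interpolate the feature embeddings of the k Gaussians nearest to a
    query point, weighted by inverse distance. A moved Gaussian immediately
    changes the interpolated feature, which is the locality-aware-editing
    property GENIE relies on."""
    dist = np.linalg.norm(centers - query, axis=1)
    idx = np.argsort(dist)[:k]            # brute-force kNN
    w = 1.0 / (dist[idx] + eps)           # inverse-distance weights
    w = w / w.sum()
    return w @ features[idx]              # blended conditioning feature

rng = np.random.default_rng(1)
centers = rng.uniform(-1, 1, size=(100, 3))  # Gaussian means
features = rng.normal(size=(100, 16))        # trainable embedding per Gaussian
feat = knn_gaussian_features(np.zeros(3), centers, features, k=4)
```

A query that coincides with a Gaussian center gets essentially that Gaussian's feature, since its inverse-distance weight dominates.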

  7. ReasonRank: Empowering Passage Ranking with Strong Reasoning Ability

    Large Language Model (LLM) based listwise ranking has shown superior performance in many passage ranking tasks. With the development of Large Reasoning Models, many studies have demonstrated that step-by-step reasoning during test-time helps improve listwise ranking performance. However, due to the scarcity of reasoning-intensive training data, existing rerankers perform poorly in many complex ranking scenarios and the ranking ability of reasoning-intensive rerankers remains largely underdeveloped. In this paper, we first propose an automated reasoning-intensive training data synthesis framework, which sources training queries and passages from diverse domains and applies DeepSeek-R1 to generate high-quality training labels. A self-consistency data filtering mechanism is designed to ensure the data quality. To empower the listwise reranker with strong reasoning ability, we further propose a two-stage post-training approach, which includes a cold-start supervised fine-tuning (SFT) stage for reasoning pattern learning and a reinforcement learning (RL) stage for further ranking ability enhancement. During the RL stage, based on the nature of listwise ranking, we design a multi-view ranking reward, which is more effective than a ranking metric-based reward. Extensive experiments demonstrate that our trained reasoning-intensive reranker ReasonRank outperforms existing baselines significantly and also achieves much lower latency than pointwise reranker Rank1. Through further experiments, our ReasonRank has achieved state-of-the-art (SOTA) performance of 40.6 on the BRIGHT leaderboard (https://brightbenchmark.github.io/). Our code is available at https://github.com/8421BCD/ReasonRank.

  8. WideSearch: Benchmarking Agentic Broad Info-Seeking

    From professional research to everyday planning, many tasks are bottlenecked by wide-scale information seeking, which is more repetitive than cognitively complex. With the rapid development of Large Language Models (LLMs), automated search agents powered by LLMs offer a promising solution to liberate humans from this tedious work. However, the capability of these agents to perform such "wide-context" collection reliably and completely remains largely unevaluated due to a lack of suitable benchmarks. To bridge this gap, we introduce WideSearch, a new benchmark engineered to evaluate agent reliability on these large-scale collection tasks. The benchmark features 200 manually curated questions (100 in English, 100 in Chinese) from over 15 diverse domains, grounded in real user queries. Each task requires agents to collect large-scale atomic information, which could be verified one by one objectively, and arrange it into a well-organized output. A rigorous five-stage quality control pipeline ensures the difficulty, completeness, and verifiability of the dataset. We benchmark over 10 state-of-the-art agentic search systems, including single-agent, multi-agent frameworks, and end-to-end commercial systems. Most systems achieve overall success rates near 0%, with the best performer reaching just 5%. However, given sufficient time, cross-validation by multiple human testers can achieve a near 100% success rate. These results demonstrate that present search agents have critical deficiencies in large-scale information seeking, underscoring urgent areas for future research and development in agentic search. Our dataset, evaluation pipeline, and benchmark results have been publicly released at https://widesearch-seed.github.io/

  9. Omni-Effects: Unified and Spatially-Controllable Visual Effects Generation

    Visual effects (VFX) are essential visual enhancements fundamental to modern cinematic production. Although video generation models offer cost-efficient solutions for VFX production, current methods are constrained by per-effect LoRA training, which limits generation to single effects. This fundamental limitation impedes applications that require spatially controllable composite effects, i.e., the concurrent generation of multiple effects at designated locations. However, integrating diverse effects into a unified framework faces major challenges: interference from effect variations and spatial uncontrollability during multi-VFX joint training. To tackle these challenges, we propose Omni-Effects, the first unified framework capable of generating prompt-guided effects and spatially controllable composite effects. The core of our framework comprises two key innovations: (1) LoRA-based Mixture of Experts (LoRA-MoE), which employs a group of expert LoRAs, integrating diverse effects within a unified model while effectively mitigating cross-task interference. (2) Spatial-Aware Prompt (SAP) incorporates spatial mask information into the text token, enabling precise spatial control. Furthermore, we introduce an Independent-Information Flow (IIF) module integrated within the SAP, isolating the control signals corresponding to individual effects to prevent any unwanted blending. To facilitate this research, we construct a comprehensive VFX dataset Omni-VFX via a novel data collection pipeline combining image editing and First-Last Frame-to-Video (FLF2V) synthesis, and introduce a dedicated VFX evaluation framework for validating model performance. Extensive experiments demonstrate that Omni-Effects achieves precise spatial control and diverse effect generation, enabling users to specify both the category and location of desired effects.

  10. A Comprehensive Survey of Self-Evolving AI Agents: A New Paradigm Bridging Foundation Models and Lifelong Agentic Systems

    Recent advances in large language models have sparked growing interest in AI agents capable of solving complex, real-world tasks. However, most existing agent systems rely on manually crafted configurations that remain static after deployment, limiting their ability to adapt to dynamic and evolving environments. To this end, recent research has explored agent evolution techniques that aim to automatically enhance agent systems based on interaction data and environmental feedback. This emerging direction lays the foundation for self-evolving AI agents, which bridge the static capabilities of foundation models with the continuous adaptability required by lifelong agentic systems. In this survey, we provide a comprehensive review of existing techniques for self-evolving agentic systems. Specifically, we first introduce a unified conceptual framework that abstracts the feedback loop underlying the design of self-evolving agentic systems. The framework highlights four key components: System Inputs, Agent System, Environment, and Optimisers, serving as a foundation for understanding and comparing different strategies. Based on this framework, we systematically review a wide range of self-evolving techniques that target different components of the agent system. We also investigate domain-specific evolution strategies developed for specialised fields such as biomedicine, programming, and finance, where optimisation objectives are tightly coupled with domain constraints. In addition, we provide a dedicated discussion on the evaluation, safety, and ethical considerations for self-evolving agentic systems, which are critical to ensuring their effectiveness and reliability. This survey aims to provide researchers and practitioners with a systematic understanding of self-evolving AI agents, laying the foundation for the development of more adaptive, autonomous, and lifelong agentic systems.

  11. Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization

    We present Klear-Reasoner, a model with long reasoning capabilities that demonstrates careful deliberation during problem solving, achieving outstanding performance across multiple benchmarks. Although the community already has many excellent works on reasoning models, reproducing high-performance reasoning models remains difficult because training details are often incompletely disclosed. This report provides an in-depth analysis of the reasoning model, covering the entire post-training workflow from data preparation and long Chain-of-Thought supervised fine-tuning (long CoT SFT) to reinforcement learning (RL), along with detailed ablation studies for each experimental component. For SFT data, our experiments show that a small number of high-quality data sources are more effective than a large number of diverse data sources, and that difficult samples can achieve better results without accuracy filtering. In addition, we investigate two key issues with current clipping mechanisms in RL: clipping suppresses critical exploration signals and ignores suboptimal trajectories. To address these challenges, we propose Gradient-Preserving clipping Policy Optimization (GPPO), which gently backpropagates gradients from clipped tokens. GPPO not only enhances the model's exploration capacity but also improves its efficiency in learning from negative samples. Klear-Reasoner exhibits exceptional reasoning abilities in mathematics and programming, scoring 90.5% on AIME 2024, 83.2% on AIME 2025, 66.0% on LiveCodeBench V5 and 58.1% on LiveCodeBench V6.
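
    The problem GPPO targets can be seen in the gradient of the standard PPO clipped surrogate, which is exactly zero for clipped tokens. Below is a scalar sketch: `ppo_clip_grad` is the textbook PPO clip gradient, while `gppo_grad` illustrates the gradient-preserving idea by passing a scaled gradient through clipped tokens. The scaling knob `beta` is my own simplification, not the paper's exact formulation.

```python
def ppo_clip_grad(ratio, advantage, eps=0.2):
    """Derivative w.r.t. the probability ratio of the PPO clipped surrogate
    min(ratio * A, clip(ratio, 1-eps, 1+eps) * A). When the clip is active,
    the surrogate is constant in ratio, so the gradient is exactly zero."""
    if advantage > 0 and ratio > 1 + eps:
        return 0.0
    if advantage < 0 and ratio < 1 - eps:
        return 0.0
    return advantage

def gppo_grad(ratio, advantage, eps=0.2, beta=0.3):
    """Gradient-preserving variant: instead of zeroing clipped tokens,
    gently backpropagate a down-scaled gradient so exploration signals
    and negative samples still update the policy."""
    g = ppo_clip_grad(ratio, advantage, eps)
    return beta * advantage if g == 0.0 else g
```

With `ratio = 1.5` and a positive advantage, vanilla clipping contributes no gradient at all, while the gradient-preserving variant still nudges the policy, which is the mechanism the abstract credits for better exploration and learning from negative samples.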

  12. BrowseComp-Plus: A More Fair and Transparent Evaluation Benchmark of Deep-Research Agent

    Deep-Research agents, which integrate large language models (LLMs) with search tools, have shown success in improving the effectiveness of handling complex queries that require iterative search planning and reasoning over search results. Evaluations on current benchmarks like BrowseComp rely on black-box live web search APIs and have notable limitations in (1) fairness: dynamic and opaque web APIs hinder fair comparisons and reproducibility of deep research methods; (2) transparency: lack of control over the document corpus makes it difficult to isolate retriever contributions. In other words, the current evaluations may compare a complete deep research system at a given time, but they do not foster well-controlled experiments to provide insights into the capability of underlying deep research LLMs. To address these challenges, we introduce BrowseComp-Plus, a benchmark derived from BrowseComp, employing a fixed, carefully curated corpus. Each query in BrowseComp-Plus includes human-verified supporting documents and mined challenging negatives, enabling controlled experimentation. The benchmark is shown to be effective in distinguishing the performance of deep research systems. For instance, the open-source model Search-R1, when paired with the BM25 retriever, achieves 3.86% accuracy, whereas GPT-5 achieves 55.9%. Integrating GPT-5 with the Qwen3-Embedding-8B retriever further enhances its accuracy to 70.1% with fewer search calls. This benchmark allows comprehensive evaluation and disentangled analysis of deep research agents and retrieval methods, fostering insights into retrieval effectiveness, citation accuracy, and context engineering in Deep-Research systems.

Solidot (38)

  1. Nvidia and AMD agree to pay 15% of their China revenue to the US government

    As part of obtaining licenses to export chips to China, Nvidia and AMD have agreed to hand over 15% of their China revenue to the US government. Nvidia said it has always followed the rules the US government sets for participating in global markets; it has not shipped H20 chips to China for several months and hopes the export-control rules will let American companies compete in China. AMD has not commented. Under the export-license agreement, Nvidia will remit to the US government 15% of revenue from H20 chip sales in China, and AMD 15% of its MI308 chip revenue.

  2. Yomiuri Shimbun sues Perplexity for copyright infringement

    Japan's Yomiuri Shimbun Group has sued Perplexity, the US startup offering generative-AI search, in the Tokyo District Court. The suit alleges that Perplexity's AI search used its articles without authorization, infringing copyright, and seeks roughly ¥2.168 billion in damages. It is the first lawsuit by a Japanese media company over AI search. According to the complaint, between February and June 2025 Perplexity fetched and copied information from 119,467 Yomiuri Online articles and produced and delivered to users content containing similar text and images. The complaint argues that Perplexity infringed the reproduction and public-transmission rights under copyright law and that "zero-click search," which keeps users from visiting the original site, harmed its business. The suit also seeks an injunction against further copying of articles.

  3. Human connectedness to nature down more than 60% over 220 years

    A study published in the journal Earth finds that human connectedness to nature has declined by more than 60% since 1800. Using data on urbanization and the loss of wildlife from communities, along with factors such as parents no longer passing nature-oriented habits to their children, the researchers tracked the disappearance of nature from human life over 220 years. Nature-related words gradually vanished from books between 1800 and 2020, with the decline peaking at 60.6% in 1990. Computer models predict that without far-reaching policy and social change, connectedness to nature will keep falling: as communities grow more urban and parents stop passing on nature-oriented values, the next generation will continue to lose its awareness of nature. The most effective interventions are exposing children to nature from an early age and large-scale greening of urban environments.

  4. GrapheneOS, a security-hardened community Android distribution

    Phones have become part of everyday life and store large amounts of sensitive information, so how do we keep them secure and trustworthy? Google's Android provides an open-source edition, AOSP; Android itself was not designed with security as its focus, but because the system is open source, community distributions such as GrapheneOS have hardened it. GrapheneOS grew out of the CopperheadOS project: after its two founders split over disagreements, one of them, Daniel Micay, created GrapheneOS as an independent project. It prioritizes strengthening security over supporting many devices, so its device support is very limited, covering only the Google Pixel 6 through Pixel 9; newer Pixel devices use new ARMv9 CPU cores that support security features such as hardware memory tagging, which GrapheneOS enables by default to protect the OS and compatible user-installed apps from attack. It does not preinstall the many out-of-the-box apps that stock Android ships and has no Google Play Store; instead it provides its own browser, camera app, and PDF reader, plus an app store carrying just 13 apps in total. The browser, Vanadium, is a Chromium fork with strict site isolation enabled. The project does not recommend Firefox, which it considers vulnerable to attack; one browser users can opt to install is IronFox, a hardened Firefox build.

  5. Debian 13 "trixie" released

    The Debian project announced the release of its latest stable version, Debian 13 "trixie", which will be supported until 2030. Major changes include GNOME 48, KDE Plasma 6.3, Xfce 4.20, Linux 6.12, GCC 14.2, Python 3.13, and systemd 257. Debian 13 adds 14,100 new packages and removes 8,840 obsolete ones, for a total of 69,830 packages, of which 44,326 were updated. riscv64 becomes an officially supported Debian architecture, while support for i386 is dropped.

  6. AI is pushing out junior developers

    Jonathan Kim spent nearly $20,000 on a coding bootcamp in 2023, hoping it would help him land a job as a programmer. After graduating he applied to more than 600 developer positions without receiving a single offer. He now works at his uncle's ice-cream shop while continuing to look for work. For more than a decade, coding bootcamps were a stepping stone for applicants from non-programming backgrounds into high-paying Silicon Valley developer jobs, but today bootcamps are obsolete, and AI has driven the final nail into the coffin. Data show that in Kim's 2023 Codesmith cohort, only 37% of students found full-time tech jobs within six months of graduating, far below the 83% of the second half of 2021. AI is very good at programming, and the result is a marked drop in entry-level programming positions. A Signalfire report published in May found that new-graduate hiring has fallen by half from pre-pandemic 2019 levels.

  7. Wozniak's fraud lawsuit against YouTube stalls

    In 2020, scammers used clips of Apple co-founder Steve Wozniak in YouTube videos to swindle viewers out of bitcoin. Wozniak's wife, Janet Wozniak, reported the videos repeatedly, but YouTube took no action; believing YouTube was abetting the fraud, the couple filed suit. Five years later, Wozniak said in an interview that the case has stalled because of a federal statute known as Section 230, a very broad law that limits the ability to bring any lawsuit against social media platforms. Wozniak said Section 230 means platforms bear no responsibility whatsoever for content posted on them. In response to the suit, José Castañeda of Google/YouTube's PR department issued a boilerplate statement with no substantive answer, saying the company takes abuse on its platform seriously and acts quickly when it finds violations.

  8. With its CEO resigning, GitHub will no longer operate independently within Microsoft

    GitHub CEO Thomas Dohmke announced he will leave at the end of the year, and Microsoft will not appoint a new CEO; GitHub's leadership team will instead report directly to the CoreAI division. Since Microsoft acquired GitHub for $7.5 billion in 2018, the code-hosting platform has operated independently within the company, but the latest personnel change marks a major shift in how GitHub is run. Microsoft's CoreAI division, led by former Meta executive Jay Parikh, focuses on building AI platforms and tools for Microsoft and its customers.

  9. High-severity WinRAR 0-day under active exploitation

    Over the past two weeks, two Russian cybercrime groups have been exploiting a high-severity WinRAR 0-day through phishing emails carrying malicious attachments. WinRAR is a widely used file-compression tool with as many as 500 million users. Security firm ESET first detected attacks against WinRAR on July 18, confirmed on July 24 that they exploited a WinRAR 0-day and notified the WinRAR developer the same day, and the flaw was patched six days later. The vulnerability abuses a Windows feature called Alternate Data Streams (ADS), which allows the same file path to have multiple representations. The exploit abuses this feature to trigger a previously unknown path-traversal flaw, causing WinRAR to plant malicious executables in attacker-chosen paths under %TEMP% and %LOCALAPPDATA%, locations Windows normally restricts access to because code can execute from them. The Russian groups exploiting the flaw include RomCom and Paper Werewolf.

  10. Young serum plus bone-marrow cells reverses skin aging

    From vampire legends to lab-grown tissue, reversing aging with young blood is no longer pure myth. A new study finds that in a laboratory setting, young blood can reverse skin aging by prompting bone-marrow cells to secrete proteins. Researchers at the German skincare company Beiersdorf AG found that young blood alone does not reverse aging; bone-marrow cells must be present. Without them, skin-aging markers showed no sign of improvement. Only when young serum was cultured together with bone-marrow cells did the serum trigger the cells to secrete protein factors that rejuvenate skin. The researchers identified 55 age-related proteins, 7 of which showed clear anti-aging effects in testing.

  11. Study finds vegetarians have a 12% lower cancer risk than meat eaters

    According to a study published in The American Journal of Clinical Nutrition, vegetarians have a 12% lower risk of cancer than meat eaters, and vegans a 24% lower risk. The study used data from the Adventist Health Study, begun in 2002-2007, covering 95,863 North American Seventh-day Adventists, 79,468 of whom were initially cancer-free. The Seventh-day Adventist Church, a Protestant denomination, promotes a vegetarian diet. The results show that compared with meat eaters, vegetarians had about a 12% lower overall cancer risk and about an 18% lower risk for medium-incidence cancers; vegetarians also had lower risks of breast, colorectal, prostate, stomach, and lymphoproliferative cancers.

  12. Quantum fluid shows vortices like Van Gogh's "The Starry Night" for the first time

    A research team from Osaka Metropolitan University in Japan and KAIST in South Korea has for the first time observed quantum Kelvin-Helmholtz instability (KHI) in a quantum fluid, discovering a new type of vortex, the eccentric fractional skyrmion (EFS), whose shape resembles the crescent moon in Van Gogh's "The Starry Night". The phenomenon was theoretically predicted decades ago but had never been directly observed in experiment. KHI is a key phenomenon in classical fluid dynamics: when two fluids moving at different speeds meet at a boundary, waves and vortices form. It can be seen in wind-driven ocean waves, rolling clouds, and even the swirling sky of "The Starry Night". The team asked whether a similar instability occurs in quantum fluids. To test the idea, they cooled a gas of lithium atoms to near absolute zero to prepare a multi-component Bose-Einstein condensate (a quantum superfluid) and set up two streams flowing at different speeds within it. At their interface, wave-like finger structures first appeared, resembling classical turbulence; then, governed by the rules of quantum mechanics and topology, the special vortices formed.