OrangeBot.AI Digest — 2025-08-11

63 headlines across 8 sources, aggregated for the day.

Hacker News (15)

  1. Token growth indicates future AI spend per dev (blog.kilocode.ai)
  2. Claude is the drug, Cursor is the dealer (middlelayer.substack.com)
  3. Wikipedia loses challenge against Online Safety Act (www.bbc.com)
  4. GitHub is no longer independent at Microsoft after CEO resignation (www.theverge.com)
  5. Auf Wiedersehen, GitHub (github.blog)
  6. Meta Leaks Part 1: Israel and Meta (archive.org)
  7. Claude Code is all you need (dwyer.co.za)
  8. I tried every todo app and ended up with a .txt file (www.al3rez.com)
  9. Trump Orders National Guard to Washington, D.C., and Takeover of City’s Police (www.nytimes.com)
  10. Wikimedia Foundation Challenges UK Online Safety Act Regulations (wikimediafoundation.org)
  11. Pricing Pages – A Curated Gallery of Pricing Page Designs (pricingpages.design)
  12. OpenSSH Post-Quantum Cryptography (www.openssh.com)
  13. Hand-picked selection of articles on AI fundamentals/concepts (aman.ai)
  14. GPT-OSS-120B runs on just 8GB VRAM & 64GB+ system RAM (old.reddit.com)
  15. Faster substring search with SIMD in Zig (aarol.dev)

GitHub Trending (14)

  1. nomic-ai / gpt4all

    GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.

  2. tadata-org / fastapi_mcp

    Expose your FastAPI endpoints as Model Context Protocol (MCP) tools, with Auth!

  3. trailofbits / buttercup
  4. patchy631 / ai-engineering-hub

    In-depth tutorials on LLMs, RAGs and real-world AI agent applications.

  5. openai / codex

    Lightweight coding agent that runs in your terminal

  6. menloresearch / jan

    Jan is an open source alternative to ChatGPT that runs 100% offline on your computer

  7. microsoft / generative-ai-for-beginners

    21 Lessons, Get Started Building with Generative AI 🔗 https://microsoft.github.io/generative-ai-for-beginners/

  8. midday-ai / midday

    Invoicing, Time tracking, File reconciliation, Storage, Financial Overview & your own Assistant made for Freelancers

  9. xiaoyaocz / dart_simple_live

    Watch live streams, simply.

  10. fastapi / full-stack-fastapi-template

    Full stack, modern web application template. Using FastAPI, React, SQLModel, PostgreSQL, Docker, GitHub Actions, automatic HTTPS and more.

  11. mendableai / firecrawl

    🔥 Turn entire websites into LLM-ready markdown or structured data. Scrape, crawl and extract with a single API.

  12. umami-software / umami

    Umami is a modern, privacy-focused alternative to Google Analytics.

  13. libsdl-org / SDL

    Simple DirectMedia Layer

  14. idosal / git-mcp

    Put an end to code hallucinations! GitMCP is a free, open-source, remote MCP server for any GitHub project

Product Hunt (11)

  1. nFactorial AI

    Video calls with the world's best minds as your personal tutors

  2. My Juno Health: AI Doctor

    Smarter Health. Sharper Mind. Reach Your Peak Productivity

  3. Hyprnote

    AI Notepad for Private Meetings — fully on your device

  4. Dad Reply

    Auto-respond with a 👍 - Minimal effort - Maximum ambiguity

  5. SuperCraft

    Figma for designing physical products

  6. Weave

    The coolest, bestest, newest AI product for engineers.

  7. Rid

    Sell with a Text

  8. Sixteen

    Your Focus is worth Fighting for

  9. Zpeakr

    Fluency starts when your fear shuts up. Zpeakr helps.

  10. Embedding Atlas

    Compute & interactively visualize large embeddings

  11. Convo - Chat Insights

    Conversation insights app

Hugging Face (12)

  1. GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models

    We present GLM-4.5, an open-source Mixture-of-Experts (MoE) large language model with 355B total parameters and 32B activated parameters, featuring a hybrid reasoning method that supports both thinking and direct response modes. Through multi-stage training on 23T tokens and comprehensive post-training with expert model iteration and reinforcement learning, GLM-4.5 achieves strong performance across agentic, reasoning, and coding (ARC) tasks, scoring 70.1% on TAU-Bench, 91.0% on AIME 24, and 64.2% on SWE-bench Verified. With much fewer parameters than several competitors, GLM-4.5 ranks 3rd overall among all evaluated models and 2nd on agentic benchmarks. We release both GLM-4.5 (355B parameters) and a compact version, GLM-4.5-Air (106B parameters), to advance research in reasoning and agentic AI systems. Code, models, and more information are available at https://github.com/zai-org/GLM-4.5.

  2. Voost: A Unified and Scalable Diffusion Transformer for Bidirectional Virtual Try-On and Try-Off

    Virtual try-on aims to synthesize a realistic image of a person wearing a target garment, but accurately modeling garment-body correspondence remains a persistent challenge, especially under pose and appearance variation. In this paper, we propose Voost - a unified and scalable framework that jointly learns virtual try-on and try-off with a single diffusion transformer. By modeling both tasks jointly, Voost enables each garment-person pair to supervise both directions and supports flexible conditioning over generation direction and garment category, enhancing garment-body relational reasoning without task-specific networks, auxiliary losses, or additional labels. In addition, we introduce two inference-time techniques: attention temperature scaling for robustness to resolution or mask variation, and self-corrective sampling that leverages bidirectional consistency between tasks. Extensive experiments demonstrate that Voost achieves state-of-the-art results on both try-on and try-off benchmarks, consistently outperforming strong baselines in alignment accuracy, visual fidelity, and generalization.

  3. InfiGUI-G1: Advancing GUI Grounding with Adaptive Exploration Policy Optimization

    The emergence of Multimodal Large Language Models (MLLMs) has propelled the development of autonomous agents that operate on Graphical User Interfaces (GUIs) using pure visual input. A fundamental challenge is robustly grounding natural language instructions. This requires a precise spatial alignment, which accurately locates the coordinates of each element, and, more critically, a correct semantic alignment, which matches the instructions to the functionally appropriate UI element. Although Reinforcement Learning with Verifiable Rewards (RLVR) has proven to be effective at improving spatial alignment for these MLLMs, we find that inefficient exploration bottlenecks semantic alignment, which prevent models from learning difficult semantic associations. To address this exploration problem, we present Adaptive Exploration Policy Optimization (AEPO), a new policy optimization framework. AEPO employs a multi-answer generation strategy to enforce broader exploration, which is then guided by a theoretically grounded Adaptive Exploration Reward (AER) function derived from first principles of efficiency eta=U/C. Our AEPO-trained models, InfiGUI-G1-3B and InfiGUI-G1-7B, establish new state-of-the-art results across multiple challenging GUI grounding benchmarks, achieving significant relative improvements of up to 9.0% against the naive RLVR baseline on benchmarks designed to test generalization and semantic understanding. Resources are available at https://github.com/InfiXAI/InfiGUI-G1.
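The efficiency principle eta = U/C behind AEPO's reward can be illustrated with a toy function. This is one plausible reading of the abstract, not the paper's actual AER formula: it treats utility U as whether any sampled answer hits the target element, and cost C as the number of answers generated.

```python
# Toy illustration of an efficiency-style reward eta = U / C for
# multi-answer GUI grounding, in the spirit of AEPO's AER. This is an
# assumed interpretation of the abstract, not the paper's exact formula.
def adaptive_exploration_reward(answers, target):
    """Reward a group of sampled answers: utility per unit generation cost."""
    utility = 1.0 if target in answers else 0.0  # U: did any answer hit?
    cost = len(answers)                          # C: answers generated
    return utility / cost if cost else 0.0

# Sampling more answers raises the hit chance but dilutes the reward,
# pushing the policy toward efficient rather than exhaustive exploration.
assert adaptive_exploration_reward(["btn_ok"], "btn_ok") == 1.0
assert adaptive_exploration_reward(["btn_a", "btn_ok"], "btn_ok") == 0.5
assert adaptive_exploration_reward(["btn_a", "btn_b"], "btn_ok") == 0.0
```

Under this reading, a single correct answer earns the full reward, while padding the group with extra guesses halves it, which matches the abstract's framing of exploration guided by efficiency.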

  4. Memp: Exploring Agent Procedural Memory

    Large Language Models (LLMs) based agents excel at diverse tasks, yet they suffer from brittle procedural memory that is manually engineered or entangled in static parameters. In this work, we investigate strategies to endow agents with a learnable, updatable, and lifelong procedural memory. We propose Memp that distills past agent trajectories into both fine-grained, step-by-step instructions and higher-level, script-like abstractions, and explore the impact of different strategies for Build, Retrieval, and Update of procedural memory. Coupled with a dynamic regimen that continuously updates, corrects, and deprecates its contents, this repository evolves in lockstep with new experience. Empirical evaluation on TravelPlanner and ALFWorld shows that as the memory repository is refined, agents achieve steadily higher success rates and greater efficiency on analogous tasks. Moreover, procedural memory built from a stronger model retains its value: migrating the procedural memory to a weaker model yields substantial performance gains.
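The Build / Retrieve / Update loop the abstract describes can be sketched minimally. This is an illustrative sketch, not the paper's implementation: the `ProceduralMemory` class, its word-overlap retrieval, and the deprecation rule are all assumptions.

```python
# Minimal sketch of a procedural memory store with Build / Retrieve / Update,
# loosely following the Memp abstract. All names and heuristics here are
# illustrative assumptions, not the paper's actual API.
from dataclasses import dataclass

@dataclass
class MemoryEntry:
    task: str        # task description the trajectory came from
    steps: list      # fine-grained step-by-step instructions
    script: str      # higher-level, script-like abstraction
    successes: int = 0
    failures: int = 0

class ProceduralMemory:
    def __init__(self):
        self.entries: list[MemoryEntry] = []

    def build(self, task, trajectory):
        """Distill a past trajectory into steps plus a one-line script."""
        script = " -> ".join(trajectory)
        self.entries.append(MemoryEntry(task, list(trajectory), script))

    def retrieve(self, query):
        """Return the entry whose task shares the most words with the query."""
        def overlap(e):
            return len(set(e.task.split()) & set(query.split()))
        return max(self.entries, key=overlap, default=None)

    def update(self, entry, succeeded):
        """Reinforce useful memories; deprecate ones that keep failing."""
        if succeeded:
            entry.successes += 1
        else:
            entry.failures += 1
            if entry.failures > 3 and entry.successes == 0:
                self.entries.remove(entry)

mem = ProceduralMemory()
mem.build("book a flight", ["open site", "search route", "pick fare", "pay"])
hit = mem.retrieve("book a cheap flight")
print(hit.script)  # open site -> search route -> pick fare -> pay
```

The key point the abstract makes, that the repository "evolves in lockstep with new experience," corresponds to the `update` step continuously correcting and deprecating entries as tasks succeed or fail.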

  5. Pruning the Unsurprising: Efficient Code Reasoning via First-Token Surprisal

    Recently, Large Reasoning Models (LRMs) have demonstrated remarkable capabilities in code reasoning by scaling up the length of Chain-of-Thought (CoT). However, excessively long reasoning traces introduce substantial challenges in terms of training cost, inference latency, and deployment feasibility. While various CoT compression approaches have emerged to address this challenge, they face inherent trade-offs: token-level methods often disrupt syntactic and logical coherence, while step-level methods based on perplexity fail to reliably capture the logically critical reasoning steps. In this paper, we propose ASAP (Anchor-guided, Surprisal-based Pruning), a novel coarse-to-fine framework for CoT compression. ASAP first performs anchor-guided pruning to preserve the core reasoning structure, which efficiently reduces the search space for subsequent processing. It then enables a logic-aware pruning by selecting logically essential reasoning steps based on a novel first-token surprisal metric. Finally, ASAP teaches models to autonomously generate and leverage these concise CoTs at inference time, enabling efficient reasoning in coding tasks. Experiments show that ASAP achieves state-of-the-art accuracy across multiple code generation benchmarks while substantially reducing training and inference costs. On the challenging LiveCodeBench v4_v5 benchmark, our approach reduces token generation by 23.5% and inference latency by 43.5% compared to the strongest baseline, while achieving a competitive accuracy of 36.19% in Pass@1. Our results highlight a promising direction for building powerful and efficient LRMs.
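The first-token surprisal idea can be shown with a toy example. A minimal sketch, assuming we already have the model's probability for the first token of each reasoning step; the step texts, probabilities, and threshold are invented, and the real method scores steps with an LLM inside a coarse-to-fine pipeline.

```python
import math

# Toy illustration of surprisal-based step pruning in the spirit of ASAP.
# Steps whose first token was highly predictable carry little information
# and are pruned; surprising steps are kept as logically essential.
steps = [
    ("Restate the problem in my own words.", 0.60),        # unsurprising
    ("So n must be even, otherwise parity breaks.", 0.05), # surprising
    ("Let me double check the previous line.", 0.50),      # unsurprising
    ("Therefore the answer follows directly.", 0.02),      # surprising
]

def surprisal(p):
    """Shannon surprisal of the step's first token, in bits: -log2 p."""
    return -math.log2(p)

THRESHOLD = 2.0  # bits; steps below this are pruned as "unsurprising"
kept = [text for text, p in steps if surprisal(p) >= THRESHOLD]
print(kept)
```

Here the two filler steps score under 2 bits and are dropped, while the two steps that advance the argument survive, compressing the chain-of-thought without cutting its logic.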

  6. GENIE: Gaussian Encoding for Neural Radiance Fields Interactive Editing

    Neural Radiance Fields (NeRF) and Gaussian Splatting (GS) have recently transformed 3D scene representation and rendering. NeRF achieves high-fidelity novel view synthesis by learning volumetric representations through neural networks, but its implicit encoding makes editing and physical interaction challenging. In contrast, GS represents scenes as explicit collections of Gaussian primitives, enabling real-time rendering, faster training, and more intuitive manipulation. This explicit structure has made GS particularly well-suited for interactive editing and integration with physics-based simulation. In this paper, we introduce GENIE (Gaussian Encoding for Neural Radiance Fields Interactive Editing), a hybrid model that combines the photorealistic rendering quality of NeRF with the editable and structured representation of GS. Instead of using spherical harmonics for appearance modeling, we assign each Gaussian a trainable feature embedding. These embeddings are used to condition a NeRF network based on the k nearest Gaussians to each query point. To make this conditioning efficient, we introduce Ray-Traced Gaussian Proximity Search (RT-GPS), a fast nearest Gaussian search based on a modified ray-tracing pipeline. We also integrate a multi-resolution hash grid to initialize and update Gaussian features. Together, these components enable real-time, locality-aware editing: as Gaussian primitives are repositioned or modified, their interpolated influence is immediately reflected in the rendered output. By combining the strengths of implicit and explicit representations, GENIE supports intuitive scene manipulation, dynamic interaction, and compatibility with physical simulation, bridging the gap between geometry-based editing and neural rendering. The code can be found under (https://github.com/MikolajZielinski/genie)

  7. Adapting Vision-Language Models Without Labels: A Comprehensive Survey

    Vision-Language Models (VLMs) have demonstrated remarkable generalization capabilities across a wide range of tasks. However, their performance often remains suboptimal when directly applied to specific downstream scenarios without task-specific adaptation. To enhance their utility while preserving data efficiency, recent research has increasingly focused on unsupervised adaptation methods that do not rely on labeled data. Despite the growing interest in this area, there remains a lack of a unified, task-oriented survey dedicated to unsupervised VLM adaptation. To bridge this gap, we present a comprehensive and structured overview of the field. We propose a taxonomy based on the availability and nature of unlabeled visual data, categorizing existing approaches into four key paradigms: Data-Free Transfer (no data), Unsupervised Domain Transfer (abundant data), Episodic Test-Time Adaptation (batch data), and Online Test-Time Adaptation (streaming data). Within this framework, we analyze core methodologies and adaptation strategies associated with each paradigm, aiming to establish a systematic understanding of the field. Additionally, we review representative benchmarks across diverse applications and highlight open challenges and promising directions for future research. An actively maintained repository of relevant literature is available at https://github.com/tim-learn/Awesome-LabelFree-VLMs.

  8. MELLA: Bridging Linguistic Capability and Cultural Groundedness for Low-Resource Language MLLMs

    Multimodal Large Language Models (MLLMs) have shown remarkable performance in high-resource languages. However, their effectiveness diminishes significantly in the contexts of low-resource languages. Current multilingual enhancement methods are often limited to text modality or rely solely on machine translation. While such approaches help models acquire basic linguistic capabilities and produce "thin descriptions", they neglect the importance of multimodal informativeness and cultural groundedness, both of which are crucial for serving low-resource language users effectively. To bridge this gap, in this study, we identify two significant objectives for a truly effective MLLM in low-resource language settings, namely 1) linguistic capability and 2) cultural groundedness, placing special emphasis on cultural awareness. To achieve these dual objectives, we propose a dual-source strategy that guides the collection of data tailored to each goal, sourcing native web alt-text for culture and MLLM-generated captions for linguistics. As a concrete implementation, we introduce MELLA, a multimodal, multilingual dataset. Experiment results show that after fine-tuning on MELLA, there is a general performance improvement for the eight languages on various MLLM backbones, with models producing "thick descriptions". We verify that the performance gains are from both cultural knowledge enhancement and linguistic capability enhancement. Our dataset can be found at https://opendatalab.com/applyMultilingualCorpus.

  9. MeshLLM: Empowering Large Language Models to Progressively Understand and Generate 3D Mesh

    We present MeshLLM, a novel framework that leverages large language models (LLMs) to understand and generate text-serialized 3D meshes. Our approach addresses key limitations in existing methods, including the limited dataset scale when catering to LLMs' token length and the loss of 3D structural information during mesh serialization. We introduce a Primitive-Mesh decomposition strategy, which divides 3D meshes into structurally meaningful subunits. This enables the creation of a large-scale dataset with 1500k+ samples, almost 50 times larger than previous methods, which aligns better with the LLM scaling law principles. Furthermore, we propose inferring face connectivity from vertices and local mesh assembly training strategies, significantly enhancing the LLMs' ability to capture mesh topology and spatial structures. Experiments show that MeshLLM outperforms the state-of-the-art LLaMA-Mesh in both mesh generation quality and shape understanding, highlighting its great potential in processing text-serialized 3D meshes.

  10. UI-AGILE: Advancing GUI Agents with Effective Reinforcement Learning and Precise Inference-Time Grounding

    The emergence of Multimodal Large Language Models (MLLMs) has driven significant advances in Graphical User Interface (GUI) agent capabilities. Nevertheless, existing GUI agent training and inference techniques still suffer from a dilemma for reasoning designs, ineffective reward, and visual noise. To address these issues, we introduce UI-AGILE, a comprehensive framework enhancing GUI agents at both the training and inference stages. For training, we propose a suite of improvements to the Supervised Fine-Tuning (SFT) process: 1) a Continuous Reward function to incentivize high-precision grounding; 2) a "Simple Thinking" reward to balance planning with speed and grounding accuracy; and 3) a Cropping-based Resampling strategy to mitigate the sparse reward problem and improve learning on complex tasks. For inference, we present Decomposed Grounding with Selection, a novel method that dramatically improves grounding accuracy on high-resolution displays by breaking the image into smaller, manageable parts. Experiments show that UI-AGILE achieves the state-of-the-art performance on two benchmarks ScreenSpot-Pro and ScreenSpot-v2. For instance, using both our proposed training and inference enhancement methods brings 23% grounding accuracy improvement over the best baseline on ScreenSpot-Pro.

  11. OS Agents: A Survey on MLLM-based Agents for General Computing Devices Use

    The dream to create AI assistants as capable and versatile as the fictional J.A.R.V.I.S from Iron Man has long captivated imaginations. With the evolution of (multi-modal) large language models ((M)LLMs), this dream is closer to reality, as (M)LLM-based Agents using computing devices (e.g., computers and mobile phones) by operating within the environments and interfaces (e.g., Graphical User Interface (GUI)) provided by operating systems (OS) to automate tasks have significantly advanced. This paper presents a comprehensive survey of these advanced agents, designated as OS Agents. We begin by elucidating the fundamentals of OS Agents, exploring their key components including the environment, observation space, and action space, and outlining essential capabilities such as understanding, planning, and grounding. We then examine methodologies for constructing OS Agents, focusing on domain-specific foundation models and agent frameworks. A detailed review of evaluation protocols and benchmarks highlights how OS Agents are assessed across diverse tasks. Finally, we discuss current challenges and identify promising directions for future research, including safety and privacy, personalization and self-evolution. This survey aims to consolidate the state of OS Agents research, providing insights to guide both academic inquiry and industrial development. An open-source GitHub repository is maintained as a dynamic resource to foster further innovation in this field. We present a 9-page version of our work, accepted by ACL 2025, to provide a concise overview to the domain.

  12. LightSwitch: Multi-view Relighting with Material-guided Diffusion

    Recent approaches for 3D relighting have shown promise in integrating 2D image relighting generative priors to alter the appearance of a 3D representation while preserving the underlying structure. Nevertheless, generative priors used for 2D relighting that directly relight from an input image do not take advantage of intrinsic properties of the subject that can be inferred or cannot consider multi-view data at scale, leading to subpar relighting. In this paper, we propose Lightswitch, a novel finetuned material-relighting diffusion framework that efficiently relights an arbitrary number of input images to a target lighting condition while incorporating cues from inferred intrinsic properties. By using multi-view and material information cues together with a scalable denoising scheme, our method consistently and efficiently relights dense multi-view data of objects with diverse material compositions. We show that our 2D relighting prediction quality exceeds previous state-of-the-art relighting priors that directly relight from images. We further demonstrate that LightSwitch matches or outperforms state-of-the-art diffusion inverse rendering methods in relighting synthetic and real objects in as little as 2 minutes.

Solidot (11)

  1. NVIDIA and AMD agree to hand over 15% of their China revenue to the US

    As part of obtaining licenses to export chips to China, NVIDIA and AMD have agreed to hand over 15% of their China revenue to the US government. NVIDIA said it has always followed the rules the US government sets for participating in global markets; it has not shipped H20 chips to China for months and hopes export-control rules will let American companies compete in China. AMD has not commented. Under the export-license agreement, NVIDIA will remit 15% of its revenue from H20 chip sales in China to the US government, and AMD will remit 15% of its MI308 chip revenue.

  2. Yomiuri Shimbun sues Perplexity for copyright infringement

    Japan's Yomiuri Shimbun Group has filed suit in the Tokyo District Court against Perplexity, a US startup offering generative-AI search. The suit alleges that Perplexity's AI search used articles without authorization, infringing copyright, and seeks roughly 2.168 billion yen in damages; it is the first lawsuit by Japanese media over AI search. According to the complaint, between February and June 2025 Perplexity fetched and copied 119,467 Yomiuri Shimbun Online articles and generated and delivered content to users containing similar text and images. The complaint argues this infringed the reproduction and public-transmission rights under copyright law, and that "zero-click search," in which users never visit the original site, harmed its business. The suit also seeks an injunction against further copying of articles.

  3. Human connectedness to nature has fallen more than 60% in 220 years

    A study published in the journal Earth finds that human connectedness to nature has declined by more than 60% since 1800. Using data on urbanization and the loss of wildlife in communities, along with factors such as parents no longer passing nature-oriented habits on to their children, the researchers tracked the disappearance of nature from human life over 220 years. From 1800 to 2020, nature-related words gradually vanished from books, with the decline peaking at 60.6% in 1990. Computer models predict that without far-reaching policy and social change, human connectedness to nature will keep falling: as communities urbanize further and parents stop passing on nature-oriented values, the next generation will continue losing its awareness of nature. The most effective interventions are exposing children to nature from an early age and large-scale greening of urban environments.

  4. GrapheneOS, a security-hardened community Android distribution

    Phones have become part of daily life and store large amounts of sensitive information, so how do we keep them secure and trustworthy? Google's Android has an open-source version, AOSP, and while Android itself was not designed with security as its primary focus, community distributions such as GrapheneOS build on the open-source system to harden it. GrapheneOS began with the CopperheadOS project; its two founders parted ways over disagreements, and one of them, Daniel Micay, created the independent GrapheneOS project. It prioritizes hardening security over broad device support, running only on a narrow range of devices, the Google Pixel 6 through Pixel 9. Newer Pixel devices use new ARMv9 CPU cores with security features such as hardware memory tagging, which GrapheneOS enables by default to protect the OS and compatible user-installed apps from attack. It does not preinstall the many out-of-the-box apps of stock Android and has no Google Play Store; instead it ships its own browser, camera app, and PDF reader, plus an app store carrying just 13 apps in total. The browser, Vanadium, is a Chromium fork with strict site isolation enabled. The project does not recommend Firefox, which it considers vulnerable to attack; one browser users can optionally install is IronFox, a hardened Firefox build.

  5. Debian 13 "trixie" released

    The Debian project has announced the release of its latest stable version, Debian 13 "trixie," which will be supported until 2030. Major changes include GNOME 48, KDE Plasma 6.3, Xfce 4.20, Linux 6.12, GCC 14.2, Python 3.13, and systemd 257. Debian 13 adds 14,100 new packages and removes 8,840 obsolete ones, bringing the total to 69,830 packages, of which 44,326 were updated. riscv64 becomes an officially supported Debian architecture, while i386 support has been dropped.

  6. AI is wiping out junior programming jobs

    Jonathan Kim spent nearly $20,000 on a coding bootcamp in 2023, hoping it would help him land a job as a programmer. After graduating, he applied to more than 600 programming positions without receiving a single offer. He currently works at his uncle's ice cream shop while continuing the job search. For more than a decade, coding bootcamps were a stepping stone into high-paying Silicon Valley programming jobs for people from non-programming backgrounds, but today bootcamps have fallen out of favor, and AI has hammered the final nail into the coffin. Data show that in the 2023 Codesmith cohort Kim attended, only 37% of students found full-time tech jobs within six months of graduating, far below the 83% of the second half of 2021. AI is remarkably good at programming, and the result is a sharp decline in entry-level programming positions. A Signalfire report published in May found that hiring of new graduates has fallen by half from pre-pandemic 2019 levels.

  7. Of 256 billionaires who signed the Giving Pledge, only 9 have kept their word

    In 2010, Bill and Melinda Gates and Warren Buffett launched the Giving Pledge, a campaign encouraging billionaires to commit at least half of their net worth to charity, either during their lifetimes or at death. The pledge is a public statement of intent to give, not a legally binding contract. Fifteen years after its launch, most of the 256 billionaire signatories (among them Elon Musk) have not fulfilled the promise. Most signatories now hold far more wealth than when they signed, and most of their charitable giving has gone to private foundations and donor-advised funds rather than directly to operating charities. Among living signatories, only Laura and John Arnold have given away half their wealth. Of the 22 deceased signatories, only 8 fulfilled the pledge during their lifetimes; Chuck Feeney gave away his entire fortune while still alive. Of the 194 US signatories, 110 are still billionaires, with a combined $1.7 trillion in assets, and those who are no longer billionaires did not lose that status through giving.

  8. Scientists develop universal vaccines targeting multiple viruses

    Most vaccines are designed to provide immunity against a single pathogen; the chickenpox vaccine, for example, targets the varicella-zoster virus. But since the COVID-19 pandemic, immunologists around the world have been working to move beyond the traditional single-pathogen vaccine. According to a study in Cell, researchers have developed a research pipeline to advance "universal vaccines" that would target broad virus families and mutated viral variants. If successful, the approach could yield vaccines capable of neutralizing emerging SARS-CoV-2 variants as well as many other viruses with pandemic potential. The research targets protein sequences that remain unchanged as viruses evolve.

  9. Google uncovered a new scam, then fell victim to it

    Google disclosed a new scam targeting Salesforce accounts in June; two months later, the search giant reported that it too had fallen victim. The scam is a social-engineering attack that abuses a Salesforce feature letting customers link their accounts to third-party apps, integrating data with internal systems for blogs, mapping tools, and similar resources. Posing as the IT department, attackers contact targets directly to request access, instruct employees to connect an external app to the company's Salesforce instance, and then ask them to enter an eight-digit security code in the Salesforce interface. With that code, the attackers access the instance and all data stored in it, then sell the stolen data to buyers at a high price. Well-known companies hit by the scam include Adidas, Qantas, Allianz Life, Cisco, the LVMH luxury brands Louis Vuitton, Dior, and Tiffany, and now Google. Google said this week that its own Salesforce instance was attacked in a similar scam and data was stolen. The attack happened in June but was only disclosed now, possibly because it was only recently discovered; Google says most of the stolen data was public, non-confidential information.

  10. A massive white dwarf found to be the product of a binary merger

    White dwarfs are the dense cores left behind when Sun-like stars collapse after exhausting their fuel. They typically weigh about half a solar mass and cannot exceed 1.44 solar masses; beyond that limit they explode or collapse into neutron stars. The massive white dwarf WD 0525+526, at 1.2 solar masses, turns out to be the product of a binary merger rather than the collapse of a single star. What sets WD 0525+526 apart from other white dwarfs includes a faint carbon signature in its atmosphere; the low carbon content, together with its extreme heat (a surface temperature roughly four times the Sun's), indicates the white dwarf is at an early stage of post-merger evolution.

  11. Chinese engineers tackle maglev trains' tunnel-boom noise

    The latest maglev trains can reach 600 km/h. High-speed trains have long faced a "tunnel boom" problem akin to a supersonic aircraft's sonic boom: a train entering a tunnel at high speed compresses the air into a high-pressure wave that travels down the tunnel in a piston effect; when the pressure wave reaches the far portal, it bursts out as a micro-pressure wave accompanied by unpleasant noise. Tunnel-boom noise poses a serious challenge to safe operation, since the shock waves disturb nearby people and animals and can cause structural damage. Chinese engineers report that installing a new type of sound-damping buffer inside the tunnel portal can cut the micro-pressure wave by up to 96%. The 100-meter buffer uses a porous structure, combined with a porous coating on the tunnel itself, letting air trapped in the enclosed space escape before the train reaches the portal, suppressing the noise much like a silencer on a gun barrel.